August 20, 2025
6 min
TL;DR
For the first time in 20 years, user search behavior is no longer unified. It is now divided across two distinct paths: classic search engines and AI-powered generative systems.
Today’s digital audiences navigate between traditional search engines and AI conversational tools such as ChatGPT and Perplexity, often preferring direct, synthesised answers over scanning through multiple web pages. This behavior is not marginal. It is accelerating, and it is forcing publishers and brands to confront a fundamental change: the way people find and consume information is diverging.
This article analyzes the growing divide in user search behavior through the lens of “The Future of Search”, an e-book by SUBJCT, and insights from a recent conversation with media strategist David Buttle. It investigates the structural shift underway as users increasingly alternate between traditional search engines and AI-driven platforms. By examining which types of queries align with each system, it offers a broader perspective on how digital discovery is evolving and what that reveals about changing user intent.
At its core, this is not just a story about technology. It is a shift in attention, user behavior, and trust, one that is redrawing the boundaries of visibility for publishers, brands, and platforms alike.
The search market is undergoing a profound structural shift. For 20 years, Google has been synonymous with search, controlling both the infrastructure of discovery and the distribution of information. Today, that monopoly is fracturing. A new model of search is emerging alongside it, powered by large language models (LLMs) capable of generating answers rather than simply indexing links.
This search divide is not hypothetical; it is observable in user behavior today. As The Future of Search e-book reports, 77% of The Information’s readers, an audience described as a likely leading indicator of digital trends, already turn to AI chatbots such as ChatGPT, Gemini, and Perplexity instead of Google for certain queries. These tools are no longer experimental; they are rapidly becoming default interfaces for a growing set of information needs.
Yet, the adoption of AI search does not signify the wholesale replacement of traditional search. It signals a segmentation of user intent. Certain queries, especially those requiring synthesis, depth, personalization, or dialogue, are migrating toward LLM-driven engines. At the same time, queries that demand accuracy, speed, and integration with trusted ecosystems continue to rely on traditional search.
As David Buttle, founder of DJB Strategies and author of SUBJCT’s ‘The Future of Search’, remarked during our conversation, “We’re not looking at a replacement. We’re looking at a split, two parallel search behaviors emerging side by side.” This insight is critical. Publishers are not facing a simple platform migration; they are facing the challenge of visibility across two structurally different systems of discovery.
Key Insight: Search is no longer a single-channel experience. User intent now drives whether someone turns to Google or a generative AI tool, and publishers must compete for visibility across two fundamentally different platforms.
Traditional search engines still drive high-volume informational queries, and optimizing for them supports consistent traffic. Focus areas include structured content, entity tagging, and semantic markup.
Although LLM-powered search is advancing rapidly, traditional SEO continues to play a crucial role in digital discovery. It remains a foundational strategy for publishers because general search engines like Google continue to dominate for key user intents. People still trust Google to deliver reliable results, particularly when the stakes are high or the query requires direct and actionable answers.
David Buttle explained this clearly in the e-book: “AI chatbots can generate answers, but for things where the stakes are high or the answer needs to be 100% accurate, people still turn to Google.” His point reflects a deeper truth about user behavior. Google remains the primary gateway for specific types of search because of its infrastructure and integrated ecosystem.
These specific types of searches include:
● Navigational intent: Finding a specific brand or website.
● Transactional intent: Searching to complete purchases or applications.
● Real-time queries: Accessing live scores, stock prices, or news.
● Localized queries: Finding nearby services or locations.
● High-stakes queries: Seeking trusted health, legal, or financial information.
These categories highlight why general search retains relevance. Google’s Knowledge Graph, schema markup, and structured data help surface trusted content for these intents.
To stay competitive, publishers and brands should still follow SEO best practices, including:
● Ensure content demonstrates Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T) to establish credibility and reinforce value.
In an information-saturated marketplace, audiences are not just looking for content. They seek confidence. E-E-A-T is a framework used by Google to assess whether content can be trusted. It evaluates whether content is informed by real-world experiences, authored by subject matter experts, backed by a credible source, and built on accurate, reliable information. Content with strong E-E-A-T signals earns trust from both users and algorithms.
● Build topical authority for generative search alongside SEO best practices.
Authority comes from repetition and depth over time. When a site consistently publishes high-quality content within a focused subject area, it signals to search engines and AI systems that the source is reliable in that domain. This pattern recognition helps content rank higher in traditional search and improves its chances of being referenced in AI-generated answers.
● Use schema markup and structured data to connect with Google’s Knowledge Graph.
Schema markup is a form of code that helps search engines understand the intended meaning of content. It adds context that goes beyond the words on the page. For example, if a page mentions “Cambridge,” schema markup connected to knowledge graphs can clarify whether “Cambridge” refers to the university, the city, or a person with that name. By tagging the content accurately, search engines can connect it to the right information in their databases, like Google’s Knowledge Graph, and display it more prominently in search results or summaries generated by LLM systems. (A minimal code illustration follows this list.)
● Write semantically clear, well-structured content aligned with entity-first strategies.
Semantic content focuses on meaning, not just keywords. Aligning content with defined “entities” (such as specific people, companies, or concepts) allows AI systems and search engines to understand the relationships and relevance of what’s written. This makes the content more discoverable, more extractable, and more accurately represented in search or AI outputs. For example, instead of writing “she led the bank through the pandemic,” clearly stating “Alison Rose led NatWest through the Covid-19 pandemic” ties the content to recognized entities. This improves how it is indexed, retrieved, and represented in both search engines and AI-generated responses.
● Format content for extraction by crawlers for both general search and Large Language Models (LLMs).
Content today must be readable not only by humans but also by machines. This requires modular formatting, which involves using clear subheadings, short paragraphs, and self-contained sections. These elements make it easier for AI systems and search engines to pull, quote, or summarize information accurately. Modular formatting also improves accessibility across various platforms, including web search, mobile interfaces, and voice assistants.
● Ensure accurate internal linking and consistent content tagging to support both human navigation and machine readability.
Internal links are clickable connections between related pages or articles on the same website. Tagging is the practice of categorizing content using consistent labels. Together, these practices help users find related information more easily and help search engines understand how content is organized. Tools like SUBJCT’s product suite automate this process, enhancing both human navigation and machine discovery.
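To make the schema markup and entity points above concrete, here is a minimal, illustrative sketch in Python that builds the kind of JSON-LD block a page about “Cambridge” might embed. The headline and author are placeholders rather than examples from the e-book; a real implementation would use whichever schema.org types fit the page.

```python
import json

# Illustrative only: JSON-LD that disambiguates "Cambridge" as the university.
# Field values (headline, author) are placeholders, not taken from this article.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Cambridge labs are commercializing AI research",
    "datePublished": "2025-08-20",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "about": {
        "@type": "CollegeOrUniversity",
        "name": "University of Cambridge",
        # sameAs ties the ambiguous name to a public knowledge-graph entry,
        # so search engines and LLM crawlers resolve "Cambridge" correctly.
        "sameAs": ["https://en.wikipedia.org/wiki/University_of_Cambridge"],
    },
}

# The serialized JSON-LD is normally embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(article_markup, indent=2))
```

Tagging the entity once in markup means every downstream consumer, from Google’s Knowledge Graph to an LLM retrieval pipeline, inherits the same disambiguation.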
Key Insight: Traditional SEO continues to anchor high-intent discovery. Structured data, entity alignment, and E-E-A-T are still decisive in maintaining visibility where trust and precision are non-negotiable.
Despite the emergence of AI-driven search engines, general search remains highly effective for specific query types that are functionally precise and outcome-oriented. These categories continue to align with Google’s core strengths: speed, reliability, and integration with real-time data.
These include:
● Navigational Intent
Example: “LinkedIn login”
This query signals a user trying to reach a specific website or digital destination directly. The search engine serves as a shortcut to access known platforms or services.
● Simple Informational Queries
Example: “Weather in Paris”
These are straightforward questions with clear, factual answers. The user wants basic information—like the current temperature and conditions—without needing interpretation or deeper analysis.
● Transactional Queries
Example: “Book a flight to Rome”
These searches indicate intent to complete a specific action, such as making a purchase or booking a service. The user is ready to act, often with commercial or logistical motivation.
● Real-Time Queries
Example: “NFL scores live”
These are time-sensitive searches where the user expects immediate, continuously updated information—like live sports results or breaking news.
● Localized Intent
Example: “Coffee shops near me”
These queries include geographic context. The user is looking for results based on physical proximity, such as nearby restaurants, stores, or services.
● High-Stakes Categories
Example: “Symptoms of stroke”
These involve queries where accuracy and authoritative sources are essential due to the potential consequences. In this case, the user is seeking trustworthy medical information about a serious condition.
In these contexts, Google’s structured search engine results pages, enhanced by Knowledge Graph, schema markup, and E-E-A-T signals, continue to outperform AI chatbots that lack access to real-time updates or verified institutional data.
As David Buttle observed, “There are certain user intents which it seems we are a long way off, if ever, seeing AI serve effectively. These are the inherently human queries, where precision, attribution, and trust matter deeply.”
Key Insight: Google retains dominance in categories that demand precision, speed, and geographic or transactional specificity. For these intents, structured SERPs still outperform generative answers.
GEO, or Generative Engine Optimization, is the emerging framework for AI-driven discovery. It prioritizes clarity, context, and extractable formats so that content is understood by AI systems.
AI-powered search is no longer emerging. It is already here. Large Language Models (LLMs) are reshaping how people discover information, not just by indexing documents but by generating answers in response to user prompts.
This shift introduces new engines and interfaces such as ChatGPT, Claude from Anthropic, Gemini, Perplexity, and Microsoft Copilot, which provide dialogic and synthesized results rather than ranked lists of links. Many of these systems operate on Retrieval-Augmented Generation (RAG) pipelines, retrieving information from external sources, such as the live web, and from internal datasets to generate responses with both contextual depth and semantic clarity.
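As a rough mental model of that RAG pattern, the sketch below retrieves the most relevant passages for a query and assembles them into a grounded prompt. The `embed` function is a toy stand-in for whatever embedding model a real system uses, and the prompt format is an assumption; production pipelines differ widely.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: a hashed bag-of-words vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def retrieve(query: str, passages: list[str], top_k: int = 3) -> list[str]:
    """Rank candidate passages by cosine similarity to the query embedding."""
    q = embed(query)
    scored = []
    for passage in passages:
        p = embed(passage)
        score = float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p) + 1e-9))
        scored.append((score, passage))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [passage for _, passage in scored[:top_k]]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the retrieved passages into a grounded prompt for the generator."""
    context = "\n\n".join(retrieve(query, passages))
    return (
        "Answer the question using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}"
    )

# Tiny usage example with stand-in passages.
passages = [
    "Paris is the capital of France.",
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
print(build_prompt("When was the Eiffel Tower finished?", passages))
```

The point for publishers is the middle step: only passages that score well at retrieval time ever reach the model, which is why extractable, self-contained sections matter.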
The logic of search has changed:
● Instead of ranking pages, LLMs retrieve and synthesize information.
● Instead of assuming the user will evaluate sources, LLMs aim to deliver an end product.
● Instead of depending on clicks, they rely on semantic optimization, embeddings, and structured representation.
This marks a significant departure from general search and forces content creators to adjust not only their publishing practices but also their monetization assumptions.
David Buttle noted this transformation is already impacting traffic patterns: “We’re now getting data on the impact of AI Overviews on clicks. Users often stop at the answer. The links are there, but no one clicks on them.”
According to a new report by BrightEdge, search clicks dropped by over 30% last year when Google AI Overviews appeared, validating the growing concern that users now stop at the AI-generated summary rather than clicking through to publisher sites.
Key Insight: LLM-based search changes the logic of visibility. Success now hinges on being response-worthy: semantic structure, clarity, and extractability matter more than rankings.
AI search engines offer distinct advantages in serving complex, nuanced, or open-ended user intents. These are areas where traditional search would require multiple queries, cross-referencing, and user synthesis.
LLMs perform particularly well in:
● Exploratory queries: These are open-ended questions where people want to discover possibilities and gain insights about a topic without necessarily having a specific answer in mind. They're searching for guidance and perspectives rather than just facts. Example: "I'm interested in sustainable investing, but not sure where to start in today's market. What approaches make the most sense in 2025?"
● Compound or complex queries: These are multiple related questions or comparisons that require analysing different factors and weighing various considerations to provide a comprehensive answer. Example: "I'm trying to decide between hybrid and electric cars for my family. We do a lot of road trips - which would work better for longer distances?"
● Personalized prompts: These are queries where someone shares specific information about their situation and asks for customized recommendations based on that context, allowing for much more tailored advice. Example: "I've been keeping track of my spending in this document. Could you look it over and suggest where I might be able to save some money?"
● Conversational refinement: This happens across multiple exchanges where the person gradually adds details and adjusts their request based on previous responses, creating a back-and-forth that leads to increasingly precise and helpful answers. Example: "I'm planning a trip to Japan next spring with my partner. We love food experiences and outdoor activities, but want to avoid tourist traps. Could you help us build an itinerary?"
● Low-stakes topics: These are everyday questions where perfect accuracy isn't critical and practical advice is more important than detailed technical information. Example: "My kettle's getting pretty crusty with limescale. What's a good way to clean it without using harsh chemicals?"
These types of queries highlight where Generative SEO strategies and entity-first content strategies are most effective. For publishers, success depends on producing content that is structured, semantically rich, and easily extractable by LLM systems.
As Buttle shared, “Optimizing for these engines isn’t radically different from good SEO. But formatting matters. Chunking content, strong subheadings, and self-contained paragraphs help these systems extract relevant information.”
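A minimal sketch of what that chunking might look like in practice, assuming articles are written with Markdown-style subheadings; real extraction pipelines vary, but the principle of keeping each section self-contained under its heading is the same.

```python
import re

def chunk_by_heading(article_markdown: str) -> list[dict]:
    """Split an article into self-contained chunks keyed by their subheading.

    Each chunk keeps its heading as context so a search engine or an LLM
    retrieval pipeline can quote or summarize it on its own.
    """
    chunks = []
    current = {"heading": "Introduction", "lines": []}
    for line in article_markdown.splitlines():
        match = re.match(r"^#{2,3}\s+(.*)", line)  # H2/H3 subheadings
        if match:
            if any(l.strip() for l in current["lines"]):
                chunks.append({"heading": current["heading"],
                               "body": "\n".join(current["lines"]).strip()})
            current = {"heading": match.group(1).strip(), "lines": []}
        else:
            current["lines"].append(line)
    if any(l.strip() for l in current["lines"]):
        chunks.append({"heading": current["heading"],
                       "body": "\n".join(current["lines"]).strip()})
    return chunks
```

Short paragraphs and descriptive subheadings pay off here: the clearer the heading, the more likely the chunk is to be retrieved for the query it actually answers.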
Key Insight: Generative engines excel in serving nuanced, exploratory, and personalised queries. These models reward content that anticipates conversation, not just information.
The LLM search space introduces a fundamentally different competitive dynamic than conventional search. While Google maintained dominance for 20 years, the market for LLM-driven discovery is still in flux. It includes a wider array of players, including ChatGPT, Claude, Gemini, Perplexity, Copilot, and others, competing across interfaces, devices, and distribution channels.
Unlike Google's traditionally centralized model, many AI search tools are distributed across apps, operating systems, and integrations. However, models like Gemini are starting to blur that line. This opens up new types of partnerships and licensing opportunities for publishers, making strategic positioning increasingly critical.
As David Buttle noted, “You need to define your strategy around two substitutional revenue lines: content engagement on your own site, and licensing your content to AI providers. These are not complementary; they compete with each other.”
To help publishers make better-informed licensing decisions, SUBJCT has developed a Content Resilience Score (CRS) framework, presented in “The Future of Search” e-book, which evaluates the likelihood of content being substituted by an AI-generated response.
SUBJCT Content Resilience Score Framework
As Buttle put it, “This is a very different world. In AI search, the content supply chain is being restructured. Publishers now have more bargaining power, but only if they act strategically.”
Key Insight: AI search introduces new gatekeepers and monetization models. Strategic positioning requires balancing engagement-driven SEO with licensing strategies for AI consumption.
Unlike general search, LLM platforms learn from prompt engagement and model fine-tuning. This reshapes how content is evaluated and surfaced.
Traditional SEO is shaped by click-through rates, bounce data, and user engagement metrics. These signals help Google refine what appears on its results page. In LLM-powered search, that feedback loop is absent.
Instead of clicks, AI systems rely on semantic optimization, embeddings, and citation-worthy structure to decide which content is surfaced. The value of content is now determined by how well it fits into a model’s Retrieval-Augmented Generation (RAG) process, not how many users visit a page.
For publishers, this changes the definition of “optimization.” It is no longer about keywords alone. It is about:
● Formatting content to be chunked and extractable.
● Aligning articles with entities, Knowledge Graphs, and structured data.
● Ensuring high topical authority for generative search, especially in categories prone to AI substitution.
Publishers must now consider visibility beyond the click and design their content with semantic clarity and licensing potential in mind.
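One way to act on this, sketched below under the same toy assumptions as the earlier retrieval example, is a simple self-audit: score each chunk of an article against a handful of representative reader questions and flag the sections least likely to be surfaced by a RAG system. The scores are only indicative; the exercise is the point.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Same toy hashed bag-of-words embedding as the earlier retrieval sketch."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def retrievability_report(chunks: list[dict], questions: list[str]) -> list[tuple[str, float]]:
    """For each chunk, report its best cosine similarity across representative questions.

    Low-scoring chunks are rewrite candidates: sharper subheadings, explicit
    entity names, and self-contained paragraphs tend to raise their scores.
    """
    question_vecs = [embed(q) for q in questions]
    report = []
    for chunk in chunks:
        vec = embed(chunk["heading"] + "\n" + chunk["body"])
        best = max(
            float(np.dot(vec, qv) / (np.linalg.norm(vec) * np.linalg.norm(qv) + 1e-9))
            for qv in question_vecs
        )
        report.append((chunk["heading"], round(best, 3)))
    return sorted(report, key=lambda item: item[1])
```

The chunks here could come straight from the `chunk_by_heading` sketch above, making the audit a lightweight check on how extraction-ready an article already is.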
Key Insight: In LLM-powered environments, visibility is not earned through clicks but through structure. Content must be designed to be understood, cited, and surfaced by models, not just discovered by users.
The evolution of search into two distinct operational models, one link-based, one generation-based, demands a proactive response from publishers. To maintain visibility and revenue, it is not enough to be good at traditional SEO. It is now equally important to understand how LLMs retrieve, interpret, and present content.
Adapting effectively means:
● Building a dual-path strategy that optimizes for both Google’s SERP and AI response engines.
● Structuring content for semantic SEO and Generative Engine Optimization (GEO).
● Tagging and linking content internally with precision using SUBJCT’s product suite.
● Identifying which parts of your content are resilient to AI substitution through frameworks like the Content Resilience Score.
● Negotiating licensing from a position of strength by asserting control over data access.
The future of discoverability belongs to those who embrace both sides of this transformation. As David Buttle stated, “Each publisher needs to define its position on these two revenue lines: engagement on your platform, and licensing your content to someone else’s system. They are not mutually supportive. They are substitutional.”
To prepare for this transformation, download The Future of Search e-book to future-proof your digital content strategy.
Key Insight: Future-proofing content means mastering both worlds. Publishers must build dual-path strategies that optimize for link-based and generative engines simultaneously.
● Dual Search Paths: User search behavior is dividing between traditional search engines like Google and AI-powered systems like ChatGPT and Perplexity.
● Different Strengths: Traditional search remains dominant for navigational, transactional, and high-stakes queries, while LLM-based systems excel at exploratory, complex, and conversational searches.
● Content Optimization Strategy: Publishers need a dual approach combining SEO and GEO (Generative Engine Optimization) to maintain visibility across both systems.
● Revenue Implications: The ongoing divide between link-based and generation-based search requires publishers to reconsider both engagement and licensing strategies.
● Content Resilience: Publishers should evaluate their content's uniqueness, emotional resonance, and accuracy sensitivity to determine how vulnerable it is to AI substitution. To assess your current readiness for LLM visibility, try SUBJCT’s free Content Resilience Score as highlighted in “The Future of Search” e-book.
Get in touch to find out more about our suite of content optimization tools built for AI search.
Join Beta