Jakob Nielsen

SEO Is Dead, Long Live AI-Summarized Answers

Summary: Unlike search, which just identifies and ranks potential information sources, AI synthesizes a single clear answer from all available info, tailored to the user’s specific query and circumstances. AI also provides clarifying details, analysis, and supplementary information as part of its answers.

I’ll start by admitting that my headline exhibits some hyperbole, as is required to attract clicks these days. SEO is not dead yet, but it has one leg in the grave. For another year or two, many users will still turn to their trusted legacy search engines when they need answers. But this behavior, ingrained in users over 30 years, will gradually be replaced by users turning to AI services that provide better answers, faster.

Since the launch of Google in 1998, search has been the leading portal to the web’s riches. For 25 years, users have been trained to go to a search engine when they need an answer. (Search by Dall-E.)


Recent research on AI vs. search has found that:

  • Users are 158% more productive, measured by the number of questions answered per hour, when using ChatGPT instead of Google.

  • Users strongly prefer AI over Google, rating AI 6.06 and Google 5.27, on a 1–7 satisfaction scale.

  • Using AI to answer questions narrows the skill gap between users with high vs. low education, relative to using Google. With the traditional search engine, users with graduate degrees strongly outperformed less-educated users, whereas the difference was much smaller when users employed ChatGPT to answer questions.

In this study, answer quality was the same between the two solutions. However, the AI corner was represented by ChatGPT 3.5, which is notoriously weaker than the current version, ChatGPT 4, let alone the answer quality we might expect from next year’s ChatGPT release.


On all three metrics, AI chatbots handily beat search. When performance differences exceed 100%, people start paying attention and modify even long-ingrained behaviors. Google benefited from this effect in the late 1990s when it outcompeted Yahoo, Excite, AltaVista, and a bunch of other more established, but weaker, search engines by the simple expedient of delivering better answers, faster.


I have switched my answer-seeking allegiance to Perplexity.AI, which I use for most information-seeking queries. (I have no economic interest in recommending Perplexity.AI; I do so to help my readers.) Perplexity has a free version, which is reasonably good, but it becomes much better if you take out a $200/year subscription. I am happy to pay this to keep the UI advertising-free, but I’m doubly happy to pay up since the for-pay service is much better.


The main difference between the free and paid Perplexity services is that the paid version summarizes its answers across a broader range of web sources, which in turn makes them better and more insightful than the free answers.


Case Study: Answering a UX Design Question

For example, let’s look at a typical design problem for which a UX designer might seek advice: “Which way of asking for credit card expiration date on an e-commerce checkout form has the best usability?” Here are the answers to this question from Perplexity and Google:


A typical UX question answered by Perplexity.AI.


The same UX question answered by Google.


AI and the search engine both provide the correct answer: the credit card expiration date should be formatted as MM/YY on e-commerce checkout forms. Both also use the same sources: the Baymard Institute (the world’s leading authority on e-commerce usability), Stack Exchange, and Smashing Magazine. Perplexity provides 3 additional sources for even more information.


Google wins in stating the answer concisely, but it also confuses matters by repeating erroneous options sourced from Stack Exchange. Perplexity wins by providing the most useful answer, including several usability guidelines that the user didn’t ask about but which are crucial when designing this small component of the e-commerce checkout flow, such as the need to allow shoppers to edit stored card information. Perplexity also wins by explicitly warning against the low-usability solution of using dropdown menus for data input — a bad design we encounter much too often on the web and which Google confusingly mentions as an option.
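For designers who want to apply the winning guideline directly, here is a minimal sketch of the recommended pattern: a single free-text field auto-formatted as MM/YY, with no dropdown menus. (The element ID and the exact formatting behavior are my own illustrative assumptions, not code taken from either answer.)

```typescript
// Minimal sketch: one free-text expiration field, auto-formatted as MM/YY.
// The "#card-expiry" ID is a hypothetical example.
const expiry = document.querySelector<HTMLInputElement>("#card-expiry")!;
expiry.placeholder = "MM/YY"; // mirrors the format embossed on the card
expiry.inputMode = "numeric"; // brings up the numeric keypad on mobile
expiry.maxLength = 5;         // "MM/YY" is five characters

expiry.addEventListener("input", () => {
  const digits = expiry.value.replace(/\D/g, "").slice(0, 4);
  // Insert the slash automatically so the shopper only types digits;
  // no dropdown menus are involved anywhere.
  expiry.value =
    digits.length > 2 ? `${digits.slice(0, 2)}/${digits.slice(2)}` : digits;
});
```

A single typed field like this matches how the date appears on the physical card, which is exactly why the MM/YY format wins on usability.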


Overall, this example doesn’t have a clear winner. Still, Perplexity is my preferred choice because it is more helpful to designers and provides clear links to more information about each of the provided guidelines.


Both services provide follow-up questions that can be answered at the click of the mouse (or a single tap, dramatically lowering interaction costs for mobile users). However, in this example, neither set of follow-up questions is particularly useful. (Perplexity’s follow-up questions were cut off in the above screenshot due to the length of its primary answer.)


The evolution of question-answering, from looking up information in books to web search to AI-summarized answers. (Dall-E)


Why AI Beats Search for Answering Questions

AI is generally faster (as shown by the more extensive quantitative study I cited above) and provides more valuable answers than search engines. Of course, search engines can turn into AI chatbots, and this may already be happening. But the basic services have clear differences:

  • AI synthesizes one clear answer from all the available information. It provides clarifying information, supplementary information, and analysis — all as parts of a single short article.

  • Search identifies information sources that contain possible answers and ranks them by estimated quality, with the best sources at the top. Supplementary information is not provided, nor does search offer analytical commentary on the answer.

Thus, AI saves users from performing their own synthesis, as is required when using search. AI also mentions additional issues that users had not thought to ask about. When using search, users can hope to stumble across such additional relevant considerations about their problem, but they’ll also have to wade through a morass of irrelevant information.
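To make this contrast concrete, here is a toy sketch of the two interaction styles. The types and names are invented purely for illustration; no real search engine or chatbot exposes an API this simple.

```typescript
// Toy model of the structural difference between search and AI answering.
interface RankedSource {
  url: string;
  estimatedQuality: number;
}

// Search: hands back a reading list, best guesses first.
// The synthesis is left entirely to the user.
function searchEngine(corpus: RankedSource[]): RankedSource[] {
  return [...corpus].sort((a, b) => b.estimatedQuality - a.estimatedQuality);
}

// AI: reads the sources itself and returns one synthesized answer,
// including supplementary points the user did not think to ask about.
function aiChatbot(question: string, corpus: RankedSource[]): string {
  const evidence = corpus.map((s) => s.url).join(", ");
  return `One answer to "${question}", synthesized from ${evidence}, plus caveats and supplementary guidelines.`;
}
```

The return types tell the story: search gives you a ranked list of places to look, while AI gives you the answer itself.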


Most important, AI writes a short, individualized article specifically for that one user, explaining how the answer relates to that user’s circumstances, which the AI knows from the “custom instructions” (in ChatGPT) or other specifications the user provided earlier. AI can also adjust the readability level of this custom article to the user’s literacy level, giving more complex answers to highly educated users.
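For readers curious about the mechanics, here is a rough sketch of how such a standing profile shapes answers, using the system message in the OpenAI API as the analog of ChatGPT’s custom instructions. The model name and profile text are assumptions for illustration, not a description of how any specific product is built.

```typescript
// Sketch: a standing user profile (like ChatGPT's custom instructions)
// tailors every answer, here via the OpenAI API's system message.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const completion = await client.chat.completions.create({
  model: "gpt-4", // stand-in; use whatever model is current
  messages: [
    // Supplied once, applied to every question.
    {
      role: "system",
      content:
        "The user is a UX researcher with a graduate degree. Answer " +
        "concisely, cite usability guidelines, and write at an advanced " +
        "reading level.",
    },
    // The actual question; a different profile above would yield a
    // differently tailored answer to the very same query.
    {
      role: "user",
      content:
        "How should an e-commerce checkout form ask for the credit card " +
        "expiration date?",
    },
  ],
});

console.log(completion.choices[0].message.content);
```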


AI answers are superior because AI writes a short new article just for you, synthesizing the most important information about your question, including points you didn’t think to ask about. (Dall-E.)


(The current early AI systems sometimes go overboard with such individualization of the answers. For example, I asked Perplexity about the weather in Paris in May. Besides providing the requested information, it added, “As a UX researcher, you might appreciate the fact that the weather conditions in May provide a great opportunity to observe and interact with the local culture in a comfortable outdoor setting. The pleasant weather encourages outdoor activities, and you can expect to see Parisians soaking up the sun in parks or enjoying the city's many outdoor cafes.” Yes, I had told it that I’m a UX researcher, but I go on vacation in Paris to see art and ballet and to eat, not to conduct field research.)


By way of analogy, search is like rooting through large piles of discarded items to locate one gem that’s buried in the rubbish. Often, the poor searcher has to wade through ill-smelling garbage in the form of advertisements. I am astounded that Google didn’t pollute the SERP for my question about credit card information on e-commerce checkout pages with copious ads for credit cards or payment processing.


In recent years, search results have become almost secondary on SERPs, usually overwhelmed by ads and the search engine’s own services. Information pollution has reached even Google, which started with the cleanest pages back in the days when I was on its advisory board.


Getting answers from a search is like rooting through large piles of discards to find one nice item. Yes, there’s good stuff buried under all the rubbish, but it’s unpleasant and time-consuming to retrieve it. (Midjourney.)


In contrast, AI is like a friendly and competent butler who presents the Lady of the House with a single sheet with the recommended plan for a dinner party. All the information is right there, including issues she might not have thought to bring up but which the butler remembers from having organized hundreds of previous dinners.


Of course, the actual competency of that metaphorical butler is still not always up to snuff, but AI is getting better and better with every release, just like a real butler gets better with more years of service on the same estate. Right now, we’re in a happy place where AI tools are not polluted with ads. And since we’re paying customers (on the order of $200/year for Perplexity and $20/month for ChatGPT), I hope things remain that way. I would personally rather pay a higher subscription fee and have my AI focused purely on serving my needs the absolute best it can.


Getting answers from AI is like being the Lady of a Great House, receiving the plan for a dinner party from her trusted butler. The list of dishes to be served and the suggestions for where to seat the guests are all prepared with knowledge of the house, the host, and the guests. (Midjourney.)


Sadly, the history of the Internet suggests that we may soon be spammed by ads in our AI tools. In my butler analogy, I guess this would be like the butler replacing his nice black-tie outfit with the kind of advertisement-infected uniform worn by Formula One race drivers.


If our butler starts dressing like this, it’ll be a sign that advertising-infected AI tools are not far away. (Midjourney.)


Keyword Foraging vs. the Articulation Barrier to AI

Getting questions answered by search or AI presents usability issues in either case:

  • Search: you must know the proper keywords to enter, which in turn requires you to know what things are called. My good friend Kate Moran describes the process of identifying the correct keywords as “keyword foraging.” (Kate is the most talented user researcher I have met in my 40-year UX career, so it’s worth paying attention to her research findings.)

  • AI: you have to describe your problem in prose, which presents what I call the articulation barrier: most people are bad writers and can’t write a sufficiently detailed description of their problem.

Which is the worse usability problem: keyword foraging or the articulation barrier? To make AI do something complicated, like generating the test plan for a usability study, the articulation barrier is very high. Such prompts are complicated to construct. Even a reasonably simple request, like the butler image I used above, required a fair amount of descriptive writing to drag out of Midjourney.


However, for simple questions, the AI prompt doesn’t have to be particularly descriptive, nor are intricacies required. This means that the articulation barrier is low for asking questions of AI.


Keyword foraging often presents a problem for search users, particularly people with limited vocabulary. In contrast, AI is less finicky about wanting precise vocabulary since it employs a broad interpretation of the user’s question and offers supplementary information in the initial answer.


On balance, neither interaction style has perfect usability, but for straightforward questions, AI wins. And for complicated questions, search often fails so miserably that AI also wins, even if this is due more to its ability to synthesize a good customized answer than to the usability of the query itself.


Net outcome: websites should expect a dramatic decline in SEO-driven traffic in the next few years. Prepare now! See the follow-up article: Website Survival Without SEO in the Age of AI.
