11 May 2026 · 8 min read

What belongs on an AI-friendly FAQ page now that Google's FAQ rich results are gone?

Google removed FAQ rich results from search in May 2026, but AI engines did not. ChatGPT, Claude, Perplexity, and Gemini still lift question-and-answer pairs from FAQ pages and quote them in their responses. An AI-friendly FAQ page needs customer-language questions, answer-first sentences, specific (not vague) answers, one claim per entry, concrete numbers and names, FAQPage schema in JSON-LD, and five to fifteen entries that each earn their place. Get the seven elements right and the page becomes the source AI engines quote when prospects ask in your category.

In May 2026, Google quietly removed FAQ rich results from search. The little expandable question boxes that used to sit under search listings are gone. The Search Console report that tracked them is being retired. The Rich Results Test stopped flagging FAQPage markup. For most small business owners, the news landed as a shrug. The FAQ page was already low on the list of things to polish.

Here is the part nobody put in the headline. The audience for your FAQ page just changed. It shifted from Google's snippet box to ChatGPT, Claude, Perplexity, and Gemini. Those engines do not need Google to render a rich result. They read the page, lift the answer, and quote it back to a customer who is asking a question. The page got more useful, not less, the day Google stopped showing it as a snippet.

This post explains what now belongs on an AI-friendly FAQ page for a small business website. It covers what changed, what the AI engines actually lift from a page, and the seven elements that move a generic FAQ list into one AI engines will quote.

What changed in May 2026

Google announced the change in late 2025 and removed the search appearance on 7 May 2026. The FAQ search appearance, the report in Search Console, and the FAQ check in the Rich Results Test are all being retired across mid-2026. Even before this, FAQ rich results were only eligible for health and government sites in most regions. For the typical small business, those rich expandable answers never appeared anyway.

The change does not affect the FAQPage schema itself. Schema.org still defines FAQPage as a valid type. Google still reads the markup. It just stops using it to draw a special result in search. The schema lives on as a signal to any machine reading the page, including AI engines that read and quote your site.

If you wrote FAQ content for the Google snippet, the change is a real loss. If you wrote FAQ content as something useful to a customer who landed on the page, nothing has changed for the human reader and the case for the AI reader just got stronger.

What AI engines actually do with FAQ pages

A short tour, because what comes next depends on knowing this.

ChatGPT, Claude, Perplexity, and Gemini do not quote whole pages. They lift sentences and short passages, then stitch them into an answer with a source link. Inside the engines, your page is split into small chunks before any of this happens. OpenAI's published guidance describes chunks of around 800 tokens with overlap. Anthropic's citation system runs at the sentence level. Each chunk is then matched against a customer's question using semantic search, not keyword search.

This means three things for an FAQ page.

First, each question-and-answer pair is read as a more or less standalone unit. The chunking step tends to keep a question with its answer because they sit close together in the page source. That is good. It also means a thin or vague answer is judged on its own merits, not propped up by context elsewhere on the page.

Second, the first sentence of every answer matters out of proportion to the rest. The Princeton research on generative engine optimisation (Aggarwal and team, 2024) found that statements which arrive early, sit inside a clear structure, and read as direct answers are pulled into AI responses up to forty percent more often than text the engine has to dig for.

Third, customer-language questions outperform internal ones. Real customers do not type your service label. They type the words they would say to a friend. AI engines match the question text on your page against the customer's prompt before they even look at the answer, so the question wording is the first filter.
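
To make the chunking step concrete, here is a rough sketch of an overlapping word-window chunker. It is illustrative only: real engines split on tokens, not words, and the exact sizes vary by engine, but it shows why a question that sits directly above its answer on the page usually lands in the same chunk.

```python
def chunk_words(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping word windows.

    Illustrative stand-in for token-based chunking: each chunk shares
    `overlap` words with the previous one, so a question and the answer
    sitting right below it usually end up inside the same chunk.
    """
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```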

The seven elements an AI-friendly FAQ page needs

Each of these is small. Together they decide whether your FAQ page becomes the source AI engines quote or one they skip.

1. Customer-language questions, not internal labels

Write questions the way a customer would say them out loud. "Do you do emergency callouts on weekends?" beats "After-hours service availability". "Do I need a bookkeeper if I already have an accountant?" beats "Bookkeeping versus accounting services". The plain version is what a real person types into ChatGPT. The internal label is what your services page would call it.

A quick test for you. Read the question aloud. If it sounds like a brochure heading, rewrite it as a sentence you would expect from a confused first-time prospect.

2. Answer-first sentences

The first sentence of every answer should be the answer. Not the setup. Not the qualifier. Not the polite warm-up. If the question is "How long does a tax return take?", the first sentence is "A standard tax return takes us two to five business days." The context goes in the next two sentences.

This is the single biggest lift on an existing FAQ page. Owners often write answers in talking order. That buries the real answer in sentence three or four. Move it to the front.

3. A specific, useful answer, not a vague one

"It depends" is honest, but it is also hard to lift. AI engines will not quote a line that does not commit to anything. Replace "It depends on your situation" with the actual decision tree. "Most clients see results in three to six months. Service businesses with under fifty staff usually land in the lower half of that range. Larger or more complex businesses sit at the higher end."

The reader gets a real answer. The AI engine gets a line it can lift and credit to you.

4. One question, one claim

Each FAQ entry should answer one question with one main claim, supported by one or two follow-up sentences. Stuffing three answers into one entry trips up the chunking step. The engine cannot tell which part answers which sub-question, so it either picks one or pulls a muddled extract.

If your current FAQ has bundled answers, split them. "How much does it cost and what is included?" becomes two entries.

5. Numbers, names, and specifics where you have them

AI engines reward concrete detail. Stats, named tools, model numbers, ranges, time windows, and case examples are the kind of content the Princeton research found generative engines weight most. "We have served over four hundred Sydney bookkeeping clients since 2018" beats "We have helped many clients over the years." The first one is quotable. The second is filler.

This is not bragging. It is giving an AI engine the kind of detail it needs to credit a confident line to a real business.

6. FAQPage schema, even after the Google change

Add FAQPage structured data (in JSON-LD) to the FAQ page. The Google snippet is gone, but the schema is still a machine-readable label that says "this section contains question and answer pairs, here they are." Bing still uses it. Major AI engines parse structured data when they crawl. Schema.org maintains the spec and updated it as recently as March 2026.

The schema is not a magic ingredient. It is a small piece of plumbing that makes everything else clearer to the machines. Most WordPress, Shopify, and Squarespace sites can add it through a plugin or theme setting. A developer can add it in about an hour for a custom site.
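
As a reference point, a minimal FAQPage block looks like this. The two questions reuse examples from earlier in this post and the answer wording is illustrative; swap in your own pairs.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How long does a tax return take?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A standard tax return takes us two to five business days. Complex returns with multiple income sources can take longer."
      }
    },
    {
      "@type": "Question",
      "name": "Do you do emergency callouts on weekends?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. We take emergency callouts on Saturdays and Sundays between 7am and 7pm."
      }
    }
  ]
}
</script>
```

The `text` values should match the visible answers on the page word for word, so the markup labels the same content the reader and the AI engine see.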

7. Five to fifteen entries, no more

Five is the floor that signals depth. Fifteen is about the upper end where each entry stays distinct and useful. Past that, the entries start to repeat or turn vague, which hurts the page rather than helps it. If you have more than fifteen genuine FAQs, consider splitting into two pages by topic (for example, pricing FAQs on the pricing page, service FAQs on a service page).

The right count is the count where every entry earns its place.

The mistakes that kill extraction

A short list, drawn from auditing dozens of small business FAQ pages.

The classic mistake is writing FAQs that protect the business from work, not FAQs that answer real questions. "What is your refund policy?" with a one-line answer of "Please refer to our terms and conditions" is not an FAQ. It is a deflection. Real customer questions, answered like the customer is in front of you, are what get quoted.

The second mistake is recycling the same answer three different ways. The chunking process catches this and treats the page as low-information.

The third mistake is putting the FAQ behind an accordion that hides the answer text from the page source. Some accordions only load the answer when a reader clicks, which means the AI engine never sees it. Open the page in a private browser tab and view source. If you cannot see your answers in the HTML, the AI engine cannot either.
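
The view-source check can also be scripted. A minimal sketch, with placeholder URL and answer snippets: fetch the page without running any JavaScript, then test whether each answer appears verbatim in the raw HTML.

```python
from urllib.request import Request, urlopen

def snippets_in_html(html: str, snippets: list[str]) -> dict[str, bool]:
    """Report which answer snippets appear verbatim in the HTML string."""
    return {s: s in html for s in snippets}

def fetch_raw_html(url: str) -> str:
    """Fetch the page the way a crawler does: raw HTML, no JavaScript run,
    so accordion content that only loads on click will be missing."""
    req = Request(url, headers={"User-Agent": "faq-source-check/0.1"})
    with urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Example usage (placeholder URL and answer text):
# html = fetch_raw_html("https://example.com/faq")
# print(snippets_in_html(html, ["takes us two to five business days"]))
```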

How to know your FAQ page is working

Three signs to watch for.

The first sign is direct. Ask ChatGPT, Claude, Perplexity, and Gemini the questions on your FAQ page, the same way a customer would, and see whether your site is the source they cite. A simple side-by-side check across the engines tells you which questions you currently own and which you do not. This is what an AI visibility scan does at scale; Get Recommended runs this kind of cross-engine check, and other tools do similar work.
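
If you want to run the side-by-side check programmatically, the bookkeeping is simple. In the sketch below, `AskFn` is a placeholder for whatever client you wire up per engine (each engine's own API, or even a manual paste of its answer and cited sources); the function just records, per question and per engine, whether your domain appears among the citations.

```python
from typing import Callable

# Placeholder type: a per-engine callable that takes a question and returns
# (answer_text, list_of_cited_source_urls). You supply the real clients.
AskFn = Callable[[str], tuple[str, list[str]]]

def ownership_report(questions: list[str], engines: dict[str, AskFn],
                     your_domain: str) -> dict[str, dict[str, bool]]:
    """For each question, record per engine whether your domain is cited."""
    report: dict[str, dict[str, bool]] = {}
    for question in questions:
        report[question] = {}
        for name, ask in engines.items():
            _answer, sources = ask(question)
            report[question][name] = any(your_domain in url for url in sources)
    return report
```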

The second sign is shape. After three months of sharpening your FAQ entries, check whether AI engines are quoting whole sentences from your answers rather than paraphrasing. Direct quoting suggests the chunking step is finding clean, standalone answers. Paraphrasing suggests the engine is reassembling pieces because nothing in your text was directly liftable.

The third sign is traffic. Visits from ChatGPT and Perplexity show up in Google Analytics as referral sources. Look for chat.openai.com, perplexity.ai, and similar referrers. The volume is small for most small businesses today, but the trend line is what matters.

Where this fits in a small business website

The FAQ page sits at the bottom of most navigation menus and at the top of most AI engine extraction lists. That mismatch is the opportunity.

A working FAQ page does three jobs at once. It answers the questions prospects type into AI engines. It saves your team from answering the same email twenty times a week. It signals to every AI engine that your site has clear, structured, useful content (which lifts other pages too).

If your FAQ page is currently a short list of branded questions with vague answers, the seven elements above are the order to work through. Customer-language questions first, then answer-first sentences, then specifics. Schema and entry count are last because they amplify good content but cannot rescue weak content.

The deprecation of Google's FAQ snippet is not a reason to abandon FAQ pages. It is a prompt to write them for the readers who still pull from them every day.

Frequently asked questions

Does FAQPage schema still matter after Google removed FAQ rich results?

Yes, for a different audience. Google stopped using FAQPage markup to draw expandable rich results in search on 7 May 2026, but the markup itself is still a valid Schema.org type. Bing still uses it. Major AI engines parse structured data when they crawl. The schema is a small piece of plumbing that labels your question-and-answer pairs in machine-readable form, which makes extraction cleaner for any tool that reads the page. For a small business, the cost of adding it is low and the benefit is now skewed toward AI engines rather than Google.

Why did Google remove FAQ rich results from search?

Google announced in late 2025 that it was simplifying the search results page by removing structured data displays that were no longer providing significant additional value for most users. The FAQ search appearance was dropped on 7 May 2026, with related Search Console reports and the Rich Results Test check being retired across mid-2026. Even before the change, FAQ rich results were only eligible for well-known health and government sites in most regions, so most small businesses never saw their FAQ markup turn into a rich result anyway.

How many FAQs should a small business website have?

Between five and fifteen entries is the right range. Five is the floor that signals depth to AI engines. Fifteen is around the upper end where each entry stays distinct and useful. Past that, entries start to repeat or turn vague and the page reads as low-information to the chunking step inside AI engines. If you have more than fifteen genuine FAQs, split them across topical pages, for example pricing FAQs on the pricing page and service FAQs on the relevant service page.

What is the single biggest fix on most existing FAQ pages?

Move the answer to the first sentence of every answer. Owners often write answers in talking order, which buries the real answer in sentence three or four. AI engines weight the first sentence of any chunk disproportionately, so a buried answer often does not get extracted at all. Rewrite each answer so the very first sentence is the direct answer to the question, then add context and qualifiers in the sentences that follow.

Do I need a developer to add FAQPage schema to my website?

Usually no. WordPress sites with Yoast or RankMath have FAQPage schema built in through a block or plugin setting. Squarespace, Wix, and Shopify support the basics through native blocks or marketplace plugins. A custom site or a tightly controlled theme typically needs a developer for around an hour of work to add JSON-LD to the FAQ page template. The schema lives in a script tag in the page head or body, so once it is wired in, it updates whenever the page content updates.

How can I tell whether AI engines are quoting my FAQ page?

Three signs. The first is direct. Ask ChatGPT, Claude, Perplexity, and Gemini the questions on your FAQ page the way a customer would and see whether your site is cited as the source. The second is shape. Watch for AI engines quoting whole sentences from your answers rather than paraphrasing, because direct quoting suggests the chunking step is finding clean standalone answers. The third is traffic. Referral traffic from chat.openai.com, perplexity.ai, and similar shows up in Google Analytics as referral sources, and the trend line is what matters more than the absolute numbers for most small businesses today.

See where you stand

Free 60-second AI visibility scan. No account, no card.

Get Your Free AI Visibility Score
