Checking AI visibility means measuring whether engines like ChatGPT, Perplexity, Gemini, and Google AI Overviews name, recommend, or cite your business when a buyer asks a category question. A useful check covers four data points (mentions, recommendations, citations, competitor share) and produces a six-section report ending in three or four prioritised moves.
How do I check whether ChatGPT recommends my business?
You typed your business into ChatGPT one evening to see what it would say. A competitor name came up first. Maybe a different competitor came up second. Yours was nowhere. You closed the tab and told yourself you would deal with it later.
That moment is what people mean when they say "check whether ChatGPT recommends my business." The question is not whether the model has heard of you. The question is whether AI engines name you, position you as a good choice, and cite you when a buyer asks something in your category.
Checking AI visibility means asking the engines what they say about you across a set of buyer questions, then recording the answers. The recorded data is what makes the check real.
The rest of this post explains what checking actually involves, what the major engines are doing under the hood, how much answers vary, and what a useful check produces. The methods are anchored in published research, not vendor opinion.
What does it mean to check if ChatGPT recommends my business?
A useful check measures four things. Most casual checks stop at one.
- Mentions. Does the engine name your business when asked about your category? A mention is the floor.
- Recommendations. Being named is one thing. Being positioned as a good choice is another. Recommendations are where the trust signal shows up.
- Citations. Some engines, including Perplexity and Google AI Overviews, link to the sources they used. The check records whether your site is one of those sources.
- Competitor share. AI does not pick from a long list. It picks from a small set. The check shows you which businesses fill the answer when yours does not.
These four come from observing how the major engines respond to category queries. The pattern is consistent enough that researchers have started to formalise it.
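As a sketch of how these four data points might be recorded against a single engine answer, here is a minimal Python version. The scoring heuristics (keyword matching for recommendations, a domain-string match for citations) are assumptions for illustration, not any engine's or vendor's actual method, and the business and competitor names in the usage example are invented:

```python
from dataclasses import dataclass

# Words that crudely signal recommending language. An assumption for this
# sketch; a real audit would use a richer signal than keyword matching.
RECOMMEND_WORDS = ("recommend", "best", "top choice", "great option")

@dataclass
class CheckResult:
    mentioned: bool          # does the answer name the business at all?
    recommended: bool        # is it positioned as a good choice?
    cited: bool              # does the answer reference the business's site?
    competitor_share: float  # fraction of tracked competitors also named

def score_answer(answer: str, business: str, domain: str,
                 competitors: list[str]) -> CheckResult:
    text = answer.lower()
    mentioned = business.lower() in text
    # Crude recommendation signal: the business is named and the answer
    # uses recommending language anywhere.
    recommended = mentioned and any(w in text for w in RECOMMEND_WORDS)
    cited = domain.lower() in text
    named = [c for c in competitors if c.lower() in text]
    share = len(named) / len(competitors) if competitors else 0.0
    return CheckResult(mentioned, recommended, cited, share)
```

Run against one recorded answer per engine, this produces the floor-level data a spot-check gives you; the point of the sketch is that all four data points come from the same answer text, read four different ways.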
What do AI engines actually do when asked about a category?
Three layers shape the answer. Knowing the layers makes the result less mysterious.
- Training data. Older content the model has seen during training. This is why long-standing businesses with strong website copy show up even when they have done no specific AI work.
- Live retrieval. Newer content the model fetches in the moment from the open web. This is why your visibility can move within weeks of publishing well-structured pages.
- Pattern signals. Reviews, citations, mentions on third-party sites, structured data. These nudge the model toward who to recommend even when the literal text is not in the answer.
Princeton University's GEO paper (Aggarwal et al., ACM SIGKDD 2024) was the first peer-reviewed paper to formalise this. It introduced "Generative Engine Optimization" as a measurable target and showed that targeted optimisation can lift visibility in AI-generated responses by up to 40%, with effective patterns varying by domain. Google Search Central, the official site-owner documentation for AI Overviews and AI Mode, takes a complementary line: the same SEO best practices that have always mattered still apply, with no additional technical requirements to appear in AI features.
Most owners are surprised by how grounded the field already is. The hype cycle made it sound like a different planet. The research and the documentation read like a careful extension of work site owners have been doing for fifteen years.
How much do AI answers vary between runs?
More than people expect. Two runs of the same prompt on the same engine can return different sets of names. That is not a bug: generative models sample their output, so some variation between runs is built in.
The practical implication: a single query is a snapshot, not a baseline. Running a prompt a handful of times across the major engines is what gives you a usable signal. Looking at one engine in isolation is also not enough. Princeton's research notes meaningful behaviour differences between engines, and any owner who has spot-checked across ChatGPT, Perplexity, and Gemini will see those differences first hand.
If you want a directional sense quickly, open a private window, type one buyer question into ChatGPT, then the same question into Perplexity, then Gemini. Read what comes back. That is enough to tell you whether the conversation is happening with you in the room. It is not enough to tell you what your share of the conversation is over time.
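Because single answers vary, a usable signal comes from aggregating repeated runs rather than reading one. A minimal sketch, assuming the per-run mention results have already been collected as booleans keyed by engine (the engine names and run data in the usage example are invented):

```python
from collections import defaultdict

def mention_rates(runs: list[tuple[str, bool]]) -> dict[str, float]:
    """Turn repeated spot-check results into a mention rate per engine.

    runs: (engine, mentioned) pairs, one per prompt run. How each boolean
    was obtained is out of scope here; this only does the aggregation.
    """
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for engine, mentioned in runs:
        totals[engine] += 1
        hits[engine] += int(mentioned)
    return {engine: hits[engine] / totals[engine] for engine in totals}
```

A rate of 0.6 on one engine and 0.0 on another is the kind of difference a single query can never show you, which is the practical argument for running a handful of repeats per engine.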
How does a quick spot-check differ from a structured audit?
A spot-check tells you whether you exist in the model's memory today. It is fast, free, and useful as a wake-up call.
A structured audit covers more ground, in a few specific ways:
- More questions. A representative set of category, problem, and comparison prompts, not one or two.
- More engines. ChatGPT, Perplexity, and Gemini for the buyer-facing layer. Adding Claude, Grok, and DeepSeek closes the gap.
- Recorded data. Scored mentions, positions, citations, and competitor share across every prompt and engine. Without the data, you have an impression. With the data, you have a baseline.
- A technical layer. Crawl access, structured data, content clarity, internal linking. Plain language, not developer jargon.
- A trust profile. Reviews, third-party mentions, directory consistency.
- A prioritised action list. Three or four moves that move the needle, in order. Not a list of one hundred fixes.
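The recorded-data layer above can be kept as a simple prompt-by-engine grid. A sketch, with invented prompt and engine names, assuming each cell holds one of the three verdicts a structured audit assigns:

```python
# A minimal baseline grid: one verdict per (prompt, engine) cell.
# The three verdict states mirror the engine-by-engine report;
# everything else here is illustrative.
VERDICTS = ("recommended", "mentioned", "missing")

def summarise(grid: dict[tuple[str, str], str]) -> dict[str, int]:
    """Count how many prompt/engine cells fall into each verdict."""
    counts = {verdict: 0 for verdict in VERDICTS}
    for verdict in grid.values():
        counts[verdict] += 1
    return counts
```

A ten-query, six-engine audit fills sixty such cells; the summary is the difference between an impression ("I think we're missing") and a baseline ("missing in 41 of 60 cells") you can re-run after changes.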
The Get Recommended free scan covers nine results (3 queries across 3 engines) and the full diagnostic report covers sixty (10 queries across 6 engines). Tools in the same category include HubSpot's AEO Grader, Otterly, and SE Ranking. Different tools weight signals differently. The underlying check is the same.
What should a useful AI visibility check produce?
A useful report shows the work in six places.
- The prompts panel. The exact buyer questions tested. You should recognise these as questions a customer would actually ask.
- The engine-by-engine verdict. Recommended, mentioned, or missing, per engine.
- The competitor map. Which businesses showed up where yours did not.
- The technical findings. Crawl access, structured data, content clarity, internal linking.
- The trust profile. Reviews, third-party mentions, directory consistency.
- The prioritised action list. Three or four moves, ordered by leverage.
If a check skips any of these, it is incomplete.
What can a visibility check not promise?
This is the part vendors gloss over.
No check, manual or managed, can guarantee an AI engine will recommend your business. Engines are probabilistic. Their training data, retrieval rules, and ranking logic shift over time. Princeton's paper notes strategy effectiveness varies by domain. A pattern that wins in financial services may not move the needle in wellness.
A clean check today does not lock in your position three months from now. The major engines update what they reference on different cycles. Re-running after substantial content or trust-signal changes is sensible. Running one weekly is rarely useful.
What a check can do is take the question off your shoulders. You stop wondering whether the channel works for you. You start with evidence.
Why does AI visibility matter for small businesses now?
The category moved from emerging to mainstream in less than two years.
A 2025 U.S. Chamber of Commerce report found 58% of small businesses use generative AI, up from 40% the year prior. Pew Research Center finds 57% of U.S. adults interact with AI at least several times a week. The buying behaviour AI search captures is no longer a small slice of intent.
If your last marketing investment was an SEO retainer, a visibility check is the cheapest way to find out whether that work is paying off in the channel your buyers are increasingly using. If it is, you have evidence to show your team. If it is not, you have evidence to argue for the next move.
You closed the tab the night your name was missing. The check is what reopens it, with the prompts written down and the answer in your hand.
