How to check if ChatGPT knows my brand
There are three reliable tests: ask ChatGPT directly about your brand (with and without web search), ask it for category recommendations and check whether your brand appears, and ask it to compare your brand against a known competitor. Repeat each test five times — ChatGPT's responses are non-deterministic, so a single trial doesn't tell you whether your brand is consistently recognised or only occasionally surfaced.
Diagnose the cause
1. Direct identity test
Ask ChatGPT 'What is [Your Brand]?' Look for accuracy, completeness, and consistency. If the answer is vague, wrong, or non-committal, your training-data presence is weak. If it's accurate but generic, presence is fine but distinctiveness is low. Run the test with web search both on and off to separate what the model knows from its training data from what it recovers via live retrieval.
2. Category recommendation test
Ask ChatGPT 'best [your category] tools for [use case]' and run it five times, tracking how often your brand appears in the recommended set. A mention rate below 20% indicates a clear visibility gap; above 60% means you're in the consideration set the AI defaults to.
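The mention-rate bookkeeping above can be sketched in a few lines of Python. The responses here are hypothetical stand-ins for real ChatGPT outputs (collected by hand or via the API), and `ExampleBrand`, `Acme`, and `WidgetCo` are placeholder brand names:

```python
import re

def mention_rate(responses: list[str], brand: str) -> float:
    """Fraction of responses that mention the brand (case-insensitive, whole word)."""
    pattern = re.compile(rf"\b{re.escape(brand)}\b", re.IGNORECASE)
    hits = sum(1 for r in responses if pattern.search(r))
    return hits / len(responses) if responses else 0.0

def classify(rate: float) -> str:
    """Map a mention rate onto the 20% / 60% thresholds described above."""
    if rate < 0.20:
        return "visibility gap"
    if rate > 0.60:
        return "in the default consideration set"
    return "inconsistent presence"

# Sample responses standing in for five real ChatGPT runs:
runs = [
    "Top tools: Acme, WidgetCo, and ExampleBrand.",
    "I'd recommend Acme and WidgetCo for this use case.",
    "ExampleBrand and Acme are both solid choices.",
    "Acme, WidgetCo, and a few open-source options.",
    "Consider ExampleBrand, Acme, or WidgetCo.",
]
rate = mention_rate(runs, "ExampleBrand")
print(rate)            # 3 of 5 responses mention the brand: 0.6
print(classify(rate))  # 0.6 is not above 0.6, so: inconsistent presence
```

Whole-word matching avoids false positives when a short brand name happens to appear inside another word; if your brand has common spelling variants, add them as alternatives in the pattern.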
3. Comparison test
Ask ChatGPT 'Compare [Your Brand] vs [Known Competitor]'. The response reveals two things: whether ChatGPT has a meaningful representation of your brand at all, and how it positions you relative to a benchmark. If ChatGPT can describe the competitor in detail but only generically describes you, the gap is content-level — you need more substance for ChatGPT to extract.
Fix it
1. Use a free AI visibility check for a baseline
Manual prompts give you a feeling; a free check gives you a number. Linksii's free AI visibility checker runs your category prompts across all four major AI assistants (ChatGPT, Claude, Gemini, Perplexity) and returns a baseline score so you can track changes over time.
2. Set up continuous monitoring
ChatGPT's grounding shifts continuously, so a spot check today doesn't predict tomorrow. Continuous monitoring catches drift early: if visibility drops 10 points, you know within days rather than a quarter later, when pipeline softens. For actively managed brands, daily prompt runs across the four major AI platforms are the right cadence.
3. Act on the gap
If the audit shows weak ChatGPT presence, the playbook is structured data, llms.txt, third-party citations on the sources ChatGPT trusts, and patient consistency over six to twelve months for training-data shifts. Live retrieval recovers in days; full recognition compounds over the next training cycle.
Get a baseline in 60 seconds
Linksii's free AI visibility checker runs a curated set of category prompts across ChatGPT, Claude, Gemini, and Perplexity, and returns a baseline mention rate so you can track changes over time.
Frequently asked questions
Why do I get different answers from ChatGPT each time I ask?
ChatGPT's responses are non-deterministic: the model samples its output, so the same prompt produces different results from run to run, and its grounding state also shifts over time. That's why repeat-testing is essential. One query is a snapshot of noise; ten queries reveal a pattern, and the pattern across multiple runs, not any single response, is the right unit of AI visibility data.
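To see why one query is noise, here is a small simulation: suppose a hypothetical brand genuinely appears in 40% of responses, and compare a single trial against repeated runs:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

TRUE_RATE = 0.4  # suppose the brand genuinely appears in 40% of responses

def single_trial() -> bool:
    """One query: a coin flip weighted by the true mention rate."""
    return random.random() < TRUE_RATE

def observed_rate(n: int) -> float:
    """Observed mention rate across n independent queries."""
    return sum(single_trial() for _ in range(n)) / n

# One query says only "mentioned" or "not mentioned" -- noise either way:
print(single_trial())
# Ten queries start to approximate the true 40% rate:
print(observed_rate(10))
# A hundred queries pin it down further:
print(observed_rate(100))
```

A single trial can only ever report 0% or 100%; the observed rate converges towards the true rate as the number of runs grows, which is the statistical case for repeat-testing.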
Should I test in incognito mode?
It doesn't matter for ChatGPT visibility specifically — your conversation history doesn't change category recommendations the way some users assume. Personalisation in ChatGPT is more conservative than in social-media algorithms. What does affect results: whether web search is on, the exact prompt phrasing, and which model variant you're using.
Is there a way to test if a specific paragraph from my site is being used by ChatGPT?
Indirectly. Quote a distinctive phrase from your site verbatim in a ChatGPT query such as 'Have you seen content about [phrase]?' If ChatGPT recognises and attributes the phrase, the content has been absorbed; if not, it likely hasn't yet made it into the model's grounding. This test is more reliable on Perplexity, which uses live retrieval, than on ChatGPT.