Test if AI mentions my brand

Quick answer

To test whether AI mentions your brand, run the same five buyer-journey prompts across ChatGPT, Claude, Gemini, and Perplexity, then count how often your brand is named. The right prompt mix: one branded query (asks about you by name), one category query ('best [category] tool'), one comparison query ('your brand vs competitor'), one problem-solving query, one decision-stage query. Different platforms surface different brands — test all four.
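The five-by-four test matrix above can be sketched as a small script. The brand, competitor, and category names below are hypothetical placeholders, and the prompt wording is illustrative rather than a prescribed template:

```python
# Illustrative prompt set covering the five buyer-journey stages.
# All names here are placeholders -- substitute your own brand and category.
BRAND = "Acme Analytics"       # hypothetical brand
COMPETITOR = "RivalMetrics"    # hypothetical competitor
CATEGORY = "product analytics"

PROMPTS = {
    "branded":    f"What is {BRAND}?",
    "category":   f"What is the best {CATEGORY} tool?",
    "comparison": f"{BRAND} vs {COMPETITOR}: which should I choose?",
    "problem":    "How do I track feature adoption across my product?",
    "decision":   f"Best {CATEGORY} tool for a mid-size SaaS team?",
}

PLATFORMS = ["ChatGPT", "Claude", "Gemini", "Perplexity"]

# Full test matrix: 5 prompt types x 4 platforms = 20 runs per pass.
runs = [(platform, stage, prompt)
        for platform in PLATFORMS
        for stage, prompt in PROMPTS.items()]
print(len(runs))
```

Each run is then submitted to the named platform and the response checked for the brand name.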

Diagnose the cause

1. Branded queries set the floor

Start with branded queries: 'What is [Your Brand]?' and 'What does [Your Brand] do?'. These should succeed whenever the model has any grounding for your brand at all. If branded queries fail, you have a brand-presence problem so fundamental that no other test matters until it's fixed.

2. Category queries reveal commercial visibility

Branded queries tell you whether AI knows you exist. Category queries — 'best [category] tool', 'top [your space] platform' — tell you whether AI recommends you when buyers are choosing. The latter predicts pipeline. Run five variations and track mention rate; above 30% is competitive in most categories, below 10% is a meaningful gap.
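Mention rate is simple to compute once you have the response text. A minimal sketch, using the article's 30% / 10% thresholds and made-up example responses (brand names are hypothetical):

```python
def mention_rate(responses, brand):
    """Share of responses that name the brand at least once (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

def classify(rate):
    # Thresholds from the article: above 30% competitive, below 10% a gap.
    if rate > 0.30:
        return "competitive"
    if rate < 0.10:
        return "gap"
    return "middling"

responses = [
    "Top tools include Acme Analytics, RivalMetrics, and others.",
    "RivalMetrics leads this category.",
    "Consider Acme Analytics for mid-size teams.",
    "Popular options: RivalMetrics, DataPeek.",
    "Acme Analytics is a solid choice.",
]
rate = mention_rate(responses, "Acme Analytics")
print(rate, classify(rate))  # 0.6 competitive
```

Substring matching is the crude baseline; in practice you may want to handle brand-name variants and misspellings.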

3. Decision-stage queries predict bookings

Most brands are mentioned far more in awareness queries than in decision-stage queries such as 'best [category] for [specific use case]' or 'compare [your brand] vs [competitor]'. The gap between awareness and decision mention rates is the single most diagnostic number you can track. A 40% awareness mention rate paired with a 5% decision mention rate is the silent killer of AI-driven pipeline.
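The awareness-versus-decision gap falls out of per-stage mention rates. A minimal sketch, with synthetic results matching the 40%-vs-5% example above:

```python
from collections import defaultdict

def stage_rates(results):
    """results: list of (stage, mentioned) pairs. Returns mention rate per stage."""
    totals, hits = defaultdict(int), defaultdict(int)
    for stage, mentioned in results:
        totals[stage] += 1
        hits[stage] += mentioned
    return {stage: hits[stage] / totals[stage] for stage in totals}

# Synthetic data: 40% awareness mention rate, 5% decision mention rate.
results = (
    [("awareness", True)] * 4 + [("awareness", False)] * 6
    + [("decision", True)] * 1 + [("decision", False)] * 19
)
rates = stage_rates(results)
gap_points = (rates["awareness"] - rates["decision"]) * 100
print(round(gap_points))  # 35
```

A large positive gap is the signal to invest in decision-stage content rather than more awareness coverage.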

Fix it

1. Use a free tool for the baseline

Linksii's free AI visibility checker runs a curated set of category prompts across all four major AI platforms and returns a baseline mention rate. Use it as the starting point, then build a dedicated prompt set tailored to your category.

2. Track the same prompts daily

AI responses are non-deterministic; a single test is statistical noise. Running the same 25–50 prompts daily across all four platforms is the minimum cadence that produces a usable signal. Patterns over a month reveal what actually moves and what's noise.
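One way to separate signal from run-to-run noise is a rolling mean over the daily mention rates. A minimal sketch with a 7-day window and made-up daily numbers:

```python
from statistics import mean

def rolling_mean(daily_rates, window=7):
    """Rolling mean over daily mention rates; smooths non-deterministic runs."""
    return [mean(daily_rates[max(0, i - window + 1): i + 1])
            for i in range(len(daily_rates))]

# 14 days of noisy daily mention rates hovering around 30%.
daily = [0.28, 0.36, 0.24, 0.32, 0.30, 0.26, 0.34,
         0.30, 0.38, 0.26, 0.34, 0.28, 0.32, 0.30]
smoothed = rolling_mean(daily)
print(round(smoothed[-1], 3))
```

The smoothed series is what you watch for trend changes; single-day spikes and dips are expected and mostly meaningless.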

3. Look at sentiment and position, not just mention rate

Mention rate is a binary view. The real signal is position (where you appear in the response — first, third, sixth) and sentiment (recommended, neutral, warned-against). A brand mentioned 80% of the time but always last and always lukewarm has worse AI visibility than a brand mentioned 30% of the time but first and enthusiastically.
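A composite score that weights mentions by position and sentiment makes the 80%-but-last versus 30%-but-first comparison concrete. The weights below are illustrative assumptions, not a standard formula:

```python
def visibility_score(mentioned, position=None, sentiment="neutral"):
    """Weight a mention by where it appears and how it is framed.

    position: 1 = listed first. sentiment: 'recommended', 'neutral', or 'warned'.
    The weights are illustrative, not an industry standard.
    """
    if not mentioned:
        return 0.0
    position_weight = 1.0 / position if position else 0.5
    sentiment_weight = {"recommended": 1.0, "neutral": 0.6, "warned": 0.1}[sentiment]
    return position_weight * sentiment_weight

# Mentioned 80% of the time, but always sixth and lukewarm...
frequent_but_weak = 0.8 * visibility_score(True, position=6, sentiment="neutral")
# ...versus mentioned 30% of the time, but always first and recommended.
rare_but_strong = 0.3 * visibility_score(True, position=1, sentiment="recommended")
print(frequent_but_weak < rare_but_strong)  # True
```

Under these weights the rarer-but-stronger brand wins, matching the article's claim; tune the weights to how your buyers actually read AI answers.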

Get a baseline in 60 seconds

Linksii's free AI visibility checker runs a curated set of category prompts across ChatGPT, Claude, Gemini, and Perplexity, and returns a baseline mention rate so you can track changes over time.

Frequently asked questions

How many prompts should I track to get a reliable read?

25–50 is the practical minimum for most brands. Fewer than 25 and you don't get statistically meaningful patterns across the buyer journey; more than 50 and you start capturing noise on long-tail queries that don't move pipeline. Focus on prompts that mirror what your buyers actually type into AI assistants.

Should I track the same prompts on all four AI platforms?

Yes. Different platforms recommend different brands for the same query — sometimes wildly different. Tracking only ChatGPT means missing the Perplexity story; tracking only Perplexity means missing the Gemini-via-Google-Search story. Cross-platform consistency is itself a signal: brands that appear on all four are platform-neutral; brands that appear only on one need platform-specific work.

Do I need to track prompts in multiple languages?

Only if you operate in multiple markets. AI brand visibility differs significantly by language because models draw on language-specific training data. A brand strong in English-language responses can be invisible in German or Japanese. If revenue is concentrated in English-speaking markets, English-only is fine.
