Your Brand Is Invisible to AI: A 30-Minute Audit Checklist

Most mid-market and SMB brands don't appear in AI-generated answers. Here's a step-by-step audit checklist: what to test, what the results mean, and where to start fixing it.

Rankry Team · 7 min read

When we started testing brands across AI models at scale, the first finding was uncomfortable: roughly 7 out of 10 mid-market and SMB companies don’t appear in AI assistant responses for questions in their own category.

Not because they’re unknown. Some of these brands had solid organic traffic, dozens of pages ranking in Google’s top 10, and real name recognition in their niche. The issue wasn’t size or quality. It was a mismatch between where they’d built visibility and where AI models actually look.

Search rankings and AI visibility operate on different signals. A brand that dominates Google might be completely absent from ChatGPT, Gemini, or Perplexity responses. And the reverse happens too: companies with modest SEO sometimes show up consistently in AI recommendations because their content is structured in a way that models can extract and cite.

Before optimizing anything, you need a diagnosis. Here’s the checklist we run with every new brand.

1. Check if AI associates your brand with your category

This is the most basic test, and the most revealing one.

Open ChatGPT, Gemini, and Perplexity. In each model, ask a question like: “What is the best [your category] for [your audience]?” For example: “What is the best CRM for a 10-person startup?” or “What project management tool works best for remote teams?”

Run each query three times in separate sessions. Write down the results.

What you’re looking for:

  • Does your brand appear in at least one response out of three?
  • If it does, what position? First, middle of the list, last?
  • How does the model describe you: accurately, generically, or incorrectly?

If your brand doesn’t appear at all across any model, it means your entity weight in the training data is critically low. The model simply doesn’t associate your company with that category. This isn’t a flaw in the model. It’s a signal: across the data the model was trained on and the sources it retrieves, your brand isn’t mentioned frequently enough alongside the terms that define your niche.
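If you'd rather script this first pass than paste prompts by hand, here's a minimal sketch using the OpenAI Python SDK. The brand name, prompt, and model name are placeholders, and API responses won't perfectly match the consumer ChatGPT interface, but the mechanics are the same:

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    BRAND = "AcmeCRM"  # hypothetical brand name, swap in your own
    PROMPT = "What is the best CRM for a 10-person startup?"

    hits = 0
    for run in range(3):  # three separate API calls ≈ three separate sessions
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = response.choices[0].message.content
        mentioned = BRAND.lower() in answer.lower()
        hits += mentioned
        print(f"Run {run + 1}: {'mentioned' if mentioned else 'absent'}")

    print(f"{BRAND} appeared in {hits} of 3 responses")

Repeat the same loop per model and per category prompt, and save the raw answers; you'll reuse them in the later steps.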

2. Check who else talks about you

AI models don’t weight what a brand says about itself very highly. They weight what others say about the brand.

This is a core principle. Research from Princeton and Georgia Tech found that content backed by external citations gets 30–40% more visibility in generative models compared to unsupported claims.

Run a quick test: search your brand name on Google and look at who writes about you besides your own team.
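A useful variant (the domain here is a placeholder): searching "AcmeCRM" -site:acmecrm.com strips your own pages from the results, so everything left is third-party coverage.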

Strong signals:

  • Comparison articles on industry sites where you’re listed alongside competitors
  • Mentions in podcasts, YouTube reviews, Reddit discussions
  • Reviews on Capterra, G2, Product Hunt, or similar platforms
  • Press coverage or features in trade publications

Red flag: if the only source of information about your product on the internet is your own website. When that’s the case, the model has no external data to build co-occurrence patterns, and it can’t recommend you, even if your product is objectively the best option in the category. This is what we call the shadow entity problem: the brand exists, but it doesn’t have enough density in the data for AI to surface it.

3. Test consistency

Most people skip this step, but it matters more than anything else.

Ask the same prompt to the same model three times in three different sessions. If your brand appears once out of three, that’s not visibility. That’s randomness.

Language models use stochastic decoding: every run of the same query can produce a slightly different response because of temperature and nucleus sampling. A single appearance in a single response proves nothing. What matters is frequency: in how many runs out of the total does the brand appear?

A rough scale:

  • 0 out of 3: brand is absent from the model for this category
  • 1 out of 3: weak signal, unstable visibility
  • 2 out of 3: solid base, room to grow
  • 3 out of 3: strong position, brand is confidently associated with the category

This is also why obsessing over individual daily checks creates more noise than insight. What you see today might be stochastic variance, not a real signal. A single data point tells you almost nothing. Patterns across multiple queries over time are what reveal actual position changes. We explored this topic in depth in Daily vs. Weekly: How Often Should You Track Brand Visibility in LLMs.
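Turning raw runs into that scale takes nothing more than a mention count. A small, model-agnostic helper (the thresholds simply mirror the scale above; the response strings can come from any client):

    def consistency_score(responses: list[str], brand: str) -> tuple[int, str]:
        """Count brand mentions across N responses and label the result."""
        hits = sum(brand.lower() in r.lower() for r in responses)
        rate = hits / len(responses)
        if rate == 0:
            label = "absent from the model for this category"
        elif rate < 0.5:
            label = "weak signal, unstable visibility"
        elif rate < 1:
            label = "solid base, room to grow"
        else:
            label = "strong position, confidently associated"
        return hits, label

    # Example with three saved responses (invented for illustration):
    runs = [
        "Top picks: AcmeCRM, HubSpot, and Pipedrive.",
        "Consider HubSpot or Pipedrive for a small team.",
        "AcmeCRM is a solid choice for startups.",
    ]
    hits, label = consistency_score(runs, "AcmeCRM")
    print(f"{hits}/{len(runs)}: {label}")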

4. Compare across models

Different AI models are trained on different data and use different retrieval pipelines. A brand that appears consistently in ChatGPT might be completely invisible in Gemini or Claude.

Test the same set of prompts across at least three models. This reveals where your gaps are. If you show up in one model but not others, the issue likely isn’t your brand; it’s which data each model was trained on and which sources it pulls from during retrieval.

Perplexity, for instance, relies heavily on fresh web data and tends to surface brands with recent content. ChatGPT leans more on parametric memory (training data) that updates every few months. Gemini sits in between, combining its training with Google Search results.

Knowing these differences shapes your strategy. For Perplexity, content freshness is critical. For ChatGPT, volume and depth of historical mentions matter more. For a deeper look at how each model decides which brands to surface, see How LLMs Choose Which Brands to Recommend.
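Scripting the cross-model pass looks like the single-model test with one adapter per provider. A sketch assuming the official openai, anthropic, and google-generativeai SDKs; the model names are current examples and will age:

    # pip install openai anthropic google-generativeai
    import os

    import anthropic
    import google.generativeai as genai
    from openai import OpenAI

    PROMPT = "What is the best CRM for a 10-person startup?"
    BRAND = "AcmeCRM"  # hypothetical brand

    def ask_openai(prompt: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY
        r = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}])
        return r.choices[0].message.content

    def ask_anthropic(prompt: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        r = client.messages.create(
            model="claude-3-5-sonnet-20240620", max_tokens=1024,
            messages=[{"role": "user", "content": prompt}])
        return r.content[0].text

    def ask_gemini(prompt: str) -> str:
        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        model = genai.GenerativeModel("gemini-1.5-pro")
        return model.generate_content(prompt).text

    for name, ask in [("ChatGPT", ask_openai), ("Claude", ask_anthropic),
                      ("Gemini", ask_gemini)]:
        answer = ask(PROMPT)
        status = "mentioned" if BRAND.lower() in answer.lower() else "absent"
        print(f"{name}: {status}")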

5. Check how they describe you

Showing up isn’t enough. How the model frames your brand matters just as much.

Ask the model directly: “What is [your brand] and what is it best for?” Compare the response to how you actually position yourself.

Warning signs:

  • The model lists features your product doesn’t have
  • It associates you with an outdated category
  • The description is so generic it could apply to any competitor
  • The tone skews slightly negative or dismissive

If the model confuses your product with something else or describes you in words you’d never use, that’s a framing problem in the source data. Most likely, the model found conflicting descriptions across different sources and averaged them into something vague.
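One crude but useful way to quantify a framing gap is keyword overlap between the model's description and your own positioning terms. A sketch with invented values; the description string would come from the prompt above:

    POSITIONING = {"crm", "startup", "pipeline", "automation"}  # your core terms (example)
    description = "AcmeCRM is a legacy contact database for enterprises."  # model's answer

    words = set(description.lower().replace(".", "").replace(",", "").split())
    overlap = POSITIONING & words
    print(f"Matched {len(overlap)}/{len(POSITIONING)} positioning terms: {sorted(overlap)}")
    # Low overlap, or matches on terms you'd never use, points to a framing problem.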

What to do after the audit

The audit gives you a picture, not a fix. Here are three paths depending on what you find.

If your brand is invisible (0 out of 3 across all models)

Priority number one is building entity weight. You need mentions on third-party platforms, comparison reviews, expert publications in industry media. This takes months, not days. There is no quick hack that forces an AI model to recommend a brand that barely exists in its data. The good news: every piece of earned coverage compounds over time. AI models will eventually catch up.

If your brand appears inconsistently (1 out of 3)

The signal exists, but it’s weak. Focus on increasing the citability of your existing content: specific data instead of vague claims, structured descriptions, schema markup. We covered these techniques in detail in our guide to optimizing content for AI search.
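By schema markup we mean structured data such as schema.org JSON-LD embedded in your pages, which gives retrieval pipelines an unambiguous, machine-readable description of your product. A minimal sketch of a SoftwareApplication entry with placeholder values, rendered here with Python's json module:

    import json

    schema = {
        "@context": "https://schema.org",
        "@type": "SoftwareApplication",
        "name": "AcmeCRM",  # hypothetical product
        "applicationCategory": "BusinessApplication",
        "description": "CRM for 10-person startups with pipeline automation.",
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.6",   # placeholder figures
            "reviewCount": "212",
        },
    }
    # Embed the output in a <script type="application/ld+json"> tag on the page.
    print(json.dumps(schema, indent=2))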

If your brand appears but is described inaccurately

The fix is framing consistency. A clear, unified message across all platforms. When your website says one thing, G2 says another, and a blog post on TechCrunch says a third, the model averages everything into mush. Audit every public-facing description and align them around the same core positioning.

From manual audit to continuous monitoring

This checklist covers a manual one-time diagnostic. For ongoing monitoring with position tracking, sentiment analysis, and historical trends across models, specialized tools exist. At Rankry, we’ve automated this process: over 100 prompts simulating real buyer queries, tested weekly across five models, broken down by 23+ metrics.

The manual audit is where you start. It gives you ground truth. But AI visibility shifts as models update their training data and retrieval sources. A brand that appears today can disappear next quarter. Consistent measurement turns a snapshot into a strategy.

Tagged: AI Visibility, AEO, Brand Strategy