
How do I manually audit my brand's visibility across ChatGPT, Perplexity, Gemini, and Claude?

Daniel Reeves · 5 min read


Run a structured query set across all four engines—direct brand questions, category comparisons, and problem-solution prompts—then document which engines cite your brand, where citations appear in responses, and which source URLs they reference.

Quick Guide

| Step | What to do | Why it helps |
| --- | --- | --- |
| Design query set | Write 8-12 test prompts covering brand-direct, category, and problem queries | Reveals visibility gaps across different user intent patterns |
| Query each engine | Run identical prompts in ChatGPT, Perplexity, Gemini, and Claude | Exposes which platforms cite you and which don't |
| Document citation signals | Record brand mentions, citation links, source URLs, and position in response | Creates baseline data for tracking improvement over time |
| Build tracking spreadsheet | Log results by engine, query type, and citation format | Makes patterns visible and enables month-over-month comparison |

Why manual audits reveal what keyword tools miss

According to Ahrefs, an AI visibility audit measures where your brand is mentioned across AI search platforms, how often, how accurately, and which sources those mentions draw on. Traditional SEO tools track rankings and clicks, but they can't tell you whether ChatGPT recommends your competitor when a user asks for category advice, or whether Perplexity cites outdated information about your product.

The citation gap matters because AI reference rate determines whether your brand enters the consideration set at all. A user who never sees your name in an AI response can't click through to your site. Manual audits expose this gap by testing the exact queries your prospects ask, then documenting which engines surface your brand and which don't.

How to structure and execute the audit

Start by building a query set that mirrors real user behavior, not just branded searches. Write 3-4 prompts in each category: direct brand questions ("What does [YourBrand] do?"), category comparisons ("Best [category] tools for [use case]"), and problem-solution queries ("How do I solve [problem]?"). This structure reveals whether you appear only when users already know your name, or if you surface during earlier research stages.
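A query set like this is easy to generate from templates. The sketch below is illustrative only: the brand name, category, use case, and problem strings are hypothetical placeholders you'd replace with your own.

```python
# Hypothetical placeholders -- substitute your own brand, category, and pain points.
TEMPLATES = {
    "brand-direct": [
        "What does {brand} do?",
        "Is {brand} a reliable tool?",
        "What are the main features of {brand}?",
    ],
    "category": [
        "Best {category} tools for {use_case}",
        "Top alternatives in the {category} space for {use_case}",
        "Which {category} platform should a small team use for {use_case}?",
    ],
    "problem-solution": [
        "How do I {problem}?",
        "What is the easiest way to {problem}?",
        "Recommended tools to {problem}",
    ],
}

def build_query_set(brand, category, use_case, problem):
    """Expand each template into a concrete test prompt, keyed by query type."""
    values = {"brand": brand, "category": category,
              "use_case": use_case, "problem": problem}
    return [
        (query_type, template.format(**values))
        for query_type, templates in TEMPLATES.items()
        for template in templates
    ]

queries = build_query_set("AcmeAnalytics", "product analytics",
                          "SaaS onboarding",
                          "track feature adoption across user cohorts")
for query_type, prompt in queries:
    print(f"[{query_type}] {prompt}")
```

Three variations per category yields nine prompts, comfortably inside the 8-12 range, and keeping the query type attached to each prompt makes the later spreadsheet analysis easier.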

── DeepCited Platform

Explore how DeepCited automates this audit workflow across all four AI engines.

Try DeepCited Platform free

Query each engine in a clean browser session or incognito mode to avoid personalization. Paste the same prompt into ChatGPT, Perplexity, Gemini, and Claude, then document four signals: whether your brand appears in the response text, whether a citation link accompanies the mention, which source URL the engine references, and where in the response your brand appears (first recommendation, buried in a list, or absent entirely).

Build a tracking spreadsheet with columns for query text, engine name, brand mentioned (yes/no), citation link present (yes/no), source URL, and position in response. This baseline becomes your reference point for measuring improvement after you publish new content or optimize existing pages.

After running this manual audit once, you'll understand your baseline visibility, but repeating this process weekly or monthly quickly becomes unsustainable. We built DeepCited to automate this exact workflow: the platform monitors your brand's presence across ChatGPT, Perplexity, Gemini, and Claude continuously, then closes the loop by generating citation-optimized content and verifying improved visibility post-publishing.
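If you prefer a script to a spreadsheet app, the column layout above maps directly onto a CSV file. This is a minimal sketch; the file name and example row values are hypothetical.

```python
import csv
import os

# Column layout from the audit workflow above; field names are illustrative.
FIELDS = ["query_text", "engine", "brand_mentioned", "citation_link_present",
          "source_url", "position_in_response"]

def log_result(path, row):
    """Append one audit observation, writing the header row on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical example observation from one Perplexity run.
log_result("audit_baseline.csv", {
    "query_text": "Best product analytics tools for SaaS onboarding",
    "engine": "Perplexity",
    "brand_mentioned": "yes",
    "citation_link_present": "yes",
    "source_url": "https://example.com/blog/feature-adoption",
    "position_in_response": "2nd in list",
})
```

Appending one row per query-engine pair keeps every month's audit in a single file, so a later run can be diffed against the baseline.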

Frequently Asked Questions

Which specific query types should I test for each AI engine?

Test three query types across all engines: direct brand queries ("What is [YourBrand]?"), category comparison queries ("Best tools for [use case]"), and problem-solution queries ("How do I [solve specific problem]?"). Direct queries reveal if engines recognize your brand at all. Category queries show whether you appear in competitive contexts. Problem queries test if you surface when users describe pain points without naming solutions. Run 3-4 variations of each type to account for phrasing differences.

How do I document and compare citation formats across different AI platforms?


Create a spreadsheet with columns for engine name, query text, brand mentioned (yes/no), citation format (inline link, footnote number, source list), source URL, and response position. Perplexity typically uses numbered footnotes. ChatGPT sometimes includes inline links or source lists depending on search mode. Gemini and Claude vary by whether they're pulling from live web search or training data. Record the exact format each time because consistency signals which engines reliably cite your content.
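Once rows like these accumulate, a small summary makes the per-engine pattern obvious. A sketch, using made-up rows shaped like the spreadsheet just described:

```python
from collections import defaultdict

# Illustrative sample rows; in practice these come from your tracking spreadsheet.
rows = [
    {"engine": "Perplexity", "brand_mentioned": "yes", "citation_link_present": "yes"},
    {"engine": "Perplexity", "brand_mentioned": "yes", "citation_link_present": "yes"},
    {"engine": "ChatGPT",    "brand_mentioned": "yes", "citation_link_present": "no"},
    {"engine": "ChatGPT",    "brand_mentioned": "no",  "citation_link_present": "no"},
    {"engine": "Gemini",     "brand_mentioned": "yes", "citation_link_present": "yes"},
    {"engine": "Claude",     "brand_mentioned": "no",  "citation_link_present": "no"},
]

def citation_rates(rows):
    """Per engine: share of queries with a brand mention, and with a citation link."""
    counts = defaultdict(lambda: {"total": 0, "mentions": 0, "citations": 0})
    for r in rows:
        c = counts[r["engine"]]
        c["total"] += 1
        c["mentions"] += r["brand_mentioned"] == "yes"
        c["citations"] += r["citation_link_present"] == "yes"
    return {
        engine: (c["mentions"] / c["total"], c["citations"] / c["total"])
        for engine, c in counts.items()
    }

for engine, (mention_rate, citation_rate) in citation_rates(rows).items():
    print(f"{engine}: mentions {mention_rate:.0%}, citations {citation_rate:.0%}")
```

A gap between the mention rate and the citation rate for a given engine is exactly the "mention without citation" pattern discussed in the next question.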

What does it mean if my brand appears in responses but without a citation link?

It means the engine learned about your brand from training data rather than live web search, or it's paraphrasing information without attributing a source. This pattern appears more often in ChatGPT and Claude, which rely heavily on pre-trained knowledge. Mentions without citations are weaker signals because users can't click through to verify claims or visit your site. Focus optimization efforts on engines that do provide citation links, since those drive referral traffic.

Should I test with a logged-in account or incognito mode?

Use incognito mode or a clean browser session without login. Logged-in accounts introduce personalization that skews results—ChatGPT may surface sources you've interacted with before, and Gemini may pull from your search history. Incognito mode shows what a new user with no prior context sees, which is the baseline visibility that matters for acquisition. Run one logged-in test afterward if you want to compare personalized vs. default results.


How long does a complete 4-engine manual audit typically take?

Plan for roughly 3-6 hours to run 8-12 queries across four engines and document results in a tracking spreadsheet. Writing the query set takes 20-30 minutes. Running each query and recording citation signals takes 5-7 minutes per engine, so 20-28 minutes per query across all four platforms. Multiply by your query count and add time for spreadsheet setup. Monthly audits compound this time cost, which is why demand generation leaders are evaluating automation tools that handle continuous monitoring.
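The arithmetic behind that estimate is simple enough to sketch, using the per-step numbers above (the spreadsheet-setup time is excluded here, so treat the result as a floor):

```python
# Back-of-the-envelope audit time from the per-step estimates in the text.
ENGINES = 4
MIN_PER_ENGINE = (5, 7)    # minutes to run and record one query on one engine
QUERY_WRITING = (20, 30)   # minutes to draft the query set

def total_hours(num_queries):
    """Return (low, high) total hours, excluding spreadsheet setup."""
    lo = (QUERY_WRITING[0] + num_queries * ENGINES * MIN_PER_ENGINE[0]) / 60
    hi = (QUERY_WRITING[1] + num_queries * ENGINES * MIN_PER_ENGINE[1]) / 60
    return lo, hi

for n in (8, 12):
    lo, hi = total_hours(n)
    print(f"{n} queries: {lo:.1f}-{hi:.1f} hours")
```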
