Documentation Index
Fetch the complete documentation index at: https://docs.haloxlabs.ai/llms.txt
Use this file to discover all available pages before exploring further.
Citation Tracking
In GEO, a brand mention is not the same as a citation. HaloX separates Mention, Source, and Citation so teams can see whether their assets are becoming evidence for AI answers.
Where is this in the app?
| Item | Description |
|---|---|
| App menu | Citation Tracking or the related app area |
| Main inputs | brand, domain, keywords, competitors, Prompt Sets, date filters |
| Main outputs | Mention, Source, Citation, Citation Rate, Source Visibility |
| Connected features | Dashboard, Site Audit, Strategy Map, Content Factory, Weekly Reports |
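The Mention / Source / Citation split can be sketched as a small computation. The data shape, field names, and the Citation Rate definition (cited answers ÷ tracked answers) below are illustrative assumptions, not HaloX's actual schema:

```python
from dataclasses import dataclass

# Hypothetical answer record; HaloX's real data model may differ.
@dataclass
class AnswerRecord:
    mentions_brand: bool   # brand name appears in the AI answer text
    used_as_source: bool   # an owned URL was retrieved for the answer
    cited: bool            # an owned URL is linked as evidence in the answer

def citation_rate(records: list[AnswerRecord]) -> float:
    """Share of tracked answers that cite an owned URL."""
    if not records:
        return 0.0
    return sum(r.cited for r in records) / len(records)

records = [
    AnswerRecord(True, True, True),     # mentioned, sourced, and cited
    AnswerRecord(True, False, False),   # mention only: known entity, no evidence
    AnswerRecord(False, False, False),  # absent from the answer entirely
    AnswerRecord(True, True, False),    # sourced but not cited: weak evidence block
]
print(f"Citation Rate: {citation_rate(records):.0%}")  # → 25%
```

The fourth record, sourced but not cited, is exactly the gap Citation Tracking is meant to surface.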
When should you use it?
Use it when competitors are cited but your pages are not.
How to read it first
Start with the current state
Do not stop at the score or first row. Check which question, cluster, URL, or model is driving the issue.
Separate the cause signals
Identify whether the issue is SEO, AI answer coverage, source/citation weakness, or missing content.
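The triage order above can be sketched as a rule chain. The signal names and their precedence are illustrative assumptions, not HaloX fields:

```python
# Hypothetical triage heuristic: check the most fundamental gap first.
def classify_cause(indexed: bool, appears_in_answers: bool,
                   cited: bool, has_matching_content: bool) -> str:
    if not has_matching_content:
        return "missing content"            # nothing exists to cite
    if not indexed:
        return "SEO"                        # engines cannot read the page
    if not appears_in_answers:
        return "AI answer coverage"         # readable, but not surfacing
    if not cited:
        return "source/citation weakness"   # surfacing, but not evidence
    return "healthy"

print(classify_cause(indexed=True, appears_in_answers=True,
                     cited=False, has_matching_content=True))
# → source/citation weakness
```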
Questions we hear in onboarding and sales meetings
Should SEO and GEO be measured separately?
Yes, but they should be connected. SEO is the foundation that lets search engines and AI systems read your pages. GEO measures whether your brand appears as a mention, source, or citation inside AI answers.
What should we do when a score is low?
Start with critical site issues, questions where competitors already appear, and non-brand clusters with search demand. HaloX documentation explains screens as score → reason → next action.
Can we use real customer examples in public docs?
No. Public docs should not expose customer names, private keywords, email addresses, or actual performance numbers. Use anonymized industry scenarios instead.
WikiDocs alignment
- GEO Wiki: citation seeding, source quality, grounding
- Related concepts: GEO, AVI, Citation, Brand Mention, Grounding
Track citation reasons and fan-out paths together
In GEO study sessions and customer conversations, the important question was not only “which URL was cited?” but “why was our page omitted while a competitor or third-party source was cited?” Citation Tracking is the starting point for understanding sub-queries, source tiers, and validation loops.
| Observation | Likely interpretation | Next action |
|---|---|---|
| The brand is mentioned but no official URL is cited | The entity is known, but official evidence blocks are weak | Strengthen fact sheets, FAQ, About, and product definition pages. |
| Competitor comparison pages are cited repeatedly | The fan-out path moved into comparison/recommendation questions | Create comparison tables and decision-criteria content. |
| Only media or third-party sources are cited | Third-party evidence is stronger than owned sources | Strengthen newsroom hubs and link official evidence. |
| Branch pages are omitted for local questions | URL structure, local schema, or branch FAQ is weak | Add branch landing pages, local FAQ, and multilingual support. |
| Citation sources differ by engine | Models use different channels and validation behavior | separate reporting by engine and prompt group |
Feature use cases
Citation Tracking is more than checking whether a link appears. It helps explain why AI selected one source over another.
| Use case | What to inspect | How to turn it into action |
|---|---|---|
| Third-party articles outrank official sources | Missing official FAQ/fact sheet/entity hub | Strengthen newsroom hub and official evidence pages. |
| Competitor comparison pages repeat | Comparison/recommendation fan-out path | Build comparison tables and selection-criteria content. |
| Brand is mentioned but not linked | Gap between Source and Citation | Add definitions, evidence links, schema, and internal links. |
| Local questions miss branch pages | Branch URL and LocalBusiness signals | Improve branch landings, local FAQ, and consistent facts. |
| Models cite different sources | Engine-specific source pools | Separate reports for ChatGPT, Gemini, and Perplexity question sets. |
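Engine-separated reporting, as in the last row, amounts to grouping citation records by engine before aggregating. The record shape below is an assumption for illustration:

```python
from collections import defaultdict

# Illustrative citation records; field names are assumptions,
# not HaloX export columns.
records = [
    {"engine": "ChatGPT", "cited_url": "https://example.com/faq"},
    {"engine": "Gemini", "cited_url": "https://news.example.org/review"},
    {"engine": "ChatGPT", "cited_url": "https://example.com/about"},
    {"engine": "Perplexity", "cited_url": "https://example.com/faq"},
]

# One report per engine: each model draws on its own source pool.
by_engine = defaultdict(list)
for r in records:
    by_engine[r["engine"]].append(r["cited_url"])

for engine, urls in sorted(by_engine.items()):
    print(engine, urls)
```

Splitting first and aggregating second keeps engine-specific source pools from blurring into one average, which is the failure mode the table warns about.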
