Generative engine optimization has gone from niche concept to boardroom priority in about eighteen months. Every marketing team wants to show up in ChatGPT recommendations, Google AI Overviews, and Claude search results. And that makes sense – these platforms are reshaping how people find and evaluate brands.
But the rush to “do GEO” has created a familiar problem. Teams are chasing tactics before they understand the fundamentals. They’re layering AI visibility hacks on top of weak foundations, gaming authority signals that AI systems are already learning to discount, and tracking metrics that don’t actually tell them anything useful.
The result is wasted budget, declining visibility, and a growing gap between brands that treat GEO as a credibility discipline and those treating it like a shortcut factory.
Continue reading: 7 Steps to Get a New Brand Into AI Search Results From Scratch
We’ve worked with enough brands at this point to see the same mistakes come up over and over. Here are nine of the most common ones – and what to do about each.
1. Publishing scaled AI content without human oversight
The logic seems sound on paper. AI tools can produce content at scale, so why not publish hundreds of pages targeting long-tail queries and flood the zone? The problem is that every other brand had the same idea, and the AI systems evaluating your content can tell.
Google explicitly warns that generating large volumes of pages without adding genuine user value may violate their spam policies. This isn’t a theoretical risk. It’s an enforcement priority.
The performance gap is significant too. Research from MarTech found that human-generated content designed to surface in AI responses performs up to an order of magnitude better than AI-generated material. That’s more than a marginal difference.
There’s also a forward-looking risk that most teams aren’t thinking about. As AI models improve, they’re getting better at detecting over-optimized, templated content. Content that looks fine to today’s models may actively get deprioritized by tomorrow’s. You’re essentially building on sand.
Just look at one example: SEO consultant Harpreet Chatha flagged a case study on X where a brand that had ramped up mass-produced “alternative” articles against competitors was “in the process of getting hammered by both Google & ChatGPT.” It worked really well, until it didn’t.
This AEO / GEO golden goose case study ramped up "alternative" articles vs their competitors. Now their visibility is in the process of getting hammered by both Google & ChatGPT. pic.twitter.com/Y5TpH7Tvyl
— Harpreet (@harpreetchatha_) March 18, 2026
How to fix it
Use AI as a drafting tool, not a publishing engine. Every piece of content needs a human editor who adds original insight, checks claims, and ensures the piece says something that five hundred other pages don’t. The bar isn’t “does this read okay?” – it’s “does this contain something a reader can’t get anywhere else?”
If you’re producing more than a handful of articles per week and your team size hasn’t grown to match, that’s a red flag worth examining honestly.
2. Using shortcuts to external authority
This one shows up in a few forms. Publishing “best of” listicles on your own blog that rank you as the best option. Buying placements on low-authority directories. Manufacturing mentions through link schemes that technically create third-party references but carry no real trust signal.
The core problem is that AI systems look for patterns of genuine authority – real journalists, real publications, real experts recommending your brand because it actually solves a problem.
Now, to be fully transparent: as of March 2026, many of these tactics still seem to be working quite well. Even though Google has already pushed out updates intended to derank self-promotional listicles, we still see them ranking fairly high in search results. We’re also continuing to see cheap press release distributions get cited by LLMs.
That said, it’s fair to assume models will keep improving at distinguishing editorial endorsement from paid placements like press releases, and editorial listicles from self-promotional ones. In fact, we predict that real editorial articles will become even more important, while smaller in-house blogs, press releases, and sponsored posts will be devalued at some point in 2026 or beyond.
How to fix it
Invest in editorial coverage. Pitch stories to real publications, build relationships with journalists and analysts in your space, work with a good PR agency, or produce original research that others want to reference. This is slower and harder than buying placements. It’s also the only approach that compounds over time rather than eroding.
3. Chasing citations instead of earning mentions
This is a distinction we’ve spent a lot of time thinking about at PolyGrowth, and it’s one most brands get wrong.
When people talk about GEO visibility, they usually mean citations – their URL appearing as a source in an AI-generated response. That’s a useful metric. But in most commercial contexts, what actually drives business outcomes is brand mentions – the AI recommending your brand by name, whether or not it links back to your page.
To be clear, these metrics undoubtedly intertwine. As Glenn Gabe shows, mentions and citations for a specific site often correlate. That said, these charts also show that they’re far from perfectly mirroring each other.
Site seeing massive decline in mentions and citations in ChatGPT. When checking what dropped, ChatGPT is surfacing much more relevant content than before. Very interesting example… Mentions is the first screenshot, then citations. I need to dig in further but my first reaction… pic.twitter.com/LHoLqHiJBc
— Glenn Gabe (@glenngabe) March 19, 2026
Citations primarily come from content structure and technical optimization. You can improve them by making your pages easier to extract and reference.
Mentions, on the other hand, come from editorial presence and authority. They’re earned when your brand shows up consistently across independent third-party sources that the AI system trusts.
In our own research, we’ve found that brands obsessing over citation counts often neglect the authority-building work that drives mentions. They’re optimizing the wrong signal.
How to fix it
Track both citations and brand mentions separately. Build a monitoring workflow that checks how AI systems talk about your brand, not just whether they link to your pages.
Furthermore, don’t rely solely on tools like DataForSEO or Semrush. Run manual searches for your most important keyphrases and periodically check whether AI actually recommends your brand or products.
Then invest accordingly – editorial PR, thought leadership, industry analyst relationships, and original research all feed the mention signal. Content structure and on-page optimization feed the citation signal. You need both, but don’t mistake one for the other.
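If you want to put that prompt-check workflow on a schedule, a short script is enough to start. Below is a minimal sketch using OpenAI’s Python SDK; the prompts, brand terms, and model name are hypothetical placeholders, and it queries the bare model rather than the search-enabled product, so treat it as one directional signal among several:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical buyer-intent prompts and brand terms - swap in your own.
PROMPTS = [
    "What are the best project management tools for small agencies?",
    "Which CRM would you recommend for a B2B startup?",
]
BRAND_TERMS = ["AcmeCRM", "acmecrm.com"]  # placeholder brand

def check_mention(prompt: str) -> bool:
    """Ask the model a question and record whether the brand shows up."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whichever model you monitor
        messages=[{"role": "user", "content": prompt}],
    )
    answer = (response.choices[0].message.content or "").lower()
    return any(term.lower() in answer for term in BRAND_TERMS)

if __name__ == "__main__":
    for prompt in PROMPTS:
        status = "MENTIONED" if check_mention(prompt) else "absent"
        print(f"{status:9} | {prompt}")
```

Responses vary from run to run, so log results over weeks and read the trend rather than any single answer.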
4. Going quiet – inconsistent external validation
AI models weigh recency. A brand that appeared in fifty third-party sources two years ago but hasn’t been mentioned anywhere recently is going to lose ground to a competitor with a steady drumbeat of fresh coverage.
Continue reading: 5 Steps to Build External Validation for AI Recommendations
We see this pattern regularly. A brand does a big PR push at launch, earns a wave of mentions, and then goes quiet for months. When they check their AI visibility six months later, they’ve been overtaken by competitors who maintained consistent presence – not necessarily louder, just more regular.
Sustained presence is what builds and maintains the pattern AI systems rely on when making recommendations. Our own research showed that brands with predominantly older mentions tend to fade from AI recommendation sets, even when their underlying product or service hasn’t changed.
How to fix it
Build editorial presence into your ongoing operations. That means a regular cadence of contributed articles, media engagement, conference speaking, and original research. Even one to two meaningful external touchpoints per month can maintain the recency signal that AI models factor into their recommendations.
5. Accidentally blocking AI crawlers
This is the most technically straightforward mistake on this list, and it’s surprisingly common. Many brands have outdated robots.txt rules that block AI crawlers without anyone realizing it.
The tricky part is that “AI crawlers” aren’t a single thing. OpenAI operates three distinct bots: OAI-SearchBot handles search visibility, GPTBot is for training data, and ChatGPT-User handles real-time user-triggered access. Anthropic has a similar setup: ClaudeBot for training, Claude-SearchBot for search indexing, and Claude-User for user-requested retrieval.
Blocking a training bot is a perfectly legitimate business decision. But accidentally blocking a search bot – the one that determines whether you show up in AI-powered search results – is an own-goal. And because many teams added blanket “block all AI bots” rules in 2023 or 2024 without revisiting them, that’s exactly what’s happening.
How to fix it
Audit your robots.txt today. Specifically, check whether you’re blocking OAI-SearchBot, ChatGPT-User, Claude-SearchBot, or Claude-User. If you are, and you want AI search visibility, unblock them.
Keep blocking training bots if that aligns with your content licensing position – that’s a separate decision. The key is making an intentional choice for each bot rather than applying a blanket rule you set two years ago and forgot about.
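If you’d rather script the check than eyeball the file, Python’s standard library can do it. Here’s a minimal sketch that reports which AI bots can fetch your homepage – the domain is a placeholder, and the user-agent tokens are the ones from the breakdown above:

```python
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"  # placeholder - use your own domain

# Search and user-retrieval bots vs. training bots, per the breakdown above.
AI_BOTS = {
    "OAI-SearchBot": "search (OpenAI)",
    "ChatGPT-User": "user retrieval (OpenAI)",
    "GPTBot": "training (OpenAI)",
    "Claude-SearchBot": "search (Anthropic)",
    "Claude-User": "user retrieval (Anthropic)",
    "ClaudeBot": "training (Anthropic)",
}

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

for bot, role in AI_BOTS.items():
    verdict = "ALLOWED" if parser.can_fetch(bot, SITE + "/") else "BLOCKED"
    print(f"{verdict:7} | {bot:17} | {role}")
```

Parsers differ slightly on wildcard handling, so confirm anything surprising against the raw file. A blocked search or user-retrieval bot is the own-goal to fix; a blocked training bot may be exactly what you intended.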
6. Weak answer design – burying the answer
AI extraction systems favor pages that lead with a direct answer, expose key facts in visible text, and include concise summaries near the top. If your content buries the actual answer under three paragraphs of context-setting, the AI system may extract a weaker summary or skip your page entirely.
Common problems include long throat-clearing introductions before getting to the point, mixed intents on a single page (trying to answer three different questions instead of one), and data tables with no textual explanation, which AI systems can’t easily parse.
Google’s guidance on AI features makes this fairly explicit. Pages that are structured for extraction – clear answers, visible facts, logical hierarchy – perform better in AI-powered search experiences.
How to fix it
Structure every key page around answer design. Lead with the direct answer in the first paragraph or two. Use clear heading hierarchy so AI systems can identify distinct sections. If you have data in tables, add a textual summary.
Think of it as writing for a reader who’s scanning for a specific answer – because that’s essentially what the AI system is doing on their behalf.
This doesn’t mean dumbing down your content. It means front-loading the value and letting depth follow naturally.
7. Treating GEO as separate from SEO fundamentals
There’s a persistent myth that GEO requires some fundamentally different technical approach. Special markup, secret formatting, dedicated “AI SEO” plugins. Google has been pretty clear about this: there are no additional technical requirements for appearing in AI Overviews beyond being indexable and snippet-eligible.
That means all the boring SEO fundamentals – clean crawlability, solid internal linking, proper heading structure, fast page loads, useful content – are also GEO fundamentals.
Brands that shift budget away from technical SEO into “AI hacks” often make their visibility worse, not better. A page that isn’t properly indexed in regular search is invisible to AI features too.
As SEO expert Aleyda Solis highlights, pages occupying top ranks in Google were cited 3.5x more often than pages outside the top 20 SERPs.
👀 Interesting insights from @_oshdavidson / @AirOpsHQ research about The Influence of Retrieval, Fan-out, and Google SERPs on ChatGPT Citations:
* 85% of Sources ChatGPT Retrieves Are Never Cited: Pages with stronger title-query alignment and clearer language were more… pic.twitter.com/S8JNEVTbYL
— Aleyda Solis 🕊️ (@aleyda) March 18, 2026
Now, correlation doesn’t necessarily mean causation. In our view, one shouldn’t interpret this as pure SEO leading to AI recommendations.
What’s most likely happening here is that brands ranking #1 on Google also often have top-tier earned media coverage, tons of social activity, lots of reviews, and other external signals that AI looks at. Combined with strong SEO, this makes them a no-brainer to recommend in the eyes of ChatGPT and Co.
How to fix it
Before investing in any GEO-specific tactics, audit your SEO foundations. Is your site fully crawlable? Is your internal linking logical? Are your core pages properly indexed? Is your content genuinely useful? Fix these first. GEO builds on SEO – it doesn’t replace it, and it often doesn’t work without it.
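A first pass over those basics can also be scripted. This sketch uses only the standard library and a hypothetical URL list; it flags core pages that return errors or carry a noindex directive, either of which makes a page invisible to AI features as well as regular search. The body check is deliberately crude, so anything flagged deserves a manual look:

```python
import urllib.request

# Hypothetical core pages - replace with your own key URLs.
CORE_PAGES = [
    "https://example.com/",
    "https://example.com/pricing",
    "https://example.com/product",
]

for url in CORE_PAGES:
    req = urllib.request.Request(url, headers={"User-Agent": "index-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
            header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
            # Crude check: look for 'noindex' anywhere in the first 64 KB of HTML.
            body_noindex = "noindex" in resp.read(65536).decode("utf-8", "ignore").lower()
    except Exception as exc:
        print(f"ERROR | {url} | {exc}")
        continue
    flag = "CHECK" if (status != 200 or header_noindex or body_noindex) else "OK"
    print(f"{flag:5} | HTTP {status} | {url}")
```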
8. Measuring GEO wrong – trusting dashboards blindly
Most teams tracking GEO performance are looking at the wrong numbers. They’re checking rankings in AI tools, raw click counts, or dashboard-reported “AI visibility scores” without validating whether any of it connects to actual business outcomes.
The same AirOps research shared by Aleyda Solis highlights a measurement gap most teams don’t even know exists: “85% of sources ChatGPT retrieves are never cited,” and “32.9% of cited pages that appeared in any top-20 SERP were discovered only through fan-out.”
That means tracking your original target keyword alone isn’t enough to understand where your citation visibility is actually being won or lost. The discovery paths are more complex than a simple keyword-to-citation pipeline.
GEO dashboards are useful as directional intelligence. They can show trends, flag drops, and highlight opportunities. But they’re not ground truth. OpenAI appends utm_source=chatgpt.com to outbound links, so you can see actual traffic coming from AI search in your own analytics – that’s a much more reliable signal than any third-party dashboard estimate.
How to fix it
Build a measurement framework that connects AI visibility to business outcomes. Track AI-referred traffic in your own analytics using UTM parameters and referral data. Monitor brand mentions and citations separately.
Run regular manual prompt checks to see how AI systems actually describe your brand in response to key queries. Use dashboards for directional signals, but validate everything against your own first-party data.
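As a concrete starting point, here’s a minimal sketch that segments AI-referred sessions out of a raw analytics export. The CSV filename and columns are assumptions – adapt them to whatever your analytics tool actually exports:

```python
import csv
from collections import Counter

# OpenAI tags outbound clicks with this utm_source; referrer hints cover
# clicks that arrive without UTM parameters.
AI_UTM_SOURCES = {"chatgpt.com"}
AI_REFERRER_HINTS = ("chatgpt.com", "claude.ai")

def is_ai_referred(row: dict) -> bool:
    """Classify a session as AI-referred via UTM tag or referrer domain."""
    if row.get("utm_source", "") in AI_UTM_SOURCES:
        return True
    return any(hint in row.get("referrer", "") for hint in AI_REFERRER_HINTS)

counts = Counter()
with open("sessions_export.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):  # assumed columns: utm_source, referrer
        counts["ai" if is_ai_referred(row) else "other"] += 1

total = sum(counts.values()) or 1
print(f"AI-referred: {counts['ai']} of {total} sessions ({counts['ai'] / total:.1%})")
```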
9. One-size-fits-all GEO across platforms
Google, OpenAI, and Anthropic work differently. They have distinct crawler architectures, different content evaluation approaches, and separate controls for how your content gets used. A single undifferentiated GEO playbook that treats all AI platforms the same will miss platform-specific opportunities and leave visibility on the table.
For example, something that helps with OpenAI search visibility – like ensuring ChatGPT-User can access your pages – may do nothing for Google AI Overviews, which rely on standard Google Search indexing.
And Anthropic’s crawler model, with its own separation between training, search, and user retrieval bots, requires its own review of your robots.txt configuration.
Even the way each platform discovers and evaluates content varies. Google’s AI features lean heavily on existing search rankings and snippet eligibility.
OpenAI’s search tool has its own retrieval and fan-out processes. Treating them identically means you’re optimizing for an average that doesn’t actually exist.
How to fix it
Develop platform-specific awareness within your GEO strategy. You don’t need three entirely separate strategies, but you do need to understand the differences. Review your crawler permissions for each provider.
Understand how each platform discovers content – through search rankings, direct retrieval, or fan-out. Test your visibility on each platform independently, and adjust where the differences are material.
The ultimate goal is to make informed decisions about where the platforms diverge and focus your effort where it matters most.