How AI recommends local businesses — a study of lawyers, accountants, and dentists

TLDR: AI recommends a tiny fraction of local businesses, and which ones surface varies dramatically by city size, category, and query language — patterns our study across 6 cities, 3 engines, and 2 languages reveals in detail.

A year ago, 6% of consumers said they’d used an AI tool to find a local business. Today that figure is 45%. According to BrightLocal’s March 2026 Local Consumer Review Survey, ChatGPT has become the most-used AI tool for local business discovery, overtaking both Gemini and Perplexity in a single year. The shift from novelty to default happened faster than most local business owners registered.

The problem is that AI operates nothing like Google’s local pack. SOCi’s 2026 analysis found that ChatGPT recommends just 1.2% of local businesses in any given category. Compare that to Google’s local 3-pack, which surfaces 35.9%. Gemini recommends 11%. Perplexity sits at 7.4%. Where Google spreads visibility across dozens of businesses per query, AI concentrates it into a handful — and often, the same handful repeatedly.

We wanted to understand who ends up in that handful, and why. So we ran a structured study across three professional service categories, six cities of deliberately varying sizes (two of them in a non-English-speaking country), and three AI engines.

We cross-validated every AI recommendation against traditional search results for the same queries. The findings challenged several of our assumptions about what drives local business AI visibility, and they suggest that the opportunity landscape is far more uneven — and in some cases far more open — than the industry conversation acknowledges.

How we studied this

| Field | Details |
|---|---|
| Engines used | ChatGPT (gpt-5.4) · Gemini 2.5 Pro · Claude Sonnet 4.6 · Google search |
| Business types researched | Lawyers · Accountants · Dentists |
| Research focus | How AI recommends local service businesses and which trust signals shape those recommendations |
| Research date | April 2026 |
| Example prompts | “best dentist in Denver” · “best accountant for small business in Berlin Mitte” · “recommended lawyer for property law in Leeds” |

We chose three business categories that share a common profile: high-value local services where buyers research before committing, and where the competitive SEO landscape is already mature. Property lawyers and solicitors, accountants and bookkeepers, and dentists each have established search marketing patterns, which gave us a useful baseline for measuring how AI recommendations diverge from — or mirror — traditional search.

We tested across six cities selected for deliberate variation in size and market character.

  1. In the United States: Los Angeles (Tier 1, approximately 4 million people) and Denver (Tier 2, approximately 700,000).
  2. In the United Kingdom: London (Tier 1, approximately 9 million) and Leeds (Tier 3, approximately 500,000).
  3. In Germany: Berlin (Tier 1, approximately 3.7 million) and Freiburg (Tier 4, approximately 230,000).

This gave us a spectrum from global mega-cities through mid-size regional centres down to a small university city. Critically, it also included one non-English-speaking market.

For each city and category combination, we ran four prompts covering different search intents: a generic city-level query (“best property lawyer in Leeds”), a qualified city-level query with a specific use case (“best property solicitor in Leeds for commercial property”), a neighbourhood-level query (“property lawyer near Headingley Leeds”), and a recommendation-framing query (“recommended property lawyer in Leeds”).

We applied the same four-prompt structure across all three business types and all six cities, with German-language equivalents for Berlin and Freiburg.

For lawyers specifically, we narrowed to property law to avoid the wide variation that comes with legal specialisations — accountants and dentists have more uniform service descriptions, but legal queries can fragment significantly by practice area.

We asked each model to return five businesses per query to keep results comparable across engines and time, as unconstrained prompts produce highly variable response lengths that make cross-engine analysis unreliable.

We queried three AI engines — ChatGPT running gpt-5.4, Gemini 2.5 Pro, and Claude Sonnet 4.6 — all with web search enabled, using natural-language queries a real consumer would type.

Every city received English-language queries. Berlin and Freiburg received an additional set of identical queries in German. We then ran the same queries through traditional search and compared which businesses appeared in both channels versus only one.

The result is more than 400 city-category-engine combinations, each producing a set of recommended businesses that we could compare against each other and against search.
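The cross-validation step reduces to a set comparison per query: which businesses appear in both the AI answer and the search results. As a minimal sketch in Python (the business names and the naive name-normalisation are illustrative assumptions, not the study's actual pipeline):

```python
def normalize(name: str) -> str:
    """Lowercase and drop punctuation so that "Blacks Solicitors LLP."
    and "blacks solicitors llp" compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()


def cross_validate(ai_results: list[str], search_results: list[str]) -> set[str]:
    """Return the AI-recommended businesses that also appear in
    traditional search results for the same query."""
    search_set = {normalize(n) for n in search_results}
    return {n for n in ai_results if normalize(n) in search_set}


# Illustrative data only (one name borrowed from the study's examples).
ai = ["Blacks Solicitors", "Hypothetical Firm A"]
search = ["blacks solicitors", "Hypothetical Firm B"]
print(cross_validate(ai, search))  # {'Blacks Solicitors'}
```

Real matching is messier: firms surface under slightly different names across engines, so a production comparison would need fuzzier matching than this.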

What follows is what we found.

Continue reading: How ChatGPT Decides Which Businesses to Recommend

Findings

The inverse city size rule

One of the clearest patterns in our data was one we didn’t expect: the smaller the city, the more genuine and useful the AI recommendations became.

We scored each city on a 1-to-5 scale for local signal quality — meaning how well the AI recommendations reflected actual local businesses rather than national brands, chains, or geographically misplaced firms. The results were striking.

| City | Population tier | Local signal quality | Best category | Worst category |
|---|---|---|---|---|
| Los Angeles | Tier 1 (~4M) | 2/5 | Property lawyers | Accountants |
| Denver | Tier 2 (~700K) | 4/5 | Dentists | Accountants |
| London | Tier 1 (~9M) | 2/5 | Property lawyers | Accountants |
| Leeds | Tier 3 (~500K) | 5/5 | Property lawyers | Accountants |
| Berlin | Tier 1 (~3.7M) | 2/5 (EN) / 4/5 (DE) | Dentists | Accountants (EN) |
| Freiburg | Tier 4 (~230K) | 4/5 | Property lawyers | Dentists |

The pattern is consistent across both English-speaking markets and the German market. Los Angeles, London, and Berlin (queried in English) all scored 2 out of 5. Denver, Leeds, Freiburg, and Berlin (queried in German) all scored 4 or 5.

The explanation becomes obvious when you look at what the engines actually return. In Los Angeles and London, AI defaults to established national or multinational brands. Ask for an accountant in London and every engine returns PwC, Deloitte, EY, and KPMG. Ask for a property lawyer in LA and you get large multi-office firms with substantial web footprints.

The engines are pattern-matching against prominence, and in large cities, prominence belongs to big organisations.

In mid-size and smaller cities, that dynamic flips. Leeds scored a perfect 5 out of 5, anchored by Blacks Solicitors — a regional firm that appeared in all three AI engines and on page one of traditional search. It was the only business in the entire study to achieve full cross-validation across every channel we tested.

In Denver, the Cherry Creek dental branding cluster — Cherry Creek Dentistry, Cherry Creek Family Dentistry, Cherry Creek Dental Spa — saturated all three engines. In Freiburg, the engines returned genuine local specialists like Sparwasser & Schmidt for property law and Unkelbach Treuhand for accountancy.

The implication is counterintuitive but important. If you’re a local firm in a regional city, you’re competing against far fewer incumbents for AI visibility than you might assume. The engines have less data to work with in smaller markets, which means a firm with strong, consistent signals can dominate — while in a mega-city, you’re fighting against the gravitational pull of household names.

Category matters more than you think

City size affects AI visibility, but category may matter even more. The three categories we tested behave so differently that a strategy designed for a law firm would likely fail for an accountancy practice, and vice versa.

Property lawyers — lowest AI/search divergence

Of the three categories, property law showed the strongest alignment between AI recommendations and traditional search results. The reason is structural: legal directories create dense, well-formatted data signals that both AI training corpora and search engines absorb effectively.

Justia, SuperLawyers, and FindLaw in the US; Legal500 and ReviewSolicitors in the UK — these platforms generate the kind of structured, entity-rich content that AI models can parse with confidence.

Blacks Solicitors in Leeds is the benchmark. It appeared in ChatGPT, Claude, and Gemini, and ranked on page one of traditional search. No other business in the entire study achieved that level of cross-validation. Blacks is a mid-size firm with deep roots in the Leeds market and saturated presence across legal directories — exactly the profile the data suggests AI engines reward.

Other firms showed partial cross-validation. Gantenbein Law Firm and Alderman Bernstein both appeared in AI and search results for Denver. Farrer & Co surfaced across engines and on page one for London. The pattern that emerged was consistent: named, established practices with ten or more years of operating history dominated the recommendations. Sole practitioners were invisible. Boutique firms rarely surfaced. Chains were absent entirely.

In Berlin, the engine divergence became more visible. ChatGPT returned international firms — Greenberg Traurig, Hogan Lovells — while Claude and Gemini returned German-native practices like Raue LLP and Breiholdt Lawyers. The English-language query appears to bias ChatGPT toward its training data’s centre of gravity, which skews Anglo-American for legal content.

Accountants — maximum divergence, widest opportunity

Accountancy produced the most striking results in the study, and arguably the most important for businesses considering their AI strategy. Across all six cities, we found zero confirmed cross-validations between AI recommendations and traditional search results. Not a single accountancy firm appeared in both channels in any market.

In London, the picture was absolute. All three AI engines returned the Big Four — PwC, Deloitte, EY, KPMG — and nothing else. Traditional search, by contrast, surfaced independent firms: BKL, Fusion Accountants, London Accountants.

The two channels produced completely non-overlapping sets. A buyer using ChatGPT and a buyer using Google Maps would encounter entirely different landscapes, with AI actively worse than search for discovering independent practitioners.

Outside London, the fragmentation was different in character but equally severe. In Los Angeles, each engine recommended a different firm: HCVT in Claude, Advise LLP in ChatGPT, Ravix Group in Gemini. No cross-engine overlap at all.

In Leeds, traditional search returned Armstrong Watson, Leon & Company, and Brown Butler — none of which appeared in any AI engine. In Freiburg, Unkelbach Treuhand showed up in ChatGPT and HSP Steuerberater in Claude, each in one engine only.

The suspected underlying cause is the absence of directory infrastructure. Property lawyers have SuperLawyers and Legal500. Dentists have Google Business Profiles with rich structured data. Accountants have no equivalent.

There is no “SuperAccountants” directory generating the kind of structured, cross-referenced signals that AI engines can anchor to. Without those signals, the engines are essentially guessing — and guessing differently from each other.

This fragmentation is simultaneously the problem and the opportunity. The first accountancy firm in any mid-size city to construct deliberate, AI-readable presence could establish the kind of dominance that Blacks Solicitors holds in Leeds property law. The field is wide open precisely because nobody has claimed it yet.

Dentists — moderate divergence, the neighbourhood name pattern

Dental practices fell between the two extremes, with moderate AI/search divergence and one of the clearest actionable signals in the study: embedding a neighbourhood or city name in the business name creates a powerful AI recognition pattern.

Denver is the defining example. Cherry Creek Dentistry, Cherry Creek Family Dentistry, and Cherry Creek Dental Spa all appeared across all three AI engines and in traditional search. Three distinct practices, all carrying the same neighbourhood identifier, all dominating the AI recommendations for their city. The pattern repeated in Leeds, where Leeds City Dentalcare appeared in ChatGPT, Gemini, and search results.

In the UK, an interesting reversal emerged. Traditional search for dentists returned institutional results — NHS.uk listings and Bupa pages — while AI returned independent practices.

This is one of the rare cases where AI proved more useful than search for discovering actual local businesses. A consumer searching Google for a dentist in London would navigate institutional directories; a consumer asking ChatGPT would get practice names they could actually visit (note: we did not measure Google Maps results, which would be expected to provide much more locally skewed results compared to regular search).

In Los Angeles, Claude displayed a distinctive pattern we observed across categories: a gravitational pull toward prestige. It returned celebrity dentists — Dr. Bill Dorfman, Dr. Kevin Sands — names with media profiles and high recognition.

Useful if you’re searching for “best dentist in LA,” but not if you’re looking for a local practice near your home. Larchmont Dental Associates held two engines, suggesting that neighbourhood naming works even in a Tier 1 city, though less reliably.

At the smallest end of the spectrum, Freiburg was too small for AI to surface practice-level dental branding at all. Engines returned individual dentist names rather than practice names in both English and German queries — a signal that below a certain population threshold, AI lacks sufficient data to make practice-level distinctions.

Google Business Profile completeness correlated with AI citation more strongly in dentistry than in the other two categories. The practices that appeared in both AI and search tended to have fully populated profiles with reviews, photos, and service descriptions — suggesting that for dentists specifically, GBP optimisation does double duty.

The language effect — what happens when you ask in German

The most dramatic finding in our study came from testing identical queries in both English and German for Berlin and Freiburg. The results were so different they might as well have been different cities.

The starkest example was Gemini’s response to an English-language query for accountants in Berlin. It returned Appletree Business Services — a Denver-area firm — and George Dimov CPA, a New York accountant. The same Appletree that correctly appeared in Gemini’s Denver results had been transplanted across the Atlantic into a Berlin query.

We initially suspected these firms might have local offices in Berlin. A separate search showed that Google’s AI Overview also placed Appletree in Berlin, yet we could find no office of the firm anywhere in Germany.

George Dimov’s firm, meanwhile, turned out to have a location in New Berlin, Wisconsin. Both are continental geo-localisation failures, the kind that would send a Berlin-based consumer to a firm 8,000 kilometres away.

Switch the same query to German and Gemini self-corrects entirely. It returns b’steuern and Guhr Steuerberatung — authentic local Berlin firms. Guhr Steuerberatung cross-validates between ChatGPT and Gemini in German, making it one of the strongest accountancy signals in the study. The failure vanishes completely when the query language matches the local language.

The three engines responded to language switching differently. Gemini showed the greatest sensitivity — its German-language results bore almost no resemblance to its English-language results for the same cities. Claude fell in the middle, showing moderate shifts. ChatGPT was the most resistant to language switching, particularly for Berlin property lawyers, where it returned international firms regardless of whether the query came in English or German.

This suggests ChatGPT’s training data for legal content skews heavily toward English-language sources, creating a persistent bias that query language alone cannot override.

SEO researcher Glenn Gabe documented a related phenomenon in December 2025. His research found that LLMs frequently reply in the query language but still link to English source pages — even when local-language versions of the same content exist. That finding concerns which page version gets cited.

Ours operates at a different level: which businesses get recommended at all. Both point toward the same underlying issue. AI systems are not reliably localised for non-English markets, and the failure modes range from citing the wrong webpage to recommending businesses on the wrong continent.

The one bright spot in the language data was the dual-language advantage. DentalFirst and Dental21 in Berlin appeared in AI results regardless of whether the query was in English or German. Both practices maintain strong German-language websites and robust search presence. This cross-language visibility — appearing in AI no matter which language the consumer uses — represents the optimal position for businesses in non-English markets.

The practical implication is that businesses in non-English markets need content in both languages, but for different reasons. Local-language content is essential for capturing the majority of queries from local speakers and for triggering correct geo-localisation in engines like Gemini.

English content provides visibility in expat and international queries, and feeds the English-heavy training data that shapes how AI engines understand your business in the first place.

How AI and traditional search diverge for local businesses

Across every city and category we tested, AI and traditional search produced meaningfully different results. The divergence follows patterns that reveal how each channel makes its decisions — and why optimising for one doesn’t automatically transfer to the other.

SOCi’s broader research found that only 45% of the top 20 brands by traditional local search visibility overlapped with the top 20 brands most recommended by AI, measured across the retail sector. Our study finds the same dynamic in professional services, and in several categories, the divergence is more extreme.

| Category | AI/search overlap | Why |
|---|---|---|
| Property lawyers | Low–moderate | Legal directories create structured signals both channels absorb |
| Dentists | Moderate | Google Business Profile data feeds both; chains dominate search while independents dominate AI |
| Accountants | Near zero | No directory infrastructure; AI and search operate on different data entirely |

The London accountancy market is the most striking single example. AI returns the Big Four exclusively. Traditional search returns independent firms exclusively. These are completely non-overlapping sets serving completely different business profiles. A small practice with strong Google Maps visibility gains nothing from it when a consumer asks ChatGPT for a recommendation instead.

The divergence has a structural explanation. AI engines favour high-reputation signals — brand recognition, review sentiment, authoritative mentions — over proximity and narrow category relevance.

SOCi found that AI-recommended businesses averaged 4.3 stars for ChatGPT, 3.9 for Gemini, and 4.1 for Perplexity. Traditional search will surface a 3.8-star practice two streets away. AI won’t. It will surface a 4.5-star firm across town, or in the case of accountants, a global brand with no local office at all.

This means the two channels require different — though overlapping — strategies. Directory presence, structured data, and Google Business Profile completeness serve both. But brand authority signals, topical content, and cross-platform mention consistency matter disproportionately for AI, while proximity signals and local pack optimisation remain essential for search. Treating them as a single channel guarantees underperformance in at least one.

How the three AI engines behave differently

Not all AI engines are equal for local recommendations. Our study revealed consistent character differences between ChatGPT, Claude, and Gemini that persisted across cities and categories — suggesting these are genuine architectural or training-data biases rather than random variation.

Continue reading: How GEO differs for ChatGPT, Gemini, and Claude (and what to do about it)

| Engine | Character | Reliability score | Best for | Worst for |
|---|---|---|---|---|
| ChatGPT (gpt-5.4) | Consistent, establishment-biased | 3.5/5 | Smaller cities, German-language queries | Large-city local discovery |
| Claude (Sonnet 4.6) | Prestige bias, brand-recognition-heavy | 3/5 | Established prestige firms, London property | Small independents, local specialists |
| Gemini (2.5 Pro) | Most language-responsive, highest variance | 2.5/5 | German-language queries, neighbourhood brands | English queries in non-English cities |

ChatGPT was the most consistent engine in the study. It rarely produced catastrophically wrong results, though it reliably gravitated toward established firms with strong web presence. For German-language queries, it was the best performer among the three — returning authentic local German firms where other engines stumbled or defaulted to international names. Its weakness was large-city local discovery, where “established” often meant “national brand” rather than “genuine local practice.”

Claude exhibited a distinctive prestige bias. It gravitates toward well-known, high-reputation brands in a way the other engines do not. In London it returned the Big Four for accountants. In Los Angeles it returned celebrity dentists — Dr. Bill Dorfman and Dr. Kevin Sands — names with media recognition rather than local service relevance.

Claude is reliable for prestige-tier recommendations, but it systematically excludes small independents and local specialists. It conflates “notable” with “locally recommended,” which makes it the least useful engine for a consumer seeking a nearby professional.

Gemini had the highest ceiling and the lowest floor of any engine in the study. The Berlin English-language accountants failure — returning Denver and New York firms for a Berlin query — was the single worst result we recorded. Yet for German-language queries, Gemini produced the most authentic local results of any engine.

The Cherry Creek dental cluster appeared in Gemini and cross-validated across all engines and traditional search, demonstrating that strong local signals translate well on this platform. The pattern suggests Gemini amplifies whatever signals it finds: strong signals produce excellent results, while weak or ambiguous signals produce failures that the other engines avoid.

The key takeaway is that appearing in one engine does not constitute AI visibility. Cross-engine presence — like Blacks Solicitors in Leeds or the Cherry Creek practices in Denver — is the only reliable indicator of genuine, stable AI recommendation. A firm that shows up in ChatGPT but nowhere else holds a fragile position that could shift with the next model update.

Continue reading: How to track and measure AI visibility in 2026: a complete guide

GEO opportunity by category and city

Taking all the data together, we scored each city-and-category combination for how open or locked the AI visibility landscape currently is.

A score of 1 means the market is dominated by entrenched incumbents with little room for new entrants. A score of 5 means AI returns are fragmented, inconsistent, or absent — a wide-open field for the first business that invests in deliberate AI presence.

| City | Property lawyers | Accountants | Dentists |
|---|---|---|---|
| Los Angeles | 2 | 4 | 3 |
| Denver | 3 | 3 | 2 |
| London | 2 | 1 | 4 |
| Leeds | 2 | 5 | 3 |
| Berlin | 3 (EN) / 4 (DE) | 1 (EN) / 4 (DE) | 4 |
| Freiburg | 3 | 4 | 5 |

London accountants scored 1 out of 5 — the most locked market in the entire study. The Big Four occupy all three engines with no variation. An independent London accountancy firm pouring resources into AI visibility would be pushing against a structural ceiling. For independents in this specific market, traditional search SEO remains the far better investment.

At the other end of the spectrum, Freiburg dentists and Leeds accountants both scored 5 out of 5. AI returns for these combinations are either individual practitioner names (Freiburg) or completely fragmented across engines with no overlap (Leeds). A business that invests in deliberate AI presence in either of these markets faces zero entrenched competition. The first mover advantage is real and currently unclaimed.

The accountancy column tells the most compelling story across the matrix. Outside London and Berlin’s English-language results, every city shows a 3, 4, or 5 for accountancy opportunity. This is the category most ripe for early movers — precisely because the directory infrastructure that entrenches incumbents in property law doesn’t exist for accountants. The absence of structured signals that makes AI recommendations unreliable for this category today is the same absence that creates wide-open opportunity for firms willing to build those signals deliberately.

Denver dentists scored 2 out of 5, which illustrates the other side of the equation. Cherry Creek branding has effectively locked that market. The neighbourhood-name saturation across all three engines creates a barrier that a new dental practice in Denver would need an equally strong local signal to overcome. Early movers don’t just gain advantage — they can close the door behind them.

Berlin’s split scoring reveals the language dimension of opportunity. Property lawyers score 3 in English but 4 in German. Accountants score 1 in English (where Gemini returns American firms) but 4 in German. For businesses in multilingual markets, the language of optimisation determines whether they’re competing in a locked or open field.

What local businesses should do: 5 learnings

The data points clearly toward a set of practical actions. These vary by category and city tier, but five principles hold across everything we observed.

Embed your city or neighbourhood in your business name.

The Cherry Creek pattern is the clearest signal in the study. Three dental practices carrying the Cherry Creek name dominate AI recommendations for Denver dentistry across all engines and traditional search simultaneously.

Leeds City Dentalcare achieves the same in its market. This is not a branding nicety — it is an AI recognition signal. A practice named “Cherry Creek Dentistry” outperforms a practice named “Smile Design Studio” in every AI query that includes the neighbourhood name.

If you’re launching a new practice or rebranding an existing one, embedding your geographic identifier in the business name itself affects every AI recommendation query your category generates.

Saturate professional directories for your category.

The businesses that cross-validated across AI and search — Blacks Solicitors, Farrer & Co, Gantenbein Law Firm — share deep directory presence as a common trait. For lawyers, this means Justia, SuperLawyers, and FindLaw in the US; Legal500 and ReviewSolicitors in the UK.

For dentists, Google Business Profile completeness is the single most impactful signal, correlating with AI citation more strongly than any other factor in our dental data. For accountants, the directory infrastructure barely exists — which is simultaneously the problem and the opportunity.

Building structured presence on the few platforms that do exist (industry association listings, regional business directories, accounting-specific platforms) provides outsized returns precisely because so few competitors have done it.
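Beyond third-party directories, a firm can make its own site machine-readable with schema.org markup. The sketch below emits a minimal JSON-LD `AccountingService` block; every name, address, and URL is a placeholder, and this reflects general structured-data practice rather than anything measured in the study:

```python
import json

def local_business_jsonld(name, street, city, postal_code, country, url):
    """Build a minimal schema.org AccountingService JSON-LD payload
    with consistent name and address fields."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "AccountingService",
        "name": name,
        "url": url,
        "address": {
            "@type": "PostalAddress",
            "streetAddress": street,
            "addressLocality": city,
            "postalCode": postal_code,
            "addressCountry": country,
        },
    }, indent=2)

# Placeholder firm; embed the output in a <script type="application/ld+json"> tag.
print(local_business_jsonld(
    "Example Accountants Leeds", "1 Example Street",
    "Leeds", "LS1 1AA", "GB", "https://example.com"))
```

Keeping these fields identical to the name-address-phone details used in directories and Google Business Profile is what produces the signal consistency described above.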

Build dual-language web content if you operate in a non-English market.

The Berlin data makes this case definitively. English-only content risks invisibility in local-language queries, where the majority of local consumers will search. Worse, it can trigger incorrect geo-association — as Gemini demonstrated by returning American firms for an English-language Berlin query.

Maintaining strong content in both the local language and English achieves the cross-language visibility position that DentalFirst and Dental21 hold in Berlin, appearing in AI recommendations regardless of query language.
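On the implementation side, hreflang annotations are the standard way to tell crawlers that German and English versions of the same page exist. A minimal sketch (the practice domain and URL paths are placeholders):

```python
def hreflang_links(pages: dict[str, str]) -> str:
    """Render <link rel="alternate" hreflang="..."> tags mapping each
    language code to its localised page URL."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{code}" href="{url}" />'
        for code, url in pages.items()
    )

# Placeholder URLs for a Berlin practice with German and English pages;
# "x-default" marks the fallback for unmatched languages.
print(hreflang_links({
    "de": "https://example-praxis.de/zahnarzt-berlin",
    "en": "https://example-praxis.de/en/dentist-berlin",
    "x-default": "https://example-praxis.de/zahnarzt-berlin",
}))
```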

Monitor AI citations per engine, not in aggregate.

Appearing in ChatGPT does not mean you have AI visibility. It means you have ChatGPT visibility. If Gemini returns three competitors instead, those competitors are capturing the segment of consumers who use Gemini — and each engine rewards different signals.

ChatGPT favours established web presence. Claude gravitates toward prestige and brand recognition. Gemini amplifies strong local signals but fails on weak ones. Monitoring each engine individually and investigating what your competitors in each one have done differently is the only way to build genuine cross-engine presence.
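A per-engine monitoring log can stay very simple: record which engines surfaced each business on each run, then flag cross-engine presence. A hypothetical sketch (the engine names match the study; the data structure and the firms other than Blacks Solicitors are invented):

```python
from collections import defaultdict

def engine_coverage(runs):
    """Map each recommended business to the set of engines that cited it,
    given (engine, [businesses]) pairs from repeated query runs."""
    coverage = defaultdict(set)
    for engine, businesses in runs:
        for business in businesses:
            coverage[business].add(engine)
    return dict(coverage)

# Illustrative run log for one query.
runs = [
    ("chatgpt", ["Blacks Solicitors", "Hypothetical Firm A"]),
    ("gemini", ["Blacks Solicitors"]),
    ("claude", ["Blacks Solicitors", "Hypothetical Firm B"]),
]
coverage = engine_coverage(runs)
# Cross-engine presence: cited by all three engines, not just one.
stable = [b for b, engines in coverage.items() if len(engines) == 3]
print(stable)  # ['Blacks Solicitors']
```

Businesses appearing in only one engine's set are the fragile positions the study describes.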

For accountancy firms specifically: the window is open, and it won’t stay that way.

Accountancy is the most fragmented category across all markets outside London. There is no established directory infrastructure creating entrenched AI signals. There are no SuperLawyers-equivalent platforms generating the structured data that locks incumbents in place.

The first accountancy firm in any mid-size city that builds deliberate AI-optimised content — structured data, consistent name-address-phone information, topical authority content on the specific services they provide — could establish the kind of cross-engine dominance that our study found only in property law.

The Cherry Creek dental practices and Blacks Solicitors didn’t achieve their positions by accident. They built signal density that AI engines could parse with confidence. For accountants, that density doesn’t exist yet. The firm that builds it first in any given market owns the category until someone else catches up.

Continue reading: What is the editorial tax (and what does it mean for GEO)?

Study limitations

These findings reflect a point-in-time snapshot. AI outputs shift as models update and new content enters training data, so the specific businesses surfaced today may differ from what we observed. What tends to be more durable is the structural layer: which types of sources get cited, how directory signals translate into recommendations, and how category and city size shape the competitive landscape. That’s what the patterns here describe.

We ran four prompts per city-category combination, which is far from every possible query. Additionally, we constrained models to return only five businesses each, so unconstrained prompts might yield more diverse results. That said, the prompt sets were designed to mirror how real buyers approach these tools. When the same patterns emerged across different prompt variants and across repeated runs, that consistency gave us confidence that we were seeing signal rather than noise.

The study covers six cities across the UK, the US, and Germany, with the German cities tested in both English and German. The city-size contrasts and language comparisons are most useful as a framework for hypothesis-testing in your own market – the patterns are worth validating before applying directly to cities or languages outside our scope.

Three business categories are enough to identify meaningful differences, yet not every local service will behave the same way. The underlying mechanisms – directory infrastructure, editorial coverage, structured business data – generalise to categories with similar online footprints, but categories with fundamentally different digital ecosystems (healthcare, real estate) may behave differently.

All queries ran from a fixed IP and account setup, so localisation and personalisation effects aren’t captured. For most local service queries, though, AI tools don’t heavily personalise results, which means our baseline is close to what most first-time searchers would see – but that’s an assumption worth keeping in mind.

Conclusion

What we found across six cities, three engines, and two languages is that AI-driven local discovery already behaves very differently from traditional search — selecting far fewer businesses, rewarding different signals, and diverging significantly by category, city size, and query language. These patterns reflect how AI engines are trained and what data they have access to, and they’re consistent enough to be useful as a planning input, even accounting for the variance in AI outputs.

The clearest finding is also the most actionable: smaller cities, especially, are wide open. Freiburg dentists, Leeds accountants, and several other combinations scored 5 out of 5 on our opportunity index — markets where no single business has established the kind of cross-engine presence that Blacks Solicitors holds in Leeds property law.

Continue reading: 7 Steps to Get a New Brand Into AI Search Results From Scratch

