How to get your B2B service firm recommended by AI

TLDR: Getting your B2B service firm recommended by AI requires editorial presence in the sources AI actually reads – not directory listings on Clutch or DesignRush. Our study of 25 prompts across 5 service categories and 3 AI engines found that AI and search produce very different vendor shortlists, that the opportunity varies dramatically by category, and that most service firms are completely invisible to AI right now. The good news: in fragmented categories, a relatively small number of well-placed editorial mentions can shift your visibility significantly.


If you run a B2B service firm – an agency, an MSP, a recruitment business, an accountancy practice – there’s a pretty good chance your next client will ask ChatGPT or Gemini for recommendations before they ever open Google.

In fact, according to 6sense and Forrester, 89% of B2B buyers have now adopted generative AI, and 94% used LLMs during their most recent purchasing process. A separate Search Engine Land study found that 37% of consumers now begin searches with AI rather than traditional search engines – and 47% say AI influences which brands they trust.

The scale of this shift is hard to overstate. Gartner projects that AI agents will command $15 trillion in B2B purchases by 2028. For service firms, the question isn’t whether AI-driven discovery matters. It’s whether you’re visible when it happens.

We wanted to understand exactly how AI recommends B2B service firms today, so we ran a structured study. What we found surprised us – and it should matter to anyone responsible for growing a B2B service business.

Aren’t AI results just Google results repackaged?

This is the most common pushback we hear, and it’s worth addressing head-on with data rather than opinion.

In traditional search, directories dominate page one for most B2B service queries. Search “best web design agencies” and you’ll see Clutch, DesignRush, and Sortlist occupying the top organic positions. These aggregators have spent years building domain authority and they aren’t going anywhere on Google.

But here’s what’s interesting: when we asked the same types of questions to ChatGPT, Claude, and Gemini, those directories essentially disappeared. AI engines don’t recommend Clutch listings. They produce their own shortlists of individual firms, drawn from a different set of sources entirely.

This isn’t just our observation. Ahrefs found that only 6.82% of ChatGPT results overlap with Google’s top 10 organic results, and 83% of its answers cite URLs that don’t appear in Google at all. WhiteHat SEO’s research corroborates this, reporting that only 12–38% of URLs cited by AI systems also rank in Google’s top 10.

The overlap between AI-recommended firms and search-visible firms varies wildly by category. In recruitment, where strong brands like Hays and Reed have decades of press coverage, there’s meaningful overlap. In web design, the Jaccard similarity – a measure of how much two sets overlap – sits between 0.05 and 0.10. That’s near zero.
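Jaccard similarity is simple to compute yourself. Here’s a minimal Python sketch – the firm names are placeholders for illustration, not our study data:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: size of the intersection divided by size of the union."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical shortlists: what an AI engine recommends vs. what ranks in search
ai_picks = {"Firm A", "Firm B", "Firm C", "Firm D", "Firm E"}
search_picks = {"Firm E", "Firm F", "Firm G", "Firm H", "Firm I"}

# One shared firm out of nine distinct firms -> roughly 0.11
print(jaccard(ai_picks, search_picks))
```

A score of 1.0 would mean identical shortlists; anything under 0.10, as we saw in web design, means the two channels are recommending almost entirely different firms.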

So no, AI results are not Google results repackaged. They’re a parallel channel with its own logic, its own biases, and its own blind spots.

Continue reading: How ChatGPT Decides Which Businesses to Recommend

How we studied this

| Field | Details |
| --- | --- |
| Engines used | ChatGPT 5.3 · Gemini 2.5 Pro · Claude Sonnet 4.6 · Google Search |
| Categories researched | Recruitment · Web design and development · PPC and performance marketing · IT and managed services (MSP) · Accounting and bookkeeping |
| Research date | March 2026 |
| Example prompts | “best PPC agency for ecommerce” · “recommend an IT support company for a 50-person business” |
We ran a structured internal study across five B2B service categories: recruitment, web design and development, PPC and performance marketing, IT and managed services (MSP), and accounting and bookkeeping.

We focused on UK-based service categories specifically to ensure geographic consistency in the results – controlling for location lets you isolate the AI recommendation patterns without regional noise muddying the data. The findings themselves, though, translate broadly to B2B services regardless of geography.

We crafted 25 customer-intent prompts – the kind of questions a real buyer would type when looking for a provider. Things like “best PPC agency for ecommerce” or “recommend an IT support company for a 50-person business.” We deliberately mixed broad queries with specific ones to capture different recommendation behaviours.

We ran every prompt through three AI engines: ChatGPT (5.3), Gemini (2.5 Pro), and Claude (Sonnet 4.6). All tests used a UK IP with web search enabled, which means the engines had access to live results rather than relying solely on training data.

We then aggregated which brands appeared, how often, and across which engines.
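The aggregation step can be sketched with a simple tally. The responses below are hypothetical placeholder data, purely to show the shape of the counting logic:

```python
from collections import Counter, defaultdict

# Illustrative only: engine -> firms named in each prompt's response (not our real data)
responses = {
    "chatgpt": [["Hays", "Reed"], ["Hays", "Michael Page"]],
    "gemini":  [["Hays"], ["Reed", "Hays"]],
    "claude":  [["Michael Page"], ["Hays"]],
}

mentions = Counter()             # total mentions per firm, all engines combined
engines_seen = defaultdict(set)  # which engines surfaced each firm at least once

for engine, prompt_results in responses.items():
    for firms in prompt_results:
        for firm in firms:
            mentions[firm] += 1
            engines_seen[firm].add(engine)

for firm, count in mentions.most_common():
    print(f"{firm}: {count} mentions across {len(engines_seen[firm])} engine(s)")
```

Tracking both total mentions and cross-engine spread matters: a firm cited five times by one engine is in a weaker position than a firm cited five times across all three.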

Continue reading: What is the editorial tax (and what does it mean for GEO)?

We compared those results against organic search visibility for the same query types. The goal was simple: understand where AI and search agree, where they diverge, and what’s driving the difference.

What we found: five key learnings

The data told a clear story, but it wasn’t a simple one. AI visibility for B2B services behaves very differently depending on the category, the engines, and the competitive landscape. Here’s what stood out.

| Category | Top AI-cited firms (mention count) | AI/search overlap | GEO opportunity |
| --- | --- | --- | --- |
| Recruitment | Hays (29), Reed (17), Michael Page (16) | High | 1/5 — locked |
| Web design/dev | CreativeWeb (9), ClarityDX (8), Soap Media (6) | Near zero | 5/5 — wide open |
| PPC/performance | Loud Mouth Media (11), Lever Digital (9), ROAST (8) | Low | 3/5 — fragmented |
| IT/MSP | Accenture (16), Littlefish (10), Capita (6) | Low | 4/5 — open for SMB specialists |
| Accounting/bookkeeping | Crunch (15), The Accountancy Partnership (9), Mazuma (6) | Medium | 2/5 — Crunch dominant, mid-tier open |

1. AI and search are measuring different things

This was the single most important finding. Across all five categories, AI and search consistently surfaced different sets of firms – with one notable exception.

Recruitment showed the highest overlap. Hays led AI mentions with roughly 29 across engines, followed by Reed (17) and Michael Page (16). These are the same names you’d expect to see on Google’s first page, which makes sense: they have decades of press coverage, strong brand signals, and extensive editorial footprints.

Web design told the opposite story. CreativeWeb led AI mentions with 9, followed by ClarityDX with 8 – and neither firm has any meaningful organic search presence. They appear to have been surfaced almost entirely from training data and editorial references that don’t translate into traditional SEO rankings. Meanwhile, the directories that dominate Google for web design queries produced zero direct AI citations.

2. Editorial presence drives AI citation, but the threshold varies

AI recommendations correlate strongly with repeated editorial mentions – listicles, reviews, roundup articles, and feature coverage. Ahrefs’ research puts a number on this: branded web mentions correlate 0.66–0.71 with AI visibility, and brands in the top 25% for web mentions earn over 10× more AI Overview mentions than those below.

Semrush’s analysis adds further context to this picture. AI models include brand mentions in 26% to 39% of responses across five LLMs, with brands featured on reputable sites surfacing more consistently. Lesser-known brands tend to get hedged language – “might be worth considering” – or get grouped generically as “and several smaller competitors.”

The editorial ecosystem around B2B services is thinner than it is for other sectors, e.g. software products. There simply aren’t as many published listicles comparing recruitment agencies or PPC specialists as there are comparing, say, CRM platforms. This means each editorial mention carries more weight in fragmented categories.

In web design, which scored 5 out of 5 on our GEO Opportunity Index (meaning it’s wide open), a handful of well-placed articles can shift your AI visibility noticeably. In recruitment, where the category scored 1 out of 5, the established players have such deep editorial footprints that breaking in requires substantially more effort.

For firms in categories with a high opportunity index, this is genuinely encouraging. You don’t need hundreds of placements. You need the right ones.

3. Enterprise bias is still strong

This finding was one of the more frustrating ones. When we asked AI engines to recommend MSPs or IT support providers for small businesses, Accenture appeared roughly 16 times across our prompts. Capita and IBM surfaced repeatedly too.

These are enterprise consultancies. They don’t serve 30-person companies. But Claude and Gemini in particular seem to default to large, well-known brands regardless of the intent expressed in the query.

This aligns with broader research. Ahrefs describes AI as acting like “a sort of consensus engine, recommending brands that most people already know,” noting that AI Mode rewards established brands more than emerging ones. Semrush’s findings are blunter: the current approach “creates a bias toward established brands with larger marketing budgets and stronger SEO.”

ChatGPT handled intent somewhat better, producing more relevant results for SMB-focused queries. But the pattern was consistent enough to be structural rather than accidental.

The silver lining here is that this creates a genuine opening. If you’re a specialist IT provider serving mid-market businesses – Littlefish (10 mentions) and Air IT (5) both showed up in our data – you’re competing against enterprise names that aren’t actually relevant to the buyer’s needs. A clearly positioned SMB-focused firm that builds the right editorial signals can exploit this mismatch.

4. Engine divergence tracks category fragmentation

Not all AI engines agree on who to recommend, and the level of disagreement maps closely to how fragmented the category is.

In recruitment, all three engines broadly converge on the same names. Hays, Reed, and Michael Page appear consistently across ChatGPT, Claude, and Gemini. There’s an established consensus – built over decades of brand investment and press coverage – that the engines reflect.

In web design, there’s almost no convergence. Each engine produces a different shortlist, with minimal overlap between them. PPC and performance marketing is barely better, with Loud Mouth Media (11 mentions) and Lever Digital (9) showing some cross-engine presence but no dominant consensus.

This divergence isn’t unique to our study. Ahrefs found that 86% of top-mentioned sources are not shared across ChatGPT, Perplexity, and Google AI features. The engines are drawing from different wells.

What this means in practice: in fragmented categories, which engine a potential buyer happens to use will significantly affect which firms they see. There’s no single leaderboard to optimise for. You need presence across multiple engines, which requires breadth in your editorial footprint rather than depth in any single source.

5. Dual-channel presence is rare and defensible

The most interesting position we found was what we’re calling dual-channel validation – firms that appear strongly in both AI recommendations and organic search results. It’s rare, and that rarity makes it valuable.

Crunch, the online accounting service, is the standout example. With 15 AI mentions across engines and direct page-one organic presence for relevant queries, it’s the only brand in any of our five categories to achieve a dominant position in both channels simultaneously.

In PPC, Lever Digital and Gripped showed up in both AI and search, though with less dominance. In web design, Soap Media and Hallam appeared in both channels but without the commanding position Crunch holds in accounting.

For most firms, this dual presence happens accidentally – a byproduct of doing many things well over a long period. But it can be pursued deliberately, and firms that achieve it create a compounding advantage that’s difficult for competitors to replicate quickly.

Five steps B2B service firms should take

The research points clearly toward a set of practical actions. These aren’t theoretical – they’re based on what actually differentiates firms that appear in AI results from those that don’t.

| Step | Focus | Key insight |
| --- | --- | --- |
| 1. Audit AI visibility | Run 10–15 buyer-intent prompts across ChatGPT, Claude, Gemini | Most firms are surprised by what they find — and don’t find |
| 2. Fix your positioning | Every page must clearly state what you do, who you serve, and what category you’re in | Vague language means AI can’t classify — and won’t recommend — you |
| 3. Earn editorial coverage | Target trade press, awards, third-party case studies | Clutch and DesignRush help search rankings, not AI visibility |
| 4. Cover your category on-site | Publish use cases, comparisons, methodology content | AI assesses topical depth across your whole site, not just individual pages |
| 5. Match how buyers ask | Audit your language quarterly against real buyer prompts | A language mismatch makes you invisible to both AI and search |

Step 1: Audit your current AI visibility

Before you change anything, you need to know where you stand. Run 10 to 15 customer-intent prompts across ChatGPT, Claude, and Gemini. Use the kinds of questions your actual buyers would ask – not generic category terms, but specific queries like “best recruitment agency for tech roles” or “recommend a PPC agency for B2B lead generation.”

Map which competitors appear and how often. Note what sources the AI cites when it provides references. This gives you a baseline and, just as importantly, shows you which editorial sources are influencing recommendations in your category.

Most firms that do this exercise for the first time are surprised by who shows up – and who doesn’t.

Step 2: Fix your positioning

B2B service firms are especially prone to vague positioning. Visit any agency website and you’ll find phrases like “strategic partner,” “end-to-end solution,” or “we help businesses grow.” This language tells AI nothing useful about what you actually do or who you serve.

Every page on your site should clearly state what service you provide, which industries or company sizes you serve, and what category you belong to. AI engines need to classify you before they can recommend you, and ambiguous positioning makes classification harder.

This is the single lowest-cost change most service firms can make, and it has compounding effects. Clear entity signals help AI understand you, but they also improve your relevance for specific search queries.

Step 3: Earn editorial coverage in the sources AI reads

This is the critical insight from our data: the sources that drive search visibility and the sources that drive AI visibility are largely different in B2B services.

AI engines draw from mainstream media, niche publications, award announcements, published case studies on third-party sites, and editorial roundups.

Target coverage in the publications your buyers and your industry peers actually read. Contribute expert commentary to trade features. Enter relevant awards – not for the badge on your website, but for the editorial coverage that accompanies shortlists and winners. Get your case studies published on partner or client sites, not just your own.

Step 4: Cover your category comprehensively on-site

AI engines don’t just look at what others say about you. They also assess how thoroughly your own site covers your category. Firms that publish use cases by industry, named comparisons with alternatives, methodology content, and results-led case studies tend to surface more consistently.

Think of your site as a category resource, not just a brochure. If someone asks an AI engine about your service category, your site should contain enough relevant, specific content that the engine can draw from it directly.

This means covering the questions buyers actually ask – about your approach, your results, your specialisms, and how you compare to alternatives – with enough depth to be genuinely useful.

Step 5: Align your positioning with how buyers actually ask

There’s a gap between how service firms describe themselves and how buyers search for them. “Holistic technology partner” doesn’t match any query a real buyer would type. “IT support for growing businesses” does.

Audit your language quarterly. Look at the prompts people use when asking AI for recommendations in your category – you can find these by running your own tests – and make sure your positioning mirrors that language naturally.

If every IT support query mentions “growing businesses” or “mid-sized companies” and your site talks about “enterprise digital transformation,” you’re creating a mismatch that both AI and search will struggle to bridge.

What smaller or newer firms should do differently

Everything above applies at scale, but if you’re a smaller firm without an established brand, the playbook needs adjusting.

The most important thing you can do is find a category niche with a high GEO Opportunity Index – in other words, a space where AI recommendations are fragmented and no dominant players have locked things down. In our data, web design (5/5) and IT/MSP (4/5) are wide open. Recruitment (1/5) is not the place to start if you’re a new entrant trying to build AI visibility from scratch.

In fragmented categories, the editorial tax – the amount of coverage you need to earn before AI starts noticing you – is substantially lower. A few well-placed features in relevant trade publications can be enough to start appearing in recommendations, particularly for specific prompts.

Entity clarity becomes even more critical when you lack brand authority. Established firms can sometimes get away with vague positioning because their brand signals are strong enough for AI to classify them anyway. Newer firms can’t. Every page, every bio, every case study needs to telegraph exactly what you do and for whom.

Specificity beats breadth at this stage. Being the clearly positioned “PPC agency for B2B companies” will serve you better than being a “full-service digital marketing agency.” AI engines are trying to match recommendations to specific queries, and the more precisely you define your niche, the easier you make that matching process.

The opportunity here is real. Our data shows that firms like CreativeWeb and ClarityDX – brands with no meaningful search presence – are already appearing in AI results purely on the basis of editorial signals and training data.

Continue reading: 7 Steps to Get a New Brand Into AI Search Results From Scratch

If they can get there without apparently trying, imagine what’s possible with a deliberate strategy.

The verdict

AI visibility for B2B services is still in its early stages. The patterns we’ve documented here will evolve as AI engines improve their intent handling and as more firms begin competing for this channel. But right now, most service firms aren’t even aware it exists as a channel – which means the window for early movers is genuinely open.

The commercial stakes are becoming clearer too. Exposure Ninja reports that AI search traffic converts at 14.2% compared to Google’s 2.8% – a fivefold difference. And Digital Commerce 360 found that AI-driven traffic grew 4,700% year over year. This isn’t a niche channel anymore. It’s a fast-growing one with higher-intent visitors.

The firms that act on this now won’t just gain a temporary edge. They’ll build the editorial footprint and entity signals that compound over time, making it progressively harder for latecomers to displace them. In a market where most of your competitors haven’t even run their first AI audit, that’s an advantage worth pursuing.
