AI Overview Ranking Signals vs. Google Search: One Search Box. Two Ranking Systems.
Jason Bland | April 23, 2026
Google is running two ranking systems behind one search box, and most law firms are only optimizing for one of them. For the past eighteen months, the ten blue links sitting below Google’s AI Overview panel have followed one set of signals, and the AI Overview citations sitting above those links have followed an increasingly different set.
In July 2025, an Ahrefs study of 1.9 million AI Overview citations found that 76 percent of cited pages also ranked in Google’s top ten for the same query. By February 2026, an updated Ahrefs analysis of 863,000 keywords put that number at 38 percent, and a parallel BrightEdge dataset put it closer to 17 percent. For law firms whose intake funnel depends on being found the moment a potential client types “do I need a lawyer after a DUI in Dallas” or “what is the statute of limitations for medical malpractice in California,” that gap directly affects client acquisition.
Two Retrieval Systems Behind One Search Box
When a potential client searches for legal information or a lawyer on Google today, two separate ranking processes run in parallel. The traditional organic results are produced by Google’s long-standing ranking pipeline, which evaluates each candidate URL against the query using signals developed over two decades: relevance, authority via PageRank and newer link-based systems, content quality, page experience, user behavior signals, topical expertise, and as we learned from our CLM Sequoia research, page URLs play a significant part in Google search rankings.
The AI Overview panel that usually sits above those organic results is produced by a different system entirely. It is powered by a custom version of Gemini, with Google having rolled out Gemini 3 as the global default on January 27, 2026. That model runs a separate retrieval process called query fan-out that operates independently of the organic top ten, generates its own synthesized answer, and then selects a small number of URLs to cite alongside the response. The candidate pool for those citations is meaningfully wider than the top-ten SERP for the original query, and the selection logic weighs signals that traditional ranking has historically treated as secondary.
Did you know that AI overviews are not always on top?

In the example above, you can see that AI Overviews are not always on top. When searching for a dog bite lawyer in Chicago, Custom Legal Marketing’s client, Briskman Briskman & Greenberg, holds a top organic position that appears above the AI Overview. The local map listing sits several positions below the AI Overview, which means that for this keyword, the traditional organic listing is the most prominent result on the page.
Is Good SEO Really Good GEO/AEO?
Google’s public position, stated repeatedly by Danny Sullivan and Gary Illyes throughout 2025, is that “good SEO is good GEO” and that no separate optimization playbook is required for AI Overviews. That guidance is narrowly true: your content has to be crawled, indexed, and eligible to rank before it can be cited in an AI Overview. But real-world research shows that a separate AI optimization strategy is necessary, because the actual citation data makes clear that eligibility is only the floor. Once your content is in the candidate pool, a different set of signals determines whether it gets pulled into the answer. So, good law firm SEO alone does not mean you’re also prepared for generative engine and answer engine optimization.
What Traditional Google Search Actually Ranks
When Google ranks a page in traditional organic results for a certain search query, it is evaluating that page as a whole against the query as a whole. The dominant signal families include:
Relevance signals, including topical coverage, keyword presence in title tags and headings, semantic similarity between the page and the query, and passage-level matches surfaced by systems like BERT and MUM.
Authority signals, including PageRank and its descendants, the quality and relevance of linking domains, anchor text relevance, and increasingly brand mentions across third-party sources.
Content quality signals, shaped by Google’s Helpful Content guidance and amplified for Your Money or Your Life topics like legal services.
User behavior signals, including click patterns tracked by Google’s internal Navboost system as confirmed in the 2024 API leak.
Page experience signals, including mobile-friendliness and Core Web Vitals; however, our research on PageSpeed and law firm SEO proved that a PageSpeed score alone does not affect your rankings.
Crawlability and indexing signals, including site architecture, internal linking, and schema markup.
The important thing about this ranking stack, for our purposes, is that it evaluates pages as atomic units competing for positions on a single results page. When a potential client searches “personal injury lawyer San Diego,” Google is comparing your practice area page against your competitors’ practice area pages and deciding which deserves position one, two, and so on. The output is a stable ordered list. That list might update with algorithm changes, but it does not reshuffle between two consecutive searches of the same query by the same user.
How AI Overviews Actually Choose Which Law Firm or Website To Cite
The AI Overview Retrieval Architecture
How Gemini converts a single user query into a set of parallel retrievals, synthesizes a draft answer, and attaches citations to the specific passages that supported specific claims.
The Google AI Overview citation process works differently at almost every step. Instead of matching one query to one ranked list, Gemini breaks the original query into a set of sub-queries, retrieves content for each sub-query independently, and then selects citations based on which sources best supported the synthesized answer at the passage level.
This is the query fan-out architecture that Google publicly introduced at I/O 2025. A single legal query like “what happens after a first DUI arrest in Texas” does not produce a single retrieval pass. It produces eight to twelve parallel sub-queries covering different facets of the user’s likely intent. Those sub-queries might include “first DUI penalties Texas,” “Texas DUI court process timeline,” “can you refuse breathalyzer Texas,” “first offense DUI jail time Texas,” “DUI attorney cost Texas,” and so on. Each sub-query retrieves its own candidate set from Google’s index. The Gemini model then reads across all retrieved passages, drafts an answer, and attaches citations to the specific sources that supported specific claims.
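The fan-out mechanism described above can be sketched as a toy retrieval loop. Everything here is illustrative: the facet list, the `retrieve()` stub, and the URL shapes are invented stand-ins, since Google’s actual sub-query generation and index are not public.

```python
# Toy sketch of query fan-out retrieval. The facet list, the retrieve()
# stub, and the URL shapes are invented stand-ins; Google's real
# sub-query generation and index are not public.

def fan_out(query: str) -> list[str]:
    """Expand one query into facet sub-queries (hardcoded for illustration)."""
    state = query.split(" in ")[-1]  # crude jurisdiction extraction for the demo
    facets = ["penalties", "court process timeline", "breathalyzer refusal",
              "jail time", "attorney cost"]
    return [f"{state} DUI {facet}" for facet in facets]

def retrieve(sub_query: str, k: int = 10) -> list[str]:
    """Stand-in for an index lookup returning the top-k URLs."""
    slug = sub_query.replace(" ", "-").lower()
    return [f"https://example.com/{slug}/{i}" for i in range(k)]

def candidate_pool(query: str) -> set[str]:
    """Union of per-sub-query results: far wider than any single top ten."""
    pool: set[str] = set()
    for sq in fan_out(query):
        pool.update(retrieve(sq))
    return pool

pool = candidate_pool("first DUI arrest in Texas")
```

Even this five-facet toy yields a fifty-URL candidate pool; a real fan-out with eight to twelve sub-queries, each carrying its own full retrieval, widens the pool far beyond the original query’s top ten.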
Two consequences follow from this architecture, and they explain most of the divergence data.
First, the candidate pool is dramatically larger than the top ten for the original query. If each of twelve sub-queries retrieves its own top set of results, the union of candidate URLs can easily exceed one hundred pages across Google’s entire index. A page that does not rank anywhere for “what happens after a first DUI arrest in Texas” can still get cited because it ranks well for the sub-query “Texas implied consent law penalties” and its relevant passage was pulled into the answer.
Second, the unit of evaluation shifts from page to passage. Traditional ranking asks “which page best matches this query?” Query fan-out plus retrieval-augmented generation asks “which passage best supports this specific claim in my synthesized answer?” A page whose main content is strong but whose opening paragraph is promotional framing can lose the citation to a page that is weaker overall but whose passage is a clean, self-contained answer. Gary Illyes’s description of the process at SEOday 2024 was more candid than Google’s public guidance: the AI generates its answer first and then matches that answer to indexed content, linking out to the elements with the highest match. The citation functions as a source attribution attached to generated text.
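That answer-first selection can be illustrated with a toy matcher: draft the claim, then attach the indexed passage that best supports it. The token-overlap scorer below is a crude stand-in for whatever semantic matching Google actually uses, and the URLs and passages are invented.

```python
# Toy illustration of answer-first citation matching: the model drafts
# a claim, then cites the passage that best supports it. Token overlap
# stands in for the real (unknown) semantic matcher.

def overlap_score(claim: str, passage: str) -> float:
    """Fraction of claim tokens that appear in the passage."""
    claim_tokens = set(claim.lower().split())
    passage_tokens = set(passage.lower().split())
    return len(claim_tokens & passage_tokens) / len(claim_tokens)

def pick_citation(claim: str, passages: dict[str, str]) -> str:
    """Return the URL whose passage best supports the drafted claim."""
    return max(passages, key=lambda url: overlap_score(claim, passages[url]))

claim = "a first DUI in Texas carries up to 180 days in jail"
passages = {
    # Strong page overall, but the extractable passage is promotional framing.
    "https://firm-a.example/dui": "Call our award-winning Texas DUI defense team today",
    # Weaker page overall, but the passage is a clean self-contained answer.
    "https://firm-b.example/dui": "A first DUI conviction in Texas carries up to 180 days in county jail",
}
best = pick_citation(claim, passages)  # the self-contained answer wins
```

The promotional passage loses the citation despite belonging to the stronger page, which is exactly the page-versus-passage inversion described above.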
The Ranking Overlap Collapse
The practical result of these architectural differences is visible in the citation overlap data, which has moved in one direction over the past seven months: downward. The same Ahrefs dataset that showed 76 percent overlap in July 2025 showed 38 percent in February 2026, a decline of roughly half in seven months. BrightEdge’s methodology, which draws from a different sample and uses different attribution rules, put the overlap closer to 17 percent in the same window.
Part of the drop reflects improved citation detection on the measurement side, which Ahrefs explicitly acknowledged. But part of it reflects a real shift in Google’s citation behavior. SE Ranking’s analysis of Gemini 3 after its January 2026 rollout found that the new model replaced approximately 42 percent of previously cited domains and generates 32 percent more sources per response than Gemini 2.5. The pool of cited domains is broader, the selection logic is more aggressive about pulling from sub-query results rather than original-query results, and non-traditional sources like YouTube have grown into a structurally significant share of citations.
The AI Overview Overlap Collapse
Share of AI Overview citations that also appeared in Google’s top 10 organic results for the same query, measured over seven months across two independent datasets.
Signal by Signal: Where the Two Systems Actually Diverge
The high-level numbers tell you that divergence exists. They do not tell you which specific ranking signals have changed weight, which is the question a marketing director or managing partner actually needs to answer. Below is a signal-by-signal breakdown.
Unit of Evaluation: Page vs. Passage
Traditional Google Search ranks pages. AI Overviews cite pages but select them based on passages. Semrush and Ahrefs research converges on an optimal passage length of roughly 134 to 167 words for AI extraction, with 62 percent of cited content falling between 100 and 300 words per passage.
That passage needs to stand alone, meaning it needs to answer the sub-query on its own without requiring the reader to scroll or click for context. A law firm page that delivers a thorough answer across five paragraphs under an H2, each paragraph building on the last, will often lose the citation to a page whose first paragraph under the same H2 delivers a compressed forty to sixty word self-contained answer.
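A content team can sanity-check those word-count bands with a few lines of script. The thresholds below come from the studies cited above, not from any published Google specification, and `audit_passage` is our own hypothetical helper.

```python
# Rough word-count audit for an answer block, using the bands reported
# in the studies cited above (40-60 words for the lead answer, roughly
# 100-300 for the full extractable passage). These are observed
# correlations, not a published Google threshold.

def audit_passage(lead_paragraph: str, full_passage: str) -> dict[str, bool]:
    lead_words = len(lead_paragraph.split())
    passage_words = len(full_passage.split())
    return {
        "lead_is_direct_answer": 40 <= lead_words <= 60,
        "passage_in_citation_band": 100 <= passage_words <= 300,
    }

# Constructed stand-ins with known word counts, for the demo.
lead = " ".join(["answer"] * 48)
passage = " ".join(["answer"] * 150)
report = audit_passage(lead, passage)
```

Running a check like this across every H2 section of existing practice area pages is a quick way to find answer blocks that bury the lead.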
Authority: Domain-Level vs. Topic and Entity-Level
Traditional ranking relies heavily on link-based authority signals that operate at the domain level. AI Overview selection looks at authority differently. Wellows’ analysis of 2,400 AI Overview citations found that pages ranking positions six through ten with strong E-E-A-T signals were cited 2.3 times more frequently than position-one pages with weak authority signals. Domain Authority as a standalone correlation dropped from r=0.23 in 2024 to r=0.18 in the most recent measurement, which is effectively noise. The signals that replaced raw domain authority are topic-level: how clearly your content aligns with a defined knowledge graph entity, how densely it connects to related entities, how often your brand or attorneys are mentioned across third-party sources, and how much topical coverage your site demonstrates across a practice area cluster rather than in a single pillar page.
Third-Party Surfaces: YouTube and Reddit as First-Class Sources
This is where traditional search and AI Overviews diverge most sharply. YouTube is now the single most-cited domain in AI Overviews overall, accounting for roughly 5.6 percent of all citations and growing its citation share by 34 percent over the six months preceding Ahrefs’ February 2026 update. Among AI Overview citations pulled from outside Google’s top one hundred organic results, 18.2 percent were YouTube URLs. Reddit citations have grown even faster across the broader AI search ecosystem following OpenAI’s content partnership, and government sources are cited 11.75 times more often than average. Traditional search rewards your practice area page for ranking on your domain. AI Overviews reward your firm for being present as an entity across a citation-rich ecosystem that includes video transcripts, news coverage, directory mentions, and forum discussions.
Freshness: Evergreen vs. Recent
Traditional search treats legal content as largely evergreen. A well-ranking page on “California statute of limitations for personal injury” can hold position one for years with only light updates. AI Overviews apply a materially stronger freshness bias. Ahrefs found that AI assistants cite content that is 25.7 percent fresher than traditional search results, and Seer Interactive found that 85 percent of AI Overview citations were published within the past two years, with 44 percent from 2025 alone. For law firms, this means that a 2019 practice area page that has held top-three rankings reliably may be quietly excluded from AIO citations simply because the model’s selection logic deprioritizes its publish date.
Volatility: Stable Rankings vs. Probabilistic Citations
Traditional rankings are slightly more stable – although for highly competitive legal searches, CLM Sequoia is regularly seeing daily movements. But generally speaking, your position on page one moves slowly unless an algorithm update hits. AI Overview citations are volatile on the order of hours. Ahrefs’ study of AI Overview consistency found that only 54.5 percent of URLs overlap between two consecutive responses for the same query, and the full set of cited sources can change roughly every two days. The entities named in the overview shift, the passages pulled shift, the citations shift. Your firm has a probability of being cited each time the overview is generated, and that probability is the variable to optimize for.
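The consistency finding is easy to reproduce in spirit: capture the cited URLs from two consecutive renders of the same query and compute their overlap. The Jaccard index below is one reasonable overlap measure; Ahrefs’ exact methodology may differ, and the URLs are invented.

```python
# Sketch of measuring citation overlap between two consecutive AI
# Overview renders of the same query, in the spirit of the consistency
# study cited above. The Jaccard index is one reasonable overlap
# measure; the study's exact methodology may differ.

def citation_overlap(run_a: set[str], run_b: set[str]) -> float:
    """Share of all cited URLs that appear in both runs (Jaccard index)."""
    if not run_a and not run_b:
        return 1.0
    return len(run_a & run_b) / len(run_a | run_b)

run_1 = {"https://firm-a.example", "https://firm-b.example", "https://nolo.example"}
run_2 = {"https://firm-a.example", "https://youtube.example/watch", "https://nolo.example"}
overlap = citation_overlap(run_1, run_2)  # 2 shared of 4 total -> 0.5
```

Tracking this number over daily captures of your priority queries turns the volatility from an anecdote into a measurable baseline.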
Why Legal Content Sits In A Special Category
Two facts about legal search queries amplify everything discussed above. First, AI Overviews trigger on legal queries at rates well above the baseline. SE Ranking’s research found that 77.67 percent of Your Money or Your Life legal queries now trigger an AI Overview, though other methodologies using narrower query sets have reported rates closer to 23 percent. The high end is driven by the types of queries potential clients actually submit: question-format searches (“can I be arrested for DUI the next day”), long-tail intent queries (seven-plus words trigger AIO roughly 46 percent of the time), and reason queries (“why does a DUI arrest require a lawyer”), which trigger AIO nearly 60 percent of the time.
Second, YMYL topics trigger amplified E-E-A-T scrutiny in Google’s own quality guidelines. The Gemini model has been tuned to be cautious about citing legal, medical, and financial content, which makes the authority signals it evaluates heavier for those verticals. A January 2026 Harvard Journal of Law and Technology commentary reviewed fifty U.S. law firm websites and found three structural problems that kept firm content out of AI Overviews on common consumer legal queries.
In other words, the content patterns most law firms have been producing for a decade, focused on reassurance and brand-first framing rather than direct answers and verifiable legal citations, work against AI Overview selection specifically for the queries where intake conversion rates are highest.
Where These Differences Matter Most For Law Firms
The divergence between traditional search and AI Overviews does not hit every keyword equally. It hits hardest on exactly the kinds of queries that drive legal intake. Informational, question-format, long-tail searches are the queries most likely to trigger AI Overviews, and they are also the queries that potential clients use before they are ready to fill out an intake form. A client who has already decided to hire a personal injury lawyer is more likely to search “personal injury lawyer [city]” and go straight to the local pack. A client who is still deciding whether they need a lawyer at all is searching “should I get a lawyer after a minor car accident” or “how long does a personal injury settlement take in [state].”
The second place the divergence hurts is on practice area content that ranks well for head terms but answers them slowly. A well-optimized practice area page for “workers compensation lawyer Phoenix” may hold position three in traditional search and never once get cited in the AI Overview for “how long do I have to file a workers comp claim in Arizona,” because the practice area page buries the answer below brand-first copy while a thinner competitor page opens with “In Arizona, an injured employee has one year from the date of injury to file a workers’ compensation claim under A.R.S. §23-1061.” The ranking win and the citation win require different content shapes.
What To Do About It
Traditional SEO still matters. Google’s own guidance that “good SEO is good GEO” is correct at the floor level: your pages still need to be crawled, indexed, and eligible to rank before they can be cited. The fundamentals of law firm SEO remain necessary, and AI Overview citation becomes the second layer on top of them.
The adaptations that actually move AI Overview citation probability fall into four buckets.
Restructure answer blocks. Every H2 that phrases a question should be followed immediately by a forty to sixty word self-contained answer in the first paragraph, with statute citations, jurisdictions, and specific numbers where relevant. The supporting detail belongs in subsequent paragraphs. This is the opposite of the “brand voice first, answer later” pattern that most firm content has used for a decade. Rewriting existing high-traffic pages to lead with answers is typically higher ROI than publishing new content, because fresh updates also trigger the freshness bias AI Overviews weight heavily.
Expand entity coverage and topical depth. Because the fan-out process pulls from sub-queries you cannot see, your content needs to cover the adjacent territory around your head terms. A DUI defense practice area page that only addresses “what is a DUI” will not get cited for “Texas implied consent law penalties” even though both are part of the same intake journey. A proper cluster structure or content hub covering penalties, process, procedural questions, jurisdictional specifics, and cost will generate citations across a wider surface of sub-queries. This is where a strategy built on answer engine optimization starts to pay off.
Publish attorney-authored content with verifiable credentials. For YMYL legal content, the E-E-A-T signals AI models weight most heavily are author credentials, statute citations, and evidence of practicing legal experience. A byline with a real attorney’s name, proper attorney person schema and a link to the attorney’s verified bio page is a different trust signal than a generic “by the firm” attribution. Pages with explicit authorship, quoted statutes, and case citations get cited at markedly higher rates in YMYL verticals. If your firm’s current blog content is ghostwritten by marketing contractors without attorney review, that is the single highest-leverage change you can make.
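A minimal version of that attorney author markup, sketched as schema.org JSON-LD generated from Python. The name, firm, and URLs are placeholders to be replaced with the real attorney bio and state bar listing.

```python
# Minimal attorney author markup as schema.org JSON-LD (Person with
# jobTitle, worksFor, and sameAs). All names and URLs are placeholders.
import json

author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "Attorney",
    "worksFor": {"@type": "LegalService", "name": "Example Law Firm"},
    "url": "https://example-firm.com/attorneys/jane-doe",
    "sameAs": ["https://www.statebar.example/lawyers/jane-doe"],
}
json_ld = json.dumps(author_schema, indent=2)
```

Embedding the serialized output on the article page inside a `<script type="application/ld+json">` tag, next to a visible byline that links to the same bio URL, is the standard way to expose it to crawlers.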
Build presence on third-party surfaces. YouTube is now the most-cited domain in AI Overviews overall. Reddit citations are surging. Government sources, news media, and authoritative directories are cited at multiples of the average rate. For law firms, this means that attorney explainer videos with clean transcripts, placements in legal directories beyond the usual two, guest articles in local news outlets, and participation in subreddit discussions relevant to your practice all feed the third-party citation ecosystem that AI Overviews now pull from aggressively. A firm with a strong website and no YouTube or media presence is optimizing for the half of the ecosystem that matters less each quarter.
Custom Legal Marketing is Lightyears Ahead on AI Optimization

The screenshot above is what can happen when your law firm marketing company knows how AI Overviews work and how to execute a strategy without sacrificing SEO. A potential client in California searches “toyota lemon law attorneys” on Google. The AI Overview panel that renders above the organic results names three firms in the synthesized answer, and the citation panel on the right surfaces Wirtz Law Lemon Law Attorneys as the top cited source. Directly below, in the traditional organic results, Wirtz Law’s California Toyota Lemon Law Attorney page ranks for the same query. One firm, cited in both systems simultaneously, for the exact query that generates lemon law cases.
Wirtz Law is a Custom Legal Marketing client. The visibility shown above is the downstream result of a systematic editorial process run against our internal Sequoia citability framework. Every new practice area page on a website connected to CLM Sequoia is scored, refined, and maintained against criteria tuned for how Gemini actually selects citations, how ChatGPT reads page content, and how all popular chatbots and answer engines interact with content. Basically, we know how each of these platforms is going to consume the content we’re producing. We know what the end user wants to see on the site. And our system calibrates each page to the desires of all human and non-human consumers.
Custom Legal Marketing runs every page our clients publish through the CLM Sequoia AI Citability Score. Sequoia is our proprietary AI law firm marketing platform and the engine behind the content decisions that produce results like the one above. The Citability Score audits content against five categories shown in this article: answer structure, authority and attribution, topical coverage, technical markup, and freshness. Each category has weighted rules with machine-checkable detection criteria, and the aggregate score feeds directly into our editorial workflow.
The exact parameters are dynamic and change as Sequoia’s research engine monitors and analyzes the sites that are recommended for law-related searches in AI overviews.
Let’s Be Honest About Which Queries Are Eligible for Total Dominance
Not every keyword yields the same outcome with the same effort, and I want to be very clear about that. Highly competitive practice area queries face real headwinds in AI Overview citation. A metro-level head term like “personal injury lawyer Los Angeles” produces a candidate pool that includes every major firm in Southern California, every legal directory, every local news outlet, and typically Wikipedia. Queries that fan out into dozens of sub-queries expand that competitive surface further, because each sub-query carries its own ranking competition behind it.
Our mission in that environment is eligibility. The Sequoia framework is designed to qualify your content for the broadest possible surface of sub-queries, which across a sustained content program translates to a higher percentage of AI Overview citations over time. No agency can honestly promise a citation on every individual query, especially when AI Overview citations can change hourly. A sustained Sequoia-driven program produces compounding gains: the share of winnable citations your firm captures grows every quarter as more pages move into the Strong tier and more sub-query surfaces become eligible.
Our AI Monitor – another exclusive to CLM Sequoia users – is already showing that our system works. We have clients who were enrolled in the beta version of our AI system in October and are now enjoying a 127% increase in monthly leads from AI platforms.
If you don’t have an entire suite of agents and an expert team working for your law firm, you should change that and talk to Custom Legal Marketing today.
Jason Bland
Jason Bland is a Co-Founder of Custom Legal Marketing. He focuses on strategies for law firms in highly competitive markets. He's a contributor on Forbes.com, is a member of the Forbes Agency Council and has been quoted in Inc. Magazine, Business Journals, Above the Law, and many other publications.
The Definitive Guide to Law Firm SEO That Works
Law Firm SEO That Works is where we show attorneys what their competitors wish they knew.