Stop Optimizing for Google. Start Engineering for the Answer Layer.
Being cited is the new ranking.
A seller in Scarsdale, NY opens ChatGPT and asks, “Who are the top three agents to interview to sell my home?” AI returns three names. If yours is not one of them, you do not exist for that seller. That is not a hypothetical. It is happening now.
The crawl economy flipped
Three shifts tell the story.
The crawl has flipped. Alli AI’s January to March 2026 analysis of 24 million HTTP requests across 78,000 pages found that ChatGPT’s crawler alone made 3.6x as many requests as Googlebot. Add OpenAI’s separate training crawler, GPTBot, and the combined OpenAI footprint is 3.8x Google’s (Search Engine Journal, sponsored by Alli AI).
The clicks are collapsing. Ahrefs re-ran its AI Overviews study using December 2025 data and found the presence of an AI Overview correlates with a 58% lower click-through rate for the top-ranked page (Ahrefs). Semrush clickstream data shows 92 to 94% of Google AI Mode sessions end without a click to any external site (Semrush). Similarweb pegs the zero-click rate on AI Overview searches at roughly 83% (Clickvision zero-click analysis). Semrush’s own caveat is worth acknowledging: its same-keyword study argues AI Overviews do not cleanly cause zero-click behavior (Semrush AI Overviews study). But the direction of travel is not in dispute.
The demand has moved. McKinsey’s October 2025 research found that half of consumers now intentionally choose AI-powered search, a majority call it their top source for buying decisions, and adoption spans generations, including a majority of baby boomers. Roughly 50% of Google searches already carry AI summaries, projected to exceed 75% by 2028. McKinsey estimates unprepared brands face 20 to 50% declines in traditional search traffic (McKinsey). Gartner forecast a 25% drop in traditional search volume by 2026 back in early 2024 (Gartner). That forecast is arriving on schedule.
Cloudflare’s data explains the economics behind the shift. Googlebot crawls roughly 14 pages per referred visitor. Perplexity sits near 195. GPTBot is over 1,000. ClaudeBot crawls nearly 24,000 pages for every single referral (Cloudflare, Seomator analysis of Cloudflare Radar). Some engines cite sources and drive clicks. Others absorb the answer and keep the user. Either way, your visibility depends on one thing: whether you get named in the answer.
The small slice of AI-referred traffic that does land on a website converts dramatically better than traditional sources. Knotch reports LLM-referred visitors convert at roughly twice the rate of other traffic, in a third as many sessions, per data shared in Conductor’s 2026 benchmarks (Conductor 2026 AEO/GEO Benchmarks). Volume is small today. Quality is not.
Real estate discovery queries are exactly where this lands first. “Best agent in Greenwich.” “Is Darien a good place to buy in 2026?” “How competitive is the buyer market in the Berkshires right now?” These are research-stage questions. They have already moved.
A counterpoint is worth surfacing here. Conductor’s 2026 benchmarks (first-party data from a vendor that sells AEO platform services, so worth treating with appropriate skepticism) show real estate has the lowest AI Overview coverage of any industry they track, around 20% on average (Conductor AIO Volatility Analysis). The reason is structural. “Best agent in Greenwich” or “three-bedroom homes in Darien under $1.5M” do not generate generic AI Overviews. They pull users straight into ChatGPT and Perplexity instead. The discovery layer is migrating regardless. It is just migrating past Google’s overlay and into the answer engines themselves.
Being cited is the new ranking
The old contract was straightforward. Let the crawlers in. Rank the page. Receive the click. That contract is not dead, but it has been demoted. Perplexity, Google AI Overviews, and ChatGPT Search still drive traffic. They drive it to the source they cite, not the page they rank. Being cited is the new ranking.
Call it AEO, GEO, or answer-layer engineering. The label does not matter. The work does. Engineer your entity presence so that when AI is asked who the best agent in a market is, it names you. That is the whole game.
No vendor solves this for you
My inbox is full of “AI visibility” pitches. Dashboards that monitor where your brand shows up in ChatGPT. “GEO services” that track citations. Alerts that notify you when Perplexity mentions you. They measure what they cannot produce. They show you the scoreboard. They do not play the game.
The work lives on your side of the wall. And here is the sleeper stat from McKinsey: brand websites supply only 5 to 10% of the content AI engines reference in generated answers. The rest comes from reviews, press, directories, portals, MLS feeds, and third-party profiles. You can own your website completely and still be a minority voice in your own AI answer. No vendor rewrites your entity graph for you at scale. No subscription service cleans your NAP data. No dashboard writes your neighborhood guides. The only path is work.
The DIY stack
None of this requires code. All of it requires work.
Start with table stakes. NAP consistency (Name, Address, Phone) across every directory, portal, and profile. AI engines reconcile entity data across sources, and every inconsistency is a reason to cite someone else instead of you. Google Business Profile, fully claimed and maintained, with current photos, monthly posts, and a real review system. Your brokerage profile, your personal website, and the major portals: Zillow, Realtor.com, Redfin, Homes.com. A meaningful share of AI answers about local professionals still traces back to GBP and portal data.
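None of it needs code, but the NAP audit is scriptable if you want it to be. Here is a minimal sketch; every profile value below is hypothetical, and in practice you would paste in whatever each surface actually displays:

```python
import re

# Hypothetical snapshots of what each public surface currently shows.
# In practice, copy these by hand from the live profiles.
profiles = {
    "Google Business Profile": {"name": "Jane Doe Realty",
                                "address": "12 Main St, Scarsdale, NY 10583",
                                "phone": "(914) 555-0142"},
    "Zillow":                  {"name": "Jane Doe Realty LLC",
                                "address": "12 Main Street, Scarsdale, NY 10583",
                                "phone": "914-555-0142"},
    "Realtor.com":             {"name": "Jane Doe Realty",
                                "address": "12 Main St, Scarsdale, NY 10583",
                                "phone": "(914) 555-0142"},
}

def normalize(field: str, value: str) -> str:
    """Reduce a NAP field to a comparable canonical form."""
    if field == "phone":
        return re.sub(r"\D", "", value)            # digits only
    value = re.sub(r"[.,]", "", value.lower().strip())
    value = re.sub(r"\bstreet\b", "st", value)     # common address variant
    return re.sub(r"\s+", " ", value)

# Flag any field where the surfaces disagree after normalization.
for field in ("name", "address", "phone"):
    seen = {src: normalize(field, data[field]) for src, data in profiles.items()}
    if len(set(seen.values())) > 1:
        print(f"MISMATCH in {field}:")
        for src in seen:
            print(f"  {src}: {profiles[src][field]!r}")
```

The script is beside the point. The point is that AI engines run a version of this reconciliation across every surface you appear on, and “Realty” versus “Realty LLC” is enough to split you into two weaker entities.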
Then the sustained lever: content. Hyperlocal. Data-dense. Recent. Neighborhood guides with actual numbers, not platitudes. Monthly market reports. Buyer and seller question libraries with real answers, not filler. Answer-first structure on every page, where the first sentence answers the primary question directly and each section stands alone so AI can extract it cleanly. Most agents skip this layer. It is the one that compounds. Conductor’s data backs it up: blog content was the most cited page type in real estate AI Overviews by a wide margin, with over 15,000 pages cited versus roughly 8,000 for the next category. Multiply your surface area: YouTube with proper titles, descriptions, and transcripts. LinkedIn articles. Medium or Substack. Long-form, structured, hyperlocal content is what AI reaches for first.
Then the infrastructure. JSON-LD schema (RealEstateAgent, Person, FAQPage, Review, AggregateRating) applied consistently across bios, listings, and about pages. Entity consistency across every public surface. Review velocity as an ongoing system, not a once-a-year campaign. llms.txt is worth implementing as a low-cost hedge, but be honest: it is a proposed convention, and adoption by OpenAI, Anthropic, and Google is still unconfirmed.
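The schema is the one piece most worth templating. A minimal sketch of the JSON-LD, emitted from Python so it can be stamped onto every page; every name, URL, number, and answer below is a placeholder, not real data. The output belongs inside a `<script type="application/ld+json">` tag:

```python
import json

# Placeholder entity data; illustrative only, not a real agent.
agent = {
    "@context": "https://schema.org",
    "@type": "RealEstateAgent",
    "name": "Jane Doe Realty",
    "url": "https://www.example.com",
    "telephone": "+1-914-555-0142",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "12 Main St",
        "addressLocality": "Scarsdale",
        "addressRegion": "NY",
        "postalCode": "10583",
    },
    "areaServed": ["Scarsdale", "Greenwich", "Darien"],
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.9",
        "reviewCount": "87",
    },
}

# FAQPage markup pairs naturally with the buyer and seller question library.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Is Darien a good place to buy in 2026?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Answer-first: lead with the direct answer, then the numbers.",
        },
    }],
}

# Each block ships in its own <script type="application/ld+json"> tag.
for block in (agent, faq):
    print(json.dumps(block, indent=2))
```

Write it once, validate it in Google’s Rich Results Test, then apply it across bios, listings, and about pages.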
Then the third-party surface area. The credibility layer AI weighs heavily. Google Reviews on a velocity system. RateMyAgent. FastExpert. HomeLight. RealTrends rankings if you have them. U.S. News Real Estate. Press mentions in local and trade outlets. Podcast appearances with searchable transcripts. MLS data hygiene. This is where most of the answer actually comes from.
And the layer almost everyone ignores: community signals. AI engines lean heavily on Reddit, Facebook Groups, Nextdoor, and local community forums for local-market queries. You cannot manufacture authentic presence in those communities. You can show up consistently, contribute real value, and earn the references over time. It is slow work. It is also where AI looks when it wants a real human voice instead of a brand pitch.
A markdown file, an AI copilot, and a few weekends. I have been doing this work myself. If I can, an agent can. If an agent can, a brokerage can. This is the Builder, Not Coder moment for real estate discovery.
The industry problem
This is not an agent problem. It is not a brokerage problem. It is both.
Agents waiting for the firm to fix visibility will stay invisible, because the firm cannot rewrite every agent’s bio or claim every agent’s directory profile. Firms treating this as agent marketing will leave the brokerage’s own entity data broken, and the answer layer will describe them using whatever Zillow, Redfin, and Reddit say.
Two layers, one system. The brokerage owns the platform and the entity foundation. The agent owns hyperlocal content and review velocity. Neither layer wins alone. And neither gets rescued by a tool.
The Conductor 2026 benchmarks tell you exactly where residential brokerage stands today. Top five domains AI cites for real estate queries: Hines, Public Storage, CBRE, ExtraSpace, Colliers. Top five brand mentions: same list plus Zillow. Not one residential brokerage. Not one agent. The entire residential industry is currently absent from the AI citation layer for its own queries. ChatGPT drives 95.2% of AI referrals in real estate, the highest concentration of any industry Conductor tracks. The opportunity is wide open. Whoever shows up first owns the surface.
Start this weekend
The seller in Westchester will open ChatGPT again next month. So will the buyer in Greenwich, the family in Darien, the empty nester in the Berkshires. The question they ask will be answered with or without you. Being named in that answer is not a purchase. It is work.
No vendor does any of this for you. Pick one page. Rewrite it answer-first. Claim one directory you have been ignoring. Ship one neighborhood guide with actual numbers. Write the schema template once and apply it across your site.
You are not optimizing for Google alone anymore. You are engineering for the layer above it.
When that seller asks the question next month, your name is either in the answer or it is not. Nobody else decides that for you.