Strategy Not Optional: A Governance Framework for Real Estate AI
Why enterprise licensing is table stakes, not a strategy
Residential real estate operates under a structural condition no AI governance framework was designed for. There are approximately 1.5 million licensed Realtors® across the country, and the vast majority are independent contractors. Nearly all of them work on personal devices that their brokerage does not own, does not configure, and cannot fully control.
And for more than three years, a meaningful share of them have been using consumer AI tools, often with client data, almost always without any governance layer between their keyboard and a model running somewhere in the cloud.
Brokerages that have responded by purchasing enterprise AI have done something useful. But that alone is not enough.
Enterprise AI licensing is table stakes. Governance is the strategy. And in a contractor-based workforce, governance carries an obligation most broker-owners and leaders have not yet internalized. If you do not provide a sanctioned path, your agents will find an unsanctioned one. The absence of a solution exposes agents and their broker to liability and risk.
The six layers of a real governance stack
A serious brokerage AI posture has six layers. Each one depends on the others.
A written AI policy that defines acceptable use, prohibited use, and data handling expectations.
An operating model that assigns ownership, cadence, and enforcement.
Technical controls including SSO, admin access, audit logs, and data loss prevention.
An honest accounting of the BYOD (bring your own device) and independent contractor reality.
A plan for shadow AI, meaning the consumer tools your agents are already using.
Agent enablement through training, communication, and culture.
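To make the coverage question concrete, the six layers can be treated as a simple checklist a brokerage audits itself against. The sketch below is illustrative only; the layer names are shorthand invented here for the stack described above, not part of any real tool.

```python
# Hypothetical sketch: track which of the six governance layers are in place.
# Layer names are shorthand for the stack described above.

GOVERNANCE_LAYERS = [
    "written_policy",            # acceptable use, prohibited use, data handling
    "operating_model",           # ownership, cadence, enforcement
    "technical_controls",        # SSO, admin access, audit logs, DLP
    "byod_contractor_reality",   # honest accounting of BYOD / IC workforce
    "shadow_ai_plan",            # consumer tools agents already use
    "agent_enablement",          # training, communication, culture
]

def missing_layers(in_place):
    """Return the layers a brokerage has not yet covered, in stack order."""
    return [layer for layer in GOVERNANCE_LAYERS if layer not in in_place]

# Example: a firm that has bought an enterprise license and written a policy
# but built nothing else around them.
gaps = missing_layers({"written_policy", "technical_controls"})
print(gaps)
```

Running the example surfaces the four uncovered layers, which mirrors the point below: most brokerages have one or two layers, and the audit makes the remaining gaps explicit.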
Most brokerages have one or two of these. Very few have all six. The layer most operators underweight is not the technical one. It is the structural reality those controls have to function inside.
Pressure point one: the independent contractor reality
Most business-focused generative AI programs assume W-2 employees on managed devices. That is not residential real estate.
A typical residential real estate firm has a smaller population of employees using corporate hardware and a larger population of independent contractors on personal devices. You can require federated SSO for the tools you provision. You cannot require it for a tool an agent installs on their own laptop with a personal credit card. You can offer training. You cannot mandate that an agent close the consumer tab when the sanctioned one is open.
This is not a complaint. It is the operating reality every broker-owner and operator has to design around. The governance stack cannot ignore independent contractor status; it has to be built on top of it.
Two consequences follow. First, enforcement is always partial. Technical controls create a sanctioned perimeter, not a total one. Second, the soft layers, meaning policy, training, and sanctioned tooling, carry more weight in brokerage than they would in a typical enterprise. In many cases they are the only levers that reach the agent at all.
Pressure point two: shadow AI and the consumer tool exposure
Shadow AI is endemic in residential brokerage, though the pattern is not unique to this business. It is not a story about agent negligence. It is a story about being able to control and mandate only so much.
An agent who has a listing to launch, a buyer to assist, or a property description to write has a job to do today. If the brokerage has not provided a company-sanctioned AI tool, the agent will reach for whatever is free, fast, and available. Most do. A free consumer subscription from any of the major model providers is a thirty-second setup and a meaningful productivity lift.
The exposure is real. Contract information pasted into a prompt. A privacy setting left on its default. Model training opt-outs ignored or misunderstood. Prompt histories stored on personal accounts the brokerage cannot audit. Third-party integrations the agent never read the terms on.
None of this requires bad intent. It requires only that the agent needs to get work done and the brokerage has not told them where to do it safely.
Pressure point three: policy as the forcing function, paired with a sanctioned tool
A written AI policy is the most leveraged artifact in the stack, for one important reason: it is enforceable. Technical controls stop at the perimeter. Training only goes so far, even with the most well-intentioned plans. A policy, signed or acknowledged, is the document that defines expectation and accountability across the entire agent population.
But a policy that prohibits without providing is a policy that gets ignored. This is where brokerages most easily fail. They publish a document that says do not use consumer AI with sensitive information, and they stop there. The agent, still facing the listing, the buyer, and the deadline, does exactly what the policy prohibits, because no alternative exists.
The enforceable policy pairs restriction with a sanctioned path. It says, here is what you cannot use. Here is what you can use. Here is how to use it. Here is what we have done at the admin level to keep your work and your client’s information inside an environment we have vetted.
This is where the stack becomes coherent. Policy defines the rule. The sanctioned enterprise tool, deployed with SSO, admin controls, and reviewed data handling terms, makes the rule followable. Training and reinforcement close the loop.
Where this conversation is actually happening
Brokerage networking groups and real estate brands gather regularly to share best practices. When the conversation turns to AI, the governance questions are the same ones every serious firm is working through, regardless of size or geography. The sophistication of the room does not change the difficulty of the problem. It only sharpens the focus on which parts of the stack are still open.
The framework above reflects both operating experience running a multi-state brokerage at scale and the discipline of the University of Michigan’s Chief Data and AI Officer program, where governance is treated as foundational rather than optional. In our own environment, the sanctioned path is an enterprise workspace AI deployment, integrated with SSO, administered centrally, and paired with a written policy and agent training. The specifics of the model matter less than the pattern. The pattern is what travels.
The liability window has been open for years
One framing correction for any operator still treating this as a future problem: ChatGPT crossed into mainstream awareness in November 2022. For more than three years, most agents across brokerage firms have been using consumer AI tools on personal devices, often with sensitive data, almost always without governance.
Brokerages still treating governance as a planning exercise are accumulating exposure by the day. The firms building the stack now are not getting ahead of a future curve; they are closing a gap that has been compounding for years.
The diagnostic
Every brokerage leader should be able to answer the following with a yes.
Do we have a written AI policy that defines acceptable use, prohibited use, and data handling expectations?
Have we deployed a sanctioned enterprise AI tool with SSO, data loss prevention, and admin controls?
Have we reviewed the data handling and model training terms of that tool? Does it align with the brokerage’s confidentiality and compliance needs?
Do our agents know which tools are sanctioned, which are prohibited, and where the line is?
Have we provided AI training resources to agents and employees?
Do we have any visibility into what consumer AI tools our agents are currently using with client data?
If any answer is no, the governance stack is incomplete, and the exposure persists.
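The diagnostic is deliberately binary, and that logic can be sketched in a few lines. The question keys below are shorthand invented for this illustration; a single "no" anywhere means the stack is incomplete.

```python
# Hypothetical sketch: a single "no" on any diagnostic question fails the audit.
# Question keys are shorthand for the six questions above.

DIAGNOSTIC_QUESTIONS = [
    "written_ai_policy",
    "sanctioned_tool_deployed",
    "data_terms_reviewed",
    "agents_know_the_line",
    "training_provided",
    "shadow_ai_visibility",
]

def stack_complete(answers):
    """answers maps each question key to True (yes) or False (no).

    An unanswered question counts as a no."""
    return all(answers.get(q, False) for q in DIAGNOSTIC_QUESTIONS)

answers = {q: True for q in DIAGNOSTIC_QUESTIONS}
answers["shadow_ai_visibility"] = False  # the most common gap in practice
print(stack_complete(answers))  # → False
```

Treating a missing answer as a "no" is the point of the exercise: a question leadership cannot answer is itself a gap in the stack.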
Enterprise AI licensing is where this conversation starts. It is not where it ends.