Foundational UPL question: What constitutes "practicing law"?
This is the core legal architecture question for any AI product, and the answer varies state by state. Every state has some variation of a UPL statute, but the prohibitions cluster around two elements:
Giving legal advice: applying law to specific facts for a specific person
Representing another: in court or before tribunals
All 50 states make it illegal to provide legal services without a license, with penalties ranging from fines to criminal charges. In a few states, UPL is a felony. Activities like drafting legal documents or providing procedural guidance frequently fall into a "gray area," making it difficult to apply UPL statutes consistently - and the advent of generative AI compounds this ambiguity significantly.
The most important distinction in practice is legal information vs. legal advice:
Legal information = explaining what the law says generally (arguably not UPL)
Legal advice = applying law to a person's specific situation and recommending a course of action (UPL if done by an unlicensed entity)
This is the line that essentially every legaltech product must straddle. It is blurry by design, because courts draw it case by case on the specific facts. But it is the primary safe harbor mechanism available today.
US AI Legal Regulation Map
AI & the Law: Who's Drawing the Line?
State-by-state regulatory stance on AI-generated legal documents, unauthorized practice of law (UPL), and safe harbor frameworks. Updated Mar 2026 (ContractKen)
Regulatory Innovators (sandbox / ABS): AZ · UT · AK · WA · IN
Active safe harbors exist. AI tools can deliver legal services within approved structures.
Cautiously Expanding (paraprofessional models): CO · IL · MN · TX · FL · PA · NC
Paraprofessional reforms underway. Lawyer supervision required for AI outputs.
Enforcement-Active (strictest UPL): CA · NY
Strictest UPL posture. Platform liability bills advancing. Highest risk for consumer-facing AI legal tools.
No Specific Guidance (default UPL applies): all remaining states
Traditional UPL rules apply. No AI-specific bar guidance issued. Monitor for change.
The State-by-State Spectrum
Think of states as sitting on a spectrum from open laboratory to enforcement-active.
Arizona is the most permissive jurisdiction in the US right now. In 2020, Arizona did away with restrictions on who may own law firms and share fees with lawyers, and established a license and application process for "alternative business structures" (ABSs). From 19 approved entities in 2022, Arizona has expanded to 136 as of April 2025.
Utah went further initially. Utah created a regulatory sandbox where authorized entities receive waivers of both restrictions on non-lawyer ownership and UPL - entities can both access investment from non-lawyer sources and deliver services using non-lawyers or software. However, Utah has since contracted significantly. Phase 2 of the sandbox is now focused on "Moderate and High Innovation" entities, specifically models where alternative legal providers - nonlawyers and/or software - engage in limited-scope legal practice. The bar is high: entities must demonstrate that sandbox authorization will allow them to reach Utah consumers currently underserved by the legal market. One important live example of what Utah permits: the sandbox explicitly includes "technology-based services such as AI," and at least one authorized entity is described as a company offering AI-enabled contract drafting, negotiation, and management services via a technology platform with lawyer employees.
Alaska is a natural outlier. Alaska's UPL rule creates liability only if one represents oneself to be a lawyer and either represents another in court/tribunal, or, for compensation, gives advice or prepares documents for another affecting legal rights and duties. The dual requirement - holding out and the specific conduct - makes Alaska one of the narrowest UPL statutes, leaving significant space for AI tools.
Washington State launched a new entity-regulation pilot program that opened for applications in late 2025, allowing businesses and nonprofits to deliver certain legal services.
Colorado is actively modernizing. Colorado expanded the authority of its licensed paraprofessionals, giving them a broader role in family law cases. The Colorado AI Act (effective June 30, 2026) also imposes high-risk AI obligations, though notably the Trump executive order specifically cited the Colorado AI Act as an example of overreach and it is a priority target for the federal AI Litigation Task Force.
Illinois is pushing toward a supervised non-lawyer model. Illinois moved toward creating a program that would allow nonlawyers to provide limited legal advice on family law and housing issues under the supervision of a certified attorney.
Minnesota is exploring its own sandbox framework.
Indiana launched its own legal regulatory sandbox program in October 2024.
Tier 3: Enforcement-Active (Highest UPL Risk)
California is the most significant risk state. It has both the strictest UPL enforcement posture and the most active new AI legislation. The California Bar's guidance requires deep technical competence before AI deployment, and the State Bar works with law enforcement to investigate UPL; any unauthorized practice is a crime in California.
New York is moving toward explicit platform liability. New York Senate Bill 7263 would bar "proprietors" of AI-powered chatbots from providing "substantive" responses that, if provided by a human, would constitute unauthorized practice of a licensed profession, and would create a private right of action for actual damages, with attorneys' fees available for willful violations. If passed, this would be the most consequential piece of legaltech-specific legislation in the country.
Illinois is being watched for how aggressively its UPL statute will now be interpreted in the wake of recent litigation.
The Federal/ABA Layer
The ABA does not regulate AI platforms directly - it regulates lawyers. But ABA Formal Opinion 512 (July 2024) has become the effective national standard.
ABA Formal Opinion 512 confirms that AI can assist lawyers in tasks such as legal research, contract review, due diligence, document review, regulatory compliance, and drafting letters, contracts, briefs, and other legal documents - but lawyers must understand the issues involved in using this technology.
The opinion establishes that lawyers using generative AI must uphold the same professional duties that govern all legal work: competence (understanding the tool's limitations), confidentiality (knowing how the tool handles data), communication (disclosure to clients), supervision (oversight of outputs), candor toward the tribunal (never submitting hallucinated material), and reasonable fees.
Where the Actual Safe Harbors Are
There are four distinct safe harbor architectures for a legaltech product today:
Safe Harbor 1: The "Tool Not Advisor" Architecture: Position the product as providing information and analysis, not advice. The output surfaces options, flags issues, and explains what clauses typically mean - but never recommends a specific course of action for a specific user's situation. Pair with explicit scope disclaimers and "consult a lawyer" routing. It is imperfect, but it is the most universally applied approach.
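Concretely, this posture can be enforced with a guardrail layer inside the product. The sketch below is purely illustrative - the marker list, function names, and disclaimer text are hypothetical, not drawn from any real product or bar guidance:

```python
# Hypothetical guardrail for the "tool not advisor" posture: queries that
# ask how the law applies to the user's own facts are routed to counsel;
# everything else gets general information plus a scope disclaimer.

SCOPE_DISCLAIMER = (
    "This is general legal information, not legal advice. "
    "For your specific situation, consult a licensed attorney."
)

# Crude illustrative markers of "apply the law to MY facts" requests.
ADVICE_MARKERS = ("should i", "recommend", "my case", "my situation")

def is_advice_seeking(query: str) -> bool:
    q = query.lower()
    return any(marker in q for marker in ADVICE_MARKERS)

def respond(query: str, generate_information) -> str:
    if is_advice_seeking(query):
        # Never answer substantively; route to a lawyer instead.
        return ("That question asks how the law applies to your specific "
                "facts. " + SCOPE_DISCLAIMER)
    # Information path: explain, compare, flag - never recommend.
    return generate_information(query) + "\n\n" + SCOPE_DISCLAIMER
```

In production the classification would be a model call rather than keyword matching; the structural point is that recommendation-style output is blocked before delivery, not caught afterward.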
Safe Harbor 2: The Lawyer-in-the-Loop Architecture: A licensed attorney supervises and signs off on outputs. The AI tool becomes a practice tool, not a legal services provider. ABA Opinion 512 specifically validates the use of GAI tools to review and summarize long contracts, provided the lawyer tested the tool's accuracy by manually reviewing a smaller subset of documents. The lawyer accountability layer is the structural safe harbor.
Safe Harbor 3: Regulatory Sandbox Participation: Enter the Utah sandbox or register as an ABS in Arizona. This is a real safe harbor - a formal waiver of UPL for the specific service scope you describe in your application. The Stanford five-year study found remarkably little evidence of consumer harm in either state - Utah's data shows only 20 consumer complaints across all sandbox entities, a harm-to-service ratio of approximately 1:5,869. The catch: it is geographically limited, the application process is substantive, and in Utah the bar for Phase 2 is specifically about serving underserved consumers (not enterprise B2B).
Safe Harbor 4: The Narrow Jurisdiction Strategy: Launch in a permissive jurisdiction first, build the evidence base for safety and accuracy there, then expand. This is how fintech sandbox strategies work, and it has direct applicability to legaltech.