Dodonai

The safety layer every law firm needs

AI never sends an email. It creates drafts. You review. You send. Every action is logged. If the agent isn't confident, it asks instead of guessing.

Blueprint refunded if we don't leave you with a clear path forward.

What's at stake when AI runs unsupervised

In 2023, a New York attorney filed a brief with 6 case citations generated by ChatGPT. None of them existed. The court sanctioned the attorney, sanctioned his firm, and referred the matter to the state bar. That case, Mata v. Avianca, became the cautionary tale every lawyer heard about for a year.

It was the beginning of the pattern, not the end. As of early 2026, the National Law Review has documented over 600 cases involving AI hallucinations, implicating 128 attorneys. Courts have issued monetary sanctions, disqualified firms from representations, referred lawyers to bar disciplinary authorities, and struck filings from the record.

The pattern is consistent. None of those attorneys got sanctioned for using AI. They got sanctioned for treating AI output as verified work product, skipping the step every lawyer applies to every other source: checking it. The safety layer below is what we build into every Dodonai agent, so that verification is part of the workflow rather than a habit you have to remember.

The 3 hallucination patterns we design against

Hallucination is a technical term for a simple failure: the model generates text that looks authoritative but is factually wrong. In legal work it shows up in 3 distinct patterns, each of which needs its own detection method.

Fabricated citation

The case doesn't exist at all. The model generates a realistic case name, reporter citation, and holding. Detection: every citation gets verified against Westlaw, Lexis, or the primary source before the brief reaches your inbox.

Real case, wrong holding

The case exists, but the model mischaracterizes what the court held. The citation passes a superficial check and fails a substantive one. Detection: agents pull the actual opinion text, not just the citation, and your reviewer sees the source alongside the summary.

Outdated or overruled authority

The case was good law when the model was trained but has since been overruled, distinguished, or superseded. Detection: every legal authority gets Shepardized or KeyCited as part of the agent's pipeline, with subsequent history surfaced in the output.
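To make those three checks concrete, here is a rough sketch of how a verification record might be assembled inside an agent pipeline. The lookup and history functions are placeholders for whatever research tool does the actual check (Westlaw, Lexis, or a court database); they are illustrative, not a real API, and the field names are ours for this example only.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CitationCheck:
    """One verification record per cited authority, surfaced to the reviewer."""
    citation: str
    exists: bool                  # fabricated-citation check
    opinion_text: str | None      # actual opinion text, so the holding can be read
    subsequent_history: list[str] = field(default_factory=list)  # overruled / superseded / distinguished

def verify_authorities(
    citations: list[str],
    lookup: Callable[[str], dict | None],   # placeholder: primary-source lookup
    history: Callable[[str], list[str]],    # placeholder: Shepardize / KeyCite-style check
) -> list[CitationCheck]:
    """Run every citation through all three checks before the draft
    reaches a reviewer. Anything that fails stays flagged, not silently dropped."""
    results = []
    for cite in citations:
        record = lookup(cite)
        results.append(CitationCheck(
            citation=cite,
            exists=record is not None,
            opinion_text=record.get("opinion_text") if record else None,
            subsequent_history=history(cite) if record else [],
        ))
    return results
```

The point of the record is that the reviewer sees existence, source text, and subsequent history next to the agent's summary, not a bare citation.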

ABA Formal Opinion 512: your obligations are already defined

Many attorneys assume the ethics rules haven't caught up to AI yet. They have. ABA Formal Opinion 512 (2024) establishes that the existing Model Rules of Professional Conduct apply fully to generative AI use. It doesn't create new obligations. It maps existing ones onto AI.

Competence (Rule 1.1): you have to understand your AI tools well enough to recognize their limitations.

Supervision (Rules 5.1, 5.3): the duty to supervise extends to AI-generated work product the same way it extends to associates and non-lawyer staff.

Confidentiality (Rule 1.6): entering client information into a tool that stores or trains on it is a disclosure problem.

Candor (Rule 3.3): submitting AI-generated content to a court without verification risks a misrepresentation, regardless of whether you knew it was wrong.

Every Dodonai agent ships with an audit trail that supports the supervision requirement directly. You can show, for any AI-assisted work product, what the agent saw, what it produced, what was reviewed, and what was changed before it left your firm.

The 4 principles, applied to every agent we ship

The pattern comes from the same architecture used in nuclear power plants, air traffic control, and financial trading systems: automated systems handle the volume, humans authorize every action that reaches the outside world.

AI drafts, you decide

No agent sends an email, files a document, contacts a client, or takes any action that reaches a person outside the firm. When an agent identifies a stale matter, it drafts a follow-up and saves it to drafts. You read, edit, send. When a deadline monitor computes a response date, it proposes a calendar event. You confirm. This is permanent architecture, not a transitional measure.
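A minimal sketch of what that boundary looks like in code (the class and field names are hypothetical, not our actual codebase): the only outbound verbs an agent has are "save a draft" and "propose an event." There is no send, file, or contact method to misuse.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Draft:
    to: str
    subject: str
    body: str
    status: str = "awaiting_human_review"   # a human sends it, or it never goes out

@dataclass
class ProposedEvent:
    title: str
    proposed_date: date
    status: str = "awaiting_confirmation"

class FollowUpAgent:
    """Illustrative agent surface: it can create drafts and propose
    calendar events, and nothing else. Sending, filing, and client
    contact simply don't exist as actions it can take."""

    def draft_follow_up(self, matter: str, contact: str) -> Draft:
        return Draft(
            to=contact,
            subject=f"Checking in on {matter}",
            body=f"Draft follow-up for {matter}. Review, edit, and send it yourself.",
        )

    def propose_deadline(self, matter: str, response_date: date) -> ProposedEvent:
        return ProposedEvent(title=f"Response due: {matter}", proposed_date=response_date)
```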

Full audit trail

Every agent action produces a log: when it ran, what data it accessed, what it produced, and what it routed to review. This serves malpractice defense (the log shows what was flagged and when), ethics compliance (the trail demonstrates competence and supervision), and continuous improvement (you can trace any unexpected result to its source and fix it).
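In practice the log can be as simple as an append-only record per agent run. A minimal sketch, with illustrative field names rather than our actual schema:

```python
import json
from datetime import datetime, timezone

def log_agent_run(run_id: str, agent: str, data_accessed: list[str],
                  outputs: list[str], routed_to_review: list[str],
                  log_path: str = "agent_audit.jsonl") -> dict:
    """Append one audit record per agent run: when it ran, what data it read,
    what it produced, and what it flagged for a human."""
    entry = {
        "run_id": run_id,
        "agent": agent,
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "data_accessed": data_accessed,
        "outputs": outputs,
        "routed_to_review": routed_to_review,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```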

Uncertain items route to review

Not every input fits a clean category. When an agent's confidence falls below threshold, it doesn't guess. It flags the item, leaves it in your queue, and waits. The cost is a few extra items to scan each day. The benefit is that ambiguous items never fall through the cracks. A paralegal who's unsure escalates. Your agent does the same thing, every time.
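The routing rule itself is deliberately simple. A sketch, assuming the agent produces a confidence score per item (the threshold and field names are illustrative):

```python
def route_item(item_id: str, proposed_label: str, confidence: float,
               threshold: float = 0.85) -> dict:
    """Apply the proposed label only when the agent is confident; otherwise
    flag the item and leave it in the human review queue. It never guesses."""
    if confidence >= threshold:
        return {"item": item_id, "action": "apply_label",
                "label": proposed_label, "confidence": confidence}
    return {"item": item_id, "action": "hold_for_review",
            "reason": f"confidence {confidence:.2f} below threshold {threshold}"}
```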

Access controls scope what an agent can touch

Each agent operates within defined permissions: read aggressively, write conservatively. An inbox triage agent reads email and applies labels; it can't delete, send, or forward. A deadline monitor reads court filings and proposes calendar events; it can't accept hearing dates or file documents. Errors stay contained to drafts and suggestions, both of which go through human review.
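Scoping is easiest to see as an explicit allow-list per agent. A sketch with hypothetical permission names, not our production configuration:

```python
# Hypothetical permission scopes per agent: read broadly, write narrowly.
AGENT_PERMISSIONS = {
    "inbox_triage": {
        "read":  {"email"},
        "write": {"labels"},            # no delete, send, or forward
    },
    "deadline_monitor": {
        "read":  {"court_filings"},
        "write": {"proposed_events"},   # proposes only; a human accepts
    },
}

def check_permission(agent: str, action: str, resource: str) -> None:
    """Refuse any action outside the agent's declared scope."""
    allowed = AGENT_PERMISSIONS.get(agent, {}).get(action, set())
    if resource not in allowed:
        raise PermissionError(f"{agent} may not {action} {resource}")

check_permission("inbox_triage", "write", "labels")   # allowed
# check_permission("inbox_triage", "write", "email")  # raises PermissionError
```

Whatever an agent gets wrong lands in a draft or a suggestion, which a human reviews before anything leaves the firm.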

What the safety layer won't protect you from

Overreliance. If your team stops reading drafts critically, rubber-stamping every output, the safety layer fails. Gates only work when a human actually checks. Part of every Managed engagement is monthly review of the skip rate (how often the agent escalated to a human) to catch drift early.

Tool drift. AI vendors update models and data handling practices. We watch the model partners we use and re-validate quarterly. If a vendor changes its retention policy, you'll hear from us before you hear from someone else.

Staff training gaps. Your firm's policy is only as strong as the weakest user. If a paralegal pastes client data into a free-tier tool because nobody told them not to, the architecture above doesn't help. Every Build engagement includes a written firm AI policy and a 30-minute team walkthrough.

The verification habit

The single highest-value change a firm can make isn't a policy or a framework. It's a habit. Treat every AI output the way you'd treat a brief from a brilliant but unreliable first-year associate. Assume the research is thorough and the reasoning is sound. Then check every citation, confirm every date, validate every fact against the file.

The associate is fast, hardworking, and available at 2am. They make mistakes. So does AI. The safety layer exists to make verification efficient, not to substitute for your judgment. Nothing does.

Frequently Asked Questions

Is client data kept confidential when agents run?

Yes. Agents run inside infrastructure your firm controls. Our LLM partners operate under zero data retention, so client data isn't stored after processing or used to train models. We sign a BAA where the work touches PHI. SOC 2 controls apply to every engagement.

Start saving time and money on Day 1 with Dodonai

Learn how Dodonai can help take your law practice to the next level.