
How Dodon.ai Protects Data Privacy When Using LLMs in Medico-Legal Workflows

Security and confidentiality aren’t optional in legal and medical work; they’ve been built into our AI from day one.
Introduction
When your work involves sensitive legal and medical information, data privacy isn’t just a checkbox; it’s non-negotiable.
At Dodon.ai, one of the most common questions we hear is:
“What happens to my data when it’s processed by large language models (LLMs)?”
In the US, the answer comes down to two critical aspects of privacy: security and confidentiality, applied together under strict contractual and technical safeguards.
Security vs. Confidentiality: What’s the Difference?
Before talking about AI, it’s important to separate two concepts that often get confused.
Data Security means preventing unauthorized access to your information. Dodon.ai follows industry best practices for security, including:
- Encryption in transit and at rest (see the sketch after this list)
- Role-based access controls
- SOC 2-aligned policies
- HIPAA-compliant handling through US-based infrastructure and a signed Business Associate Agreement (BAA) when applicable
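To make "encryption at rest" concrete, here is a minimal sketch using the Python `cryptography` package. It is illustrative only, not Dodon.ai's actual implementation; in production, keys come from a managed key service rather than being generated next to the data they protect.

```python
# Minimal illustration of encryption at rest using symmetric (Fernet) encryption.
# Illustrative only: real deployments fetch keys from a KMS/HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production: retrieved from a key management service
cipher = Fernet(key)

record = b"IME report excerpt: claimant examined on ..."
ciphertext = cipher.encrypt(record)  # only this ciphertext is ever written to disk

assert cipher.decrypt(ciphertext) == record  # round-trip check
```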
Data Confidentiality means ensuring that even the vendors processing your data cannot use it for their own purposes. This is where Dodon.ai goes beyond the baseline.
Our Vendor Standards (For US Deployments)
We don’t work with just any LLM provider. Before processing a single document, each vendor must meet three non-negotiable requirements, all backed by US-based contractual agreements:
- No Model Training on Your Data
Vendors must agree in writing never to use your data to train their models.
Example: OpenAI’s enterprise API policy in the US excludes customer data from model training by default.
- Zero Data Retention (ZDR)
Once your document is processed, it is deleted immediately and never stored when ZDR is contractually enabled (see the sketch after this list).
Note: US enterprise/API customers with ZDR have no prompts or outputs retained; without ZDR, default settings may keep data for up to 30 days.
- Data Residency Controls
Processing occurs on US-only servers when required by your workflow, ensuring compliance with applicable US state privacy laws such as the CCPA.
Consumer-tier services do not guarantee this; enterprise/API configuration is required.
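As an illustration of what a retention-conscious request can look like in practice, here is a hedged sketch using OpenAI's official Python SDK. The model name and prompts are placeholders, and note that ZDR itself is a contractual, organization-level setting rather than a per-request flag; the `store=False` parameter shown here only asks the API not to retain the completion for later retrieval.

```python
# Sketch of a "closed loop" request: the document is read into memory, sent to an
# enterprise LLM endpoint, and only the result is kept. Assumes the official
# `openai` Python SDK; ZDR itself is enabled contractually at the org level.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_record(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        store=False,     # ask the API not to retain this completion for later retrieval
        messages=[
            {"role": "system", "content": "Summarize this medical record for an IME report."},
            {"role": "user", "content": document_text},
        ],
    )
    # Nothing is written to disk client-side; the source text lives only in
    # memory for the duration of the call.
    return response.choices[0].message.content
```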

These requirements create a closed loop: your data leaves your device, is processed securely in the US, is returned to you, and never lingers “in the ether.”
Why This Matters in the US
Recent legal developments, such as federal court orders requiring preservation of certain ChatGPT consumer logs, have raised concerns for many practitioners.
For our US customers using ZDR via enterprise/API endpoints, such preservation orders don’t apply: if nothing is stored, there is nothing to subpoena.
The Bigger Picture
Concerns about giving sensitive data to third parties aren’t new. Early in the cloud computing era, many argued it was too risky to trust outside servers. Today, cloud services are standard, offering unmatched scalability, reliability, and security for most organizations.
We believe the same trajectory applies to AI: unless you’re a massive enterprise with unique, in-house data controls, the combination of trusted cloud-hosted LLMs and strong technical/legal safeguards, like those we use, is the safest, most cost-effective approach.
Closing Statement
At Dodon.ai, privacy isn’t a marketing line; it’s the foundation of everything we do in medico-legal AI.
US customers can focus on winning cases or delivering accurate IME reports, knowing their data is:
- Handled under HIPAA-aligned, US-based, ZDR-enabled enterprise agreements
- Never trained on
- Never stored
- Never used beyond the purpose you intend
Want to use a platform that keeps your data safe when processing sensitive medico-legal documents? Sign up at Dodon.ai