At this point, I’m no longer surprised when a client walks into my office and says some version of: “My ex is using AI against me.” Sometimes it’s a 50‑page “timeline” ChatGPT drafted overnight. Sometimes it’s a custody declaration that reads like a law review article, filed by a self‑represented parent who has never set foot in a law library. Sometimes it’s a client quietly admitting they “cleaned up” a text thread with an AI screenshot editor before sending it to me.
Underneath all of it is the same anxiety: if the other side leans hard on AI—writing, editing, summarizing, even fabricating—will the court believe them more than you?
The short answer: not if the judge is paying attention and not if your lawyer is doing their job.
AI‑polished stories vs. admissible evidence
California family courts still run on evidence, not vibes. An AI‑drafted declaration may be smoother, more organized, and full of confident language, but that doesn’t make it more credible.
Judges care about:
- Personal knowledge: Can this person actually testify to what they’re saying, or is it hearsay with fancy transitions?
- Foundation: Do they explain how they know the thing they’re asserting?
- Corroboration: Are there texts, emails, school records, police reports, or third‑party witnesses that line up with the story?
- Consistency over time: Does this match what they said in prior pleadings, CPS reports, DCSS filings, or criminal matters?
An AI‑polished declaration that overreaches—asserting facts the party can’t back up—may feel intimidating when you first read it, but it’s a gift on cross‑examination. Once the witness is under oath and off‑script, the seams start to show. California courts are already signaling that lawyers and litigants cannot outsource judgment to a tool and then shrug when the content turns out to be inaccurate or fabricated.
If you’re on the receiving end of one of these glossy declarations, your job is not to match their word count. Your job—through counsel—is to expose the gap between what’s written and what can actually be proved.
When AI crosses the line from “assistive” to “abusive”
There’s a meaningful difference between a parent using AI to help outline their thoughts and a parent using AI to harass, surveil, or manipulate.
Here are patterns I’m increasingly seeing in California cases:
- AI‑amplified harassment: A co‑parent uses AI to churn out long, repetitive, accusatory emails or messages through OurFamilyWizard or Talking Parents, then points to the sheer volume of their own writing as proof of how “concerned” and “involved” they are.
- AI‑assisted character assassination: Parties ask a chatbot to “rewrite” their narrative to sound more sympathetic and their ex more dangerous, sometimes blending in half‑truths and speculation that would never survive evidentiary scrutiny.
- AI‑boosted surveillance: Tech‑savvy parents feed location logs, shared calendar entries, or cloud‑stored photos into AI tools to construct elaborate “timelines” of alleged misconduct, often built on data they had no legal right to access in the first place.
The law doesn’t give anyone a free pass because they wrapped their behavior in new technology. California already has tools to address this:
- Domestic violence restraining orders (DVROs) can cover “disturbing the peace” under Family Code section 6320, which reaches digital harassment, including obsessive, hostile communications and technological abuse.
- Custody orders and parenting plans can restrict communication to specific platforms, character counts, or topics when one parent weaponizes email or apps.
- Evidence gathered through privacy violations can be excluded. And given California’s strong privacy regime, the underlying conduct itself may expose the offending party to criminal or civil liability, especially where unauthorized access to cloud accounts or devices is involved.
If AI is being used as a force multiplier for bad behavior, the solution is usually not “use more AI back.” It’s targeted court orders, clear boundaries, and disciplined evidentiary strategy.
What California judges want to see from you
If the other side is flooding the court with AI‑generated content, you don’t beat them by playing the same game. You stand out by doing the opposite.
Judges in California family courts are increasingly skeptical of anything that feels over‑lawyered or over‑produced, especially when it comes from a self‑represented party who clearly had technological help. What they appreciate instead:
- Clean, human declarations: Short, fact‑driven, chronological narratives with dates, places, and concrete examples.
- Anchored exhibits: Clearly labeled, minimally annotated texts, emails, school records, medical records, and app screenshots that tie directly to specific statements in your declaration.
- Reasonable requests: Orders that seem tailored to the actual problem—specific exchanges, decision‑making breakdowns, or safety issues—rather than sweeping, punitive measures.
We’re already watching higher courts impose monetary sanctions for AI‑hallucinated case law and misused technology, and those decisions are being published “as a warning.” That same attitude will bleed into family law: judges will not reward parties who treat AI as a shortcut around honesty, evidence, or proportionality.
How I actually use AI in your case (and where I draw the line)
I’m open with clients that I use AI in my practice. Not to replace legal judgment, and not to ghost‑write your story, but as a behind‑the‑scenes tool:
- Brainstorming issues: Spotting angles or questions to investigate in discovery or at deposition.
- Organizing, not inventing: Helping outline a declaration or categorize a high‑volume document dump before I personally refine and verify it.
- Translating complexity: Testing ways of explaining a technical issue—like cloud privacy, data retention, or child‑support tax consequences—in plain English.
What I don’t do:
- I don’t file anything in court that I haven’t personally reviewed, revised, and cross‑checked against the actual evidence and the current state of California law, including recent guidance on generative AI from the State Bar and legal ethics commentators.
- I don’t let AI “sweeten” your story. If something didn’t happen, it’s not going into your declaration—no matter how good it would look on paper.
- I don’t treat AI output as legal research. Any citations, statutes, or cases still get verified the old‑fashioned way because courts have shown they are willing to sanction lawyers and parties who rely on fake or misapplied authorities.
Behind every filed document, there should still be a lawyer exercising human judgment, rooted in actual experience in front of actual judges. That part is not outsourceable.
If you suspect AI misuse in your California divorce
If you’re in a California divorce or custody case and you think AI is being used against you, here are practical steps to take before you spiral:
- Preserve, don’t edit: Save what you’re receiving—messages, filings, screenshots—without “fixing” or curating them yourself. Don’t run your own evidence through editing tools that can change timestamps, formatting, or content. (One simple way to document that your saved files haven’t been altered is sketched after this list.)
- Flag patterns, not just one document: Point out the volume, tone, and timing of communications, and any disconnect between what’s written and what actually occurred.
- Talk to your lawyer about strategy: Depending on the facts, the right move might be a narrowly tailored protective order, evidentiary objections, a discovery motion, or simply using cross‑examination to expose the gap between AI polish and real‑world parenting.
- Focus on your own credibility: Courts notice the party who stays grounded in verifiable facts, respects privacy boundaries, and resists the urge to “win the narrative” at all costs.
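On the “preserve, don’t edit” point: if you want a verifiable record that your saved files haven’t changed since the day you collected them, one common approach is to compute a cryptographic hash of each file and keep the results with a date. Below is a minimal Python sketch of that idea; the evidence folder name and manifest format are my own illustrative assumptions, and this is a convenience for your own records, not a substitute for professional forensic preservation.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(folder: str, out_file: str = "evidence_manifest.json") -> None:
    """Record a hash and a recorded-at timestamp for every file in a folder.

    If a file is later edited in any way, its hash will no longer match the
    manifest, which is exactly what makes the record useful.
    """
    entries = [
        {
            "file": p.name,
            "sha256": hash_file(p),
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        }
        for p in sorted(Path(folder).iterdir())
        if p.is_file()
    ]
    Path(out_file).write_text(json.dumps(entries, indent=2))


if __name__ == "__main__":
    # "evidence" is a hypothetical folder of saved messages and screenshots.
    build_manifest("evidence")
```

Anyone who later receives a copy of those files—your lawyer, or a forensic examiner—can re‑run the same hash; if the digests match the manifest, the file is byte‑for‑byte identical to what you recorded.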
The rise of AI hasn’t changed the core question California family courts ask in almost every contested case: Who is acting in good faith, telling the truth, and putting the children’s interests ahead of their own need to score points?
Tools will keep evolving. That question won’t.
