The White House AI Framework Is Coming. Family Lawyers Should Pay Attention

On March 20, 2026, the White House released a National Policy Framework for Artificial Intelligence with legislative recommendations that touch child safety, deepfakes, copyright, free speech, and federal preemption of state AI laws.

At first glance, this looks like technology policy. But for family lawyers, especially those handling custody disputes, domestic violence cases, and digital evidence, this framework points to where the law is heading.

This is not abstract. These issues are already appearing in real cases.

Deepfakes Are Becoming a Family Law Problem

The framework emphasizes protections against AI-generated impersonation and digital replicas of voice, likeness, and identity.

That matters in family court. Evidence used to be text messages, emails, and photos. Now it can include fabricated audio, synthetic videos, or AI-generated conversations. A parent could present an audio recording that sounds real but was generated by a model. A party could submit screenshots that never existed. A litigant could create a video intended to influence a custody evaluator.

These risks are no longer theoretical. Courts will soon face authentication disputes involving AI-generated evidence. Lawyers should expect more motions challenging authenticity, more requests for metadata, and eventually expert testimony on synthetic media. The days of assuming that audio or video is reliable are ending.

Child Protection Rules Will Spill Into Custody Litigation

The framework focuses heavily on protecting minors from harmful AI content, exploitation risks, and self-harm exposure.

That intersects directly with custody disputes. Courts already consider social media usage and screen time. AI tools introduce new concerns. A child might interact with an AI companion for hours. A parent might allow unrestricted access to image generation tools. A minor could receive emotionally manipulative responses from a chatbot. These facts could become relevant to best interest analyses.

Expect arguments about whether a parent properly supervised a child’s use of AI. Expect disputes about AI companion apps. Expect custody orders that address parental controls, monitoring, and access to generative AI tools. This will look familiar. Courts adapted to smartphones and social media. They will now adapt to AI.

Federal Preemption Could Change California Practice

One of the most significant recommendations is that Congress should create a national AI policy framework and preempt burdensome state AI laws.

This matters for lawyers practicing in California, where AI regulation is evolving quickly. If Congress adopts a national standard, state-specific rules about disclosures, liability, and AI governance could be limited. That could affect how courts analyze AI evidence, how lawyers disclose AI use, and how litigants challenge AI-generated materials.

Family law tends to absorb broader legal shifts slowly. But once evidentiary disputes start appearing, uniform federal standards could shape courtroom practice.

Copyright and AI Will Affect Divorce Cases

The framework also addresses whether training AI models on copyrighted material is lawful and suggests leaving the issue to the courts.

This becomes relevant in divorces involving creative work, online businesses, and AI-assisted content. A spouse may generate income using AI tools. A party may create digital assets with AI assistance. Questions will arise about ownership, valuation, and whether AI-assisted output is marital property.

Family law often intersects with intellectual property in business valuations. AI will increase those intersections. Lawyers should expect disputes about AI generated revenue, prompt engineering, and digital asset ownership.

Free Speech and AI Complicate DVRO Litigation

The framework emphasizes protecting free speech while preventing misuse of AI systems.

This creates tension in domestic violence cases. AI tools can generate messages, impersonate voices, and automate communication. A restrained party could use AI to generate harassing messages. A litigant could claim that harmful content was produced by a model. Courts will need to determine intent, authorship, and responsibility.

These questions resemble earlier disputes about anonymous online harassment, but AI makes them more complex. The technology lowers the barrier to creating convincing content. That increases the likelihood that courts will confront these issues.

The Big Picture

The most important takeaway is that AI policy is quickly becoming relevant to family law practice. The framework addresses deepfakes, child safety, impersonation, and federal standards. All of those issues appear in custody disputes, domestic violence cases, and evidentiary hearings.

Family lawyers do not need to become technologists. But they should understand that AI will increasingly shape the facts of their cases. Evidence may be synthetic. Communications may be automated. Children may interact with AI systems. Income may be generated with AI tools.

These developments will not arrive all at once. They will appear gradually, case by case. But the direction is clear. AI is moving from the background of litigation to the center of it.

The White House framework signals that lawmakers see the same trend. Family court will feel the effects sooner than many expect.

Consumer vs. Enterprise AI: The Distinction Lawyers Are Still Missing

Paid Does Not Mean Private

There is a quiet misconception I keep seeing in legal practice right now: if you pay for an AI tool, it must be safe to use with client information. That instinct makes sense. It is also wrong.

The real distinction is not free versus paid. It is consumer versus enterprise. Once you understand that difference, a lot of the risk around AI in legal practice becomes easier to see and, more importantly, easier to manage.

Most lawyers are now familiar with tools like ChatGPT, Claude, Perplexity, Gemini, or Microsoft Copilot. Many are even paying for “Pro” versions of those tools. But those are still consumer systems. They are designed for individual use, not for handling confidential legal data. Even when you are paying for them, you are still operating under standard terms that you do not negotiate and do not control. Depending on those terms and settings, your inputs may be logged, retained, or used to improve the system.

That does not mean your data is automatically exposed or misused. But it does mean you are not operating in an environment built around confidentiality in the way lawyers are used to. And that is the point that often gets lost. In legal practice, the issue is not whether a tool is helpful. The issue is whether you can safely put client information into it.

What Changes in Enterprise AI

Enterprise AI is built for a different use case entirely. It is designed for organizations, not individuals, and that changes the structure of the relationship. Instead of relying on general platform terms, you are operating under a contract.

That contract typically addresses how data is handled, how it can be used, and what protections are in place. Many enterprise systems include clear commitments that customer data is not used to train the model, along with security controls and administrative oversight that simply do not exist in consumer tools.

You can see this distinction in how companies position their enterprise offerings. Perplexity’s enterprise platform, for example, is structured around organizational use, internal data integration, and governance features. The emphasis is not just on better answers. It is on control.

That shift is not just technical. It is legal. You are no longer a user clicking through terms. You are a customer operating within a negotiated framework that defines how your data is treated.

Why This Matters in Family Law

From a family law perspective, this distinction is not theoretical. We deal with financial records, medical information, domestic violence allegations, and highly sensitive communications on a daily basis.

The idea that this information could be entered into a system where retention and use are not fully controlled should give any lawyer pause. Even if nothing goes wrong, the risk itself is the problem.

There is also a more subtle issue that often gets overlooked. Some consumer tools allow you to adjust settings or opt out of certain uses of your data. But that does not necessarily mean your data is not retained. And once data is retained, it carries ongoing risk.

That is why the safest default rule remains simple. If you are using a consumer AI tool, assume what you input is not confidential.

A Simple Practical Framework

Once you strip away the marketing language, the workflow becomes straightforward.

Start by asking whether the task can be done with deidentified data. In many cases, it can. You do not need names, addresses, or specific identifiers to generate structure, outline arguments, or organize issues. A properly framed prompt can give you most of the benefit without exposing sensitive information.
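
To make that concrete, here is a minimal sketch, in Python, of what stripping identifiers before prompting can look like. The patterns and party names are illustrative assumptions rather than a complete anonymization tool, and nothing here replaces a human read-through before text leaves your systems.

```python
import re

# Illustrative patterns only; a real matter needs human review on top of this.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),            # email addresses
    (re.compile(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # Social Security numbers
]

# Hypothetical party names for this example; build this list per matter.
PARTY_NAMES = {"Jane Roe": "[PETITIONER]", "John Roe": "[RESPONDENT]"}

def deidentify(text: str) -> str:
    """Swap obvious identifiers for neutral placeholders before prompting."""
    for name, placeholder in PARTY_NAMES.items():
        text = text.replace(name, placeholder)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Jane Roe (jane.roe@example.com, 555-867-5309) seeks a move-away order."))
# [PETITIONER] ([EMAIL], [PHONE]) seeks a move-away order.
```

The point is the workflow, not the regex: neutral placeholders preserve the structure an AI tool needs to help you while keeping the identifying details out of the prompt.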

If the task cannot be done with deidentified data, then the next step is to move into an enterprise environment that is actually designed to handle confidential information. That means a system with contractual protections, clear limits on data use, and organizational controls.

Even then, the analysis does not stop. You still want to limit what you input to what is reasonably necessary for the task. There is no reason to upload an entire client file if all you need is help structuring a narrow issue or identifying patterns in a subset of data.

This is not a new concept. It is the same judgment lawyers already exercise when sharing information with co-counsel, experts, or third-party vendors. AI simply lowers the friction, which makes it easier to cross that line without thinking about it.

Not All “Enterprise” Is Equal

One final point is worth emphasizing. The label “enterprise” is not a magic word.

Different platforms are at different levels of maturity. Some offer robust governance, audit controls, and integration with firm systems. Others are newer and still evolving their enterprise offerings. The existence of an enterprise tier does not eliminate the need for diligence.

Lawyers still need to understand what the contract actually says, whether data is used for training, what controls exist at the organizational level, and whether the system aligns with their professional obligations.

In other words, this is a vendor analysis problem. The technology may be new, but the underlying responsibility is not.

The Bottom Line

The biggest risk with AI in legal practice is not hallucinated case law. It is the quiet creation of a parallel record of client information in systems that were never designed to hold it.

The solution is not to avoid AI. It is to use it consciously.

Deidentify when you can. Use enterprise systems when you cannot. And do not assume that “paid” means “confidential.”

AI Chats Are Discoverable: What Lawyers Are Missing About Consumer vs. Enterprise Tools

Most of the conversation around AI in the legal profession is still stuck on the wrong issue.

We keep talking about hallucinations. Fake cases. Bad citations. The fear is that lawyers will rely on AI, get the law wrong, and embarrass themselves in court. That risk is real, and courts have already dealt with it in a very direct way. But it is also the easiest problem to fix.

You can avoid fake law by doing what lawyers are already required to do: read what you cite.

The harder problem, and the one that is quietly becoming far more important, is that lawyers and clients are creating discoverable evidence every time they use consumer AI tools. And most of them do not realize it. Once you understand that, the rest of the analysis follows pretty quickly.

The real distinction is not AI versus no AI. It is consumer AI versus enterprise AI, and whether the information being entered into these systems is actually protected or effectively being disclosed.

Courts are not struggling with this. They are applying existing rules. And those rules are not particularly forgiving.

The issue is not just that AI can be wrong. It is that, in many cases, using consumer AI is functionally no different than sharing information with a third party.

The Obvious Problem Everyone Focuses On

The starting point for most discussions is still Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). In that case, attorneys filed a brief citing judicial decisions that did not exist. The cases were generated by an AI tool, and no one verified them before filing.

The court did not treat this as a technology issue. It treated it as a Rule 11 problem.

Attorneys have an affirmative duty to conduct a reasonable inquiry into the law before filing anything. That duty does not change because a tool produces something that looks polished. A fake case is not a weak argument. It is not law at all.

The court sanctioned the attorneys, and more importantly, emphasized that lawyers act as gatekeepers. You can use tools, but you cannot outsource judgment.

California followed quickly. In Noland v. Land of the Free, L.P., 114 Cal. App. 5th 426 (2025), an appellate brief contained 23 quotations, 21 of which were fabricated by generative AI. The Court of Appeal published the opinion specifically to make a point that should not need to be said: if you cite a case, you are expected to have read it.

These cases matter. But they are also the low-hanging fruit. They are obvious failures, and they are preventable.

The profession will adjust to that.

The Problem That Actually Matters

The more consequential issue is not accuracy. It is confidentiality.

Lawyers are used to thinking of drafting as a private process. You test arguments, explore facts, and refine strategy in a space that feels internal. Historically, that intuition has been correct.

Consumer AI tools create the same feeling. You type in facts, ask questions, and receive something that looks like work product. It feels like you are thinking out loud in a private workspace.

You are not.

That distinction is no longer theoretical. In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), the court addressed whether materials created through a publicly available AI platform were protected by attorney-client privilege or the work product doctrine.

The answer was no.

The court’s reasoning was straightforward and, frankly, inevitable. Communications with a public AI system are not communications with counsel. They are not inherently confidential. And to the extent information is shared with the platform, it is shared with a third party.

That last point matters most. Once information is disclosed outside a protected relationship, privilege is either lost or never attaches in the first place.

The court went even further and made clear that even if the user inputs information originally learned from counsel, sharing that information with a public AI tool can waive privilege.

That is not a subtle shift. It is a structural one.

Why “Consumer vs. Enterprise” Actually Matters

This is where most lawyers misunderstand the issue.

The difference between consumer and enterprise AI is not branding or marketing. It is legal risk.

Consumer tools are built for broad public use. They often retain inputs, process them externally, and operate under terms that do not guarantee confidentiality in any legally meaningful sense. From a litigation perspective, that means anything entered into those systems may later exist as discoverable data.

Enterprise systems, by contrast, are designed with contractual confidentiality, restricted data use, and controlled environments. When properly implemented, they can function more like other secure legal technologies that lawyers already rely on.

But even that distinction has limits. Enterprise tools reduce risk. They do not eliminate it. Lawyers still need to understand how the system works, what data is stored, and whether anything is shared beyond the platform.

The key point is that courts are not creating new rules for AI. They are applying existing ones. If you disclose information to a third party, you should assume it may be discoverable.

In other words, the risk is not just discoverability. It is that privilege may never attach at all, or may be waived by the disclosure itself.

AI Chats Are Just Another Category of Evidence

Once you look at it that way, the discovery implications become obvious.

AI interactions are just another form of electronically stored information. They sit alongside emails, text messages, and internal notes. If they are relevant, they are potentially discoverable.

In some cases, they may be more revealing than traditional evidence. AI chats often capture how someone was thinking in real time. They can include summaries of events, admissions, evolving legal theories, or attempts to frame a narrative.

That is exactly the type of material opposing counsel will want.

And unlike a rough draft that never leaves a lawyer’s computer, these interactions may exist on third-party systems with their own retention policies.

Preservation Obligations Now Include AI

This is where practice has not caught up yet.

Once litigation is reasonably anticipated, parties have a duty to preserve relevant electronically stored information. There is no exception for AI chats, prompts, or outputs. If anything, those materials should be presumed discoverable unless there is a clear reason otherwise.

That means litigation hold notices need to evolve.

It is no longer enough to tell clients to preserve emails and text messages. They should also be instructed to preserve any AI-related materials, including chat histories, prompts, and outputs generated in connection with the dispute.

Just as importantly, they need to be told not to delete or “clean up” those interactions. The instinct to treat AI chats as disposable is understandable, but it creates real spoliation risk if the data turns out to be relevant.

This is not a new legal obligation. It is an existing one applied to a new category of data.
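
To show what that can look like in practice, here is a minimal sketch of one preservation step: hashing an exported chat file and appending the result to a simple manifest so you can later demonstrate the copy was not altered. The file name is hypothetical, and this illustrates the concept, not a full forensic workflow.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preserve(file_path: str, manifest_path: str = "preservation_manifest.jsonl") -> dict:
    """Hash an exported AI chat file and log it to an append-only manifest."""
    data = Path(file_path).read_bytes()
    entry = {
        "file": file_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # fingerprint of the exact bytes
        "size_bytes": len(data),
        "preserved_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(manifest_path, "a", encoding="utf-8") as manifest:
        manifest.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical export from a consumer AI tool, saved before anyone "cleans it up".
print(preserve("chatgpt_export_2026-01-15.json"))
```

A matching hash later is not a complete chain of custody, but it is far better than a screenshot with no provenance.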

The Takeaway Lawyers Should Actually Focus On

None of this means lawyers should avoid AI. Used properly, these tools can improve efficiency and help manage costs in ways that are difficult to ignore.

But the way many lawyers and clients are currently using AI, particularly through consumer platforms, is creating a parallel record of communications that is neither privileged nor protected.

That record may ultimately become evidence.

The law has not changed. Courts are applying the same principles they always have. What has changed is how easily people can create a detailed, timestamped record of their own thoughts, strategies, and narratives without realizing it.

And in litigation, those records rarely stay private.

Deepfakes and the Future of Evidence in Family Court

Artificial intelligence has changed how we create and consume digital media. Images, videos, and audio recordings once carried an implicit assumption of authenticity. If something appeared on camera, the instinctive reaction was to believe it. But with the rise of deepfake technology, that assumption is no longer safe.

For family law practitioners, this development presents a serious and under-discussed challenge. As synthetic media becomes easier to produce and harder to detect, courts will increasingly face questions about whether digital evidence is real, manipulated, or entirely fabricated.

What Exactly Is a Deepfake?

The term “deepfake” refers to synthetic media created with artificial intelligence tools that can realistically alter or fabricate images, audio, or video. In many cases, the technology allows a person’s face or voice to be convincingly replaced with someone else’s likeness. The concept first gained attention several years ago when online users began sharing manipulated videos created with open-source face-swapping tools. Since then, the technology has evolved rapidly and has expanded into sophisticated systems capable of generating entirely fictional people or events.

Today, producing convincing synthetic media no longer requires advanced technical expertise. Consumer-level software, tutorials, and publicly available AI tools have dramatically lowered the barrier to entry.

Deepfakes Have Already Appeared in Family Law Disputes

The concern is not merely theoretical. There have already been instances in which manipulated audio recordings were introduced during custody disputes in an effort to discredit a parent. In one widely reported case, a recording appeared to capture a father making violent threats. The audio sounded authentic, including the speaker’s tone and accent. Yet forensic review revealed that the file had been altered and that words had been inserted that were never actually spoken.

This example illustrates the practical problem for attorneys. When confronted with apparently credible recordings, even experienced lawyers may initially struggle to determine whether the evidence is genuine. That uncertainty can complicate litigation strategy, settlement discussions, and credibility determinations.

The Broader Context: Deepfakes Outside the Courtroom

Deepfakes have also appeared in political and social contexts. Fabricated videos have been used to spread misinformation or to undermine public trust in institutions and leaders. In other cases, manipulated clips were circulated online to make public figures appear intoxicated or to falsely portray statements they never made.

These examples highlight an important point: the technology is improving faster than our ability to detect it.

Are the Rules of Evidence Ready for This?

Traditionally, courts have treated photographs and videos as powerful forms of evidence. The legal system developed around the idea that images can function as a kind of “silent witness” to events. If a photo fairly and accurately depicts what occurred, it can carry substantial evidentiary weight.

But deepfakes challenge the foundation of that assumption.

Under existing evidence rules, digital media generally must be authenticated before admission. In California, that means presenting sufficient evidence to support a finding that the item is what the proponent claims it to be. Courts often rely on witness testimony, circumstantial evidence, or the context surrounding the recording to establish authenticity.

The problem is that these traditional authentication methods may not always detect sophisticated digital manipulation. A witness might genuinely believe a video accurately depicts an event without realizing the footage has been altered.

The “Liar’s Dividend”

Deepfake technology creates another, less obvious risk known as the “liar’s dividend.” As public awareness of synthetic media increases, individuals caught on authentic recordings may claim the evidence is fake.

In other words, the existence of deepfakes can undermine trust in legitimate evidence. A real recording might be dismissed as fabricated simply because the technology to fabricate such recordings exists.

For courts tasked with determining the truth, this creates a difficult evidentiary landscape.

Detecting Manipulated Media

Researchers and technologists have identified several indicators that may suggest a video has been altered. These include unnatural facial movements, inconsistent lighting or shadows, mismatched lip movements, or irregular blinking patterns. Other signs may appear in the way reflections behave on glasses or how facial hair or skin textures change frame-to-frame.

However, these clues are not always reliable. Studies suggest that even when people are warned about deepfakes, they still struggle to identify them accurately.

Ethical Duties for Attorneys

For lawyers, the rise of deepfakes intersects with existing professional responsibilities. Attorneys have a duty of candor to the tribunal and cannot knowingly present false evidence. If a lawyer later discovers that evidence introduced in a case is fabricated, the rules of professional conduct require remedial action.

At the same time, attorneys also have a duty of competence, which increasingly includes technological competence. Lawyers must understand the risks associated with emerging technologies and remain informed about developments that affect their practice.

In the context of digital evidence, that may mean asking harder questions about the origin of recordings, preserving metadata, consulting forensic experts when appropriate, and avoiding assumptions about authenticity.
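
As one concrete example of asking harder questions, the sketch below uses ffprobe, the inspection tool that ships with FFmpeg, to dump a recording's container and stream metadata before anyone relies on it. The exhibit name is hypothetical, and missing or inconsistent creation timestamps and encoder tags are not proof of manipulation, only a reason to bring in a forensic expert.

```python
import json
import subprocess

def inspect_recording(path: str) -> dict:
    """Dump a media file's container and stream metadata using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

info = inspect_recording("exhibit_a_voicemail.m4a")  # hypothetical exhibit
# Creation time and encoder tags, when the container carries them, appear here.
print(json.dumps(info["format"].get("tags", {}), indent=2))
```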

The Path Forward

The legal system is still adapting to the realities of AI-generated media. Some commentators have suggested that courts may eventually require stronger corroboration for digital recordings or adopt stricter authentication standards. Legislators and policy groups have also begun studying the societal risks associated with deepfake technology and potential regulatory responses.

In the meantime, the practical lesson for family law practitioners is simple: digital evidence deserves careful scrutiny.

Video clips, voice recordings, and screenshots may appear compelling, but appearances alone are no longer enough. In an era where artificial intelligence can manufacture convincing realities, the legal profession must adapt its evidentiary instincts accordingly.

The next generation of family law disputes will not only involve contested facts. Increasingly, they may involve contested realities.

Navigating Digital Surveillance and Privacy in California Divorce and Custody Cases

In 2026, almost no California family law case is “just” about he‑said/she‑said anymore—it’s he‑said/she‑screenshotted. Digital surveillance has become one of the most important (and misunderstood) pressure points in divorce and custody litigation. It sits where domestic violence, privacy, credibility, and co‑parenting collide—usually on a shared iCloud account.

What digital surveillance really looks like

In family law, surveillance usually doesn’t look like movie‑style hacking. It looks like a spouse who knows your passcode “because we’re married” and quietly scrolls your texts at night. It’s the shared Apple ID that keeps mirroring your messages and photos to an iPad in your ex’s kitchen. It’s Find My, Google location history, Life360, or car apps that started as a “safety thing” and turned into a running commentary on where you parked and who you visited. It’s Ring or Nest cameras used to check when you come and go and who shows up at the door. In more extreme cases, it’s stalkerware hidden on a phone, keyloggers on a laptop, or location trackers tucked into a car.

None of this feels cinematic; it feels like someone living inside your life.

How California courts are starting to see it

California has been steadily expanding its understanding of domestic violence to include coercive control and tech‑facilitated abuse. A partner doesn’t need to lay a hand on you to have a serious impact on your autonomy if they are effectively the unseen third party in every text, drive, and outing. Judges look less at the brand of app and more at the pattern: constant surprise appearances, interrogation about your movements, references to private conversations they shouldn’t know about, and the way your behavior changed in response.

In September 2020, Governor Gavin Newsom signed Senate Bill 1141, one of the country’s first laws explicitly allowing courts to consider coercive control as domestic violence in family court matters. The law defined coercive control as “a pattern of behavior that unreasonably interferes with a person’s free will and personal liberty.”

That amendment to Family Code section 6320 took effect on January 1, 2021, making coercive control grounds for a domestic violence restraining order. And because a finding of domestic violence triggers the rebuttable presumption under Family Code section 3044 against awarding custody to the abusive parent, a coercive control finding can directly shape custody outcomes.

If you are seeking a DVRO or asking the court to weigh domestic violence in custody, the goal is to show that the tech is part of a system of monitoring and control, not just an unfortunate gadget choice.

The self‑help discovery trap

Once people realize they’re being monitored, they often go straight into self‑help investigator mode: logging into the other person’s email, guessing passwords, downloading entire accounts “for evidence,” or quietly collecting their own stash of recordings. On a human level, that reaction makes sense. Legally and strategically, it can be a disaster. Unauthorized access can flirt with criminal statutes, curated screenshots invite credibility attacks, and if both sides are spying, your legitimate concerns about being watched can be reframed as mutual bad behavior instead of a power imbalance.

A safer line: preserve what you can lawfully access on your own devices and accounts, then stop and get legal advice before you turn into your ex’s IT department.

When kids and “safety” are the justification

Things get even more complicated when children are involved, because almost every surveillance tactic gets wrapped in the language of “safety.” Tracking apps on the child’s phone, smartwatches that let one parent listen in on calls, reading messages between the child and the other parent, or using location data to critique every stop during the other parent’s time—these are all framed as concern, not control.

California judges are increasingly skeptical of that framing. They ask whether the tech genuinely serves the child’s safety, or whether it’s really about monitoring and undermining the other parent. A parent who turns every car ride and phone call into a surveillance opportunity can easily be seen as increasing the child’s anxiety and conflict, not protecting them.

What to do if you suspect you’re being monitored

If you think you’re being surveilled, the goal is to stabilize first, strategize second. Quietly secure your own digital life: change passwords to strong, unique ones, enable two‑factor authentication, review which devices are logged into your accounts, and turn off location or sharing you no longer consent to.

Set up at least one reasonably private channel for legal and personal support—a new email, phone number, or device your ex has never touched—so you can talk freely with your lawyer and support system. Then, start documenting specific incidents that made you suspect monitoring: dates, what happened, what tipped you off, and how it affected your behavior. What you should generally avoid without targeted legal advice is wiping devices, factory‑resetting everything, or installing your own stealth tools to “get them back.” Those moves can destroy useful evidence and make you look like you have something to hide.

Turning a tech mess into a legal strategy

In court, the technology is the method; the legal issue is the pattern. A judge doesn’t need to understand every setting on every app, but they do need a clear narrative: what the other person did, how they did it, how it changed your life and your children’s lives, and how you responded once you realized what was happening. From there, the goal is to translate that story into concrete orders—limits on tracking and monitoring, boundaries around shared accounts and devices, and clear rules for children’s tech use that prioritize their emotional safety.

If your phone feels more like a leash than a tool, digital surveillance isn’t a side note in your case; it’s a core issue. A California family law attorney who is fluent in both the Family Code and the modern tech stack can help you turn that invisible, background layer of your relationship into a focused, persuasive part of your litigation strategy.

The bottom line

In the end, digital surveillance isn’t a quirky subplot to your California family law case; it’s a core fact pattern that judges are learning to recognize and punish. The same tools that make modern life convenient—shared clouds, tracking apps, smart cameras—can, in the wrong hands, become a quiet but pervasive form of control. If you ignore that layer, you risk walking into court with only half your story. If you name it, document it, and build orders around it, you turn an invisible problem into a legally actionable one.

You do not need to become a cybersecurity expert overnight, but you do need to take your digital reality seriously. That means tightening your own privacy, resisting the urge to play counter‑spy, and working with counsel who understands both how families actually use technology and how California judges are responding when that technology is weaponized. When your phone, your accounts, and your apps are part of the abuse—or part of the conflict—your legal strategy has to meet you where you live now: online, connected, and, with the right plan, no longer under someone else’s quiet watch.

When Your Co‑Parent Uses AI as a Weapon (And What California Courts Actually Care About)

At this point, I’m no longer surprised when a client walks into my office and says some version of: “My ex is using AI against me.” Sometimes it’s a 50-page “timeline” ChatGPT drafted overnight. Sometimes it’s a custody declaration that reads like a law review article, filed by a self‑represented parent who has never set foot in a law library. Sometimes it’s a client quietly admitting they “cleaned up” a text thread with an AI screenshot editor before sending it to me.

Underneath all of it is the same anxiety: if the other side leans hard on AI—writing, editing, summarizing, even fabricating—will the court believe them more than you?

The short answer: not if the judge is paying attention and not if your lawyer is doing their job.

AI‑polished stories vs. admissible evidence

California family courts still run on evidence, not vibes. An AI‑drafted declaration may be smoother, more organized, and full of confident language, but that doesn’t make it more credible.

Judges care about:

  • Personal knowledge: Can this person actually testify to what they’re saying, or is it hearsay with fancy transitions?
  • Foundation: Do they explain how they know the thing they’re asserting?
  • Corroboration: Are there texts, emails, school records, police reports, or third‑party witnesses that line up with the story?
  • Consistency over time: Does this match what they said in prior pleadings, CPS reports, DCSS filings, or criminal matters?

An AI‑polished declaration that overreaches—asserting facts the party can’t back up—may feel intimidating when you first read it, but it’s a gift on cross‑examination. Once the witness is under oath and off‑script, the seams start to show. California courts are already signaling that lawyers and litigants cannot outsource judgment to a tool and then shrug when the content turns out to be inaccurate or fabricated.

If you’re on the receiving end of one of these glossy declarations, your job is not to match their word count. Your job—through counsel—is to expose the gap between what’s written and what can actually be proved.

When AI crosses the line from “assistive” to “abusive”

There’s a meaningful difference between a parent using AI to help outline their thoughts and a parent using AI to harass, surveil, or manipulate.

Here are patterns I’m increasingly seeing in California cases:

  • AI‑amplified harassment: A co‑parent uses AI to churn out long, repetitive, accusatory emails or messages through OurFamilyWizard or Talking Parents, then points to the sheer volume of their own writing as proof of how “concerned” and “involved” they are.
  • AI‑assisted character assassination: Parties ask a chatbot to “rewrite” their narrative to sound more sympathetic and their ex more dangerous, sometimes blending in half‑truths and speculation that would never survive evidentiary scrutiny.
  • AI‑boosted surveillance: Tech‑savvy parents feed location logs, shared calendar entries, or cloud‑stored photos into AI tools to construct elaborate “timelines” of alleged misconduct, often built on data they had no legal right to access in the first place.

The law doesn’t give anyone a free pass because they wrapped their behavior in new technology. California already has tools to address this:

  • Domestic violence restraining orders (DVROs) can cover “disturbing the peace” through digital harassment, including obsessive, hostile communications and technological abuse.
  • Custody orders and parenting plans can restrict communication to specific platforms, character counts, or topics when one parent weaponizes email or apps.
  • Evidence gathered through privacy violations can be excluded, and in some cases, the underlying conduct may expose the offending party to criminal or civil liability, especially where unauthorized access to cloud accounts or devices is involved, given California’s strong privacy regime.

If AI is being used as a force multiplier for bad behavior, the solution is usually not “use more AI back.” It’s targeted court orders, clear boundaries, and disciplined evidentiary strategy.

What California judges want to see from you

If the other side is flooding the court with AI‑generated content, you don’t beat them by playing the same game. You stand out by doing the opposite.

Judges in California family courts are increasingly skeptical of anything that feels over‑lawyered or over‑produced, especially when it comes from a self‑represented party who clearly had technological help. What they appreciate instead:

  • Clean, human declarations: Short, fact‑driven, chronological narratives with dates, places, and concrete examples.
  • Anchored exhibits: Clearly labeled, minimally annotated texts, emails, school records, medical records, and app screenshots that tie directly to specific statements in your declaration.
  • Reasonable requests: Orders that seem tailored to the actual problem—specific exchanges, decision‑making breakdowns, or safety issues—rather than sweeping, punitive measures.

We’re already watching higher courts impose monetary sanctions for AI‑hallucinated case law and misused technology, and those decisions are being published “as a warning.” That same attitude will bleed into family law: judges will not reward parties who treat AI as a shortcut around honesty, evidence, or proportionality.

How I actually use AI in your case (and where I draw the line)

I’m open with clients that I use AI in my practice. Not to replace legal judgment, and not to ghost‑write your story, but as a behind‑the‑scenes tool:

  • Brainstorming issues: Spotting angles or questions to investigate in discovery or at deposition.
  • Organizing, not inventing: Helping outline a declaration or categorize a high‑volume document dump before I personally refine and verify it.
  • Translating complexity: Testing ways of explaining a technical issue—like cloud privacy, data retention, or child‑support tax consequences—in plain English.

What I don’t do:

  • I don’t file anything in court that I haven’t personally reviewed, revised, and cross‑checked against the actual evidence and the current state of California law, including recent guidance on generative AI from the State Bar and legal ethics commentators.
  • I don’t let AI “sweeten” your story. If something didn’t happen, it’s not going into your declaration—no matter how good it would look on paper.
  • I don’t treat AI output as legal research. Any citations, statutes, or cases still get verified the old‑fashioned way because courts have shown they are willing to sanction lawyers and parties who rely on fake or misapplied authorities.

Behind every filed document, there should still be a lawyer exercising human judgment, rooted in actual experience in front of actual judges. That part is not outsourceable.

If you suspect AI misuse in your California divorce

If you’re in a California divorce or custody case and you think AI is being used against you, here are practical steps to take before you spiral:

  • Preserve, don’t edit: Save what you’re receiving—messages, filings, screenshots—without “fixing” or curating them yourself. Don’t run your own evidence through editing tools that can change timestamps, formatting, or content.
  • Flag patterns, not just one document: Point out the volume, tone, and timing of communications, and any disconnect between what’s written and what actually occurred.
  • Talk to your lawyer about strategy: Depending on the facts, the right move might be a narrowly tailored protective order, evidentiary objections, a discovery motion, or simply using cross‑examination to expose the gap between AI polish and real‑world parenting.
  • Focus on your own credibility: Courts notice the party who stays grounded in verifiable facts, respects privacy boundaries, and resists the urge to “win the narrative” at all costs.

The rise of AI hasn’t changed the core question California family courts ask in almost every contested case: Who is acting in good faith, telling the truth, and putting the children’s interests ahead of their own need to score points?

Tools will keep evolving. That question won’t.

The Ethics of Using AI in Divorce Law: A California Attorney’s Perspective

If you’ve been following the hype, it sounds like AI is about to revolutionize everything from grocery shopping to courtroom litigation. For us family law attorneys, it already has — at least in small but significant ways. AI tools now help manage the mountains of paperwork, scheduling nightmares, and data-heavy discovery that come with divorce cases.

But with great tech comes great responsibility.

While AI might be the shiny new assistant in the law office, ethics are the guardrails keeping us from turning legal practice into an unsupervised science experiment. In family law, where privacy, accuracy, and human judgment are everything, these guardrails matter.

Let’s talk about why.

Competence: Yes, Lawyers Must Understand Their AI Tools

Under California’s Rules of Professional Conduct (Rule 1.1), echoed by the American Bar Association’s Formal Opinion 512 (2024), attorneys have an ethical duty to remain competent in the technology they use.

That doesn’t mean we all have to become AI engineers. But it does mean:

  • We need to understand how AI tools work, especially their limits.
  • We must assess the risks and benefits of using AI in client matters.
  • We’re responsible for supervising AI output, the same way we would supervise a paralegal or junior attorney.

Put simply: AI can draft your discovery requests faster than any human, but I’m the one who has to make sure they’re correct, complete, and legally sound before they go out the door.

Confidentiality: Safeguarding Sensitive Divorce Data

Family law involves deeply personal information: finances, child custody disputes, medical histories, allegations of abuse. When AI tools are involved in processing this data, confidentiality concerns are front and center.

According to both ABA guidance and state bar recommendations, attorneys must:

  • Vet AI tools and cloud services for strong data security protections.
  • Understand where and how client data is stored and processed.
  • Avoid using AI platforms that share data for training large language models without client consent.

For example, tools like LawToolBox process data securely within a law firm’s private Microsoft 365 environment — a safer choice than free or public AI platforms with unclear data policies.

This matters because mishandling client data isn’t just embarrassing — it’s a potential ethics violation and malpractice risk.

Accuracy and the Hallucination Problem: Lawyers Are Still the Gatekeepers

One of the most famous AI blunders happened in 2023, when lawyers submitted a court brief filled with fake case citations generated by ChatGPT. The judge was not amused. (Mata v. Avianca, S.D.N.Y. 2023.)

This is called “AI hallucination” — when AI confidently fabricates information that looks real but isn’t.

For family law attorneys, this is a huge ethical landmine. Imagine AI hallucinating a case precedent about child custody or spousal support. If an attorney fails to verify that information, they could mislead the court, violate duties of candor (Rule 3.3), and face sanctions.

That’s why ethical use of AI means:

  • Double-checking every citation.
  • Fact-checking AI-generated summaries.
  • Never filing anything AI drafted without personal attorney review.

AI can assist, but it cannot replace human legal judgment. Period.

Bias and Fairness: Not All Data Is Created Equal

AI tools learn from historical data. But what if that data reflects biased outcomes?

For example, if a predictive analytics platform is trained on family law cases where mothers overwhelmingly received primary custody, its outputs might lean toward assuming that trend continues — regardless of your specific facts.

The ethical lawyer’s role is to:

  • Recognize and correct for inherent biases in AI recommendations.
  • Ensure AI outputs are used as informative tools, not as gospel.
  • Advocate for outcomes based on the client’s unique situation, not outdated trends.

The ABA and bar associations have raised serious concerns about bias in AI systems, urging lawyers to be vigilant about how these tools might perpetuate inequities if left unchecked.

Transparency: Telling Clients When AI Is Involved

Clients deserve to know when technology is being used in their case. While AI tools can help streamline tasks and lower costs, attorneys should be upfront about their role.

The ethical duty of communication (Rule 1.4) includes:

  • Informing clients when AI tools are being used to assist with their case.
  • Clarifying that all final work product is still supervised and approved by the attorney.
  • Explaining the benefits (efficiency, lower cost) and limits (AI isn’t giving you legal advice).

Transparency builds trust — especially when people are wary of technology handling their personal divorce matters.

Ethics Are the Foundation, Not an Afterthought

At the end of the day, using AI in divorce law isn’t unethical. Using it irresponsibly is.

California family law attorneys must approach AI the same way we approach any new technology:

  • With professional skepticism.
  • With clear ethical oversight.
  • With a commitment to client protection above all.

AI can help me process 1,000 pages of financial records faster. It can remind me of obscure filing deadlines. It can even draft a first version of a spousal support proposal. But it’s still my legal brain — my ethical obligation — that ensures those tools serve my clients well.

The machines are not taking over.
They’re just making the paperwork less painful.

AI in Divorce Law: Why Your Family Lawyer Has a Robot (and That’s a Good Thing)

Let me start with a confession: as a divorce attorney, I used to think “Artificial Intelligence” was just a buzzword tech companies threw around to impress investors. Fast forward to today, and AI is sitting right next to me—summarizing discovery responses, drafting rough pleadings, and politely reminding me of court deadlines I almost forgot.

No, AI isn’t replacing me. But it is making me a smarter, faster, and (dare I say) less-stressed attorney. And if you’re going through a divorce in California, that’s good news for you, too.

Why AI and Divorce Are a Perfect Match

Family law is a paper-heavy, data-heavy, emotionally charged practice area. We’re not just arguing over child custody and community property—we’re also dealing with tax returns, financial statements, text message logs, social media screenshots, and more bank records than any sane person should have to review manually.

AI thrives on this kind of data chaos. Tools powered by artificial intelligence can process and organize huge volumes of information with a speed no human (or intern) can match. They spot discrepancies across financial statements, flag unusual transactions, and can even help predict case outcomes based on past rulings.

Think of it as having a data-savvy paralegal who never gets tired or distracted by office gossip.

From Calendars to Courtrooms: How AI Works Behind the Scenes

Here’s where the rubber meets the road. AI is helping family lawyers manage the nuts and bolts of divorce cases in ways that are both practical and powerful:

1. Automated Scheduling & Deadlines

Ever worried your lawyer might miss a court deadline? AI tools like LawToolBox take court rules, apply them to your case timeline, and sync key deadlines into the attorney’s calendar—automatically. Even better, they update in real-time if court dates change. In fast-moving custody or support cases, this can be the difference between staying on track and scrambling for continuances.

2. Document Drafting & Organization

Divorce cases generate mountains of paperwork. AI drafting tools now create solid first drafts of pleadings, settlement agreements, discovery requests, and even financial disclosures. Systems like MyCase IQ and Clio Duo can scan through hundreds of pages, summarize key points, and help lawyers maintain organized case files.

This doesn’t just save time—it reduces human error. After all, it’s easy to miss a zero on a busy day, but AI tools can cross-check the figures and flag what doesn’t match.

3. Client Communication with a Digital Touch

AI chatbots and virtual assistants are now handling the flood of client questions that once buried law firm inboxes. These bots answer FAQs (like “When’s my mediation?” or “What do I need for a custody hearing?”) instantly—at midnight if needed. They draft polite, factual responses without the emotional baggage, which frankly, is refreshing in high-conflict divorces.

For co-parenting communication, tools like ToneMeter even analyze the tone of messages to help parties keep it civil. Yes, your ex’s snarky email might get “ToneMetered” into something the judge won’t frown at.

4. Financial Analysis & Discovery

AI isn’t just for clerical work. Advanced platforms are diving into forensic accounting tasks—analyzing tax returns, business valuations, and hunting for hidden assets. In cases where one party “forgets” to disclose crypto wallets or side businesses, AI can help connect the dots faster than traditional methods.
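
To give a flavor of what "connecting the dots" means mechanically, here is a minimal sketch that flags statistically unusual transactions in a bank-statement export. The file and column names are hypothetical, and real forensic accounting involves far more than an outlier test.

```python
import pandas as pd

# Hypothetical bank-statement export; real column names vary by institution.
df = pd.read_csv("joint_account_2024.csv", parse_dates=["date"])

# Flag transactions more than three standard deviations from the account's mean amount.
mean, std = df["amount"].mean(), df["amount"].std()
df["unusual"] = (df["amount"] - mean).abs() > 3 * std

print(df[df["unusual"]][["date", "payee", "amount"]])
```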

5. Predictive Analytics & Case Strategy

One of the most exciting (and slightly intimidating) applications of AI is predictive analytics. Platforms like Lex Machina and Pre/Dicta can analyze thousands of past rulings, including judicial tendencies, and give lawyers a data-backed forecast of possible outcomes.

Want to know if your judge typically awards spousal support above guideline recommendations? There’s AI for that. These insights help attorneys fine-tune negotiation strategies and set realistic expectations for clients.

The Ethics of AI in Divorce (Or, “Don’t Worry, I’m Still the Lawyer”)

With all this technology buzzing in the background, you might wonder: is AI running my case? The answer is a firm no. The American Bar Association and California State Bar have made it clear: AI is a tool, not a substitute for legal judgment.

Attorneys must supervise AI outputs, protect client confidentiality, and ensure all filings meet professional standards. AI might draft a motion, but a real, licensed human (me) is responsible for reviewing, correcting, and filing it.

Also, AI sometimes “hallucinates”—it might invent a legal citation or misread a document. Remember those New York lawyers who got sanctioned for submitting fake cases from ChatGPT? That’s why lawyers need to remain the gatekeepers.

What This Means for You, the Client

For my clients, AI isn’t some cold, robotic overlord. It’s the reason your emails get answered faster, your documents are reviewed more thoroughly, and your case moves along with fewer delays. It means I can spend more time strategizing for your custody hearing and less time manually cross-referencing bank statements.

It also means that even solo and small family law firms can provide Big Law-level efficiency—without charging Big Law fees.

The Bottom Line: AI Is Here to Help (But I’m Still Driving)

AI is revolutionizing divorce law—but it’s not replacing the human side of what we do. Empathy, judgment, and experience are still irreplaceable. Technology handles the grunt work so I can focus on the hard stuff: advocating for you, negotiating fair outcomes, and helping you navigate one of life’s most challenging transitions.

So, next time you hear about AI in divorce cases, don’t picture a robot lawyer in a suit. Picture your human attorney—armed with smarter tools, sharper data, and maybe, finally, a little less caffeine-induced panic.

Want to know how AI could streamline your divorce case? Contact us for a consultation. No robots will answer (but they might help me prep for our meeting).

Tech Meets Tension: How AI Is Changing Divorce in California

Let’s be honest: divorce is already hard enough without also having to figure out how to fill out your financial disclosure forms while sobbing into your coffee. That’s why it’s no surprise that people are starting to turn to artificial intelligence for help—because if a chatbot can plan your vacation, maybe it can also explain how to divide your retirement accounts.

As a California divorce attorney, I’ve seen more and more clients using AI tools to stay organized. Some folks use it to help write declarations for court. Others use it to summarize years of texts with their ex (which they swear will prove emotional abuse). One client used ChatGPT to generate a “sample” custody schedule—it wasn’t terrible, though it did suggest alternating weekends and Thursdays, which sounded more like a dinner reservation system than a parenting plan.

And to be fair, AI can be helpful. It can draft, organize, calculate, even remind you that yes, you do have to list your Coinbase account on your disclosures. It’s like a robot paralegal—but without the judgmental sigh when you hand in your documents two weeks late.

But here’s the thing: divorce isn’t just paperwork. It’s strategy. It’s judgment. It’s law. No algorithm—at least not yet—can tell you whether to settle or fight, or how the judge is likely to rule on your custody modification request. That’s where having a lawyer who actually understands Family Code section 4320 (and maybe also how to spot a narcissist) comes in.

Then there’s the privacy piece. California has some of the strongest digital privacy laws in the country—like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA). If you’re feeding sensitive information into an AI tool, you may be waiving confidentiality protections without even knowing it. So before you ask ChatGPT to “write me a declaration explaining why I should get the house,” ask yourself: is this something I’d want floating around in a training database?

All of this is to say: AI is here, and it’s changing the way we approach divorce. It can save time, money, and a little bit of your sanity. But it can’t replace legal advice, or common sense. So go ahead and use the tools—but make sure you’ve also got a real person in your corner who knows the law, the court, and how to navigate the messy emotional side of ending a marriage.

And if your ex walks into mediation waving an AI-generated parenting plan? Don’t panic. Just send it to me. We’ll run it through a real filter—one that includes legal knowledge, experience, and maybe a strong cup of coffee.

Divorce Meets the Cloud: How California Privacy Laws Complicate a Breakup

In my divorce practice, I’ve come to expect the usual issues—custody battles, financial disclosures, and the occasional argument over who gets the Peloton. But lately, I spend just as much time talking about iCloud access, shared GPS apps, and whether Alexa heard something useful.

The truth is, divorce in California now comes with a digital layer—and it’s not always easy to peel back. Thanks to the California Consumer Privacy Act (CCPA), codified at California Civil Code §§ 1798.100–1798.199.100, and its expansion under the California Privacy Rights Act (CPRA) (which amended and extended the CCPA effective January 1, 2023), people now have strong rights over their personal data. You can request what data a business has collected about you, ask for it to be deleted, and opt out of its sale. CPRA also gives you the right to correct inaccuracies and to restrict how companies use “sensitive personal information,” like precise geolocation or health data.

All of this is great for individual privacy. But in a divorce? It’s complicated.

I’ve had clients try to bring in everything from Nest camera clips to shared calendars to prove a point. And look, I get it—when you’re hurt or frustrated, your instinct is to gather everything. But just because it’s on your phone doesn’t mean it’s automatically usable in court. If you accessed it without permission—or if it involves your kids—there are legal boundaries you can’t cross, even if your ex “deserves it.”

And yes, people are using AI now too. I’ve seen clients run all their old texts through a chatbot to catch inconsistencies or summarize arguments. Some of it is helpful, but the court still expects actual, authenticated evidence—not a digital vibe check.

What’s clear is that the breakup process today isn’t just emotional—it’s technical. We’re not just dividing homes and parenting time. We’re untangling shared logins, navigating cloud storage, and figuring out who has the right to see what.

So if you’re in the middle of a California divorce, my advice is this: before you start combing through your ex’s digital footprint or screen-recording your co-parenting app, pause. Talk to your lawyer. Understand what’s fair game and what’s a privacy violation.

Because in today’s world, how you gather evidence matters just as much as what you find.