Navigating Digital Surveillance and Privacy in California Divorce and Custody Cases

In 2026, almost no California family law case is “just” about he‑said/she‑said anymore—it’s he‑said/she‑screenshotted. Digital surveillance has become one of the most important (and misunderstood) pressure points in divorce and custody litigation. It sits where domestic violence, privacy, credibility, and co‑parenting collide—usually on a shared iCloud account.

What digital surveillance really looks like

In family law, surveillance usually doesn’t look like movie‑style hacking. It looks like a spouse who knows your passcode “because we’re married” and quietly scrolls your texts at night. It’s the shared Apple ID that keeps mirroring your messages and photos to an iPad in your ex’s kitchen. It’s Find My, Google location history, Life360, or car apps that started as a “safety thing” and turned into a running commentary on where you parked and who you visited. It’s Ring or Nest cameras used to check when you come and go and who shows up at the door. In more extreme cases, it’s stalkerware hidden on a phone, keyloggers on a laptop, or location trackers tucked into a car.

None of this feels cinematic; it feels like someone living inside your life.

How California courts are starting to see it

California has been steadily expanding its understanding of domestic violence to include coercive control and tech‑facilitated abuse. A partner doesn’t need to lay a hand on you to have a serious impact on your autonomy if they are effectively the unseen third party in every text, drive, and outing. Judges look less at the brand of app and more at the pattern: constant surprise appearances, interrogation about your movements, references to private conversations they shouldn’t know about, and the way your behavior changed in response.

In September 2020, California Governor Gavin Newsom signed Senate Bill 1141, one of the country’s first laws explicitly allowing courts to consider coercive control as domestic violence in family court matters. The law defined coercive control as “a pattern of behavior that unreasonably interferes with a person’s free will and personal liberty.”

Senate Bill 1141 took effect on January 1, 2021, amending Family Code section 6320 so that coercive control can serve as grounds for a domestic violence restraining order. A finding of domestic violence, in turn, triggers Family Code section 3044’s rebuttable presumption against awarding custody to the abusive parent, so a documented pattern of coercive control can shape the custody outcome when children are involved.

If you are seeking a DVRO or asking the court to weigh domestic violence in custody, the goal is to show that the tech is part of a system of monitoring and control, not just an unfortunate gadget choice.

The self‑help discovery trap

Once people realize they’re being monitored, they often go straight into self‑help investigator mode: logging into the other person’s email, guessing passwords, downloading entire accounts “for evidence,” or quietly collecting their own stash of recordings. On a human level, that reaction makes sense. Legally and strategically, it can be a disaster. Unauthorized access can flirt with criminal statutes, curated screenshots invite credibility attacks, and if both sides are spying, your legitimate concerns about being watched can be reframed as mutual bad behavior instead of a power imbalance.

A safer line: preserve what you can lawfully access on your own devices and accounts, then stop and get legal advice before you turn into your ex’s IT department.

When kids and “safety” are the justification

Things get even more complicated when children are involved, because almost every surveillance tactic gets wrapped in the language of “safety.” Tracking apps on the child’s phone, smartwatches that let one parent listen in on calls, reading messages between the child and the other parent, or using location data to critique every stop during the other parent’s time—these are all framed as concern, not control.

California judges are increasingly skeptical of that framing. They ask whether the tech genuinely serves the child’s safety, or whether it’s really about monitoring and undermining the other parent. A parent who turns every car ride and phone call into a surveillance opportunity can easily be seen as increasing the child’s anxiety and conflict, not protecting them.

What to do if you suspect you’re being monitored

If you think you’re being surveilled, the goal is to stabilize first, strategize second. Quietly secure your own digital life: change passwords to strong, unique ones, enable two‑factor authentication, review which devices are logged into your accounts, and turn off any location sharing or account access you no longer consent to.

Set up at least one reasonably private channel for legal and personal support—a new email, phone number, or device your ex has never touched—so you can talk freely with your lawyer and support system. Then, start documenting specific incidents that made you suspect monitoring: dates, what happened, what tipped you off, and how it affected your behavior. What you should generally avoid without targeted legal advice is wiping devices, factory‑resetting everything, or installing your own stealth tools to “get them back.” Those moves can destroy useful evidence and make you look like you have something to hide.

Turning a tech mess into a legal strategy

In court, the technology is the method; the legal issue is the pattern. A judge doesn’t need to understand every setting on every app, but they do need a clear narrative: what the other person did, how they did it, how it changed your life and your children’s lives, and how you responded once you realized what was happening. From there, the goal is to translate that story into concrete orders—limits on tracking and monitoring, boundaries around shared accounts and devices, and clear rules for children’s tech use that prioritize their emotional safety.

If your phone feels more like a leash than a tool, digital surveillance isn’t a side note in your case; it’s a core issue. A California family law attorney who is fluent in both the Family Code and the modern tech stack can help you turn that invisible, background layer of your relationship into a focused, persuasive part of your litigation strategy.

The bottom line

In the end, digital surveillance isn’t a quirky subplot to your California family law case; it’s a core fact pattern that judges are learning to recognize and punish. The same tools that make modern life convenient—shared clouds, tracking apps, smart cameras—can, in the wrong hands, become a quiet but pervasive form of control. If you ignore that layer, you risk walking into court with only half your story. If you name it, document it, and build orders around it, you turn an invisible problem into a legally actionable one.

You do not need to become a cybersecurity expert overnight, but you do need to take your digital reality seriously. That means tightening your own privacy, resisting the urge to play counter‑spy, and working with counsel who understands both how families actually use technology and how California judges are responding when that technology is weaponized. When your phone, your accounts, and your apps are part of the abuse—or part of the conflict—your legal strategy has to meet you where you live now: online, connected, and, with the right plan, no longer under someone else’s quiet watch.

When Your Co‑Parent Uses AI as a Weapon (And What California Courts Actually Care About)


At this point, I’m no longer surprised when a client walks into my office and says some version of: “My ex is using AI against me.” Sometimes it’s a 50-page “timeline” ChatGPT drafted overnight. Sometimes it’s a custody declaration that reads like a law review article, filed by a self‑represented parent who has never set foot in a law library. Sometimes it’s a client quietly admitting they “cleaned up” a text thread with an AI screenshot editor before sending it to me.

Underneath all of it is the same anxiety: if the other side leans hard on AI—writing, editing, summarizing, even fabricating—will the court believe them more than you?

The short answer: not if the judge is paying attention and not if your lawyer is doing their job.

AI‑polished stories vs. admissible evidence

California family courts still run on evidence, not vibes. An AI‑drafted declaration may be smoother, more organized, and full of confident language, but that doesn’t make it more credible.

Judges care about:

  • Personal knowledge: Can this person actually testify to what they’re saying, or is it hearsay with fancy transitions?
  • Foundation: Do they explain how they know the thing they’re asserting?
  • Corroboration: Are there texts, emails, school records, police reports, or third‑party witnesses that line up with the story?
  • Consistency over time: Does this match what they said in prior pleadings, CPS reports, DCSS filings, or criminal matters?

An AI‑polished declaration that overreaches—asserting facts the party can’t back up—may feel intimidating when you first read it, but it’s a gift on cross‑examination. Once the witness is under oath and off‑script, the seams start to show. California courts are already signaling that lawyers and litigants cannot outsource judgment to a tool and then shrug when the content turns out to be inaccurate or fabricated.

If you’re on the receiving end of one of these glossy declarations, your job is not to match their word count. Your job—through counsel—is to expose the gap between what’s written and what can actually be proved.

When AI crosses the line from “assistive” to “abusive”

There’s a meaningful difference between a parent using AI to help outline their thoughts and a parent using AI to harass, surveil, or manipulate.

Here are patterns I’m increasingly seeing in California cases:

  • AI‑amplified harassment: A co‑parent uses AI to churn out long, repetitive, accusatory emails or messages through OurFamilyWizard or Talking Parents, then points to the sheer volume of their own writing as proof of how “concerned” and “involved” they are.
  • AI‑assisted character assassination: Parties ask a chatbot to “rewrite” their narrative to sound more sympathetic and their ex more dangerous, sometimes blending in half‑truths and speculation that would never survive evidentiary scrutiny.
  • AI‑boosted surveillance: Tech‑savvy parents feed location logs, shared calendar entries, or cloud‑stored photos into AI tools to construct elaborate “timelines” of alleged misconduct, often built on data they had no legal right to access in the first place.

The law doesn’t give anyone a free pass because they wrapped their behavior in new technology. California already has tools to address this:

  • Domestic violence restraining orders (DVROs) can cover “disturbing the peace” through digital harassment, including obsessive, hostile communications and technological abuse.
  • Custody orders and parenting plans can restrict communication to specific platforms, character counts, or topics when one parent weaponizes email or apps.
  • Evidence gathered through privacy violations can be excluded, and in some cases, the underlying conduct may expose the offending party to criminal or civil liability, especially where unauthorized access to cloud accounts or devices is involved, given California’s strong privacy regime.

If AI is being used as a force multiplier for bad behavior, the solution is usually not “use more AI back.” It’s targeted court orders, clear boundaries, and disciplined evidentiary strategy.

What California judges want to see from you

If the other side is flooding the court with AI‑generated content, you don’t beat them by playing the same game. You stand out by doing the opposite.

Judges in California family courts are increasingly skeptical of anything that feels over‑lawyered or over‑produced, especially when it comes from a self‑represented party who clearly had technological help. What they appreciate instead:

  • Clean, human declarations: Short, fact‑driven, chronological narratives with dates, places, and concrete examples.
  • Anchored exhibits: Clearly labeled, minimally annotated texts, emails, school records, medical records, and app screenshots that tie directly to specific statements in your declaration.
  • Reasonable requests: Orders that seem tailored to the actual problem—specific exchanges, decision‑making breakdowns, or safety issues—rather than sweeping, punitive measures.

We’re already watching higher courts impose monetary sanctions for AI‑hallucinated case law and misused technology, and those decisions are being published “as a warning.” That same attitude will bleed into family law: judges will not reward parties who treat AI as a shortcut around honesty, evidence, or proportionality.

How I actually use AI in your case (and where I draw the line)

I’m open with clients that I use AI in my practice. Not to replace legal judgment, and not to ghost‑write your story, but as a behind‑the‑scenes tool:

  • Brainstorming issues: Spotting angles or questions to investigate in discovery or at deposition.
  • Organizing, not inventing: Helping outline a declaration or categorize a high‑volume document dump before I personally refine and verify it.
  • Translating complexity: Testing ways of explaining a technical issue—like cloud privacy, data retention, or child‑support tax consequences—in plain English.

What I don’t do:

  • I don’t file anything in court that I haven’t personally reviewed, revised, and cross‑checked against the actual evidence and the current state of California law, including recent guidance on generative AI from the State Bar and legal ethics commentators.
  • I don’t let AI “sweeten” your story. If something didn’t happen, it’s not going into your declaration—no matter how good it would look on paper.
  • I don’t treat AI output as legal research. Any citations, statutes, or cases still get verified the old‑fashioned way because courts have shown they are willing to sanction lawyers and parties who rely on fake or misapplied authorities.

Behind every filed document, there should still be a lawyer exercising human judgment, rooted in actual experience in front of actual judges. That part is not outsourceable.

If you suspect AI misuse in your California divorce

If you’re in a California divorce or custody case and you think AI is being used against you, here are practical steps to take before you spiral:

  • Preserve, don’t edit: Save what you’re receiving—messages, filings, screenshots—without “fixing” or curating them yourself. Don’t run your own evidence through editing tools that can change timestamps, formatting, or content.
  • Flag patterns, not just one document: Point out the volume, tone, and timing of communications, and any disconnect between what’s written and what actually occurred.
  • Talk to your lawyer about strategy: Depending on the facts, the right move might be a narrowly tailored protective order, evidentiary objections, a discovery motion, or simply using cross‑examination to expose the gap between AI polish and real‑world parenting.
  • Focus on your own credibility: Courts notice the party who stays grounded in verifiable facts, respects privacy boundaries, and resists the urge to “win the narrative” at all costs.
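The “preserve, don’t edit” step above can even be made verifiable. One common-sense approach is to record a cryptographic fingerprint of each file the moment you save it, so you can later show it has not been altered. Here is a minimal Python sketch (the filename and note are hypothetical placeholders):

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_file_bytes(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def log_evidence_item(filename: str, data: bytes, note: str) -> dict:
    """Build one preservation entry: name, digest, capture time, and context."""
    return {
        "file": filename,
        "sha256": hash_file_bytes(data),
        "captured_utc": datetime.now(timezone.utc).isoformat(),
        "note": note,
    }

# Hypothetical example: log a screenshot exactly as received, unmodified.
entry = log_evidence_item(
    "message_screenshot.png",            # invented filename
    b"<raw screenshot bytes go here>",   # placeholder content
    "Screenshot of co-parenting app message, saved without edits.",
)
print(json.dumps(entry, indent=2))
```

Keeping a log like this alongside the untouched originals makes it much harder for anyone to argue the material was doctored after the fact, because any later edit would change the hash.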

The rise of AI hasn’t changed the core question California family courts ask in almost every contested case: Who is acting in good faith, telling the truth, and putting the children’s interests ahead of their own need to score points?

Tools will keep evolving. That question won’t.

The Ethics of Using AI in Divorce Law: A California Attorney’s Perspective

If you’ve been following the hype, it sounds like AI is about to revolutionize everything from grocery shopping to courtroom litigation. For us family law attorneys, it already has — at least in small but significant ways. AI tools now help manage the mountains of paperwork, scheduling nightmares, and data-heavy discovery that come with divorce cases.

But with great tech comes great responsibility.

While AI might be the shiny new assistant in the law office, ethics are the guardrails keeping us from turning legal practice into an unsupervised science experiment. In family law, where privacy, accuracy, and human judgment are everything, these guardrails matter.

Let’s talk about why.

Competence: Yes, Lawyers Must Understand Their AI Tools

Under California’s Rules of Professional Conduct (Rule 1.1), echoed by the American Bar Association’s Formal Opinion 512 (2024), attorneys have an ethical duty to remain competent in the technology they use.

That doesn’t mean we all have to become AI engineers. But it does mean:

  • We need to understand how AI tools work, especially their limits.
  • We must assess the risks and benefits of using AI in client matters.
  • We’re responsible for supervising AI output, the same way we would supervise a paralegal or junior attorney.

Put simply: AI can draft your discovery requests faster than any human, but I’m the one who has to make sure they’re correct, complete, and legally sound before they go out the door.

Confidentiality: Safeguarding Sensitive Divorce Data

Family law involves deeply personal information: finances, child custody disputes, medical histories, allegations of abuse. When AI tools are involved in processing this data, confidentiality concerns are front and center.

According to both ABA guidance and state bar recommendations, attorneys must:

  • Vet AI tools and cloud services for strong data security protections.
  • Understand where and how client data is stored and processed.
  • Avoid using AI platforms that share data for training large language models without client consent.

For example, tools like LawToolBox process data securely within a law firm’s private Microsoft 365 environment — a safer choice than free or public AI platforms with unclear data policies.

This matters because mishandling client data isn’t just embarrassing — it’s a potential ethics violation and malpractice risk.

Accuracy and the Hallucination Problem: Lawyers Are Still the Gatekeepers

One of the most famous AI blunders happened in 2023, when lawyers submitted a court brief filled with fake case citations generated by ChatGPT. The judge was not amused. (Mata v. Avianca, SDNY 2023.)

This is called “AI hallucination” — when AI confidently fabricates information that looks real but isn’t.

For family law attorneys, this is a huge ethical landmine. Imagine AI hallucinating a case precedent about child custody or spousal support. If an attorney fails to verify that information, they could mislead the court, violate duties of candor (Rule 3.3), and face sanctions.

That’s why ethical use of AI means:

  • Double-checking every citation.
  • Fact-checking AI-generated summaries.
  • Never filing anything AI drafted without personal attorney review.

AI can assist, but it cannot replace human legal judgment. Period.
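That citation-checking habit can be partly mechanized. As a rough illustration (the pattern below is deliberately simplistic and will miss many real citation formats), a short Python pass can pull candidate case names out of a draft so that each one gets looked up by hand:

```python
import re

# Rough pattern for "Party v. Party" case names; illustrative only.
# Real citations (reporters, pin cites, statutes) are far more varied.
CASE_NAME = re.compile(r"\b[A-Z][A-Za-z.'-]+ v\. [A-Z][A-Za-z.'-]+\b")

def extract_case_names(text: str) -> list[str]:
    """List candidate case names so each can be verified manually."""
    return sorted(set(CASE_NAME.findall(text)))

draft = (
    "As in Mata v. Avianca, sanctions followed fabricated authority. "
    "Compare Smith v. Jones, a case the tool may simply have invented."
)
print(extract_case_names(draft))  # every name still needs human verification
```

The point is that the script only builds the checklist; a lawyer still has to confirm each case exists and actually says what the draft claims it says.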

Bias and Fairness: Not All Data Is Created Equal

AI tools learn from historical data. But what if that data reflects biased outcomes?

For example, if a predictive analytics platform is trained on family law cases where mothers overwhelmingly received primary custody, its outputs might lean toward assuming that trend continues — regardless of your specific facts.

The ethical lawyer’s role is to:

  • Recognize and correct for inherent biases in AI recommendations.
  • Ensure AI outputs are used as informative tools, not as gospel.
  • Advocate for outcomes based on the client’s unique situation, not outdated trends.

The ABA and bar associations have raised serious concerns about bias in AI systems, urging lawyers to be vigilant about how these tools might perpetuate inequities if left unchecked.

Transparency: Telling Clients When AI Is Involved

Clients deserve to know when technology is being used in their case. While AI tools can help streamline tasks and lower costs, attorneys should be upfront about their role.

The ethical duty of communication (Rule 1.4) includes:

  • Informing clients when AI tools are being used to assist with their case.
  • Clarifying that all final work product is still supervised and approved by the attorney.
  • Explaining the benefits (efficiency, lower cost) and limits (AI isn’t giving you legal advice).

Transparency builds trust — especially when people are wary of technology handling their personal divorce matters.

Ethics Are the Foundation, Not an Afterthought

At the end of the day, using AI in divorce law isn’t unethical. Using it irresponsibly is.

California family law attorneys must approach AI the same way we approach any new technology:

  • With professional skepticism.
  • With clear ethical oversight.
  • With a commitment to client protection above all.

AI can help me process 1,000 pages of financial records faster. It can remind me of obscure filing deadlines. It can even draft a first version of a spousal support proposal. But it’s still my legal brain — my ethical obligation — that ensures those tools serve my clients well.

The machines are not taking over.
They’re just making the paperwork less painful.


AI in Divorce Law: Why Your Family Lawyer Has a Robot (and That’s a Good Thing)

Let me start with a confession: as a divorce attorney, I used to think “Artificial Intelligence” was just a buzzword tech companies threw around to impress investors. Fast forward to today, and AI is sitting right next to me—summarizing discovery responses, drafting rough pleadings, and politely reminding me of court deadlines I almost forgot.

No, AI isn’t replacing me. But it is making me a smarter, faster, and (dare I say) less-stressed attorney. And if you’re going through a divorce in California, that’s good news for you, too.

Why AI and Divorce Are a Perfect Match

Family law is a paper-heavy, data-heavy, emotionally charged practice area. We’re not just arguing over child custody and community property—we’re also dealing with tax returns, financial statements, text message logs, social media screenshots, and more bank records than any sane person should have to review manually.

AI thrives on this kind of data chaos. Tools powered by artificial intelligence can process and organize huge volumes of information with a speed no human (or intern) can match. They spot discrepancies across financial statements, flag unusual transactions, and can even help predict case outcomes based on past rulings.

Think of it as having a data-savvy paralegal who never gets tired or distracted by office gossip.
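To make that concrete, here is a toy version of the kind of anomaly flagging these platforms perform with far more sophistication: a simple z-score pass over transaction amounts (the withdrawal figures below are invented for illustration):

```python
from statistics import mean, stdev

def flag_unusual(amounts: list[float], threshold: float = 3.0) -> list[float]:
    """Flag amounts more than `threshold` standard deviations from the mean.

    A deliberately simple heuristic; real forensic tools use richer models.
    """
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Hypothetical monthly withdrawals: mostly routine, one large outlier.
withdrawals = [120.0, 95.0, 110.0, 130.0, 105.0, 9800.0]
print(flag_unusual(withdrawals, threshold=2.0))  # the 9800.0 stands out
```

Real forensic review layers in merchants, timing, counterparties, and human judgment; the sketch just shows why software spots the outlier instantly while a tired human reviewer might not.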

From Calendars to Courtrooms: How AI Works Behind the Scenes

Here’s where the rubber meets the road. AI is helping family lawyers manage the nuts and bolts of divorce cases in ways that are both practical and powerful:

1. Automated Scheduling & Deadlines

Ever worried your lawyer might miss a court deadline? AI tools like LawToolBox take court rules, apply them to your case timeline, and sync key deadlines into the attorney’s calendar—automatically. Even better, they update in real-time if court dates change. In fast-moving custody or support cases, this can be the difference between staying on track and scrambling for continuances.

2. Document Drafting & Organization

Divorce cases generate mountains of paperwork. AI drafting tools now create solid first drafts of pleadings, settlement agreements, discovery requests, and even financial disclosures. Systems like MyCase IQ and Clio Duo can scan through hundreds of pages, summarize key points, and help lawyers maintain organized case files.

This doesn’t just save time—it reduces human error. After all, it’s easy to miss a zero on a busy day, but AI tools can double-check every figure.

3. Client Communication with a Digital Touch

AI chatbots and virtual assistants are now handling the flood of client questions that once buried law firm inboxes. These bots answer FAQs (like “When’s my mediation?” or “What do I need for a custody hearing?”) instantly—at midnight if needed. They draft polite, factual responses without the emotional baggage, which frankly, is refreshing in high-conflict divorces.

For co-parenting communication, tools like ToneMeter even analyze the tone of messages to help parties keep it civil. Yes, your ex’s snarky email might get “ToneMetered” into something the judge won’t frown at.

4. Financial Analysis & Discovery

AI isn’t just for clerical work. Advanced platforms are diving into forensic accounting tasks—analyzing tax returns, business valuations, and hunting for hidden assets. In cases where one party “forgets” to disclose crypto wallets or side businesses, AI can help connect the dots faster than traditional methods.

5. Predictive Analytics & Case Strategy

One of the most exciting (and slightly intimidating) applications of AI is predictive analytics. Platforms like Lex Machina and Pre/Dicta analyze large volumes of litigation data, including judicial tendencies, and can give lawyers a data‑backed forecast of possible outcomes.

Want to know if your judge typically awards spousal support above guideline recommendations? There’s AI for that. These insights help attorneys fine-tune negotiation strategies and set realistic expectations for clients.

The Ethics of AI in Divorce (Or, “Don’t Worry, I’m Still the Lawyer”)

With all this technology buzzing in the background, you might wonder: is AI running my case? The answer is a firm no. The American Bar Association and California State Bar have made it clear: AI is a tool, not a substitute for legal judgment.

Attorneys must supervise AI outputs, protect client confidentiality, and ensure all filings meet professional standards. AI might draft a motion, but a real, licensed human (me) is responsible for reviewing, correcting, and filing it.

Also, AI sometimes “hallucinates”—it might invent a legal citation or misread a document. Remember those New York lawyers who got sanctioned for submitting fake cases from ChatGPT? That’s why lawyers need to remain the gatekeepers.

What This Means for You, the Client

For my clients, AI isn’t some cold, robotic overlord. It’s the reason your emails get answered faster, your documents are reviewed more thoroughly, and your case moves along with fewer delays. It means I can spend more time strategizing for your custody hearing and less time manually cross-referencing bank statements.

It also means that even solo and small family law firms can provide Big Law-level efficiency—without charging Big Law fees.

The Bottom Line: AI Is Here to Help (But I’m Still Driving)

AI is revolutionizing divorce law—but it’s not replacing the human side of what we do. Empathy, judgment, and experience are still irreplaceable. Technology handles the grunt work so I can focus on the hard stuff: advocating for you, negotiating fair outcomes, and helping you navigate one of life’s most challenging transitions.

So, next time you hear about AI in divorce cases, don’t picture a robot lawyer in a suit. Picture your human attorney—armed with smarter tools, sharper data, and maybe, finally, a little less caffeine-induced panic.

Want to know how AI could streamline your divorce case? Contact us for a consultation. No robots will answer (but they might help me prep for our meeting).

Tech Meets Tension: How AI Is Changing Divorce in California

Let’s be honest: divorce is already hard enough without also having to figure out how to fill out your financial disclosure forms while sobbing into your coffee. That’s why it’s no surprise that people are starting to turn to artificial intelligence for help—because if a chatbot can plan your vacation, maybe it can also explain how to divide your retirement accounts.

As a California divorce attorney, I’ve seen more and more clients using AI tools to stay organized. Some folks use it to help write declarations for court. Others use it to summarize years of texts with their ex (which they swear will prove emotional abuse). One client used ChatGPT to generate a “sample” custody schedule—it wasn’t terrible, though it did suggest alternating weekends and Thursdays, which sounded more like a dinner reservation system than a parenting plan.

And to be fair, AI can be helpful. It can draft, organize, calculate, even remind you that yes, you do have to list your Coinbase account on your disclosures. It’s like a robot paralegal—but without the judgmental sigh when you hand in your documents two weeks late.

But here’s the thing: divorce isn’t just paperwork. It’s strategy. It’s judgment. It’s law. No algorithm—at least not yet—can tell you whether to settle or fight, or how the judge is likely to rule on your custody modification request. That’s where having a lawyer who actually understands Family Code section 4320 (and maybe also how to spot a narcissist) comes in.

Then there’s the privacy piece. California has some of the strongest digital privacy laws in the country—like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA). If you’re feeding sensitive information into an AI tool, you may be waiving confidentiality protections without even knowing it. So before you ask ChatGPT to “write me a declaration explaining why I should get the house,” ask yourself: is this something I’d want floating around in a training database?

All of this is to say: AI is here, and it’s changing the way we approach divorce. It can save time, money, and a little bit of your sanity. But it can’t replace legal advice, or common sense. So go ahead and use the tools—but make sure you’ve also got a real person in your corner who knows the law, the court, and how to navigate the messy emotional side of ending a marriage.

And if your ex walks into mediation waving an AI-generated parenting plan? Don’t panic. Just send it to me. We’ll run it through a real filter—one that includes legal knowledge, experience, and maybe a strong cup of coffee.


Divorce Meets the Cloud: How California Privacy Laws Complicate a Breakup

In my divorce practice, I’ve come to expect the usual issues—custody battles, financial disclosures, and the occasional argument over who gets the Peloton. But lately, I spend just as much time talking about iCloud access, shared GPS apps, and whether Alexa heard something useful.

The truth is, divorce in California now comes with a digital layer—and it’s not always easy to peel back. Thanks to the California Consumer Privacy Act (CCPA), codified at California Civil Code §§ 1798.100–1798.199.100, and its expansion under the California Privacy Rights Act (CPRA) (which amended and extended the CCPA effective January 1, 2023), people now have strong rights over their personal data. You can request what data a business has collected about you, ask for it to be deleted, and opt out of its sale. CPRA also gives you the right to correct inaccuracies and to restrict how companies use “sensitive personal information,” like precise geolocation or health data.

All of this is great for individual privacy. But in a divorce? It’s complicated.

I’ve had clients try to bring in everything from Nest camera clips to shared calendars to prove a point. And look, I get it—when you’re hurt or frustrated, your instinct is to gather everything. But just because it’s on your phone doesn’t mean it’s automatically usable in court. If you accessed it without permission—or if it involves your kids—there are legal boundaries you can’t cross, even if your ex “deserves it.”

And yes, people are using AI now too. I’ve seen clients run all their old texts through a chatbot to catch inconsistencies or summarize arguments. Some of it is helpful, but the court still expects actual, authenticated evidence—not a digital vibe check.

What’s clear is that the breakup process today isn’t just emotional—it’s technical. We’re not just dividing homes and parenting time. We’re untangling shared logins, navigating cloud storage, and figuring out who has the right to see what.

So if you’re in the middle of a California divorce, my advice is this: before you start combing through your ex’s digital footprint or screen-recording your co-parenting app, pause. Talk to your lawyer. Understand what’s fair game and what’s a privacy violation.

Because in today’s world, how you gather evidence matters just as much as what you find.

An Introduction to Data&Divorce

The legal landscape is changing—and fast. From digital privacy concerns in custody battles to the growing role of AI in legal decision-making, family law is facing unprecedented transformation.

Data&Divorce is a new blog by Amrit Kullar, CIPP/E, CIPP/C, CIPP/US, Esq. — a California family law attorney and certified privacy professional. This platform is dedicated to exploring how data, AI, and digital rights are transforming the practice and impact of family law.

Whether you’re a legal practitioner, privacy advocate, technologist, or parent navigating modern litigation, this blog will offer grounded insights, policy analysis, and forward-thinking commentary to help you stay ahead of the curve.

Stay tuned for content at the cutting edge of family law in the digital age.