The Robot Speaks: AI-Generated Evidence and the Modern Rules of Admissibility

Artificial intelligence tools – from chatbots to co-parenting apps – now generate text and audio that sound uncannily human, yet our evidentiary rules still presume a human speaker behind every statement. This article examines whether statements “authored” or heavily reshaped by AI are admissible in court, and if so, under what theory.

Modern AI can compose emails, transcribe meetings, and even “filter” the tone of co-parent communications to reduce conflict. But when those AI-mediated statements end up as evidence, courts face a quandary: Does the utterance have a declarant in the eyes of the law? If not, can it be hearsay at all? If yes, who is “speaking” – the human user, the programmer, or the algorithmic robot? As we’ll see, the answers are evolving, raising novel questions about hearsay exceptions, authentication of machine outputs, and the practical impact on family law disputes (with a dash of wit along the way).

The Threshold Question: Is AI a Declarant?

Rule 801(b) of the Federal Rules of Evidence defines a declarant as “the person who made the statement.” On its face, this excludes non-humans. Traditionally, courts have treated purely machine-generated data as non-hearsay, precisely because no person made a “statement.” A classic example: a GPS log or breathalyzer printout isn’t hearsay – it’s the output of an apparatus, admissible once properly authenticated, since a computer isn’t a “person” capable of an assertion. In evidence law lore, only a human can be the declarant of a statement.

Enter generative AI. Today’s AI systems (think Alexa, Siri, or ChatGPT) blur the line by producing human-like statements in natural language. For instance, a co-parenting app might automatically rephrase an angry text into polite language before delivering it. If Parent A types “You’re always late and irresponsible!” but the app’s AI tone-mediation feature delivers a toned-down version to Parent B (“I feel frustrated when you’re late”), who is the declarant of the delivered message? Parent A, who originated the content? The AI developer, whose algorithm chose the gentler words? Or no one at all? Our evidentiary framework has never had to ask “who” is speaking when the “speaker” is a piece of code.

One useful analogy is a human interpreter. Courts often treat an interpreter as a “language conduit” for the real human declarant, so long as the translation is reliable. In such cases, the translated statement is attributed to the original speaker, not the interpreter, and doesn’t add a hearsay layer. By analogy, if an AI merely translates or faithfully paraphrases a person’s statement (e.g. converting Spanish to English, or converting a verbal rant into milder wording without changing substance), one could argue the person remains the declarant – the AI is just an instrument. In that view, the output would be treated like the person’s own statement (and potentially admissible as, say, a party’s admission or prior statement, if applicable). However, this assumes the AI is truly accurate and neutral. Unlike a sworn interpreter, an AI might subtly (or not so subtly) alter meaning or omit context. If the “translation” isn’t faithful, it’s as if the interpreter went rogue – and now we have an unreliable paraphrase with no human to cross-examine.

Another analogy might be a transcription device. If a voicemail is transcribed by software, the text is simply a new form of the same human statement. But generative AI goes beyond mere transcription – it can inject its own phrasing or even hallucinate facts. So the simple conduit theory breaks down if the AI is doing more than mechanically relaying human input.

Some legal scholars have mused about corporate “collective intent” or organizational voice as an analogy: corporations speak through many humans; perhaps an AI’s output reflects the collective input of its human trainers or programmers. Indeed, AI outputs are shaped by human-created training data and algorithms – essentially the echo of many human voices. But attributing an AI statement to a particular person (or group of people) is usually speculative. If an AI chatbot spouts a defamatory sentence learned from no single identifiable source, it’s hard to pin that on any one human declarant. In effect, the AI itself “spoke,” yet legally it’s not a person. This is a square peg for the round hole of Rule 801(b).

Courts are just beginning to wrestle with this threshold question. One federal court noted bluntly: “Only a person may be a declarant and make a statement,” so a machine-generated output by itself cannot be hearsay. In United States v. Washington (4th Cir. 2007), data from diagnostic machines were deemed not to implicate hearsay or the Confrontation Clause because no human made an out-of-court assertion. Similarly, the Tenth Circuit found computer-generated records, created without human intervention, fell outside Rule 801 altogether. These cases involved things like lab results and ATM logs – far from Siri chattering or an AI co-parenting assistant rewriting messages. Still, they set a baseline principle: if a computer is the “speaker” and operates autonomously, the law’s current inclination is “no person, no hearsay”.

That principle, however, can lead to puzzling outcomes. It made sense when applied to breathalyzers and radar guns – tools that passively record measurements. But it becomes murkier when AI generates fluent sentences that look like statements of fact or opinion. Is the law prepared for a future where an AI “describes in lurid detail what it heard at a meeting”? The answer is in flux. If the AI’s output is entirely free of human input, courts may treat it as a non-hearsay machine statement (subject only to authentication). But when AI is intertwined with human communication – rephrasing a person’s words or drawing on human-supplied prompts – courts may need to decide “who’s really talking.” Is the human user the declarant, with AI as their megaphone? Or is it no declarant at all, leaving a kind of evidentiary ghost in the machine?

At the threshold, then, AI forces us to reconsider the definition of a “statement” and a “person” under hearsay rules. As one commentator put it, AI blurs the line between human testimony and mechanical output. The rules assumed a world of persons, but the robots have entered the chat.

Hearsay and Its (Inapplicable?) Exceptions

Suppose a piece of AI-generated content is offered in court to prove the truth of something it asserts. If no human declarant is behind it, can the hearsay rule even apply? By the letter of Rule 801, it wouldn’t be hearsay at all – an out-of-court statement must be made by a declarant (a person). This loophole has led some litigators to argue that AI outputs sidestep hearsay objections entirely. Indeed, one court observed that because the “programs make the relevant assertions, without any intervention or modification by a person,” the hearsay rules simply don’t apply. In practice, that means an AI-generated report or conversation could be admitted for its truth even though no human can be cross-examined about it.

Before we celebrate our new robot witnesses, consider why hearsay is generally barred: we distrust out-of-court assertions unless we can test the speaker’s perception, memory, narration, and sincerity. With AI, there’s no conscious perception or memory, just algorithmic processing – and certainly no oath or cross-examination. Treating AI outputs as non-hearsay bypasses those safeguards, potentially letting in evidence that has not been subject to any human truth-testing. A chatbot’s authoritative-sounding statement might be riddled with errors or “hallucinated” facts, yet a jury might credit it as if it were a neutral computer oracle. The risk is that we give undue weight to what the robot said, precisely because it feels like machine-generated data (historically seen as objective), even though generative AI can be dead wrong or biased.

What if, on the other hand, a court views the AI output as essentially a human statement in disguise? For example, imagine an AI platform that summarizes user reviews and the summary is offered to prove what customers thought. The summary’s content ultimately originates from humans (the reviewers), so hearsay is lurking beneath the AI veneer. We might have a double-hearsay scenario: human statements filtered through AI. Each layer would need an exception or exemption. Perhaps the underlying human statements are admissible under an exception (say, present sense impressions or party admissions, depending on context), and the AI’s summarization is treated as a sort of translation (as discussed earlier) rather than an independent assertion. But these analyses get very complex, and few courts have tackled them head-on yet.

Let’s assume for a moment that an AI-generated statement is considered hearsay. Could it fit any of the familiar exceptions in Rules 803 and 804? Many exceptions are premised on human qualities that AI lacks. Take the excited utterance exception: a statement made in the heat of excitement about a startling event is considered reliable because humans don’t have time to lie under stress. An AI, however, doesn’t get excited (no cortisol rush in silicon chips). A present sense impression requires a declarant describing an event as it happens – again, it assumes human perception and spontaneity. AI has no “senses” or contemporaneous consciousness of an event, apart from what its programming provides. Dying declarations? An AI isn’t exactly aware of impending death (unless your laptop’s battery is at 1%, but that’s a stretch). In short, the classic hearsay exceptions (excited utterance, present sense, state of mind, etc.) don’t map neatly onto machine speech, because they rely on human emotional or sensory conditions to guarantee trustworthiness.

What about treating an AI output as a business record? Rule 803(6) allows records of a regularly conducted activity if they were made at or near the time by someone with knowledge, as part of a regular practice, and nothing indicates a lack of trustworthiness. One could argue that an automated system’s outputs (say, an AI-generated customer service chat transcript or a financial analysis produced daily by AI) are business records if the business regularly relies on them and a qualified witness can explain how they’re generated. Indeed, some courts have indicated that machine-generated logs can be admitted as business records or simply as non-hearsay records of a process. However, even here, a human custodian must lay the foundation that the process is reliable. If pushed, the custodian might have to testify about the algorithm’s accuracy and the method of its creation – not a simple task if the AI is a complex, evolving system. Furthermore, the business records exception assumes the entrant of the data had a duty to report accurately (or at least no incentive to lie). An AI has no “incentive” at all, but also no sense of accuracy or truth – it just generates content based on patterns. Using 803(6) might be possible, but expect vigorous arguments over whether the AI’s output truly is a record “kept in the course of a regularly conducted activity” and whether the system is shown to be reliable.

Then there is the residual exception (Rule 807): the catch-all safety valve for hearsay that doesn’t fit elsewhere, available when the statement has sufficient guarantees of trustworthiness and is more probative than other evidence reasonably available. Could an AI’s statement come in under Rule 807? Possibly – if a party can demonstrate the output is highly reliable and necessary. For example, if an AI system independently analyzed thousands of documents and produced a summary that no human could practically produce, a court might entertain 807 if that summary is crucial. But “trustworthiness” is the sticking point. Opponents will highlight that generative AI can fabricate facts out of thin air with complete confidence. Without a human declarant, how do we gauge credibility? Perhaps through technical evidence of accuracy (e.g., showing the AI’s output was verified against known data). The residual exception also requires notice to the other side, so at least everyone would know of the intention to use the AI statement and could prepare challenges to its reliability.

Interestingly, treating AI outputs as non-hearsay because of no human declarant is a double-edged sword. On one hand, it avoids hearsay hurdles and Confrontation Clause issues in criminal cases. On the other, it means the usual assurances of reliability built into hearsay exceptions aren’t automatically available. Some commentators warn that admitting AI “statements” too freely – as if they were just machine readings – could let in false or biased evidence that jurors might take at face value. Unlike a human witness, an AI cannot be cross-examined on what facts it considered or why it responded a certain way. Its “memory” can’t be tested, nor its honesty, because it has neither. As one article wryly noted, you can’t put Alexa or ChatGPT on the stand and ask, “Were you lying or hallucinating when you said this?” – the “robot witness” simply isn’t equipped for our adversarial truth-seeking process.

Courts may eventually develop a new approach, perhaps treating AI-generated statements as a special category. For now, lawyers confronting AI evidence are left analogizing to doctrines that are an imperfect fit. In sum, many AI outputs will technically not be hearsay at all under current definitions, but that very fact might prompt judges to scrutinize them even more under other rules (like authenticity and Rule 403 balancing). After all, if evidence seems unreliable or confusing, a judge can always exclude it when its probative value is substantially outweighed by the danger of unfair prejudice or confusion – hearsay or not. A “robotic assertion” that can’t be trusted is a prime candidate for the chopping block under Rule 403’s common-sense check.

Laying the Foundation: Authenticating the Unauthored

Getting AI-derived material admitted into evidence will often boil down to authentication under Rule 901. Rule 901(a) requires the proponent to produce evidence “sufficient to support a finding that the item is what its proponent claims it is.” Normally, that might mean a witness testifying, “I recognize this as the email I received from X,” or an expert saying, “This photo hasn’t been altered.” With AI outputs, what does it mean to authenticate something that had no human author? What exactly must we show “it is”?

At minimum, the proponent must show that the AI-generated exhibit actually came from an AI process as claimed. This could involve a chain of custody for algorithmic output: documenting how the input was given to the AI, which system or model (and what version) was used, and what output was produced. For example, an attorney wanting to introduce a ChatGPT conversation log as evidence would need to explain, perhaps through a witness or affidavit, the steps taken: “On March 1, 2025, I entered the following prompt... the system (GPT-4, version X) produced the following response, which I saved... this printout is a true and accurate copy of that response.” This is akin to authenticating a printout of an online chat or text message – you need some testimony or certificate of how it was obtained and that it hasn’t been altered.
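To make that record-keeping concrete, here is a minimal sketch, in Python, of what a provenance log entry for a single AI output might contain. Everything in it is illustrative – the `record_ai_output` helper, the field names, and the placeholder model label are assumptions for this article, not features of any actual AI product or evidence rule.

```python
import hashlib
import json
from datetime import datetime, timezone


def record_ai_output(prompt: str, model_name: str, model_version: str,
                     output_text: str, operator: str) -> dict:
    """Build a simple provenance record for one AI-generated output.

    Illustrative only: captures who ran the query, when, against which
    model and version, and a SHA-256 fingerprint of the exact output text
    so later copies can be checked against the original.
    """
    return {
        "operator": operator,                      # person who ran the prompt
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,                  # the vendor's model label
        "model_version": model_version,            # exact version/build, if known
        "prompt": prompt,                          # the input exactly as entered
        "output_text": output_text,                # the response exactly as received
        "output_sha256": hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
    }


# Hypothetical usage: save the record alongside the exhibit itself.
record = record_ai_output(
    prompt="Summarize the attached custody schedule.",
    model_name="example-llm",        # placeholder, not a real product name
    model_version="2025-03-01",      # placeholder version string
    output_text="The schedule alternates weeks between the parents...",
    operator="Paralegal J. Doe",
)
with open("exhibit_provenance.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2)
```

A record like this does not make the output admissible on its own, but it gives a custodian or expert something concrete to testify from when laying the foundation.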

However, unlike a simple chat log, an AI’s output may not be reproducible later. AI models evolve – developers update them, or if it’s a learning system, it might change over time. Even on the same day, many generative AIs produce non-deterministic results (they include a degree of randomness). So a crucial issue is: Can you replicate the result? If two people input the same query into the same AI model, will they get the same answer? Not necessarily. This makes authentication tricky – the opponent can’t just rerun the query to verify the output’s content, especially if the model has been updated or the output depends on context. It’s as if you had a photograph that, when retaken, never looks the same twice.

Courts and rulemakers are actively grappling with these challenges. Notably, in 2024 the U.S. Judicial Conference’s Advisory Committee on Evidence Rules began considering amendments to address AI evidence. One proposed change is to Rule 901(b)(9) (the subsection for authenticating processes or systems) to explicitly account for AI. The draft proposal would require that for AI-generated outputs, the proponent must show the system is “reliable” – not just that it produced an accurate result in one instance. In other words, you’d need evidence describing how the AI was trained and operates, and that it was reliable in generating the particular output at issue. This is a taller order than the current rule, which asks only for a showing that the process or system produces an “accurate result.” Given AI’s complexity, proving reliability might involve expert testimony about the model’s testing, error rates, or perhaps the training data that underlies it.

One New York court has already signaled a cautious approach: it held that due to the “rapid evolution of artificial intelligence and its inherent reliability issues,” a hearing should be held prior to admitting AI-generated evidence, to rigorously test reliability. In effect, the judge treated the AI output a bit like scientific evidence requiring a pretrial Daubert/Frye reliability check. The court wasn’t content to accept the printout at face value; it wanted to probe under the hood first.

When authenticating AI outputs, lawyers might need to address questions like: Which version of the algorithm was used? Software often gets updated – the exact model that generated the content could be deprecated by the time of trial. If you can’t show which version created the statement, how can the court assess its reliability? Additionally, was the AI model properly functioning and not corrupted at that time? This is analogous to showing a breathalyzer was calibrated and working correctly on the test date. With AI, “calibration” might mean showing it was using a known architecture and that its parameters hadn’t been tampered with.

Another factor is provenance of inputs. If the AI output is based on user-provided information (for instance, an AI that summarizes the parties’ conversation), you must authenticate those inputs as well. There could be a chain-of-custody within the chain-of-custody: prove the original data was fed in correctly and the output is truly generated from that data via the algorithm.

Courts might also worry about digital tampering. An AI-generated text is just bits – what if someone manually edited the output after generation? Preservation of metadata becomes important. Just as in other digital evidence, you’d want to preserve logs or metadata that show when the output was generated and that it hasn’t been modified since. For instance, a platform like OurFamilyWizard might be able to provide records showing a message was auto-edited by ToneMeter and the final version was sent at a certain time. If an app is to be a source of evidence, one hopes it retains an audit trail.
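For the “hasn’t been modified since” piece, a proponent could compare the exhibit actually being offered against the fingerprint captured at generation time. The short sketch below assumes the hypothetical `exhibit_provenance.json` record from the earlier example; it illustrates the integrity-check idea only, not the procedure of any real platform.

```python
import hashlib
import json


def exhibit_matches_record(exhibit_path: str, provenance_path: str) -> bool:
    """Check whether the text offered as an exhibit still matches the
    SHA-256 fingerprint captured when the AI output was first generated."""
    with open(provenance_path, "r", encoding="utf-8") as f:
        record = json.load(f)
    with open(exhibit_path, "r", encoding="utf-8") as f:
        exhibit_text = f.read()
    current_hash = hashlib.sha256(exhibit_text.encode("utf-8")).hexdigest()
    return current_hash == record["output_sha256"]


# Hypothetical usage:
# print(exhibit_matches_record("exhibit_A.txt", "exhibit_provenance.json"))
```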

The problem of evolving models also raises a subtle authenticity point: if the same prompt given today yields a different message than it did a year ago, is the exhibit truly what it “purports to be”? It purports to be the AI’s answer at that time. So the proponent must nail down that time-and-version context. This could involve documentation from the AI provider about how the system operated at that time. Yet, AI companies are often secretive (trade secrets and all). The Colorado Lawyer noted that efforts to obtain info to test an AI’s reliability often face resistance from vendors claiming proprietary secrets. So authenticating an AI output might require cooperation from the AI’s maker – or at least an affidavit from a custodian of records at, say, OpenAI or the co-parenting app company, attesting to the output’s generation.

Beyond Rule 901, some have suggested treating AI outputs a bit like expert evidence: after all, an AI’s analysis or summary is kind of an expert opinion without an expert. In fact, the November 2025 Advisory Committee meeting notes hint that any AI output offered (even outside expert testimony) must meet the standard for expert testimony in terms of reliability. This implies that to admit AI-derived information, one might need to satisfy a court that it’s as reliable as a qualified expert’s opinion under Rule 702 (which is a high bar!). While not yet law, it shows the mindset: reliability and intelligibility of process are paramount.

To sum up, authenticating “unauthored” evidence means proving the AI’s identity and trustworthiness. Practitioners should be prepared to: (1) document the exact process that produced the exhibit (inputs, software, outputs), (2) call an expert or custodian who can explain that process, and (3) perhaps demonstrate that the AI was operating in a reliable manner. If there’s any doubt about tampering or if the output seems suspect (think deepfakes or heavily manipulated media), courts may even apply heightened scrutiny or burden shifting – e.g., requiring the proponent to prove authenticity by a preponderance if the opponent raises a credible question of alteration. The age of “plug and play” evidence is over; with AI, it’s “prove and lay (foundation)”.

Practical Implications for Family Law

What do all these lofty evidentiary doctrines mean for the everyday reality of family law, especially in high-conflict divorce and custody cases? Family law is often on the front lines of new tech in court – think text messages, social media posts, parenting apps – and AI is no exception. Here are some concrete implications, particularly for co-parenting communications and evidence of behavior:

  1. Tone-Filtered Communication: Blessing or Curse? Many co-parents now use court-approved apps like OurFamilyWizard (OFW) or TalkingParents to message each other. These platforms create an official record of all communications for later reference by lawyers or judges, which should discourage bad behavior. Features like OFW’s ToneMeter™ take it a step further: using AI, ToneMeter will flag or even help rephrase messages that come off as hostile or emotionally charged. The goal is to prevent inflammatory language – a relief to parents and children alike. But the flip side is sanitized evidence. If a parent inclined to send abusive tirades is consistently toned down by the app, the record ends up showing relatively polite exchanges. In a way, the AI acts like an automatic PR agent, scrubbing out the nastiest parts. Later in court, the abusive parent can appear much more reasonable on paper than they truly were in the moment.
    Consider a scenario: One parent believes the other is verbally abusive. They communicate only through a monitored app. The abusive partner types vile insults, but the app’s AI filter blocks or softens them. The receiving parent might be spared the hurtful words (good for their sanity), but now there’s no documentation of the abuse in the communication log – just a note that a message was blocked, perhaps. This “evidence laundering” means that a pattern of harassment might not be readily demonstrable to a judge. The abusive parent, in essence, gets to have their vitriol and hide it too. The victimized parent could testify, “They write awful things to me,” but without a record or with only milder versions, it can become a he-said/she-said issue.

    Even when messages aren’t blocked entirely, tone-polishing can mask intent. If Alex writes to Jordan: “I can’t wait to see how you screw this up, like always,” and ToneMeter suggests a rewrite that Alex then sends as, “Looking forward to seeing your approach on this, let’s discuss if any issues,” the literal content is now benign. But the intent and hostility behind it have been papered over. A judge reading the exchange might think, “What’s the problem? These seem cordial.” Context is lost. In family law, where courts examine the cooperation and demeanor of parties, an AI-mediated calm tone could mislead the court about how cooperative or contrite a parent truly is. It’s a bit like a notorious curmudgeon hiring a speechwriter to draft only polite statements – the court sees the polished product, not the rough reality.

    None of this is to say tone-filtering is bad – it serves an important purpose in reducing day-to-day conflict. But lawyers should be aware of its evidentiary impact. It may be wise to ask, in discovery or testimony, whether a party’s communications were AI-assisted. If Parent B claims Parent A’s messages always seemed artificially sweet, maybe they were! The judge might consider that in assessing credibility – e.g., if it’s revealed that all of one parent’s messages were run through an AI sugarcoating filter, the court might not take the polite tone at face value.

  2. AI-Mediated Dialogue and Misreading Intent: Beyond tone, AI may mediate communication content in other ways. For instance, an app might automatically summarize lengthy back-and-forth exchanges for easier reading, or prioritize certain messages using AI. This introduces a layer of interpretation that could misrepresent what happened. AI algorithms don’t grasp sarcasm, inside jokes, or subtle threatening undertones. A sarcastic remark like “Sure, you’re Father of the Year” might be summarized as “He acknowledged that you are a good father” – 180 degrees off! If courts or evaluators rely on such AI-curated summaries, they could seriously misread the situation.
    Even without summaries, one party using AI to draft messages can affect how their intent is perceived. A parent who isn’t a native English speaker might use AI to translate or polish their messages, resulting in unusually formal or eloquent notes. The other side might argue “They’re having someone else write their messages – maybe they don’t even understand what ‘they’ wrote,” casting doubt on genuineness. We haven’t quite seen an “I demand proof you actually wrote this sincere apology” motion yet, but it could happen in extreme cases.

  3. Loss of Spontaneity as Evidence: Family disputes often hinge on proving a pattern of behavior (e.g., one parent’s anger issues or contempt for the other). Pre-AI, an angry text replete with expletives could be Exhibit A showing that pattern. If AI coaching and filters remove the expletives and aggressive wording, the “receipts” are cleaner. That could be good (people behave better) or bad (true feelings go underground). It raises a philosophical question: do we want the evidence of bad behavior, or do we want to prevent the bad behavior in the first place? Family courts certainly prefer parents to actually behave better, not just look better on paper. Yet when making decisions like custody, courts also need the truth of family dynamics. AI tools inadvertently create a tension between improving communication and recording genuine behavior.
  4. Authentication and Metadata in App Records: From a practical standpoint, attorneys should not assume that printouts from a co-parenting app will be automatically accepted as evidence. You should obtain proper records (often, these apps offer certified downloadable logs for court). If the app allows message editing or deletion (most claim to preserve everything, but one should verify), you’ll want to know that. And if the app uses AI for anything, find out whether the system logs the original vs. modified messages. For instance, does OFW’s ToneMeter keep a copy of the unedited text before the user hits send with the changes? If not, the original words may be gone forever. If it does, a subpoena might retrieve them – potentially revealing what a party wanted to say before self-censoring. That could be very interesting evidence (imagine a printout showing: Original draft: “You are a @$%#&!!” Edited version sent: “I’m upset with you.”). However, accessing such data might raise privacy concerns, and, to our knowledge, most apps currently do not make the “pre-edit” text available to the opposing party, only the final message.
  5. Policy Proposals and Best Practices: To address these issues, a few ideas are floating around in legal circles. One is to require transparency in AI-assisted communications. For example, a family court could order that any message modified by an app should be flagged as such (e.g., an asterisk noting “Tone adjusted by software”). This at least alerts the court that what it’s reading isn’t the unfiltered voice of the party. Another idea is metadata retention mandates: family communication platforms could be required (perhaps by statute or court rule) to retain the original content and any AI suggestions in a secure way, so that if needed (say, in an abuse allegation), the data can be reviewed by the court under seal. This is akin to preserving an original and redlined version of a document. (A minimal sketch of what such a retention record might look like appears just after this list.)
    Additionally, lawyers and judges should be educated about these tools. If one side is suspicious that the other’s communications are too perfect, a bit of savvy might uncover AI involvement. Judges might ask, in credibility determinations, “Did you personally write this message to your ex, or did you use assistance?” Such questions may become as routine as asking whether a document was edited or a photo filtered.
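As flagged above, here is a minimal sketch of what a retention record under such a mandate might look like: the original draft, the AI’s suggested rewrite, the version actually sent, and a flag the court can see. The `RetainedMessage` structure and its fields are hypothetical – no co-parenting platform is known to expose its data in this form.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class RetainedMessage:
    """Hypothetical per-message retention record for an AI-mediated
    co-parenting platform. Fields are illustrative, not a real schema."""
    sender: str
    recipient: str
    original_draft: str                  # what the parent actually typed
    ai_suggested_rewrite: Optional[str]  # the tone-softened suggestion, if any
    final_sent_text: str                 # what the other parent received
    tone_adjusted: bool                  # flag visible to the court
    sent_at_utc: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


# Hypothetical example mirroring the Alex/Jordan exchange discussed earlier:
msg = RetainedMessage(
    sender="Alex",
    recipient="Jordan",
    original_draft="I can't wait to see how you screw this up, like always.",
    ai_suggested_rewrite="Looking forward to seeing your approach on this.",
    final_sent_text="Looking forward to seeing your approach on this, let's discuss if any issues.",
    tone_adjusted=True,
)
```

The design choice mirrors the tension discussed above: retaining the original draft serves the court’s truth-finding, even as the filter spares the recipient from reading it in the moment.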

Finally, we can’t ignore the converse scenario: what if a parent tries to introduce an AI-generated fake – like a bogus text purportedly from the other parent? Deepfake text and images are a concern in all areas of law, including family cases. Parties could potentially attempt to manufacture an incriminating chat log with AI. This again underscores the need for authentication vigilance: examine metadata, get app records directly from the provider when possible, and use experts if there’s any doubt. The technology cuts both ways – it can fabricate evidence as easily as it can sanitize it.

In conclusion, AI is rapidly changing how we communicate and, by extension, the evidence that emerges from those communications. The rules of evidence were not drafted with robot authors or algorithmic interpreters in mind, and it shows. Courts and practitioners must adapt, applying old principles to new technology in a way that preserves fairness. We’ve seen that under current rules, many AI-generated statements simply fall outside hearsay definitions, yet their reliability can’t be assumed. Authenticity and accuracy become the battleground – expect more mini-trials on whether an AI’s output is what it purports to be (and does what it purports to do). In the family law arena, judges may need to take AI-influenced evidence with a grain of silicon, being mindful of what might have been lost or altered in translation by the well-meaning robot intermediary.

As one commentator aptly noted, admitting machine-generated evidence forces us to ask basic questions anew: “What is a statement? And who – or what – is a declarant?” Right now, the robot may speak, but the law is still learning how to listen. In time, our evidence rules will likely evolve to explicitly address AI’s role, perhaps treating AI outputs neither purely as human statements nor as infallible machine readings, but as a new category of evidence requiring its own safeguards. Until then, lawyers in the trenches (like this humble divorce lawyer in Chicago) must tread carefully. Use AI’s help to write a witty LinkedIn post, sure – but when it comes to the courtroom, be ready to show the provenance of every pixel and byte. The robots may not get the last word in court, but they’re certainly going to have their say.

Sources:

Hope E. Newkirk, Can Algorithms Be Declarants? The Future of Hearsay in the AI Landscape, The Advocate’s Advantage (June 2, 2025)

Allen Waxman et al., Proving Admissibility of AI Outputs Centers on Authenticity, Bloomberg Law (Feb. 25, 2025)

Paul Spadafora, Your AI Can Testify Against You: The AI Revolution Comes to the Court of Law, Lasher Holzapfel Sperry & Ebberson Blog (Feb. 15, 2023)

The Algorithmic Family: How AI Is Rewriting the Rules of Family Law, Colorado Lawyer (Sept./Oct. 2024)

Shea Denning, Is the Translation or Interpretation of Another’s Statements Hearsay?, UNC School of Gov’t Blog (Dec. 7, 2011)

The Hidden Risk of Using AI Like ChatGPT for Co-Parenting, BestInterest.app Blog (July 7, 2025)
