Executive Summary
The burgeoning field of Artificial Intelligence (AI) faces a regulatory inflection point that threatens the very foundation of its development model: the data. For nearly two decades, the technology sector operated under a regime where data privacy violations were penalized primarily through monetary fines. While these fines occasionally reached record-breaking sums, they were frequently absorbed as operating costs by well-capitalized firms, leaving the underlying technological assets—the algorithms trained on illicitly acquired data—intact. This era of "regulatory overhead" is rapidly ending, replaced by a far more draconian enforcement doctrine known as "algorithmic disgorgement" or model destruction.
This report provides an exhaustive analysis of this emerging landscape, synthesizing enforcement trends from the U.S. Federal Trade Commission (FTC), European Data Protection Authorities (DPAs), and U.S. State Attorneys General. The research identifies a unified regulatory theory gaining global traction: the "fruit of the poisonous tree." Under this doctrine, AI models derived from data collected via deception, unfair practices, or insufficient consent are deemed contraband. The remedy is not merely the deletion of the data, but the complete destruction of the models, algorithms, and "work product" derived from them.
We analyze seminal enforcement actions, including In the Matter of Everalbum, FTC v. Rite Aid, and the global regulatory siege against Clearview AI, to demonstrate that disgorgement is no longer a theoretical threat but a standard enforcement tool. The report further examines the divergence in penalties, contrasting the FTC's structural remedies with the massive financial settlements seen in state-level actions like Texas v. Meta.
Crucially, this report addresses the profound disconnect between legal mandates and technical reality. "Machine unlearning"—the process of surgically removing specific data influence from a trained model—remains a nascent field fraught with challenges, including "catastrophic forgetting" and model degradation. Consequently, the current regulatory demand for disgorgement often necessitates full model retraining, a process that imposes prohibitive costs in terms of computation, time, and carbon emissions.
Finally, we outline the strategic pivot required for survival. Organizations must transition from reactive compliance to proactive "Data Lineage Architecture," adopting frameworks such as the NIST AI Risk Management Framework (RMF) and ISO 42001. We introduce the concept of the "AI Bill of Materials" (AI-BOM) as a critical defense mechanism, enabling the granular traceability necessary to survive a disgorgement order without total asset liquidation.
1. Introduction: The End of "Move Fast and Break Things"
1.1 The Historical Context of Data Enforcement
To understand the severity of the current regulatory climate, one must appreciate the paradigm shift it represents. Throughout the "Big Data" era of the 2010s, the prevailing regulatory model was "Notice and Consent." Companies were largely free to collect vast troves of personal information provided they disclosed this in a privacy policy—a document rarely read and poorly understood by consumers. When violations occurred, they were typically treated as procedural errors. Regulators would impose a fine, mandate a privacy audit, and order the deletion of the specific data records in question.
This approach created a perverse economic incentive structure for the development of machine learning systems. In traditional software development, code is the primary asset. In machine learning, the model—the learned weights and parameters derived from data—is the asset. If a company illegally collected user data to train a superior recommendation engine or facial recognition system, and was subsequently caught, it might pay a fine and delete the raw logs. However, the intelligence gained from that data remained embedded in the model. The company essentially laundered the ill-gotten data into a clean, profitable algorithm.
1.2 The Rise of Algorithmic Disgorgement
Algorithmic disgorgement dismantles this incentive structure by asserting that the model itself is "ill-gotten gain." The FTC, led by Commissioners increasingly focused on substantive harm rather than procedural compliance, has reinvigorated its Section 5 authority to prohibit "unfair or deceptive acts or practices." The logic is stark: if the data was poisonous, the fruit is poisonous.
FTC Commissioner Rebecca Kelly Slaughter articulated this doctrine explicitly, stating that "when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it". This remedy is not punitive in the traditional sense of a fine; it is restorative. It seeks to return the market to the state it would have been in had the violation not occurred. Since the model would not exist (or would be less capable) without the illicit data, the model must be destroyed.
1.3 The Scope of the Report
This report serves as a strategic manual for navigating this new reality. It is structured to guide legal, technical, and executive stakeholders through the lifecycle of this risk:
- Regulatory Analysis: Detailing the legal theories driving enforcement in the US, EU, and UK.
- Case Law Review: Dissecting the specific settlement orders to understand the triggers for deletion.
- Technical Feasibility: Evaluating the engineering challenges of compliance.
- Operational Defense: Defining the governance structures required to mitigate risk.
2. The Regulatory Framework: Global Convergence on Model Liability
The push for algorithmic disgorgement is not an isolated phenomenon but a convergent trend across major jurisdictions. While the legal mechanisms differ, the functional outcome—the loss of the AI asset—remains consistent.
2.1 United States: The FTC's Unfairness Doctrine
The Federal Trade Commission has become the de facto primary regulator of AI in the United States, utilizing its broad authority under Section 5 of the FTC Act.
2.1.1 Deception as a Trigger
The clearest path to disgorgement is deception. If a company makes a material representation about data privacy—for example, "we will delete your photos when you deactivate your account"—and fails to honor it, the continued use of that data to train a model constitutes a deceptive practice. The FTC views the resulting model as the direct product of that deception. In In the Matter of Everalbum, the deception regarding the use of facial recognition technology was the primary hook for the deletion order.
2.1.2 Unfairness as a Trigger
More aggressively, the FTC is applying its "unfairness" authority to AI deployment. An act is "unfair" if it causes substantial injury to consumers that is not reasonably avoidable and not outweighed by countervailing benefits to consumers or competition. In FTC v. Rite Aid, the Commission argued that deploying a facial recognition system with known bias and insufficient accuracy testing was fundamentally unfair. This expands liability beyond privacy policies to the quality and impact of the model itself. If a model is built on biased data or deployed recklessly, it may be subject to disgorgement regardless of whether the data collection was technically consented to in the fine print.
2.1.3 Children's Privacy (COPPA)
The Children's Online Privacy Protection Act (COPPA) provides a strict liability standard for collecting data from children under 13. The FTC's settlement with WW International (Kurbo) established that algorithms trained on children's data without parental consent are "affected work product" subject to destruction. This poses a massive risk for EdTech and gaming companies where age-gating may be imperfect; a "mixed" dataset containing unidentified minors could poison the entire model.
2.2 European Union: GDPR and the AI Act
In Europe, the route to model deletion is grounded in fundamental rights and rigorous data protection principles.
2.2.1 GDPR Article 17: The Right to Erasure
The "Right to be Forgotten" (Article 17) entitles individuals to have their personal data erased. The contentious legal question is whether a trained model constitutes "personal data." If a model "memorizes" training data (a known phenomenon in Large Language Models) or if its weights can be reverse-engineered to reveal specific individuals (model inversion attacks), regulators argue the model is personal data. Furthermore, if the processing (training) was unlawful ab initio due to a lack of legal basis (Article 6), the remedy is often a ban on processing, which effectively freezes or kills the model.
2.2.2 The EU AI Act: Prohibited Practices
The newly enacted EU AI Act introduces a "product safety" approach. Article 5 prohibits certain AI practices entirely, such as:
- Biometric categorization systems that infer sensitive attributes (race, politics, sexual orientation).
- Untargeted scraping of facial images to build recognition databases.
- AI systems that use subliminal techniques to distort behavior.
For these prohibited systems, the penalty is not just a fine (up to 7% of global turnover) but the mandatory withdrawal of the system from the market. This is a legislative mandate for disgorgement of specific categories of AI.
2.3 United States: State Attorneys General
While the FTC focuses on structural remedies, State Attorneys General (AGs) are leveraging biometric privacy laws to impose existential financial costs.
2.3.1 Texas CUBI and Illinois BIPA
Laws like the Texas Capture or Use of Biometric Identifier Act (CUBI) and the Illinois Biometric Information Privacy Act (BIPA) require informed consent before capturing biometric identifiers. These laws provide for damages or civil penalties on a per-violation basis (under BIPA, $1,000 per negligent violation and $5,000 per intentional or reckless violation). In Texas v. Meta, the state secured a $1.4 billion settlement for the unauthorized tagging of photos. While the settlement did not explicitly order model destruction, the financial penalty serves as a functional equivalent—stripping the profit from the violation and acting as a massive deterrent.
3. Case Study Analysis: The Enforcement Precedents
The trajectory of enforcement reveals a tightening noose around data provenance. The following case studies illustrate the evolution from simple data deletion orders to complex algorithmic destruction mandates.
3.1 Everalbum (2021): The Precedent Setter
The Violation: Everalbum, a photo storage app, offered a "Friends" feature that grouped photos by face. The FTC alleged that Everalbum deceived consumers by enabling facial recognition by default and failing to delete the photos of users who deactivated their accounts, as promised.
The Remedy: The consent order was historic. It required Everalbum to delete not only the illegally retained photos but also "any facial recognition models or algorithms developed with Ever users' photos or videos".
Implications: This case established the "Duty to Delete" work product. It signaled that the FTC understands the value chain of AI: the photos are raw material; the model is the finished good. Destroying the raw material is insufficient if the finished good remains. For AI startups, this highlighted that the valuation of their IP could drop to zero if their training data was found to be tainted.
3.2 WW International / Kurbo (2022): The Poisoned Well
The Violation: The Kurbo app, acquired by WW International (formerly Weight Watchers), helped children track their food intake. The FTC alleged the app collected personal information (names, emails, dietary data) from children under 13 without verifying parental consent, violating COPPA.
The Remedy: The settlement defined "Affected Work Product" as any models or algorithms developed in whole or in part using the illicit data. WW was ordered to destroy this work product.
Implications: This case introduced the concept of the "poisoned well." Even if the children's data constituted only a small fraction of the total training set, its presence tainted the entire model. Because the company likely could not disentangle the specific influence of the children's data from the model's weights, the entire model had to be scrapped. This underscores the catastrophic risk of "mixed" datasets where compliant and non-compliant data are commingled.
3.3 Rite Aid (2023/2024): The High Cost of Unfairness
The Violation: Rite Aid deployed facial recognition technology in hundreds of stores to deter theft. The FTC alleged the system generated thousands of false positives, disproportionately flagging Black, Latino, and Asian consumers as shoplifters. The Commission charged this as an "unfair" practice because Rite Aid failed to test the system's accuracy, failed to train staff, and used low-quality images.
The Remedy:
- 5-Year Ban: Rite Aid was banned from using facial recognition for surveillance for five years.
- Model Deletion: The company was ordered to delete all photos used to train or operate the system and "any data, models, or algorithms derived in whole or in part therefrom".
- Vendor Accountability: Rite Aid was required to instruct its third-party vendors to delete the data and models, highlighting that liability flows up and down the supply chain.
Implications: The Rite Aid case is a watershed for "Algorithmic Fairness." It establishes that incompetence in AI deployment—failing to test for bias or accuracy—is a legal violation that can lead to disgorgement. It also serves as a warning against relying on third-party "black box" vendors without independent validation.
3.4 Avast (2024): The Data Broker Crackdown
The Violation: Avast, a security software company, collected detailed browsing history from users under the guise of antivirus protection and sold this data to third parties via its subsidiary, Jumpshot. The FTC alleged this was deceptive because Avast claimed to protect privacy while monetizing granular user data.
The Remedy: Avast was ordered to delete the web browsing data and "any products or algorithms Jumpshot derived from that data". Additionally, Avast had to inform third-party buyers to delete the data and models, effectively trying to recall the data from the marketplace.
Implications: This case targets the "surveillance capitalism" business model directly. It suggests that security and privacy tools that secretly harvest data for model training or sale will face the harshest penalties. The requirement to chase down third parties adds a complex layer of contractual enforcement to the settlement.
3.5 Texas v. Meta (2024): The Billion-Dollar Deterrent
The Violation: Texas sued Meta under CUBI, alleging that the "Tag Suggestions" feature captured biometric geometry from photos without the requisite informed consent.
The Remedy: A massive $1.4 billion settlement. Unlike the FTC cases, the settlement did not explicitly require model destruction. Meta agreed to capture biometric data only with affirmative consent in the future.
Implications: While model deletion was avoided, the financial penalty is functionally equivalent to the R&D cost of a major model. This indicates that while state AGs may settle for cash, the cost of non-compliance is reaching the GDP of small nations. It also highlights a strategic divergence: Federal regulators want to fix the market structure (disgorgement); State regulators want to penalize the behavior (fines).
3.6 Clearview AI and Worldcoin: The European Wall
Clearview AI: European DPAs (Italy, France, UK) have uniformly declared Clearview's scraping of images illegal. The Italian Garante ordered the erasure of data and a ban on processing. While Clearview has no physical presence in the EU, these orders effectively block market entry and create a legal liability that prevents enterprise customers from using the tool.
Worldcoin: The iris-scanning crypto project faced temporary bans in Spain and Portugal under GDPR Article 66 (Urgency Procedure). These "stop processing" orders halt the data pipeline essential for training and updating the biometric models. The inability to collect data in key markets degrades the model's global applicability and diversity, indirectly damaging the asset.
4. The Engineering Challenge: The Myth of "Delete"
The legal mandate to "delete the model" assumes a reversibility that does not exist in modern deep learning. This section explores the technical chasm between regulatory orders and engineering reality.
4.1 The Mathematics of Influence
In a traditional database, deleting a record is a discrete operation: removing a row from a table. In a neural network, training data is processed via Stochastic Gradient Descent (SGD). Each data point contributes to the adjustment of billions of parameters (weights) in the network. Once the gradient update is applied, the specific contribution of that data point is diffused throughout the entire model structure.
Experts liken this to "removing a strawberry from a smoothie". You cannot simply pick it out; the flavor (influence) is blended into every sip.
- Memorization: Models, especially Generative AI, can memorize training data verbatim. If a model outputs PII, it proves the data is retained.
- Decision Boundaries: Even if not memorized, the data point helped shape the model's decision boundary. Removing the data means the boundary should theoretically shift, but calculating exactly how it should shift without retraining is an unsolved mathematical problem.
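To make this diffusion of influence concrete, the toy sketch below (an illustrative assumption, not drawn from any cited case or codebase) trains the same tiny logistic model twice, once with and once without a single record, and compares the resulting weights. The point is that the record's contribution is spread across every parameter, so there is no localized "row" to delete after the fact.

```python
# Toy illustration: a single record's influence spreads across every weight.
# The data and model are hypothetical; only numpy is assumed.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                              # 200 records, 5 features
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

def train(features, labels, epochs=200, lr=0.1):
    """Plain batch gradient descent on logistic loss."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-features @ w))            # predictions
        grad = features.T @ (p - labels) / len(labels)      # gradient of log-loss
        w -= lr * grad                                      # every weight moves each step
    return w

w_full = train(X, y)                  # trained on all records
w_minus_one = train(X[1:], y[1:])     # trained without record 0

# The weight vectors differ in every coordinate (by a small but nonzero amount):
# record 0's influence is diffused everywhere, so "deleting" it has no
# localized counterpart inside the trained model.
print(np.abs(w_full - w_minus_one))
```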
4.2 The Binary Choice: Retraining vs. Unlearning
When faced with a disgorgement order, organizations have two technical pathways: retrain the model from scratch on a cleansed dataset, or attempt "machine unlearning" to excise the offending data's influence from the existing weights. Only the first is currently legally safe, because unlearning techniques lack an accepted standard for proving to a regulator that the influence is truly gone.
4.3 Technical Approaches to Compliance
Research into machine unlearning is accelerating, driven by these regulatory pressures.
4.3.1 SISA (Sharded, Isolated, Sliced, Aggregated)
In a SISA architecture, the training data is partitioned into disjoint shards, each shard trains its own isolated constituent model, and predictions are produced by aggregating the constituents' outputs.
- Benefit: If data needs to be deleted, only the specific sub-model trained on that shard needs to be retrained. This reduces retraining costs by a factor equal to the number of shards.
- Drawback: Can reduce overall model accuracy compared to training on the monolithic dataset.
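A minimal sketch of the SISA idea follows, assuming a generic scikit-learn-style classifier as each constituent; the shard count, aggregation rule, and deletion helper are illustrative assumptions rather than the published reference implementation.

```python
# Sketch of SISA-style sharded training: delete a record by retraining one shard.
# LogisticRegression stands in for any constituent model; shapes are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

N_SHARDS = 6
shards = np.array_split(np.arange(len(X)), N_SHARDS)      # disjoint index shards

def fit_shard(idx):
    return LogisticRegression().fit(X[idx], y[idx])

models = [fit_shard(idx) for idx in shards]                # isolated constituents

def predict(batch):
    votes = np.array([m.predict(batch) for m in models])   # aggregate by majority vote
    return (votes.mean(axis=0) >= 0.5).astype(int)

def unlearn(record_id):
    """Remove one record: only the shard containing it is retrained."""
    for s, idx in enumerate(shards):
        if record_id in idx:
            shards[s] = idx[idx != record_id]
            models[s] = fit_shard(shards[s])
            return s                                        # cost ~ 1/N_SHARDS of a full retrain

affected = unlearn(42)
print(f"retrained shard {affected}; the other {N_SHARDS - 1} constituents are untouched")
```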
4.3.2 Differential Privacy (DP)
Training with Differential Privacy involves adding noise to the gradients during the training process.
- Benefit: It provides a mathematical guarantee that the model's output is not significantly influenced by the presence or absence of any single individual.
- Drawback: Often results in a utility trade-off (lower accuracy) and requires significantly more compute to converge.
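The sketch below shows the core DP-SGD recipe (per-example gradient clipping followed by Gaussian noise) on the same kind of toy logistic model; the clipping bound, noise multiplier, and learning rate are illustrative hyperparameters, and a production system would rely on a vetted library rather than hand-rolled code.

```python
# Sketch of the DP-SGD recipe: clip each example's gradient, then add noise.
# C (clipping bound) and SIGMA (noise multiplier) are illustrative values.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 4))
y = (X[:, 0] > 0).astype(float)

C, SIGMA, LR, EPOCHS = 1.0, 1.1, 0.1, 50
w = np.zeros(X.shape[1])

for _ in range(EPOCHS):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (p - y)[:, None] * X                     # log-loss gradient per record
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)     # bound each record's influence
    noise = rng.normal(scale=SIGMA * C, size=w.shape)            # noise masks any single record
    w -= LR * (clipped.sum(axis=0) + noise) / len(X)

# The privacy/utility trade-off shows up here as noisier, slower convergence.
print(w)
```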
4.3.3 Teacher-Student Distillation
This involves training a "Student" model to mimic the behavior of the original "Teacher" model, but using a dataset that excludes the toxic data. This can be faster than full retraining but still requires access to the clean dataset and substantial compute.
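As a rough illustration of that distillation flow, the sketch below has a student fit only the teacher's soft predictions on a clean dataset; the teacher weights, dataset, and training loop are hypothetical stand-ins, not any party's actual pipeline.

```python
# Sketch of teacher-student distillation: the student never sees the toxic data,
# only the teacher's soft predictions on a clean, fully consented dataset.
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical teacher weights, trained earlier (possibly on tainted data).
teacher_w = np.array([1.5, -0.7, 2.0])

X_clean = rng.normal(size=(500, 3))            # licensed / consented records only
soft_targets = sigmoid(X_clean @ teacher_w)    # teacher's probabilities, not its training data

student_w = np.zeros(3)
for _ in range(300):
    p = sigmoid(X_clean @ student_w)
    grad = X_clean.T @ (p - soft_targets) / len(X_clean)   # match the teacher's outputs
    student_w -= 0.5 * grad

print(student_w)   # approximates the teacher's behaviour, learned only via clean inputs
```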
4.4 The Right to be Forgotten in GenAI
The UK ICO and other regulators are grappling with whether GenAI models can ever truly "forget." The ICO's recent consultation suggests that while unlearning is technically difficult, this difficulty is not a defense for non-compliance. If a model cannot comply with an erasure request (Article 17), its continued use may be unlawful. This pushes the industry toward "Retrieval-Augmented Generation" (RAG) systems, where the "knowledge" is stored in a vector database (which is easy to edit/delete) rather than baked into the model's weights.
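The appeal of RAG for erasure requests is that deletion becomes a discrete database operation rather than a model surgery problem. The sketch below uses a toy in-memory vector store with a hypothetical embedding function to show the shape of that workflow; a real system would use an actual embedding model and vector database.

```python
# Sketch of why RAG eases erasure: the "knowledge" lives in an editable store,
# not in model weights. The embedding function and store are illustrative stand-ins.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16)

# Vector store: id -> (embedding, source document)
store = {
    "doc-1": (embed("user A's support ticket"), "user A's support ticket"),
    "doc-2": (embed("public product manual"), "public product manual"),
}

def retrieve(query: str, k: int = 1):
    q = embed(query)
    scored = sorted(store.items(), key=lambda kv: -float(q @ kv[1][0]))  # dot-product similarity
    return [doc for _, (_, doc) in scored[:k]]

def erase(doc_id: str):
    """An Article 17 request becomes a discrete delete; the generator is never retrained."""
    store.pop(doc_id, None)

erase("doc-1")             # user A invokes the right to erasure
print(retrieve("ticket"))  # user A's data can no longer be surfaced as context
```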
5. Operational Strategy: The Compliance Shield
Given the existential nature of the risk and the difficulty of the technical cure, the only viable strategy is rigorous prevention. Organizations must adopt a "defense-in-depth" approach to AI governance.
5.1 The NIST AI Risk Management Framework (AI RMF)
The NIST AI RMF is the emerging standard for demonstrating due diligence. Implementing it can serve as a mitigating factor in enforcement actions.
- GOVERN 1.1: Document legal/regulatory requirements. This means explicitly assessing the "disgorgement risk" of every dataset.
- MAP 5.2 (Data Lineage): This is the most critical control. Organizations must map the flow of data from acquisition to model training.
- Requirement: You must be able to answer: "Which version of the model was trained using Batch #402 of the scrape?"
- Implementation: Metadata tagging of every training run with the exact hash of the training dataset (see the sketch after this list).
- GenAI Profile: NIST's specific guidance on GenAI emphasizes managing the risks of "poisoned" data and third-party foundation models. It recommends "pre-deployment testing" for bias and data leakage, which aligns directly with the failures cited in the Rite Aid case.
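The sketch below illustrates that MAP 5.2-style lineage tagging: each training run is logged with a content hash of the exact dataset version that fed it. The file layout, registry format, and field names are illustrative assumptions, not part of the NIST framework itself.

```python
# Sketch of lineage tagging: every training run records the content hash of the
# dataset that fed it, so "which models touched this batch?" is a lookup, not a guess.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def dataset_fingerprint(files: list[Path]) -> str:
    """SHA-256 over the concatenated file contents, in sorted path order."""
    digest = hashlib.sha256()
    for f in sorted(files):
        digest.update(f.read_bytes())
    return digest.hexdigest()

def record_training_run(model_version: str, data_dir: str, registry: str = "runs.jsonl"):
    files = sorted(Path(data_dir).glob("**/*.csv"))   # hypothetical dataset layout
    entry = {
        "model_version": model_version,
        "dataset_hash": dataset_fingerprint(files),
        "dataset_files": [str(f) for f in files],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(registry, "a") as fh:                   # append-only lineage registry
        fh.write(json.dumps(entry) + "\n")
    return entry

# When a regulator asks which model versions were trained on a contested batch,
# the registry can be searched for that batch's hash instead of relying on memory.
```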
5.2 ISO 42001: The Global Standard
ISO 42001 provides a certifiable standard for AI Management Systems.
- Annex A.7 (Data): Requires controls for data acquisition, quality, and provenance.
- Documentation: It mandates the retention of records detailing how data was prepared. In a regulatory investigation, being able to produce ISO-compliant documentation proving you vetted the data source could be the difference between a fine and a deletion order.
5.3 The AI Bill of Materials (AI-BOM)
Cybersecurity has the "Software Bill of Materials" (SBOM) to track vulnerabilities in code libraries. AI needs the AI-BOM to track liabilities in data.
An AI-BOM should list:
- Data Sources: Origin, license type, collection date, consent mechanism.
- Preprocessing: How data was cleaned, anonymized, or filtered.
- Model Architecture: Frameworks, hyperparameters.
- Training Lineage: Which data versions fed into which model checkpoints.
Strategic Value: If a regulator alleges that "Dataset X" is illegal, an organization with an AI-BOM can instantly identify that only "Model V2.1" used that dataset, while "Model V2.0" and "Model V2.2" are clean. This allows for a surgical excision of the bad asset rather than the destruction of the entire product line.
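As a sketch of how that lookup might work in practice, the snippet below models an AI-BOM as structured records and answers the "which models are exposed?" question from the scenario above; the field names and schema are illustrative assumptions, not a published standard.

```python
# Sketch of an AI-BOM as structured records; field names are illustrative.
# The payoff is the query at the bottom: given an allegation against one dataset,
# identify exactly which model versions are exposed.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    origin: str              # e.g. "licensed vendor", "first-party telemetry"
    license: str
    consent_mechanism: str
    collection_date: str

@dataclass
class ModelRecord:
    version: str
    architecture: str
    data_sources: list[str] = field(default_factory=list)   # names of DataSource entries

bom = {
    "sources": [
        DataSource("Dataset X", "web scrape", "none", "none", "2022-03-01"),
        DataSource("Dataset Y", "licensed vendor", "commercial", "contractual", "2023-01-15"),
    ],
    "models": [
        ModelRecord("V2.0", "transformer", ["Dataset Y"]),
        ModelRecord("V2.1", "transformer", ["Dataset X", "Dataset Y"]),
        ModelRecord("V2.2", "transformer", ["Dataset Y"]),
    ],
}

def exposed_models(dataset_name: str) -> list[str]:
    """Which model versions would a disgorgement order against this dataset reach?"""
    return [m.version for m in bom["models"] if dataset_name in m.data_sources]

print(exposed_models("Dataset X"))   # ['V2.1'] -> surgical excision, not total loss
```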
5.4 Vendor Management: Stopping Contagion
The Rite Aid and Avast cases demonstrate that you cannot outsource liability. If your vendor uses illegal data, your deployment is illegal.
- Indemnification: Contracts must include indemnification for "regulatory enforcement" and "model deletion costs," not just standard IP infringement.
- Contagion Clauses: Contracts should specify that if a vendor is ordered to delete a model, they must provide a "clean" replacement within a set timeframe or face massive penalties.
- Audit Rights: Enterprise buyers must demand the right to audit the vendor's Data Lineage and AI-BOM. "Trust but verify" is the new standard.
6. The Future of AI Development (2025-2030)
As we look toward the latter half of the decade, the regulatory pressure for algorithmic disgorgement will fundamentally reshape the AI industry.
6.1 The End of the Open Web Training Era
The era of scraping the open web with impunity is closing. The legal risks—copyright lawsuits, GDPR violations, and FTC unfairness actions—are becoming too high for enterprise-grade models. We will see a bifurcation of the market:
- "Wild" Models: Open-source or academic models trained on the scraping of the internet, used for research but considered too risky for corporate deployment.
- "Clean" Models: Commercial models (like Adobe Firefly or Getty Images' AI) trained exclusively on licensed, fully consented data. These will command a significant premium because they come with an "insurance policy" against disgorgement.
6.2 Disgorgement as a Standard Remedy
We anticipate that algorithmic disgorgement will become the default penalty for serious privacy violations involving AI. Regulators have realized that fines are insufficient deterrents for companies racing to achieve Artificial General Intelligence (AGI). Threatening the AI asset itself is the only lever powerful enough to force compliance.
6.3 The Rise of Unlearning-as-a-Service
The technical difficulty of unlearning will create a new market sector. Startups will emerge offering "Unlearning-as-a-Service" (UaaS) platforms that plug into enterprise MLOps pipelines. These tools will automate the SISA architecture, handle GDPR erasure requests, and provide "Verification Certificates" to prove to regulators that data influence has been removed.
6.4 Data Markets and Provenance
Data will be treated like a toxic chemical: useful but requiring strict handling protocols. We will see the rise of "Data Provenance" platforms using blockchain or cryptographic signing to track the lineage of every data point from creation to model ingestion. "Data Laundering"—the attempt to wash illegal data through various vendors—will become a focus of criminal enforcement.
7. Conclusion
The regulatory landscape for Artificial Intelligence has shifted from a focus on financial penalties to a focus on structural asset forfeiture. Algorithmic Disgorgement is the new "atomic option" in the regulator's arsenal. The cases of Everalbum, Rite Aid, and Kurbo demonstrate that the FTC is willing and able to order the destruction of valuable AI models built on "poisoned" data.
For the AI industry, this represents an existential risk. A single compliance oversight in the data collection phase can retroactively destroy years of R&D and millions of dollars in compute investment. The technical difficulty of "machine unlearning" means that, for now, the only sure way to comply with a deletion order is to burn the model to the ground and start over.
Survival in this new era requires a fundamental rethinking of AI architecture. It demands the implementation of AI-BOMs, the adoption of NIST AI RMF and ISO 42001 standards, and a rigorous approach to Data Lineage. The winning AI companies of the future will not necessarily be those with the most data, but those with the most defensible data.
Works Cited
- FTC enforcement trends: From straightforward actions to technical allegations - IAPP, https://iapp.org/resources/article/ftc-enforcement-trends/
- Algorithmic Disgorgement: An Increasingly Important Part of the FTC's Remedial Arsenal— AI: The Washington Report | Mintz, https://www.mintz.com/insights-center/viewpoints/54731/2024-01-23-algorithmic-disgorgement-increasingly-important-part
- Everalbum, Inc., In the Matter of | Federal Trade Commission, https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter
- FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology, https://www.ftc.gov/news-events/news/press-releases/2021/05/ftc-finalizes-settlement-photo-app-developer-related-misuse-facial-recognition-technology
- FTC Announces Groundbreaking Action Against Rite Aid for Unfair Use of AI - WilmerHale, https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240111-ftc-announces-groundbreaking-action-against-rite-aid-for-unfair-use-of-ai
- FTC Rite Aid Settlement Offers Key Lessons for Mitigating Risk When Deploying Biometrics or AI Tools | Baker Donelson, https://www.bakerdonelson.com/ftc-rite-aid-settlement-offers-key-lessons-for-mitigating-risk-when-deploying-biometrics-or-ai-tools
- FTC Requires Algorithmic Disgorgement as a COPPA Remedy for First Time, https://fpf.org/blog/ftc-requires-algorithmic-disgorgement-as-a-coppa-remedy-for-first-time/
- FTC COPPA Settlement Requires Deletion of Children's Personal Information, https://www.dwt.com/blogs/artificial-intelligence-law-advisor/2022/ftc-coppa-disgorgement
- Machine Unlearning – A Potential Option for Remedy?, https://www.fbm.com/publications/machine-unlearning-a-potential-option-for-remedy/
- Facial recognition: Italian SA fines Clearview AI EUR 20 million, https://www.edpb.europa.eu/news/national-news/2022/facial-recognition-italian-sa-fines-clearview-ai-eur-20-million_en
- Artificial Intelligence – Q&As - European Commission, https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
- High-level summary of the AI Act | EU Artificial Intelligence Act, https://artificialintelligenceact.eu/high-level-summary/
- The EU AI Act: Prohibited practices and AI literacy requirements take effect - Hogan Lovells, https://www.hoganlovells.com/en/publications/the-eu-ai-act-prohibited-practices-and-ai-literacy-requirements-take-effect
- Attorney General Ken Paxton Secures $1.4 Billion Settlement with Meta Over Its Unauthorized Capture of Personal Biometric Data In Largest Settlement Ever Obtained From An Action Brought By A Single State, https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-secures-14-billion-settlement-meta-over-its-unauthorized-capture
- Texas Wins $1.4 Billion Biometric Settlement Against Meta. It Would Have Happened Sooner With Consumer Enforcement | Electronic Frontier Foundation, https://www.eff.org/deeplinks/2024/07/texas-wins-14-billion-biometric-settlement-against-meta-it-would-have-happened
- FTC Announces Proposed Settlement with App Developer over Alleged Deceptive Practices, https://www.hunton.com/privacy-and-information-security-law/ftc-announces-proposed-settlement-with-app-developer-over-alleged-deceptive-practices
- FTC Sets Its Eye on Algorithms, Automated Tech, and AI-Enabled Applications, https://www.dwt.com/blogs/privacy--security-law-blog/2021/01/ftc-duty-to-delete-ai-algorithm
- FTC Requires Company to Delete Proprietary Algorithm as Penalty for Alleged Misuse of Data - Akin Gump, https://www.akingump.com/en/insights/blogs/ag-data-dive/ftc-requires-company-to-delete-proprietary-algorithm-as-penalty-for-alleged-misuse-of-data
- Weight Watchers/Kurbo: Stipulated Order - Federal Trade Commission, https://www.ftc.gov/system/files/ftc_gov/pdf/wwkurbostipulatedorder.pdf
- Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards, https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without
- Rite Aid: Modified Decision and Order - Federal Trade Commission, https://www.ftc.gov/system/files/ftc_gov/pdf/c4308riteaidmodifiedorder.pdf
- Stipulated Order for Permanent Injunction and Other Relief - Federal Trade Commission, https://www.ftc.gov/system/files/ftc_gov/pdf/DE019-StipulatedOrderforPermanentInjunctionandOtherRelief.pdf
- AI governance and biometric privacy takeaways from the FTC'S Rite Aid settlement, https://www.hoganlovells.com/en/publications/ai-governance-and-biometric-privacy-takeaways-from-the-ftcs-rite-aid-settlement
- Avast | Federal Trade Commission, https://www.ftc.gov/legal-library/browse/cases-proceedings/2023033-avast
- FTC Order Will Ban Avast from Selling Browsing Data for Advertising Purposes, Require It to Pay $16.5 Million Over Charges the Firm Sold Browsing Data After Claiming Its Products Would Block Online Tracking | Federal Trade Commission, https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-order-will-ban-avast-selling-browsing-data-advertising-purposes-require-it-pay-165-million-over
- FTC's Enforcement Action Against Avast Signals Increased Focus on Consumer Web Data, https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240909-ftcs-enforcement-action-against-avast-signals-increased-focus-on-consumer-web-data
- FTC Finalizes Order with Avast Banning it from Selling or Licensing Web Browsing Data for Advertising and Requiring it to Pay $16.5 Million, https://www.ftc.gov/news-events/news/press-releases/2024/06/ftc-finalizes-order-avast-banning-it-selling-or-licensing-web-browsing-data-advertising-requiring-it
- Texas Biometrics Case Highlights Need for Consent: Meta Settles for $1.4 Billion | Insights, https://www.velaw.com/insights/texas-biometrics-case-highlights-need-for-consent-meta-settles-for-1-4-billion/
- What The Latest Clearview AI Judgment Tells Us About Behavioural Monitoring and Big Data Under the GDPR | Insights, https://www.ropesgray.com/en/insights/viewpoints/102lq4y/what-the-latest-clearview-ai-judgment-tells-us-about-behavioural-monitoring-and-b
- Spain Puts Temporary Ban on Worldcoin Due to Privacy Concerns - CPO Magazine, https://www.cpomagazine.com/data-privacy/spain-puts-temporary-ban-on-worldcoin-due-to-privacy-concerns/
- Portugal imposes 3 month data collection ban on Worldcoin - CryptoSlate, https://cryptoslate.com/portugal-imposes-3-month-data-collection-ban-on-worldcoin/
- AI model disgorgement: Methods and choices - PMC - NIH, https://pmc.ncbi.nlm.nih.gov/articles/PMC11067471/
- Explaining model disgorgement - IAPP, https://iapp.org/news/a/explaining-model-disgorgement
- Model Disgorgement: The Key to Fixing AI Bias and Copyright Infringement?, https://blog.seas.upenn.edu/model-disgorgement-the-key-to-fixing-ai-bias-and-copyright-infringement/
- Efficient Two-stage Model Retraining for Machine Unlearning - IEEE Xplore, https://ieeexplore.ieee.org/document/9857498/
- The lawful basis for web scraping to train generative AI models | ICO, https://ico.org.uk/about-the-ico/what-we-do/our-work-on-artificial-intelligence/response-to-the-consultation-series-on-generative-ai/the-lawful-basis-for-web-scraping-to-train-generative-ai-models/
- AI RMF Core - AIRC - NIST AI Resource Center, https://airc.nist.gov/airmf-resources/airmf/5-sec-core/
- Latest NIST Guidance Identifies Generative AI Risks and Corresponding Mitigation Strategies | Davis Wright Tremaine, https://www.dwt.com/blogs/artificial-intelligence-law-advisor/2024/08/new-nist-guidance-on-generative-ai-risks
- Understanding ISO 42001 and Demonstrating Compliance - ISMS.online, https://www.isms.online/iso-42001/
- Global AI Compliance Begins With ISO 42001 — Here's What to Know | WiCyS, https://www.wicys.org/global-ai-compliance-begins-with-iso-42001-heres-what-to-know/
- AI-BOM: Building an AI Bill of Materials - Wiz, https://www.wiz.io/academy/ai-bom-ai-bill-of-materials
- Securing AI Systems Through Transparency: The Critical Role of AI Bill of Materials (AIBOM), https://noma.security/securing-ai-systems-through-transparency-the-critical-role-of-ai-bills-of-materials/
- AI Service Agreements in Health Care: Indemnification Clauses, Emerging Trends, and Future Risks | ArentFox Schiff, https://www.afslaw.com/perspectives/health-care-counsel-blog/ai-service-agreements-health-care-indemnification-clauses
- Protecting customers with generative AI indemnification | Google Cloud Blog, https://cloud.google.com/blog/products/ai-machine-learning/protecting-customers-with-generative-ai-indemnification