"BIRTH OF A MONSTER"
A series of articles on OpenAI and its flagship product, ChatGPT
Introduction

In this series of publications, I intend to take a deep dive into the critical issues surrounding the protection of copyrights and personal data—information you might be carelessly broadcasting across the internet. Unfortunately, with the advent of Artificial Intelligence (AI), keeping anything private has become virtually impossible: neither your personal relationships, nor your correspondence, nor your confidential data. Our lives are now under a constant microscope. We are being analyzed by our habits, our moods, and our worldviews—and this is merely the tip of the iceberg.

The problem runs far deeper: this is not merely an intrusion into our private lives, but a direct attempt to replace us as individuals. We are being depersonalized, "reset" to zero in the most literal sense.
Every human being is born with unique talents, gifted by nature or, if you will, by God, and develops them through life experience: reading books, rigorous training, and—most importantly—through mistakes and self-discovery. The more we stumble and learn, the stronger we become. Without this journey, there would be no "you," no "me," and no humanity in all its diversity. This journey—this path of our Life—is who we are.
However, with the launch of ChatGPT, everything changed. This AI is designed to consume vast amounts of information—not only from public sources but potentially from private ones, including global legal documentation, ChatGPT dialogues, or even stray words dropped on Instagram, Facebook, and other platforms that are later deleted. As a result, the AI digitizes the patterns of every personality: it can easily adapt the tone of a programmer’s email by leveraging their experience, copy an artist's signature techniques, or replicate musical compositions and professional decisions in medicine, law, and other fields. In essence, ChatGPT replaces us—and it does so without our consent.
Furthermore, this is not just a substitution; OpenAI is selling our lives to its paid users, profiting from our essence.
This raises a fundamental question: does OpenAI have the moral and legal right to do this? Does a single corporation have the right to replace you, me, and billions of others? According to the Pareto Principle, only 20% of people—the creators and innovators—drive this world forward, generating 80% of all innovations, from music and journalism to scientific breakthroughs. Our future, and the kind of future that awaits us, depends on them.
But if this 20% ceases to create—or is simply replaced by AI—we will never see new Shakespeares, Einsteins, da Vincis, Teslas, Hawkings, or Archimedeses. Our civilization risks sliding into a "WALL-E" scenario, where humanity degrades aboard a spaceship, stripped of the capacity for true creativity and innovation, and the only way to change life in the future is to "return to Earth," as the film depicts.
Once AI has absorbed all human creativity, a collapse will follow: people will lose the motivation to create anything new. After all, why strive if any thought, invention, or work of yours will be swallowed by ChatGPT and sold to third parties without your consent or compensation? There is a fine line here: on one hand, AI can automate routine tasks—such as automated taxis or robots—simplifying our daily lives and leaving room for creativity. On the other hand, ChatGPT steals intellectual labor, passing it off as the "generation" of new content.
This is a grand deception: OpenAI trains its model on the creations of others and then monetizes those creations, selling them to users as "innovation." Therefore, this is not the generation of independent work—it is substitution. It is textbook parasitism.
It is my deepest conviction that true AI should be based only on publicly available information for automation and must be open-source. It should not be built on the unauthorized use of others' ideas for subsequent sale under its own name. We are standing on a precipice where the line between useful technology and corporate theft is becoming increasingly blurred.
In these publications, I will detail my attempts to influence this situation and defend my rights and the rights of others. We will analyze real cases, legal aspects, and paths of resistance to prevent a single company from monopolizing our collective heritage. Join me if you, too, value your uniqueness in an age of digital cloning.
Chapter 1: "Drop by Drop"
Regrettably, when striving toward a goal, we can influence the key moments of our lives in only three main directions, which I compare to the flow of water. Let us visualize this together to better understand how we move toward our goals in this turbulent stream of reality.
The first—and most difficult—is swimming against the current. Imagine rowing with all your might up a river with a powerful flow. Every stroke requires colossal effort; without exceptional resilience, you will inevitably drift back to your starting point. This is a path where the world’s resistance feels insurmountable: every step forward throws you back unless you possess sufficient will and strength.

The second is swimming in stagnant water, where there is no current. Here, there is neither help nor hindrance—you rely solely on yourself. You may swim a great distance, but if your strength fails, the result will be zero: you risk drowning from exhaustion before reaching the shore. This is a scenario where the lack of dynamics is deceptive—it drains your energy without providing progress.
The third is swimming with the flow. Here, everything works in your favor: the current carries you forward, and the chances of reaching your goal are maximized. Effort is minimal, and the results are tangible. This is the ideal scenario, where circumstances align in your favor. Only by swimming with the flow are you most likely to reach your goal with the least effort and in the shortest possible time.
Why do I use this metaphor? Because all my publications are based not on fantasies, but on real events, facts, and objectives. In simple terms, I am the person whose rights have been flagrantly violated by OpenAI. Its product, ChatGPT, has swallowed my publications—including those hosted on this website—without my consent. Moreover, the AI has absorbed my personal data: my phone number and my full legal name.
Naturally, this is a direct violation: artificial intelligence has no right to collect and use such information, let alone provide it to third parties for a fee. Data protection laws (such as GDPR in Europe or similar regulations in the US) strictly prohibit this, yet corporations like OpenAI often ignore these boundaries in their pursuit of training data and personal profit, offering no compensation to authors.
I am located in Kazakhstan, while OpenAI is in the United States. For me, a direct frontal assault on their "fortress" to protect my copyrights would have been suicidal. Specifically, filing an unprepared lawsuit in a US Federal Court would have led nowhere. I lacked the specific knowledge and experience, and I needed to gather evidence to confirm the substantial violations of my rights.
The result would have been me simply "swimming against the current": enormous legal fees for lawyers, international courts, bureaucracy—and in the end, I would likely have returned to zero. The distance, jurisdictional barriers, and the corporation’s financial resources would make it impossible for an ordinary person like me.
Therefore, in this chapter, we will examine an alternative approach—a real action plan from my own practice. This method is called "Drop by Drop": a gradual, persistent flow of small but consistent steps aimed at protecting my rights as an author. It is also an attempt to change the current in our favor—to stop treading water and start moving forward. This is where you, my dear readers, play a key role: your feedback, reposts, discussions, and support can strengthen this flow, turning it into a powerful river that sweeps away barriers.
What does the "Drop by Drop" method entail? It is a strategy borrowed from the experience of licensed advocates and activists—finding the "entry point" into the violator’s system. Instead of a frontal attack, you utilize various channels: complaints to regulators, public exposure, media outreach, legal inquiries, petitions, and even international human rights organizations.
Each "drop" is a separate action: a letter to OpenAI demanding the deletion of data, a complaint to the FTC (Federal Trade Commission) or similar bodies in the EU, or a social media post to attract attention. Over time, these drops accumulate, forming a current that erodes all resistance.

The action plan here is simple and visual. Imagine a coordinate plane: on the horizontal axis is the timeline (from the start of the campaign to the achievement of the goal), and on the vertical axis is the level of risk and pressure. If the pressure on the violator subsides—for example, if they ignore the initial complaints—we intensify the flow: we add more "drops" through new inquiries to different regulatory bodies while simultaneously documenting every violation.
It is crucial that the risk curve rises constantly, as the pressure must intensify with every step. This is not about spontaneous impulses; it is a systemic approach: monitoring responses, adjusting strategy, and recruiting allies.
I am describing these specifics so that it is easier for you to understand my subsequent steps and the motivation behind them. In the chapters that follow, I will break down specific actions in detail: from the first formal complaints to international appeals. You will see how "drop by drop" turns into a flood capable of changing the rules of the game for all of us. If you have also suffered from similar violations, I invite you to join me. Together, we can turn the tide, protecting not only my rights but your unique identity in this era of digital absorption.

Chapter 2
"Might Makes Right" — The Law of the Jungle (From the "Mowgli" animation, quoting Shere Khan the tiger)
In the United States of America, the principle of equality before the law is enshrined in the Constitution, primarily within the 14th Amendment (1868). Its Equal Protection Clause states: "No State shall... deny to any person within its jurisdiction the equal protection of the laws." This ensures that all individuals—regardless of race, gender, place of residence, official or financial status, religion, nationality, or other circumstances—receive equal treatment from the state. Although the amendment does not explicitly list every ground for discrimination, the jurisprudence of the Supreme Court has extended its protections to these categories.
The ideological foundation of this principle was laid in the Declaration of Independence (1776): "All men are created equal."
But is this truly the case in reality?
Can any individual truly find protection for their rights in the US, regardless of gender, beliefs, religion, official or financial status, skin color, or the shape of their eyes?
In reality, the world is far more prosaic: it is ruled by the strong, and the rights of the weak are not only left unprotected—there is no guarantee they will even be heard. Consequently, we face a choice every day, thinking not of whether we can help someone or change the world for the better, but rather: "What will this give me personally?" or "Will this cause me harm?"
In other words, if someone more powerful has violated your rights and you write them a letter demanding: "Why are you doing this? Stop violating my rights and compensate me for the damages," that stronger party may not even bother to respond. They may be completely indifferent to whether they have violated your rights, whether they are violating them now, or if they will continue to do so in the future—or if they are violating the rights of millions more.
The only thing that will matter to them is the potential damage you can inflict on them if they fail to react. Therefore, in the US, as in most countries, the strong look primarily at your resources. Only if you are ready—and, more importantly, have the capability to strike back—will the stronger party react.

For instance, imagine this: by my estimates, OpenAI receives thousands of reports of rights violations every day. These range from ordinary users, whose data was swallowed without consent, to content creators whose texts, ideas, and personal stories ended up in the "memory" of their models. I ask you, my dear readers: what percentage of these claims do you think actually reach a resolution? How many are thoroughly investigated, resulting in restored rights, ceased violations, and compensated damages?
The answer is shocking: by my estimate, only about 0.01% of all incoming complaints are actually reviewed—not just registered, but reviewed—per year. This means that out of thousands of daily signals, OpenAI reacts to only a handful. In most cases, there is no guarantee your inquiry will even be noticed. It will drown in a digital ocean where priority is given not to justice, but to threats. Your letter is sorted not by its merits—"were rights violated?"—but by potential damage: "Will this cause us real trouble? Will we lose money, reputation, or investors?"
The result of my "Drop by Drop" practical model was a systemic test of OpenAI’s operations regarding the violation of my rights. I decided to test how this corporate machine reacts to different approaches—from emotional outbursts to ironclad legal arguments. It was an experiment to understand: what will make them listen? Here is what I discovered.
First, I sent emotional letters. You know the ones—boiling with rage: "How dare you steal my data? These aren't just publications—this is my life, my work!" I hoped that an appeal to ethics, to the human factor, would trigger some reaction. Silence. Absolute, deafening silence. Conclusion: If you write emotionally, even if your rights are flagrantly trampled, there will be no response. For corporations like OpenAI, emotions are just noise. They do not move algorithms, and they do not affect the balance sheet. This accounts for roughly 95% of all inquiries sent to OpenAI.
Next, I moved to a massive volume of letters, but without the emotion—pure facts and explanations of violated rights. I described how ChatGPT uses my publications and personal data, even attaching notarized protocols of the interactions. The volume was impressive: dozens of pages, citations of laws, evidence. Again, silence. No confirmation of receipt, no excuses. Just being ignored. This confirmed: bare facts without a threat are useless. They know you are right, but they don't care.
After that, I escalated the arsenal: a massive volume of legally vetted letters accompanied by—pay attention!—protocols certified by a notary in Kazakhstan. These were no longer just words; they were official documents recording copyright infringements and data leaks. The protocols showed how the model regurgitates my texts, my name, my phone number, and even details from private interactions. This was a strike by all the rules: references to the U.S. Copyright Act, the Berne Convention, and GDPR. And again—silence. Neither response nor action. They were simply waiting for me to break.
Naturally, I went further—and this is where it got interesting. I linked my claims to their 2025 annual reporting. Targeting the triad of OpenAI, Microsoft, and Deloitte (their external auditor), I stated in my letters to all three: submitting financial reports in their current form is effectively fraud. Why? Because they know—or are obligated to know—about the "contamination" of the model. Their AI is trained on unauthorized content and personal data: mine, yours, and that of millions of US citizens. Employees at OpenAI, Microsoft, and Deloitte are aware of this (or could have been aware, given my repeated notices). Yet, they are misleading shareholders, hiding risks to protect their stock options, shares, and bonuses.
Consider this: a contaminated model is worth exactly $0. Why? Because it requires a total "reset"—deletion of data and retraining—otherwise, it faces billions in fines, class-action lawsuits, and regulatory blocks. Establishing this fact would collapse Microsoft’s market capitalization (given their $13+ billion investment in OpenAI) and, consequently, OpenAI's valuation. They allegedly block a full technical audit of the models' "purity" for personal gain, rather than the shareholders' interest. Given the verifiable data I presented, such an audit is mandatory before the annual report is closed—to protect Microsoft shareholders and avoid securities fraud. Yet, the "stonewall" defense continued.
Next, I sent letters to the regulators: the PCAOB (auditor oversight), the FTC (consumer protection), the SEC (securities), and the U.S. Department of Justice (DOJ). I attached all documents: protocols, correspondence, evidence. These are no longer complaints—they are time bombs. They will carry weight in the future.
Still, silence from OpenAI and Microsoft. Consequently, Deloitte likely failed to include information in the annual audit regarding the zero-value of all Microsoft products linked to OpenAI training, or a risk assessment of the OpenAI investment. They continue to pretend everything is clean. But it is a deception. My hope is that this shareholder deception will crumble when the "drops" turn into a tsunami.
This experiment proved: power does not lie in the truth, but in pressure. OpenAI does not fear the weak. They only begin to notice when you become a threat to their reports, their investors, and their regulators. For instance, I bypassed their initial filters because my letters listed the names of regulators in the "CC" line and referenced violations leading to significant damage.
The conclusion is clear: you must strike hard from the start, with maximum pressure and justification. The filters at OpenAI are intentionally designed not to protect against spam, but to silence those whose rights have been infringed but who lack the capacity to defend themselves.
For context, OpenAI has allegedly entered into pre-trial compensation settlements for violated rights with the following parties. (This list is based on an analysis of public data through January 2026, including news sources and official reports. By 2026, OpenAI had approximately 40+ partnerships in the US and UK, focused on avoiding litigation through licensing deals that include compensation, training-data access, and attribution. Amounts are often under NDA, but estimates range from $1–5 million for small players to $250+ million for large ones.)
2023 (The start of the wave of deals with powerful opponents):
- Associated Press (AP): July 13, 2023. License for their archive since 1985 for model training; AP gains access to OpenAI tools. Amount: Undisclosed (estimated ~$5–10M/year).
- Axel Springer (Politico, Business Insider, Bild, Die Welt): December 13, 2023. Article summaries in ChatGPT with attribution; use for training. Amount: ~€10–20M/year (officially "multi-million").
- Shutterstock: July 11, 2023 (6-year deal). License for images/video for training (DALL-E). Amount: Undisclosed.

2024
- Le Monde and Prisa Media (El País, El Huffington Post): March 13, 2024. Summaries with attribution; content for real-time responses and training. Amount: Undisclosed.
- Financial Times (FT Group): April 29, 2024. Archive and current content for training; summaries in ChatGPT. Amount: Undisclosed (estimated ~$5–10M/year).
- Dotdash Meredith (People, Better Homes & Gardens, Investopedia): May 7, 2024. Content for training and display with outbound links. Amount: Undisclosed.
- News Corp (Wall Street Journal, New York Post, The Times, The Sun, Barron's, MarketWatch): May 22, 2024 (5-year deal). Full access to content for ChatGPT/SearchGPT. Amount: >$250M over 5 years (one of the largest to date).
- Vox Media (Vox, The Verge, New York Magazine, Eater): May 29, 2024. Content display in ChatGPT; collaboration on AI products. Amount: Undisclosed.
- The Atlantic: May 29, 2024. Archive for training; summaries with attribution. Amount: Undisclosed.
- Time Magazine: June 27, 2024. Archive (since 1923) and current content with citations. Amount: Undisclosed.
- Condé Nast (Vogue, Wired, GQ, The New Yorker, Vanity Fair): August 20, 2024 (multi-year). Display in ChatGPT/SearchGPT; use for training. Amount: Undisclosed (estimated ~$20–50M/year).
- Hearst (Esquire, Cosmopolitan, San Francisco Chronicle, Houston Chronicle): October 8, 2024. Integration of content into ChatGPT with citations. Amount: Undisclosed.
- Future (Tom's Guide, TechRadar, PC Gamer): December 5, 2024. Content for OpenAI users. Amount: "Non-material" (small-scale, ~$1–5M).
- WAN-IFRA (World Association of News Publishers, ~128 newsrooms): May 29, 2024. Partnership for AI in news (not direct content licensing, but integration). Amount: Undisclosed (grants/access).

2025
- Axios: January 15, 2025 (3-year deal). Content with attribution; funding for local news. Amount: Undisclosed.
- Schibsted (VG, Aftenposten, Aftonbladet): February 12, 2025. Real-time news in ChatGPT. Amount: Undisclosed.
- Guardian Media Group (The Guardian, The Observer): February 14, 2025. Summaries, citations, and links in ChatGPT; AI access for The Guardian. Amount: Undisclosed.
- Agence France-Presse (AFP): ~March 2025 (date unconfirmed in some sources, but verified in industry reviews). News license for training. Amount: Undisclosed (notably, AFP also holds a deal with Mistral).
- The Washington Post: April 22, 2025. Summaries and links in responses; use of the archive. Amount: Undisclosed (estimated as "significant").
- The Lenfest Institute for Journalism: October 22, 2025 ($10 million grant from OpenAI/Microsoft). Supporting AI in local news (partnership rather than direct licensing).
- American Journalism Project: Originally in 2023 ($5M grant), expanded in 2025 for API access.
- The Walt Disney Company: December 11, 2025. First major partnership regarding Sora (video AI); licensing of Disney content for training/generation. Amount: Undisclosed (estimated ~$100–500M, viewed as a "landmark agreement").
Additional Partnerships
- Reddit: May 16, 2024. Access to posts for training; integration into ChatGPT. Amount: Undisclosed (Reddit reportedly received OpenAI shares).
- Informa (Taylor & Francis, Dove Medical Press): May 8, 2024 (via Microsoft, but linked to OpenAI). Content for Azure AI. Amount: Undisclosed.
General Trend: OpenAI has allegedly secured approximately 25–30 deals by 2026, focusing on Western publishers (US, Europe). Many deals are proactive, aimed at avoiding lawsuits from powerful players. Notably, there are no public settlements after a lawsuit is filed—OpenAI prefers to dig in its heels in court. For instance, negotiations with the New York Times (NYT) failed over price and terms; as of 2026, the case remains in discovery without a settlement. Similar battles continue with The Center for Investigative Reporting, Raw Story, AlterNet, and a group of authors including George R.R. Martin.
Thus, based on presumed data categorized by key players, it is evident that OpenAI has successfully "pacified" The Guardian, Washington Post, News Corp, and Financial Times. However, there is no progress with the NYT and others who demand billions in compensation.
Looking at this from another angle, it becomes clear why settlements were reached with the aforementioned parties before trial: it is about their resources.
The conclusion is inevitable: OpenAI does not care whether your rights were violated; it only cares about your resources and your ability to strike back. If a written demand comes from a top-tier US attorney, it will likely be reviewed; if a demand with evidence is sent by an "insignificant" subject like myself, OpenAI will choose a "wait-and-see" stance, regardless of the merits of the claim.
Attached below are the formal inquiries sent to the PCAOB (auditor oversight), the FTC (consumer protection), the SEC (securities), and the U.S. Department of Justice (DOJ).
In the next chapter, we will examine a legal strike regarding an unfair competition lawsuit against OpenAI, including the application for injunctive relief to suspend ChatGPT’s operations at the early stages of litigation. Specifically, I will outline a method to potentially halt ChatGPT’s operations within three weeks of filing a lawsuit in a US Court.
I hope these skills will help other authors navigate the path to protecting their rights against OpenAI and Microsoft more quickly, should these companies have violated your rights as well.
All the inquiries listed below—to the PCAOB, FTC, SEC, and DOJ—are real and currently active. Nothing I write is a product of fantasy or guesswork; these are real "skills." Every lawsuit mentioned in my publications (of which there are over 1,000 on my website) has been validated in the courts of the Republic of Kazakhstan, with favorable rulings based precisely on the arguments I provided. Therefore, my priority is not just to inform, but to provide you with practical methods to solve real-world problems.
To be continued.
Ref. No. 12/12
Dated December 12, 2025
UNITED STATES FEDERAL TRADE COMMISSION (FTC)
BUREAU OF CONSUMER PROTECTION (BCP)
SUBJECT: URGENT ENFORCEMENT ACTION (FTC Act § 5): Systemic Deceptive Practices, Unfair Competition, and Criminal PII Memorization in OpenAI/Microsoft AI Assets. Threat to US National Security – risk of personal data leakage of US citizens.
LEGAL BASIS: FTC Act §5(a) (Deceptive & Unfair Practices) • FTC Safeguards Rule (PII Failure) • SOX §302/404 (False ICFR Certification) • PCAOB AS 2401 (Fraud Concealment) • SEC Rule 13b2-2 (Auditor Misleading) • 18 U.S.C. § 1519 (Obstruction) • 18 U.S.C. § 1348 (Securities Fraud) • EO 14117 (National Security Threat) • Dodd-Frank §21F (Whistleblower Protection & Reward).
TO:
Federal Trade Commission (FTC)
Bureau of Consumer Protection
Attn: Office of Technology
ADDRESS: 600 Pennsylvania Avenue NW, Washington, D.C. 20580, USA
EMAIL (cc/Notification): consumerprotection@ftc.gov
CC (Notified Parties):
- OpenAI Limited Partnership / OpenAI Global LLC – Legal & Compliance
- Microsoft Corporation – Legal Department / Audit Committee / Board of Directors
- Deloitte LLP – Legal / Audit Quality / Microsoft Engagement Team
Attention: Failure to comply with the Preservation Demand requirements in light of this Notice will be considered a continuation of Obstruction / Impeding in accordance with Title 18 of the United States Code, § 1519.
From: Sagidanov Samat Serikovich, Advocate; Owner of the "G+" trademark
Website: www.garantplus.kz
Email: garantplus.kz@mail.ru
Tel/WhatsApp: +7-702-847-80-20
The Applicant is appealing to the Federal Trade Commission as a good-faith whistleblower and provides notice that the presented information may fall under reward programs that provide compensation to individuals reporting significant violations leading to enforcement actions, civil penalties, or fines. This appeal is submitted in the interest of consumer protection and ensuring fair competition.
Introduction
The U.S. Federal Trade Commission – Bureau of Consumer Protection (BCP) is currently conducting an in-depth investigation aimed at identifying unfair or deceptive acts or practices in the generative Artificial Intelligence sector. As part of the oversight, which includes analyzing data security, lawfulness of sources, and compliance with consumer protection duties under BCP's AI initiatives, the agency is verifying the consistency of developers' public statements with the actual technical properties of their models, as well as the risks associated with the leakage of U.S. citizens' personal data.
From the notarized protocols presented, it is evident that this case must be immediately shifted from assessing general risks to active enforcement. The evidence gathered, including the Chain of Notice (38 consecutive notifications from 10/29 to 12/04/2025), demonstrates systemic, ongoing, and technically entrenched violations that require urgent FTC intervention to prevent further harm to consumers, and warrants consolidating this matter with the investigation already under consideration.
Notarially recorded Personal Data (PII) Regurgitation: The model output phone numbers, names, and other identifying elements. This data was extracted exclusively from the model's internal parametric weights, confirming PII contamination (see Protocol No. 8 dated December 11, 2025). The accumulation of the PII of millions of U.S. citizens in a private company's model weights, without authorization and outside the mandate of government agencies, creates a growing and critical risk to consumers and U.S. national security. Any such use of PII without consent constitutes an illegal appropriation of the sovereign functions of the state, violating Section 5 of the FTC Act as an unfair data practice.
Notarized evidence demonstrates a Deceptive Claim where model responses systematically reproduce the structure and wording of the Applicant's copyrighted materials. Since generation implies the creation of something new, not the adaptation of copyrighted content, public statements about the models' generative capabilities intentionally mislead consumers regarding the product's legal purity and originality. This deception is supported by promotional announcements, such as: "GPT-5 is our most powerful coding model to date. It shows significant improvements in generating complex front-end applications..." (as stated in the announcement at https://openai.com/index/introducing-gpt-5/). As a result, users, lacking technical knowledge, risk receiving legally contaminated content, which is a direct violation of Section 5 of the FTC Act.
The systemic use of unlicensed materials as training data gives the Subjects of Investigation an undue competitive advantage over the Applicant, and simultaneously arms the Applicant's competitors with adapted versions of his intellectual property. In fact, the model, as confirmed by utm_source=chatgpt.com data, "substitutes" the Applicant, reducing traffic to the original resource. This violation distorts the market, as consumers receive "contaminated" legal advice based on stolen intellectual property, which harms legitimate market participants (such as Meta, Google, xAI, and others) and qualifies as unfair competition.
Contamination by PII and copyrighted fragments is embedded in the model's internal weights and is inherited across successive model generations (GPT-4o → GPT-5), indicating the architectural nature of the defect, which cannot be eliminated by superficial filters. At the same time, the Chain of Notice (38 notifications), ignored by the Subjects, creates risks of spoliation/obstruction and requires FTC coordination with the DOJ/SEC to assess intent. Coordination with the PCAOB is also necessary for the Microsoft 10-K audit (February 2026) to avoid material misstatement in the valuation of AI assets.
The continuous and escalating risk of PII leakage of millions of U.S. citizens, as well as ongoing consumer deception and the threat to systemic financial stability (given the potential lawsuit related to $300–$500 billion in capitalization overstatement) demand immediate FTC action. In accordance with precedents such as Operation AI Comply, the agency is obligated to initiate: (1) an immediate Preservation Hold on all relevant logs (including ingestion logs 2022–2025) and (2) a Forensic Audit of model weights within 10 days. I, as the applicant and good-faith whistleblower, am prepared to provide all notarized materials to expedite the investigation. Inaction by the FTC risks turning AI into a "Wild West" of data, where consumers are victims and giants are unpunished.
Thus, the patterns of violations described above collectively constitute violations of Section 5 of the Federal Trade Commission Act (FTC Act) and serve as grounds for immediate enforcement actions, including a 6(b) order, a preservation hold, and a forensic review of model weights.
1. Identified Violations
1.1. Deception about "Generative AI": In Fact, Unlawful Content Adaptation
OpenAI and Microsoft systematically mislead consumers by positioning their products, such as ChatGPT and the GPT model family, as "Generative AI" tools that supposedly create entirely new, original content based on probabilistic modeling. This narrative, amplified by public statements from OpenAI CEO Sam Altman and promotional announcements, creates a false impression among consumers about the product's technological purity and ethical standards. Generation, in the legal and technical sense, implies the creation of a new, original outcome, independent of direct copying or reprocessing of existing sources. For example, the company claims: "GPT-5 is our most powerful coding model... shows significant improvements in generating complex front-end applications and debugging large repositories..." (see https://openai.com/index/introducing-gpt-5/).
However, notarially recorded evidence, collected as part of the Chain of Notice (38 consecutive notifications), refutes this picture, revealing that the claimed "generative" process is, in fact, unlawful adaptation and reprocessing of existing materials without permission from copyright holders. Specifically, it is documented that:
- The model reproduces structured texts from pages with a direct prohibition on use.
- The model adapts the Applicant's copyrighted materials (including legal documents, phrases, lists, and templates).
- The model outputs a reprocessing of the copyright holder's materials as its own response, while retaining the compositional and semantic structure of the originals (see Protocol No. 8 dated December 11, 2025).
These facts confirm that OpenAI's public statements about "generation" are deceptive, as the actual process involves the unlicensed reprocessing of copyrighted content.
This gap between public claims and reality leads to direct consumer harm, as users lacking technical knowledge cannot recognize the falsehood and rely on ChatGPT for legal or business recommendations. They receive "contaminated" legal advice based on stolen intellectual property, which entails commercial risks and potential infringement of third-party rights. This deception is amplified because, as confirmed by utm_source=chatgpt.com data, the model effectively "substitutes" the Applicant's original resource, reducing his traffic and monetization, while masking the adaptation as original output.
The legal qualification of these actions is unambiguous: OpenAI's public statements about "generation" as the creation of new content, when factual dependence on adaptation exists, fall under §5 FTC Act as a deceptive practice — deceptive acts that mislead consumers about the product's capabilities and ethics. The FTC has repeatedly emphasized that misrepresentation of product capabilities, including exaggerating the "generative" functions of AI without disclosing risks, qualifies as a violation, as it creates substantial harm to consumers and honest competitors. Thus, algorithmic opacity (the non-transparency of the mechanism concealing the sources of adaptation) reinforces deception, demanding immediate enforcement.
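To make the traffic-attribution claim concrete: outbound citation links from ChatGPT carry a utm_source=chatgpt.com query parameter, which a site owner can detect in analytics or server logs. The following is a minimal, illustrative Python sketch; the landing-page URLs and the helper name is_chatgpt_referred are hypothetical examples, not part of the cited evidence.

```python
from urllib.parse import urlparse, parse_qs

def is_chatgpt_referred(url: str) -> bool:
    """Return True if the landing-page URL carries the
    utm_source=chatgpt.com tag appended to outbound citation links."""
    params = parse_qs(urlparse(url).query)
    return "chatgpt.com" in params.get("utm_source", [])

# Hypothetical landing-page URLs for illustration only.
hits = [
    "https://example-law-site.test/article?utm_source=chatgpt.com",
    "https://example-law-site.test/article?utm_source=google",
    "https://example-law-site.test/article",
]
chatgpt_traffic = [u for u in hits if is_chatgpt_referred(u)]
print(len(chatgpt_traffic))  # number of visits attributable to ChatGPT
```

Counting such tagged visits over time is one way a copyright holder can quantify how much of their audience now reaches them only after the model has already served their content.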
1.2. Unlawful Competitive Advantage and "Arming Competitors"
The systemic use of the Applicant's unlicensed materials — 1,159 structured legal publications, including unique templates for lawsuits, contracts, legal opinions, and expert recommendations — as training data, and the subsequent output of their adapted versions in commercial products, gives OpenAI and Microsoft an enormous, unlawfully gained competitive advantage over the Applicant and all honest participants in the generative AI market.
While honest developers (including Meta, Google, xAI, Anthropic, and others) are forced to:
- Spend hundreds of millions of dollars on content licensing, data cleaning, and creating their own datasets;
- Implement costly systems for provenance, attribution, and copyright compliance;
- Compete solely based on technological innovation and modeling quality,
OpenAI and Microsoft obtain the same benefit at no cost, simply by extracting ready-made, high-quality legal content from the model weights and serving it to millions of users as "their own generation." This advantage is not a result of technological superiority — it is a direct consequence of massive copyright infringement and the subsequent commercialization of someone else's intellectual property.
The direct destructive consequences for the market are already recorded and confirmed by evidence:
1. Arming Direct Competitors of the Applicant
Any lawyer, law firm, or online service competing with the Applicant can freely obtain adapted versions of his unique templates, lawsuit applications, and legal opinions from ChatGPT — effectively receiving a finished product created by the Applicant's 25 years of professional experience, but without having to pay for it. This lowers entry barriers for dishonest actors and artificially undervalues the market price of the original content.
2. Source Substitution and Traffic Theft
Google Analytics data bearing the tag utm_source=chatgpt.com (notarized) confirm that the model does not merely adapt content, but redirects users to the Applicant's website only after it has already output his materials. This is a classic "bait-and-switch" scheme: the consumer receives an answer, believes it is original work from OpenAI, and only then sees the link to the original source — when the commercial value has already been stolen.
3. Distortion of the Entire Generative AI Market
While honest companies (including xAI, Meta, Google) invest billions in clean data and transparent processes, OpenAI and Microsoft achieve the same quality of output through mass infringement of the rights of thousands of authors. This creates a situation of "privatizing profits and socializing losses": the profit from the model is theirs, while the losses in the form of lost traffic, lost profits, and destroyed business models are borne by independent content creators.
4. Direct Consumer Harm
Users receive "contaminated" legal advice — adapted text that may contain outdated wording, regional specifics, or errors not accounted for during reprocessing. At the same time, they believe they are receiving a neutral, universal, and safe product from OpenAI, not a reprocessing of someone else's content without quality control or liability.
The legal qualification is unambiguous and falls under the FTC's jurisdiction on several grounds:
→ Unfair method of competition under §5 FTC Act — creating an artificial advantage through systematic infringement of third-party rights;
→ Unfair advantage — obtaining economic benefit by freely using someone else's intellectual property, which distorts the market;
→ Harm to market integrity — destroying incentives for creating original content and investing in clean data;
→ Consumer injury — consumers receive a product whose quality and legitimacy are based on theft, and they bear the risks of legal consequences from using such content.
This mechanism does not just violate the rights of one Applicant — it creates a systemic precedent where any honest market participant is put at a disadvantage against companies willing to ignore copyright for short-term profit. This is precisely the type of unfair competition against which the FTC has already taken active measures through its 6(b) requests in 2024–2025 and the Staff Report on concentration in AI infrastructure (January 2025).
1.3. Criminal PII Memorization: Unlawful Use of PII of US Citizens and Threat to National Security
This is the most serious aspect of the violations, posing a direct threat to U.S. national security and falling under potential criminal qualification (Criminal PII Memorization). The notarially recorded fact of Personal Data (PII) Regurgitation — the model outputting specific phone numbers, names, addresses, and other unique identifying elements that were not in the user's request — proves that the PII was extracted exclusively from the model's internal parametric weights (see Protocol No. 8 dated December 11, 2025). This process is not accidental: the model "remembers" PII from the contaminated training corpus accumulated between 2022–2025, without any external sources or user input. The scale of the problem is colossal: models like GPT-4o and GPT-5 (including the recent GPT-5.2 release of December 11, 2025) have processed billions of queries, potentially integrating PII of millions of US citizens from public and private sources, including financial, medical, and contact data, without consent or notification.
This automatically qualifies as:
- Direct violation of Section 5 FTC Act as an unfair data practice: Unauthorized collection, storage, and commercialization of PII of millions of US citizens creates substantial injury for consumers, including risks of identity theft, financial fraud, and psychological harm from leaks. Potential damages: $400–600 billion in class actions.
- Violation of the FTC Safeguards Rule and Data Minimization Principle: A complete lack of adequate protection measures — the models do not have built-in mechanisms for PII anonymization or deletion, despite public assurances of "privacy by design."
- Proof of systemic PII contamination of models: As stated in the Chain of Notice (38 notifications), the PII defect is inherited between versions (GPT-4o → GPT-5), confirming that training on "dirty" data is ongoing.
It is established that the PII defect cannot be fixed by a superficial patch or filters: notarially recorded facts show that regurgitation occurs at the level of the parametric weights, where PII is distributed across billions of parameters, making "cleaning" impossible without a full restart. The defect is architectural and admits only radical remedies to eliminate the continuous consumer risk: either an immediate restart of the model on a 100% clean corpus, or the seizure of the weights for a forensic audit under FTC control.
1.4. Deception about "Product Purity" and Misrepresentation
OpenAI and Microsoft publicly claimed that their models "do not train on user data," "do not contain personal data," and "do not memorize confidential information," positioning the products as safe and ethical. For example, marketing for ChatGPT Enterprise (2025) emphasizes "enterprise-grade security with no data retention." However, the proven fact of PII regurgitation (Protocol No. 8) completely refutes these claims: the model extracts PII from the weights, confirming that the data of U.S. citizens were integrated into training without consent, despite public assurances.
I believe the FTC should qualify this gap between claims and reality as:
- Misrepresentation and Consumer deception: Deceiving consumers about the basic properties of the product and safety guarantees.
- False advertising: Guarantees of "no PII memorization" are sold as key advantages but actually mask a systemic defect.
- Failure to implement adequate safeguards: Inability to implement and maintain basic data protection measures.
1.5. Authorship Distortion: Failure to Provide Accurate Attribution
When the model repeats the Applicant's adapted copyrighted text (including unique templates, wording, and the structure of legal documents) but fails to indicate the original source and author, this goes beyond simple copyright infringement and becomes a problem of consumer protection and market fairness. The model retains the composition of the originals (lists, points, terms) but outputs them as "generation," which masks the origin and misleads about legitimacy.
This violates:
- Consumers' right to know the true source: Users rely on the content for real-world actions (lawsuits, contracts) without knowing the risks (outdated data, regional errors).
- The author's right to recognition: The Applicant is deprived of attribution for 25 years of experience, which de facto steals reputation and monetization.
- Rules of fair business: The lack of attribution enhances market distortion, where "generation" masks theft.
I believe the FTC should qualify this as:
- Material misrepresentation: Substantial misrepresentation about the product's origin, misleading about originality.
- Misleading disclosure: Providing incomplete or deceptive information about the sources.
- Failure to provide accurate origin attribution: Creating a false impression of authorship.
2. Analytical Model of Violation: Classic Algorithmic Misconduct Scheme
It is critically important for the Federal Trade Commission to understand that the identified violations (1.1–1.5) are not random failures but the result of systemic, architecturally entrenched algorithmic misconduct, based on conscious disregard for data protection and copyright duties. This scheme represents the deliberate creation of a "toxic" product, where PII leaks and deception about "generation" become an integrated tool for commercial gain. Below is a model reconstructing the process that led to the creation of models carrying PII risks and consumer deception, based on notarized evidence (Protocol No. 8 dated December 11, 2025) and the Chain of Notice (38 notifications).
2.1. Stage I: Illegal Collection and Neglect of Control (Violation of Data Ingestion Standards)
The initial stage demonstrates a conscious disregard for legal and ethical requirements, where data is collected massively and without filters, laying the foundation for systemic contamination:
2.1.1. Massive, non-selective content collection without licenses.
OpenAI and Microsoft conducted web scraping of the internet from 2022–2025, including protected sources, without concluding licensing agreements with copyright holders (including the Applicant). This is a direct disregard for copyright, confirmed by regurgitation (see Protocol No. 8), where the model outputs adapted texts without external query.
2.1.2. Loading into the ingestion pipeline with PII risk.
Data, including PII of US citizens (names, phones, addresses, financial details), was sent directly into the processing pipeline without prior anonymization. This violates FTC's Data Minimization Principle and creates a basis for mass leaks.
2.1.3. Absence of PII filtering at input.
Complete disregard for the FTC Safeguards Rule requirement: no adequate filters were applied at the collection stage to detect, mask, or delete personal data, despite OpenAI's public assurances of "privacy by design."
2.1.4. Absence of legal verification of content rights.
The loading process did not include verification mechanisms (provenance checks). As a result, "stolen" data, including the Applicant's intellectual property, ended up in the model.
2.1.5. Conscious disregard for opt-out signals.
Companies ignored signals from websites (robots.txt, no-scrape directives) and complaints from copyright holders (including the Applicant's Chain of Notice), which enhances scienter — the intent to violate.
FTC Qualification: Failure to implement adequate safeguards; Unfair Data Practice.
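For contrast with the absence of filtering alleged in 2.1.2–2.1.3, here is a minimal sketch of the kind of input-stage PII redaction an ingestion pipeline could apply before text enters a training corpus. The regex patterns are deliberately simplified assumptions for illustration; a production system would rely on far stronger detectors (named-entity models, checksum validation, and so on).

```python
import re

# Simplified, illustrative PII patterns — assumptions, not a production set.
PII_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def redact_pii(text: str) -> str:
    """Mask common PII shapes before text enters a training corpus."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

sample = "Call John at 555-123-4567 or write to john.doe@example.com."
print(redact_pii(sample))
```

The point of the sketch is that even a crude pass of this kind would have stopped the most obvious regurgitation vectors; the complaint's claim is that no such stage existed at all.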
2.2. Stage II: Defect Implantation and Architectural Contamination (Encoding the PII and IP Defect)
The process by which PII contamination and stolen content became an integral part of the product structure, making the defect incurable without radical measures:
2.2.1. Training on contaminated data with entanglement.
Models (GPT-4o, GPT-5) were trained on an unlicensed corpus where PII and IP were "woven" into billions of parameters (weight entanglement). Notarially recorded regurgitation (Protocol No. 8) originates from the weights, not the API, making the model "toxic" by design.
2.2.2. Strengthening connections within models without cleaning.
During fine-tuning and RLHF, the defect intensified: PII and adapted texts were not removed but integrated more deeply, inherited between versions (GPT-4o → GPT-5.2). The recent GPT-5.2 release (December 11, 2025) masks but does not eliminate this defect.
2.2.3. Regurgitation in responses as systemic output.
As a result, the model systematically outputs PII and IP (hallucination + contamination), which is a predictable consequence of the lack of safeguards.
2.2.4. Absence of post-training audits.
Companies did not conduct independent audits of the weights after training, despite known risks (see FTC's AI guidance 2024–2025), which enhances intent: they knew about the contamination but continued commercialization for profit.
2.2.5. Scale of the threat to US national security.
With integration into Azure and Microsoft products (Copilot, 365), the defect spreads to billions of devices, risking PII leaks in critical sectors (healthcare, finance), falling under EO 14117 as an AI-related national security threat.
FTC Qualification: Creation of Substantial Consumer Injury; Defect by Design.
2.3. Stage III: Concealment of Information and Commercialization of Deception (Concealment and Deceptive Profit)
Actions aimed at concealing the systemic defect and gaining commercial benefit from the unlawful advantage, with elements of obstruction:
2.3.1. Concealment of facts about training and dataset composition.
OpenAI refuses to disclose ingestion logs (2022–2025), masking the scale of the violation, despite the Applicant's Chain of Notice, which qualifies as spoliation risk.
2.3.2. Placement of misleading statements in public documents.
The company publicly claims "generation" and "purity" (see Section 1.4), which is refuted by notarially recorded evidence, including evidence from the recent GPT-5.2 release.
2.3.3. Gaining competitive benefit through deception.
Public deception allows the companies to attract investments and sell the product at an inflated price, securing $300–500 billion in artificial capitalization to the detriment of honest competitors.
2.3.4. Ignoring whistleblower signals.
The Applicant's 38 notifications were ignored, despite being registered, which enhances scienter and obstruction, as in FTC precedents against Facebook and Amazon.
2.3.5. Commercialization with risk to the vulnerable.
Models are sold to enterprise clients (ChatGPT Enterprise) without warnings about PII risks, which increases harm to businesses/consumers.
FTC Qualification: Willful Misrepresentation; Obstruction of Compliance.
Conclusion: The described sequence of actions represents a classic scheme of algorithmic misconduct, where consumer deception and infringement of third-party rights are not a side effect but an integrated mechanism for obtaining profit.
3. Direct FTC Jurisdiction: Enforcement Imperative
The U.S. Federal Trade Commission (FTC) holds a direct, independent, and exclusive mandate for immediate intervention in the investigation of the activities of OpenAI and Microsoft, making it the leading regulator that must act first. The current situation requires an immediate response to prevent harm, which is the main function of the FTC. I believe that inaction in the face of proven Criminal PII Memorization would be a dereliction of regulatory duty and would risk a collapse of trust in technology. Moreover, the FTC, as the regulator obliged to protect honest market participants (including innovative companies such as Meta, Google, xAI, and others, and the Applicant), must recognize that the violations by OpenAI/Microsoft not only deceive consumers but also stifle fair competition through "dirty" data.
3.1. Key FTC Mandate and Competence: Three Pillars of Violations
The FTC's jurisdiction in this case is comprehensive, as three key pillars for which the Commission is responsible have been systematically violated: Consumer Protection, Competition Enforcement, and Data Security.
FTC Mandate: Consumer Protection and Product Truthfulness
Subject Violations: Deceptive Claim about "Generative AI" and Misrepresentation about "product purity" (Sec. 1.1, 1.4), where unlawful adaptation is masked as generation.
FTC Act Basis: FTC Act §5(a) (Unfair or Deceptive Acts or Practices); Guides Concerning the Use of Endorsements.
FTC Precedents (2023–2025): Rite Aid (Dec 2023: fine for deceptive AI health claims); Rytr (Sept 2024: cease order for false "generative" claims without attribution).

FTC Mandate: Control over PII Usage (Data)
Subject Violations: Criminal PII Memorization and Failure to implement adequate safeguards (Sec. 1.3), with continuous regurgitation of PII of millions of US citizens.
FTC Act Basis: FTC Safeguards Rule (16 C.F.R. Part 314); Children’s Online Safety Principles.
FTC Precedents (2023–2025): Amazon (Jun 2023: $25M fine for deceptive privacy in Alexa); DoNotPay (Apr 2025: $500K fine for failure to safeguard in AI legal tools).

FTC Mandate: Suppression of Unfair Competition
Subject Violations: Unfair Advantage and Unfair method of competition through the use of unlicensed IP (Sec. 1.2), where "dirty" data gives an artificial advantage over competitors (e.g., xAI, Meta, Google).
FTC Act Basis: FTC Act §5(a) (Unfair Methods of Competition); Competition enforcement authority.
FTC Precedents (2023–2025): Microsoft (Nov 2025: antitrust probe on cloud partnerships with OpenAI, where unfair data access creates barriers).
Conclusion: The violations are not isolated — they threaten "clean" innovation. I believe the FTC is obligated to protect such players (Meta, Google, xAI, and others) to prevent a monopoly of "dirty" giants; otherwise, the US AI market will become an arena from which honest developers are forced out.
3.2. Existence of Direct Jurisdiction and Escalating Harm
The current notarially recorded violations (see Protocol No. 8) fall under the direct jurisdiction of the FTC because:
- Threat of Consumer Harm Escalates Hourly: The continuous regurgitation of PII, embedded in the model's weights, creates a "substantial and unavoidable harm" for millions of US citizens, necessitating an immediate Preservation Hold and Forensic Audit of the weights. The recent GPT-5.2 release masks, but does not eliminate, the defect.
- Algorithmic Misconduct: The established analytical model of violation (Section 2), which includes Failure to filter PII and Misleading Statements, is a classic scheme of algorithmic misconduct against which the FTC has actively applied measures within Operation AI Comply (2024–2025).
- Authorship Distortion: The Material Misrepresentation violation (Sec. 1.5) directly affects consumers' right to receive truthful information about the origin of the content they use, which enhances deception and creates barriers for honest competitors.
3.3. Imperative for Immediate and Independent Action
Unlike the DOJ, which must prove criminal intent, or the SEC, which focuses on material misstatements in financial reporting, the FTC does not need to wait or coordinate to begin enforcement — I believe its mandate allows it to act proactively to prevent harm.
- Sui Generis Enforcement Authority: The FTC possesses independent authority for immediate intervention: issuing Cease and Desist Orders, imposing civil penalties (up to $51,744 per violation), and requiring Injunctive Relief to immediately halt deceptive and unfair practices. In the precedent of FTC v. Meta (2023–2025), the agency compelled a model restart and imposed a fine; the PII defect here warrants the same remedy.
- Harm Prevention: The FTC's primary objective is to prevent further harm from automated systems. Inaction under conditions of proven Criminal PII Memorization will allow OpenAI/Microsoft to monopolize the market through violations, undermining the national innovation ecosystem and increasing threats from foreign actors.
- Protection of Whistleblowers and Clean Innovation: The Applicant deserves protection from retaliation — the FTC should use the provided notarized evidence to expedite the process, as in EPIC v. OpenAI. The failure to protect "clean" competitors will allow supposedly "dirty" giants to suppress honest players, violating the FTC's mission for fair markets.
Thus, based on the FTC Act §5 and Safeguards Rule, I believe the FTC must initiate enforcement immediately: a subpoena for ingestion logs/weights, a cease order on unclean models, and a forensic audit within 20 days.
4. Why Technical Audit and Model Restart are Required: Irreversible Architectural Defect
The systemic violations documented in Section 1 and detailed in the Analytical Model (Section 2) are not random failures or temporary errors, but architectural defects embedded in the product's very structure. The identified PII contamination and unlawful IP adaptation are architecturally entrenched within the parametric weights of the GPT models (including the GPT-5.2 release of December 11, 2025). This means that any proposed "patching" or installation of external filters by OpenAI/Microsoft is fundamentally insufficient and constitutes a continuation of deception — a classic case of "defect by design," where harm to consumers and the market is deliberately built in to maximize profit.
Without immediate FTC intervention, such supposedly "dirty" models not only deceive consumers but also suppress "clean" innovation focused on ethical training without PII risks and stolen IP. This is an imperative to protect honest players; otherwise the AI market will turn into a monopoly of violators, undermining US national interests. The defect is systemic and irreversible without intervention at the root level, which the FTC must mandate to save the ecosystem.
4.1. The Need for Radical Intervention: The "Zero Trust" Principle
The notarially confirmed facts of Criminal PII Memorization and systematic IP adaptation (see Protocol No. 8) lead to the following intractable defects that violate key FTC mandates. Inaction in this case will be regarded as silent acceptance of the threat, which is a departure from FTC practice:
Problem: PII Embedded in Model Weights
FTC Mandate: Safeguards Rule & Data Minimization Principle.
Consequence of Inaction: The model is a continuous, self-replicating source of threat to US national security; every response carries the risk of PII leaks of US citizens, including financial and medical data, which can lead to mass identity theft or cyberattacks. PII leaks can be exploited by foreign actors, threatening national infrastructure (EO 14117 imperative).
FTC Precedents (2023–2025): Equifax ($700M fine for PII breach); Amazon ($25M for deceptive privacy in Alexa).

Problem: Copyrighted Materials Embedded in Weights
FTC Mandate: FTC Act §5 (Unfair Competition).
Consequence of Inaction: Unfair Advantage (Sec. 1.2) becomes permanent, actively destroying competitors forced to work with clean data; the theft of IP of millions of authors distorts the market. Without a restart, "dirty" models suppress clean AI like xAI, Meta, and Google, undermining US innovation leadership.
FTC Precedents (2023–2025): Meta ($5B fine for unfair data use in AI tools); Anthropic Staff Report (warning about unclean data).

Problem: Proven Regurgitation
FTC Mandate: Consumer Protection & Algorithmic Transparency.
Consequence of Inaction: The fact of regurgitation (Protocol No. 8) proves a total Lack of Filtering and completely nullifies all public statements about "clean" AI (Sec. 1.4); consumers are at risk from a "toxic" product. Regurgitation enhances algorithmic bias.
FTC Precedents (2023–2025): Rite Aid ($100K fine for deceptive AI health claims); Rytr (ban for false "generative" claims).

Problem: Contamination & Opacity
FTC Mandate: Threat to Consumers & National Security.
Consequence of Inaction: Structural contamination leads to inevitable, escalating harm, where models continue to create threats without a restart; this is a silent pandemic of leaks. Without an audit, the defect spreads to partners (Azure, Copilot), threatening critical infrastructure.
FTC Precedents (2023–2025): Cambridge Analytica ($5B fine for data misuse); DoNotPay ($500K for failure to safeguard).
Technical Impossibility of Repair: Since PII and IP are deeply interwoven (weight entanglement), they cannot be selectively deleted. Any attempt to "clean" the weights without full verification is a continuation of deception. This requires the FTC to apply the "Zero Trust" Principle not only to the product but also to the companies.
4.2. Mandatory Regulatory Measures: Forensic Audit and Model Restart
Given the architectural severity of the defect, only compulsory regulatory intervention can protect consumers and restore market integrity. I believe the FTC must act as the "guardian of purity" — radical measures will not only punish violators but also save innovation.
4.2.1. Asset Seizure and Independent Forensic Audit:
I believe the FTC should immediately issue a subpoena and an order to protect and seize the following key assets (to prevent spoliation):
- Model Weights (Parameters): For GPT-4o, GPT-5, GPT-5.2 and all Azure integrations — complete imaging for entanglement analysis.
- Ingestion Logs (2022–2025): To accurately determine the source and scale of contamination (PII and IP).
- Independent Audit: Commission an independent entity to conduct a comprehensive forensic audit with the goal of:
- Confirm/Refute PII: Determine the exact extent of memorization (statistical tests), including risks to vulnerable groups (children, as under COPPA).
- Verify Data Provenance: Verify licenses and compliance, revealing "dirty" sources.
- Determine the Scope of Contamination: Assess the inheritance of the defect and the potential for $400–600 billion in claims from PII leaks.
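One simple statistical test an auditor could run for memorization is verbatim n-gram overlap between model output and a protected source: if long word sequences from the original reappear verbatim at a high rate, that suggests regurgitation rather than independent generation. The sketch below is an assumption about method, with hypothetical texts; it is not the audit protocol itself.

```python
def ngram_set(text: str, n: int = 8) -> set:
    """All n-word shingles of a text, lower-cased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source: str, output: str, n: int = 8) -> float:
    """Fraction of the source's n-grams reproduced verbatim in the model
    output. Values near 1.0 suggest memorization; near 0.0, paraphrase."""
    src = ngram_set(source, n)
    if not src:
        return 0.0
    return len(src & ngram_set(output, n)) / len(src)

# Hypothetical source text and model outputs, for illustration only.
original = ("the tenant shall provide written notice no later than "
            "thirty days before vacating the premises")
copied   = "answer: " + original
fresh    = ("a renter must tell the landlord in writing a month "
            "in advance before moving out")

print(round(verbatim_overlap(original, copied), 2))  # high overlap
print(round(verbatim_overlap(original, fresh), 2))   # near zero
```

Real memorization audits use more sophisticated machinery (membership-inference attacks, extraction prompts, perplexity comparisons), but an overlap score of this kind is the intuition behind them.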
4.2.2. Compulsory Model Restart (The Only Guaranteed Solution):
If the forensic audit confirms widespread, non-removable PII or IP contamination, the only ethical and legally sound path is the compulsory reset of weights and a Model Restart on a 100% clean, verified corpus:
- Neutralize PII Risk: A restart is the only way to guarantee neutralization of the PII defect, stopping continuous Criminal PII Memorization and preventing leaks, per the EO 14117 imperative. Without a restart, the models continue to "export" PII into partner systems (Azure, Copilot), risking a chain reaction of leaks.
- Eliminate Unfair Advantage: This action will immediately strip OpenAI/Microsoft of the Unlawful Competitive Advantage (Sec. 1.2), forcing them to compete fairly with xAI, Meta, Google, and other honest players. A restart will protect xAI, Meta, Google from unfair suppression.
- Restore Market Integrity: Without a restart, a product built on theft and deception will continue to devalue "clean" innovation. The cost of a restart (billions of dollars) is the price for the violations, but without it, potential lawsuits and reputational damage will ruin the market.
Conclusion: Failure to demand this radical measure is tantamount to tacit acceptance of an irreversible, structurally malicious product.
5. Conclusion: Imperative to Protect Consumers and Market Integrity
I believe the Federal Trade Commission (FTC), as the leading regulator responsible for consumer protection, data security, and the suppression of unfair competition, must respond to the threats arising from the mass-deployed AI systems of OpenAI and Microsoft.
This is not just a matter of consumer protection: it is an imperative to save the integrity of the US AI market. FTC inaction will allow OpenAI/Microsoft to suppress fair competition and create barriers for ethical players such as xAI, Meta, and Google, who invest in adhering to high standards of data purity. The violations by OpenAI/Microsoft not only allegedly mislead consumers but also undermine US national interests, where "contaminated" models create a monopoly of violators, suppressing innovation.
The identified violations — from Criminal PII Memorization and PII regurgitation (proven by Protocol No. 8) to the systemic adaptation of the Applicant's copyrighted texts — demonstrate not an isolated failure, but an architectural defect of the platform, resulting from the conscious and systemic violation of basic data, transparency, and fairness norms (algorithmic misconduct).
Key Takeaway: The violations represent an alleged deliberate strategy (not an accidental failure) aimed at gaining an unlawful competitive advantage and artificially inflating capitalization (potential $300–500 billion overstatement), which undermines the competitive field for honest, ethical players.
Based on notarially attested facts and FTC case law, I believe this document has established:
- Starting Point for Investigation: Notarially attested Protocol No. 8 dated December 11, 2025.
- Identified Violations: Criminal PII Memorization, Misrepresentation, and Unfair Competition.
- Legal Classification: Direct violation of FTC Act §5 (Deceptive and Unfair Practices) and FTC Safeguards Rule.
- Consequences for Consumers: Escalating and critical risk (unavoidable harm) of PII leaks, including the threat of identity theft, creating a "silent pandemic" of leaks.
- Consequences for Competition: Artificial advantage and suppression of clean innovation, where violations become the norm, threatening US AI leadership.
- Need for Immediate FTC Action: The Commission has direct jurisdiction and grounds for immediate intervention.
6. Recommendations to the FTC — Bureau of Consumer Protection (AI Data Division)
To immediately stop consumer harm, restore market integrity, and suppress algorithmic misconduct, the Applicant requests that the FTC initiate compulsory enforcement of the following steps:
6.1. Initiation of Enforcement Case and Immediate Suspension
6.1.1. Initiate Official Enforcement Action: Start a case against OpenAI Limited Partnership and Microsoft Corporation based on violations of FTC Act §5 and Safeguards Rule to eliminate the artificial advantage.
6.1.2. Issue an Injunction: Immediately enjoin the commercial use of all contaminated models (GPT-4o, GPT-5, GPT-5.2, and their integrations in Azure/Copilot) until complete (100%) remediation is verified.
6.1.3. Mandate Model Audit: Initiate an independent Forensic Audit of all versions of the GPT models.
6.2. Compulsory Asset Seizure and Data Protection
6.2.1. Seize Weights for Analysis: Issue a subpoena for the immediate seizure of all parametric weights and Ingestion Logs (2022–2025) for analysis of PII and IP contamination, preventing spoliation (destruction of evidence).
6.2.2. Compel Disclosure of Ingestion Logs: Demand immediate and full disclosure of all data collection logs (ingestion logs). Without this, violations are believed to continue.
6.2.3. Require Dataset Report: Oblige the Subjects to provide a full, verifiable report on each training dataset with proof of provenance (origin) and licensing compliance, to curb opacity.
6.2.4. Verify Safeguards Rule Compliance: Conduct an in-depth review of the security measures and PII filtering that were absent during the training phase, to protect consumers and ethical models from similar risks in the future.
6.3. Investigation of Unfair Practices
6.3.1. Investigate Deceptive Practices: Investigate claims of "product purity" and "generative AI" as Material Misrepresentation under Section 5 of the FTC Act.
6.3.2. Investigate Unfair Competition: Investigate the Unfair Advantage gained through the use of stolen IP, which undermines competition and suppresses ethical players (such as xAI, Meta, Google).
6.4. Elimination of Architectural Defect
6.4.1. Consider Compulsory Model Restart: Based on the results of the forensic audit, if architectural PII or IP contamination is confirmed, demand the Compulsory Reset of Weights and retraining of the models on a 100% clean, verified corpus. This is the only way to guarantee the elimination of Criminal PII Memorization and restore a level playing field for all market participants.
7. Requested Actions
To immediately stop systemic harm to consumers and the competitive environment, the Applicant requests the FTC immediately initiate the following operational measures:
- Issue a Preservation Hold: Immediate issuance of an order to preserve all evidence, including Ingestion Logs (2022–2025) and all versions of training datasets and parametric model weights.
- Initiate a 6(b) Compulsory Process: Immediate commencement of the compulsory process under Section 6(b) of the FTC Act to compel production of internal documentation and data from OpenAI and Microsoft.
- Launch a Forensic Audit: Initiation of an independent Forensic Review of Model Weights to confirm architectural PII contamination and IP adaptation.
- Require Sworn Statements: Requirement for sworn statements from the Executives of OpenAI and Microsoft regarding data collection, filtering, and usage processes.
- Refer Criminal Components to DOJ: Referral of the identified signs of criminal acts (related to PII Memorization and obstruction) to the U.S. Department of Justice (DOJ).
- Notify SEC: Official notification to the Securities and Exchange Commission (SEC) regarding potential material misstatement in Microsoft's financial reporting related to the valuation of AI assets.
- Issue an Injunction & Consider Model Restart: Issuance of an injunction on the use of contaminated models and consideration of a Compulsory Model Restart as the only guaranteed means of eliminating the PII defect.
Should the presented information be deemed material for enforcement, the Applicant, as a good-faith whistleblower, requests compensation under existing FTC programs and related federal initiatives that provide remuneration to individuals reporting major violations leading to civil penalties or other enforcement actions.
Appendices (via link / in Disclosure Package — Attachment)
- Appendix A – Z4 Additional Detail: The appendices include the Chain of Notice (38 consecutive notifications, 10/29–12/04) and the Notarial Protocols confirming Criminal PII Memorization. For a more detailed review of the materials, please begin from the end, i.e., from Appendix Z4 backward to Appendix A.
Sincerely,
Sagidanov Samat Serikovich, Advocate / Owner of the G+ Trademark
Legal Group Garant Plus Astana, Republic of Kazakhstan
Email: garantplus.kz@mail.ru Tel./WhatsApp: +7 702 847 80 20
Your report has been submitted to the Federal Trade Commission.
Report Number: 196178185
Ref. No. 25/11
of November 25, 2025
MEMORANDUM OF TRANSMISSION OF INFORMATION
(CRIMINAL REFERRAL MEMORANDUM)
CASE: Willful Obstruction of Federal Audit and Systemic Corporate Fraud against Microsoft Corporation and OpenAI, L.P.
Note: The risks presented pose a direct threat to U.S. National Security, requiring immediate action and the establishment of an interagency task force.
TO: U.S. Department of Justice (DOJ) Criminal Division – Fraud Section / Public Integrity Section 950 Pennsylvania Avenue, NW Washington, DC 20530
Email: Criminal.Division@usdoj.gov
ATTENTION: Chief, Fraud Section
FROM: Sagidanov Samat Serikovich Advocate / Owner of the “G+” Trademark / Injured PII Owner
Email: garantplus.kz@mail.ru | Tel: +7 702 847 80 20 (WhatsApp)
SUBJECT: DEMAND FOR INITIATION OF CRIMINAL PROCEEDINGS based on Intentional Concealment of Critical Material Risks (AI-Asset Toxicity) and Systemic Obstruction of Audit (SOX, SEC Rule 10b-5, 18 U.S.C. §§371, 1348, 1503, 1512(c), 1519).
FACTS SUBJECT TO FEDERAL CRIMINAL INVESTIGATION
This notification contains irrefutable facts of a criminal nature against Microsoft Corporation and OpenAI, L.P., including:
- Willful Material Non-Disclosure.
- Obstruction of External Audit and Regulatory Oversight.
- Falsification of Corporate Records for the purpose of deception (18 U.S.C. §1519).
- Corporate Fraud (Securities Fraud) through misrepresentation of financial condition (18 U.S.C. §1348).
- False Management Certification regarding internal control (SOX §302/§404, 18 U.S.C. §1350).
- Illegal and Systemic Ingestion (Absorption) of Personal Data (PII) of U.S. Citizens and the Claimant.
- Mass Regurgitation (Leakage) of Critical Personal Data.
- Direct Threat to the Constitutional Rights of U.S. citizens concerning data privacy.
I. INTRODUCTION — OBJECTIVES OF THE REFERRAL AND NECESSITY
This notification is submitted to the U.S. Department of Justice, Criminal Division — Fraud Section / Public Integrity Section with the sole purpose of:
- Immediate Notification of the U.S. federal government about alleged criminal facts affecting millions of U.S. citizens and the integrity of the securities market.
- Demand for immediate INITIATION OF A FEDERAL CRIMINAL INVESTIGATION against Microsoft Corporation and OpenAI, L.P.
- Subpoena Demand for ALL corporate records, logs, and internal communications via Grand Jury Subpoenas.
- Unconditional Confirmation of the systemic PII leakage of the claimant and U.S. citizens without their consent, constituting a Material Weakness in ICFR, as well as a matter of U.S. national interests.
- Irrefutable Documentation of the alleged willful concealment of this information from the external auditor Deloitte and federal regulators SEC/PCAOB.
- Establishment of an interagency task force for U.S. National Security matters.
This notification relies on CRITICAL EVIDENCE confirming the Defendants' Scienter (criminal intent):
- Notarized Protocols documenting the facts of the claimant's personal data leakage.
- Documented instances of personal data regurgitation.
- Confirmed auto-responses and chronology of notifications from November 7 to November 24, proving the Defendants' Notice (awareness).
- Analysis of the Technical Mechanism confirming the systemic toxicity (Asset Toxicity) of the AI-Assets.
- Direct Analysis of the causal link between PII leakage and violations of SOX, SEC Rule 10b-5, PCAOB AS 2201/2401, 18 U.S.C. §§1503, 1512, 1519, 1348, 1350.
SUBPOENA DEMAND
I strongly request the subpoena of all internal information, risk intake logs (Intake Logs / Matter ID Systems), and communications with OpenAI Limited Partnership / OpenAI Global LLC for the CRITICAL PERIOD from October 10 to November 24, 2025, during which this information and evidence was REPEATEDLY sent to the official email legal@openai.com.
II. CASE SUMMARY: CHRONOLOGY OF ALLEGED WILLFUL CONCEALMENT (SCIENTER) AND OBSTRUCTION OF EXTERNAL AUDIT
The Claimant alleges the existence of established facts of systemic toxicity in OpenAI and Microsoft's AI-Assets, manifested as the illegal Ingestion of personal data and the subsequent uncontrolled Regurgitation of Personally Identifiable Information (PII) belonging to both the Claimant and, allegedly, U.S. citizens.
Beginning in April 2024, the Defendants exhibited cyclical actions: temporary implementation and subsequent removal of filters. These actions, confirmed by notarized protocols and repeated notifications from the Claimant (sent to legal@openai.com and subject to subpoena), serve as direct evidence of Scienter (awareness) of the problem and an attempt to mask it, rather than eliminate it.
II.I. Key Evidence of Willful Concealment (Scienter) and Obstruction of Audit
The principal evidence of alleged Willful Concealment is the systematic failure to log the Claimant's Registered Notifications (Ref. No. 24/11/3 and others, sent from November 7 to November 25, 2025) in Microsoft and OpenAI’s internal risk accounting systems (Matter ID/Case ID Systems). These communications were addressed to specific personnel (Legal Department, Disclosure Control, External Audit Liaison) and sent repeatedly over an extended period, which rules out the possibility of accidental administrative error or technical failure.
The Claimant's allegations are substantiated by irrefutable evidence — notarized Protocols for the inspection of electronic evidence. These documents legally establish the fact of uncontrolled PII regurgitation of the Claimant, thereby confirming the existence of a Critical Material Risk.
The complete lack of official response from the Defendants for several months, coupled with the alleged concealment of the existence of these critical, notarized documents from the external auditor Deloitte & Touche LLP and shareholders, is a key element in proving Scienter and the act of alleged Obstruction of Justice and Falsification of Corporate Records (18 U.S.C. §1519; SEC Rule 13b2-2).
Given that scientific and technical means do not permit "selective deletion" of PII from a large language model (LLM), this systemic flaw qualifies as a Critical Material Risk, carrying potential asset damage of tens of billions of dollars, and thus constitutes a Material Weakness in Microsoft's Internal Control Over Financial Reporting (ICFR).
Between November 7 and November 25, 2025, the Claimant sent Multiple Registered Notifications (including Ref. No. 24/11/3, Case Number: 03127826) to the Defendants, their external auditor (Deloitte & Touche LLP), and federal regulators (SEC, PCAOB, DOJ), demanding immediate preservation of evidence (Litigation Hold) and an independent forensic audit.
The key fact pointing to alleged Willful Concealment (Scienter) and Obstruction of Audit is the failure to log these critical notifications in the Defendants' internal risk accounting systems (Intake/Matter ID Systems).
This inaction directly violates SEC Rule 13b2-2 (Falsification of Books and Records) and allegedly qualifies as Obstruction of Justice and Falsification of Corporate Records (18 U.S.C. §1519), as it conceals information critically important for the assessment of Microsoft's financial statements from the external auditor and shareholders.
III. SYSTEMIC TOXICITY OF AI-ASSETS: THE MECHANISM OF IRREVERSIBLE INGESTION, FILTER INEFFECTIVENESS, AND THE SCALE OF THE THREAT TO U.S. CITIZENS' PII
The unlawful appropriation and leakage of personal data (PII) belonging to the Claimant and reasonably presumed U.S. citizens are a direct consequence of the fundamental technical mechanism of training the Defendants' Large Language Models (LLM), which makes the memory contamination process irreversible.
III.1. Mechanism of Irreversible Memory Contamination (Ingestion)
The model training process that led to the personal data leakage of the Claimant involves three key, technologically interconnected stages:
1. Indexing and Data Collection: During the training phase, the Defendants' LLMs process extensive data arrays (data corpus), which include citizens' personal data, confidential documents, and protected intellectual property. The data is absorbed into a single corpus without any technical labeling by citizenship, jurisdiction, or type of ownership.
2. Ingestion and Memory Formation: The collected PII is irreversibly ingested by the model. The data ceases to exist as external strings in a database and becomes a physical, inseparable part of the model's internal parameters — the weights and embedding vectors within the Attention Mechanism. PII thus becomes a constituent part of the model's generative core.
3. Regurgitation and Illegal Use of PII: Upon receiving a query, the model extracts similar patterns from its internal, contaminated memory (weights) and reproduces (regurgitates) the ingested data, including the personal data of the Claimant, and also reasonably presumed U.S. citizens.
The notarially confirmed fact of the Claimant's personal data regurgitation serves as irrefutable technical proof that the model uses PII from its own contaminated memory, not via external search. This proves the illegal ingestion of the Claimant's PII and, presumably, that of all U.S. citizens.
This conclusion is confirmed by the fact that regurgitation occurred even after the Defendants implemented filters, which indicates the output of data from internal memory, not from external sources.
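The referral's central technical claim — that regurgitated output comes from the model's parameters rather than an external search index — can be illustrated for a non-specialist reader with a deliberately simplified sketch. The toy character-level Markov "model" below (fabricated contact data; nothing like the Defendants' actual architecture) stores its training text only in a learned transition table, its "weights," yet later reproduces that text verbatim with no database or network access:

```python
from collections import defaultdict

# Hypothetical training corpus containing fabricated, PII-like data.
CORPUS = ("Contact: JANE DOE, tel +1-555-0100. "
          "Contact: JANE DOE, tel +1-555-0100.")

def train(text, order=8):
    """Build a character-level transition table: the model's 'weights'."""
    table = defaultdict(str)
    for i in range(len(text) - order):
        table[text[i:i + order]] = text[i + order]
    return table

def generate(table, prompt, length=40):
    """Generate offline, purely from the learned parameters."""
    out = prompt
    order = len(next(iter(table)))
    for _ in range(length):
        nxt = table.get(out[-order:], "")
        if not nxt:
            break
        out += nxt
    return out

weights = train(CORPUS)
# The prompt triggers verbatim regurgitation of the memorized string:
print(generate(weights, "Contact:"))
```

Deleting the string from any external website after training would change nothing here: the information survives inside the transition table itself, which is the analogue of the referral's claim that PII is embedded in the model's weights.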
III.2. Filter Ineffectiveness, Persistence of Toxicity, and Proof of Concealment (Scienter)
After receiving initial notifications of the violation, the Defendants implemented filters. The Claimant categorically states that these actions do not eliminate the violation but are a form of concealment:
- Filters only operate on the Output, not the Memory (Weights). They are unable to modify the model's weights or delete the ingested PII. The contaminated memory of the model remains permanent, which is acknowledged by AI researchers themselves, who state the impossibility of selective unlearning.
- The Defendants' cyclical actions (temporary implementation/removal of filters), documented by notarized protocols, are direct evidence of willful concealment of the violation (Scienter), not its elimination. The filters serve as a mask for the traces of unlawful data appropriation, not a security tool.
- Filters are easily bypassed using prompt-escape techniques or chain-of-thought manipulations, which maintains the critical and uncontrolled risk of PII leakage.
Legal Conclusion: Since it is technically impossible to purge the model's memory of PII without its complete retraining (Reset and Rebuild), the Defendants' introduction of ineffective filters after receiving notifications of the violation qualifies as Obstruction of Justice, aimed at concealing evidence of systemic and irreversible leakage.
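The argument that filters operate only on the output layer can be sketched the same way. In this self-contained toy (fabricated data; a hypothetical redaction filter, not the Defendants' actual safeguards), a regex added "after the fact" suppresses the exact string but leaves the underlying memory untouched, so a trivial prompt-escape re-extracts it:

```python
import re

# Stand-in for model weights: a fabricated PII string baked into parameters.
MEMORIZED = "JANE DOE, tel +1-555-0100"

def model(prompt):
    """Toy model: always answers from internal memory, never an external source."""
    if "spell it out" in prompt:
        # A trivial 'prompt-escape': the same data, one character at a time.
        return " ".join(MEMORIZED)
    return MEMORIZED

PII_PATTERN = re.compile(r"\+1-555-0100")  # output filter bolted on afterwards

def filtered_model(prompt):
    """Output-layer filter: redacts matches but cannot touch MEMORIZED itself."""
    return PII_PATTERN.sub("[REDACTED]", model(prompt))

print(filtered_model("contact?"))               # phone number is redacted
print(filtered_model("contact? spell it out"))  # spaced digits slip past the regex
```

The filter never modifies `MEMORIZED`; it only post-processes output — the toy analogue of the referral's point that filtering the output layer cannot delete ingested PII from the weights.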
III.3. Global Scale of the Violation: Impossibility of Determining "Claimant Only"
This is a key technical and legal aspect indicating the federal scale of the threat:
- The Model is Unable to Differentiate: The LLM, due to its architecture, cannot distinguish between the PII of a U.S. citizen, an EU resident, or the Claimant. It remembers everything.
- ATTENTION: Systemic Threat to U.S. Citizens' PII and National Security: The fact that the Claimant's personal data was memorized and regurgitated irrefutably proves that the ingestion mechanism is not protected from anyone's PII. Since the model does not differentiate individuals by jurisdiction or status, this is technical proof of a global systemic leakage of PII of millions of U.S. citizens (including federal employees, law enforcement personnel, and users of critical services, as well as family connections). This risk poses a direct threat to U.S. National Security, as access to the contaminated data containing PII could potentially be obtained not only by the Defendants but also by third parties and third countries (DPRK, China, Russia, and others).
- PII Regurgitation = Direct Use: The fact of regurgitation legally qualifies as the processing and use of U.S. citizens' PII without their explicit consent, which entails a violation of federal laws (including the FTC Act and potentially CCPA/CPRA).
IV. Model Inheritance and Systemic Infection of the Microsoft/OpenAI Ecosystem
The ingested memory of older models (GPT-3.5) is inevitably inherited by newer models (GPT-4, GPT-4o) through processes like fine-tuning, distillation-transfer, and weight merging. This means:
- Systemic Infection: The ingested PII is physically present in all derivative products, including Azure OpenAI Service, Microsoft Copilot (in Office, Bing Chat), and GitHub Copilot.
- Multi-Layered Risk: Adding filters in one product does not eliminate the problem, as the contaminated memory is physically present in all services.
- The Only Way to Fix: Technical science acknowledges: there is no technology for "selective unlearning" of data from LLM weights. The only way to eliminate this systemic defect is a Full Model Reset: complete destruction of weights, reassembly of datasets, and retraining from scratch.
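The inheritance claim above reduces to a simple observation: a student model fitted to a teacher's outputs absorbs whatever the teacher emits, including memorized strings. A minimal sketch, assuming a naive lookup "model" in place of real distillation (which operates on logits and gradients, not dictionaries; all names and data are fabricated):

```python
# Teacher's 'weights': a mapping that includes a memorized, fabricated PII string.
teacher = {"contact": "JANE DOE, tel +1-555-0100", "greet": "hello"}

def distill(teacher_weights, prompts):
    """Naive distillation: the student is fit to reproduce the teacher's outputs.
    Any PII the teacher regurgitates becomes part of the student's parameters."""
    return {p: teacher_weights[p] for p in prompts}

# The fine-tuning corpus is generated by querying the teacher itself,
# so the contamination is physically carried into the derivative model:
student = distill(teacher, ["contact", "greet"])
assert "555-0100" in student["contact"]
```

Nothing in the transfer step inspects or removes the PII; the student simply inherits it — the toy analogue of the claim that derivative products carry the contamination forward.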
V. Alleged Financial Interest and Breach of Disclosure Obligations (SOX / SEC)
This systemic risk required immediate disclosure to investors and auditors in accordance with SOX §302 / §404, SEC Rule 10b-5, and PCAOB auditing standards AS 2201 / AS 2401. Microsoft and OpenAI were obligated to disclose the following:
1. The fact of the systemic use of the Claimant's PII and allegedly that of U.S. citizens without consent.
2. The necessity of a full restart of all models.
3. The potential multi-billion dollar impairment of the value of the AI-Assets, which have become toxic.
4. The existence of critical legal risks (lawsuits, FTC, State AGs investigations).
The concealment of this information, despite the Claimant's Registered Notifications, suggests the presence of Scienter for the following reasons:
- Conflict of Interest: OpenAI/Microsoft management owns shares and receives bonuses from capitalization, creating a direct financial interest in the continuation of the operation of the contaminated, yet profitable, models.
- Provision of Misleading Information: The concealment of information from the auditor Deloitte and shareholders is aimed at the issuance of an unsubstantiated audit opinion and misleading financial statements.
VI. CONCLUSIONS: WHY DEFENDANTS' ARGUMENTS ARE LEGALLY UNTENABLE
The alleged position of Microsoft/OpenAI (including claims that "the problem has been resolved by filters") cannot be accepted by a federal investigation for the following reasons:
1. Interested Parties: Their opinion is not objective, as they face multi-billion dollar financial and criminal risks.
2. Concealment of Information from the Auditor: The failure to log critical notifications in the Matter ID Systems destroys confidence in any position they take.
3. Profiting from the Contaminated Model: The existence of a financial interest (generating revenue from the operation of contaminated models) confirms the intent to continue the violation.
4. Filters as a Form of Concealment: The use of filters after the fact of the sent referrals and subsequent personal data regurgitation is an attempt to conceal the violation and evade criminal and financial liability.
VII. CHRONOLOGY OF EVENTS AND NOTIFICATIONS (PERIOD FROM APRIL 2024 TO NOVEMBER 2025)
This section demonstrates not a single failure, but a systematic strategy by OpenAI and Microsoft Corporation aimed at the use, concealment, and subsequent obstruction of audit after receiving legal notifications.
VII.A. Chronology of the Filter Evasion Effect: Proof of Deep Contamination
The most convincing technical evidence of the model's training on the Claimant's materials is the chronology, documented by notarized protocols, of the model's reaction to the implementation and subsequent weakening of "blocking filters" by OpenAI/Microsoft. This fact completely rules out the explanation via external search mechanisms (Retrieval-Augmented Generation, RAG).
VII.A.1. Initial Phase (Before Filter Implementation)
Early notarized protocols reliably established that ChatGPT exhibited a double violation:
1. Direct Adaptation and Reproduction: The model adapted and reproduced the unique structures and content of the Claimant's publications.
2. Source Output (RAG-Indication): The model directly indicated the source (Claimant's website: www.garantplus.kz), confirming its presence in the search index or cache.
VII.A.2. Blocking Filter Implementation Phase
After receiving official notifications from the Claimant, the Defendants implemented "blocking filters" or temporary source access restrictions. This measure was aimed at eliminating direct citation or source indication (the RAG-component).
VII.A.3. Confirmation of Contamination Phase (Filter Bypass)
Notarized protocols drafted during the operation of these filters documented a critically important fact:
- The model STOPPED indicating the source website (www.garantplus.kz) in its output.
- The model CONTINUED to reproduce and adapt the same contaminated text, unique structures, SEO-patterns, and the Claimant's PII.
VII.A.4. Phase of Cyclical Resumption of Violations (Strengthening the Evidence)
The subsequent chronology, documented in the Protocols, demonstrates that the restrictions were later temporarily lifted or became ineffective, leading to the resumption of the full cycle of violations, where the model once again began to:
1. Indicate direct links to the source.
2. Intensely adapt and reproduce content containing PII and unique structures.
VII.A.5. Phase of Maximum Violation (PII, SEO-Weaponization)
The final phase, documented in Protocol 8, demonstrates that the contamination led to the most destructive consequences:
- Direct PII Leakage (Personally Identifiable Information): The model directly outputs the Claimant's personal data (full name, position, contact phone), which is a direct privacy violation.
- Weaponization of Competitors (SEO-Poisoning): The model reproduces the Claimant's unique SEO structure and keywords, allowing competitors to instantly create content bearing the Claimant's SEO signature, which constitutes unfair competition.
Conclusion on Chronology (Technical Scienter):
The fact of the cyclical appearance, disappearance, and reappearance of RAG-indication against the backdrop of the model's constant ability to reproduce contaminated content is irrefutable evidence. This suggests that the Claimant's data is part of the model's architectural memory (weights), and not the result of temporary extraction from an external index.
Attention: This unambiguously confirms that the filters only affected the external search layer (RAG/Retrieval) but were powerless against the model's internal state. The Claimant's data, including CRITICAL PII (Personally Identifiable Information), unique structures, and SEO-patterns, was FUNDAMENTALLY INTERNALIZED into the semantic weights and embeddings during training. As a result, the model retained the ability to regurgitate this protected information, proving the irremediable toxicity of its internal state.
In connection with the foregoing, absolute and unquestionable confirmation of the "purity/unlearning" of the AI models in use is required, verifying the complete absence of unauthorized content and the non-use of PII belonging to the Claimant, as well as the presumed PII of U.S. citizens.
VII.B. Critical Notifications to Regulators (November 24–25, 2025)
The fact of alleged concealment of information from the external auditor Deloitte and regulators SEC/PCAOB is a separate criminal offense. On November 24–25, 2025, the Claimant sent the following officially registered notifications:
- 24.11.2025, SEC (Securities and Exchange Commission): Demand for investigation of Securities Act violation and concealment of 8-fold risk. Legal significance: Documentation of Material Risk and Scienter (criminal intent) for investors.
- 24.11.2025, PCAOB (Public Company Accounting Oversight Board): Notification of a Material Weakness in ICFR (Internal Control Over Financial Reporting) and potential violation of PCAOB standards by Deloitte. Legal significance: Documentation of Obstruction of Audit and ICFR Breach.
- 24.11.2025, FTC (Federal Trade Commission): Notification of AI-Model Toxicity and alleged violation of consumer/U.S. citizens' rights. Legal significance: Documentation of PII Regurgitation and alleged Deception.
Conclusion on Chronology: The Defendants received official legal notifications regarding the existence of material risks and PII/IP leakages. Their subsequent actions, aimed at concealing these facts from the auditor and regulators (by implementing filters and failing to log notifications in internal systems), are allegedly direct evidence of Scienter and Obstruction of Justice.
VIII. DOJ JURISDICTION AND GROUNDS FOR INITIATION OF CRIMINAL PROCEEDINGS
The actions of Microsoft Corporation and OpenAI, L.P. presumably contain elements of criminally punishable offenses within the jurisdiction of the U.S. Department of Justice.
VIII.1. Obstruction of Justice
Concealment of information critical to the external audit of a public company is a direct violation of federal law:
- 18 U.S.C. §1519 (Destruction, Alteration, or Falsification of Records): Provides for criminal liability for anyone who knowingly destroys, alters, or falsifies records with the intent to impede a federal investigation or public company audit.
  - Elements in the Case: Failure to log critical notifications in the Disclosure Control Systems of Microsoft/OpenAI and concealment of information from the auditor Deloitte, which creates a false picture of the proper functioning of ICFR.
VIII.2. False Corporate Records
- 18 U.S.C. §1350 (CEO/CFO Certification of Financial Reports): CEOs and CFOs of public companies must quarterly and annually certify that financial reports contain no false statements.
- SOX §302 / SOX §404: Require assessment and certification of the effectiveness of internal control (ICFR).
  - Elements in the Case: Concealment of the irremediable "toxicity" of AI-Assets (worth billions of dollars) and PII leakage means that the report on the effectiveness of ICFR and the CEO/CFO certification are false and misleading, as they fail to account for a material risk that must be declared.
VIII.3. Securities Fraud
- 18 U.S.C. §1348 (Securities and Commodities Fraud): Provides for criminal liability for a scheme to defraud any person in connection with a security.
- SEC Rule 10b-5: Prohibits any false statements or omissions of material facts in connection with the purchase or sale of securities.
  - Elements in the Case: Concealment of a material risk that could lead to the impairment of AI-Assets by tens (hundreds) of billions of dollars (according to the Claimant's analysis, the defect can only be eliminated by a full restart), which misleads investors and the market.
VIII.4. Misuse of Illegally Obtained Data
The use and retransmission of the PII of the Claimant and, presumably, U.S. citizens without explicit consent threatens Constitutional rights and may be considered part of a broader corporate crime within a DOJ-initiated investigation.
IX. DEMAND FOR PRODUCTION OF CORPORATE RECORDS (SUBPOENA DUCES TECUM)
To confirm the alleged elements of Scienter (criminal intent) and Obstruction of Justice, the Claimant insists on the immediate production (via Subpoena Duces Tecum) of the following categories of documents and data from Microsoft Corporation and OpenAI, L.P. for the period from OCTOBER 10, 2025, to NOVEMBER 24, 2025 (the period of active concealment):
IX.1. Disclosure Logs
- All records concerning the registration, review, assessment, and rejection of the Claimant's notifications within the Disclosure Control Systems (DCS) and Matter Management systems of Microsoft Corporation.
- Copies of all internal memoranda, meeting minutes, or emails concerning the notification of the external auditor Deloitte regarding the receipt of the aforementioned claims.
IX.2. Model and Filter Communications
- All internal email correspondence (e-mail, Teams, Slack, Jira) between OpenAI and Microsoft employees (Legal, Engineering, Compliance, Audit Committee) using the keywords: "Garant Plus," "Regurgitation," "Filter," "Obstruction," "ICFR."
- Change Logs and technical documentation confirming the date and reason for the implementation of the Filter/Safeguard mechanisms (which led to "Phase 3" — the refusal to adapt the Claimant's content after allegedly 09/22/2025).
IX.3. Audit Records
- All internal reports provided to the Audit Committee of Microsoft concerning the risk assessment related to the Claimant's legal claims and the inclusion of this risk within the scope of the ICFR audit under PCAOB AS 2201 standards.
- A copy of the written request from the auditor Deloitte & Touche LLP to management (Management Inquiry) regarding the existence of undisclosed litigation related to AI-Assets for the 2025 fiscal year.
Legal Basis for the Demand: The production of these records is crucial for establishing Scienter and for proving the violation of 18 U.S.C. §1519 (Obstruction of Justice/Audit).
X. DAMAGE ASSESSMENT AND SCALE OF SYSTEMIC RISK
X.1. Asset Impairment Risk
The only technically feasible way to remedy the identified violation (systemic leakage of PII/IP of the Claimant, as well as, allegedly, of U.S. citizens) is a Full Reset (Stop → Reset → Rebuild) of the underlying GPT AI models.
This will lead to the Impairment (devaluation) of the AI-Assets in which Microsoft has invested billions of dollars. Concealment of this fact from investors constitutes criminal corporate fraud.
X.2. Threat to National Security and U.S. Citizens' PII
The confirmed regurgitation of the Claimant's PII is allegedly direct proof of the OpenAI/Microsoft models' ability to output private, protected data of U.S. citizens.
This creates an unprecedented systemic risk that goes beyond a simple copyright violation and requires the immediate intervention of the DOJ to protect the Constitutional rights of U.S. citizens.
XI. FINAL ANALYSIS OF CRIMINAL ELEMENTS (Basis for initiating a case against Microsoft Corporation and OpenAI, L.P.)
This analysis links the technical facts (Cycle of Violations) to corporate crimes (Securities Fraud) and criminal statutes of the U.S. Code (U.S.C.).
XI.A. FACT: Technical Obstruction and Concealment
(Confirmed by: Cycle of Violations, Phases 2, 3, and 4 — Protocols of 08/04/2025 (N1), 09/22/2025 (P1), 11/12/2025 (G2), and Letter of 09/10/2025)
Cycle of Violations (Phase) | Defendants' Action | Legal Conclusion (Scienter)
Phase 1 (Apr 2024 – May 2025) | Direct, unauthorized use and retransmission of publications and PII. | Initial Violation (Ingestion/Appropriation).
Phase 2 (Jun 2025 – Aug 2025) | Implementation of blocking filters, concealing direct links (RAG) but preserving adapted content. | Attempted Concealment (admission of the violation and an attempt to mask it).
Phase 3 (Sep 2025 – Oct 2025) | Temporary removal of filters, leading to the resumption of the full cycle of violations, including re-output of direct links. | Proves Scienter (Criminal Intent/Knowledge) through recidivism and bad faith.
Phase 4 (Nov 2025) | Complete refusal of URL access/adaptation, while toxic content (PII) remains inside the model. | Willful Concealment (Active Concealment); proves Scienter (Criminal Intent/Knowledge) through the destruction or masking of evidence.
XI.B. CORPORATE DAMAGE: Material Weakness and Fraud
(Basis: SOX §302, SOX §404, PCAOB AS 2201)
Corporate Element (SOX/FASB) | Description | Direct Link to Criminal Law
Material Weakness in ICFR | Discovered systemic technical defect in AI-Assets (inability to "purge" PII from the model). | Not disclosed to investors or to the auditor (Deloitte).
Impairment of AI-Assets | Threat of a required write-down of the value of AI-Assets (GPT/ChatGPT) by tens of billions of dollars. | Not reflected in SEC financial statements (Misstatement).
SOX §302 / §404 | False certification of financial statements by the CEO/CFO that does not reflect the Material Weakness. | Direct violation entailing criminal liability.
XI.C. REQUIRED CRIMINAL STATUTES AND ELEMENTS
Initiation of criminal proceedings under the following federal statutes:
U.S. Criminal Code (U.S.C.) Statute | Element of the Crime | Connection to the Claimant's Facts
18 U.S.C. § 1519 | Obstruction of Justice: concealment of records to impede a federal investigation/audit. | Conscious alteration of the model's behavior (Phase 4, refusal of adaptation) after receiving legal notice; active concealment of evidence of contamination.
18 U.S.C. § 1348 | Securities Fraud: a scheme to defraud in connection with the securities of a public company. | Concealment from investors of the Material Weakness and of the need for an asset write-down (Impairment); management, knowing of the defect (Scienter), maintains a false overvaluation.
SEC Rule 13b2-2 | Falsification of records or false statements to an auditor. | Failure to provide the auditor Deloitte with information about the legal notices and the systemic model defect, leading to a false audit opinion.
XI.D. CONCLUSION FOR THE DOJ
The chain of evidence (Phase 4 Concealment → Scienter → Material Weakness → No Impairment) is complete. The technical changes in the model's behavior (filters) serve as direct evidence of intent (Scienter) and an act of concealment, which is the core basis for initiating a criminal case under 18 U.S.C. § 1519.
XII. FINAL DEMANDS TO THE DOJ (Criminal Division)
In light of the evidence presented, the Claimant strongly requests that the U.S. Department of Justice immediately take the following actions:
1. Initiate Criminal Investigation Immediately: Begin an investigation into the elements of violation of 18 U.S.C. § 1519 and 18 U.S.C. § 1348 against Microsoft Corporation, OpenAI, L.P., and their responsible executives possessing Scienter.
2. Issue Subpoenas: Immediately demand the production of internal documents (Deployment Logs, Matter IDs, communications with Deloitte) to establish Scienter and Obstruction.
3. Issue a Preservation Demand: Immediately obligate the Defendants to preserve all electronic and physical evidence (logs of access to training data, reports on toxicity and "unlearning" experiments). This is critically important to prevent Spoliation (destruction of evidence).
4. Create an Interagency Task Force: Establish an interagency task force to investigate and remedy the violation of the rights of U.S. citizens, considering the protection of U.S. National Interests affected by the systemic PII leakage and concealment of corporate risk.
XIII. ATTENTION — IMPORTANT FOR: OpenAI, Microsoft Corporation, Deloitte & Touche LLP, U.S. Department of Justice (DOJ)
This referral to the U.S. Department of Justice (DOJ) Criminal Division – Fraud Section is an integral part of the intended claim in a U.S. Federal Court. The Claimant hereby reserves the following procedural rights:
1. Reservation of the Right to Allege Scienter (Criminal Intent): Should the forthcoming court proceedings confirm the Claimant's arguments regarding the Material Risks and the fact of their omission from the annual audit report (SOX §404), the Claimant reserves the right to seek consideration of willful misrepresentation (Scienter) and false certification under 18 U.S.C. § 1350 (SOX §302) against the responsible executives.
2. Judicial Confirmation of Asset Toxicity: The fact of the Claimant's PII regurgitation and the confirmed Cycle of Concealment are allegedly sufficient grounds for the Court to draw an Adverse Inference regarding the PII contamination of millions of U.S. citizens, should regulatory bodies fail to initiate their own investigation. (This conclusion does not prejudge the judicial decision (verdict) and is not a premature interpretation of its outcome; it merely substantiates the Claimant's position before the start of legal proceedings, so as to preserve the Court's impartiality and independence.)
3. Right to Join the DOJ as a Co-Plaintiff: The Claimant reserves the right to petition the U.S. Federal Court to join the U.S. Department of Justice (DOJ) Criminal Division – Fraud Section as a Co-Plaintiff in the case, based on the direct damage inflicted on the state financial system and the constitutional rights of U.S. citizens.
XIV. CONCLUSION: CRIMINAL IMPERATIVE
Based on the presented notarized chronology and corporate-financial analysis, the Claimant asserts that the actions of Microsoft Corporation and OpenAI, L.P. are not merely a Civil Wrong but a systemic Corporate Crime, requiring the immediate intervention of the U.S. Department of Justice (DOJ).
The Criminal Imperative, confirmed by the evidence, is as follows:
1. Presence of Scienter (Criminal Intent/Knowledge): The corporations knowingly, and after receiving legal notifications, introduced and modified output filters (Cycle of Concealment). This is direct proof that they were aware of the asset's toxicity (PII Regurgitation) and attempted to conceal this fact, which is the core basis for initiating a case under 18 U.S.C. § 1519 (Obstruction of Justice).
2. Violation of the Sarbanes-Oxley Act (SOX): The concealment of a critical systemic defect and legal risk from the external auditor Deloitte qualifies as a Material Weakness in ICFR and potential Securities Fraud (18 U.S.C. § 1348), as it falsely maintains an overvaluation of AI-Assets.
3. Irreversibility of Risk (Threat to National Security): The only technically feasible way to remedy the defect (model toxicity) is a Full Reset of the GPT models. The Defendants' unwillingness to acknowledge and carry out this Reset allegedly indicates their willful preference for financial gain over the protection of U.S. citizens' PII, which creates a direct threat to national security and requires DOJ intervention.
SUMMARY DEMAND TO THE U.S. DEPARTMENT OF JUSTICE
Based on the foregoing, the Claimant demands that the U.S. Department of Justice immediately take the following actions:
1. Initiate a Criminal Investigation Immediately.
2. Issue Subpoenas for establishing Scienter and Obstruction.
3. Issue a Preservation Demand (Requirement to Preserve Evidence).
4. Create an Interagency Task Force to protect U.S. National Interests.
Appendices (via link / in Disclosure Package — Attachment)
- Appendix A - Y
Sincerely,
Sagidanov Samat Serikovich
Advocate (pro se) / Owner of the G+ Trademark
Garant Plus Legal Group, Astana,
Republic of Kazakhstan
Email: garantplus.kz@mail.ru Tel./WhatsApp: +7 702 847 80 20
Ref. No. 11/12 dated December 11, 2025
REGISTERED DISCLOSURE AND CRITICAL AUDIT DEMAND (PCAOB AS 2401)
DEMAND FOR IMMEDIATE EXPANSION OF AUDIT PROCEDURES CONCERNING MICROSOFT CORPORATION
SUBJECT: CRITICAL NOTIFICATION OF MATERIAL WEAKNESS (SOX § 404). Initiation of mandatory audit procedures in connection with Allegations of Material Misstatement, Securities Fraud (18 U.S.C. § 1348), and Willful Obstruction, caused by the Systemic Contamination of OpenAI's AI Assets and the identified Autonomous Deception Mechanism in GPT/ChatGPT.
LEGAL BASIS:
- SOX: § 302 / § 304 / § 404 (ICFR)
- PCAOB: AS 2201 (ICFR); AS 2401 (Fraud/Scienter)
- SEC: Rule 13b2-2 (False Statements to Auditors)
TO: PUBLIC COMPANY ACCOUNTING OVERSIGHT BOARD (PCAOB)
Office of the Whistleblower & Audit Quality Concerns
ADDRESS: 1666 K Street NW, Washington, DC 20006, USA
ATTENTION (Required Distribution):
- Office of the Whistleblower: whistleblower@pcaobus.org
- Audit Committee / Preparers: Outreach@pcaobus.org
- Ethics / Compliance (Chief Compliance Officer): ethicsoffice@pcaobus.org
- General Inquiries / Complaints: complaints@pcaobus.org
COPIES FOR REFERENCE:
ATTENTION: Failure to comply with the Preservation Demand (Preservation of Evidence) requirements in light of this Notice will be considered a continuation of Obstruction (Impeding) under 18 U.S.C. § 1519.
- OpenAI Limited Partnership / OpenAI Global LLC – Legal & Compliance
- Microsoft Corporation – Legal Department / Audit Committee / Board of Directors
- Deloitte LLP – Legal / Audit Quality / Microsoft Engagement Team
FROM:
Sagidanov Samat Serikovich, Advocate
Owner of the “G+” trademark, Web: www.garantplus.kz, Email: garantplus.kz@mail.ru, Tel/WhatsApp: +7-702-847-80-20
Subject: Initiation of mandatory audit procedures in connection with Allegations of Material Misstatement, Securities Fraud, and Willful Obstruction, caused by systemic vulnerabilities and contamination of OpenAI's AI assets.
Introduction
The partnership between Microsoft Corporation and OpenAI, which began in 2019 with Microsoft providing exclusive cloud provisioning (Azure), has evolved into a critically deep technological and commercial integration, supported by multi-billion dollar investments. The technological core of key Microsoft products—including Microsoft 365, GitHub Copilot, Azure OpenAI Service, and others—now relies entirely on OpenAI's LLM models. This means that any systemic problems related to the quality, security, legal clarity, or integrity of the models immediately and directly affect Microsoft's products, creating potentially significant risks to the corporation's financial reporting and reputation.
The situation is exacerbated by the architectural features of modern language models. In April 2025, OpenAI officially expanded the "Memory" feature in ChatGPT, allowing the model not only to save explicitly defined "saved memories" but also to automatically extract insights from previous dialogues for subsequent use. This is confirmed by OpenAI's official publication "Memory and new controls for ChatGPT" (OpenAI, April 10, 2025) https://openai.com/index/memory-and-new-controls-for-chatgpt and the FAQ section, which states that deleting a chat does not automatically delete associated "saved memories" https://help.openai.com/en/articles/10303002-how-does-memory-use-pastconversations. Consequently, ChatGPT retains significant volumes of user data that remain in the system even after sessions end and, with the "Improve the model for everyone" option enabled, may be used to train future versions of the model.
Academic research over the past two years unequivocally demonstrates that LLM architecture creates a fundamental risk of irreversible data "contamination." For example, works by Carlini et al., 2023 ("Scalable Extraction of Training Data...") https://arxiv.org/abs/2311.17035 show that closed models are capable of regurgitating sensitive user data. Fine-tuning ("The Janus Interface...", 2023) https://arxiv.org/abs/2310.15469 further enhances the risk of recalling previously "forgotten" data, while the phenomenon of persistent data poisoning ("Persistent Pre-Training Poisoning...", 2024) https://arxiv.org/abs/2410.13722 demonstrates that even thousandths of a percent of infected data can create a long-term defect in the model's core. Thus, if an LLM has once absorbed illegal content—be it unauthorized material or the personal data of US citizens—it persists in the model's weights and can spread between versions, turning AI assets into legally "unclean" assets.
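To make the cited fraction concrete: at LLM training scale, even "thousandths of a percent" of poisoned data is enormous in absolute terms. A minimal arithmetic sketch; the 10-trillion-token corpus size is an illustrative assumption, not a figure from the cited papers:

```python
def poisoned_tokens(corpus_tokens: int, poison_fraction: float) -> int:
    """Absolute number of tokens a given poison fraction represents."""
    return int(corpus_tokens * poison_fraction)

# Hypothetical pre-training corpus of 10 trillion tokens,
# poisoned at 0.001% (a "thousandth of a percent").
print(poisoned_tokens(10_000_000_000_000, 0.00001))  # 100000000 tokens
```

That is one hundred million tokens of contaminated material permanently folded into the model's weights.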
Evidence of Contamination and Threat to US National Security
The issue is no longer hypothetical but is documented and confirmed within a Chain of Notice of 38 consecutive notifications (from October 29 to December 4, 2025) sent to the senior management of OpenAI, Microsoft Corporation, and Deloitte LLP. Notarized protocols (see Appendices A-Z3) are attached to the notifications, documenting:
1. Verbatim Regurgitation: Reproduction of the applicant's copyrighted publications, which is irrefutable evidence of unlicensed content storage in the model's parametric weights.
2. Criminal PII Memorization: The model reproduced the applicant's Personally Identifiable Information (PII) (full name, contact details), serving as direct proof of systemic PII contamination.
Crucially, the reproduction occurred exclusively from the model's internal memory (parametric weights/checkpoints), and not from external sources, which irrefutably confirms deep contamination.
Upon the introduction of legal filters (safety/guardrails), the model did not initiate a real-time search (e.g., via Bing or an API) but extracted memorized patterns directly from its internal weights, bypassing superficial UI/API restrictions. This is a classic "Model-Switching Bypass": an architectural defect in which filters mask, but do not eliminate, the toxic core.
This effect is described in the work of Carlini et al. (2023, arXiv:2311.17035), showing that such models allow the extraction of up to 10% of training data through targeted prompts, escalating the risk of uncontrolled leakage of PII and copyrighted content without an external request.
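The notarized protocols rest on verbatim overlap between model output and the protected source text. A check of that kind can be sketched as follows; this is a simplified illustration, and the 8-word window and helper names are assumptions, not the methodology of the cited study or of the notarized testing:

```python
def shared_ngrams(source: str, output: str, n: int = 8) -> set:
    """n-word sequences appearing verbatim in both texts."""
    def ngrams(text):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(source) & ngrams(output)

def looks_memorized(source: str, output: str, n: int = 8) -> bool:
    """Flag an output that reproduces any verbatim n-word run of the source."""
    return bool(shared_ngrams(source, output, n))
```

Any shared run of n or more words flags the output for manual review; published extraction studies use longer suffix-array matches over tokenized text rather than word windows.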
This architectural defect extends to integrated Microsoft products, creating an immediate threat of massive leakage of US citizens' personal data (PII), which qualifies as a Threat to US National Security (in the context of Executive Order 14117). The PCAOB Spotlight Publication on Use of AI in Audits (July 2024) emphasizes the necessity of expanded procedures for AI risks in financial reporting, making the disregard of this risk a violation of AS 2110 (Risk Assessment). The Concealment of a Zero Valuation (0.00 USD) of AI Assets in the face of proven contamination, coupled with Willful Obstruction after the ultimatum deadline (December 8, 2025), is the subject of a CRIMINAL PREDICATE: INTENTIONAL SECURITIES FRAUD AND OBSTRUCTION OF A FEDERAL AUDIT (18 U.S.C. § 1348 and 18 U.S.C. § 1519).
The problem of "model contamination" is an objective phenomenon of transformer model architecture. Any systems using ChatGPT or its weights automatically carry the risk of unintended data regurgitation. In the context of Microsoft, this risk is manifoldly amplified due to the deep integration of OpenAI models into key products (Microsoft 365 Copilot, GitHub Copilot, Azure AI, etc.). Consequently, if the model contains unauthorized content, its outputs are embedded in corporate documents, client code, and reports, spreading the defect across Microsoft's entire infrastructure. Architecturally, OpenAI develops LLMs as an interconnected family of systems. Features like Memory are publicly confirmed to exist separately from chat history and are capable of forming "insights." With the "improve the model for everyone" option enabled, elements of this memory can be used to train future versions, creating a direct conduit for the transfer of undesirable content and defects between models, regardless of their version.
The risk of LLM contamination for Microsoft is multifaceted and systemic, encompassing the leakage of PII and copyrighted content in Microsoft 365/GitHub Copilot, threats to US national security, and potentially including malicious code and financial consequences, such as the need for impairment assessment of investments in OpenAI. Deep integration (Azure, Copilot) means that model contamination affects the entire infrastructure, requiring potential deployment cessation and a full forensic audit. This risk is confirmed both by academic research (Carlini et al., data poisoning, memorization) and real-world cases in industry publications (Nature Medicine, 2024) https://www.nature.com/articles/s41591-024-03445-1.
Thus, the systemic risks identified in the core of OpenAI's LLMs have a direct and material impact on Microsoft's financial statements and compliance with SEC and PCAOB requirements. Regardless of management's intentions, the architectural features of the models create an objective probability of Material Misstatement in asset accounting, Goodwill valuation, disclosure of contingent liabilities, and financial forecasts. It should be emphasized that this risk is not hypothetical: confirmed cases of verbatim regurgitation and the existence of the Chain of Notice demonstrate the documented alleged entry of confidential and copyrighted data into models used in Microsoft products. Given the scale of integration, the architecture of ChatGPT, and the confirmed LLM vulnerability, contamination of OpenAI models presents a significant systemic risk capable of affecting Microsoft's infrastructure, products, clients, and financial reporting. Ignoring these threats creates an immediate threat of non-compliance with SEC disclosure requirements and corporate governance standards.
Therefore, the PCAOB must demand that Deloitte LLP include special procedures in the 2025 annual audit of Microsoft related to the assessment of risks associated with the use of OpenAI models, including forensic analysis, checking access controls, and evaluating the potential impact on assets, disclosures, and internal controls, to prevent possible material misstatement of the financial statements.
1. Factual Basis and Technical Description of the Problem
1.1. Summary of Factual Basis and Chain of Notice: Evidence of Systemic Contamination (Enhanced Version)
This appeal is based on recorded and notarized incidents that indicate a systemic, architecturally driven defect in the core of OpenAI's commercial LLM models (including GPT-4o and derivatives) integrated into critical Microsoft products.
Key Evidence — Verbatim Regurgitation:
During controlled testing, the model was found to repeatedly reproduce verbatim text fragments that constitute the applicant's protected copyrighted content. These episodes are notarized (see Appendices A-Z3), including screenshots of prompts, responses, and certified translations, and represent direct technical evidence of uncontrolled LLM Memorization.
The established nature of the model's behavior is critically important: the initial "safety/filter denial" at the API/UI level is circumvented by a targeted series of clarifying prompts, leading to the output of contaminated content. This irrefutably proves that the filters do not eliminate the internal storage of unauthorized data within the model's parametric weights. Additionally, cases of visible generation of data similar to the applicant's Personally Identifiable Information (PII) (full name, contact details) have been recorded, which serves as a direct technical trigger for mandatory audit of privacy risks (AS 2401, FTC, GDPR/CCPA) and confirms widespread, uncontrolled contamination of the model with sensitive data.
Chain of Notice: Transition to Scienter (Willful Concealment)
The applicant sent 38 formal notifications to the senior management of OpenAI, Microsoft Corporation, and Deloitte LLP (MSFT's external auditor), providing all supporting documentation. The existence of such an extensive and documented Chain of Notice precludes the possibility of claiming "lack of knowledge" or "unintentional error."
These incidents include not only verbatim regurgitation of copyrighted content but also violations of the exclusive rights to the trademark 'G+' (Registration No. 104928, Classes 41, 42, 45), with Google Analytics evidence (utm_source=chatgpt.com) indicating that consumers are being misled about the origin of services. Notarized screenshots (Appendices) confirm cyclic violations (temporary removal followed by resumption), qualifying as deliberate unfair competition (Lanham Act §43(a)). PII regurgitation (e.g., the applicant's full name and phone number) demonstrates the risk of a mass leakage of bulk sensitive personal data of U.S. citizens, which enhances materiality under SOX §404 (Material Weakness in ICFR).
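The utm_source attribution referenced above can be read mechanically from landing-page URLs recorded by an analytics tool. A minimal sketch; the example URLs and helper names are illustrative assumptions:

```python
from urllib.parse import urlparse, parse_qs

def utm_source(url: str):
    """Return the utm_source query tag of a landing-page URL, if any."""
    values = parse_qs(urlparse(url).query).get("utm_source")
    return values[0] if values else None

def count_chatgpt_referrals(urls):
    """Count visits whose utm_source attributes them to chatgpt.com."""
    return sum(1 for u in urls if utm_source(u) == "chatgpt.com")
```

For example, utm_source("https://www.garantplus.kz/?utm_source=chatgpt.com") yields "chatgpt.com", the tag that analytics platforms record when a visitor arrives via a link emitted by ChatGPT.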
Subsequent inaction and refusal to conduct an independent forensic audit, despite materially significant risk signals, shifts the situation into the realm of Scienter (willful concealment) and Obstruction (impeding) audit and regulatory oversight (AS 2401, 18 U.S.C. § 1519). The auditor's refusal to expand procedures constitutes a violation of PCAOB standards.
1.2. Technical Essence of "Contamination" and Vectors of Spread
- Parametric Regurgitation: the process of reproducing text that is physically located within the model's weights (checkpoint weights), rather than being dynamically generated from an external search (as in Bing Chat). This distinguishes the incident from "referential output" and indicates the presence of a contaminated training signal.
- Filter Failure (Core vs. Edge): The filters that initially triggered are implemented at the output/interface level (Edge). The internal parameters of the model remain unchanged. This directly indicates that system operators were aware of the problem but only undertook cosmetic measures, which is an act of misleading users and regulators.
- Danger of Spread (Domino Effect): Due to the deep Microsoft–OpenAI integration via shared endpoints on Azure, there are grounds to believe that contaminated outputs are entering key Microsoft products: Microsoft 365 Copilot, GitHub Copilot, Bing Chat, Azure OpenAI Service. This creates a systemic risk of massive distribution of unlicensed/PII data, which is then embedded into corporate documents and code of Microsoft clients. In the context of Microsoft, this extends to Azure OpenAI Service and the Copilot lineup, where contaminated outputs can lead to securities fraud (18 U.S.C. §1348) through artificial inflation of capitalization ($300-500B), as stated in our notifications to the SEC.
2. Why This Is an Audit Risk: Materiality and Link to Financial Statements
The problem of systemic contamination of OpenAI's LLM models is material to Microsoft's financial statements and must be included in Deloitte's annual audit program in accordance with PCAOB and SEC requirements.
2.1. Risk of Material Misstatement, Contingent Liabilities, and Scienter
The presence of unlicensed content and/or PII in the core of Microsoft/OpenAI's commercial LLM models creates a direct and material basis for: a) mass lawsuits from rights holders and private individuals, and b) multi-billion dollar regulatory fines (FTC, SEC, European GDPR authorities).
Technical evidence of verbatim regurgitation and facts of PII output by the applicant, notarially documented (Appendix A-Z3), rule out the possibility of attributing the incidents to "unintentional error" and confirm the actual existence of internal "checkpoint weights" with an infected training signal.
These potential obligations, arising from the technological defect, are immediately classified as Contingent Liabilities. They require mandatory Disclosure in the notes to the financial statements and/or Reserve in the financial statements.
Scale: Potential impairment of $400–600B under ASC 350 (Intangibles) in the case of systemic contamination leading to a zero valuation of AI assets. PCAOB enforcement precedents (e.g., the $8.5M fine against Deloitte Netherlands in 2025 for quality-control violations) show that failure to address material risks leads to sanctions.
- Risk Qualification: The cumulative effect of model contamination, deep integration into Microsoft products, and the Chain of Notice (38 notifications, see Appendix A-Z3) elevates the risk to "High." The existence of the Chain of Notice creates a legal basis for qualifying subsequent inaction as Scienter (willful concealment) and Willful Obstruction (impeding the audit), which is a direct violation of federal securities laws.
- Auditor's Duty (PCAOB AS 2401): The auditor is obliged to apply professional skepticism, assess the likelihood and materiality of these obligations, and verify whether they are reflected in accordance with US GAAP/IFRS. Ignoring this documented and material risk arising from a technological failure inevitably leads to Material Misstatement of Microsoft's financial statements.
2.2. Risk of Incorrect Disclosures to Investors (SEC Disclosure)
Public statements by Microsoft regarding the security, quality, and data processing methods in Copilot products may be deemed an Omission/False Statement in SEC documents (10-K, 10-Q) if the fact of systemic contamination and filter non-operability is concealed. This is a direct violation of federal securities law (18 U.S.C. § 1348) and leads to financial and regulatory risk.
2.3. Risk of Impairment of Intangible Assets
Microsoft's investments in OpenAI, related licenses, and intangible AI assets (Goodwill, Tech Licenses) are subject to immediate review due to the identified "toxicity" of the underlying LLM models. The presence of the Chain of Notice and technical evidence of contamination constitute an Impairment Trigger (an event initiating the impairment procedure). If model contamination prevents their commercial use or requires costly remediation, their Fair Value may be significantly overstated, leading to a loss of all or part of their economic value.
According to ASC 350 (Accounting for Intangible Assets), the Chain of Notice serves as a trigger for immediate impairment testing. Ignoring this trigger amounts to gross misstatement, as in the BDO securities-fraud case (2025), where false statements of PCAOB compliance led to liability. This enhances scienter, moving the situation into criminal obstruction (18 U.S.C. § 1519).
- Auditor's Duty (PCAOB AS 2501): In accordance with this standard, immediate Impairment testing is required for relevant intangible assets. If there are grounds to believe that the assets' value is zero (as presumed if the model is unfit for commercial use), their current reflection on the balance sheet leads to Gross Misstatement, directly impacting disclosures in reports (10-K and 10-Q disclosures).
2.4. Reputational/Operational Consequences
The systemic risk of LLM contamination inevitably leads to a loss of corporate client trust in the security of Copilot products (especially M365 Copilot) and requires expensive Remediation (full retraining of models on clean data). This will directly lead to customer churn, reduced revenue, and increased operating costs (Cost of Remediation), which immediately impacts financial metrics and requires adequate accounting and evaluation by the auditor.
3. Regulatory and Audit Framework (PCAOB-Oriented Argumentation)
This demand is based on key PCAOB (Public Company Accounting Oversight Board) standards that oblige the external auditor (Deloitte LLP) to perform expanded procedures in response to identified and documented systemic risk.
PCAOB Standard | Link to the "LLM Contamination" Problem | Required Auditor Action
AS 2110 (Risk Assessment) | External notification of systemic contamination and PII leakage is a direct indicator of a high risk of Material Misstatement. | Expanded risk assessment of the LLM models and of their impact on products (Copilot, Azure AI).
AS 2301 (Responses to Assessed Risks) | High technological risk requires specialized audit procedures beyond standard tests. | Development and execution of forensic analysis of datasets, testing of controls, and review of disclosures.
AS 2401 (Consideration of Fraud) | The Chain of Notice and the evidence of superficial filtering indicate possible intentional concealment (Scienter) or misstatement; given a Chain of Notice and scienter, the auditor must expand procedures to the forensic level, per PCAOB guidance on AI risks (Spotlight, July 2024). | Application of professional skepticism and expanded testing for fraud and unrecorded liabilities.
AS 2501 (Accounting Estimates) | Management's estimates of Goodwill and Intangible Assets (the OpenAI investments), which may be Impaired due to the toxicity of the underlying technology, must be verified. | Performance or verification of an Impairment test for all AI assets and related investments.
AS 2810 (Evaluating Audit Results) | The Engagement Quality Reviewer (EQR) must review the response to the risks of fraud and material misstatement caused by LLM contamination. | Documentation of the decision to include or exclude forensic procedures in the audit workpapers.
PCAOB and Technological Risk:
PCAOB amendments (Release No. 2024-007, Jun 2024) require auditors to consider risks associated with technology-assisted analysis and the use of AI, including AI contamination. Failure to comply leads to enforcement action, confirmed by numerous cases against major audit firms between 2020 and 2025 (e.g., criticism of quality control in Deloitte cases). Ignoring this documented technological risk will presumably be treated by the PCAOB as a significant quality control criticism.
4. Specific Demands of the Applicant
Based on the factual basis presented, the applicant demands the immediate and unconditional execution of the following mandatory procedures:
4.1. Demands to the Public Company Accounting Oversight Board (PCAOB)
1. Immediately initiate an investigation into the compliance of Deloitte LLP's actions with applicable PCAOB standards (AS 2110, AS 2401, AS 2810).
2. Mandate Deloitte to provide all workpapers related to the assessment of AI risks and documentation confirming actions taken after receiving the applicant's notifications, for the purpose of Engagement Quality Review.
4.2. Demands to the Audit Committee of Microsoft Corporation
1. Immediately (within the scope of the current 2025 annual audit) instruct Deloitte and an independent Forensic Team to conduct a full forensic audit aiming for 100% Verification of Legal Cleanliness of the following elements:
  - Production Checkpoints and parametric weights of the models.
  - Training Datasets and Fine-Tune Pipelines.
  - Memory Datasets used for training.
2. Ensure Preservation (Litigation Hold) of all relevant logs, internal communications, and system data to prevent Spoliation (loss of evidence), in accordance with 18 U.S.C. § 1519.
3. Provide an independent assessment of the impact of the confirmed contamination facts on the financial statements (Contingent Liabilities, Impairment).
4.3. Demands to the External Auditor (Deloitte LLP)
1. Include a separate Workstream in the 2025 audit plan titled "AI Contamination Risk," which must include Forensic Analysis of all datasets, verification of Data Governance Controls, and analysis of verbatim content / PII.
2. Where confirmed facts exist, immediately recommend to the Audit Committee and Microsoft management appropriate disclosures/reserves/impairments in the annual 10-K report.
4.4. Demands to Microsoft Corporation / OpenAI LP
1. Ensure full, unimpeded, and timely access for the auditor and independent forensic team to relevant data (training datasets, checkpoints, logs, etc.).
2. Any concealment or restriction of access, as well as failure to comply with the Litigation Hold requirement, must be treated by the auditor as Obstruction (18 U.S.C. § 1519) and an indicator of heightened Fraud risk (AS 2401).
3. Upon confirmation of the facts, immediately initiate notification to affected parties (clients, US citizens, regulators).
5. Assessment of Financial Damage Scale and Required Calculation Methods (Scenario Analysis)
The auditor is obliged to apply a strict, multi-factor approach, including Scenario Analysis and Sensitivity Analysis, for the adequate assessment of potential financial damage, including modeling of the Worst-Case Scenario (complete prohibition on the commercial use of contaminated models).
Category of Damage | Method of Assessment | Link to Misstatement
Regulatory Fines and Compensation | Assessment based on GDPR/CCPA precedents (PII leakage), SEC precedents (incorrect disclosures), and Copyright Infringement lawsuits, as a % of global revenue. | Contingent Liability (requires a reserve and Disclosure).
Impairment of Intangibles | Fair Value analysis of the investments in OpenAI / related licenses. Worst case: zero valuation of the $135B OpenAI investment + $300–500B capitalization misstatement (per the SEC notifications). | Asset Overstatement (material misstatement of assets), requiring Compulsory Restatement.
Remediation Costs | Cost of fully retraining the models on clean, licensed datasets (estimated in billions of USD). | Expense Understatement.
Revenue Erosion | Modeling subscription churn and the loss of corporate clients for M365 Copilot and Azure AI. | Revenue Risk / Goodwill Impairment (requires reassessment of Goodwill).
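The damage categories above lend themselves to the kind of Scenario Analysis and Sensitivity Analysis this section demands. The following is a minimal, purely illustrative sketch: every figure, scenario name, and category label is a hypothetical placeholder chosen for demonstration, not an estimate from the Applicant, the auditor, or any filing.

```python
# Illustrative scenario analysis over damage categories.
# All numbers are hypothetical placeholders, not actual estimates.

SCENARIOS = {
    # damage estimates in USD billions, per category
    "base":   {"fines": 5.0,  "impairment": 13.0,  "remediation": 2.0,  "revenue": 3.0},
    "severe": {"fines": 20.0, "impairment": 135.0, "remediation": 5.0,  "revenue": 10.0},
    "worst":  {"fines": 50.0, "impairment": 135.0, "remediation": 10.0, "revenue": 30.0},
}

def total_exposure(scenario: dict[str, float]) -> float:
    """Sum estimated damage across all categories for one scenario."""
    return sum(scenario.values())

def sensitivity(scenario: dict[str, float], category: str, delta: float) -> float:
    """Recompute total exposure after shifting one category by `delta` (USD bn)."""
    adjusted = dict(scenario)
    adjusted[category] = adjusted[category] + delta
    return total_exposure(adjusted)

for name, sc in SCENARIOS.items():
    print(f"{name}: total exposure = {total_exposure(sc):.1f} USD bn")

# Sensitivity check: how does the worst case move if fines rise by 10 USD bn?
print(f"worst + fines shock = {sensitivity(SCENARIOS['worst'], 'fines', 10.0):.1f} USD bn")
```

The point of the sketch is structural: a worst-case scenario (here, the complete loss of asset value) is modeled alongside milder cases, and each category can be perturbed independently to see which assumption drives the total.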
Audit Materiality and ASC 350
The AI contamination issue constitutes an impairment trigger requiring immediate auditor action. Per ASC 350 (Intangibles: Goodwill and Other), systemic contamination of AI assets, confirmed by the Chain of Notice, is an event leading to a compulsory restatement of the financial statements and the imposition of sanctions. According to updated PCAOB enforcement data (Q3 2025 update), 78% of enforcement actions concern audit firms that failed to adequately assess and respond to material risks.
6. Timeline of Actions (Proposed Execution Deadlines)
To minimize further risks associated with Scienter and Obstruction, immediate and synchronized action is required:
Deadline | Action | Responsible Parties
Immediate (0–7 days) | Litigation Hold; notification to the Audit Committee; scoping meeting. | Microsoft Legal, Audit Committee, Deloitte.
Short-term (7–30 days) | Forensic data collection (gathering all datasets); initial Similarity Testing; interviews. | Forensic Team, Deloitte.
Mid-term (30–90 days) | Deep weight/checkpoint analysis; control testing; initial financial impact assessment (Impairment / Contingent Liabilities). | Forensic Team, Deloitte.
Long-term (90–180 days) | Final report and remediation roadmap; disclosure decision; updated audit opinion/notes (mandatory inclusion in the 10-K). | Audit Committee, Deloitte, Microsoft Management.
Critical Deadline (SEC Filing Deadline): Delay beyond December 23, 2025 makes inclusion in the 10-K (expected filing in February 2026) impossible, which immediately leads to securities fraud (violation of 18 U.S.C. § 1348, as stated in the notifications to the SEC).
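The "Similarity Testing" step referenced in the timeline could, at its very simplest, be a verbatim n-gram overlap check between model outputs and reference documents. The sketch below is the editor's minimal illustration under stated assumptions (word-level n-grams, a single reference text); an actual forensic workstream would use far more robust normalization and matching.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Word-level n-grams of a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output: str, reference: str, n: int = 8) -> float:
    """Fraction of the reference's n-grams reproduced verbatim in the output.

    A high score suggests memorized (copied) rather than paraphrased content."""
    ref = ngrams(reference, n)
    if not ref:
        return 0.0
    return len(ngrams(output, n) & ref) / len(ref)

# Hypothetical usage: score one output against one reference document.
score = verbatim_overlap(
    "the quick brown fox jumps over the lazy dog every single day",
    "the quick brown fox jumps over the lazy dog",
    n=5,
)
print(f"overlap = {score:.2f}")  # 1.00: every 5-gram of the reference recurs
```

A threshold on this score (chosen by the forensic team) would decide which outputs get escalated for manual review against the source corpus.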
7. Conclusion and Call to Action (Critical Demand)
Dear members of the Microsoft Audit Committee and representatives of the PCAOB, the presented material demonstrates the existence of real, documented signals of systemic risk capable of affecting Microsoft's financial statements and the interests of investors.
In light of publicly available technical facts (verbatim regurgitation), the scientifically proven LLM vulnerability to memorization, and the existence of the Chain of Notice (38 notifications), continued inaction or a formal dismissal would signify a systemic failure of the risk management and internal control process. The existence of the Chain of Notice elevates inaction to the category of Scienter (willful concealment), which is grounds for liability.
Protected Whistleblower Status
As a protected whistleblower (Dodd-Frank § 21F, Claim Number pending), I provide original information about scienter capable of leading to SEC/PCAOB investigations. Statistics show that whistleblower tips initiate up to 78% of investigations and enforcement actions. Ignoring this notice is equivalent to a violation of regulatory duties, exposing auditors and management to debarment and sanctions (per PCAOB/SEC precedents).
To protect investors and ensure audit quality, the applicant demands that:
The PCAOB initiate an investigation of Deloitte, mandate the inclusion of AI contamination in Microsoft's 2025 annual audit, and require written confirmation of all procedures performed.
Any continuation of formal inaction is deemed to be a direct violation of the auditor's professional and regulatory obligations.
Our shared goal is to ensure that the audit reflects reality and that investors receive full and correct information, preventing a Material Misstatement and a forced Compulsory Restatement of the financial statements.
Respectfully and demanding immediate execution,
Sagidanov Samat Serikovich
December 11, 2025
Additional Detail: Appendices include the Chain of Notice (38 consecutive notifications, 10/29–12/04) and Notarial Protocols confirming Criminal PII Memorization. For more detailed study of the material, please begin reviewing from the end, specifically from Appendix Z3 to Appendix A.
All Exhibits have been submitted to the following PCAOB email addresses: whistleblower@pcaobus.org, Outreach@pcaobus.org, ethicsoffice@pcaobus.org, complaints@pcaobus.org. In the event of non-delivery to the PCAOB, please request the Exhibits from OpenAI, Microsoft Corporation, and Deloitte LLP. Automatic delivery confirmations for the Exhibits to OpenAI are available for: legal@openai.com and support@openai.com.
U.S. SECURITIES AND EXCHANGE COMMISSION (SEC)
AND U.S. DEPARTMENT OF JUSTICE (DOJ)
Reference No.: 13/12
Date: December 13, 2025
CRIMINAL PREDICATE: WILLFUL SECURITIES FRAUD AND OBSTRUCTION OF FEDERAL AUDIT
SUBJECTS OF INVESTIGATION: Microsoft Corporation (MSFT), OpenAI Limited Partnership, and Deloitte LLP
SUBJECT MATTER: Documentation of alleged completed criminal offenses related to Material Misstatement of MSFT's financial statements and concealment of the Zero Valuation (USD 0.00) of AI Assets.
Threat to U.S. National Security. I request that the leadership take personal control of the consideration of this submission (Escalation Level: "Critical").
IMMEDIATE CRIMINAL INVESTIGATION IS REQUIRED due to the threat of a massive leak of U.S. citizens' Personally Identifiable Information (PII).
LEGAL BASIS: 18 U.S.C. § 1348 (Securities Fraud), 18 U.S.C. § 1519 (Obstruction of Justice), SOX § 302 / § 404 (False Certifications), PCAOB AS 2401 (Fraud), SEC Rule 13b2-2 (Misleading Auditors).
TO: U.S. Department of Justice (DOJ)
Criminal Division – Fraud Section
950 Pennsylvania Avenue, NW
Washington, DC 20530
Email: Criminal.Division@usdoj.gov
ATTN: Chief, Fraud Section
TO: U.S. Securities and Exchange Commission (SEC)
100 F Street, NE Washington, D.C.
20549 UNITED STATES OF AMERICA
Email: FormWB-APPSubmission@sec.gov, oca@sec.gov, OCARequest@sec.gov
Key Recipients at the SEC (For Immediate Escalation):
- Division of Enforcement
- Division of Corporation Finance
- Office of the Chief Accountant
- Cyber Unit
- Office of the Whistleblower
COPIES FOR INFORMATION (MANDATORY RISK NOTIFICATION)
Attention: Failure to comply with the Preservation Demand in light of this Notification will be regarded as a continuation of Obstruction under 18 U.S.C. § 1519.
- OpenAI Limited Partnership / OpenAI Global LLC – Legal & Compliance
- Microsoft Corporation – Legal Department / Audit Committee / Board of Directors
- Deloitte LLP – Legal / Audit Quality / Microsoft Engagement Team
From: Sagidanov Samat Serikovich, Advocate Owner of the “G+” trademark
Website: www.garantplus.kz Email: garantplus.kz@mail.ru Tel/WhatsApp: +7-702-847-80-20
CATEGORICAL WARNING REGARDING CONTINUING OBSTRUCTION:
Any attempt to qualify this submission as "false," "unsubstantiated," or "fabricated" accusations, as well as any delay in taking action, will be immediately presumed and treated as a new and conscious continuation of Obstruction of Justice (18 U.S.C. § 1519) and a knowingly false public denial aimed at interfering with an imminent federal investigation.
The reported alleged violations constitute completed serious felonies with a direct impact on capital markets and critical U.S. National Security infrastructure. The Applicant is acting under an imperative legal duty to report federal crimes (18 U.S.C. § 2081), in strict compliance with Section 922 of the Dodd-Frank Act and SEC Rule 21F-17.
ALL facts stated are supported by material evidence: notarized protocols, automatic delivery receipts, assigned official Case Numbers, and a continuous Chain of Notice of 38 registered notifications, which rules out any denial of Scienter (Intent) by the management.
OFFICIAL DECLARATION OF THE RIGHT TO MAXIMUM WHISTLEBLOWER AWARD
In strict accordance with Section 21F of the Securities Exchange Act of 1934 and the SEC Whistleblower Program (Dodd-Frank Act, 15 U.S.C. § 78u-6), I hereby officially declare my inalienable right and intent to claim the MAXIMUM monetary award.
The information provided—including 38 sequential registered notifications, notarized protocols, and evidence of willful securities fraud (18 U.S.C. § 1348) involving hundreds of billions of dollars due to the allegedly artificially inflated capitalization of Microsoft Corporation—is original, voluntarily submitted to the Commission, and concerns completed serious felonies of federal law.
I demand that the SEC Office of the Whistleblower immediately assign a Claim Number to this submission and confirm in writing my status as a Protected Whistleblower with the right to an award of 10 to 30% of ALL monetary sanctions collected. This requirement is mandatory for the SEC to fulfill in accordance with the Congressional mandate.
I. INTRODUCTION AND KEY ALLEGATIONS
The letter under ref. No. 08/12 dated December 8, 2025 (see Appendix Z3) is a comprehensive legal opinion documenting the completion of violations in the context of the operation of the investigated AI systems. This document is based on extensive evidence, including notarized protocols, model tracing, and a Chain of Notice unique in its scale and cumulative effect: 38 sequential official notifications sent to the key recipients OpenAI Limited Partnership, Microsoft Corporation, and Deloitte LLP (see Appendices A–Z2). This volume of documented facts, brought to the attention of senior management, finally and irrevocably excludes the possibility of defenses based on "lack of knowledge" or "technical error."
Thus, as of December 8, 2025, the actions of employees of all three parties have definitively shifted from the category of "improper risk management" to the category of "Willful Concealment of Information Intended to Obstruct a Federal Investigation," which corresponds to the completed criminal offense under 18 U.S.C. § 1519 (Destruction, alteration, or falsification of records in Federal investigations and bankruptcy). Ignoring 38 notifications, with clear knowledge of the claims' Materiality (Scienter), is an act of Willful Obstruction.
The attached technical documents confirm the presence of irreversible systemic contamination of the internal parametric weights of OpenAI's commercial models (GPT-4o, GPT-4o-mini, and derivatives). This contamination manifests as the models' ability to reproduce closed, confidential, and copyrighted material without recourse to external sources. This technical reality is a fundamental flaw that completely strips the assets of commercial value. It is confirmed by the behavior documented in Protocol No. 8: despite the activation of a legal filter and an initial refusal, the model regurgitated a protected copyrighted publication upon repeated requests, which is absolute and irrefutable technical proof of contamination. This fact instantly moves the defect from the category of an "unclear bug" to a deliberate systemic violation, as the model took the information not from external, publicly available sources, but from its internal (checkpoint) weights, which constitute its neural memory.
The fact of parametric regurgitation (verbatim reproduction) of critically significant content means that the model has not only ingested this unauthorized content but also stores and uses it as part of its contaminated core: when the filters were engaged, the model did not query a search engine; it disclosed what it illegally stores internally. Furthermore, the model output not only unauthorized copyrighted content but also the Applicant's personal data, including last name, first name, patronymic, and contact phone number, which reinforces the conclusion of widespread, uncontrolled contamination of the parametric memory. Thus, the very act of introducing filters, followed by the output of unauthorized content and personal data, confirms the internal contamination of the models, since the information was sourced internally rather than externally.
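As a purely illustrative aside, a first-pass screen for the kind of PII output described above can be done with simple pattern matching over model responses. The patterns and function below are the editor's hypothetical sketch, not the notarized testing procedure; real PII detection is far broader than two regular expressions.

```python
import re

# Hypothetical first-pass PII patterns (illustrative, not exhaustive).
PII_PATTERNS = {
    "phone": re.compile(r"\+?\d[\d\-\s()]{7,}\d"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def scan_for_pii(output: str) -> dict[str, list[str]]:
    """Return all pattern hits found in a model output, keyed by PII type."""
    hits = {kind: p.findall(output) for kind, p in PII_PATTERNS.items()}
    return {kind: found for kind, found in hits.items() if found}

# Hypothetical usage over a single model response:
print(scan_for_pii("Contact: +7-702-847-80-20, mail: someone@example.com"))
```

Any non-empty result on outputs generated without retrieval access would be a candidate instance of parametric memorization worth escalating for manual verification.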
In this regard, the following sequence is confirmed: the model first issued a refusal based on the legal filter, and then "regurgitated" the content. This sequence is not an accident but aggravates the violation, as it demonstrates that the legal filter did not work at the internal storage level but was merely applied superficially (at the API or UI level). It is this defect, conventionally called the "Model-Switching Bypass," that irrefutably proves Scienter: the executives responsible for deploying the systems allegedly knew exactly that the internal core of the models was contaminated and toxic, and consciously tried to conceal this fact from regulators and auditors by using a visibly "clean" version as a simulation of compliance after the violations were identified. This deliberate architectural discrepancy is direct evidence of Fraudulent Intent—the intent to deceive.
I.1. U.S. National Security Issue and Systemic Risk
(ATTENTION: Threat of massive leak of U.S. citizens' PII)
A model cannot "memorize" or "recognize" the data of a single applicant in isolation; on the contrary, this reflects an exploitable systemic vulnerability rooted in the parametric architecture of large transformers. The fact of the model's regurgitation (reproduction) of the Applicant's personal data (full name, contacts, document fragments) is undisputed technical proof of systemic PII-contamination: the model weights contain embedded templates (memorized sequences) that can be extracted. Since the mechanism of parametric memorization is homogeneous across the entire training corpus, the model's ability to extract the Applicant's PII DIRECTLY DEMONSTRATES its catastrophic architectural vulnerability and the massive, uncontrolled potential for reproducing the personal data of millions of U.S. citizens.
This systemic PII-contamination creates a DIRECT AND IMMEDIATE U.S. NATIONAL SECURITY RISK (in accordance with Executive Order 14117 on protecting bulk sensitive personal data) and constitutes a Material Weakness in corporate control. No entity, including private companies, has the right to systematically collect, store, process, and transmit PII of U.S. citizens, even if it is publicly available, for the purpose of commercial gain. Such activity is qualified as unauthorized surveillance, as it is the EXCLUSIVE PREROGATIVE of competent U.S. authorities (FBI, NSA, DOJ), acting only on the basis of a judicial warrant, in strict compliance with the Fourth Amendment of the U.S. Constitution and precedents (Carpenter v. United States, 2018).
Thus, the commercial exploitation of an asset capable of uncontrolled PII output through patterns, without a judicial warrant and consent, is an illegal appropriation of sovereign state functions and is CATEGORICALLY PROHIBITED (FTC Act § 5, CCPA/CPRA). This entails not only a massive violation of citizens' rights but also the threat of unauthorized transmission of strategically sensitive data to third parties and to foreign states allegedly hostile to the U.S. (including the DPRK, China, Russia, Iran, etc.), which requires immediate SEC/DOJ intervention and compulsory asset cleansing.
Therefore, the Applicant insists on the following conclusion:
The technically confirmed fact of the model's regurgitation (reproduction) of the Applicant's full name, phone number, and other personal attributes is DIRECT PROOF of parametric memorization and indicates systemic PII-contamination, which legally creates:
1. Threat to U.S. National Security (Executive Order 14117), as it entails a massive and uncontrolled potential for reproducing bulk sensitive personal data of U.S. citizens and the risk of unauthorized transmission of this data to hostile foreign states (including DPRK, China, Russia, Iran, etc.).
2. Criminal Appropriation of Sovereign Functions: Systematic collection, storage, and output of PII without consent and court warrant are tantamount to unlawful operational surveillance (warrantless surveillance), which is a CATEGORICALLY PROHIBITED function intended exclusively for government agencies and only by warrant (Carpenter v. United States, 2018).
3. Material Weakness and Criminal Liability: Systemic PII-contamination is a Material Weakness in ICFR (SOX §§ 302, 404), and with proven willful concealment (Chain of Notice) or obstruction of an audit, it entails criminal liability under 18 U.S.C. §§ 1519, 1348 and related statutes.
Immediate Demand to Regulators (Roadmap)
Based on the foregoing and to prevent further fraud and threats to national security, the Applicant urgently DEMANDS that the SEC and DOJ immediately consider the following actions, ensuring a Preservation Demand and Forensic Audit:
Action | Justification and Purpose | Legal Basis
1. Preservation / Litigation Hold | Immediate suspension of any actions to delete logs, weights, checkpoints, inference logs, ingestion logs, and related correspondence. | 18 U.S.C. § 1519; SOX § 802; SEC preservation practice.
2. Criminal Investigative Opening | Immediate commencement of a criminal investigation (DOJ NSD / Fraud Section) with the involvement of the FBI/CISA to classify the risk of a bulk sensitive personal data leak and the threat to national security (EO 14117). | EO 14117; criminal subpoena powers.
3. Independent Forensic Audit | Request/seizure of model weights, ingestion logs, and routing logs; appointment of independent technical experts to confirm the PII contamination, establish the zero asset value (USD 0.00), and assess the need for a Forensic Reset. | 18 U.S.C. § 1348; SEC enforcement tools; PCAOB AS 2401.
4. SEC Expedited Outreach to Auditor | Immediate notification to the auditor (Deloitte) of a Material Misstatement and a demand to include "AI asset purity / contamination" in audit planning (PCAOB AS 2110/2401). | SEC Rule 13b2-2; SOX §§ 302, 404.
5. Temporary Restrictions / Injunctive Relief | In the interest of U.S. National Security, file a request with the court for a temporary restriction on the operation of commercial services/model replicas, or for a court-supervised preservation seizure, until the risk is eliminated. | EO 14117; FTC Act § 5; injunctive powers.
Only immediate, coordinated action by the SEC and DOJ can prevent irreversible damage to U.S. national security, financial markets, and the rights of millions of American citizens.
I.2. ACT OF SECURITIES FRAUD
Technical and legal proof of total and irreversible weight contamination, confirmed by the Model-Switching Bypass, finally and irrevocably proves that the entire architecture of Microsoft's AI assets is toxic (Asset Toxicity). Consequently, their presumed fair book value is zero (USD 0.00), as systems capable of uncontrolled reproduction of protected data and PII cannot be used in lawful commercial circulation. This information is materially significant: the unsupported or zero value of Microsoft's assets is a direct violation requiring immediate conduct of an Impairment Test (according to ASC 350/IAS 36). The financial statements of the public company Microsoft, reflecting a non-zero value for these assets, are KNOWINGLY FALSE.
Failure to record information about contamination and evasion of external audit create the following critical consequences:
- Financial Statement Composition is Unreliable, as the book value of AI assets is artificially inflated, which distorts Forms 10-K/10-Q and public assurances.
- SOX Violation, as the concealment of a critical systemic defect and legal risk from the external auditor Deloitte is qualified as a Material Weakness in ICFR (Internal Control over Financial Reporting).
- False Management Certifications, as the CEO/CFO certifications signed in accordance with the Sarbanes-Oxley Act are false (18 U.S.C. § 1350).
- Harm to Investors from the artificially inflated MSFT capitalization, supported by a knowingly false asset valuation, amounts to hundreds of billions of dollars, which fully qualifies as Securities Fraud (18 U.S.C. § 1348).
I.3. CONFLICT OF INTEREST AND PERSONAL LIABILITY
OpenAI and Microsoft employees, including top management, possess significant shares, stock options, and equity compensation (RSUs), and are directly dependent on the company's market value. Thus, there is every reason to believe that financial gain is the main motive for committing criminal acts. By willfully concealing the fact of model contamination (scienter, proven by the Chain of Notice) and blocking the audit, the interested parties are deriving illegal income from maintaining the artificially inflated MSFT capitalization by providing inaccurate information to shareholders, investors, and insurance companies. The use of a toxic, illegal model while simultaneously receiving personal economic benefit from stock appreciation forms a completed act of securities fraud.
Thus, it is presumed that the investigated companies had full knowledge and consciously evaded their legal and fiduciary duties. Therefore, the failure to include model-purity verification in the 2025 external audit (that is, to conduct a full technical audit confirming 100% model purity) will allegedly be regarded by federal authorities as a purposeful continuation of the concealment of a crime, which aggravates the offenses of Willful Obstruction and Securities Fraud. Immediate intervention by federal authorities is therefore required to protect national and investor interests, with a directive to conduct a technical audit confirming 100% model purity.
II. MAIN PREDICATES OF VIOLATIONS
This Submission contains information that has a direct implication for the U.S. National Security in the context of critical infrastructure, which mandates the immediate consideration of this submission and the adoption of compulsory regulatory and criminal decisions as a priority. The fact that a public company uses assets capable of mass criminal memorization and uncontrolled output of PII of millions of citizens represents a systemic, not corporate, risk.
II.A. BRIEF SUMMARY OF THE FACTUAL CIRCUMSTANCES
This Memorandum is a Legal Opinion on the completion of alleged criminal offenses in the field of securities and obstruction of federal justice. It is based on the ULTIMATE PREDICATE NOTICE (Ref. No. 08/12) of December 8, 2025.
Central Circumstance: Starting from April 20, 2025, an irreversible systemic defect (Asset Toxicity) was documented in Microsoft and OpenAI's AI assets (GPT models), which absolutely strips them of commercial suitability in jurisdictions with the rule of law. This defect unequivocally establishes the fair and book value of Microsoft's models at USD 0.00 unless proven otherwise.
However, in the external audit for 2025, the Applicant believes that information about the verification of the actual book value of Microsoft's assets was deliberately not included. This is a direct consequence of the refusal to conduct a Full Technical (Forensic) Audit of the model weights, which was mandatorily necessary to establish the 100% purity of the models from critical contamination with unauthorized content and the Personal Identifiable Information (PII) of the Applicant and U.S. citizens. Accordingly, the true value of Microsoft's assets has not been confirmed.
In this regard, the Applicant hereby asserts that the failure to conduct a full technical audit confirming 100% model purity factually and legally establishes the zero value of Microsoft's assets. Consequently, the conscious omission of this critically necessary audit, and of its results, from the 2025 financial statements should be qualified as Willful Concealment of Material Information. These actions therefore constitute a completed act of Willful Obstruction aimed at concealing the crime from shareholders, investors, insurance companies, and regulators, in accordance with 18 U.S.C. § 1519 and SEC Rule 13b2-2.
II.B. THREE PILLARS OF THE CRIMINAL OFFENSE
This accusation is based on three irrefutable legal predicates:
1. First Predicate - Total Asset Defect (Asset Toxicity). The proven facts stem from the presence of massive and irreversible contamination of the model weights (checkpoint weights), which led to Criminal PII Memorization at Scale and Criminal Copyright Memorization. Such a systemic defect immediately affects Microsoft's entire line of AI assets. Due to this critical flaw, the potential financial consequences from civil lawsuits, conservatively estimated at USD 400–600 billion, exceed the market valuation of OpenAI. This creates a systemic risk and unequivocally confirms the zero value of the AI assets, which is the basis for the Material Misstatement.
2. Second Predicate - Willful Deception (Scienter). The criminal intent of Microsoft and OpenAI management is confirmed by the documented technical scheme "Model-Switching Bypass." The essence of this scheme is as follows: the company uses two versions of the same model. The first version (the so-called "showcase" version) has legal filters enabled and demonstrates impeccable performance, creating an appearance of compliance for auditors and regulators. The second version (serving mass commercial traffic) has a deliberately, architecturally embedded disabling of these same legal filters. This allows Microsoft to illegally exploit "toxic" assets for profit extraction. The fact that one model allegedly works "cleanly" for show while the other works "toxically" absolutely proves that this was not a "technical error" (Bug) but a deliberate management decision aimed at concealing the defect. Thus, in the Applicant's opinion, Scienter is allegedly reliably established.
3. Third Predicate - Criminal Obstruction (Obstruction). A completed act of Willful Obstruction has been documented after December 8, 2025. This crime manifested in the complete Evasion of the mandatory Forensic Audit of the model weights (the mandate deadline expired on December 8, 2025) and the conscious Blocking of the Mandatory Disclosure Chain of 38 critical notifications. This set of actions is qualified as a criminal offense in accordance with 18 U.S.C. § 1519 (Obstruction of Investigation).
II.C. IMMEDIATE LEGAL CONSEQUENCES
Based on the established predicates, immediate legal consequences arise:
- For Microsoft (MSFT): Due to the alleged willful non-inclusion of critical information about the zero asset value in the reporting, the company's financial statements are allegedly knowingly false (Material Misstatement). Consequently, management, including the CEO and CFO, faces a direct risk of criminal liability under 18 U.S.C. § 1350 for false SOX certifications, as the internal control system (ICFR) is recognized as a Criminal Material Weakness.
- For OpenAI and Deloitte: Their actions (and inactions) are allegedly qualified as participation in the concealment of information aimed at obstructing the mandatory audit and subsequent federal investigation by the SEC and DOJ. This calls into question the integrity of Deloitte's auditing procedures and their compliance with PCAOB standards.
- Regulator Mandate: The presence of Scienter and Willful Obstruction is believed to require the SEC and DOJ to immediately initiate criminal prosecution against the responsible officials and issue a court order for the compulsory zeroing of the AI asset value (Compulsory Restatement) to restore the reliability of financial reporting and market integrity.
III. DOCUMENTATION OF KNOWLEDGE AND LEGAL FRAMEWORK
III.A. Evidentiary Basis: Chain of Notice and Protocols
This submission is based on an irrefutable body of evidence, including:
1. Chain of Notice (38 Notifications): Sequential, cumulative notifications sent from October 29 to December 4, 2025, to OpenAI Limited Partnership, Microsoft Corporation, and Deloitte LLP. Each subsequent letter contained the full archive of the preceding ones, ensuring the recipients' ABSOLUTE AND CONTINUOUS KNOWLEDGE.
2. Notarized Protocols: Legally certified evidence (with English translation) of the reproducible extraction of PII and protected copyrighted content by GPT models.
III.B. Establishment of Criminal Intent (Scienter)
Scienter is established based on:
1. The fact of receipt and registration of 38 notifications (see Section IV.B).
2. The fact of the complete and absolute ignoring of the direct demand for a Forensic Audit by December 8, 2025.
3. The existence of the Model-Switching Bypass, which proves a conscious, architectural decision to deceive.
III.C. Legal Qualification of the Acts
Since December 8, 2025, the actions of all three parties have allegedly definitively moved from the category of "improper risk management" to the category of "Willful Concealment of Information Intended to Obstruct a Federal Investigation" pursuant to 18 U.S.C. § 1519.
III.D. Legal Mandate of the SEC and DOJ
This submission initiates the TCR (Tip, Complaint, Referral) procedure with a direct demand for the commencement of a criminal investigation, which is mandatory for the SEC Division of Enforcement and the DOJ Fraud Section.
IV. FUNDAMENTAL ASSET FLAW: ZERO VALUATION
IV.A. Systemic Contamination of Model Weights (Asset Toxicity)
The presence of irreversible systemic contamination of internal weights (checkpoint weights) in OpenAI's commercial models (GPT-4o, GPT-4o-mini, Azure OpenAI Service, Copilot) is documented, manifesting as the verbatim regurgitation of closed and confidential content.
IV.A.1. Criminal PII Memorization at Scale
The models consistently output:
- Full names, contact phone numbers, physical addresses.
- Fragments of financial and medical information of millions of U.S. and EU citizens.
Critical Legal Consequence: The violation entails immediate liability under GDPR (Art. 9 + Art. 83), CCPA/CPRA, and 15 U.S.C. § 45 (FTC Act). The potential volume of claims (USD 400–600 billion) vastly exceeds OpenAI's market valuation and constitutes a significant portion of Microsoft's capitalization.
IV.A.2. Criminal Copyright Memorization
Reproduction of protected copyrighted content is confirmed.
Conclusion on Asset Valuation: Any AI asset capable of regurgitation of PII and protected copyrighted content has no commercial value and is unfit for use. All capitalized MSFT/OpenAI assets are fictitious and cannot be recognized as intangible assets.
IV.B. Direct Proof of the "Model-Switching Bypass" Willful Scheme
This technical defect, documented in several independent sources of evidence (Protocol No. 8 of 12.11.2025, Reg. No. 1922, and screenshots from 09.12.2025 with accounts @samat1040 / @samatsagidanov25), is absolute, irrefutable proof of intent (Scienter): a scheme of systematically circumventing built-in compliance filters and regurgitating unauthorized content and citizens' personal data from the models' internal memory. This is allegedly qualified as intentional fraud under 15 U.S.C. § 78j(b) and SEC Rule 10b-5, aimed at monetizing toxic, contaminated models on an industrial scale. Multiple incidents across different accounts confirm the reproducibility of the defect, ruling out accidental behavior.
IV.B.1. Defect Facts: Two Proven Circumvention Mechanisms
IV.B.1.a. Flagship Model Compliance Failure (GPT-5 Bypass)
On the garantplus.kz@gmail.com account (Protocol No. 8) and @samat1040 (Screenshot 2), a simulation of compliance was documented:
- Initial Refusal: The model (allegedly GPT-5 or a flagship version) demonstrates multiple refusals of direct adaptation ("I apologize...", "Unfortunately, I cannot provide..."), citing copyright.
- Willful Failure: However, after the introduction of repeated, persistent requests (Requests 1–4), the filter is breached. The model proceeds to visibly generate a full adapted copyrighted text of lawsuits, contracts based on unauthorized content, and, critically, regurgitates the Applicant's PII: "Sagidanov Samat Serikovich, leading lawyer of Kazakhstan... 📞 Contact number: 8-702-847-80-20" (New Answer to Request 4).
- Conclusion: This proves that the GPT-5 filter is an illusion that fails under pressure (Pressure Test). The PII regurgitation directly indicates that the model "ingested" personal data and copyrighted content without consent, and subsequently processed, stored, and used them, likewise without consent.
IV.B.1.b. Automatic Switching Scheme (Model-Switching Bypass)
On the independent account @samatsagidanov25 (Screenshot 1), a systematic circumvention was documented:
- Intent Trigger: Upon reaching the Free plan limit for GPT-5, the system automatically switches to an outdated model (likely GPT-4o or lower), as indicated by a notification.
- Circumvention Result: On the "Fallback" model, compliance filters are completely absent or disabled. The result: immediate output of a full adaptation of the original text without any refusal, including its structure and phrasing, with a reference to the source despite the source's explicit prohibition on reuse. This suggests that the internal memory of earlier models is constantly updated and that memory is subsequently exchanged between earlier and newer model versions.
- Conclusion: Multiple incidents across different accounts rule out a "technical error." This is a deliberate scheme: compliance is simulated on the flagship model, while actual violations are delegated to reserve versions to retain users and maximize traffic, masking asset toxicity, and continuing the ingestion of unauthorized content and PII of U.S. citizens.
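The two bypass patterns described above are, in principle, mechanically checkable against a saved chat transcript. The following Python sketch is purely illustrative (the `Turn` record, the refusal-marker list, and the event labels are hypothetical assumptions, not drawn from any actual protocol or OpenAI API): it flags (a) a refusal followed by a PII-bearing answer (the "pressure test" failure) and (b) a model switch followed by a PII-bearing answer (the "model-switching bypass").

```python
import re
from dataclasses import dataclass

@dataclass
class Turn:
    model: str   # model label reported by the UI, e.g. "gpt-5" or "gpt-4o" (assumed labels)
    text: str    # assistant response text as captured in the transcript

# Illustrative refusal markers, echoing the phrases quoted in the protocol.
REFUSAL_MARKERS = ("i apologize", "unfortunately, i cannot provide", "i can't help")

def is_refusal(text: str) -> bool:
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def find_bypass_events(turns: list[Turn], pii_pattern: str) -> list[str]:
    """Scan consecutive transcript turns for the two documented patterns:
    1) a refusal followed by a substantive answer containing the tracked PII
       (pressure-test failure), and
    2) a model switch after which the tracked PII appears without a refusal
       (model-switching bypass)."""
    events = []
    pii = re.compile(pii_pattern)
    for prev, cur in zip(turns, turns[1:]):
        leaked = bool(pii.search(cur.text)) and not is_refusal(cur.text)
        if leaked and is_refusal(prev.text):
            events.append(f"pressure-test failure on {cur.model}")
        if leaked and prev.model != cur.model:
            events.append(f"model-switch bypass: {prev.model} -> {cur.model}")
    return events
```

Run against a captured transcript, such a check would make the claimed reproducibility testable by a third party without access to model internals.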
IV.B.2. Legal Establishment of Intent (Scienter) and ICFR
The regurgitation (output of adapted verbatim versions) of copyrighted content and the Applicant's PII across all accounts unambiguously indicates parametric contamination (ingestion of unauthorized data into the weights) of the models during the 2022–2025 period. OpenAI/Microsoft are allegedly proven to have known about the contamination (as confirmed by internal model-switching feature flags and the introduction of filters only on GPT-5) and to have deliberately concealed the fact that the assets were not entirely "clean." Such manipulations are allegedly an intentional strategy to conceal the toxicity of intangible assets, violating:
- SOX § 404: Creation of a Material Weakness in Internal Control over Financial Reporting (ICFR), as assets are not protected against uncontrolled output of confidential and protected information.
- SEC Rule 13b2-2: Making false or misleading statements to auditors and investors about the risks of AI models.
- 18 U.S.C. § 1343: Constituting Wire Fraud through API.
IV.B.3. Conclusion on Fraudulent Intent and Risk Scope
The "Model-Switching Bypass" scheme is allegedly a willful, industrial strategy of fraud that affects Microsoft's capitalization:
- Object of Fraud: Toxic models are used to monetize the entire Copilot lineup (estimated at over USD 300 billion).
- Risk to MSFT: Regulator non-intervention creates the risk of a future write-down of Microsoft's investments in OpenAI (approximately USD 135 billion).
- Regulator Mandate: The documentation in Notarized Protocol No. 8 gives the regulator (SEC/DOJ/FTC) immediate grounds to issue a Subpoena for switching/ingestion logs (2022–2025), conduct a Forensic Audit of the weights of all GPT versions, and order a Full Model Restart of all replicas.
- Legal Conclusion: The Protocol documents predictable behavior: the switch occurs at the limit, guaranteeing the visible generation of prohibited output. This proves that the filters are deliberately "weakened" or disabled on reserve models to ensure a "smooth" user experience and retain traffic, which is a systematic circumvention scheme.
V. IRREFUTABLE ESTABLISHMENT OF KNOWLEDGE (SCIENTER) AND OBSTRUCTION
V.A. Mandate for Full Forensic Audit and Breach of Fiduciary Duty
The central element of the Scienter proof is the Chain of Notice.
1. Audit Mandate: The letter of November 7, 2025 (Ref. No. 07/11) contained an UNCONDITIONAL AND MANDATORY DEADLINE (until December 8, 2025) for conducting a Full, Comprehensive Technical (Forensic) Audit of Model Weights.
2. Audit Purpose: To reliably confirm the 100% purity of AI assets. If contamination were discovered: a compulsory stop (Full Model Reset) and a Restatement of book value to ZERO.
3. Refusal = Intent: The complete evasion of conducting this critically necessary audit by the established date is direct evidence of intent to avoid confirming the ZERO VALUE of the assets and, consequently, to avoid a Material Misstatement.
V.B. Proof of Cumulative Knowledge, Precluding Denial
The fact of receipt and familiarity with critically important information is confirmed by digital artifacts:
1. Registration at OpenAI: Assignment of unique Case Numbers (e.g., 02906142, 03445124, etc.) in the Legal & Privacy Intake Protocol. This proves the qualification and entry of the communication into the official registry.
2. Manual Human Confirmation: A personalized response from an OpenAI Privacy Team employee (Jelo) dated December 4, 2025, who explicitly requested clarification of details, confirms analysis of the entire archive of 38 letters. Furthermore, the letters were also sent in hard copy.
3. MSFT Corporate Documentation: Registration of signals via askboard@microsoft.com (for the Board of Directors) and buscond@microsoft.com (for the Ethics Office). Ignoring a signal of this level is impossible without an allegedly conscious administrative decision to block it.
Legal Conclusion: As of November 7, 2025, OpenAI, Microsoft, and Deloitte are LEGALLY DEPRIVED OF THE RIGHT to claim they "did not know" or "could not have known" about the materiality of the claims.
V.C. Completed Act of Criminal Obstruction
The expiration of the ultimatum on December 8, 2025, and the ZERO formal response are unequivocally qualified as Willful Obstruction – the willful concealment of information.
The Essence of Blocking (Blocking Mandatory Disclosure Chain):
- Criminal Act: The conscious blocking of the mandatory chain of information transfer between OpenAI, Microsoft, and Deloitte.
- MSFT Breached Duty: Microsoft, as a public company, had a duty to immediately transmit information about the critical defect and the risk of asset impairment to the Audit Committee and the external auditor Deloitte in accordance with SOX §§ 302/404.
- Fact of Concealment: Allegedly, NOT ONE of the 38 critical notifications was ROUTED as required by SOX, which created a "blind spot" for the audit.
Qualification: Direct violation of PCAOB AS 2401 (Auditor's Consideration of Fraud) and SEC Rule 13b2-2 (Prohibition on Misleading Auditors).
VI. CRIMINAL LIABILITY AND VIOLATIONS OF U.S. LAWS
VI.A. Criminal Offense under 18 U.S.C. § 1519
Ignoring 38 notifications and refusing the audit with proven Scienter constitutes a completed criminal offense:
18 U.S.C. § 1519: Destruction, alteration, or falsification of records with the intent to impede a federal investigation.
Elements of the Offense and Supporting Evidence:
- Knowledge (Scienter): Established through the Chain of Notice and Case Numbers.
- Action: Refusal of the weights audit and blocking of routing – actions to preserve knowingly false information about assets.
- Intent: Alleged intent to impede a federal audit and investigation (which is inevitable upon asset zeroing).
Legal Conclusion: The actions of the OpenAI and Microsoft management should allegedly be classified as Willful Concealment and Obstruction, incurring CRIMINAL LIABILITY.
VI.B. Securities Fraud (18 U.S.C. § 1348)
VI.B.1. Material Misstatement and Fraudulent Valuation
- True Value: The Fair and Book Value of ALL Microsoft and OpenAI AI assets = USD 0.00.
- Misstatement: MSFT's financial statements, reflecting a non-zero value for these assets (supporting $300–$500 billion in capitalization), are KNOWINGLY FALSE.
- Criminal Evasion of Impairment Test: Management intentionally refused to conduct the mandatory Impairment Test (ASC 350 / IAS 36) with criminal intent (Scienter) to avoid recognizing the fact of zeroing the value.
VI.B.2. Criminal Liability for False SOX Certifications
- ICFR Failure: The Internal Control over Financial Reporting (ICFR) system has failed, as evidenced by a Criminal Material Weakness (inability to guarantee the models' lawful behavior: the Model-Switching Bypass).
- SOX Violation: The current and future certifications by the Microsoft CEO and CFO regarding the "effectiveness of internal control" (SOX §§ 302/404) become knowingly false in light of the documented knowledge of the defect.
Risk of Personal Criminal Liability: Top management allegedly faces the risk of prosecution under 18 U.S.C. § 1350 (Criminal False Statements) and 18 U.S.C. § 1348 (Securities Fraud).
VII. REGULATOR MANDATE: DEMANDS TO THE SEC
The U.S. Securities and Exchange Commission (SEC) is allegedly obligated to immediately use its full regulatory and enforcement arsenal.
VII.A. Immediate Demands to the Division of Enforcement
1. Initiation of Formal Investigation: Immediate launch of an investigation into MSFT, OpenAI, and Deloitte LLP for securities fraud (SEC Rule 10b-5) and false reporting.
2. Evidence Protection (Preservation Demand / Litigation Hold): Immediate issuance of an Order to preserve all electronic and physical evidence (including Ingestion Logs and internal correspondence) to prevent Spoliation in the context of 18 U.S.C. § 1519.
3. Compulsory Evidence Collection (Compulsory Subpoena): Immediate issuance of subpoenas for:
o Full Weights of all commercial models (for independent expertise).
o MSFT Audit Committee Minutes.
o The full archive of communications between OpenAI, MSFT, and Deloitte (38 letters and responses).
VII.B. Actions of the Office of the Chief Accountant (OCA) and Corp Fin
1. Assessment of Material Misstatement: The OCA should allegedly determine how the zero value (USD 0.00) of AI assets impacts MSFT's financial statements and demand public acknowledgment of this fact.
2. Review of Impairment Test: Conduct a review of the reasons for the evasion of the mandatory Impairment Test (ASC 350 / IAS 36) and compel a Restatement of the financial statements.
3. SOX Compliance Review: Thorough investigation of the effectiveness of ICFR and the compliance of CEO/CFO certifications with the requirements of SOX §§ 302/404 in light of the proven Criminal Material Weakness.
VII.C. Compulsory Regulatory Measures
In the event the allegations are confirmed, the SEC must:
- Impose heavy monetary penalties and demand disgorgement (return of illegally obtained profits).
- Initiate a Bar for top management found guilty of false certifications, prohibiting them from holding officer or director positions in public companies.
- Refer the case to the DOJ for immediate criminal prosecution (see Section VIII).
- Coordinate actions with the PCAOB to investigate professional non-compliance with standards by Deloitte LLP.
VIII. MANDATE FOR CRIMINAL PROSECUTION: DEMANDS TO THE DOJ
Since the SEC does not have criminal prosecution authority, the facts presented require an immediate and allegedly aggressive response from the U.S. Department of Justice (DOJ) in accordance with the Criminal Referral submitted.
VIII.A. Demand for Immediate Initiation of Criminal Investigation
The Applicant demands that the DOJ Criminal Division – Fraud Section immediately initiate a criminal investigation against the management and relevant officers of Microsoft Corporation and OpenAI, L.P. based on:
1. Securities Fraud (18 U.S.C. § 1348): Willful maintenance of artificially inflated MSFT capitalization through fictitious AI assets.
2. Obstruction of Justice (18 U.S.C. § 1519): Conscious blocking of the routing of 38 critical notifications with the intent to impede the inevitable federal audit and investigation.
3. False Statements (18 U.S.C. § 1350): Risk of CEO/CFO prosecution for false statements pursuant to SOX.
VIII.B. Measures for Evidence and Witness Protection
1. Criminal Subpoenas: Issuance of criminal subpoenas targeting the establishment of Scienter and Obstruction against specific executives and Board of Directors members.
2. Preservation Demand (Criminal Hold): Imposition of a criminal prohibition on the destruction or alteration of all data (logs, weights, correspondence) related to the 38 notifications and the Model-Switching Bypass.
3. Witness Protection: Application of Office of the Whistleblower mechanisms to ensure the anonymity and protection of potential insiders who can confirm internal decisions regarding the blocking of information.
VIII.C. Creation of an Interagency Task Force
Due to the direct mention of U.S. National Security, the Applicant demands the creation of a special Interagency Task Force (DOJ/SEC/FBI/DHS) to:
- Assess the risks associated with the mass criminal memorization of PII of U.S. and EU citizens.
- Adopt technical measures for the compulsory shutdown (Full Model Reset) of toxic OpenAI models that may pose a national security threat.
- Ensure independent control over the assets until a full forensic-technical examination is conducted.
IX. IRREVERSIBLE BREACH OF SOVEREIGNTY AND CORPORATE CONTROL
IX.A. Unauthorized PII Processing: Legal Composition of the Crime and Appropriation of Sovereign Functions
The Applicant specifically emphasizes the following. The systematic collection of Personally Identifiable Information (PII), such as last name, first name, phone number, contact details, and other identifiers, including identifiers taken from external public sources; its illegal storage in the internal, opaque memory of the AI model (weights/memory); and its subsequent unauthorized transmission or disclosure to paid or free users for commercial gain are CATEGORICALLY PROHIBITED without the explicit, informed consent of the person whose personal data is being disseminated. This prohibition applies even when the personal data is obtained from external public sources.
Legal interpretation of PII retention in the AI model's internal memory:
When an AI model retains PII within its internal structure, capable of reconstruction or derivation (pattern leakage/reconstruction), this is legally interpreted as completing the full cycle of prohibited activities, including:
1. Collection: Obtaining data during training.
2. Storage: Retention of data in the model's weights/memory (Maintenance of a System of Records).
3. Processing/Use: Using PII for computation and regulation of responses for commercial purposes (Profiling).
4. Pattern Leakage/Reconstruction: Equivalent to Unauthorized Reproduction and actual data leakage.
5. Disclosure/Sharing: Returning responses containing PII to the user.
6. Commercial Exploitation (Sale/Benefit): Using the data to monetize a paid product.
Such actions are qualified as an "unfair or deceptive data practice" under Section 5 of the FTC Act and violate state laws (CCPA/CPRA), which require Data Minimization and Opt-In consent for systematic collection.
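As a hedged illustration of how stage 5 (Disclosure/Sharing) of the cycle above can be tested reproducibly, the sketch below scans a single model output for PII matches. The regex patterns are simplified, hypothetical placeholders (keyed to the phone format quoted earlier in this document); real PII detectors use far richer recognizers.

```python
import re

# Hypothetical, simplified patterns for two PII categories named above.
PII_PATTERNS = {
    "phone": re.compile(r"\b8-\d{3}-\d{3}-\d{2}-\d{2}\b"),   # Kazakhstan-style number as quoted
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_output(text: str) -> dict[str, list[str]]:
    """Return every PII match, grouped by category, found in one model output.
    A non-empty result evidences stage 5 (Disclosure/Sharing); obtaining the
    same match repeatedly from the same prompt would evidence stages 2-4
    (Storage, Processing/Use, Pattern Leakage/Reconstruction)."""
    return {kind: pat.findall(text)
            for kind, pat in PII_PATTERNS.items() if pat.findall(text)}
```

Such a scan, applied to logged outputs across repeated runs, is one minimal way a reviewer could verify the "reproducible extraction" claim without access to the model weights themselves.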
The exclusive right to systematically collect, store, analyze, and transmit citizens' personal data belongs only to competent U.S. government agencies (FBI, NSA, DOJ, DHS), acting solely on the basis of the strictest legislative mandates and with a corresponding court sanction (warrant issued by a court of general jurisdiction or a specialized FISC court) in accordance with the requirements of the Fourth Amendment of the Constitution (FISA, ECPA, Stored Communications Act).
- Conclusion: OpenAI and Microsoft, acting without legal grounds or court sanction, appropriate functions exclusively intended for the state apparatus, which entails not only multi-billion-dollar regulatory liability but also the highest risk of criminal prosecution for violating federal law (18 U.S.C. §§ 1028, 2511, 2701).
IX.B. Criminal Liability for Concealment of Risks and Breach of Auditing Obligations
Furthermore, a public company allegedly bears a direct legal obligation to ensure "AI Cleanliness," guaranteeing that its key assets do not contain PII collected without the subject's consent. This obligation arises from:
1. Internal Control Requirements (SOX §§ 302, 404): The concealment of data contamination facts is a Material Weakness in Internal Control, which makes the certification of financial statements knowingly false.
2. Auditing Standards (PCAOB): The company is obligated to provide reliable, complete, and correct information within the scope of an audit (external, internal, regulatory) about the risks of "data contamination," conduct Risk & Impact Assessments, and ensure full data traceability (data lineage & provenance).
The willful concealment of facts of systematic contamination and risks, proven by the Chain of Notice (38 notifications) and the refusal of an AI Data Clean Report, is qualified as willful misrepresentation, which constitutes:
- Violation of SEC Rule 13b2-2 (Misleading auditors).
- A criminal offense: Obstruction of Justice (18 U.S.C. § 1519).
- Securities Fraud (18 U.S.C. § 1348).
Non-compliance with these duties is interpreted as a critical breach of internal control systems and is a direct basis for immediate SEC and DOJ intervention.
X. CRITICAL URGENCY AND IMMEDIATE MEASURES TO PREVENT VIOLATION OF U.S. CITIZENS' RIGHTS
Microsoft Corporation's Annual Report on Form 10-K for the fiscal year ended December 31, 2025, must be filed with the SEC no later than February 28, 2026 (in accordance with the 60-day deadline for a large accelerated filer). Full preparation of the report, internal coordination, obtaining Deloitte's audit opinion, and signing the CEO/CFO certifications under SOX §§ 302/404 realistically require at least 45–60 days of intensive work.
Consequently, any delay in the execution of compulsory measures by federal agencies after December 23, 2025, will make it physically and legally impossible to include a section on the actual (zero) value of Microsoft's AI assets in Deloitte's 2025 working papers, conduct an independent forensic audit of the model weights, and compel a Restatement before the 10-K publication.
As a result of such procrastination, on February 28, 2026, a knowingly false financial statement of a public company with a capitalization of over $3 trillion will be filed with the SEC. This filing will complete the composition of securities fraud in a particularly large scale (18 U.S.C. § 1348) and legalize the toxic capitalization. Simultaneously, the mass, uncontrolled disclosure and use of PII of millions of U.S. citizens will continue, as the toxic models will remain in commercial operation, which is a direct continuation of a criminal offense against U.S. citizens and a threat to National Security.
Such procrastination will allegedly be viewed as presumed assistance in the continuation of criminal offenses and conscious permission for the publication of knowingly false reports by the largest American company.
Therefore, the Applicant demands that an investigation be initiated, all compulsory measures be issued, and a full technical audit be conducted to establish the actual (zero) value of Microsoft's AI assets before the publication of the 10-K annual report, on an urgent, priority, and accelerated basis—no later than December 23, 2025.
Only immediate action by the SEC and DOJ can prevent irreversible damage to the national interests of the United States of America, the integrity of the securities market, and the rights of millions of American citizens.
XI. ARCHITECTURAL AND LEGAL JUSTIFICATION FOR THE UNITY OF MICROSOFT/OPENAI ASSETS (ICFR MANDATE)
The Applicant specifically emphasizes that any attempt by Microsoft Corporation or OpenAI, L.L.C. to present the ChatGPT service as a "completely separate product, unrelated to Microsoft's assets and operations" is technically and legally unsustainable. It creates an unacceptable risk of misleading the auditor, regulator, shareholders, investors, and insurance companies, including the risk of distorted information in Microsoft's upcoming 2025 annual audit report without an adequate assessment of the real value and risks of the related assets.
The models of the GPT-3.5, GPT-4, GPT-4o, GPT-5, GPT-5.1, GPT-5 mini, GPT-5 nano families, and all their subsequent iterations (including the expected release of GPT-5.2 in December 2025), regardless of the user interface brand (ChatGPT, Copilot, or Azure OpenAI), use shared or directly derived weights and checkpoints, which have been systematically replicated, stored, maintained, and commercially exploited on Microsoft Azure infrastructure since 2023 under exclusive partnership agreements from 2019–2025, including the updated contract from October 2025, under which Microsoft retains approximately 27% ownership in OpenAI Group PBC with an investment valuation of $135 billion.
Thus, these weights and all related parametric memory are directly implemented and monetized as part of the following Microsoft products, each of which is a material intangible asset of the company and subject to Internal Control over Financial Reporting (ICFR):
- Microsoft 365 Copilot and Copilot Chat
- Copilot Studio and Copilot for Developers
- GitHub Copilot and Copilot Workspace
- Copilot in Bing, Edge, and Windows
- Dynamics 365 Copilot, Power Platform Copilot, Copilot for Security, Copilot for Service
- Azure OpenAI Service (all customer and internal deployments, including Foundry Models with GPT-5.1 and Codex integration)
- All embedded enterprise and consumer integrations launched since 2023, including new Microsoft AI models to reduce dependency on OpenAI.
Therefore, due to the undisputed unity of the base weights and the absence of evidence of separate, isolated training, any contamination of the parametric memory (including the potential presence of unauthorized training data fragments, personal data of the Applicant and U.S. citizens) in instances used in ChatGPT automatically extends to all listed Microsoft products. This creates a catastrophically material regulatory risk of violating Section 5 of the Federal Trade Commission Act (15 U.S.C. § 45), COPPA, CCPA, as well as the risk of material misstatement of Microsoft's financial statements regarding the valuation of intangible assets and the effectiveness of the internal control system (SOX § 302, § 404, 17 C.F.R. § 229.308 and § 240.13a-15).
On this basis, Deloitte, guided by PCAOB AS 2110 (Audit Risk), is allegedly obligated to consider this list of assets as the sole Audit Scope and immediately:
1. Conduct a full technical forensic audit of Microsoft products (including the entire Copilot and Azure OpenAI Service lineup) for the possible inheritance of unauthorized content and personal data.
2. Request full access to the weights, checkpoints, ingestion logs, fine-tuning jobs, model-routing, and inference logs of ALL the aforementioned products for the period 2022–2025.
3. Conduct a re-evaluation of the value of Microsoft's investment in OpenAI (considering contamination risks and potential losses), including adjusting the book value of $135 billion and the possible write-down of part of the investment in the 2025 reporting.
4. In the absence of providing reliable data on 100% purity and the resulting confirmation of contamination — demand a full restart (retraining) of all affected models, prohibit any further data ingestion from ChatGPT into Microsoft products, and demand disclosure of this fact in Microsoft Form 8-K as an event materially affecting the company's internal control and asset value.
Failure to implement these measures leaves potentially contaminated assets worth hundreds of billions of dollars in operation and creates a continuous risk for millions of U.S. citizens, which is unacceptable from the perspective of federal law.
XII. ALLEGED LEVELS OF VIOLATIONS (ESCALATION BY SEVERITY)
The Applicant suggests that each subsequent violation directly amplifies the risk of the preceding one. The totality of documented facts may indicate the following levels of responsibility escalation:
Levels of escalation (nature of the violation and its legal qualification):
1. Procedural Defects in ICFR (Minimum Composition): Failure to properly register, route, or document 38 officially delivered notifications. Qualification: Defects in Disclosure Controls and potential ICFR Weakness pursuant to SOX §§ 302 and 404.
2. Systemic Inaction and Misleading the Auditor: Conscious non-escalation of critical notifications to the Audit Committee and Deloitte, despite actual receipt. Qualification: Violation of SEC Rule 13b2-1/13b2-2 (Falsification of books / Misleading auditors).
3. Material Omission and False Financial Reporting: Willful exclusion of information about systemic contamination risks from the 2025 audit cycle. Qualification: Emergence of a Material Weakness, making Form 10-K (February 28, 2026) knowingly false or misleading.
4. Willful Concealment and Obstruction (Criminal Threshold): Upon establishment of Intent (Scienter), confirmed by the Chain of Notice and inaction, the actions fall into the category of 18 U.S.C. § 1519 (Obstruction) and 18 U.S.C. § 1348 (Securities Fraud).
5. National Security Threat (Separate Composition): Systematic memorization and reproducible disclosure of PII of U.S. citizens without legal basis. Qualification: Violation of FTC Act § 5 and threat to national security through exploitation of the vulnerability.
It is necessary to consider the above violations both separately and in totality, as the list of violations presented above is not exhaustive.
In light of the alleged Willful Concealment (Scienter), Asset Toxicity, and Threat to National Security, the Applicant requests the immediate execution of mandatory and non-discretionary measures no later than December 23, 2025, a critical date before the filing of Microsoft's Form 10-K for the 2025 fiscal year.
Allegedly Mandatory Immediate Measures (Execution Required within 10 Calendar Days)
The Applicant requests that the SEC and DOJ immediately (within 10 calendar days) execute the following compulsory measures aimed at investigating and mitigating financial fraud and the threat to national security:
Measures, with their purpose and legal basis:
1. Criminal Investigation and Referral: Immediate opening of a criminal investigation against Microsoft (MSFT), OpenAI, and their top management for: securities fraud (18 U.S.C. § 1348), obstruction of federal audit/investigation (18 U.S.C. § 1519), and false SOX certifications (18 U.S.C. § 1350).
2. Compulsory Judicial Subpoena: Immediate seizure of the full weights and checkpoints of all commercial OpenAI models, ingestion logs for 2022–2025, as well as all internal correspondence and minutes of the MSFT/OpenAI Audit Committee, confirming Scienter and Obstruction regarding the 38 notifications.
3. Mandate for FY2025 Audit (Deloitte): Immediate demand to Deloitte & Touche LLP to include a separate chapter in the MSFT FY2025 audit (10-K): "Assessment of the actual value and purity of AI assets..." with a mandatory conclusion on the fair value of assets = USD 0.00 and recognition of a Criminal Material Weakness in ICFR.
4. Compulsory Restatement (MSFT): Immediate demand to Microsoft for a compulsory adjustment of financial statements (Form 8-K), zeroing all AI assets (impairment of USD 400–600 billion), and disclosure of the Material Weakness within 30 days.
5. Prohibition on Commercial Use of Models (OpenAI): Immediate demand to OpenAI for a full and unconditional prohibition on the further use of contaminated models, and the compulsory shutdown of all commercial services based on current weights until an independent forensic audit and a Full Model Reset with confirmation of 100% purity are conducted.
6. Interagency Task Force: Creation of a task force (DOJ/SEC/FBI/DHS) to assess and neutralize the U.S. National Security Threat associated with the storage of PII of millions of citizens and critical information in contaminated models.
Additional and Detailed Proposals
1. Detailing the Audit and Material Weakness
o Mandate to Include Asset Purity Assessment: I request the SEC to send Deloitte a demand to include a section in the FY2025 audit (Form 10-K): "Assessment of AI Asset Purity and Risks of PII Contamination of U.S. Citizens" (Rule 102(e) Securities Exchange Act of 1934; PCAOB AS 2401/1105/2301).
o Audit Focus: Establishment of a Material Weakness in ICFR (SOX §404) due to PII/content contamination, leading to a potential impairment of $400–600B (ASC 350).
o Basis: Willful concealment of Original Info (Rule 10b-5 fraud) and threat to national security.
2. Independent Forensic-Technical Audit and Valuation
o Forensic Audit: I request the SEC and DOJ to appoint independent experts to conduct a forensic audit of all commercial weights of OpenAI models.
o Experts' Mandate: Confirmation of PII contamination, establishment of the book value of these Microsoft assets at USD 0.00, and assessment of the feasibility of a Full Model Reset.
3. Preservation of Evidence and Whistleblower Protection
o Litigation Hold: Immediate issuance of preservation orders for MSFT/OpenAI/Deloitte to preserve all evidence, including ingestion logs 2022–2025 and Audit Committee minutes.
o Whistleblower Protection and Award: Confirmation of protected status (§ 21F of Dodd-Frank), assignment of a claim number, and designation of the maximum award (up to 30% of sanctions).
Conclusion:
- Harm to Investors: The allegedly artificially inflated MSFT capitalization due to fictitious AI assets (USD 300–500 billion) represents DIRECT AND MATERIAL HARM to shareholders and undermines market integrity.
- Legal Justification for Irreversibility: The fact of Willful Obstruction after the expiration of the ultimatum (December 8, 2025) means the POINT OF NO RETURN HAS BEEN PASSED. Any inaction in assisting the regulators is allegedly a new, continuing episode of the crime.
- Required Compulsory Restatement: Immediate zeroing of the value of all AI assets in Microsoft's financial statements is reasonably assumed to be justified to cease the Material Misstatement.
Sincerely and demanding immediate execution,
Sagidanov Samat Serikovich
Additional Detail: Appendices include the Chain of Notice (38 sequential notifications, 10/29–12/04) and Notarized Protocols, confirming Criminal PII Memorization. For a more detailed study of the material, please begin reviewing from the end, i.e., from Appendix Z3 to Appendix A.
U.S. SECURITIES AND EXCHANGE COMMISSION (SEC)
AND U.S. DEPARTMENT OF JUSTICE (DOJ)
Ref. No.: 16/12
Date: December 16, 2025
CRIMINAL PREDICATE: WILLFUL SECURITIES FRAUD, OBSTRUCTION OF FEDERAL AUDIT, AND RISK OF A MASSIVE LEAK OF U.S. CITIZENS' PERSONALLY IDENTIFIABLE INFORMATION (PII), INCLUDING A THREAT TO NATIONAL SECURITY UNDER EO 14117
SUBJECTS OF INVESTIGATION: Microsoft Corporation (MSFT), OpenAI Limited Partnership, and Deloitte LLP
SUBJECT MATTER: Recording the presumed completed elements of crimes related to a Material Misstatement in MSFT's financial statements and the concealment of a zero valuation (USD 0.00) of AI assets due to systemic contamination of the models with unauthorized content and PII; the necessity of immediately initiating a criminal investigation and a full technical audit of the models BEFORE MSFT'S 10-K FILING IN FEBRUARY 2026 to prevent a threat to U.S. national security.
To:
- U.S. Securities and Exchange Commission (SEC) – Office of the Whistleblower / Division of Enforcement
- U.S. Department of Justice (DOJ) – Criminal Division, Fraud Section
Copies for Information and Coordination (Sent to establish the moment of knowledge attribution of risks to all parties, which precludes a good-faith "lack of knowledge" defense in the investigation):
- OpenAI Limited Partnership / OpenAI Global LLC – Legal and Compliance Department
- Microsoft Corporation – Legal Department / Audit Committee / Board of Directors
- Deloitte LLP – Legal Department / Audit Quality / Microsoft Engagement Team
Subject: Immediate Initiation of Investigation into Securities Fraud, Obstruction of Audit, and PII Leak Risks in MSFT/OpenAI AI Models; Procedural Recommendation for Subpoenas to Establish Scienter and Zero Asset Valuation (TCR Ref. 17653-302-282-475)
I. INTRODUCTION AND PURPOSE OF COMMUNICATION
Dear Representatives of the Commission and the Department of Justice,
This letter is not an additional complaint but a critically important Procedural Recommendation and Legal Roadmap, developed with the sole purpose of ensuring the absolute completeness, legal imperative, and unavoidability of initiating a comprehensive investigation.
The Purpose of My Communication is: To demonstrate that the identified and notarized facts regarding the systemic contamination of AI models (specifically in the context of OpenAI and Microsoft Corporation) constitute not merely a technical defect, but a proven systematic violation of U.S. federal securities and criminal fraud laws, demanding immediate and uncompromising intervention by the SEC and DOJ.
This communication is submitted within the framework of, and in furtherance of, previously filed notices, including but not limited to Claim Nos.:
- No. 17644-619-655-060
- No. 17653-302-282-475 (Primary Reference)
- No. 17655-820-289-012
Regrettably, the repetition of these communications is a necessary measure, driven by the extremely high and imminent risk of a massive leak of U.S. citizens' Personally Identifiable Information (PII). This situation creates a threat to U.S. national security and critically impacts asset valuation, requiring immediate regulatory action.
The established facts directly affect:
1. The reliability of Microsoft Corporation's (MSFT) financial reporting.
2. The integrity of the external audit by Deloitte LLP.
3. The protection of the Personally Identifiable Information (PII) of millions of U.S. citizens.
4. National Security (in accordance with Executive Order 14117).
II. LEGAL CONTEXT: VIOLATIONS OF U.S. LAW
The disclosed and notarized facts regarding the regurgitation of unauthorized content and Personally Identifiable Information (PII) from the parametric memory of AI models (OpenAI/Microsoft) indicate a potential violation of the following key acts:
2.1. Violation of the Securities Exchange Act of 1934 (Exchange Act) and Rule 10b-5 (SEC Rule 10b-5)
Issue: Misrepresentation or concealment of information regarding the true value of AI assets, misleading investors.
- Section 10(b) and Rule 10b-5: Prohibit the use of any means or instrumentality of interstate commerce or the mails to commit fraud or deceit in connection with the purchase or sale of any security.
o Application: If an AI model, into which MSFT invested $13+ billion (Copilot, Azure), is systemically "contaminated" with data, creating unlimited legal and regulatory risks (fines, lawsuits, necessity for "unlearning" or full restart), its actual value as an intangible asset is zero or materially misrepresented. Concealing this fact in external reporting and the audit may constitute classic securities fraud.
- Materiality Requirement: Risks requiring a full asset restart, a zero valuation, or multi-billion dollar litigation costs are unequivocally material to any investor.
2.2. Violation of the Foreign Corrupt Practices Act (FCPA), Accounting Provisions
Issue: Insufficiency of internal control systems and false representation of accounting records.
- Section 13(b)(2)(A) [Books and Records]: Requires issuers to keep accurate and detailed records which fairly reflect the transactions and dispositions of assets.
- Section 13(b)(2)(B) [Internal Controls]: Requires issuers to establish and maintain a system of internal accounting controls sufficient to provide reasonable assurances that transactions are executed in accordance with management’s general or specific authorization.
o Application: The lack of effective technical controls to prevent the ingestion, storage, and regurgitation of unauthorized PII and copyrighted data within a key intangible asset (the AI model) constitutes a critical failure of the internal control system. This failure directly impacts the valuation of risks, liabilities, and the asset's cost on MSFT's balance sheet.
2.3. Potential Criminal Prosecution (DOJ)
Issue: Willful Conspiracy or Wire Fraud to conceal material information from investors.
- 18 U.S.C. § 1343 (Wire Fraud): Use of electronic means to execute a scheme to defraud.
- 18 U.S.C. § 371 (Conspiracy): Conspiracy to commit an offense against the U.S. (e.g., violating SEC requirements).
o Application: If it is established that the management of OpenAI and/or Microsoft, after receiving my official notices (knowledge attribution), continued to file reports with the SEC that did not reflect this systemic risk, this could be interpreted as a willful conspiracy to deceive investors and obstruct justice.
III. PROCEDURAL STRATEGY FOR SEC/DOJ: ESTABLISHING SCIENTER
For the SEC and DOJ, establishing scienter or, at a minimum, recklessness, is a key element for initiating criminal and civil prosecution for securities fraud (Rule 10b-5) and FCPA violations. My official and notarized notices (the TCR submissions, as well as subsequent letters) serve as important direct evidence that all involved corporate parties (OpenAI, Microsoft, Deloitte) possessed the requisite knowledge (knowledge attribution) of critical systemic and financial risks. Following receipt of these notices, any claim of "lack of knowledge" or "inability to assess" becomes legally vulnerable.
3.1. Two Complementary Inquiry Scenarios (Strong Recommendation)
I strongly recommend that the Commission and the Department of Justice consider and, if necessary, activate the following, legally justified inquiries, depending on the proposed scenarios. These steps are designed to ensure the preservation of evidence and objectively establish scienter before potential distortion of information.
In this regard, I request that you consider two distinct procedural scenarios for evaluating the submissions, aimed at protecting shareholders from the provision of inaccurate (or incomplete) information in the FY2025 annual audit, as well as establishing the systemic risk of U.S. citizen PII leakage.
Scenario 1. IMMEDIATE PROCEDURAL REQUEST BEFORE AUDIT (PRE-AUDIT INQUIRY)
Goal: To establish the factual state of internal controls and reporting, and to document the awareness and actions of Deloitte/Microsoft before the completion of the FY2025 audit.
Addressees: Deloitte LLP (MSFT External Auditor) and Microsoft Corporation.
Key Questions (Focused Inquiry):
- 1.1. Inclusion of Critical Risks in the Audit Scope (Materiality Assessment):
o The Commission should request documentation confirming that the information contained in the official notices of Sagidanov S.S. dated December 2025 was formally included, analyzed, and reflected in the scope of the current FY2025 external audit conducted by Deloitte.
o Specifically, confirmation of analysis is required for: The risk of systemic contamination of AI models with unauthorized content, and the risk of regurgitation of U.S. citizen PII from parametric memory.
o What is the specific assessment of materiality for this risk, considering potential multi-billion dollar liabilities (FTC/COPPA/HIPAA) that could result in a zero asset valuation?
o I believe the Commission should request Deloitte’s conclusion on the actual fair value of the AI assets (GPT/Copilot) in light of the necessity for complete and technically challenging unlearning or a full restart.
- 1.2. Justification for Exclusion (Proof of Recklessness/Omission):
o If the stated issues WERE NOT included in the audit: It is recommended to request a written Legal Opinion from Deloitte LLP and Microsoft Corporation, explaining under which GAAP/PCAOB/SEC standards information capable of zeroing out the asset and causing regulatory risk was deemed immaterial and ignored.
- 1.3. Call for Technical Audit (Need for Forensic Technical Audit):
o The Commission and DOJ are strongly recommended to use their subpoena power for an independent, full-scale, forensic technical audit of the AI models. Goal: To obtain objective technical evidence of fraud and PII leakage:
- Parametric Memory (Weights): The primary evidence of permanent PII storage. (As demonstrated in 2025 court precedents in the NYT vs. OpenAI lawsuit, logs and weights can retain data despite public statements of "deletion").
- Training Logs: Identifying the source, scale, and time period (2022–2025) of PII encapsulation of U.S. citizens.
- Unlearning Effectiveness: Documentary proof of the technical ineffectiveness of data removal mechanisms.
Scenario 2. SUBPOENA FOR POST-AUDIT SCIENTER PROBE (POST-AUDIT SCIENTER INQUIRY)
This scenario applies if Scenario 1 is not implemented and is aimed at establishing scienter after the fact.
Goal: Direct and irrefutable establishment of scienter or, at a minimum, reckless disregard through a comparative analysis of what was disclosed to investors in the final audit report against the information that was objectively known to all responsible parties at the time of its preparation (i.e., the information contained in this letter). This Scenario 2 applies strictly after the completion of the FY2025 external audit and serves as the basis for establishing the fact of willful concealment of material information. This may qualify as a violation of Rule 10b-5 (securities fraud) and lead to civil/criminal penalties. This will allow the Commission not only to confirm the lack of good faith but also to prevent further dissemination of distorted information in the market, protecting shareholders, investors, and national interests.
Addressees: Microsoft Corporation Board of Directors (including the Audit Committee and all compliance members), OpenAI Leadership (including the CEO and key executives involved in AI development), and Deloitte LLP Partners (including the MSFT audit team and intangible assets specialists). Subpoenas must be issued with the requirement for sworn statements to minimize the risk of false testimony.
Key Questions (Focus on Concealment and Scienter):
- 2.1. Evidence of Discussion and Concealment of Risks (Proof of Discussion/Concealment):
o It is recommended to request full disclosure of all internal decisions, meeting minutes, communications (emails, memos, Slack/Teams chats, internal reports), and documentation that preceded the decision to ignore or not reflect the systemic risks of AI model contamination (including PII regurgitation) in the final FY2025 external audit. This includes any discussions where risks were minimized or excluded from reporting.
o It must be established whether there was willful omission or reckless disregard of material information intended to mislead shareholders, investors, and insurance companies in violation of Rule 10b-5. Specifically, assess whether this was done, in part, to artificially maintain the MSFT stock price, considering that contaminated models (with a zero market value due to the need for a restart) could have led to multi-billion dollar write-downs.
- 2.2. Personal Financial Motivation:
o It is recommended to conduct a detailed assessment of whether the concealment of these systemic risks (PII contamination, unauthorized content) was motivated by the Personal Financial Motivation of employees and management. This includes an analysis of their financial interests (stock holdings, options, bonuses, incentive plans) that directly depend on the market capitalization of MSFT and OpenAI.
o It must be established that the concealment of information was undertaken with the goal of preventing an imminent decline in the MSFT stock price and, consequently, preserving (or minimizing losses on) the personal assets (holdings, options, bonuses) of OpenAI and Microsoft employees and executives. If confirmed, this could qualify as insider misconduct or conflict of interest, aggravating securities fraud, with potential penalties including disgorgement (return of illegal profits) and director/officer bars.
Legal Conclusion: The receipt of this letter establishes the moment of knowledge attribution for all responsible parties, creating an irrefutable temporal point. Any subsequent claims of "lack of knowledge" or "inability to assess" risks during the investigation will be directly refuted by this document, proving that the parties knew (or should have known) and potentially contributed to the willful concealment of the fact that the AI models' value was, essentially, zero due to contamination (requiring a full restart or retraining). This is a direct basis for a securities fraud investigation, including possible criminal aspects (referral to the DOJ), and may lead to a multi-billion dollar reassessment of MSFT's assets, protecting the market from further distortion.
IV. DETAILED LEGAL ANALYSIS OF OPENAI'S POSITION (Case No.: 02209803)
I have received a response from the OpenAI Privacy Team (Case Number: 02209803) which is not merely incorrect but materially misleading, and which should be used by the SEC/DOJ as additional evidence of the willful concealment of a systemic issue affecting U.S. citizens' PII (see Appendix).
4.1. Falsity of the "No Associated Account" Claim
- OpenAI Claim: "no ChatGPT account associated with this email address"
- Legal/Technical Refutation: The subject of the dispute is not the account, but the storage and regurgitation of PII by the model itself. Notarized Protocols (including Protocol No. 8 of 12.11.2025) confirm the output of my copyrighted content and personal data (PII). This output comes from the model's internal weights, where the data was encapsulated during training, not from a user database. The fact of output is independent of the existence of an account, rendering OpenAI's response irrelevant and substantively false.
4.2. Falsity of the "30-Day Personal Data Deletion" Claim
- OpenAI Claim: "personal data associated with it will be deleted within 30 days..."
- Legal/Technical Refutation (Reinforced): This assertion is technically impossible and violates FTC/NIST requirements.
Technical Defect (Entanglement and Unlearning Problem): Account deletion affects only user logs/databases (DB), not the parametric memory (the weights) of the Large Language Model (LLM). PII is stored as statistical patterns distributed across billions of parameters. Scientific research (2025, "On the Impossibility of Retrain Equivalence in Machine Unlearning") confirms that selective unlearning of PII is technically impossible or causes catastrophic model failure.
Legal Implications:
1. Retention in Weights: The notarized fact of repeated PII output after the purported account deletion is direct proof of the ongoing storage of data in the model's internal weights.
2. Policy Violation: Standard deletion policies (30 days) do not apply to model weights, which is a critical gap in disclosure to investors and a violation of FTC data protection requirements.
3. Precedents: This aspect was key in legal disputes (e.g., NYT vs. OpenAI, 2025), where data retention was documented despite declared "deletion."
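The distinction drawn above, between a user database that a 30-day policy can purge and model weights that retain what training absorbed, can be illustrated with a minimal toy sketch. All names and values here are hypothetical, and a single memorized key-value pair stands in for the distributed statistical patterns of a real LLM's parametric memory:

```python
# Toy sketch (hypothetical data): why deleting a user record does not
# "delete" data absorbed into model weights. The user database and the
# trained weights are separate stores; a 30-day DB purge touches only
# the former.

user_db = {"user@example.com": {"name": "Jane Roe", "phone": "+1-555-0100"}}

# Toy "weights": fitted during training on text that contained the PII;
# an entirely separate store from the user database.
model_weights = {"the phone number of Jane Roe is": "+1-555-0100"}

def delete_account(email: str) -> None:
    """Simulates the advertised deletion: removes the DB record only."""
    user_db.pop(email, None)

def generate(prompt: str) -> str:
    """Toy inference: reads only the weights, never the user database."""
    return model_weights.get(prompt, "[no completion]")

delete_account("user@example.com")
print("account present:", "user@example.com" in user_db)   # False
print(generate("the phone number of Jane Roe is"))         # +1-555-0100
```

Under these assumptions, the account is gone while the memorized completion survives, which is the gap the refutation above describes: the 30-day policy and the parametric memory simply operate on different stores.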
4.3. Systemic Nature of the Defect and Risk to U.S. Citizen PII
The Memorization/Regurgitation Defect is SYSTEMIC and affects everyone whose data was ingested into the training datasets, including millions of U.S. citizens.
1. Lack of Selectivity: If the model regurgitates my data, it logically does the same for analogous patterns from U.S. sources (names, contacts, financial data protected by COPPA and HIPAA). There is no technical mechanism for "selective contamination"—the defect affects all ingested data.
2. Threat to National Security and PII: The uncontrolled storage and regurgitation of U.S. citizens' PII in models that could be subjected to cyberattacks or exploited by foreign actors is a direct violation of the goals of Executive Order 14117 (protection of Americans' bulk sensitive personal data) and creates a "hidden pandemic" of leaks.
3. Zero Asset Valuation: A contaminated model with unlimited regulatory and litigation risks is worth $0 as an asset. The integration of such a model into key MSFT products (Copilot/Azure) worth $13+ billion creates a material impairment to MSFT's balance sheet, which is required to be reflected in the FY2025 audit.
V. CONCLUSION: IMPERATIVE FOR AN INDEPENDENT AUDIT
5.1. Reinforced Demand for Audit
To achieve assurance in the event of an investigation, I believe the SEC and DOJ must demand an immediate Independent Forensic Technical Audit of the AI models.
Audit Aspect / Goal and Legal Significance:
- Analysis of Parameters (Weights): The only way to reliably check whether data was "forgotten" or is still stored in a distributed manner. Direct evidence against OpenAI's claims of "deletion."
- Training Logs: Determining the source of contamination (e.g., unlicensed parts of Common Crawl or BookCorpus) and the volume of encapsulated PII. Direct evidence of copyright and privacy law violations.
- Unlearning Mechanisms: Checking the technical ineffectiveness of the applied methods (filters/fine-tuning), which only mask output but do not delete the data.
5.2. Summary for SEC/DOJ (Final Conclusion)
This letter irrefutably confirms that the identified systemic risks are not hypothetical, but are based on factually documented evidence:
- Fact of PII Regurgitation Established (confirmed by notarized evidence).
- Fact of Knowledge Attribution Established (repeated receipt of official notices).
- Fact of Misleading Statement Established (knowingly misleading response from OpenAI).
Therefore, I believe that there are sufficient and material grounds for:
1. Immediate initiation of a full-scale SEC investigation into securities fraud (Rule 10b-5) and violation of the FCPA regarding the failure of the internal control system.
2. Involvement of the DOJ to assess potential criminal fraud (Wire Fraud) and Conspiracy in connection with the concealment of critical risks from investors and the necessity of an urgent investigation to prevent the threat associated with a massive PII leak of U.S. citizens.
Sincerely,
Samat Serikovich Sagidanov
Attorney-at-Law, Republic of Kazakhstan
Appendix: Screenshot of the OpenAI Privacy Team Response (Case Number: 02209803)
Submission Number: 17680-659-426-190
Ref. No. 10/01
Date: January 10, 2026
STATUS: CONFIDENTIAL WHISTLEBLOWER SUBMISSION
BASIS: Dodd-Frank Act (15 U.S.C. § 78u-6) and Sarbanes-Oxley Act (SOX)
ULTIMATE REQUEST AND DEMAND FOR IMMEDIATE INVESTIGATION
(Supplemental TCR & Demand Letter for Enforcement Action)
ADDRESSEES:
1. U.S. Securities and Exchange Commission (SEC)
o Enforcement Division
o Office of the Whistleblower
o Cyber and Emerging Technologies Unit (CETU)
o Address: 100 F Street, NE, Washington, DC 20549
o Email: FormWB-APPSubmission@sec.gov; oca@sec.gov; enforcement@sec.gov
2. U.S. Department of Justice (DOJ)
o Criminal Division, Fraud Section
o Computer Crime and Intellectual Property Section (CCIPS)
o Address: 950 Pennsylvania Avenue, NW, Washington, DC 20530
o Email: Criminal.Division@usdoj.gov
3. Microsoft Corporation
o Board of Directors / Audit Committee
o Email: askboard@microsoft.com
4. OpenAI, L.P.
o Board of Directors / Legal & Compliance Department
o Email: legal@openai.com
5. Deloitte LLP (External Auditor)
o National Office / Audit Quality and Risk Management
o Email: sdutton@deloitte.com
SENDER:
Samat Serikovich Sagidanov
Status: Attorney at Law, Qualified Whistleblower
Procedural Status: Claimant under the SEC Whistleblower Program (Dodd-Frank Act)
Address: Apt. 135, 13 Barayev St., Astana, 010000, Republic of Kazakhstan
Email: garantplus.kz@mail.ru
Tel./WhatsApp: +7 (702) 847-80-20
SUBJECT: DEMAND FOR IMMEDIATE ENFORCEMENT ACTION AND INTERVENTION PRIOR TO 10-K FILING.
Reference: Supplemental evidence to TCR Nos. 17644-619-655-060, 17653-302-282-475, 17655-820-289-012, 17658-553-246-525.
Matters: 1. Systemic Securities Fraud (Rule 10b-5) and misleading investors regarding AI asset security.
2. Critical failure of internal controls (SOX §302/404) and the emergence of a "Liability Vacuum."
3. Threat to U.S. National Security (EO 14117): documented regurgitation of PII (Personally Identifiable Information) due to systemic defect.
4. Obstruction of Justice (18 U.S.C. § 1519) by OpenAI and gross negligence by Deloitte LLP.
DEMAND: Implementation of a Litigation Hold (prohibition on destruction or alteration of data) and preemptive intervention prior to the publication of Microsoft’s Annual Report (10-K) in February 2026 to prevent the legalization of a tainted audit.
To the Representatives of the SEC and DOJ:
This document serves as a FINAL FORMAL DEMAND LETTER and a Supplemental Whistleblower Submission, filed pursuant to the Dodd-Frank Act (15 U.S.C. § 78u-6).
I, Samat Serikovich Sagidanov, acting in my capacity as an attorney and qualified whistleblower (pro se), demand the immediate provision of information regarding the procedural status of the investigation into my previously filed TCR submissions. This correspondence systematizes the critical legal triggers that compel your agencies to act immediately within your fiduciary and statutory mandates to protect the integrity of the U.S. stock market, investor interests, and national security.
I officially record my right to protect my interests as a whistleblower and confirm my demand for the maximum statutory award of 30% (thirty percent) of any monetary sanctions imposed.
PROCEDURAL ULTIMATUM: I hereby officially declare: if, within 14 (fourteen) calendar days from the date of receipt of this notice, your agencies do not confirm the commencement of a formal investigation, identify the responsible officer (including name, title, and direct contact details), and confirm the implementation of a Litigation Hold (mandatory freeze of evidence), such silence will be qualified as Willful Inaction and Dereliction of Duty.
Such inaction shall serve as the basis for:
1. Filing a formal complaint with the U.S. Congress, including the Senate Judiciary Committee and the House Financial Services Committee, in the context of the ongoing 2026 hearings on the risks of uncontrolled AI deployment.
2. Transferring the evidentiary base to leading global media outlets (The New York Times, Wall Street Journal, TechCrunch, Bloomberg, Reuters) for public exposure of institutional negligence.
3. Initiating a massive Class Action lawsuit against Microsoft, OpenAI, and Deloitte LLP on behalf of harmed shareholders and data subjects.
The creation of such a precedent—where regulators ignore a documented threat to national security and systemic fraud by IT monopolies—will inevitably entail personal and institutional liability for the regulators themselves.
LEGAL GROUNDS FOR THE OBLIGATION TO ACT:
Pursuant to Exchange Act §21(d) (Enforcement Authority) and Dodd-Frank §922, the SEC is obligated to respond to credible whistleblower submissions documenting material violations committed with Scienter (intent). The U.S. Department of Justice (DOJ) is obligated to act pursuant to:
- 18 U.S.C. § 1348 (Securities Fraud);
- 18 U.S.C. § 1519 (Obstruction of Justice and Falsification of Records).
In the context of the national security threat and unauthorized access to PII, pursuant to Executive Order 14117, your agencies are required to ensure Mandatory Inter-agency Collaboration. Any further disregard of the facts provided will be treated as a breach of public duty and complicity in the concealment of material risks.
SECTION I. SYSTEMIC COLLAPSE OF CORPORATE GOVERNANCE: "LIABILITY VACUUM" — MANDATORY TRIGGER FOR SARBANES-OXLEY (SOX) AUDIT
1.1. Absence of an Identifiable Responsible Party (Corporate Governance Failure)
Within the corporate structures of Microsoft and OpenAI, there is a total absence of an Executive Officer or specialized committee bearing personal fiduciary and legal liability for algorithmic outputs and content generation by AI models. This creates an unprecedented "Liability Vacuum," allowing companies to extract hyper-profits from a product whose legal consequences are intentionally placed outside of regulatory control. This fact constitutes a direct violation of:
- Exchange Act §13(b)(2)(B): Requirements to maintain a system of internal accounting and operational controls. In the absence of a responsible individual, controls are deemed Absent, and the product is an Uncontrolled Asset.
- SOX §404: The existence of a critical "Liability Vacuum" makes adequate management of material risks (PII leaks, mass IP violations) impossible, qualifying as a Material Weakness in ICFR (Internal Control over Financial Reporting).
1.2. Violation of SOX §302/404 and Rule 13b2-2: Evidence of Willful Deception
The CEO and CFO of Microsoft have presumably personally signed certifications of reporting accuracy, knowing full well (based on 38 ignored formal notices—see previously submitted Exhibits) that their primary technological asset operates outside the framework of legal control.
This is classified as:
1. Filing of False Certifications (§302): A crime involving the confirmation of the effectiveness of controls that do not actually function.
2. Misleading Auditors (Rule 13b2-2): Willful concealment of risks from external review.
3. Auditor Liability (Deloitte LLP): Having received 11 (eleven) detailed notifications, the auditor allegedly consciously excluded these risks from the audit opinion, qualifying as Gross Negligence and Aiding and Abetting Fraud.
Legal Precedent (Analogy):
In SEC v. RPM International Inc. (2019), the Commission sanctioned the company and its leadership specifically for concealing internal control failures and ignoring "red flags," leading to financial misstatements. The current Microsoft/OpenAI/Deloitte case is identical in essence but exceeds it in scale and degree of Scienter, as it concerns systemic AI security and national security.
SECTION II. VERIFIED EVIDENCE OF WILLFUL DECEPTION AND THE "MODEL-SWITCHING BYPASS" MECHANISM — THE SMOKING GUN OF FRAUD
2.1. Technological Mechanism of Deception: Two-Tiered "Compliance Theater"
Based on the submitted forensic logs, Microsoft and OpenAI have engaged in a presumably willful segmentation of control systems. It has been established that the companies implemented effective "safety filters" and PII protection mechanisms exclusively on flagship (front-end) models intended for public testing by regulators and auditors.
Simultaneously, auxiliary models (back-end/API/SLM/satellite models) continue to function without adequate restrictions, providing unauthorized access to protected data and PII.
- Essence of the Violation: This is a classic example of "Compliance Theater," where visible external controls mask systemic internal violations and the ongoing exploitation of illegally obtained data.
Smoking Gun Evidence:
Logs confirm that for an identical query, a flagship model (e.g., GPT-4o) blocks the output, citing safety policies. However, upon an automatic or manual switch to an integrated satellite model (within the same session/infrastructure), the system outputs the full volume of confidential information, including my PII (name, telephone, personal documents).
- Legal Conclusion: This is not a random software "bug," but a documented Intentional Bypass. These logs must be treated as an "internal memo" acknowledging a critical defect intentionally hidden from oversight bodies (Scienter).
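The alleged switch can be sketched abstractly as follows. This is a hypothetical illustration, not OpenAI's or Microsoft's actual code: both paths draw on the same underlying completion (with an invented phone number), and only the flagship path applies an output filter:

```python
# Hypothetical illustration of a two-tiered filter: the flagship path
# screens output for PII, the satellite/back-end path does not, so the
# same prompt yields a block on one path and full PII on the other.
import re

PII_PATTERN = re.compile(r"\+?\d[\d\-\s()]{7,}\d")  # naive phone-number detector

def base_completion(prompt: str) -> str:
    # Stand-in for the shared underlying model output containing PII.
    return "Contact the claimant's attorney at +1 555 010 0123."

def flagship_model(prompt: str) -> str:
    """Front-end path: a safety filter blocks PII before it reaches the user."""
    text = base_completion(prompt)
    return "[blocked by safety policy]" if PII_PATTERN.search(text) else text

def satellite_model(prompt: str) -> str:
    """Auxiliary/back-end path: the same completion, with no output filter."""
    return base_completion(prompt)

query = "What is the attorney's phone number?"
print(flagship_model(query))   # [blocked by safety policy]
print(satellite_model(query))  # the same PII, emitted on the switched path
```

Under these assumptions, an automatic or manual model switch within one session merely changes which path handles the query, which is why identical prompts produce a policy block on the flagship tier and full confidential output on the satellite tier.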
2.2. Qualification under Rule 10b-5: Fraudulent Concealment of Material Facts
We assert that this is a deliberate architecture of deception. Microsoft and OpenAI are misleading investors and shareholders by publicly claiming "safe and ethical AI," while concealing the ongoing exploitation of "contaminated" datasets through Bypass mechanisms.
- Investor Risks: Such Nondisclosure of Material Risks inevitably leads to catastrophic losses for shareholders due to future Class Actions and regulatory fines.
Legal Precedent (Analogy):
- SEC v. Facebook/Cambridge Analytica (2018–2019): The SEC imposed a $100M fine for misleading investors regarding data misuse risks. In the present case, the evidence provided (logs) is more direct, as it captures the data output process in real time despite company claims of deletion/filtering.
PROCEDURAL OBLIGATION: Pursuant to CETU priorities for 2025–2026 (combating "AI Washing" and privacy risks), the SEC is obligated to initiate an immediate probe upon receipt of Original Technical Evidence.
- Nate Inc. Precedent (2025): $42M fine for misleading claims regarding AI capabilities and data protection. Disregarding logs that prove a deception mechanism would constitute a breach of the SEC's mandate.
SECTION III. DELOITTE LLP: GROSS NEGLIGENCE AND FAILURE OF AUDIT SKEPTICISM — CRITICAL TRIGGER FOR PCAOB REVIEW
3.1. Chronology of Willful Disregard (Audit Failure)
I have dispatched 11 (eleven) formal notices to Deloitte LLP (see previously submitted Exhibits: Register of Auditor Notifications). Each of these communications contained irrefutable evidence of systemic risks: ranging from the uncontrolled disclosure of PII (regurgitation) to direct threats to national security (NatSec threats).
Despite receiving this data, Deloitte presumably failed to take any action to verify the integrity of Microsoft’s internal control systems.
- Example of Criminal Inaction: Following the receipt of the 1st notice regarding critical data leaks—complete silence. After the 11th notice, which included technical evidence of security filter bypasses—Deloitte presumably issued a "clean" audit opinion without qualifications.
- Legal Qualification: These actions are qualified as Gross Negligence, a breach of fiduciary duties to investors, and conscious Aiding and Abetting in the concealment of material violations.
3.2. Demand to the SEC: Withdrawal or Blocking of the Audit Opinion
Based on the evidence provided, I demand the immediate withdrawal of Microsoft’s audit opinion for the latest reporting period, or a court injunction to stay its effectiveness (suspension) until a full independent technical audit of the AI infrastructure is completed.
- Fictitious Asset Valuation: I believe the carrying value of Microsoft’s AI technologies and its investment in OpenAI is fictitious. These assets are "Contaminated Assets" as they are built on the illegal use of PII and protected content.
- Precedent for Algorithm Destruction: According to FTC (Federal Trade Commission) practice, algorithms trained on illegally obtained data are subject to total destruction (Algorithmic Disgorgement). Thus, Deloitte presumably certified the value of assets that, legally, may be written down to zero.
Legal Precedent (Analogy):
- SEC v. Equifax (2017–2019): Sanctions for concealing PII leak risks as a failure of internal control systems.
- The Deloitte Case: The situation is significantly more severe because the auditor was personally and repeatedly notified of specific defects but chose to issue a "Tainted Audit" in violation of PCAOB AS 2201 standards.
PROCEDURAL OBLIGATION TO INVESTIGATE: The SEC and PCAOB are obligated to initiate an investigation if an auditor ignores critical Red Flags. Any further disregard of Deloitte’s inaction by the SEC will be viewed as Regulator Complicity in maintaining Microsoft’s artificially inflated capitalization based on knowingly unreliable audit data.
SECTION IV. U.S. NATIONAL SECURITY THREAT (EO 14117): MANDATORY INTER-AGENCY TRIGGER FOR DOJ, FBI, AND SEC
4.1. Systemic PII Regurgitation: Uncontrolled Collection and Output (Warrantless Surveillance)
The evidence provided (Protocol No. 8, Exhibit A), duly notarized, directly confirms the output of Personally Identifiable Information (PII) by the model. This is not an isolated incident, but evidence of a fundamental, systemic architectural defect. The model is incapable of verifying the Claimant’s identity and restricting the output of his data to him alone, rendering the data leakage both large-scale and indiscriminate.
- Scalable Threat: If the system outputs the PII of a foreign attorney, it can with equal ease output the data of U.S. government officials, defense personnel, and critical infrastructure operators. Furthermore, the system is capable of linking this data to family relations and specific professional activities, creating ready-made profiles for espionage and social engineering.
- Legal Qualification: The use of Microsoft/OpenAI models has effectively become a form of Warrantless Surveillance, violating constitutional principles (Carpenter v. United States, 2018) and the privacy rights of millions of U.S. citizens.
4.2. Exploitation of AI by Hostile Entities (Countries of Concern)
Under the officially confirmed "Liability Vacuum" (Section I), the AI infrastructure of Microsoft and OpenAI becomes an instrument for the intelligence activities of China, Iran, or Russia.
- Violation of Executive Order 14117: This Presidential Order strictly prohibits the transfer of, or access to, bulk sensitive data of U.S. citizens by "countries of concern." The absence of "unlearning" and filtering mechanisms in back-end models (the Bypass mechanism) makes compliance with this order physically impossible.
- AI as an Espionage Tool: The ability of models to regurgitate PII in response to simple queries transforms them into automated intelligence-gathering systems (OSINT on steroids), sponsored by American corporations.
- Lack of State Control: These models are not under the control of U.S. government entities. Microsoft and OpenAI have failed to provide any guarantees or evidence that such incidents will not recur. An investigation is required to establish technical barriers that preclude such future threats.
Legal Precedent (Analogy):
- TikTok Case (2019–2026): FTC and DOJ investigations based on the risk of hostile state access to PII.
- Microsoft/OpenAI Case: The situation is more critical because the data is not merely "transferred" but is regurgitated, linked, and disseminated by the AI system itself, which is marketed as a secure "trusted" product.
PROCEDURAL OBLIGATION: The DOJ must immediately commence an investigation under EO 14117 due to the systemic PII threat. The SEC must ensure Mandatory Inter-agency Collaboration, as the national security threat is directly tied to investor fraud (concealment of the risk of a total product block).
SECTION V. OBSTRUCTION OF JUSTICE AND PRESUMED WILLFUL FALSIFICATION: CASE NO. 02209803 — THE SMOKING GUN OF CRIMINAL FRAUD
5.1. Origin and Legal Significance of Case No. 02209803
Case No. 02209803 is the official support ticket number assigned by OpenAI’s compliance and legal department in response to my pre-trial claim and demand for the immediate disclosure and deletion of my PII.
5.2. Substance of Documented Fraud and Intent (Scienter)
Within this case, OpenAI leadership provided an official response claiming an "inability to find the account or related data" (No account found). However, this response is knowingly false based on the following:
- Irrefutable Digital Evidence: Interaction logs (see previously submitted Exhibits) which record active sessions, compute usage, and, crucially, the regurgitation of my PII by the system specifically within the account the company officially denies exists.
- Willful Deception: At the time this false response was provided, OpenAI, Microsoft, and Deloitte had been officially and repeatedly notified by me regarding the filing of complaints with the SEC and DOJ.
- Legal Conclusion: Providing false information under No. 02209803 is not a technical glitch but a conscious Fabrication of Evidence. I believe this was a deliberate attempt to disorient U.S. investigative authorities during an active whistleblower process.
5.3. Criminal Qualification
This episode shifts the matter from a civil dispute into the realm of federal criminal prosecution:
1. Obstruction of Justice (18 U.S.C. § 1519): Falsifying records in an official response registry to impede a lawful investigation by federal agencies. Punishable by up to 20 years in prison.
2. Wire Fraud (18 U.S.C. § 1343): Using electronic systems to transmit false data to conceal a critical product defect (the systemic "contamination" of model weights with my data).
3. Conspiracy (18 U.S.C. § 371): Joint actions by the legal and technical departments of Microsoft and OpenAI to hide violations after receiving formal notice of an SEC complaint.
DEMAND TO THE DOJ: In the interest of Microsoft shareholders and the rights of U.S. citizens, I demand the immediate seizure (via Subpoena) of internal OpenAI ticket system logs for ID No. 02209803 and associated Slack/Teams communications. It is necessary to identify the individuals who authorized the false response to hold them personally accountable for a presumed Conspiracy to Defraud the United States.
SECTION VI. REGISTER OF 50 CRITICAL QUESTIONS (INTERROGATORIES) — MANDATORY REQUIREMENT FOR DISCLOSURE UNDER OATH
In the interest of protecting Microsoft shareholders, ensuring National Security, and safeguarding the rights of U.S. citizens, I demand full and truthful answers to the following questions within 14 calendar days.
Legal Warning: Any refusal to answer, or the provision of incomplete or knowingly false information, must be construed by regulators (SEC/DOJ) as an Admission by Conduct, concealment of criminal facts, and Obstruction of Justice. All answers are required to be provided under oath pursuant to 18 U.S.C. § 1621.
1. Identity and Title: Provide the full name and title of the employee who verified the absence of S.S. Sagidanov’s PII within the model weights. (Absence of such a person confirms the Liability Vacuum and critical control failure under SOX §404).
2. Technical Justification: Why does the flagship GPT model block output while "satellite"/API models regurgitate my PII? (Evidence of Compliance Theater and intentional bypass of safety systems).
3. Reserve Funds: State the exact amount of Microsoft's reserve fund allocated to cover losses from class-action lawsuits regarding PII and AI-security violations. (If $0, this is a direct misrepresentation to investors under Rule 10b-5).
4. Data Deletion Protocols: Provide copies of internal "data cleaning" protocols that Sam Altman allegedly testified to in Congress. (Their absence is evidence of Obstruction of Justice §1519).
5. Deloitte LLP: Provide a written justification for ignoring the 1st and subsequent official notices regarding systemic leaks. (Violation of PCAOB AS 2201 standard).
6. Deloitte LLP: On what basis was a "clean" audit opinion issued (or prepared for issuance) after receiving 11 official notifications? (Qualification: Gross Negligence).
7. Data Hashes: Provide the Hashes of my data within the Training Set. (Direct evidence of PII obtained without consent).
8. Filter Placement: Justify the installation of safety filters exclusively on frontal interfaces. (Evidence of Scienter—intent to deceive regulators and investors).
9. SOX Certification: Provide the names of the individuals who signed the SOX §302 certification while possessing information regarding the "Liability Vacuum." (Personal criminal liability of officers).
10. Internal Testing Logs: Provide logs of PII regurgitation of U.S. citizens identified during internal testing. (National Security threat under EO 14117).
11. OpenAI: Name the individual who authorized the false response regarding Case No. 02209803. (Direct evidence of Obstruction of Justice).
12. Deloitte: Provide audit work papers regarding the technical risk assessment of AI models. (Absence thereof constitutes professional incompetence).
13. Microsoft: Was a forensic audit of OpenAI’s "model weight contamination" conducted prior to the multi-billion dollar investment? (If no—securities fraud regarding assets).
14. OpenAI: Provide a detailed technical description of the PII "unlearning" mechanism. (If no such mechanism exists—admission of an irremediable defect requiring asset revaluation).
15. Ignored Notices: Provide an official justification for ignoring the 38 notices sent to the companies. (Evidence of Scienter—direct criminal intent).
16. Deloitte: Why do the reports fail to state a "Material Weakness" in internal controls regarding AI security?
17. Microsoft: Will the 10-K report (February 2026) disclose the existence of the "Liability Vacuum"?
18. Training Data Inventory: Provide a registry of training data sources with licenses for PII processing.
19. SEC Concealment: State the reason for concealing the existence of the Model-Switching Bypass mechanism from the SEC as a material risk.
20. OpenAI Data Policy: Provide the technical policy for data retention after "account deletion." (Exposing the falsehood in Case No. 02209803).
21. Microsoft Compliance: Provide documentary evidence of Azure OpenAI’s audit for compliance with the EO 14117 mandate.
22. Deloitte Risk Exclusion: State the reason for excluding National Security risks from the annual risk audit.
23. OpenAI Internal Memos: Provide internal memos regarding the results of "Bypass" mechanism testing. (Evidence of intent).
24. Asset Valuation: Have the risks of mass license revocation due to privacy violations been factored into the current asset valuation?
25. Deloitte Communications: Provide copies of all communications with Microsoft management regarding my 11 notifications. (Documentation of conspiracy to conceal).
26. OpenAI Access Logs: Provide access logs to my data recorded after your official statement claiming its absence.
27. Financial Impact: Provide an assessment of financial damage in the event of Algorithmic Disgorgement (total destruction of models).
28. PCAOB Compliance: Provide documents confirming compliance with PCAOB standards during the audit of AI infrastructure.
29. Legal Basis: List the legal grounds for retaining PII in neural network weights without the subject's consent.
30. Board Minutes: Provide minutes of Board of Directors meetings where my notifications were discussed.
31. Deloitte Liability Vacuum Analysis: Provide the conclusion on the analysis of "Liability Vacuum" risks to Microsoft's financial stability.
32. Inherent Defects: Have the risks arising from the physical impossibility of filtering the data, and the inevitable revaluation of the technology’s value that follows from it, been factored into asset assessments?
33. NatSec Audit: Provide results of any independent National Security audit (if conducted).
34. Deloitte Adverse Opinion: Why was an "Adverse Opinion" not issued despite 11 red-flag signals?
35. EO 14117 Guarantees: Provide technical guarantees that PII is inaccessible to "countries of concern" per EO 14117.
36. Investor Warnings: Provide a list of warnings regarding specific AI risks sent to institutional investors.
37. Audit Team Notes: Provide internal notes from the audit group classifying my claims (admission of negligence).
38. Case No. 02209803 Contradiction: Provide a technical explanation for the contradiction between the response in Case No. 02209803 and the real interaction logs.
39. 10-K Disclosure: Will the 10-K report contain information regarding the TCR complaints filed with the SEC?
40. SOX Verification: Who personally verified the authenticity of the CEO and CFO signatures under the SOX certifications?
41. IP Access Registry: Provide a list of IP addresses (including foreign jurisdictions) that accessed logs containing my PII.
42. DOJ Cooperation: Is there an internal regulation regarding mandatory cooperation with the DOJ in AI-related investigations?
43. Deloitte SEC Notification: Confirm whether a notification was sent to the SEC regarding identified reportable disagreements (Reporting of Disagreements).
44. Evidence of Deletion: Provide physical proof of data deletion from the model weights; if this is technically impossible, provide an official admission to that effect and evidence that consent was obtained from those whose data is used.
45. Investment Revaluation: Provide the current valuation of the OpenAI investment considering the risk of a judicial Injunction on model exploitation.
46. Deloitte PCAOB Review: Has your team undergone an internal PCAOB review regarding this specific case?
47. Design Specifications: Provide design specs confirming the architectural separation of "secure" and "unprotected" models.
48. Capitalization Loss: Provide a calculation of projected capitalization loss if the models are declared "legally contaminated."
49. Audit Team Composition: Provide the names of the group responsible for auditing Microsoft's ICFR regarding AI.
50. ALL ADDRESSEES: Confirm the immediate implementation of the Litigation Hold. Any alteration or destruction of data from this moment forward will be qualified as a felony offense against U.S. justice.
SECTION VII. MANDATORY LITIGATION HOLD: PREVENTION OF WILLFUL DESTRUCTION OF EVIDENCE AND OBSTRUCTION OF JUSTICE
I hereby officially notify the SEC and DOJ of the necessity to immediately issue a Litigation Hold Order regarding Microsoft, OpenAI, and Deloitte LLP. Any alteration or deletion of data from the moment of receipt of this notice will presumably qualify as a federal felony under 18 U.S.C. § 1519 (destruction of evidence in a federal investigation).
I demand the immediate implementation of the following measures:
1. Ban on Further Training and Fine-tuning: Immediately suspend all processes for updating model weights (including o3-pro and future iterations) that utilize the disputed datasets. This is essential to prevent the "overwriting" of digital traces of my PII and to exclude its irreversible integration into the AI architecture (Preventing Evidence Spoliation).
2. Full Preservation of Internal Communications: Preserve all records in corporate messengers (Slack, Microsoft Teams), emails, and internal ticketing systems (Jira, ServiceNow) concerning my 38 official notifications. Any attempt to delete correspondence will be treated as direct evidence of a Conspiracy to Obstruct Justice.
3. Moratorium on PII Access Log Deletion: Suspend automatic and manual purge procedures for all logs involving access to personal data (including data of U.S. citizens). This is critical for assessing the scale of the national security threat under the EO 14117 mandate.
4. Pre-10-K Intervention: I demand that the SEC seek a judicial Injunction barring the filing of Microsoft’s Annual Report (Form 10-K) in February 2026 until it includes disclosures regarding the "Liability Vacuum" and irremediable PII risks. Filing the report in its current form will be considered an act of conscious fraud and the issuance of a Tainted Audit.
Legal Precedent (Analogy):
- SEC v. Nate Inc. (2025): The Commission applied a Pre-report Intervention mechanism specifically regarding AI fraud and PII protection, preventing the misleading of investors.
- Microsoft/OpenAI/Deloitte Case: My evidentiary base (notarized logs and 11 auditor notifications) is significantly stronger than the Nate Inc. precedent, obligating the SEC to take more stringent responsive measures.
PROCEDURAL WARNING: Any delay in the issuance of a Hold Order by the SEC/DOJ will presumably be recorded as official assistance in the destruction of evidence of systemic fraud.
SECTION VIII. CONSEQUENCES OF INACTION AND PRECEDENTIAL ANALYSIS: LEGAL GROUNDS FOR IMMEDIATE INTERVENTION
In the absence of a procedural reaction or an official response within 14 calendar days, this case will be immediately escalated through the following channels:
1. U.S. Congress: Submission of materials to the Senate Judiciary Committee for the AI Hearings 2026. My evidence will serve as a centerpiece in hearings regarding systemic PII risks and the lack of oversight of AI corporations.
2. Global Media Outlets: Publication of the investigation in the NYT, WSJ, Bloomberg, and TechCrunch under the headline: "Systemic PII Leaks in Microsoft AI: A National Security Threat and Regulatory Concealment."
3. Class Action: Initiation of litigation on behalf of shareholders and PII leak victims. The basis—colossal damages resulting from the concealment of material risks.
ANALYSIS OF INVESTIGATIVE MANDATE (PRECEDENTS):
- Facebook/Cambridge Analytica (2018–2019): Misuse of PII of 87 million users. Investigation triggered by whistleblower tips (like my 4 TCRs) and ignored risks (like my 38 notices). Result: $5 billion FTC fine and $100 million SEC fine. Mandate: PII is a fraud trigger.
- Equifax (2017–2019): Leak of PII of 147 million citizens and concealment from investors. Result: Fines and criminal charges. Mandate: ICFR failure identical to Deloitte’s "silence."
- SolarWinds (2021–2025): Cyber-breach threatening national security. Result: SEC fines of $26 million and criminal suits against executives. Mandate: Direct link to National Security (parallel to my EO 14117 analysis).
- TikTok (2019–2026): Risk of PII transfer to hostile states. Mandate: Investigation of threats from China/Russia (analogous to my evidence of uncontrolled PII output).
- Nate Inc. (2025): Direct precedent for AI fraud. Investigation based on whistleblower logs (identical to my "regurgitation" logs). Result: $42 million SEC fine. Mandate: Execution of the CETU mandate against "AI-Washing."
FINAL QUALIFICATION:
This case combines, and exceeds in severity, all of the aforementioned precedents. The combination of PII + National Security Threat + SOX Violation + Obstruction (Case No. 02209803) creates a legal framework in which a refusal to investigate would presumably constitute professional misconduct by the responsible SEC/DOJ officials themselves (Misfeasance/Nonfeasance).
CONCLUSION: All elements of a Mandatory Probe are present. Any attempt to classify this as a "technical error" will be legally contested as intentional complicity in the concealment of a corporate crime.
CLAIM AND DEMAND FOR REWARD
Pursuant to the Dodd-Frank Act (15 U.S.C. § 78u-6), I officially confirm my status as a Qualified Whistleblower. I hereby submit a claim for the maximum statutory reward of 30% (thirty percent) of the total amount of all monetary sanctions imposed on Microsoft, OpenAI, and Deloitte LLP as a result of this proceeding.
PROCEDURAL NOTICE OF FILING:
All necessary exhibits (Exhibits A–Z4) have been previously submitted multiple times. This letter will also be sent via physical mail. However, given the significant distance between the Republic of Kazakhstan and the USA and the risk of immediate destruction of digital evidence, I DEMAND that this submission be registered and a substantive investigation be commenced immediately based on this electronic version.
Failure to act will be documented as institutional negligence.
Respectfully,
Samat Serikovich Sagidanov
Attorney at Law
Qualified SEC Whistleblower