The CEO Who Built the Case Against Himself With ChatGPT

Most people treat their AI conversations like private thoughts. Type a question into ChatGPT or Claude, get an answer, move on. It feels like thinking out loud.

It is not. Every one of those conversations is a written record, and in the right circumstances, the other side in a lawsuit can demand to see it.

A recent Delaware court case makes this painfully clear.

What Happened at Krafton

In 2021, Krafton, the South Korean gaming conglomerate, acquired Unknown Worlds Entertainment, the studio behind the hit survival game Subnautica, for $500 million upfront plus up to $250 million in contingent earnout payments tied to post-closing revenue. As part of the deal, the studio’s co-founders and its CEO, Ted Gill, were promised operational control of the studio during the earnout period.

By mid-2025, Subnautica 2 was on track for an early access launch, and Krafton’s own internal projections showed the game was likely to generate revenue well above the earnout threshold, with projected payouts ranging from roughly $192 million to $242 million. Krafton CEO Changhan Kim felt the company had agreed to a “bad deal” and feared the earnout would make him look like a “pushover.”

Krafton’s legal team cautioned Kim that a termination-for-cause strategy would not eliminate the earnout obligation and would expose Krafton to legal and reputational risk. Krafton’s head of corporate development, Park, reinforced that warning via Slack, noting that such a move would still likely leave the earnout payable while creating “lawsuit and reputation risk.”

Kim ignored the warnings. Instead, he turned to ChatGPT. The chatbot told him the earnout would be “difficult to cancel.” Kim complained to colleagues that the deal was “a contract under which we can only be dragged around.” But rather than accept that reality, he pressed the AI for a workaround.

At ChatGPT’s suggestion, Kim formed an internal task force dubbed “Project X,” whose mandate was to renegotiate the earnout or execute a full “takeover” of Unknown Worlds. The AI furnished him with a detailed “Response Strategy to a ‘No-Deal’ Scenario” that included a pressure-and-leverage package, strategic talking points, instructions to lock down the studio’s publishing platform, directions to prepare legal-defense materials, and a “two-handed strategy” combining hardball pressure with softer retention incentives.

Krafton followed most of those recommendations. It locked Unknown Worlds out of the Steam publishing platform, cutting off the studio’s ability to release Subnautica 2. It posted a message, drafted overnight without the studio’s involvement, on the studio’s website, falsely suggesting that the co-founders were considering re-engaging with the project. When earnout negotiations stalled, Krafton terminated all three key employees, citing their intention to proceed with a premature game release. The court later found that justification was pretextual.

Kim admitted at trial that he had deleted certain relevant ChatGPT logs. The opinion does not specify exactly when the deletion occurred.

Even so, the court had extensive ChatGPT-related evidence in the record: shared strategy outputs, Project X documents derived from the AI sessions, and trial testimony about the chats. The judge relied on this material at length in finding that Krafton’s stated grounds for termination were manufactured after the fact and that Kim had acted in bad faith. In a sweeping remedial order, the court reinstated Ted Gill as CEO with full operational authority and equitably extended the earnout measurement period by 258 days.

Why Your AI Chats Are Probably Not Protected

Attorney-client privilege protects confidential communications with your lawyer for the purpose of obtaining legal advice. ChatGPT is not a lawyer. Claude is not a lawyer. If you independently ask an AI chatbot whether you can fire someone, restructure a deal, or avoid a contractual obligation, that conversation will almost certainly not be privileged. There are narrow situations where a third-party tool can be considered part of the attorney-client relationship, but a CEO solo-brainstorming with a consumer AI product after rejecting actual legal counsel is not one of them.

The “work product” doctrine protects materials prepared in anticipation of litigation by or for a party or its representative, which can include attorneys, consultants, insurers, and agents. A CEO independently strategizing with a public AI tool, outside any litigation-preparation workflow and contrary to counsel’s advice, is unlikely to qualify.

The practical takeaway: AI chats are generally discoverable if they are relevant to a dispute, not subject to a valid privilege, proportional to the needs of the case, and within a party’s possession or control. In most business contexts, they will be treated like any other electronic record: emails, text messages, Slack threads, or memos to file. For most executives in most situations, that means they are fair game.

It Gets Worse If You Delete Them

Kim’s deletion of ChatGPT logs backfired in two ways.

First, courts can impose serious consequences when a party fails to preserve relevant electronic records after litigation is reasonably anticipated. Under the federal rules governing electronically stored information, if the court finds that the information should have been preserved, was lost because reasonable steps were not taken, and cannot be restored or replaced, it can order measures to cure the prejudice. The most severe remedy, an adverse-inference instruction telling the jury to assume the lost material was harmful, requires a further finding that the party acted with intent to deprive the other side of the evidence.

Second, and more practically, the deletion reinforced the court’s narrative of a CEO who understood his conduct was problematic and tried to cover his tracks. Even without a formal spoliation ruling on that specific point, the optics were devastating.

If you are involved in, or can reasonably anticipate, any kind of legal dispute, your obligation to preserve relevant documents extends to your AI chat histories. Deleting them does not make them go away. It makes everything worse.

The Privacy Illusion

Part of the problem is that AI chat interfaces are designed to feel intimate. There is no audience. No CC line. No indication that anyone else will ever read what you type. The conversational format encourages candor in a way that email or formal memos do not.

But that perceived privacy is an illusion. Consider what is actually happening:

The AI provider may retain and access your conversations. The specifics depend on the platform and plan tier. OpenAI’s consumer services may use your content for model training unless you opt out; its business and enterprise offerings do not train on customer inputs by default. Anthropic gives consumer users a choice about model training, and its commercial plans (Team and Enterprise) operate under separate terms. The bottom line is that default consumer settings on most platforms do not guarantee the kind of confidentiality you would need for sensitive business or legal discussions. If you are unsure what your plan covers, assume the answer is “not enough.”

Opposing counsel knows to look for AI chats. Discovery requests in litigation are rapidly evolving to include AI interaction histories. If you are asked to produce “all communications relating to [topic]” and you used ChatGPT to think through that topic, those logs are responsive. Failing to produce them is a discovery violation.

AI chats are often more damaging than emails. People tend to be more unguarded with AI than they are in any other written format. The conversational interface encourages the kind of raw, strategic thinking that would be devastating on a courtroom screen. Kim’s ChatGPT interactions did not just show what Krafton did. They showed exactly what Kim was thinking and planning, in his own words, step by step.

What This Means for Executives and Business Leaders

The Krafton case is the first prominent example of AI chat evidence playing a central role in a major corporate dispute. It will not be the last. Here is what you should take from it:

  1. Treat every AI conversation as a potential exhibit. Before you type a prompt, ask yourself: would I be comfortable with this appearing on a screen in front of a judge, a jury, or a reporter? If the answer is no, do not type it.
  2. Do not use AI to work around legal advice you have already received. This is what made the Krafton facts so damaging. Kim did not just use AI for general strategy. He used it to circumvent specific warnings from his own legal team. That sequence turned a business decision into evidence of willful bad faith.
  3. Talk to your lawyer before using AI for anything legally sensitive. If the topic involves contracts, employment decisions, regulatory compliance, intellectual property, or potential disputes, loop in your attorney first. Communications made through or at the direction of your lawyer have a much stronger chance of being protected. Communications you have on your own with a chatbot generally do not.
  4. Update your company’s document retention policies. If your organization has a litigation hold or document preservation policy (and it should), make sure it explicitly covers AI chat histories across all platforms, including ChatGPT, Claude, Copilot, Gemini, and any specialized tools your teams may be using.
  5. Use enterprise-grade AI tools for sensitive business operations. If your company is going to use AI for strategic planning, make sure you are on plans with appropriate confidentiality protections and data retention controls. Default consumer accounts on most platforms do not provide the safeguards you need.

AI tools are extraordinarily useful. They can help you think through complex problems, draft communications, analyze data, and pressure-test strategies. None of that changes because of the Krafton ruling.

What changes is the recognition that AI chats are not a safe space for your most sensitive thinking. They are written records that can and will be used in legal proceedings. The CEO of a multibillion-dollar gaming company learned this in the most public and expensive way possible.

You do not have to.