Monday, March 16, 2026

Your AI Signed a Billion-Euro Blunder. Are You Liable? Dutch Court Weighs In.

The Bottom Line

  • Your AI, Your Responsibility (Usually): A landmark ruling confirms that companies using AI to generate contracts are generally liable for the AI's output, treating the system as an extension of the company's will.
  • A “Too Good to Be True” Exception: The court carved out a crucial exception, however. If an AI-generated term is so commercially unreasonable that the other party should have recognized it as a probable error, that specific clause can be voided.
  • Human Oversight is Now Non-Negotiable: The decision effectively mandates a “human-in-the-loop” approach. Relying solely on automated contract generation without robust human review is now a high-stakes legal gamble.

The Details

In a case with significant implications for the future of automated business transactions, the District Court of The Hague was asked to rule on the validity of a contract generated almost entirely by an AI platform. A Dutch tech scale-up used its proprietary AI to draft a complex supply agreement with a German manufacturing giant. The AI, acting on a junior employee’s prompt, included an unlimited liability clause—a term that was commercially disastrous for the scale-up and far outside industry norms. When a dispute arose, the German firm sought to enforce this clause, which would have bankrupted the smaller company. The court was forced to decide: was this a binding agreement or a “digital mistake”?

The court’s reasoning skillfully balanced modern technology with established legal principles. It held that, in principle, a company that deploys an automated system is bound by its actions. The court viewed the AI not as a third party, but as a tool used to express the company’s intent, similar to using an email template or a pre-written form. However, the court pivoted on the doctrine of “mistake” (dwaling) and the counterparty’s duty of good faith. It found that the unlimited liability clause was so extreme and so disconnected from prior negotiations that the German manufacturer had a duty to inquire whether the term was intentional or a glaring error. Its failure to do so meant it could not reasonably rely on the enforceability of that specific, anomalous clause.

This judgment serves as a critical warning for both legal and executive teams. For companies embracing AI for efficiency, it underscores that automation does not eliminate accountability. You are responsible for the digital agents acting on your behalf, and you must implement stringent oversight and review protocols. For companies on the receiving end, the ruling is a reminder that you cannot simply enforce a windfall generated by a counterparty’s obvious technological error. If a contract term seems far too good to be true, it may well be, and a court may prevent you from capitalizing on it. The era of blindly trusting—or exploiting—the output of an algorithm is over.

Source

District Court of The Hague
