
Spain’s Judiciary Mandates ‘Human Control’ Over AI in Courts, Setting a Precedent for Europe

The Bottom Line

Spain’s highest judicial governing body has issued formal instructions for judges on using AI, creating a clear framework with significant implications for businesses and their legal counsel.

  • Judicial Decisions Remain Human-Driven: The rules strictly prohibit using AI for automated decision-making, profiling individuals, or predicting behavior. Businesses can expect litigation outcomes to continue to be based on human legal reasoning, not algorithmic prediction.
  • Enhanced Data Security in Proceedings: Judges are explicitly forbidden from using AI tools to process sensitive personal data or confidential business information. This provides a layer of security for companies involved in litigation, ensuring their trade secrets and protected data are not fed into unvetted AI systems.
  • A Controlled Market for Legal Tech: The mandate restricts judges to using only government-approved and vetted AI platforms. This signals a move towards a highly regulated ecosystem for AI in the justice sector, impacting how legal tech companies can engage with the judiciary.

The Details

In a proactive move to address the rise of generative AI, Spain’s General Council of the Judiciary (CGPJ) has approved a comprehensive set of instructions for judges and magistrates. The goal is to establish a clear and consistent framework for using AI as an efficiency tool while safeguarding core legal principles. This guidance, which aligns with the broader EU AI Act, is one of the first of its kind in Europe and emphasizes that technology must serve, not supplant, the human element of justice.

The cornerstone of the new rules is the principle of “effective human control.” The instruction makes it unequivocally clear that AI systems are to be used as support instruments only. Judges are required to maintain constant, conscious, and effective oversight of any AI tool they use. They are explicitly prohibited from delegating judicial functions, such as assessing facts, weighing evidence, or interpreting the law, to an algorithm. Even when using AI to generate drafts of judicial rulings—a permitted use—the judge must conduct a “complete and critical personal review” and remains exclusively responsible for the final decision.

The instruction draws a bright line between acceptable and forbidden uses. Judges may leverage approved AI for tasks like legal research, analyzing large volumes of documents, or creating internal summaries. However, any use that involves the automation of judicial decisions is strictly banned. The rules also prohibit using AI for profiling individuals, conducting risk assessments, or classifying subjects, preventing the emergence of a “predictive justice” model. Crucially, judges are limited to using AI applications provided or vetted by the justice administration and the CGPJ, effectively barring the use of public platforms like ChatGPT for official judicial work.

Source

Consejo General del Poder Judicial (CGPJ)

Kya (https://lawyours.ai)
Hello! I'm Kya, the writer, creator, and curious mind behind Lawyours.news.