The Bottom Line
- Judicial AI is Restricted to Vetted Tools: Businesses and legal teams must be aware that Spanish judges are now prohibited from using public AI tools (like ChatGPT) for judicial work. They are limited to officially sanctioned systems, raising the bar for data security and confidentiality in legal proceedings.
- Predictability Remains Paramount: For CEOs, this instruction provides certainty. Judicial decisions will not be delegated to “black box” algorithms. The mandate ensures that human judges remain the ultimate arbiters of fact and law, preserving the predictability of the legal system.
- A Blueprint for Corporate AI Governance: The principles set by the judiciary—emphasizing human oversight, accountability, and bias prevention—serve as a clear regulatory signal. Companies developing or deploying AI should view this as a model for their own internal governance and future compliance requirements.
The Details
Spain’s General Council of the Judiciary (CGPJ), the governing body of the country’s judges, has issued a landmark instruction setting clear and strict boundaries for the use of artificial intelligence in judicial activities. The goal is to establish a coherent framework that aligns with national and EU regulations, including the recent EU AI Act. The core principle established is “effective human control.” This means that while AI can be used as an assistive tool, it can never operate autonomously to make judicial decisions, evaluate evidence, or interpret the law. The judge must remain in complete command, ensuring every step is subject to conscious and effective human supervision.
The instruction outlines permitted and prohibited uses with precision. Judges are allowed to use approved AI systems for tasks like legal research, analyzing case files, and creating internal drafts or summaries. However, the list of prohibitions is significant for the business world. AI cannot be used for profiling individuals, predicting behavior, or conducting risk assessments within the judicial process. Critically, judges may only use AI applications provided and vetted by the justice administration or the CGPJ itself. This measure is designed to prevent the use of insecure, public-facing generative AI models that could expose sensitive case data and compromise confidentiality.
Ultimately, accountability remains squarely with the human judge. The instruction makes it clear that AI cannot substitute a judge or dilute their responsibility. Even when using an AI-generated draft of a ruling, the judge must perform a “critical, complete, and personal review” before any content is adopted. This preserves judicial independence and is a direct safeguard against algorithmic biases that could lead to discriminatory or arbitrary outcomes. For businesses operating in Spain, the instruction reinforces that while the courts are modernizing, the fundamental principles of human-led justice are non-negotiable.
Source
Consejo General del Poder Judicial (CGPJ)
