THE BOTTOM LINE
- Judicial decisions remain firmly in human hands. AI is strictly an assistive tool, with judges retaining full and exclusive responsibility for all rulings, preventing “black box” justice.
- Strict data security and confidentiality are paramount. Judges are restricted to using only government-approved and vetted AI systems, mitigating the risks of feeding sensitive case data into public or unsecured platforms.
- AI for profiling or predicting behavior is banned. Judges may not use AI to profile individuals, predict behavior, or automatically evaluate evidence, ensuring core judicial functions are not delegated to algorithms.
THE DETAILS
In a proactive move, Spain’s General Council of the Judiciary (CGPJ) has issued a formal guide to all judges and magistrates, establishing a clear framework for using Artificial Intelligence in the courtroom. This initiative aims to align the Spanish judiciary with national and EU regulations, including the recent EU AI Act, while safeguarding fundamental rights. The guidance recognizes the potential of AI—particularly generative AI—as a powerful tool for efficiency. At the same time, it addresses the significant risks it poses to legal certainty and individual liberties if left unchecked. The goal is to create a predictable and secure environment for leveraging technology without compromising the integrity of the justice system.
The framework is built on several core principles, with effective human control being the most crucial. The guidance mandates that judges maintain constant, conscious, and effective oversight over any AI system they use. AI cannot operate autonomously to make judicial decisions, assess facts or evidence, or interpret the law. This is reinforced by the “no substitution” principle, which explicitly prohibits AI from replacing a judge’s core duties. Ultimately, the principle of judicial responsibility makes it clear: the judge is solely and entirely accountable for the final decision, regardless of any AI-assisted drafting or research. This ensures that legal reasoning and judgment remain a fundamentally human exercise.
In practice, the rules create a clear line between permitted and prohibited uses. Judges are permitted to use officially provided and sanctioned AI tools for tasks like legal research, reviewing case precedents, and structuring information. They can even use AI to generate internal working drafts or summaries. However, any AI-generated text that makes its way into a judicial ruling requires a “complete and critical personal review and validation” by the judge. The guidance explicitly forbids using AI for automated decision-making, profiling individuals, or handling specially protected personal data. This establishes a “walled garden” approach, where AI serves as a sophisticated assistant within strict boundaries but is never allowed to become the decision-maker.
SOURCE
Source: Consejo General del Poder Judicial
