THE BOTTOM LINE
- Enhanced Legal Predictability: By standardizing the use of AI tools, these new rules aim for a more consistent application of technology across Spanish courts, reducing the risk of “black box” justice and providing a more stable legal environment for businesses.
- Human Oversight is Non-Negotiable: The guidelines firmly place judges in control. AI is strictly a support tool; all final decisions, evidence assessments, and legal interpretations remain the exclusive responsibility of human judges, safeguarding due process for all litigants.
- Strengthened Data Security: Companies involved in litigation can be assured that sensitive information is protected. Judges are restricted to using only officially sanctioned AI systems and are explicitly forbidden from using them to process specially protected data or for profiling purposes.
THE DETAILS
Spain’s top judicial body, the General Council of the Judiciary (CGPJ), has taken a decisive step in regulating the use of artificial intelligence within Spanish courts. By issuing a formal instruction to all judges and magistrates, the CGPJ aims to create a clear, uniform, and coherent framework for leveraging AI. This move proactively addresses the potential pitfalls of generative AI, ensuring its use aligns with both national regulations and the broader European legal framework, including the new EU AI Act, while respecting the fundamental principle of judicial independence.
At the heart of the new instruction is the principle of “effective human control.” The guidelines make it unequivocally clear that AI can assist, but never replace, a judge. While approved AI tools can be used for tasks like conducting legal research, analyzing documents, or preparing internal drafts, they are prohibited from operating autonomously in decision-making, weighing evidence, or applying the law. The ultimate accountability for any judgment rests solely with the presiding judge, ensuring that judicial responsibility is never delegated to an algorithm. This “human-in-the-loop” requirement is a critical safeguard against automated bias.
The CGPJ has drawn bright lines on permissible conduct. Judges may only use AI applications provided and vetted by the competent justice administrations or the CGPJ itself, effectively banning the use of public generative AI tools for official work. Furthermore, the instruction explicitly forbids using AI for profiling individuals, predicting behavior, or conducting risk assessments—activities that could introduce bias and undermine fundamental rights. These rules ensure that AI is adopted as a responsible productivity tool that supports the judicial function, not as a machine that performs it.
SOURCE: Consejo General del Poder Judicial (CGPJ)
