The Bottom Line
- Limited Market for Legal Tech: Companies developing AI legal tools must navigate a controlled ecosystem. Spain’s judiciary will only permit the use of government-vetted and supplied AI applications, effectively creating a “walled garden” and a high barrier to entry.
- Certainty in Litigation: For businesses involved in legal disputes, this instruction guarantees that a human judge remains the final arbiter. It mitigates the risk of automated or “black box” judgments, ensuring judicial decisions are based on personal review and reasoning, not just algorithmic output.
- A Cautious Regulatory Signal: This move aligns Spain with the broader EU approach of managing AI’s risks, particularly in high-stakes areas like the justice system. It signals a commitment to adopting technology pragmatically, prioritizing fundamental rights, accountability, and the prevention of bias over unchecked efficiency gains.
The Details
Spain’s General Council of the Judiciary (CGPJ) has released a landmark instruction setting clear guardrails for the use of artificial intelligence by judges and magistrates. The goal is to establish a uniform and coherent framework for leveraging AI tools, ensuring that their application is consistent with both Spanish and European law, including the principles of the EU AI Act. The instruction acknowledges the potential benefits of AI in the administration of justice but firmly prioritizes the protection of individual rights and liberties against the pitfalls of automated systems, especially generative AI.
The cornerstone of the new framework is the principle of “effective human control.” The instruction is unequivocal: AI is to be used as an assistive tool, never as a substitute for a judge. The ultimate responsibility for any judicial decision remains exclusively with the human magistrate. AI systems are expressly forbidden from operating autonomously to make rulings, assess facts or evidence, or interpret the law. This core tenet is reinforced by several other key principles, including the non-replacement of judges, the preservation of judicial independence, the protection of confidential data, and an explicit directive to prevent and mitigate algorithmic bias.
In practice, the instruction places strict limits on what tools can be used and how. Judges are only permitted to use AI applications provided and approved by the competent justice administrations or the CGPJ itself. Permitted uses include legal research, analysis of case files, and the creation of internal summaries or outlines. While judges may use approved AI to generate preliminary drafts of judicial resolutions, these outputs are considered mere support instruments. Every word must undergo a “critical, complete, and personal” review and validation by the judge before it can be incorporated into an official ruling. The use of AI for profiling individuals, predicting behavior, or processing specially protected personal data is strictly prohibited.
Source
Consejo General del Poder Judicial
