THE BOTTOM LINE
- Human Judges Remain in Full Control: The new guidelines firmly establish AI as an assistant, not a decision-maker. This ensures that legal strategies must continue to persuade a human, preserving the core principles of judicial reasoning and advocacy.
- A “Walled Garden” for Judicial AI: Judges are restricted to using only officially approved and vetted AI systems. For businesses, this minimizes the risk of sensitive case data being exposed through insecure public AI tools and ensures a baseline level of reliability.
- Clear “No-Go” Zones for Algorithms: AI is strictly forbidden for core judicial functions like weighing evidence, making final rulings, interpreting the law, or profiling individuals. This protects due process and guarantees that high-stakes business outcomes will not be delegated to an automated “black box.”
THE DETAILS
Spain’s General Council of the Judiciary (CGPJ), the governing body of the country’s judges, has approved a landmark set of instructions on the use of artificial intelligence in the judicial process. This move aims to create a clear and consistent framework for judges and magistrates, balancing the efficiency gains of AI with the non-negotiable protection of fundamental rights and judicial independence. This proactive measure aligns Spain with broader European regulations, including the EU AI Act, establishing clear guardrails as the technology becomes more integrated into the legal landscape.
The cornerstone of the new rules is the principle of “effective human control.” The guidance is unequivocal: AI systems cannot operate autonomously in a judicial context. Every output, from a case summary to a draft document, must be subject to the direct, conscious, and effective control of a human judge. The instructions prohibit AI from replacing a judge in any capacity, emphasizing that ultimate responsibility for every decision remains exclusively with the human judicial officer. This is further reinforced by principles mandating the preservation of judicial independence, data confidentiality, and the active prevention of algorithmic bias.
In practice, the guidelines draw a sharp line between permitted and prohibited uses. Judges may use approved AI tools for tasks like legal research, organizing case files, and creating internal working drafts or summaries. However, they are strictly forbidden from using AI for the substantive work of judging: weighing evidence, interpreting law, or making final decisions. Furthermore, AI cannot be used to process highly sensitive data or to profile individuals. Crucially, judges may only use AI applications provided and validated by the justice administrations or the CGPJ itself, preventing the use of public, insecure generative AI tools in judicial work.
SOURCE
Source: Consejo General del Poder Judicial
