THE BOTTOM LINE
- Enhanced Predictability: For companies litigating in Spain, these new guidelines provide a clear framework for how AI will be used by the judiciary. This reduces uncertainty and ensures that court processes remain consistent, transparent, and grounded in human oversight.
- Absolute Judicial Accountability: The rules explicitly state that judges retain exclusive responsibility for all decisions. This means businesses can be confident that a human legal expert, not an algorithm, is making the final call, mitigating the risks associated with automated or “black box” judicial rulings.
- Strict Data Security & Bias Controls: The directive prohibits judges from using unapproved public AI tools and from using AI for profiling individuals or analyzing sensitive data. This protects confidential corporate information and mitigates the risk of biased outcomes, ensuring a fairer legal process.
THE DETAILS
Spain’s General Council of the Judiciary (CGPJ) has proactively addressed the rise of generative AI by issuing a formal instruction for all judges and magistrates. The move aims to establish a clear and uniform framework for using AI in judicial activities, aligning the Spanish courts with emerging national and EU regulations. The core purpose is to harness the efficiency of AI as a support tool while erecting strong guardrails to protect fundamental legal principles like judicial independence, due process, and the confidentiality of case information. This prevents an ad-hoc approach and ensures that the integration of technology does not compromise the integrity of the justice system.
The cornerstone of the new directive is the principle of “effective human control.” The guidelines make it unequivocally clear that AI systems are to be used as assistants, not substitutes for judicial reasoning. Judges are expressly forbidden from delegating core jurisdictional functions—such as evaluating facts, assessing evidence, interpreting the law, or making final decisions—to any AI. Furthermore, judges may only use AI applications that have been officially provided and vetted by the justice administration and the CGPJ. This “approved list” approach is a critical security measure, preventing the use of public AI models that could expose sensitive case data to breaches or unauthorized disclosure.
In practice, the rules draw a sharp line between permitted and prohibited uses. Judges are encouraged to use approved AI tools for tasks like legal research, retrieving case precedents, and structuring information to better understand complex matters. They can even use AI to generate internal drafts or summaries as working documents. However, the directive strictly forbids the automated generation of judicial decisions or the direct copying of AI-generated text into a final ruling without the judge’s “critical, complete, and personal” review and validation. Critically, any use of AI for profiling people, predicting behavior, or conducting risk assessments is banned, ensuring that judicial discretion remains a fundamentally human exercise.
SOURCE
Source: Consejo General del Poder Judicial
