The Bottom Line
- Enhanced Legal Certainty: Businesses can be confident that judicial decisions will remain human-led. The new guidelines strictly prohibit AI from making autonomous decisions, assessing evidence, or interpreting the law, ensuring predictability in litigation.
- A Regulated Market for Legal Tech: AI vendors, take note: Spanish judges are only permitted to use tools officially provided or approved by the justice administration and the General Council of the Judiciary (CGPJ). This creates a controlled, high-stakes market for compliant AI solutions.
- Stronger Data Protection in Court: The rules explicitly forbid using AI to process highly sensitive personal data or for profiling individuals. This provides a crucial layer of security for corporate and personal data involved in legal proceedings.
The Details
In a significant move to address the rise of artificial intelligence, Spain’s General Council of the Judiciary (CGPJ) has issued a comprehensive set of instructions for judges and magistrates. The goal is not to stifle innovation but to create a clear framework for using AI as a support tool, ensuring its adoption aligns with national and EU regulations, including the recent EU AI Act. The core principle underpinning the entire directive is “effective human control.” The guidelines firmly establish that AI can only serve as an assistant; the judge must always remain in complete control and retain final responsibility for every aspect of the judicial process.
The new rules draw a clear line between permitted and prohibited uses. Judges are encouraged to use approved AI tools for tasks that enhance efficiency, such as legal research, retrieving case precedents, and structuring information. AI may even be used to generate preliminary drafts of judicial resolutions. However, these applications are strictly defined as support functions. The rules explicitly forbid using AI to automate the core duties of a judge, such as weighing evidence, interpreting the law, or making a final ruling. Any AI-generated draft must undergo a “complete and critical personal review” by the judge, who remains exclusively accountable for the final, published decision.
Crucially, the CGPJ has created a “walled garden” for judicial AI. Judges are not permitted to use publicly available generative AI tools for their work. They are restricted to applications that have been vetted and provided either by the competent justice administrations or by the CGPJ itself. This approach is designed to mitigate risks related to data security, confidentiality, and algorithmic bias. Separate prohibitions on using AI to profile individuals, predict behavior, or handle highly sensitive data underscore the judiciary’s commitment to safeguarding fundamental rights as it navigates this new technological era.
Source
Consejo General del Poder Judicial
