The Bottom Line
- Judges Remain Fully Liable: Spain’s new rules mandate strict human oversight for any AI-assisted judicial work. This ensures AI remains a tool, not a decision-maker, and keeps legal responsibility resting clearly and solely with the judge.
- A ‘Walled Garden’ for Legal Tech: Only government-approved and judiciary-vetted AI tools are permitted. This rule prohibits judges from using public generative AI for official duties, significantly enhancing data security and creating a controlled, high-stakes market for legal tech vendors.
- No ‘Robo-Judges’ on the Horizon: The guidelines explicitly ban the use of AI for core judicial functions like assessing evidence, interpreting law, or profiling individuals. For businesses in litigation, this means final court rulings will remain human-centric, mitigating the risk of opaque algorithmic bias.
The Details
In a decisive move to regulate emerging technology in its courts, Spain’s General Council of the Judiciary (CGPJ) has issued formal instructions for all judges and magistrates on using Artificial Intelligence. The guidance aims to create a clear and consistent framework, aligning with the principles of the new EU AI Act. The core objective is to harness the efficiency of AI while establishing strong safeguards to protect judicial independence and the fundamental rights of all parties in legal proceedings. This isn’t a ban, but a carefully constructed set of rules of engagement.
The central pillar of the new framework is the principle of non-negotiable human control. The instructions state that judges must maintain “real, conscious, and effective” oversight of any AI system they use. AI tools can assist, but they can never substitute for the judge or operate autonomously in making judicial decisions. This means AI cannot be used to weigh evidence, interpret the law, or deliver a final verdict. The judge retains exclusive and full responsibility for every word of a ruling, regardless of whether a draft was partially generated by an AI assistant. This emphasis on accountability is designed to ensure that judicial discretion and critical thinking remain at the heart of the legal process.
The guidance draws a sharp line between permitted and prohibited uses. Judges are allowed to use approved AI tools for tasks such as legal research, analyzing case files, and preparing internal summaries or non-decisive working drafts. However, the use of these tools is strictly forbidden for profiling individuals, predicting behavior, or automatically generating rulings. Even when using an approved AI to draft a preliminary version of a court order, the judge is required to perform a “complete and critical personal review and validation.” Furthermore, judges are explicitly restricted to using only AI applications provided by the competent justice administrations or the CGPJ itself. This policy effectively bars the use of commercial, public-facing AI models for court work, thereby protecting sensitive case data.
Source
Consejo General del Poder Judicial
