The Bottom Line
- Human-Centric Justice Reinforced: For businesses in litigation, the new rules ensure that decisions will continue to rest on human legal interpretation and reasoning, maintaining legal certainty and reducing the risk of unpredictable, purely data-driven outcomes.
- Litigation Strategy Unchanged: Legal teams should note that judges are explicitly forbidden from using AI for core tasks like weighing evidence, assessing facts, or profiling individuals. Arguments must continue to be framed for human jurists, not algorithmic analysis.
- A Controlled Market for Legal Tech: The Spanish judiciary will operate a “walled garden,” permitting judges to use only government-approved and vetted AI tools. This signals a cautious, security-focused approach to technological adoption and creates a highly regulated market for AI vendors targeting the justice system.
The Details
Spain’s General Council of the Judiciary (CGPJ), the governing body of the country’s judges, has issued a landmark instruction setting clear guardrails for the use of artificial intelligence in the judicial process. The instruction aims to create a consistent and secure framework, ensuring that AI serves as an efficiency tool without compromising the fundamental principles of justice. Its core principle is non-negotiable: AI is an assistant, never the decision-maker. Judges must maintain “real, conscious, and effective human control” at all times, retaining exclusive responsibility for assessing facts, weighing evidence, and interpreting the law.
The guidelines draw a sharp line between permitted support tasks and prohibited core judicial functions. Judges are encouraged to use approved AI systems for tasks like legal research, organizing case files, and creating internal summaries or working drafts. However, they are strictly forbidden from using AI to automate or delegate final decisions. The rules also explicitly prohibit the use of AI for profiling individuals, predicting behavior, or classifying subjects, directly addressing concerns about algorithmic bias and its potential to infringe on fundamental rights. Even when using AI to generate a preliminary draft of a ruling, the judge must conduct a “complete and critical personal review,” making the final text entirely their own.
In a move to ensure security and prevent the use of unreliable public tools, the instruction mandates that judges may only use AI applications provided and vetted by the competent justice administrations or the CGPJ itself. This “walled-garden” approach prevents the use of insecure, external generative AI models that could compromise confidential case data. The regulation aligns with the broader European legal context, including the EU’s AI Act, by treating judicial systems as a high-risk area where AI deployment must be carefully managed to protect judicial independence, fairness, and the rights of all parties.
Source
Consejo General del Poder Judicial
