The Bottom Line
- Accountability is Non-Negotiable: Spanish judges remain exclusively responsible for all rulings. AI can be used as a drafting assistant, but the final decision, reasoning, and liability rest entirely with the human judge.
- Walled Garden for AI Tools: Judges are prohibited from using public AI platforms. They may only use government-approved, vetted systems, ensuring sensitive case data is not exposed to unsecured, third-party models.
- AI is for Assistance, Not Judgment: The new rules strictly forbid using AI for core judicial tasks such as weighing evidence, assessing facts, or interpreting the law. AI also cannot be used to profile individuals or predict their behavior.
The Details
Spain’s General Council of the Judiciary (CGPJ) has taken a decisive step in regulating the use of artificial intelligence within its courts. By issuing a formal instruction for all judges and magistrates, the CGPJ aims to create a clear and consistent framework that leverages AI’s efficiencies while safeguarding fundamental legal principles. This move proactively aligns the Spanish judiciary with emerging European standards, including the EU’s AI Act, ensuring that technological adoption does not compromise judicial independence or the rights of individuals and corporations involved in legal proceedings. The core objective is to provide a reliable environment where AI acts as a tool, not an autonomous decision-maker.
The new guidelines rest on a foundational principle: effective human control. While AI systems can be used for tasks like legal research or summarizing case files, they cannot operate independently. The instruction explicitly states that AI cannot replace a judge in the essential functions of weighing evidence, assessing facts, or applying the law. This no-substitution principle is reinforced by rules on judicial responsibility, ensuring that the judge, not the algorithm, is accountable for every word of a final ruling. The framework also places strong emphasis on preventing algorithmic bias and on protecting the confidentiality and security of all judicial information.
In practice, this gives judges a clear set of "dos and don'ts." They may use officially provided AI tools to search for legal precedents and to generate internal working drafts or summaries. However, any AI-generated text considered for inclusion in a judicial resolution must undergo the judge's complete and critical personal review and validation. The use of AI is strictly forbidden for profiling individuals, predicting outcomes, or processing specially protected personal data. Most importantly for businesses, judges are restricted to AI applications provided and vetted by the justice administration, which bars the use of public generative AI models and protects the integrity of sensitive corporate data presented in court.
Source
Consejo General del Poder Judicial (CGPJ)
