THE BOTTOM LINE
- A “Walled Garden” for Legal Tech: Companies offering AI tools to the Spanish judiciary will face a strict approval process. Only government-sanctioned AI systems will be permitted, creating a controlled market focused on security and reliability.
- AI is an Assistant, Not a Decision-Maker: The new rules firmly position AI as a support tool for tasks like research and drafting. The core judicial functions of weighing evidence, interpreting law, and making final decisions remain exclusively human, ensuring legal certainty for businesses.
- Data Privacy and Bias Prevention are Paramount: The guidelines explicitly prohibit using AI for profiling individuals or handling sensitive personal data. This places a heavy compliance burden on AI developers to demonstrate their tools are unbiased and secure, aligning with the principles of the GDPR and the EU AI Act.
THE DETAILS
Spain’s General Council of the Judiciary (CGPJ) has issued a landmark set of instructions for the nation’s judges and magistrates on the use of artificial intelligence. The goal is a clear, consistent framework that embraces AI’s potential for efficiency without compromising fundamental legal principles. At its core is the principle of “effective human control”: the guidelines state unequivocally that AI tools cannot operate autonomously in judicial decision-making. Judges must maintain conscious, real, and effective control at all times, ensuring that technology serves, rather than supplants, human legal reasoning and judicial independence.
Under these new rules, Spanish judges are not permitted to use just any AI tool available on the market. They are restricted to using only those applications and systems that are officially provided by the competent justice administrations or the CGPJ itself. This “approved list” approach is designed to ensure the quality, security, and reliability of the tools used in the judicial process. Permitted uses include legal research, analysis and structuring of case information, and the creation of internal work drafts. However, even when using an approved tool to generate a draft ruling, the judge bears exclusive responsibility and must perform a “complete and critical personal review and validation” before adopting any of its content.
The instructions also draw several bright lines, defining what AI cannot be used for under any circumstances. The technology is strictly forbidden from being used to automate or delegate the core judicial functions of assessing facts, weighing evidence, or interpreting and applying the law. Furthermore, the CGPJ has banned the use of AI for profiling individuals, predicting behaviour, or conducting risk assessments within the judicial context. This is a critical safeguard for companies and individuals, guaranteeing that cases will be judged by a human on their legal merits, not by an algorithm’s predictions or classifications.
SOURCE
Consejo General del Poder Judicial (Spain’s General Council of the Judiciary)
