THE BOTTOM LINE
- Accountability is Guaranteed: For businesses litigating in Spain, judicial decisions will remain exclusively human-made. The new rules mandate that judges maintain effective human control over any AI tool, ensuring final rulings are a product of human reason, not algorithmic output.
- Enhanced Data Security: Your company’s sensitive information, especially personal data or confidential commercial details submitted in legal proceedings, is explicitly protected. Judges are prohibited from using AI systems to process this type of information, mitigating the risk of data leaks or misuse.
- A Controlled Tech Rollout: The Spanish judiciary is adopting a “walled garden” approach. Judges can only use AI tools officially provided or approved by the justice administration. This signals a cautious, standardized adoption of legal tech, impacting which tools gain traction within the court system.
THE DETAILS
Spain’s General Council of the Judiciary (CGPJ), the governing body of the country’s judges, has issued a landmark instruction on the use of artificial intelligence in judicial activities. The directive aims to create a clear, consistent framework that lets the judiciary capture AI’s efficiency gains while firmly upholding core legal principles. It also proactively aligns the Spanish courts with emerging national and EU regulation, including the EU AI Act, and establishes a clear policy: AI is a support tool, not a substitute for judicial authority.
The cornerstone of the new framework is the principle of effective human control. The instruction is unequivocal: AI systems cannot operate autonomously to make judicial decisions, assess facts or evidence, or interpret the law. Every output from an AI tool, such as a draft summary or legal research, must undergo a complete and critical personal review by the judge. This ensures that judicial independence and responsibility remain solely with the human decision-maker, preventing the delegation of core judicial functions to a machine and safeguarding against algorithmic bias.
In practice, the instruction defines clear boundaries for AI use. Permitted applications include legal research, analysis of case files, and drafting internal working documents. Strict prohibitions apply, however: judges are barred from using AI for profiling individuals, predicting behavior, or conducting risk assessments. Furthermore, only AI applications provided and vetted by the competent justice administrations or by the CGPJ itself may be used, creating a controlled environment in which every tool must meet stringent standards for security, confidentiality, and reliability.
SOURCE
Consejo General del Poder Judicial (CGPJ)
