Previously we wrote about the fundamental AI ethics principles (link article). To uphold and operationalize the principles of respect for human autonomy, prevention of harm, fairness, and explicability, the EC Ethics Guidelines for Trustworthy AI provide a list of seven requirements:
Privacy and data governance: This requirement targets policies and measures to safeguard sensitive data, ensuring compliance with regulations and protecting individuals' privacy rights. Data privacy is primarily the responsibility of data governance teams, data protection officers (DPOs), cybersecurity specialists, legal teams specializing in data protection laws (such as the GDPR or CCPA), and compliance officers.
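One concrete technical measure behind such policies is pseudonymizing direct identifiers before data ever reaches an AI pipeline. The sketch below is a minimal, hypothetical illustration: the record fields are invented, and a real deployment would keep the salt in a secrets store and treat quasi-identifiers as well.

```python
import hashlib

# Hypothetical measure: replace direct identifiers with salted one-way
# hashes before records enter an AI training pipeline.
SALT = "replace-with-secret-salt"  # in practice, fetched from a secrets store

def pseudonymize(value: str) -> str:
    """Return a salted, irreversible token standing in for an identifier."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

record = {"email": "jane@example.com", "age": 34, "diagnosis": "J45"}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)  # the original email is no longer recoverable from the output
```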
Technical robustness and safety: This requirement targets the ability of AI systems to perform reliably and accurately across diverse scenarios, mitigating vulnerabilities and ensuring consistent performance. Developers, data scientists, quality assurance (QA) engineers, and system architects are primarily responsible for ensuring the robustness of AI systems. This involves thorough testing, validation, and monitoring of AI models and systems; a simple example of such a check follows.
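As a minimal sketch of a robustness check, the snippet below verifies that a classifier's predictions stay stable under small input perturbations. The model, noise scale, and 95% agreement threshold are all illustrative assumptions, not prescribed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model on synthetic data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Perturbation test: add small Gaussian noise and compare predictions.
rng = np.random.default_rng(0)
X_noisy = X + rng.normal(scale=0.01, size=X.shape)
agreement = np.mean(model.predict(X) == model.predict(X_noisy))

# Illustrative threshold; tune per model and domain.
assert agreement >= 0.95, f"unstable under noise: {agreement:.2%} agreement"
print(f"Prediction agreement under perturbation: {agreement:.2%}")
```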
Diversity, non-discrimination, and fairness: This requirement ensures that AI systems do not exhibit biases or discriminate against individuals or groups based on protected characteristics, promoting equitable outcomes. Data scientists, AI ethicists, diversity and inclusion experts, and compliance officers play key roles in identifying and mitigating biases within AI systems. They collaborate to ensure that AI models are trained on diverse and representative datasets and undergo rigorous fairness testing.
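One common fairness test is demographic parity: comparing the rate of positive model decisions across groups. The sketch below, with invented decisions, group labels, and a 0.1 tolerance, shows the idea; real audits use multiple metrics and legally grounded thresholds.

```python
import numpy as np

# Hypothetical model decisions and group membership.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

# Demographic parity difference: gap in positive-decision rates.
rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()
gap = abs(rate_a - rate_b)

print(f"positive rate a={rate_a:.2f}, b={rate_b:.2f}, gap={gap:.2f}")
assert gap <= 0.1, "demographic parity gap exceeds tolerance"  # illustrative bound
```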
Transparency: This requirement ensures the transparency and comprehensibility of AI systems' decisions and processes, enabling stakeholders to understand and trust AI-driven outcomes. Data scientists, machine learning engineers, and AI developers are responsible for designing AI models and systems that are interpretable and explainable. They implement techniques such as model explainability tools, interpretable machine learning models, and clear documentation of AI processes.
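As one example of an explainability technique, the sketch below uses scikit-learn's permutation importance to rank which input features drive a model's performance. The dataset and model are stand-ins; per-prediction tools such as SHAP or LIME offer finer-grained explanations.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Stand-in model on a public dataset.
data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.4f}")
```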
Human agency and oversight: This requirement upholds human oversight, control, and accountability in AI systems, ensuring that humans remain in charge of critical decisions and actions. Executives, AI governance leaders, legal teams, and ethics boards are responsible for establishing policies and frameworks that ensure such oversight. They define rules for human intervention, decision-making authority, and ethical guidelines for AI use cases.
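A common engineering pattern for operationalizing human intervention is a human-in-the-loop gate: automated decisions below a confidence threshold are escalated to a person. The threshold and function names below are hypothetical.

```python
# Hypothetical human-in-the-loop gate: low-confidence predictions are
# routed to a human reviewer instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.9  # illustrative; set per risk assessment

def decide(prediction: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {prediction}"
    return f"escalated to human review: {prediction} ({confidence:.0%} confidence)"

print(decide("loan_approved", 0.97))  # handled automatically
print(decide("loan_denied", 0.62))    # a human makes the final call
```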
Accountability: This requirement establishes mechanisms for accountability in AI systems, which is essential for trust and integrity. Executives, AI governance leaders, and legal teams must establish transparent, auditable systems with mechanisms to minimize and report negative impacts. They must balance trade-offs and provide avenues for redress, ensuring that both AI systems and their human operators are held responsible, fostering ethical AI use.
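At the system level, auditability usually starts with a decision log that records enough context to reconstruct and contest any outcome. The sketch below is an assumed minimal design; the field names and append-only file are placeholders for what would be tamper-evident storage in production.

```python
import json
import time
import uuid

def log_decision(model_version: str, inputs: dict, output, operator: str) -> str:
    """Append one AI decision to an audit trail and return its id."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "responsible_operator": operator,  # a named human stays accountable
    }
    with open("ai_audit.log", "a") as f:  # placeholder for tamper-evident storage
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

decision_id = log_decision("credit-model-1.4", {"income": 52000}, "approve", "analyst_042")
print(f"logged decision {decision_id}; cite this id in any redress request")
```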
Societal and environmental well-being: This requirement ensures that AI systems prioritize sustainability, environmental friendliness, and positive social impact. Leaders and ethics boards must create guidelines to ensure AI benefits society and upholds democratic values. By integrating these considerations, AI can support a more sustainable and equitable future.
--
These requirements collectively form the foundation of an effective AI governance framework, promoting ethical AI practices and fostering trust in AI technologies.