Trends and Studies / Digitalisation

Explaining the Black Box: when law controls AI

The explainability of Artificial Intelligence algorithms, in particular machine-learning algorithms, has become a major societal concern. Policy-makers across the globe are starting to respond to it.

In Europe, the High-Level Expert Group on AI has proposed seven requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

https://www.cerre.eu/publications/explaining-black-box-when-law-controls-ai

https://www.cerre.eu/sites/cerre/files/issue_paper_explaining_the_black_box_when_lax_controlds_ai.pdf