May 2019
Ethics Guidelines for Trustworthy AI
On April 8th, 2019, the High-Level Expert Group on AI presented their ethics guidelines for trustworthy artificial intelligence. This follows the publication of the guidelines' first draft in December 2018, which received more than 500 comments (including feedback from EUnited) through an open consultation.
According to the guidelines, trustworthy AI should be:
(1) lawful - respecting all applicable laws and regulations
(2) ethical - respecting ethical principles and values
(3) robust - both from a technical perspective and with regard to its social environment
The guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy. A specific assessment list aims to help verify the application of each of the key requirements:
- Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
- Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.
- Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
- Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
- Diversity, non-discrimination and fairness: unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
- Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.
- Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role here, especially in critical applications.
Next Steps
A piloting process will be set up as a means of gathering practical feedback on how the assessment list, which operationalises the key requirements, can be improved. All interested stakeholders can already register their interest in participating in the piloting process, which will be kicked off in summer 2019.
Commenting on the publication and the next steps, Jethro Schiansky, EUnited Executive Director, said: “This piloting process will be vital in ensuring that the ethical considerations which are practically relevant for various AI applications (such as industrial AI) are considered by stakeholders engaging in the piloting phase. The Guidelines, as they currently stand, imply that all ethical considerations contained in the assessment list will need to be made regardless of the application and regardless of whether there is a human impact resulting from the deployment of an AI-enabled product or service. The piloting phase should be used as an opportunity to refine the assessment list, potentially by distinguishing between various categories of AI application, so as to make it as relevant as possible to users.”
Moreover, a discussion forum was set up to foster the exchange of best practices on the implementation of Trustworthy AI.
Following the piloting phase and building on the feedback received, the High-Level Expert Group on AI will review the assessment lists for the key requirements in early 2020. Based on this review, the Commission will evaluate the outcome and propose any next steps.
All relevant information on the document, as well as the next steps towards the review of the assessment list, can be found on the new AI Alliance page dedicated to the guidelines.