Taking the next step after the Ethics Guidelines for Trustworthy AI published in April 2019, this report contains the HLEG's proposed policy and investment recommendations for Trustworthy AI, addressed to EU institutions and Member States. The recommendations recognise digitalisation and AI as being among the most transformative technologies of our time. Policy-makers are keenly aware that, when developing and deploying AI, the benefits must be maximised and the risks minimised or prevented; the following recommendations are proposed to that end.
Recommendations focus on four main areas of beneficial impact: (A) Humans and Society, (B) the Private Sector, (C) the Public Sector and (D) Research and Academia.
In addition, the second part of the report addresses the main enablers needed to facilitate those impacts: the availability of data and infrastructure (E); skills and education (F); appropriate governance and regulation (G); and funding and investment (H).
Here is a list of the main takeaways from the report:
1. Empower and protect humans and society
First, individuals need to be aware of and understand the capabilities, limitations and impacts of AI. Second, they must have the necessary education and skills to use the technology, so that they can truly benefit from it and are prepared for a transformed working environment in which AI systems will become ever more prevalent. And third, they need adequate safeguards against any adverse impact that AI might bring.
2. Take up a tailored approach to the AI landscape
Policy-makers should consider the “big picture”, looking at AI’s overall impact on – and potential for – society, while simultaneously understanding the sensitivities of AI solutions in B2C, B2B and P2C contexts, both as purely digital products and services and as digital solutions embedded in physical systems.
3. Secure a Single European Market for Trustworthy AI
This is a complex and multifaceted undertaking which includes the avoidance of market fragmentation, for instance through the harmonisation of legislation where appropriate, while at the same time maintaining a high level of protection of individuals’ rights and freedoms across all Member States.
4. Enable AI ecosystems through Sectoral Multi-Stakeholder Alliances
Implementing the recommendations put forward in this document necessitates stakeholder cooperation. It is only by joining forces and bringing all relevant actors – from civil society, industry, the public sector and research and academia – around the table, that we can make a difference. The proposal is to start laying the groundwork for this in the second half of 2019, building on the current report.
5. Foster the European data economy
A whole set of policy actions is needed to enable European organisations to generate societal benefits and succeed in global competition, including provisions for data access, data sharing, use of data, re-use of data and data interoperability, all while ensuring high privacy and data protection standards for individuals. This also requires putting in place the necessary (physical) infrastructures to enable the other building blocks needed to develop and deploy Trustworthy AI in Europe.
6. Exploit the multi-faceted role of the public sector
It is uniquely placed to deliver and promote human-centric and Trustworthy AI services, leading by example, while ensuring a strong protection of fundamental rights. Public procurement-based innovation provides a great opportunity not only to incentivise the development of novel AI solutions that can optimise public services, but also to foster Trustworthy AI solutions amongst European companies of all sizes, and to create beneficial solutions in their own right for application elsewhere.
7. Strengthen and unite Europe’s research capabilities
Europe should strengthen its existing Centres of Excellence in AI and create additional ones, and foster collaboration with other stakeholders, including small and large companies, the public sector, as well as society at large. An ambitious research roadmap for AI should be developed, which includes grand challenges of global relevance, respects and fosters Trustworthy AI, and substantially impacts human-centric application domains.
8. Nurture education to the Fourth Power
This starts with raising awareness and providing education on AI’s capabilities, challenges and limitations, as well as teaching the appropriate skills to deal with them, whilst ensuring an inter- and multidisciplinary perspective. Primary (1), secondary (2) and tertiary (3) education models need to take this into consideration, and continuous learning (4) – including on-the-job learning – must secure the re- and up-skilling of individuals for the new digital era in Europe, establishing a work-life-train balance.
9. Adopt a risk-based governance approach to AI and ensure an appropriate regulatory framework
A risk-based approach is advocated, focused on proportionate yet effective action to safeguard AI that is lawful, ethical and robust, and fully aligned with fundamental rights. A comprehensive mapping of relevant EU laws should be undertaken to assess the extent to which these laws are still fit for purpose in an AI-driven world. In addition, new legal measures and governance mechanisms may need to be put in place to ensure adequate protection from adverse impacts and to enable proper enforcement and oversight, without stifling beneficial innovation.
10. Stimulate an open and lucrative investment environment
The new Horizon Europe and Digital Europe programmes set firm steps towards enhanced European investment levels in AI, yet much more needs to be done on the public side, and real successes can only be achieved through significant private sector support.
11. Embrace a holistic way of working, combining a 10-year vision with a rolling action plan
To achieve these goals, Europe needs a holistic strategy with a long-term vision that can capture the opportunities and challenges of AI for the next 10 years. At the same time, a framework is needed that allows continuous monitoring of the landscape and the adaptation of impactful actions on a short-term rolling basis. In this regard, the yearly update of the Commission’s and Member States' Coordinated Plan on AI is a welcome development and should be maintained.
As concrete next steps to pave the road for this new way of working, it is recommended that, in the second half of 2019, (1) a piloting phase of the Ethics Guidelines for Trustworthy AI be run to enable their improvement and secure their (sectoral) relevance, and (2) a limited number of sectoral AI ecosystem analyses be instigated. Building on this report’s cross-sectoral recommendations, these analyses should identify the impactful actions to be undertaken for various strategic sectors, covering all the areas of impact and the enablers mentioned in this report.
> Download the full report with more details and full explanation of each recommendation.