March 2020
European Commission's Digital Package
On 19 February 2020, the Commission published the latest set of documents relating to its digital strategy, namely:
A. Communication: “Shaping Europe's Digital Future”
B. Communication: “A European Strategy for Data”
C. White Paper: “Artificial Intelligence – A European Approach to Excellence and Trust”
D. Report: “Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics”
A summary of the relevant parts of each initiative is given below. More detail can be found in the documents linked above.
A. “Shaping Europe's Digital Future”
The Communication sets out the Commission’s strategy with regard to:
1. Technology that works for people
- See the White Paper on Artificial Intelligence (item C above);
- A reinforced skills agenda, with a strong focus on digital skills in early career transitions;
- Accelerating Europe’s investments in Gigabit connectivity through an updated Action Plan on 5G and 6G;
- A European Cybersecurity Strategy, including establishment of a joint cybersecurity unit.
2. A fair and competitive digital economy
- See “A European Strategy for Data” above;
- An Industrial Strategy package facilitating the transformation towards clean, circular, digital and globally competitive EU industries, including SMEs, and the reinforcement of single market rules;
- A new Consumer Agenda.
3. An open, democratic and sustainable society
- A circular electronics initiative to align with the Circular Economy Action Plan foreseen under the Green Deal.
B. “A European Strategy for Data”
The overall objective of the Communication is to create a single European data space, covering both personal and non-personal data, and to increase access to and use of data for both citizens and businesses. This requires a number of initiatives and actions based on four pillars:
1. A cross-sectoral governance framework for data access and use
- Focus on standards and interoperability;
- Use of data for research;
- A European Common Industrial Data Space (facilitate an industry agreement on how to share data in line with competition rules);
- Propose a “Data Act” in order to address B2B data sharing, liability, IPR framework and competition rules.
2. Enablers: Investments in data and strengthening Europe’s capabilities and infrastructures for hosting, processing and using data, and interoperability
- Invest in a High Impact project on European data spaces, encompassing data sharing architectures (including standards for data sharing, best practices and tools) and governance mechanisms, as well as the European federation of energy-efficient and trustworthy cloud infrastructures and related services, with a view to facilitating combined investments of €4–6 billion, of which the Commission could aim to invest €2 billion. The first implementation phase is foreseen for 2022.
3. Competences: Empowering individuals, investing in skills and in SMEs
- Explore enhancing the portability right for individuals under Article 20 of the GDPR, giving them more control over who can access and use machine-generated data (possibly as part of the Data Act in 2021).
4. Common European data spaces in strategic sectors and domains of public interest
- See the annex of the document “A European Strategy for Data”, which presents each of the sector- and domain-specific common European data spaces in more detail, provides background on the sector-specific policies and legislation underpinning their creation, and proposes sector-specific actions that are tangible, sizeable, focused on data, and accompanied by a clear and realistic timeline.
C. White Paper on Artificial Intelligence – A European Approach to Excellence and Trust
The White Paper focusses on the establishment of an “ecosystem of excellence”, based on a policy framework aimed at the mobilisation of resources, and an “ecosystem of trust”, which would be achieved by the introduction of a regulatory framework for AI.
The ecosystem of excellence will focus on:
- Investment, with an objective of €20 billion of total investment per year, in cooperation with Member States;
- Research and development;
- Digital skills;
- SMEs and start-ups;
- Public Private Partnership on AI, Data and Robotics;
- Access to and management of data (investment in high-performance and quantum computing);
- International cooperation.
The ecosystem of trust will focus on:
- The seven key requirements taken from the Ethics Guidelines for Trustworthy AI;
- Fundamental rights, safety and liability frameworks;
- The importance of avoiding a fragmentation of the Single Market through a proliferation of national measures to deal with the risks stemming from AI;
- “High-risk” AI applications, which alone would be regulated by any new horizontal measure; other applications would continue to be dealt with through existing safety and liability frameworks (albeit these are currently also under revision);
- The relevant criteria for identifying a “high-risk” AI application, based cumulatively on a list of high-risk sectors and a definition of high-risk uses; these criteria need to be further developed and will be crucial;
- Voluntary labelling for low-risk applications.
D. Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things, and Robotics
The report essentially attempts to identify the gaps in the current regulatory frameworks for product safety and liability. In particular, it focusses on the following issues:
- Safety and connectivity – the lack of provisions in the current safety framework dealing with the safety risks associated with connectivity and cybersecurity.
- Autonomy – the idea that a new risk assessment procedure could or should be foreseen where a product undergoes changes during its lifetime, such as the introduction of a new or different function not foreseen by the manufacturer in the initial risk assessment.
- Data dependency – risks to safety derived from faulty data (this seems to make little sense but merits further discussion).
- Opacity – the requirement that humans be able to understand how a system’s algorithmic decisions have been reached (transparency).
- Software – stand-alone software which, if uploaded into AI products, should not have a negative impact on safety; it is also not clear whether software is considered to be a product under the current liability framework.
- Complex value chains – consideration to be given to ensuring that each actor in the value chain with an impact on product safety (e.g. software producers), as well as users who modify the product, assumes their responsibility and provides the next actor in the chain with the necessary information and measures.
- Liability – the key issues are scope (is embedded/stand-alone software covered?) and burden of proof. The Commission is considering the need to reverse the burden of proof as a consequence of the complexity of liability rules for damage caused by the operation of AI applications. This is dangerous!
- High-risk AI applications – coupling strict liability for high-risk AI with mandatory insurance (e.g. motor insurance model).