Tuesday 21 July 2020

AI Update: EU High-Level Expert Group Publishes Self-Assessment for Trustworthy AI

On July 17, 2020, the High-Level Expert Group on Artificial Intelligence set up by the European Commission (“AI HLEG”) published The Assessment List for Trustworthy Artificial Intelligence (“Assessment List”). The purpose of the Assessment List is to help companies identify the risks of AI systems they develop, deploy or procure, and implement appropriate measures to mitigate those risks.

The Assessment List is not mandatory, and there is not yet a self-certification scheme or other formal framework built around it that would enable companies to signal their adherence to it. The AI HLEG notes that the Assessment List should be used flexibly; organizations can add or ignore elements as they see fit, taking into consideration the sector in which they operate. As we’ve discussed in our previous blog post here, the European Commission is currently developing policies and legislative proposals relating to trustworthy AI, and it is possible that the Assessment List may influence the Commission’s thinking on how organizations should operationalize requirements relating to this topic.

As a preliminary step, the AI HLEG recommends that organizations perform a fundamental rights impact assessment to establish whether the AI system respects the fundamental rights enshrined in the EU Charter of Fundamental Rights and the European Convention on Human Rights. That assessment could include the following questions:

  1. Does the AI system potentially negatively discriminate against people on any basis?
    1. Have you put in place processes to test, monitor, address, and rectify potential negative discrimination (bias)?
  2. Does the AI system respect children’s rights?
    1. Have you put in place processes to test, monitor, address, and rectify potential harm to children?
  3. Does the AI system protect personal data relating to individuals in line with the EU’s General Data Protection Regulation (“GDPR”) (for example, requirements relating to data protection impact assessments or measures to safeguard personal data)?
  4. Does the AI system respect the rights to freedom of expression and information and/or freedom of assembly and association?
    1. Have you put in place processes to test, monitor, address, and rectify potential infringement on freedom of expression and information, and/or freedom of assembly and association?

Following the fundamental rights impact assessment, organizations can then carry out the self-assessment for trustworthy AI. The Assessment List proposes a set of questions for each of the seven requirements for trustworthy AI set out in the AI HLEG’s earlier Ethics Guidelines for Trustworthy Artificial Intelligence. A non-exhaustive list of the key questions relating to each of the seven requirements is as follows:

  1. Human Agency and Oversight
  • Is the AI system designed to interact with, guide, or take decisions by human end-users that affect humans or society?
  • Could the AI system generate confusion for some or all end-users or subjects on whether they are interacting with a human or AI system?
  • Could the AI system affect human autonomy by interfering with the end-user’s decision-making process in any other unintended and undesirable way?
  • Is the AI system a self-learning or autonomous system, or is it overseen by a Human-in-the-Loop/Human-on-the-Loop/Human-in-Command?
  • Did you establish any detection and response mechanisms for undesirable adverse effects of the AI system for the end-user or subject?
  2. Technical Robustness and Safety
  • Did you define risks, risk metrics and risk levels of the AI system in each specific use case?
  • Did you develop a mechanism to evaluate when the AI system has been changed in such a way as to merit a new review of its technical robustness and safety?
  • Did you put in place a series of steps to monitor and document the AI system’s accuracy?
  • Did you put in place a proper procedure for handling the cases where the AI system yields results with a low confidence score?
  3. Privacy and Data Governance
  • Did you put in place measures to ensure compliance with the GDPR or a non-European equivalent (e.g., data protection impact assessment, appointment of a Data Protection Officer, data minimization, etc.)?
  • Did you implement the right to withdraw consent, the right to object, and the right to be forgotten into the development of the AI system?
  • Did you consider the privacy and data protection implications of data collected, generated, or processed over the course of the AI system’s life cycle?
  4. Transparency
  • Did you put in place measures that address the traceability of the AI system during its entire lifecycle?
  • Did you explain the decision(s) of the AI system to the users?
  • Did you establish mechanisms to inform users about the purpose, criteria, and limitations of the decision(s) generated by the AI system?
  5. Diversity, Non-discrimination, and Fairness
  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design?
  • Did you ensure a mechanism that allows for the flagging of issues related to bias, discrimination or poor performance of the AI system?
  • Did you assess whether the AI system’s user interface is usable by those with special needs or disabilities or those at risk of exclusion?
  6. Societal and Environmental Well-being
  • Where possible, did you establish mechanisms to evaluate the environmental impact of the AI system’s development, deployment and/or use (for example, the amount of energy used and carbon emissions)?
  • Could the AI system create the risk of de-skilling of the workforce? Did you take measures to counteract de-skilling risks?
  • Does the system promote or require new (digital) skills? Did you provide training opportunities and materials for re- and up-skilling?
  • Did you assess the societal impact of the AI system’s use beyond the (end-)user and subject, such as potentially indirectly affected stakeholders or society at large?
  7. Accountability
  • Did you establish mechanisms that facilitate the AI system’s auditability (e.g., traceability of the development process, the sourcing of training data and the logging of the AI system’s processes, outcomes, positive and negative impact)?
  • Did you ensure that the AI system can be audited by independent third parties?
  • Did you establish a process to discuss and continuously monitor and assess the AI system’s adherence to the Assessment List?
  • For applications that can adversely affect individuals, have redress by design mechanisms been put in place?
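
The Assessment List is framed as a questionnaire rather than a technical standard, but organizations applying it across many AI systems may find it helpful to track their answers in a structured form. The sketch below is purely illustrative and is not part of the AI HLEG’s materials: it assumes a hypothetical Python representation (the Requirement and Answer classes and the open_items helper are our own names, and the sample answers are placeholders) for recording responses against the seven requirements and flagging questions that still need mitigation measures.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Answer:
    question: str
    satisfied: bool   # outcome of the yes/no self-assessment question
    notes: str = ""   # evidence or planned mitigation measures, if any


@dataclass
class Requirement:
    name: str
    answers: List[Answer] = field(default_factory=list)

    def open_items(self) -> List[Answer]:
        # Questions answered "no" that still need a mitigation measure.
        return [a for a in self.answers if not a.satisfied]


# The seven requirements from the Ethics Guidelines, each shown with an example
# question paraphrased from the Assessment List (answers are placeholders).
assessment = [
    Requirement("Human Agency and Oversight", [
        Answer("Detection and response mechanisms for undesirable adverse effects?",
               True, "Quarterly review of end-user incident reports."),
    ]),
    Requirement("Technical Robustness and Safety", [
        Answer("Procedure for handling results with a low confidence score?", False),
    ]),
    Requirement("Privacy and Data Governance", []),
    Requirement("Transparency", []),
    Requirement("Diversity, Non-discrimination, and Fairness", []),
    Requirement("Societal and Environmental Well-being", []),
    Requirement("Accountability", []),
]

for requirement in assessment:
    for item in requirement.open_items():
        print(f"[{requirement.name}] open item: {item.question}")
```

In practice, the notes field could point to the underlying evidence (for example, a data protection impact assessment or an audit report), so that open items can be revisited as the AI system changes over its life cycle.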

 

The Assessment List is part of the EU’s strategy on artificial intelligence outlined in the communication released by the European Commission in April 2018. A previous version of the Assessment List was included in the April 2019 Ethics Guidelines for Trustworthy AI issued by the AI HLEG, which we discussed in our prior blog post here. The revised Assessment List reflects lessons learned from the piloting phase, which ran from 26 June to 1 December 2019 and in which over 350 stakeholders participated.


