Artificial intelligence (AI) refers to the ability of a computer or computer-enabled robotic system to process information and produce outcomes in a manner similar to human thought processes in learning, decision making, and problem solving.  As a result of rapid advances in AI, the McKinsey Global Institute estimated before the pandemic that 75 to 375 million people around the world will need to change jobs or acquire new skills by 2030.  AI holds the promise of both innovation and disruption, as does the legal framework developing to rein in its risks without hindering its progress.

In May 2019, the US Government joined the OECD (Organisation for Economic Co-operation and Development) in setting forth principles to promote the innovative and trustworthy development and application of AI.  At the same time, the bipartisan Artificial Intelligence Initiative Act (AIIA) was introduced in the US Senate to coordinate a national strategy for developing AI and to provide a $2.2 billion federal investment over five years to build an AI-ready workforce and accelerate the delivery of AI applications from government agencies, academia, and the private sector over the next 10 years.

Across the pond, the European Commission set up an independent High-Level Expert Group on AI to provide guidelines on improving the trustworthiness of AI.  In February of this year, the European Commission published a “White Paper on Artificial Intelligence – A European approach to excellence and trust.”  The white paper identified the most critical risks surrounding AI as risks to fundamental rights, data privacy, safety and effective performance, and the attribution of liability.  The Commission noted that regulation should be grounded in risk assessments so that responses to AI development are proportionate and do not dampen innovation.  Rather than proposing specific regulations, the Commission set out legal requirements to ensure that AI remains trustworthy and respectful of the values and principles of the European Union.

The global health crisis is accelerating AI in myriad ways.  One of the more salient examples is contact tracing, which pits personal privacy against public health and security.  Contact tracing requires huge numbers of voluntary participants, and after years of privacy scandals involving governments and corporations, consumers are reluctant to embrace it.  Rather than taking a centralized government approach, Silicon Valley generally advocates a more decentralized approach to privacy, and to contact tracing in particular.  While companies have increased their transparency in the hope of engendering trust, state governments have continued to penalize companies that overreach.  For example, enforcement under the California Consumer Privacy Act (CCPA) has continued.  In March, over sixty companies pleaded with the California Attorney General to postpone the July 1, 2020 CCPA enforcement date, but their pleas fell on deaf ears.

On June 1, 2020, federal lawmakers proposed the Exposure Notification Privacy Act (ENPA), bipartisan legislation protecting consumer privacy and promoting public health in the development of exposure notification technologies to combat the spread of COVID-19.  The legislation grants consumers control over their personal data and limits the types of data that may be collected and how that data can be used.  The ENPA is the third in a series of COVID-related federal data privacy bills (following the Republican-proposed “COVID-19 Consumer Data Protection Act of 2020” and the Democrat-proposed “Public Health Emergency Privacy Act”) aimed at reconciling the data collection needed to combat the spread of infectious disease with the public’s increasing concerns about the privacy and cybersecurity of their data.  In these uncertain times, one thing is for sure: it won’t be the last.