Investing in artificial intelligence (AI) companies has become a riskier and more involved process than in years past. Companies will need new processes and tools to comply with the more stringent AI regulations on the horizon, at least in Europe and the United States. Regulators are debating how best to structure AI rules so that risk management is balanced against the potential value these technologies can create, and investors should take a similar approach in their investment strategies. Read on for the considerations investors should keep in mind as they vet their investment pipeline.
To comply with these more involved regulations, companies increasingly need audits of their systems; protocols and back-ups for data traceability; monitoring of deployed systems; and diversity, equity and inclusion (DEI) training. A number of companies are developing formal AI policies with commitments to privacy and DEI, and are appointing ethics governance boards and chief ethics officers to implement those policies.
The overall value creation potential for AI is estimated to be in the trillions of dollars.
According to PwC’s Global Artificial Intelligence Study, AI technologies have the potential to increase global GDP by 14% by 2030, contributing up to $15.7 trillion. As is typical with new technologies, the deployment of AI systems will also create new risks. Safety in autonomous driving, and discrimination in software used by technology companies, the legal system, banking and health care, are prime examples. Privacy, as reflected in the General Data Protection Regulation (GDPR), is another consideration. Privacy-related regulations affect businesses operationally, by requiring new processes, systems and teams, and strategically, by shaping decisions such as limiting markets or products. For additional information, the rise of AI and traps for the unwary were previously covered here; risks of investing in SaaS solutions can be found here and here.
Given the complexity of the potential regulations and the web of data and machine learning tools underlying AI, companies need a playbook for their data and the algorithms they use. Consumers, regulators and investors are all watching closely. The programming underlying machine learning, along with the uses of data and any related privacy restrictions (or the lack thereof), will need to be open and transparent; otherwise, the future value of these AI tools will be eroded.