When executives think of the sources of artificial intelligence risks for the enterprise, the imagery may be one of rogue terminators taking over the world and subjugating humanity. However, while such sentient systems, which may pose a sinister risk, are still a long way off, there are more mundane but equally critical risks to consider.
After enticing the corporate sector for many years, artificial intelligence is now firmly established as an exciting technology to solve problems where cognition is a prerequisite. Moreover, the advent of cheap storage, faster processing power, and the ability to crunch through vast amounts of data has propelled artificial intelligence to the forefront of emerging enterprise technologies. So yes, AI is indeed a transformational technology.
But companies need to understand the practical and, some may say, the dark side of things – the sources of risks and how they manifest in AI projects and models.
It is essential to understand that not all risks are significant, and some can be mitigated with suitable measures. But the greatest risk of all is not being aware that the risks exist, or where such potential trouble spots originate.
Sources of Artificial Intelligence Risks for the Enterprise
- Explainable AI: Can you explain the results? Or are your AI models and algorithms a black box? Explainability is the biggest concern and potential source of risk for companies.
- Inherent Biases Creeping into Models: Human beings have several implicit and explicit biases, and without deliberate mitigation, those biases can creep into AI projects and models.
- Flaws in the Underlying Data: In many companies, input data has several problems. Sometimes the data is incomplete; other times it is inaccurate, unlabeled, or mislabeled. Each of these problems poses a threat to the AI models’ stability, accuracy, and predictability.
- Data Privacy Issues: With GDPR, CCPA, and several other regulations, there are two types of risks – one is using personal and confidential data in an unprotected manner, exposing the company to regulatory and reputational risk; the other is the risk of theft of the models and the underlying data.
- Model Updates: Even after building a sophisticated machine learning model, if the model is not learning from new information, the results are often stale and, at times, dangerous. Continuous learning and an iterative feedback loop are prerequisites for creating effective AI solutions.
- Data is Not Representative: If the composition of the data used in AI projects is not representative of the population the company serves, the risk is enormous. Any decisions that result from the AI model do not reflect the entire spectrum of possibilities and may engender unfavorable outcomes.
- Models Learning the Wrong Things: For example, a conversational AI model that interacts with anonymous groups may encounter a lot of profanity, vulgarity, racist dialog, misogyny, etc., and may incorporate that into its “speech” in future interactions. (Remember this incident: “Tay was an artificial intelligence chatterbot that Microsoft Corporation originally released via Twitter on March 23, 2016; it caused subsequent controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch.” – From Wikipedia)
- Humans in the Loop Misguiding Models: Most AI projects include humans in the loop to override, inform, or update the AI models. But whether through insufficient knowledge, inherent bias, or errors in judgment, there is a risk that the humans misguide the models.
- Tech Stack Issues: Lack of the proper infrastructure that allows the AI models to function optimally can challenge stability and performance.
- Infrastructure Constraints: Whether it is bandwidth, failure of data updates, processing power, or storage capacity, any number of infrastructure issues will impede the proper functioning of AI models.
- Lack of Skills: Most companies do not have the requisite talent, such as Data Scientists, AI developers, technology architects, and even data labelers. This shortfall in caliber, capacity, and competence may force companies to rely on third parties.
- Cybersecurity Risks: Today’s hackers, particularly those sponsored or harbored by nation-states, have proven time and again that nothing is beyond their reach. AI models in the cloud with all the underlying data are ripe for a slew of cybersecurity threats.
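Several of the data-related risks listed above – incomplete data, missing or mislabeled records, and unrepresentative samples – can be caught with simple automated checks before a dataset ever reaches model training. The sketch below is a minimal, hypothetical example in plain Python; the field names, record structure, and the imbalance threshold are illustrative assumptions, not something prescribed by any particular toolkit.

```python
from collections import Counter

def audit_dataset(records, label_field="label",
                  required_fields=("age", "income", "label")):
    """Flag common data-quality problems before training.

    Returns a dict of issues: the fraction of records missing each
    required field, and a label-imbalance warning when one class
    dominates. (Field names and the 0.75 threshold are illustrative.)
    """
    issues = {}
    n = len(records)

    # Incomplete data: fraction of records missing each required field.
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) is None)
        if missing:
            issues[f"missing_{field}"] = missing / n

    # Unrepresentative labels: one class dominating the dataset.
    labels = Counter(r[label_field] for r in records
                     if r.get(label_field) is not None)
    if labels:
        top_share = max(labels.values()) / sum(labels.values())
        if top_share > 0.75:  # arbitrary illustrative threshold
            issues["label_imbalance"] = top_share

    return issues

# Tiny illustrative dataset: one record missing income,
# and "approve" labels heavily outnumbering "deny".
records = [
    {"age": 30, "income": 50000, "label": "approve"},
    {"age": 45, "income": None,  "label": "approve"},
    {"age": 29, "income": 72000, "label": "approve"},
    {"age": 51, "income": 31000, "label": "approve"},
    {"age": 38, "income": 64000, "label": "deny"},
]
print(audit_dataset(records))
```

In practice a check like this would run as a gate in the data pipeline, so flawed or skewed data is surfaced to the team rather than silently absorbed into the model.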
Just because there are risks does not mean companies should shy away from artificial intelligence. Here are a few quick tips to mitigate the risks from artificial intelligence projects.
Tips for Mitigating Risks to AI Projects:
- Choose the Right Use Cases: Choosing valid use cases is an essential first step.
- Ensure Data Quality: Data quality is at the heart of how effectively AI models function.
- Build a Diverse, Cross-functional Team: A team with representation across racial groups, age bands, beliefs, and schools of thought is essential to AI success.
- Robust Testing: Institute backtesting, simulation, and other measures to ensure that the AI models are robust, functioning as intended, and producing outcomes within the acceptable realm of possibilities.
- Establish Feedback Loops and Continuous Learning: Models should not become stale, and it is essential to keep the AI algorithms updated frequently.
- Implement Lightweight but Effective Governance: A robust governance model with adequate guidelines and guardrails is a safeguard against model misbehavior.
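The tips on feedback loops and continuous learning largely come down to monitoring whether production data still resembles the data the model was trained on, so stale models are retrained rather than trusted indefinitely. Below is a minimal sketch of one such drift signal – a shift in a feature's mean, measured in training standard deviations. It is a deliberately crude stand-in for fuller drift tests (such as the population stability index), and the sample values and 3-sigma threshold are illustrative assumptions.

```python
import statistics

def drift_alert(training_values, live_values, threshold_sigmas=3.0):
    """Return True when the live data's mean has drifted more than
    `threshold_sigmas` training standard deviations away from the
    training mean. A simple proxy for "the world has changed and
    the model may need retraining."
    """
    train_mean = statistics.mean(training_values)
    train_std = statistics.stdev(training_values)
    live_mean = statistics.mean(live_values)
    shift = abs(live_mean - train_mean) / train_std
    return shift > threshold_sigmas

# Illustrative numbers: training data centered near 100,
# one live batch that looks similar and one that has shifted.
training = [100, 102, 98, 101, 99, 103, 97, 100]
stable_live = [101, 99, 100, 102]
drifted_live = [140, 138, 145, 142]

print(drift_alert(training, stable_live))   # small shift: no alert
print(drift_alert(training, drifted_live))  # large shift: alert fires
```

Wired into a scheduled monitoring job, an alert like this becomes the trigger for the feedback loop: investigate the shift, refresh the data, and retrain the model before its predictions go stale.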