Mitigating the Risks of AI
Despite the constant presence of AI in the media, trust in AI companies has been declining. As you can see below, the US public had viewed AI companies neutrally in 2019. That perspective has since moved into distinctly distrustful territory, a full 15% points lower in 2024.
Anecdotally, we hear similar concerns expressed from both business leaders as well as front-line employees. The former is worried they don’t have the appropriate governance in place for the technology, and the latter fears AI is coming for their jobs.
For business leaders, we see primarily four categories of risks that they should consider when deploying AI: inaccuracies in AI output, legal and data privacy risks, risk of AI bias and misuse through negligence or even otherwise bad actors. As enterprises look to AI to drive decision-making and operational efficiencies, understanding and mitigating these risks becomes paramount:
Risk #1: Accuracy risks
The incredible power and the Achilles’ heel of generative AI are one and the same. It’s precisely the probabilistic approach that allows LLMs to produce human-like content. But that same fuzzy logic by definition will never be 100% accurate - just like no human is ever perfect. Practically, this means you should deploy AI under the assumption that it will be wrong some percentage of the time.
In 2022 there was a very public example of this. Air Canada’s AI chatbot had incorrectly informed a passenger that he was eligible for a discount. When the passenger went to redeem that discount, Air Canada shockingly tried to argue that they weren’t responsible for any hallucinations the AI might have had and refused to honor the discount. This minor incident then turned into a PR debacle for Air Canada
Whether the AI is wrong 10% of the time, 5% or even 1% depends on the specific implementation, but regardless it’s critical to take steps to mitigate those inevitable moments of failure. Here are a few for you to explore:
Supplement AI with ground truth
One particularly promising approach to improving AI accuracy is to deploy your app using a popular architecture called Retrieval Augmented Generation, or RAG for short. In simple terms, it entails creating a database that contains the knowledge you want your AI to accurately represent. Depending on your use case, this knowledge might include your customer service documents, or your company’s HR policies, or your catalog of product details. When a user asks your AI app a question, it first searches this database for any relevant ground truth, retrieves the appropriate info and appends the info as part of the prompt to the LLM.
Since your AI is provided with not only the user’s original instructions, but also any relevant context that we know to be factually correct, the responses it provides become far, far more accurate. This technique has proven to be incredibly effective at driving down hallucinations, with various development frameworks emerging to make implementation relatively easy.
Keep a human-in-the-loop
Although RAGs can significantly improve the quality of AI outputs, it will never eliminate them fully. We always recommend that companies deploy their AI tools in such a way that a human can pass final judgment on the output before its use. For example, this could include reading the AI-generated article before it’s published, or reviewing an AI image before it's incorporated into an ad. It takes far less time for a person to review something than to create it, so the operational savings can still be preserved while you mitigate the risk of AI hallucinations.
In most businesses, a manager would expect to spend some portion of their time reviewing the work of an entry-level employee. The same mental model applies here.
Take responsibility when AI is wrong
When AI inevitably makes that mistake (because that’s how AI works), and the human-in-the-loop inevitably fails to catch it (they’re only human after all), be prepared as a business to take responsibility. Customers expect businesses to own their mistakes, whether that’s due to an error on their website, or perhaps a sales representative mistakenly quotes the wrong price, or if their chatbot says the wrong thing. The cost to take responsibility for the hallucinated discount would have been a mere $645. The reputational harm from a disgruntled customer is immeasurably more costly.
Companies deploying AI should gauge both the frequency that their AI is wrong, and the cost of being mistaken and bake that into their financials from the beginning. Within manufacturing, factories carefully measure the number of defects per million, and expect to absorb the cost of those defects. Credit card companies anticipate that a portion of their customer base will fail to pay off their balances, and structure their products accordingly. AI usage should be no different.
Avoid customer-facing applications to start
The worst kind of error a business could make is one that negatively impacts their customers. Given the inherent risk of error with AI and its novelty as a technology, some companies are prioritizing internal operational use cases for AI tooling over customer facing applications. We’ve observed this anecdotally with our clients as well as represented within the recent survey data published by Andreessen Horowitz.
While we firmly believe there are significant opportunities to create value through customer facing AI applications, if the cost of being wrong is particularly high or your company’s still getting comfortable with the technology it makes sense to start internally.
Risk #2: Legal & data privacy risks
The advent of generative AI has caused a wave of both legal and privacy concerns over what data is being used to train these large language models. Many media companies have expressed concern that their copyrighted content was used as training data without their permission and therefore illegally. This has led to multiple lawsuits against foundation model providers, such as the one from the NY Times.
Business leaders have been understandably skittish about adopting AI and inadvertently infringing on copyright. However, the model providers have nearly all responded by offering a form of “copyright shield”. For example, OpenAI explicitly states in their business terms that they will defend and indemnify users if a third party claims IP infringement. Some companies are trying to differentiate their models through copyright. For example, Adobe is positioning Firefly as a commercially safe alternative to Midjourney, having been trained on licensed photos from Adobe Stock vs scraped from the internet. We recommend clients review the indemnification clauses of model providers before using them, but rest assured the most popular providers are at this point commercially safe for enterprise use.
On the privacy side, businesses may be concerned that an AI provider will use their inputs into the AI as future training data. This would be especially problematic if sensitive, proprietary or personally identifiable data was then outputted by the AI to external users. We recommend clients pay close attention to the terms of use for the type of license you buy from the model providers. For example, OpenAI explicitly commits to not using your data, inputs and outputs for training models for ChatGPT Team and ChatGPT Enterprise. Contrast that with the privacy language for personal accounts, where OpenAI reserves the right to train their models on user provided content.
Some leaders have rolled out policies that allow the use of AI tools, but ask that employees refrain from providing the AI with confidential corporate information. These types of policies are cumbersome, confuse your teams and are nigh impossible to enforce. We believe a simple, clear policy is a better approach. Either get the appropriate license that provides data privacy (usually for Enterprise accounts) or don’t use those tools at all.
Risk #3: Risk of bias in AI
The unfortunate reality is that AI is often biased. This is because AI is trained on large swathes of the internet, which itself reflects the biases of humans and is not representative. Take language for example. 55.6% of the internet is written in English, despite the fact that native English speakers only account for 4.7% of the global population.
If you factor in secondary English speakers, that figure only goes up to 18.8%. Once you realize that this is how the underlying training data skews, it becomes unsurprising that the most popular LLMs handle English better than other languages. The same bias can be observed beyond spoken languages. For instance, within coding use cases most LLMs are more proficient with Python than something obscure like Rust. One study from MIT found that 3 computer vision gender classification systems were significantly less accurate for darker-skinned females (up to 34.7% error rate) compared to lighter-skinned males (0.8% error rate). This was attributed to training datasets that were overwhelmingly composed of lighter-skinned subjects.
The challenge for most leaders deploying AI for corporate applications is that they will almost certainly use off-the-shelf models like GPT-4, Claude or Gemini, where the exact training data used and the degree to which it is fair will not be clear. Another difficulty is that different organizations and individuals will have different definitions of fairness. However, you can still mitigate bias to an extent.
First, create a clear definition of fairness for your company and for your particular application of AI. Perhaps you’re primarily concerned with racial and ethnic fairness. Or perhaps you’re concerned with gender or age bias. Or perhaps you want to ensure your AI has sufficient non-English language coverage.
Once you have a measurable definition, you can set up an evaluation framework to periodically test your AI for bias. There are both proprietary and open source tools that can help with this, such as FairLearn from Microsoft and Fiddler.
Finally, you can inspect your own data for bias. While you won’t have access to the training data already incorporated into an LLM, in many instances you’ll supplement with additional training data to improve accuracy for your specific use case. Having balance within the data set that you control can help reduce any inherent bias.
Risk #4: Risk of negligence or bad actors
There have been numerous instances already of people being overly reliant on AI or not bothering to double check the work it produces. In New York, there was a high profile case where two lawyers were sanctioned for submitting a legal brief that contained six fake citations that ChatGPT had hallucinated. In the scientific journal Frontiers, there was another situation where an author published a paper showing a rat with impossibly large genitalia, a figure (amongst others) that was fabricated by Midjourney. What’s amazing in this instance is that this paper had made it past an editor and two peer reviewers prior to publishing. Whether the lawyer or scientific author in these instances were deliberately trying to mislead through AI or were simply negligent is beside the point. For a business leader, the outcome is the same.
There are a few techniques you can deploy to minimize this very real risk. The most potent technique was already mentioned earlier - keep a human in the loop. For particularly sensitive use cases, you may want to consider adding a second reviewer to catch anything the first reviewer may have missed. There are also tools that can be deployed to identify an over-reliance on AI. For example, Copyleaks and GPTZero are two popular tools that identify when AI has been used to generate a piece of content. In extreme instances where you’re concerned about truly malicious usage of AI, a common technique used within cybersecurity is to deploy red teams. With origins in war gaming, red teams are tasked with discovering ways to exploit the AI application and produce undesirable outcomes. Once these vulnerabilities have been identified, your teams can then develop appropriate countermeasures.
When integrating AI into business operations, it’s crucial to acknowledge and address the inherent risks that accompany this technology. However, once understood these challenges can be effectively mitigated through a variety of tactics. By establishing comprehensive governance frameworks and fostering a culture of accountability, businesses can safely and responsibly leverage AI to drive innovation and operational efficiency.