
AI Needs to Reduce Errors – Or Everyone Will Pay the Price

I have yet to meet anyone who enjoys communicating directly with an AI tool. AI-augmented CX systems may deliver some efficiencies, but those efficiencies often come with high levels of customer dissatisfaction. With that in mind, and with studies of various flavors indicating that AI is a low-trust technology, anyone considering the deployment of highly advanced, seemingly sophisticated AI-based products and services should weigh the downsides. A systematic deep dive into the cost-benefit analysis is a wise idea, particularly because AI adoption is just revving up.

To quote “AI’s Trust Problem,” a May 2024 Harvard Business Review article:

There is another gulf, however, that ought to be given equal, if not higher, priority when thinking about these new tools and systems: the AI trust gap. This gap is closed when a person is willing to entrust a machine to do a job that otherwise would have been entrusted to qualified humans. It is essential to invest in analyzing this second, under-appreciated gap — and in what can be done about it — if AI is to be adopted widely.

Particularly as litigators begin (and yes, it’s just beginning) to go after AI systems and those who deploy them (i.e., everyone from system developers to end users), every human in the chain must carefully consider what an AI-generated error will cost. Please note ... it’s not that humans don’t make mistakes. We make plenty without AI. But we have humanity and common sense, both of which are sorely lacking in every AI system.

At this point, it’s important to consider cost savings as the driver behind much AI deployment. In many instances, customers are deploying AI systems to increase efficiencies and reduce costs, often by cutting manpower costs. AI tools rely on historical data to make predictions about future outcomes. To the extent that these predictions are reliable, all is well. But when the problem doesn’t fit the mold of the questions the AI system was designed to handle, things can go horribly awry. There is a critical balancing act whose importance may be overlooked: the balance between perceived cost savings and operational efficiencies on one hand, and sound decision-making based on information from multiple inputs on the other.

The Harvard Business Review article cites 12 factors that underlie distrust of AI. These include disinformation, safety and security, the black box problem, ethical concerns, bias, instability, hallucinations in large language models, job loss and social inequalities, environmental impact, industry concentration, and state overreach. To my mind, essentially all of them are important, although I will focus on the first few.

Disinformation is at the very top of the list. It has been around since at least the Stone Age, but AI has supercharged the ability to deliberately mislead more people, more quickly, thanks to the incredible power and reach of internet tools and applications. In fact, the biggest social media companies have largely stepped back from fact-checking and content moderation in the name of free speech (and perhaps cost-cutting as well).

The issues of safety and security speak for themselves. How secure is the personal data that you’ve provided, willingly or not, to an unknown and/or ginormous database? The short answer is, “who knows?” This ties into the “black box” problem, since most AI tool providers don’t disclose what they do with the data they’ve captured and use to produce results. These providers claim (some could argue “reasonably claim”) that the value each company adds is its own special sauce, and that its processes are therefore proprietary. It’s often impossible to know how each one generates the outputs it provides, or where the biases are buried in its calculations. While AI tools are certainly far more sophisticated than this, these systems can be seen as super-powered calculators crunching many fields of numbers in a short time. Some state and local laws have been enacted to require transparency in, for example, hiring decisions. New York City has such a law on the books, yet a Cornell University study found that “few companies have disclosed how AI algorithms influence their hiring decisions,” even though the AI transparency law has been in effect for more than a year. As such, it’s hard to know how powerful, let alone enforceable, the enacted law really is.

The bottom line is this: The more that humanity is pushed out of the decision-making process, the greater the risk of bad outcomes, some of which could have been avoided and many of which will end up being litigated. Further, the more heavily a decision-maker relies on AI output, the greater the risk to which that decision-maker exposes both themselves and their enterprise.

A final quote from the Harvard Business Review article: “First, no matter how far we get in improving AI’s performance, AI’s adopters — users at home and in businesses, decision-makers in organizations, policymakers — must traverse a persistent trust gap.”

According to consultant Robert Harris, “There’s also a debate over whether making AI more humanlike will cause users to trust it more (or less), but I think this misses the point. People trust reliable robotic systems such as GPS and payment confirmations, among others. However, it’s the system’s reputation that is more important than how lifelike it is.” Reputational trust is one more worthy consideration, and one that AI hasn’t earned. Yet.

Enterprises need to invest resources in understanding the risks most responsible for the trust gap affecting their applications’ adoption, then work to mitigate those risks. Pairing humans with AI will be the most essential risk-management tool, which means we will always need humans to steer us through the gap, and those humans need to be trained appropriately. Hear, hear!