
What it means for humans when AI-powered programs fail

Machine learning still has a way to go, but it can be a learning experience for its human creators, too

Having your podcast automatically download or checking Waze for the best route home have become second nature. Artificial intelligence (AI) and machine learning are woven into the fabric of our daily lives.

While we’d like to think AI is always reliable, computer glitches happen. Machine-led errors range in scope and scale, as do their implications. Recently, an employee was fired after his company’s computer system mistakenly terminated his contract; it took three weeks of human intervention to override the error.

Or take the case of Facebook’s chatbots. The company created bots that could converse with each other, but it wasn’t long before developers realized the bots had developed their own shorthand language to communicate. Developers shut them down.

“The point here is that AI is not only a particular technology, but that it always operates in particular contexts,” says Tero Karppi, an assistant professor at the University of Toronto’s Institute of Communication, Culture, Information and Technology and Faculty of Information. “These contexts also matter what we define as failures and how severe or important we think those failures are, and how we measure them.”

He mentions driving directions given by navigation apps during the 2017 California wildfires. While the apps provided traffic-free routes, they also gave directions that led drivers into the fires.

“Obviously there was no traffic there, but also the conditions were no longer safe, a fact that these systems were unable to calculate and process,” says Karppi. “These new technologies bring with them new failures that we never anticipated.”

Errors with AI often start at the coding level. Avery Swartz, entrepreneur and tech expert, explains how developers leverage existing software when building programs. “AI is only as good as the human that programs it, and only as good as the human that eventually determines the data it spits out,” she says.

One theory for combating AI failure, she says, is to let machines make mistakes and learn from them, just as humans do.

Martin Lavoie, VP of sales and finance at Experience, one of Canada’s largest IT staffing firms, agrees. “As a human, if you make a mistake you just correct it, but in the AI world if the machine doesn’t learn how to correct their own mistakes then that creates more problems,” he says, also stressing the significance of proper coding. “At the base, it starts with the programming aspect. Machine learning is also coding. Everything goes to coding one way or another.”
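The idea Lavoie describes, a system that corrects itself when it gets something wrong, is, loosely speaking, how error-driven machine learning works at the coding level. As a minimal illustration (not drawn from the article; the data and numbers are purely for demonstration), here is a perceptron that adjusts its internal weights only when it makes a mistake:

```python
# A minimal sketch of "learning from mistakes": a perceptron nudges
# its weights whenever its prediction is wrong, and leaves them
# alone whenever it is right.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - pred  # non-zero only when the model is wrong
            # the mistake itself drives the correction
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

# Learn the logical AND function from its four examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

After a few passes over the data, every wrong answer has been absorbed into the weights and the model classifies all four cases correctly. The same principle, scaled up enormously, underlies modern machine learning, which is why the quality of the training data and code matters so much.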

“AI is in its infancy, and like all new technology, this is where mistakes happen most. It’s not stable at the beginning and there is trial and error,” he says. “Eventually things will get smoother and companies will get more prepared. Companies that are already working in AI now are at tip of iceberg in technology, so they already know issues and are trying to prepare accordingly.”


CPA Canada will be releasing a primer on artificial intelligence this fall as well as a publication examining the impact AI may have on the profession. See how the organization is preparing members against AI error with the webinar Technology issues facing CPAs.