Ethics are central to building AI tools, says expert

As a speaker at the 2019 Ethics Symposium, Jeff Lui will highlight the need for a multidisciplinary approach to artificial intelligence projects


“I’m less concerned about AI than I am about the people who are actually building the tools,” says Jeff Lui, a director in Deloitte’s AI practice (Shutterstock/Gorodenkoff)

Jeff Lui is fascinated by people.

He likes to explore how they think, learn and work. And as a director in Deloitte’s artificial intelligence (AI) practice, he’s also acutely interested in how they relate to AI—especially when it comes to creating the tools.

“I’m less concerned about AI than I am about the people who are actually building the tools,” he says. “It’s very important to ensure they have a good ethical lens on what they’re building. Because ultimately, we’re putting these tools in their hands. It’s all unregulated—it’s very dangerous.”

It’s precisely that ethical lens that Lui intends to explore at the 2019 Ethics Symposium, being held April 25-26. Presented by the Centre for Accounting Ethics and CPA Canada, the symposium will explore the influence of technology on all aspects of the accounting profession and its practice. [See: Accountants can only gain from AI, say experts]


Lui says he is often asked to talk about the implications of building out an AI app in an organization. “The way it’s done is fairly consistent, whether you are in a bank, a manufacturer or another kind of company,” he says. “But there are many ethical issues to consider. You need a lot of oversight in the way data is used and how it’s being deployed, as well as the bias and fairness associated with building the application. It’s really important to understand all of that, but not a lot of people do.”

At the symposium, Lui plans to outline the four-step framework he uses when approaching a potential AI project with a client. “Each step is designed to answer a fundamental question,” he says. “It’s essentially a funnel.” 

Step 1: Do you need AI?

“There’s so much hype around AI these days that many people think they absolutely need to build a tool in their organization,” says Lui. “But actually, you’ve got to be very clear about what AI does and doesn’t do. Often, when a company tells me what they have in mind, I’ll say, ‘You don’t need AI for that. You can do it with a simple macro or RPA (robotic process automation). These are short sequences of code written to perform a single task, or a series of tasks—they’re not pure AI or machine learning.’”

Step 2: Should you build it?

Assuming the company has a machine learning application in mind, you need to decide if it is right to build it. 

“There are some things that I don’t think we should build,” says Lui. “For example, given the current state of the world, I don’t think we should build weaponized AI, which essentially is a nuclear bomb strapped with computer vision. Yet a lot of countries are building it, because there are no regulations preventing it.”

Step 3: Is it fair?

Once you’ve determined that you should build the tool, you need to move on to the fairness test. This has to do with making sure no groups are underrepresented in the data set used to build the application.

“A lot of banks are using AI to predict whether a new customer will pay back their loans before they approve them for credit cards,” says Lui. “But these banks use data sets from the past 15 years. I always tell them you have to be very cautious about doing this. The way you approved someone 15 years ago might have been fundamentally different—you might have approved more men, for example. If you use the same data set in your machine learning algorithm, it’s going to perpetuate that kind of bias and you won’t even know it.”
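The audit Lui describes—checking historical decision data before it is fed to a machine learning model—can be sketched in a few lines. The records, group labels and field names below are hypothetical, illustrative stand-ins for a bank’s loan-approval history, not real data:

```python
# A minimal sketch of a pre-training fairness check: compare historical
# approval rates across groups before using the data to train a model.
# All records and field names here are hypothetical.
from collections import defaultdict

historical = [
    {"group": "men",   "approved": True},
    {"group": "men",   "approved": True},
    {"group": "men",   "approved": True},
    {"group": "women", "approved": True},
    {"group": "women", "approved": False},
    {"group": "women", "approved": False},
]

def approval_rates(records):
    """Return the historical approval rate for each group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approvals[r["group"]] += int(r["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(historical)
print(rates)
```

A large gap between groups in this simple summary is exactly the kind of historical bias that, left unexamined, a model trained on the same records would learn and perpetuate.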

Step 4: Can it be hacked?

The final step is to determine how an AI tool could be hacked or tampered with.

“I’m always alarmed when a vendor or startup says their machine learning algorithm is 100 per cent accurate, because they haven’t understood the implications of hacking these systems,” says Lui. “But it’s so easy to do in so many cases. In fact, a rogue engineer with an axe to grind could take down an entire warehousing security AI platform.”   


Basically, Lui thinks ethics plays into every part of the AI journey. “These tools are not creating themselves,” he says. “It really will require people to understand what they’re doing.”

That’s one of the reasons Lui thinks we need to bring a multidisciplinary approach to building AI applications. “We need accountants, economists, lawyers, anthropologists—we need everyone to give their perspective on it,” he says. “We are trying to take the human miracle through technology and, because of that, we have to understand the whole baggage that comes with it.”

For Lui, accountants have a definite seat at the table. “Don’t think your voice is not important just because you’re not technical or you’re just learning about what AI is and how it impacts accounting. There are opportunities everywhere, and that’s why you should start to learn about the ethics behind AI, the basic technical components behind it, or how it’s going to impact your day-to-day job. And we all need to consider what’s going to happen to the workforce when we eliminate mundane jobs. I think everyone should have a voice on those topics.”

To hear Lui speak, register for the fourth Ethics Symposium, being held April 25-26 in Toronto, Ont. The theme this year is The Impact of Technology on Ethics, Professionalism and Judgment in Accounting.


Big data and artificial intelligence – the future of accounting and finance is the second in a series of publications by CPA Canada focusing on AI. The first publication, A CPA’s introduction to AI: From algorithms to deep learning, what you need to know, serves as a primer on AI, explaining AI “buzzwords” and discussing the evolution of data, AI and computing power.