Chantal Bernier, a speaker at this year’s The ONE National Conference in Halifax, offers CPAs a step-by-step “best practice” approach when implementing AI. (Photo by Dentons Canada)

When choosing AI, think of the client’s privacy first, says expert Chantal Bernier

The ONE National Conference speaker offers accountants four steps for implementing artificial intelligence in keeping with new privacy regulations

With the rise of artificial intelligence (AI) as a tool for business, it’s only a matter of time before the latest in technology becomes a part of how you serve your clients.

Couple that with new and shifting privacy regulations being implemented across the globe (think GDPR) and high-profile questions about how to effectively and safely extract and protect personal data (think Facebook and Cambridge Analytica), and it is understandable that uncertainty and reservation surround the technology.

But all is not lost.

“We have a brand-new situation where we have highly sensitive information that is created automatically without the involvement of the individual. That is a significant privacy development, so we need to consider the principles of privacy law, which can’t change,” says Chantal Bernier, who leads the Canadian Privacy and Cybersecurity practice for Dentons Canada LLP. “It [privacy] is a fundamental right, that stays throughout time, but how do they apply it in this context? That is where the challenge is, integrating in AI the privacy laws.”

Bernier, a speaker at this year’s The ONE National Conference in Halifax, offers CPAs a step-by-step “best practice” approach when implementing AI to improve workflow, better serve your clients, and differentiate yourself from—or keep up with—the competition.

Step 1: Pick the right technology

When choosing AI software, the accountant must ensure that the data being collected is necessary to provide the services offered, and assess exactly how the data will be used through a privacy impact assessment.

Bernier uses the example of John Hancock, a U.S. subsidiary of Canadian insurance company Manulife, which now offers incentives—including discounts and gift cards—to customers who meet exercise targets tracked through its Vitality program on wearable devices such as a Fitbit or Apple Watch. Privacy concerns include insurers eventually using the data to select the most profitable customers or hiking insurance rates for those who opt out of participating in the program.

“When looking at the possibility of AI, you must ensure that the technology used is not excessive in collecting information in terms of how it would be used,” warns Bernier.

Step 2: Ensure the host is secure

Once the software is chosen, the company selected to host, or store, that data should be scrutinized and adequately assessed for security and reliability. “You need to choose a company that has a true record of security,” says Bernier.

Step 3: Encourage staff buy-in

All staff impacted by the data collection must be trained on the use (and potential misuse) of the information collected, while appropriate restrictions, such as limiting access privileges to certain employees, are put in place. Moving forward, that access should be consistently monitored and reported on to avoid, or identify, any violations.

“Once it [AI technology] has been bought and brought into the organization, [the third step] is to ensure that there is an internal policy for that accounting office that is up to the privacy obligations,” she says.

Step 4: Empower your clients

Finally, client buy-in is a must from the get-go. This is where full transparency comes into play. Bernier explains that it’s not good enough to simply hand over a document with fine print for clients to read and sign on the dotted line. “Considering the complexity of AI, it should be a conversation, not just a ‘sign here,’” she says. “‘We will collect XYZ and we use it in ABC way.’”

Furthermore, if any of the data collected is used beyond servicing the client’s needs, such as to identify client trends, additional client consent is required for that, adds Bernier.

The bottom line, according to Bernier, is that the use of AI comes down to three client considerations: full transparency; empowering the client to agree or disagree; and ensuring a fair return for that personal information.

“That is the right to privacy,” she says. “If I choose to tell my entire life to the entire world, I am choosing to do so. If someone posts my whole life to the entire world, then my privacy has been violated because I was disempowered.”

Be in the know with CPA Canada

Chantal Bernier’s session, Using AI for Predictive Analytics in A Privacy Aware World, at The ONE National Conference on Oct. 1, gives CPAs a look into the state of AI and data privacy from a global perspective. To stay up to date on hot topics impacting the industry today, browse through CPA Canada’s upcoming events and conferences.