
Is your boss using AI to spy on you?

Artificial intelligence has the power to revolutionize businesses—so long as it’s not too busy snooping on employees


In a world where even logging into your computer can require facial recognition, AI snooping in the workplace seems inevitable (Illustration by Doug Chayka)

An oil company in Calgary recently gathered its employees to roll out a new policy. After the meeting ended, a manager called aside one of the workers and told her, “I see you don’t buy in to our new policy. Are you going to be a team player?” The employee was shocked. She hadn’t said a word during the presentation, so how could they possibly know that? Her superiors were monitoring the staff at the meeting for non-compliance cues, the manager explained, and had inferred her hostility to the policy based on an artificial intelligence program’s interpretation of her body language and facial expression. The woman, who didn’t want to be identified for fear of retribution from her employer, relayed the story to Robin Winsor, a tech futurist who heads Cybera, a non-profit IT-advancement agency in Alberta. “It creeped her out terribly,” says Winsor.

It’s virtually impossible to gauge how many Canadian companies are using artificial intelligence to spy on their employees. But the use of AI in the workplace is undoubtedly on the rise. Some of these applications are convenient (a voice-recognition program that types what you speak) or helpful (an industrial machine that shuts down if it sees its operator is not wearing a hard hat). But as companies increasingly turn to AI for potentially invasive purposes, they may face an ethical dilemma: is it wrong to spy on employees, even if it boosts morale and productivity?

For every innocuous task performed by AI, there is another, more morally ambiguous application. Take Florida-based software company Veriato, whose cyber attack-prevention program records the computer activity of all employees, “creating a record that can be used as evidence in civil and criminal litigation.” Or Teramind, which tracks employee productivity, including how much work time is spent on social media, project work and apps. Others, like Montreal’s Officevibe or France’s TeamMood, monitor workplace morale with regular surveys, some of them anonymous, that ask employees to record their emotional state and share their deepest concerns about workplace culture.

For employers, the appeal is clear. These programs allow managers to track who’s performing, who’s slacking and maybe even the reasons why. They can also protect a company, alerting bosses if an employee is doing something to put the business at risk, such as exposing proprietary information, whether out of malice or negligence. “Nothing is more important to a business than its records,” says Nancy Flynn, an electronic policy and compliance expert, and founder of the ePolicy Institute in Columbus, Ohio. She says companies have a duty to their clients—and employees—to protect their data. One information leak can potentially destroy a company and render its employees jobless.

In the U.S., 26 states have laws prohibiting employers from demanding workers’ social media passwords

Flynn argues AI monitoring is effective and ethically sound, so long as management is completely transparent with staff and monitors only relevant corporate material, not private information. In the U.S., for example, 26 states have enacted privacy laws prohibiting employers from demanding employees’ personal social media passwords. (Lawyers say labour laws prohibit this in Canada.) “Really, employers should only monitor as allowed by law and for legitimate business reasons,” she says. 

Employees agree that transparency is key. In a global survey conducted by American HR software company Kronos, roughly 60 per cent of respondents said their organizations had yet to discuss AI’s potential impact on their workplaces, and that they’d feel more comfortable if their managers communicated what effect it might have. After all, nearly two-thirds of employees surveyed would welcome AI that automates time-consuming work or balances their workload. 

Winsor says AI systems should always benefit the business, as well as employees’ working conditions. “You can’t better the company’s bottom line by crushing the soul and creativity of your employees,” he says. “If you do, you’re no better than a Dickensian workhouse overlord.”

But keeping tabs on AI is not always as easy as it sounds, says Chris MacDonald, an associate professor at Ryerson University’s Ted Rogers School of Business who specializes in business ethics. AI in its most advanced form is constantly learning and changing, he says. “So, the net result is often that nobody actually knows what’s going on inside the black box. There’s literally no human programmer responsible.” That means it’s not always clear what kinds of rules and principles AI abides by. Does a record-keeping application realize that storing messages from a corporate email account might be okay, but that personal correspondence on social media is out of bounds?

AI can monitor body language and facial expressions to gauge whether an employee is buying into a new company policy

The AI Now Institute, a New York University research centre that studies the social implications of artificial intelligence, is concerned ethical considerations aren’t keeping pace with the rapid development of AI. Its 2017 report urges the AI industry to implement ethical codes, strong oversight and accountability mechanisms—things that don’t currently exist on a meaningful scale. In the U.S., that lack of oversight has led to a flurry of lawsuits in which employees have fought successfully against companies that demanded too much personal social media information for monitoring purposes.

AI in the workplace hasn’t caught on as quickly in Canada, says Jodie Wallis, Accenture’s head of AI in Canada. The country places last in a report ranking 10 countries’ successful application of AI technologies. That’s not necessarily a bad thing: it reflects the fact that nearly three in four Canadian firms have AI ethics committees, the most of any country the report surveyed. “In a lot of countries, the organizations are jumping to deploy without being thoughtful about how [they’re] going to deal with ethics,” Wallis told the Globe and Mail. “Canadian organizations tend to do the opposite: ‘Let’s think about all the ethics, and then we’ll deploy.’”

In a world where even logging into your computer can require facial recognition, AI snooping in the workplace seems inevitable. Smart companies will be ready with ethics committees and transparent rules, ensuring that their workplace morale program doesn’t turn into a game of I Spy.