During Donald Trump’s presidency, the pervasive problems of mis- and disinformation were on full display (Getty)
With each passing minute, we’re increasingly bombarded by information faster than we can process it. As we struggle to keep up with our social media feeds and app notifications, a raft of growing concerns about the impact of that information deluge and the intentions of those behind it has emerged, particularly when it comes to the media we consume.
In 2016, the term “fake news” went from being nearly non-existent in the public lexicon to becoming a buzzword after an influx of Russian propaganda allegedly influenced the U.S. presidential election. In 2017, Collins Dictionary even named “fake news” its word of the year. Fake news has continued to pose a concern ever since. This past spring, the Harvard Kennedy School of Government published the results of a massive survey that asked more than 150,000 respondents in 142 countries about their concerns surrounding fake news online. The survey revealed that nearly 60 per cent of respondents were worried about misinformation, with that percentage rising for young and low-income groups.
But fake news is just part of the information challenge society is facing today, says Jeffrey Dvorkin, senior fellow at Massey College and former director of journalism at the University of Toronto’s Scarborough Campus. “We’re all overwhelmed by the flood of information and the explosion of news sources beyond the traditional mainstream media gatekeepers, so there’s a universal confusion and anxiety about what can be trusted and what can’t.”
“That’s where the professional accountant comes in,” says Gord Beal, FCPA, vice-president, research, guidance and support for CPA Canada. “The world is looking for ethical leadership to help all of us navigate the messy and unreliable world of information. Although we are not the only answer, CPAs’ expertise in providing trust in financial information can be applied to help provide trust in a much broader spectrum of information.”
In his book Trusting the News in a Digital Age: Toward a “New” News Literacy, published in 2021, Dvorkin also notes that fake news can come in two distinct forms: mis- and disinformation. “It comes down to intent,” he says. “When Uncle Fred sends you something interesting he found on the internet that turns out to be untrue and from an unreliable source, that’s misinformation.” Disinformation, on the other hand, is the deliberate misleading of the public with information that is known to be untrue, for a variety of purposes: discrediting enemies, bilking consumers, or simply spreading fear and moral panic. “Disinformation is where the real danger to society lurks,” adds Dvorkin.
“Freedom convoy” protestors march in Ottawa on July 1 (Getty)
DISINFORMATION THROUGH THE AGES
Of course, disinformation is far from a novel phenomenon. In ancient Rome more than 2,000 years ago, shortly after Julius Caesar was murdered, his adopted son and great-nephew, Octavian, swayed public opinion against his rival Mark Antony by spreading disinformation about Antony’s drunkenness and disdain for traditional Roman values. Confined by the technological limitations of his time, Octavian spread his propaganda through recited poetry and slogans stamped on coins. He ultimately defeated Antony in a protracted civil war to become Rome’s first emperor in 27 BC.
Since then, methods for sharing information—both accurate and erroneous—have grown in sophistication and speed. “Even before the internet, there’s always been a certain inevitability to how technology influences culture,” says Dvorkin. “We can look back to the invention of the television, radio and telegraph as past examples of how the increasingly instantaneous exchange of information has always resulted in a shock to the system.”
The first seismic shock arrived in 15th century Germany, after Johannes Gutenberg sparked the Printing Revolution by inventing the first Western movable-type printing press. Suddenly, a bible that took several years to painstakingly copy by hand could be printed within weeks. The Printing Revolution dramatically improved literacy rates and turbocharged the dissemination of ideas that eventually led to the Enlightenment and Scientific Revolution. On the downside, the printing press’s democratization of information and speed of delivery meant dangerous lies could spread just as fast. Sometimes those lies were relatively benign, such as the infamous Great Moon Hoax of 1835, which had countless people convinced the moon was inhabited by bat-winged humanoids. But, more often than not, such lies were used for destructive ends. One of the most dangerous examples took place in the 1920s and 1930s, when the Nazis fabricated and distributed propaganda to stir antisemitic fervour, which resulted in the deaths of millions of Jewish people during the Holocaust.
INFORMATION IN A DIGITAL WORLD
With billions of people now online, the benefits and harms of our instant access to information have drastically accelerated in scope and impact.
During Donald Trump’s presidency, the pervasive problems of mis- and disinformation could be witnessed in the growth of climate-change denial and the flat-Earth and QAnon movements, culminating in the rise of anti-vax hysteria during the COVID-19 pandemic. In the past two years, numerous articles and social media posts have spread anti-vax propaganda and COVID-19 conspiracies, including the alleged suppression of purported COVID miracle drug ivermectin; allegations that COVID vaccines were created by Bill Gates to implant microchips in every single human being; and even rumours that COVID-19 doesn’t actually exist.
These forces of mis- and disinformation increase the polarization of our society as they fuel our biases within our respective echo chambers. Case in point: depending on which source you rely on for your news, you may view the recent Canadian freedom convoy protest in Ottawa as an attempted coup or the last stand for freedom against authoritarian overlords.
IN THE GOVERNMENT, WE DON’T TRUST
Beyond news media, the explosion of mis- and disinformation in the past decade has come hand-in-hand with a comprehensive erosion of public trust in our governments and Big Tech. In 2013, whistleblower Edward Snowden controversially leaked classified information that suggested what many already suspected: our own governments may have been working together to secretly spy on us on a mass scale. Five years later, Facebook was caught in the crosshairs of public outrage over its involvement with Cambridge Analytica and that firm’s alleged influence on the Brexit referendum and the 2016 U.S. presidential election. The following year, Google sister company Sidewalk Labs drew intense scrutiny over its proposed “smart city” development on the Toronto waterfront, with local politicians and activists alike expressing privacy concerns about what exactly the tech giant planned to do with the data its smart city collected. In 2020, Sidewalk Labs withdrew its proposal, citing COVID-related economic uncertainty, although opponents suspected the reversal had more to do with governmental pushback over those privacy concerns. Even the now-omnipresent TikTok, which essentially started as an entertainment app, has been alleged by some to be Chinese spyware.
“We can’t uninvent the internet or erase the challenges it’s created,” says Dvorkin. The cat is out of the bag, so the question becomes: can we figure out solutions to these problems?
On February 2, 2021, as part of its Foresight: Reimagining the Profession initiative, CPA Canada invited more than 100 leaders from the business world and accounting profession, including representatives of the International Federation of Accountants (IFAC), the International Ethics Standards Board for Accountants (IESBA) and the Institute of Chartered Accountants of Scotland (ICAS), to a global virtual roundtable to discuss several pressing ethical challenges facing the world in this era of complex and rapid digital change. A key talking point during the roundtable was the growing concern around mis- and disinformation and the role of bias in exacerbating their detrimental effects.
“As technology becomes more transformational and disruptive on a socio-political and economic scale worldwide, CPAs have to make sure they’re proponents of reliable and trustworthy information in their decision-making,” says Laura Friedrich, FCPA, an IESBA technical adviser who was a speaker at the ethics roundtable. “The main theme that came out of the roundtable was the need to figure out the best methods to utilize the information that comes from big data appropriately and safely.”
THE BIAS IN THE ROOM
Under the direction of CPA Canada’s Gord Beal, the B.C.-based accounting team of Laura Friedrich and her husband, Brian, used insights gleaned from the roundtable to write a report entitled Identifying and Mitigating Bias and Mis- and Disinformation, which was released this past February and is the third report in a four-part series. For their report, the Friedrichs examined how CPAs can best address key ethical issues arising from artificial-intelligence-governed automated systems, social media ubiquity, and public distrust in governments and the media.
“On an individual level, we found the biggest challenge in this area is the acceptance that we all come to the table with biases based on our backgrounds and experiences,” says Brian Friedrich, FCPA, IESBA board member and chair of IESBA’s Technology Working Group. “Rather than viewing bias as wrong or improper, we need to shift the mentality to viewing bias as normal so we can then recognize and remove the harms our biases can create in our decision-making.”
These biases extend to the automated systems the business world relies on to parse through Big Data. While such systems utilize AI and machine-learning processes that may appear to be objective at first glance, they’re built on the operational assumptions and objectives of their designers. Laura Friedrich says CPAs should always ask fundamental questions about those systems and the data they collect and analyze:
Where is that data coming from? How is it being collected? How is this data being used? What privacy standards are in place?
In addition to encouraging the implementation of bias-identifying standards in the business world, the Friedrichs also support governments taking a more active role in creating legislation to address the technology-accelerated issues of mis- and disinformation. However, governments need to tread lightly as they do so. In the current climate of public mistrust, governments can be perceived to be influencing their citizens for nefarious purposes or secondary political objectives, even when those actions are taken in the name of public safety, as we have seen during the pandemic.
Dvorkin agrees, although he believes there’s a legitimate concern that government regulation of information on the internet may be a slippery slope towards undemocratic suppression of freedom of speech. “It’s going to be very difficult for governments to come up with plans that will have the credibility of being a benign oversight,” he says.
Facebook CEO Mark Zuckerberg testifies at a 2018 Senate hearing in which he warned of an “arms race” against Russian disinformation (Getty)
POLICING THE INTERNET
The other problem is simply one of logistics: there aren’t enough people to effectively monitor the internet, whether you’re a government or a social media giant. Which brings us back to ethically designed and transparently supervised automated systems. “What’s likely going to happen is that government oversight will depend on algorithms that will contain trigger mechanisms set off by certain words or phrases,” says Dvorkin, who points to recently enacted and soon-to-come legislation that uses algorithms for similar purposes in Europe and Australia. For example, France passed a law in 2018 to fight misinformation during elections and Australia is introducing new regulatory powers to combat mis- and disinformation on social media platforms later this year. Dvorkin adds, “A similar sort of technical oversight will eventually be imposed in North America. The Canadian government is currently proposing the regulation of digital platforms with Bill C-11 (commonly known as the Online Streaming Act), but we’re still in the thick of it right now, so we’re just going to have to feel our way in the dark a little bit.”
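The trigger-mechanism approach Dvorkin describes can be pictured with a toy example. The sketch below is purely illustrative: the phrase list, the function name `flag_for_review` and the sample posts are invented for this article, and real moderation systems rely on far more sophisticated machine-learned classifiers rather than simple phrase matching.

```python
# Illustrative sketch only: flag posts containing known trigger phrases.
# Real oversight systems use context-aware models, not bare keyword lists.

TRIGGER_PHRASES = {"miracle cure", "vaccine microchip", "election rigged"}

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any trigger phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

posts = [
    "New study on vaccine safety published today.",
    "They don't want you to know about this miracle cure!",
]
# Keep only the posts an algorithm would route to human reviewers
flagged = [p for p in posts if flag_for_review(p)]
```

Even this tiny sketch hints at why such oversight is hard in practice: simple phrase matching misses context, sarcasm and the constantly evolving vocabulary of bad actors, which is why human judgment remains essential alongside automation.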
The Friedrichs are closely following these developments. Ultimately though, they say the responsibility to mitigate bias and mis- and disinformation begins and ends with individuals. “CPAs need to have an inquiring mind and engage their professional skepticism,” says Brian Friedrich, adding that CPAs should go back to the original source whenever possible. “Before utilizing and distributing information, CPAs should proactively consider other points of view and challenge their own beliefs so they can make the most well-informed decision possible.”
Laura Friedrich also recommends a more flexible and open-ended approach tailored to the dynamic needs of today’s business world, especially in vital but relatively nascent areas like sustainability. “The profession has branded itself in a way that asserts CPAs always provide the solutions, but we need to accept that we might not always have all the answers in these complex times,” she says. “Instead, we need to set expectations by establishing ourselves as the professionals who can come up with a solid and responsibly informed plan of action to find a way forward through uncertainty in a meaningful way.”