Collision conference tackles AI governance and management
Collision is one of the biggest technology and start-up conferences in North America, so you might expect the Toronto event to be frenzied with AI evangelists. The discussion, however, turned out to be more muted. There was less hype around the promise of the technology and more a subdued acceptance that it’s here, and that the task now is finding the right approach to deploying and managing it.
Here are three key takeaways for CPAs navigating the technology landscape and considering or implementing AI. Be sure also to take advantage of all CPA Canada’s guidance on this topic here.
Getting AI out of the ‘proof of concept’ loop
According to Aidan Gomez, co-founder of Canadian AI company Cohere, while 2023 was the year of the ‘proof of concept’ for AI, this year is all about pushing applications into production. He emphasized the importance of getting companies out of the ‘proof of concept’ loop – a continuous cycle of investing in the development and testing of small, one-off AI projects without larger-scale implementation.
Other speakers on the topic explained that even where companies are not enabling their employees to use generative AI tools, employees are finding ways to use them anyway. These speakers highlighted not only the risk of getting caught in a loop of excessive investment without meaningful outcomes, but also the damage that withholding these productivity tools can do to businesses. Rogue use of AI tools by employees can lead to loss of proprietary data and to privacy and security problems, and the speakers noted that simply putting a written policy in place isn’t enough to fully manage that risk.
Safety and trust as a competitive advantage
There was a great panel on AI safety and governance that explored the role of AI regulation in trustworthy and responsible AI management. Because many countries, including Canada, are still in the early stages of AI regulation, one panellist made an important point: the companies that take charge of AI safety and embed responsible AI programs into their operations early will build a competitive advantage over those that do not. Industries that treat trust and safety as currency with their customers and stakeholders will reap the rewards in an AI ecosystem that is currently far more focused on advancing the technology than on trust and safety.
Governance and collaboration as key levers
Extending the AI safety and trust discussion, speakers emphasized the role that proper organizational governance over AI systems should play in building trustworthiness and managing the risks posed by AI applications. Some of the practical advice included:
- Governance of AI systems needs to marry company policies to the technology – including the unique risks posed by AI (such as understandability, transparency, model drift, and bias) that are not necessarily present with other enterprise technologies.
- IT departments need help to understand, control and monitor AI applications. Subject matter experts throughout the organization who understand both business and user needs must be involved in testing and in monitoring/auditing the accuracy and outputs of AI tools.
- Boards need to understand the unique risks of AI systems. Governance and management of AI systems are a leadership issue, and boards and the C-suite need to be involved in breaking down organizational silos to enable effective collaboration and governance.
Importantly for the profession, AI governance is an area where CPAs can play an important role. Driven by our understanding of business processes and expertise in strategy, risk and controls, CPAs stand to lead the way in integrating AI into business strategy. CPA Canada has our members covered with our latest guidance on the role of organizational governance of AI systems. Read more about CPA Canada’s AI research and publications here.