In her remarks to the International Corporate Governance Network (ICGN) Conference, Sands Capital Director of Stewardship Karin Riechenberg discussed the challenges of creating an AI governance framework.
The rise of artificial intelligence (AI) has created significant opportunities for businesses today. However, alongside the expected productivity gains and cost savings stemming from AI are challenges that most companies have not encountered before. To help mitigate these risks, we believe management teams and boards need to understand and have a plan to address the issues that can arise from using AI.
As a panelist at the International Corporate Governance Network (ICGN) 2024 Conference, Sands Capital Director of Stewardship Karin Riechenberg outlined key factors we encourage the management teams of our portfolio businesses to consider when creating an AI governance framework.
Read excerpts from her comments below.
Guiding Ethical Digital Governance
As active, long-term investors in leading innovative businesses, we are committed to helping our portfolio businesses navigate the unexpected challenges that technological change poses to individuals, organizations, and the world at large.
In recent years, our stewardship program has placed greater emphasis on thematic areas we believe will present the most significant challenges and opportunities for our portfolio businesses over the next decade, including the area of digital ethics. Indeed, as the use of algorithms, big data, and AI becomes more common, we have increased our efforts to help our portfolio businesses navigate the challenges while embracing the opportunities presented by new technologies.
Through our engagements with portfolio businesses, we seek to understand how each company is developing and implementing AI by assessing key factors, including risk management, policy development, employee training, oversight and accountability, data governance, adherence to industry frameworks and standards, compliance and fairness, and the safety and reliability of AI applications.
Building a Framework
Our process for guiding our businesses toward a digital governance framework begins at the portfolio level. We first set engagement priorities by, among other things, identifying our holdings that operate in what we view to be high-risk industries (such as healthcare, financial services, and defense) or have high-risk use cases (such as medical devices, surveillance mechanisms, and safety features in vehicles). Once we’ve identified these companies, we seek to understand how they use AI within their businesses and where it might present risks. Keeping use cases narrow can help mitigate various risks, including data leakage and erroneous output.
We believe it is important for management teams and boards to consider all potential risks that can arise from using AI within their business. These risks could be direct or indirect, internal or external, intentional or unintentional, and often a combination of these.
While there is value in thinking through all possible risks, we also see immense value in management teams acknowledging that they cannot anticipate every way AI could be harmful. We believe a humble approach to AI governance can lead to the development of an agile governance framework—one that is ready to adapt as the business and the technology landscape change.
Creating a Comprehensive System
Once companies identify how AI is used in the business and the potential risks of implementing it, they can use their findings to create a governance system. We believe the system should cover transparency, policies and processes, oversight and accountability, and education and training. It’s important to remember that the level of governance should be proportional to the level of risk. For example, an engine recommending medications warrants far more oversight than one recommending your next show to watch.
Creating AI policies and procedures is key to practicing good governance, in our view. These policies should outline how the company’s implementation of AI is responsible and how the company will seek to maintain these standards into the future. It can be helpful to align these policies with industry-accepted technology frameworks, such as those published by the National Institute of Standards and Technology (NIST) or the International Organization for Standardization (ISO).
We also advise management teams to establish procedures for detecting, mitigating, and monitoring AI issues. These procedures are critical for both the development and deployment stages of AI. Similar to best practices in cybersecurity, we believe it is important to establish a crisis response and management plan. In the event of an issue, a procedure should already be in place that clearly specifies who is responsible for overseeing the remediation, the steps required to resolve the issue, who will handle communication about the issue, and how the issue will be communicated to key stakeholders.
It is our belief that implementing AI responsibly involves not only ensuring it is used ethically and effectively but also equipping staff members with the knowledge and tools to determine when it is appropriate to rely on AI and when human judgment is necessary. For instance, specifying a gender or race in an AI prompt can unintentionally introduce bias that skews the output. Staff members should understand these nuances so they can make informed decisions and avoid unintended consequences.
Finally, when a business is transparent about its AI usage and policies, it builds trust and helps create buy-in—both internally and externally. This openness with staff, investors, and customers reduces uncertainty and fosters a culture of accountability and candor that encourages constructive feedback, which in turn can lead to better decision-making and faster innovation. This ability to adapt and evolve not only can benefit companies internally but can also position them to build a competitive edge—the kind of advantage we, as investors, value.
Disclosures:
The views expressed are the opinion of Sands Capital and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities. The views expressed were current as of the date indicated and are subject to change.
This material may contain forward-looking statements, which are subject to uncertainty and contingencies outside of Sands Capital’s control. Readers should not place undue reliance upon these forward-looking statements. There is no guarantee that Sands Capital will meet its stated goals. Past performance is not indicative of future results. You should not assume that any investment is or will be profitable.
Sands Capital regularly engages with the management teams and, if appropriate, board members of portfolio businesses to better understand each business’s long-term strategic vision and management of risks and opportunities, including those pertaining to environmental, social, and governance (ESG) matters. More information is available in the Sands Capital Engagement Policy.