The broad adoption of generative artificial intelligence could be one of the greatest changes of our lifetime. The risks and unintended consequences that could accompany that widespread use have become a key focus of our stewardship efforts when engaging with our portfolio businesses.
Transcript
Ian Barnes
Welcome. My name is Ian Barnes. I’m a managing director on the client relations team and based in London. Today, I’m joined by Karin Riechenberg, our director of stewardship, to talk about the tricky subject of AI [artificial intelligence] governance. But if it were easy, we wouldn’t be here to talk about it.
At Sands Capital, we’ve been investing in leading growth businesses for over 30 years, and we strive to be ahead of change. Without a doubt, the broad adoption of AI across society is likely to be one of the biggest changes we’ve all experienced. So I think this is going to be a very important discussion. So Karin, welcome.
And maybe I could start by asking you at a very high level, why should we be talking about this? Why is it so important?
Karin Riechenberg
Thank you, Ian. Well, as you already said, we at Sands Capital invest in highly innovative, disruptive businesses. And one thing that we have seen in our 30 years of doing this work is that rapid innovation and change also create many risks and unintended consequences. And we believe that it’s important for us as active, fundamental, long-term investors to thoroughly examine all these risks and unintended consequences.
And right now, one of the greatest innovations of our time, as you’ve said, is artificial intelligence and, in particular, generative artificial intelligence. So what we have done over the past two to three years is that we have put a particular focus within our stewardship program on responsible AI governance and how that can address some of those potential risks and unintended consequences that we see coming down the road.
Ian Barnes
So I think I understand why we should be talking about this, but what are those risks that you keep on referring to?
Karin Riechenberg
Yes. So I would say the biggest risk is the fear of AI itself. With all these scandals and headlines coming out, the challenge that companies face is how quickly they can implement this technology. And here, I think the biggest issue is that if companies and investors don’t have trust in the technology, that’s going to slow down innovation, and it’s going to hinder progress.
We believe that responsible AI governance can help us build this trust, because it can also help companies with decision-making. The more trust they have in these tools, the faster they can move, the faster they can deploy them. And conversely, in the absence of effective AI governance, companies run the risk of having dramatic missteps or safety failings.
And some of these are well known. We’ve heard about misinformation, bias, IP [intellectual property] infringement, data leakage, all these things that might happen, which can lead to significant setbacks in the innovation process. And it can even lead to significant financial impacts for companies: fines, lawsuits, reputational damage, an impact on brand value. And in extreme cases, it can lead to regulatory backlash and even overregulation, which then can slow down innovation for an entire sector.
Ian Barnes
Got it. So what are we doing then to understand and control for these risks?
Karin Riechenberg
So at Sands Capital, we’ve been trying to first build capacity and knowledge to understand this topic because it is a complex topic, and it touches on so many different sectors and industries.
In 2022, we got together with other investors and founded an initiative called Investors for a Sustainable Digital Economy. This is a group of investors that is pooling resources to generate research on this broader topic of digital governance and best practices within digital governance.
Then we’ve tried to take this research and disseminate it within our firm through training—through building toolkits and guides for our investment professionals.
And then, finally, we’re trying to now translate this knowledge to our companies by engaging with them and trying to get them to implement governance—digital governance best practices.
Ian Barnes
Got it. OK, so it sounds like we’ve made a great start, but I’m sure we have a lot more to do. So finally, to the audience of asset owners or even the companies that we invest in, I mean, what should they be doing?
Karin Riechenberg
So the way we think about it is: Start by identifying your high-risk industries and your high-risk use cases for AI. So the high-risk sectors are things like, obviously, technology. But it can also be things like healthcare, financial services, insurance, hiring, and defense.
And then, for high-risk use cases: are there any use cases that have a significant impact on people’s lives or businesses? So credit ratings, surveillance, law enforcement, hiring, safety features in vehicles. All of those can be considered high-risk use cases.
And then, once you’ve identified those in your portfolio or your business, I think you can ask a series of questions. Where and how is AI being used? What is the intended use for the technology? Might the technology be used in a way that could have a negative impact? Who is it going to impact? And then, what kind of governance systems do you have in place to identify those risks ahead of time and to mitigate them? How to build those systems is where the devil gets into the details, which I won’t get into here.
But I think the goal, ultimately, is to build a system that can identify not just the known risks that we are all aware of (bias, hallucinations, and so on) but also any potential unknown risks that might be coming down the line that we’re not aware of yet, because this is all moving so quickly. And so a very strong governance system will, basically, try to capture as much as it can while also being very transparent and having strong layers of accountability.
Ian Barnes
So it seems like there’s going to be a lot more work done in this area. That was really interesting. Thank you very much.
For those who are interested in knowing more, we do routinely publish research into this area on our website, plus, of course, more general research into the secular growth trends that we see driving our investment decision-making and wealth generation for our clients.
Thank you for watching.
Disclosures:
The views expressed are the opinion of Sands Capital and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities. The views expressed were current as of the date indicated and are subject to change. This material may contain forward-looking statements, which are subject to uncertainty and contingencies outside of Sands Capital’s control. Readers should not place undue reliance upon these forward-looking statements. There is no guarantee that Sands Capital will meet its stated goals. Past performance is not indicative of future results. A company’s fundamentals or earnings growth is no guarantee that its share price will increase. Forward earnings projections are not predictors of stock price or investment performance, and do not represent past performance. References to companies are provided for illustrative purposes only. The portfolio companies identified do not represent all of the securities purchased or recommended for advisory clients. There is no assurance that any securities discussed will remain in the portfolio or that securities sold have not been repurchased. You should not assume that any investment is or will be profitable. GIPS Reports found here.
Notice for non-US investors.