Scott Frederick, Managing Partner
As artificial intelligence (AI) becomes a part of our everyday lives, making sure it’s transparent and unbiased has become a high priority for businesses, governments, and ordinary people. In this episode of What Matters Most, Scott Frederick, a Sands Capital managing partner, explains how software startup Credo AI is helping to keep AI models responsible. Join us as we look at how Credo AI aims to lead the way in this vital new field.
(2:10) AI Has Evolved Over Decades
(4:48) Responsible AI vs. Ethical AI
(9:29) Simplifying AI Governance with Credo AI
(14:10) Can Agents and Artificial Superintelligence be Transparent?
(18:27) Solving the Alignment Problem
(20:16) Industries Ripe for Credo AI’s Solution
(22:14) Market Timing and Other Potential Headwinds
(24:53) How Big of a Market Could AI Governance Be?
Kevin Murphy (00:01):
We all make mistakes. But what happens when artificial intelligence makes a mistake? What ripple effects could a biased or flawed AI model have on a business or society? The results could be discriminatory, contributing to flawed hiring practices or, if a self-driving car misses a stop sign, fatal. To address this risk, software startups are building tools to create, monitor, and maintain ethical AI systems that counter human mistakes and biases. And Credo AI is leading the way.
(00:31):
Welcome to What Matters Most, where we explore the businesses propelling global innovation and changing the way we live and work today and into the future. I’m Kevin Murphy, and today we explore Credo AI’s competitive advantages — from seasoned management and innovative platforms to early proof points with leading customers, and why the company has caught the attention of large tech clients.
(00:52):
So, let’s get into it. I’m joined today by Sands Capital’s managing partner Scott Frederick to better understand why we believe this business looks so promising and why it has the potential for large and sustainable growth. So, Scott, very excited to have you in the discussion today. Let’s start with a little backstory. How long have you been covering AI and tech broadly, and when and how did you discover Credo AI?
Scott Frederick (01:16):
Well, thanks, Kevin. I appreciate the opportunity. This should be a lot of fun. Quick background: I’ve been an early-stage tech investor since 1997, so I’ve been at this quite a while, twenty-eight years if you go back to when I started in venture capital. And if you go back to when I made my first AI investment, that would be about 15 years ago, with a company called Automated Insights, way back in 2010.
(01:43):
A little fun story about Automated Insights: It’s actually a deal I did when it was a one-person company, fell in love with the business, and ultimately joined full-time as their COO. So, in some ways it was AI that triggered a four-year hiatus from venture capital, and we ultimately sold that business to Vista, the private equity firm. So, AI is near and dear to my heart as I got to witness and enjoy its transformational power early.
Kevin Murphy (02:10):
I think people think of AI as the last five years. It definitely hit the scene pretty aggressively with ChatGPT and others, but there’s a history that predates that.
Scott Frederick (02:20):
Oh, absolutely. I mean, I’d argue AI is decades old, if not 50 or 60 years old. It’s really in the last five years, with generative AI and the rise of ChatGPT, that it kind of caught the zeitgeist. But back in 2010, at the company I referenced, Automated Insights, we turned structured data automatically into long-form narrative content. And that probably sounds a lot like ChatGPT, but it was a fundamentally different technology that we used. It was deterministic, not predictive, but it was still AI.
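A deterministic, template-based approach of that kind can be illustrated with a minimal Python sketch (the function, field names, and sample data below are invented for illustration, not Automated Insights' actual system): the same structured input always yields the same sentence, whereas a predictive model samples its output.

```python
# Minimal sketch of deterministic, template-based natural language
# generation: structured data in, narrative text out. Identical inputs
# always yield identical output, unlike a probabilistic LLM.

def game_recap(home: str, away: str, home_score: int, away_score: int) -> str:
    """Turn one row of structured sports data into a narrative sentence."""
    if home_score > away_score:
        winner, w_pts, loser, l_pts = home, home_score, away, away_score
    else:
        winner, w_pts, loser, l_pts = away, away_score, home, home_score
    return f"{winner} beat {loser} {w_pts}-{l_pts} on Tuesday night."

print(game_recap("The Bulls", "the Knicks", 104, 98))
# The Bulls beat the Knicks 104-98 on Tuesday night.
```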
Kevin Murphy (02:54):
So, that leads to the next question I had asked, too. You’ve had a front-row seat at the emergence of AI, and all along, I’m sure you’ve been immersed in the debate around ethical AI. Is that what led you to Credo AI? And give us an idea of what Credo AI does and how they fit into that ecosystem?
Scott Frederick (03:13):
Great question. And what I really like about that question is it’ll help me hopefully underscore some things that are a little bit different about Sands Capital and our approach to venture. We are what I would call thesis-driven investors.
(03:25):
We focus on a couple of select areas. In our case, it’s AI, cybersecurity, and healthcare IT. But then we literally try to put pen to paper at regular intervals to keep ourselves honest, focus our work, and brainstorm where we think the largest opportunities might be coming.
(03:42):
So, I spent a lot of time trying to talk to executives at large companies and trying to get them to open up on their pain points. One of the things that struck me going back four or five years, and I shouldn’t say me, it struck our entire team, was that we found some of the most forward-leaning data shops were paralyzed when it came to operationalizing their algorithmic initiatives.
(04:08):
So, it’s a lot of words there, but what that means is some wonderful large enterprises had AI initiatives or algorithmic initiatives they wanted to bring to market or deploy, and they weren’t sure how to do it. The regulatory landscape, the legal landscape — it was confusing, it was fraught with danger. And there was also the question in these large enterprises of who owns that decision?
(04:33):
Is it compliance? Is it the CFO? Is it the operating unit? And so, the deeper we dug, the more convinced we became that this was an intractable problem. And in my world, when you hear or discover an intractable problem, that creates opportunity.
Kevin Murphy (04:48):
So, let’s talk about that a little bit more before we get into specifically how Credo AI tackles the problem. Responsible AI: What exactly does that mean? I’ve read a lot about the alignment problem and things like that. How do you define the problem they’re solving?
Scott Frederick (05:04):
Another good question, and this is a little bit of a pet peeve of mine: A lot of people refer to ethical AI, and they’ll use terms like “ethical” or “responsible AI,” and they speak of the alignment problem, and they get a little intellectually lazy. And I don’t like using those labels without defining them. I really prefer the term “responsible AI” as opposed to “ethical.”
(05:25):
I think ethics can be debated, and those debates can quickly become derailed by disagreements about relative value judgments. In contrast, I’d argue it’s pretty easy to lay out and find agreement on the components of a responsible approach to AI governance. And when we at Sands Capital speak of responsible AI, we really focus on four key attributes: transparency; fairness; reliability and safety; and privacy and security.
Kevin Murphy (05:55):
When you think of standards for governance and responsible AI, are those standards keeping up with the pace of change in AI? You look at the EU regulators and regulators around the world laying the gauntlet down, saying, “This is what you should do,” but it’s really based on what’s currently happening, and things are changing so quickly. Are they able to keep up with that?
Scott Frederick (06:16):
Well, the short answer is no. And I hate to sound cynical, but I don’t think there’s any way, in a space that’s evolving as quickly as AI, that the regulators are going to be able to fully keep up. And the other thing you have is: Look at what’s happened in the United States. One of Trump’s first actions was to dismantle Biden’s AI executive order. So one, it’s hard to keep up, but then it’s also hard to have consistency, and that makes it very difficult on large enterprises because they’re multinational.
(06:47):
Another thing I like to make sure I point out, because not many people realize it: It’s not just the EU and the United States. If I’m not mistaken, 46 of the 50 U.S. states had AI legislation pending at the state level. But if you go back to those four categories, you asked for some examples. Some of them are relatively easy.
(07:09):
Transparency, I’d say, is relatively easy: To build trust in AI, it’s absolutely critical to ensure that AI systems are as transparent as possible. And this involves everything from clear documentation of the underlying AI models to what data each model was trained on and understanding the range of potential outcomes. Where I think the discussion gets really interesting is when you start to turn to things like fairness, and here I think examples are wonderful.
(07:39):
One of my favorite examples is the city of Boston, near and dear to my heart. They had an initiative that they put forth, I believe, back 15 years ago. It was a pothole project, believe it or not. And they built an app that was going to use accelerometers and GPS so that when people drove around on their daily commutes and hit a pothole, it would automatically drop a digital pin, and the city would get a real-time map and know everywhere that a pothole should be fixed.
(08:12):
And it’s one of these things that really sounds brilliant, it’s innovative, and it was initially well-executed. But what was interesting is when they looked at the data, it became readily clear that, especially 15 years ago, a disproportionate number of the people who had smartphones were in wealthy, highly affluent neighborhoods. And so the resulting map of where to dispatch people to fix roads was completely unfair, and it would’ve just reinforced wealth differences.
(08:43):
Another interesting coda to that story: The fix was quite simple. Take the app off people’s smartphones and just put it on garbage trucks, because the garbage trucks went everywhere in the city. You can have all the best intentions, but if the fundamental data is skewed, you’re going to get skewed results. And that brings me back to the need for a platform like Credo AI’s.
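The skewed-data point can be made concrete with a toy simulation (the neighborhoods and smartphone-ownership rates below are assumptions for illustration only): potholes occur evenly, but reports come only from smartphone owners, so the resulting map over-represents the affluent area.

```python
# Toy simulation of reporting bias: equal potholes everywhere, unequal
# smartphone ownership, so the "observed" pothole map is skewed.
import random

random.seed(0)
smartphone_rate = {"affluent": 0.90, "lower_income": 0.25}  # assumed rates
true_potholes = {"affluent": 500, "lower_income": 500}      # ground truth: equal

# Each pothole is reported only if a passing driver owns a smartphone.
reported = {
    hood: sum(random.random() < rate for _ in range(true_potholes[hood]))
    for hood, rate in smartphone_rate.items()
}
affluent_share = reported["affluent"] / sum(reported.values())
print(reported)
print(f"Affluent share of reports: {affluent_share:.0%}")  # well above the true 50%
```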
PROMO 1 (09:08):
You’re listening to season three of What Matters Most from Sands Capital. Subscribe to our upcoming episodes to hear more about the companies we think will grow into tomorrow’s leaders. You can also access our latest thinking at SandsCapital.com.
Kevin Murphy (09:23):
I’m glad you brought it back to Credo AI. Let’s dig into that a little bit. What’s their unique approach to solving these problems?
Scott Frederick (09:29):
Think of Credo AI as a centralized governance platform. It provides reporting and workflows on a company’s AI initiatives. So, you can also think of it as allowing an enterprise to build an AI registry. What are all your algorithmic initiatives? And then they can track and evaluate each of those AI use cases based on revenue potential, impact, and risk. And along the way, it’s going to automatically identify relevant risks as well as legislation.
(10:02):
One thing that a lot of listeners might not be aware of: If you have an algorithm that powers an HR decision, it’s required to be registered in the state of New York. It’s still a little unclear what it means to actually register an algorithm with the state, but that law is on the books. And so a lot of what Credo AI is doing is the power of the registry, the power of the reporting and the workflows, and doing all of that in a context-aware manner.
(10:32):
And then on the risk management side, another thing that Credo AI brings to the table that’s relatively unique: They have a product called Lens, an open-source framework that allows an enterprise to do qualitative evaluations of AI or machine learning models against different responsible or ethical AI principles. And Credo is designed to be plug-and-play, integrating seamlessly via APIs with the leading MLOps [machine learning operations] vendors.
(11:03):
And again, that’s an important distinction, because a lot of people confuse MLOps with AI governance. I think they’re two separate pieces of software. But combined, and I think this is the important takeaway, the governance platform and the Lens assessment framework allow a company to define its AI risk management policies and procedures. They can execute quantitative tests, review the results, approve model deployments, and then document risk management for whoever might need to see it. And that’s going to be everything from management to boards of directors to regulators.
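A purely hypothetical sketch of such an AI use-case registry, with invented fields and a naive triage score rather than Credo AI's actual data model or API, might look like:

```python
# Hypothetical AI use-case registry: track each algorithmic initiative
# with its value, risk, and applicable regulations, then triage.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    business_unit: str
    revenue_potential: int              # 1 (low) .. 5 (high)
    impact: int                         # 1 .. 5
    risk: int                           # 1 .. 5
    regulations: list = field(default_factory=list)

    def priority(self) -> float:
        """Naive triage score: expected value weighed against risk."""
        return (self.revenue_potential + self.impact) / self.risk

registry = [
    AIUseCase("resume-screening", "HR", 2, 4, 5, ["state HR-algorithm registration"]),
    AIUseCase("churn-prediction", "Sales", 4, 3, 2),
]

# Review the highest risk-adjusted-value items first.
for uc in sorted(registry, key=AIUseCase.priority, reverse=True):
    print(f"{uc.name}: priority {uc.priority():.1f}, regulations: {uc.regulations}")
```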
Kevin Murphy (11:44):
So sticking on the Credo story, take me through the history of the market or the context that they’ve been operating in. As I mentioned earlier, it’s been changing pretty dramatically. How hard is it for Credo to stay ahead of this change?
Scott Frederick (12:00):
Well, let me actually take it a step further back because I’m now realizing, I’m not sure I answered one of your earlier questions on how we found Credo. I mentioned that we were thesis-driven investors. We had this thesis that there was going to be an opportunity for AI governance as a unique category, which again, in venture it’s pretty rare to find white space with that large a TAM [total addressable market].
(12:24):
And I want to give a lot of credit to Chris Eng, one of my partners. He literally set out on an international search to find the leading AI governance software vendor. We looked at companies in Canada, we looked at companies in the UK and in Europe, multi-month process. And we were actually pretty close to giving up because we hadn’t found that magical mix of the right product vision and then team to execute.
(12:49):
And that’s when Chris found Navrina Singh and Credo AI. What I love about the story is that when we first reached out, they weren’t raising money, but because of our thesis-driven approach, and we’d already done all the work, we actually sat down and prepared a PowerPoint. I flew out to the West Coast, took Navrina to dinner, and did what I call a reverse pitch where I said, “We’ve been studying your space. We want to back somebody. We think you can win in this space.” And that allowed us to preempt their series. And it’s been super exciting since then.
(13:23):
But to your question on staying ahead of the game, it’s hard because when you have that much white space, you’re obviously going to attract competitive pressure. There are new entrants and those come in a lot of different forms. It’s going to be everything from other startups, large players trying to move laterally, and then also consultancies are trying to move into this space.
Kevin Murphy (13:45):
So getting a little more meta about it too. The big step change that’s coming, or I guess is already in process, is artificial superintelligence [ASI]. And so if you think about the nexus of communications, it initially was person to person, person to computer, computer to person, but there was always a human involved. With artificial superintelligence, the likely next scenario is computer to computer.
(14:10):
So you mentioned transparency as being a pretty important part of the overall governance around responsible AI. When we get into that ASI environment, does transparency go away? How do you regulate, mitigate, even monitor that?
Scott Frederick (14:26):
Well, you nailed it and, to me, it’s when you move to agency, and the label these days is “agentic AI,” and it’s absolutely going to happen. It’s going to, I think, create a step function in efficiency gains across large enterprises. But it also creates, as you say, all sorts of questions.
(14:46):
And again, I think it’s good to raise specific examples. I do it not to trivialize, because some of these examples get just fun and silly, but they do help underscore the problem. And one of my favorite here is there’s an AI assistant company called Lindy, and I don’t know if you’ve heard this story, but it’s kind of fun. Their CEO tells a story about when Lindy, the AI assistant, effectively rickrolled a customer.
(15:12):
And for those who aren’t familiar with Rickrolling, it’s about a 15-year-old meme, which in the meme world is a long, long time. It all started when a 4chan user, believe it or not, was trying to play a prank on his friends. Grand Theft Auto IV was coming out, and he said he had a link to it.
(15:34):
And instead that link went to Rick Astley’s “Never Gonna Give You Up” video, and that became the classic bait-and-switch Rickrolling meme. But bringing that back to Lindy: This is an AI assistant where consumers who’ve bought the software can interact with it and ask it questions, and somebody said, “Hey, can I get a video tutorial on how to use Lindy?”
(15:59):
And the CEO of the company was surprised when the AI assistant gave a link to a video because the CEO knew they hadn’t done a video yet. And the CEO was like, “Wait a minute, I better see, what did the AI assistant send?” And sure enough, the AI assistant sent a link to Rick Astley’s “Never Gonna Give You Up.”
(16:18):
And if you think about that, it perfectly underscores what a classic LLM [large language model] or probabilistic predictive system is doing. To the LLM, that seems like appropriate behavior, because it has a billion and a half examples of human beings responding to questions with random Rickroll links. And so the AI assistant was like, “Yeah, I completed the task, I did what I was supposed to do.” It was a completely normal response.
(16:45):
But obviously for Lindy software, that’s not what they want it to do. I don’t think it does away with transparency, but it does move us completely away from a deterministic to a probabilistic world. And once you move into the world of probabilities and predictions, which is what happens with ChatGPT or LLMs, it really is just different. The regulatory framework needs to be different. The risk profile is completely different.
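The deterministic-versus-probabilistic shift can be sketched in a few lines of Python (a toy model, not any real assistant's implementation): the rule-based responder always gives the same answer, while the sampling-based one occasionally returns whatever its "training distribution" made plausible.

```python
# Toy contrast: a deterministic responder vs. one that samples its reply,
# loosely mimicking how an LLM draws output from a learned distribution.
import random

def deterministic_reply(question: str) -> str:
    lookup = {"video tutorial?": "See docs/tutorial.md"}  # fixed rule
    return lookup.get(question, "I don't know.")

def probabilistic_reply(question: str, rng: random.Random) -> str:
    # Weighted sampling: mostly the helpful answer, occasionally the
    # bait-and-switch link the "training data" made seem plausible.
    candidates = ["See docs/tutorial.md", "https://youtu.be/dQw4w9WgXcQ"]
    return rng.choices(candidates, weights=[0.9, 0.1])[0]

rng = random.Random(42)
print({deterministic_reply("video tutorial?") for _ in range(5)})        # one answer
print({probabilistic_reply("video tutorial?", rng) for _ in range(20)})  # may vary
```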
Kevin Murphy (17:15):
So, sticking with the Rick Astley example, I think that’s interesting. That’s kind of a benign problem, though. The other one I was reading about is a thought experiment, I think it was Nick Bostrom’s paper clip, an alignment problem. Are you familiar with that?
Scott Frederick (17:29):
I’m not. I love it. I want to hear it.
Kevin Murphy (17:32):
It’s a thought experiment. Bostrom is a philosopher, but basically: There’s a paper clip factory, and they bring in an artificial intelligence machine that is generative and then, I guess, also superintelligent, and they give it a simple task: maximize paper clip production. Without any guardrails, and without aligning the algorithm with the ultimate goal, which is not to create paper clips at any cost but just to create paper clips, he takes it to the absurd logical conclusion that the machine eventually kills all humanity, mines every natural resource from the planet, and then has to go into space.
Scott Frederick (18:11):
I love it, and I hadn’t heard that one. Those kinds of questions are the questions we need to be asking. But that’s what I love about this field. It really is moving that quickly. And I think anybody that says they know where this all leads is lying to you and lying to themselves.
Kevin Murphy (18:27):
It’s such a horse race right now that, and I’m trying to bring it back to Credo here, I think companies will ignore the alignment problem. They’ll say, “This is technology that will give me advantages over my competition,” without really thinking about the overall societal implications. Is that a good way to think about that? And if it is, what’s Credo’s role in that?
Scott Frederick (18:49):
To me, and I’ve spoken a couple of times about the difference between deterministic and non-deterministic predictive systems, what’s going to be critical for those non-deterministic systems to succeed is to establish trust. And I think that is what Credo is. If you take it all the way down to its core, what is Credo AI trying to do? It’s trying to enable an enterprise to embed trust in all of its algorithmic activities.
(19:17):
And so, a secondary impact of that is that we, and when I say we, I mean Credo AI, and I’m on their board, are trying to get the world to understand that this is unlike a lot of governance software, which people think of as a tax on the system, a necessary cost, brakes and guardrails. I actually don’t view Credo AI and an appropriate governance platform as brakes. I view them as an accelerant. And the reason is, if you don’t have a system like this, you’re not going to be able to embed trust from the beginning. So your systems are going to go awry, and they’re not going to be able to be operationalized.
PROMO 2 (20:01):
This is season three of Sands Capital’s What Matters Most. Subscribe wherever you get your podcasts to get notified of new episodes and to join us as we go deep into the companies we believe will shape our future.
Kevin Murphy (20:16):
So putting more specifics around Credo’s business, can you describe some use cases, maybe some specific industries, that they’re currently in, and where do you think they go from their current customer base?
Scott Frederick (20:29):
I’d argue that almost every business is going to leverage AI or it’s going to lose to a business that does. So in terms of the TAM, I’d say it’s absolutely massive and that Credo can apply to just about any large enterprise. In terms of where they’re doing really well now, they definitely do better in highly regulated industries.
(20:54):
In terms of how we get from here to there? I think one of the things that I’ve been pushing for pretty hard at the board level, and that I’m very excited the company launched last year, is to complement the software platform with professional services. And it’s interesting, because a lot of Silicon Valley investors are not in favor of professional services.
(21:18):
They’ll argue, “Oh, it’s going to hurt my margin profile.” I couldn’t disagree more strongly. I think early in the evolution of a market, services can really help you meet your customers where they are. There are some very forward-thinking customers that want to go all in on a full AI governance platform, but there are going to be a lot that just need the AI registry, need to be educated on what responsible AI can and should be, and need help defining their own priorities, preferences, and procedures.
(21:52):
So, I think it was just two quarters ago, Credo AI launched their professional services and has had really pretty rapid uptake. It changes the business a little bit, but I think in the near term it’ll be a real accelerant. It’ll help us onboard those customers and ultimately prime them so that they’re ready for the full platform solution.
Kevin Murphy (22:14):
So those are the tailwinds and solutions. What are some of the headwinds to Credo’s business? What speed bumps do you see along the way?
Scott Frederick (22:23):
I’d say this about almost any venture investment: it’s market timing. We’re trying to see around the corner, and sometimes you can be too early. I’m absolutely convinced that the world is going to need an AI governance platform, and it’s going to need to be standalone and multi-stakeholder, so we’re headed in the right direction. It’s just how quickly can we get that market to mature?
(22:45):
So, I’d say risk number one is just being too early. The future can take a long time. I’m one of those always pounding the table on the rate of change, but at the same time, I have to remind my CEOs, there are still AOL customers out there, and I don’t mean that as a cheap shot on AOL, but even something like the cloud that feels … what is it now? Ten years old? The shift to the cloud, it’s still only 30 percent of the workloads. Again, the future takes time.
(23:13):
But that’s another reason to get into professional services, so we can help meet the customer where they are and pull them along that journey. I’d say other headwinds are just that, because it is nascent and inherently multi-stakeholder, it’s not always clear who in the enterprise you need to go to in order to get the deal done. But that’s what startups fight through.
Kevin Murphy (23:36):
So sticking on the idea of governance software, one of the things that pops into my mind is that a lot of software companies in similar fields, and again, this is new territory, but accounting software companies, for example, lost their business because the bigger software providers just turned what they did into a module within their own systems. So why wouldn’t a company like Oracle or Microsoft just start incorporating AI governance modules into Copilots and things like that?
Scott Frederick (24:07):
That’s always a risk. I’d say, as a venture investor, I’ll always bet on the little guy being able to out-execute. And especially in a market like this that is moving so quickly and requires so much in-depth knowledge, not only of regulations, but of best practices across multiple industries. So, one of the things that Credo AI has done very well is to bring powerful partnerships on board early.
(24:35):
Credo AI works with Microsoft now. They work with Databricks, they work with McKinsey. These are all revenue-producing partnerships. They take a lot of work and time to put in place, but for a relatively young software company, those are impressive partners.
Kevin Murphy (24:53):
Talk to us a little bit about the industry. What’s the expectation for growth?
Scott Frederick (24:58):
Intelligent software is going to be an extraordinary opportunity. In terms of putting a dollar figure on it, I know IDC recently said that well over $320 billion was spent on AI solutions in 2024. I’ve seen people estimate that AI is going to add $15 trillion to $16 trillion to the economy by 2030.
(25:21):
My dad was an economist, and he’d say that prediction violates the rule that you can make a prediction, but you shouldn’t put a date alongside it. At some point, it’ll add $15 trillion to $16 trillion, but we’re talking massive, massive numbers. But I do think the way to think about it is as bringing cognition to, or cognitizing, traditional software. Any decision that could be made by a human being in under a minute will ultimately be automated.
(25:47):
That’s just a frame of reference that I use and that’s going to get built in throughout software, and then that software is going to be made agentic, which again means that the software has agency and can actually take action and execute commands on your behalf. And that gets really exciting, but also really scary. You really want trust built in from the very beginning.
(26:10):
And I think in order to do that, you are going to need a sophisticated standalone software platform. But again, I’m pitching my book. I’ve made the bet, but I do passionately believe it, and hopefully that passion comes out.
Kevin Murphy (26:22):
Well, Scott, this has been a fascinating conversation, and I fear that when we hit stop on the record button, it’ll be stale within five minutes, because it’s such a dynamic and fast-moving industry. But you really gave us a lot to think about here and really introduced us to an important player in this space. So thanks very much for joining us on this podcast.
Scott Frederick (26:43):
Thank you, Kevin. It was a lot of fun. Anytime, and I always love the opportunity to talk about our portfolio companies, talking the book is the most fun part of the job.
Kevin Murphy (26:51):
Yeah, agreed. Credo AI is one of the many companies that excite us at Sands Capital. It fits our vision of investing in innovative companies that have the potential to improve lives at scale and that have the competitive advantages and leadership it takes to sustain growth over time. Listen to future episodes to learn about other great growth companies our research has unearthed and the insights that give us the confidence to invest in them for the long term.
DISCLOSURE (27:16):
The featured podcast portfolio companies represent a subset of Sands Capital Holdings that illustrate the types of businesses in which we typically invest. Companies are selected on a rotating basis to highlight different sectors and geographies. The views and opinions expressed herein are those of individuals and may differ from the views and opinions expressed by Sands Capital.
Views are current as of the recording date, are subject to change, and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities. This podcast may contain forward-looking statements which are subject to uncertainty and contingencies outside of Sands Capital’s control. Listeners should not place undue reliance upon these forward-looking statements.
There’s no guarantee that Sands Capital will meet its stated goals. A company’s fundamentals or earnings growth is no guarantee that its share price will increase. The specific securities identified and described do not represent all of the securities purchased, sold, or recommended for advisory clients, and there is no assurance that any securities discussed will remain in the portfolio or that securities sold have not been repurchased.
You should not assume that any investment is or will be profitable. A full list of public portfolio holdings, including their purchase dates, is available on our website at www.sandscapital.com/sinceinception.