Navrina Singh and Credo AI are helping organizations build AI responsibly by providing an AI governance software platform to ensure technology is fair, compliant, safe, secure, auditable, and human-centered.
Artificial intelligence (AI) systems and machine learning (ML) can perform tasks at a volume and speed that far exceed human capabilities. Still, these technologies lack a critical characteristic: alignment with human values. Navrina Singh is determined to address that gap. She established Credo AI to provide organizations that procure, build, and deploy AI and ML with a comprehensive governance platform to bring oversight and accountability to these systems.
“The fourth industrial revolution has been powered by technologies like AI,” says Navrina. “Businesses are looking for innovations that will deliver efficiencies and optimize operations, while also adding to their top-line revenue and profitability. Increasingly, they’re also focused on their green line, which is the attainment of environmental, social, and governance goals.”
While AI has the power to revolutionize business, Navrina emphasizes that it is crucial to keep human values at its core. “Governance plays a vital role in AI as these technologies leverage vast amounts of data and make decisions through algorithms,” she says. “Without a human-centric approach, the risks are enormous, and the outcomes can be devastating. It’s imperative that we build and use these technologies responsibly. Regulations are coming. Companies who invest in comprehensive AI governance programs now will reap the benefits later.”
Addressing AI’s Shortcomings
If these technologies operate on their own, without the proper governance, unintended consequences can arise in a variety of ways. Some examples are listed below from the sectors that Credo AI currently serves, which include financial services, hiring and talent management, insurance, technology, and government agencies.
- Bias and discrimination: AI systems can perpetuate and amplify existing societal biases in ways that create discriminatory outcomes. AI-based algorithms are being used extensively to predict who is likely to default on a loan, who should be hired, where sources of fraud might be, and what price individual customers are willing to pay for a product or a service. An AI recruiting tool trained on historical data may be biased against women. Facial recognition software has also been found to be worse at detecting darker skin tones. Drawing on these inbuilt biases, algorithms can not only perpetuate inequalities but also raise them to unprecedented levels. In recent years, there has been a wave of emerging regulations like New York City’s Local Law 144, which is designed to manage bias in hiring systems, and bills like Colorado Senate Bill 21-169, which aims to prevent unfair discrimination on the basis of race, color, sex, and other protected attributes.
- Unreliable and nontransparent performance: AI systems can be opaque, and that makes it difficult to understand how decisions are being made and who is responsible for errors. If, for example, a banking customer is rejected based on an AI prediction about the customer’s creditworthiness, companies run the risk of not being able to explain why. Similarly, AI systems may be used to underwrite insurance policies, assess risks, and make claim decisions. If these decisions are opaque or erroneous, it could be difficult for consumers to understand how their coverage is being determined. Lack of transparency can erode consumer trust, especially in institutions that provide critical functions like financial services, health care, insurance, and education, not to mention employment and hiring decisions.
- Plagiarism, copyright infringement, and disinformation: Generative AI technologies like ChatGPT1 blur the lines between human-created and machine-generated content, making the two difficult to distinguish. Additionally, ChatGPT’s responses may contain a significant amount of irrelevant or inaccurate information and may draw on any potential source for their content. Some of that content could be copyright protected or at least developed by humans who deserve credit or compensation. Generative AI can also make us vulnerable to potential dangers such as fraud, scams, and disinformation.
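The bias risks described above are often quantified with simple selection-rate comparisons. As a rough, hypothetical sketch (not Credo AI’s actual methodology), an impact-ratio check of the kind that bias audits under rules like New York City’s Local Law 144 rely on might look like this:

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate (share of positive outcomes) per group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the candidate advanced in the hiring process.
    """
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    Ratios below 0.8 are commonly flagged under the "four-fifths rule"
    as potential adverse impact warranting review.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Toy data: group A selected 6 of 10 candidates, group B only 3 of 10.
data = ([("A", True)] * 6 + [("A", False)] * 4 +
        [("B", True)] * 3 + [("B", False)] * 7)
print(impact_ratios(data))  # group B's ratio of 0.5 would be flagged
```

The group labels, data shape, and 0.8 threshold here are illustrative assumptions; real audits involve intersectional categories, statistical significance testing, and careful handling of small sample sizes.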
Incorporating Governance at Every Stage of AI
Credo AI provides tools and measures to help organizations avoid these potential pitfalls. “The only way to achieve human-centered AI is through contextual and comprehensive governance,” says Navrina. “The governance also needs to be ongoing so that oversight and accountability to humans occur at every stage of AI’s uses.”
As Navrina explains, Credo AI’s software platform seeks to manage, monitor, and mitigate risk in the entire AI lifecycle. It does so by interrogating a company’s data, models, and processes throughout the AI lifecycle, ensuring they align with the company’s objectives, standards, and best business practices, as well as with current, and potential future, regulations. The platform then generates governance assets, such as risk dashboards, model cards, disclosure reports, and transparency reports that identify potential problems and offer solutions to help manage and mitigate risks. These assets can be shared with internal and external stakeholders, such as auditors, regulators, consumers, and business partners. Standardizing the governance process in a transparent way helps promote trust among all stakeholders.
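To make the idea of governance assets concrete, here is a minimal sketch of what assembling a model-card-style disclosure as structured data could look like. The fields, names, and values are hypothetical illustrations, not Credo AI’s actual schema or output:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """A minimal model-card-style disclosure.

    Fields are illustrative assumptions, not any vendor's real schema.
    """
    model_name: str
    intended_use: str
    training_data: str
    fairness_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

# Hypothetical example: a disclosure for a resume-screening model.
card = ModelCard(
    model_name="resume-screener-v2",
    intended_use="Rank applicants for recruiter review; "
                 "not for automated rejection.",
    training_data="Internal applications, 2019-2023, PII removed.",
    fairness_metrics={"impact_ratio_sex": 0.91, "impact_ratio_race": 0.87},
    known_limitations=["Lower accuracy on non-US resume formats"],
)

# Serialize the card so it can be shared with auditors and regulators.
report = json.dumps(asdict(card), indent=2)
print(report)
```

Publishing such artifacts in a machine-readable form is one way the same disclosure can serve internal risk teams, external auditors, and regulators alike.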
When Sands Capital Ventures was examining firms that seek to provide governance for AI, Credo AI stood out, says Scott Frederick, a managing partner at the firm. “Too many other startups focus on post-production model monitoring to catch performance issues and errors,” he says. Credo AI has distinguished itself from competitors because, Scott says, “It applies a more holistic and multi-stakeholder approach that takes social considerations into account as well as all the technological issues IT departments consider paramount.” He adds, “The approach Credo AI takes can help ensure that AI models are designed responsibly from the start.”
Involving All Key Stakeholders in the Process
As with past technological revolutions, Navrina notes, technology experts like engineers and developers are playing a key role in determining how AI can help meet the needs of businesses and their customers. “The problem is that these technical stakeholders are not always the ones with a deep understanding of all the financial, brand and regulatory risks that AI presents,” she says. Credo AI bridges that gap by enabling all key stakeholders to be brought into the governance process for AI. “Oversight functions, governance and risk policies, standards and regulations are all critical and must be built with input from multiple stakeholders. Collectively, these governance practices will ensure that these powerful AI capabilities we are launching into the world create an infrastructure we can trust.”
There is no single test or metric that can be used to gauge if AI is built responsibly, emphasizes Scott. “Data scientists can’t validate models in isolation,” he says. “Responsible AI requires engagement from people across multiple functions, including product owners, data scientists, ethical AI specialists, and the members of risk management and compliance teams.”
Scott explains, “Data scientists can use Credo AI to incorporate human centered considerations as they analyze data and model outputs and then adjust the design of their AI models accordingly. Risk and compliance teams can be included in the process, as a second line of defense, to ensure the models adhere to all relevant company guidelines, industry regulations, standards and the rapidly growing framework of state, federal and international laws.” Product owners and firm executives will find support for their key roles, as well. “They can use Credo AI to gain industry and use-case specific insights that will inform and help ensure a responsible and ethical design of their AI models,” Scott says.
A Lifelong Passion to Serve the Community
Navrina has always been interested in doing work that serves broad societal goals. The inspiration for this came from her parents, she says. Neither of them had technical backgrounds. Her father served in the Indian military, and her mother was originally a teacher but later became a fashion designer. Both had a keen interest in serving their community, and they fostered the same commitment in Navrina. Her social conscience also combined with another key inclination – a curiosity to understand how things work. That interest took a technical direction when she obtained her first computer. She spent hours online researching everything from how to disassemble motherboards to how to code and build robotics applications.
At the time, Navrina says, her home country of India didn’t regularly foster young girls’ interest in science, technology, engineering, or math careers. Still, Navrina’s talents with technology eventually led her to earn a bachelor’s degree in engineering from the College of Engineering, Pune. She then came to the United States and earned a master’s in electrical and computer engineering from the University of Wisconsin-Madison and later an MBA from the Marshall School of Business at the University of Southern California.
Navrina spent 18 years in engineering, product, and business leadership roles at Qualcomm and Microsoft. From her work as a product leader overseeing data and engineering teams, Navrina also knew about the misalignment of incentives that can arise among the humans developing AI systems. She understood that the technical experts tasked with building high-powered AI applications driven by machine learning often view compliance and governance rules as gate checks that only slow the pace of innovation.
“My growing concerns about the oversight deficit that can arise from the different goals of technical professionals vs. those in risk, compliance and audit groups motivated me to explore ways to address the challenges that AI governance was creating,” says Navrina. To serve this need, Navrina founded Credo AI in February 2020 with an investment from Andrew Ng’s AI Fund, an incubator for startups focused on AI solutions. She chose the name “credo” because it means a set of values that guide your actions. With Credo AI, she aspires to guide the responsible development and use of AI.
Credo AI provides customers with a SaaS (Software-as-a-Service) governance platform that can sit on top of their technical machine learning operations (MLOps) infrastructure to provide continuous oversight and accountability. Credo AI’s vision, says Navrina, is “to empower enterprises to build AI responsibly.”
A Valued Partnership with Sands Capital
Credo AI chose Sands Capital to lead its Series A round of investing. “From the outset, it was clear that Scott and the team had a deep understanding of our business, our customer base and metrics, and the traction we were gaining,” says Navrina. “But even more importantly for us, they understood our vision for the kind of company we are trying to build and the broader impact we want to make.”
The ongoing relationship between the firms has exceeded Navrina’s expectations. “Sands Capital has been a force multiplier for Credo AI,” she says. “When Scott comes to us with ideas, it feels like he is a member of the Credo AI team. He consistently offers the expertise that he and his colleagues at Sands Capital have in ways that help us further our mission and impact. If there are obstacles to our goals that he thinks he can help clear away, he does.” She adds, “Frankly, I wish more venture capital firms worked that way.” The type of support Scott and the Sands Capital team provides, Navrina says, “is what enables startups to build great, thriving businesses.”
Sands Capital has been an industry leader in the examination of the risks that artificial intelligence and machine learning capabilities create. To this end, in June 2022 the firm served as the sponsor and host for a discussion on the responsible use of AI, which brought together policymakers as well as leaders from industry, government, and the investment community.
A Recognized Leader for AI
Scott says that the great potential for Credo AI’s business stems, in part, from the fact that Navrina is a true thought leader in the field of responsible and ethical AI governance. Her widely recognized expertise and stature in the industry are among the reasons she serves on the board of directors for the Mozilla Foundation, a nonprofit with the mission to support an open and inclusive Internet via trustworthy AI. Navrina also serves on the National AI Advisory Committee, which was established in 2022 to advise the President and the National AI Initiative Office.
Scott says, “This is a critical, yet nascent market, and there are only a few industry subject matter experts. We believe that Credo AI will meaningfully contribute tools and expertise to the development of responsible AI best practices and regulations that will eventually become the de facto international laws and standards.”
Navrina adds, “At Credo AI, we aim to develop solutions that inject responsibility into the entire AI and machine learning life cycle. Our society, our economies and our world depend on it.”
1 ChatGPT is a chatbot developed by OpenAI, an AI research laboratory consisting of the non-profit OpenAI and its for-profit subsidiary corporation OpenAI Limited Partnership.
Disclosures:
As of August 26, 2024, Microsoft was held in Sands Capital strategies.
The series, Partners for Growth, features profiles of founders of portfolio companies that represent a subset of Sands Capital holdings that illustrate the types of businesses in which we typically invest before they go public. The website uses rotation whereby companies are selected to highlight different sectors and growth industries. The founders featured in this series agreed to participate and approved their stories prior to publishing. Compensation was not provided to participants; however, Credo AI can reuse the content with Sands Capital’s permission. Usage rights to the content featured are as of September 4, 2024, and are subject to change. Ms. Singh has no other relationships or affiliations with Sands Capital and does not invest in its public or private strategies.
This document does not constitute or form part of an offer to sell or issue, or a solicitation of an offer to purchase or subscribe for, any securities. It is not intended to form the basis of any investment decision and is being made available for informational purposes only. An investment in a private markets fund is only suitable for sophisticated investors for whom an investment does not constitute a complete investment program, who have experience with similar types of investments, and who understand, are willing to assume, and have the financial resources necessary to withstand the significant risks involved in the investment, including the potential for a complete loss of capital.
The specific securities identified and described do not represent all the securities purchased, sold, or recommended for advisory clients. There is no assurance that any securities discussed will remain in the portfolio or that securities sold have not been repurchased. You should not assume that any investment is or will be profitable.
The views expressed are the opinion of Sands Capital and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities. The views expressed were current as of the date indicated and are subject to change. This material may contain forward-looking statements, which are subject to uncertainty and contingencies outside of Sands Capital’s control. Readers should not place undue reliance upon these forward-looking statements. There is no guarantee that Sands Capital will meet its stated goals. All investments are subject to market risk, including the possible loss of principal. Past performance is not indicative of future results.
This communication is for informational purposes only and does not constitute an offer, invitation, or recommendation to buy, sell, subscribe for, or issue any securities. The material is based on information that we consider correct, and any estimates, opinions, conclusions, or recommendations contained in this communication are reasonably held or made at the time of compilation. However, no warranty is made as to the accuracy or reliability of any estimates, opinions, conclusions, or recommendations. It should not be construed as investment, legal, or tax advice and may not be reproduced or distributed to any person.