Navrina Singh, White House AI advisor and founder and CEO of Credo AI, an artificial intelligence (AI) governance software company, joins Josh Lipton on Asking for a Trend to discuss AI regulation.
Singh tells Yahoo Finance that as AI tech evolves, more companies consider it a risk. “Just recently, 56% of the Fortune 500 companies have identified AI as a risk factor on their most recent annual reports, and this is up by almost 50% compared to 2022.”
“The reason that is happening is AI is becoming pervasive, whether it is in productivity tools for coding, in fraud models, or in massively used systems which could be promoting misinformation.”
When there are such powerful systems, "it becomes really important for enterprises, but also policymakers, to think about how to put guardrails, and that's where this very interesting public-private sector interplay is coming,” Singh explains.
California Governor Gavin Newsom signed three bills into law that target the misuse of AI-created content in an effort to combat election misinformation. Singh says the debate around Newsom’s AI policies “is bringing a lot of focus on what a good governance framework look[s] like.”
As companies “are building very powerful foundation models that potentially could impact massive misinformation concerns [and] cyber attacks, we need to start thinking about how do you put guardrails” on the tech, Singh says.