Article: Can We Legislate AI Safety?
Photo Source: Unsplash
The growing application of AI in commerce, industry, finance, government, education, and social media, along with the rapid adoption of chatbots such as ChatGPT, has amplified investment in AI. The market has swelled for companies that develop AI and for those that make the hardware needed to build larger, more powerful models. Nvidia makes the premier chips for training large language models, and the market has responded dramatically over the past several years, lifting Nvidia from relative obscurity to the third most valuable company in the world, with a market cap of $2.7 trillion, behind only Apple and Microsoft. Such feverish advances have also conjured up fears that these powerful new AIs could be misused to harm society, the government, and the economy. Much discussion of how to regulate AI and ensure it is developed for beneficial purposes has led to proposals for legislation to protect the public from malicious AI applications. California has recently taken the lead in AI regulation.
Bill SB-1047, titled the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” passed the California legislature on August 29, 2024, and now awaits the signature or veto of the Governor of California, Gavin Newsom (leginfo.legislature.ca.gov). The bill sets out to protect society from the malicious misuse of large artificial intelligence models. It defines the types of models from which the public needs protection, how those models will be regulated, and the penalties for the developers and companies behind a model that causes harm. In the bill, harm includes “… novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” The bill also cites the potential harm from AI models used to create deepfakes: fabricated content showing a person doing or saying things they never did or said.
The bill targets large models that require massive amounts of computing power: those costing $100,000,000 or more to train, or fine-tuned versions of existing models costing $10,000,000 or more. These thresholds are pegged to current technology and computing costs, but the bill does not look forward to a time when computing power becomes greater and cheaper, as it has for decades. It also seems strange that a bill aimed at very large or “frontier” models would define them by how much they cost to train rather than by what they can do. Breakthroughs in AI research may yield much smaller, cheaper models with greater capabilities, and such models would fall outside the bill’s thresholds entirely.
Bill SB-1047 also requires developers of AI models to build a safety protocol into the model development process, including a mechanism to shut down a model that causes harm, whether it acts at the behest of a human or on its own. Furthermore, the bill prescribes penalties for developers who do not comply with the law: for models that cause over $500,000,000 in damage to people, property, or infrastructure, the penalties include fines not to exceed $10,000,000. The bill also establishes a new Board of Frontier Models to issue regulations and auditing protocols.
Proponents of SB-1047 see it as a good step toward protecting the public from malicious uses of frontier AI models. Opponents, however, see the law as a government power grab that will stifle innovation in Silicon Valley, arguably the most innovative AI hub in the world. According to TechCrunch, Elon Musk has endorsed the bill, while other tech heavyweights, such as the venture firm Andreessen Horowitz, staunchly oppose it (techcrunch.com). Government intervention in innovation tends to be restrictive, and a state law cannot reach actors beyond California, let alone beyond America. Such narrowly focused regulation will most likely push innovators out of the state or out of the country. Governor Newsom has until the end of September to weigh these factors in deciding whether to sign or veto SB-1047.
Dr. Smith’s career in scientific and information research spans bioinformatics, artificial intelligence, toxicology, and chemistry. He has published a number of peer-reviewed scientific papers, and over the past seventeen years he has developed advanced analytics, machine learning, and knowledge management tools that enable research and support high-level decision making. Tim earned his Ph.D. in Toxicology at Cornell University and his Bachelor of Science in Chemistry at the University of Washington.
You can buy his book on Amazon in paperback and Kindle formats here.