Microsoft President Brad Smith has joined the chorus of tech industry leaders calling for government regulation of artificial intelligence (AI). However, Smith also acknowledges that companies have a role to play in managing this powerful technology.
During a panel discussion in Washington, D.C., Smith emphasized the need for governments to act swiftly, as reported by The New York Times. The call for regulation comes at a time when the rapid advancement of AI, particularly generative AI tools, has attracted increased scrutiny from regulators.
Generative AI refers to AI systems capable of producing text, images, or other media from user-provided prompts. Notable examples include Midjourney's image-generation platform, Google's Bard, and OpenAI's ChatGPT.
The demand for AI regulation has gained momentum since the public launch of ChatGPT in November. Prominent figures such as Warren Buffett, Elon Musk, and OpenAI CEO Sam Altman have raised concerns about the potential risks associated with the technology. This unease extends to the ongoing WGA writers' strike, fueled by fears that AI could replace human writers, and to video game artists, as game studios explore AI technologies.
Smith endorsed the idea of requiring developers to obtain licenses before deploying advanced AI projects. He also suggested that "high-risk" AI systems should operate exclusively within licensed AI data centers.
Additionally, Smith called on companies to take responsibility for the societal impact of AI. He emphasized the importance of notifying the government when testing AI technologies, and of continuing to monitor and report unexpected issues even after deployment.
Despite these concerns, Microsoft has made significant investments in AI. The company reportedly invested over $13 billion in OpenAI, the developer of ChatGPT, and integrated the popular chatbot into its Bing search engine.
In a post on AI governance, Smith stated, "We are committed and determined as a company to develop and deploy AI in a safe and responsible way." He emphasized that the responsibility for establishing guardrails for AI should not rest solely on technology companies but should be a shared responsibility.
Smith's remarks align with those made by OpenAI CEO Sam Altman during a hearing before the U.S. Senate Committee on the Judiciary. Altman suggested the creation of a federal agency to regulate and set standards for AI development, including the licensing of large-scale AI efforts and enforcement of safety standards.