Microsoft on Thursday rolled out a blueprint for regulating artificial intelligence that calls for building on existing regulatory structures to govern the technology.
Microsoft’s proposal is the latest in a string of ideas from industry on how to regulate a technology that has captured public attention, attracted billions of dollars in investments and prompted several of its principal architects to argue that AI is in desperate need of regulation before it has broad, harmful effects on society.
In remarks to an audience in Washington, D.C., on Thursday, Microsoft President Brad Smith proposed a five-point plan for governing AI: implementing and building upon existing frameworks, requiring effective brakes on AI deployments, developing a broader legal and regulatory framework, promoting transparency, and pursuing new public-private partnerships.
“We need to be clear-eyed and we need to be responsible as we create this technology,” said Smith.
“It will send a signal to the market that this is the future we all need to embrace,” Smith told an audience that included members of Congress, government officials, labor leaders and civil society groups.
Smith’s remarks come amid growing interest in Washington about how to regulate the rapidly growing AI industry. At a pair of Senate hearings last week, lawmakers pressed tech company executives and researchers on how to regulate the technology and address the many concerns raised by AI, including its ability to accelerate harms such as cyberattacks, fraud against consumers, and discrimination and bias.
Earlier this week, the Biden administration released an updated framework for fostering responsible AI use, including a roadmap for federal investments in AI research and development priorities. The White House also requested input from the public on mitigating AI risks. The administration has previously noted concerns about bias and equity issues with the technology.
Microsoft’s recommendations align with OpenAI CEO Sam Altman’s testimony before Congress last week. Both Altman and Smith called for a licensing regime for AI companies, overseen by a new independent agency. Smith added that he would like to see AI specialists within regulatory agencies evaluate products.
In his remarks on Thursday, Smith pointed to NIST’s AI Risk Management Framework as an example of an existing framework that regulators can build on and said he would like to see an executive order requiring the federal government to acquire AI services only from firms that abide by principles of responsible use.
Microsoft has played an instrumental role in OpenAI’s recent advances, funding the company with billions of dollars in investments and cloud computing credits that the start-up has used to train its GPT models, which are widely considered the industry leader. Microsoft has begun integrating OpenAI’s technology into its products, including its Bing search engine, and the partnership between the two firms is a major force behind recent AI advances.
The companies’ critics have responded skeptically to their proposals for regulation, saying a licensing regime could hurt other start-ups. Critics have also noted similar calls from companies like Meta, which called for regulation after it was caught in Congress’s crosshairs following the Cambridge Analytica scandal. OpenAI has already come out against stronger regulations in the European Union, threatening to pull out of the market if regulators continue on their current course.
Asked by Rep. Ritchie Torres, D-N.Y., how lawmakers can balance the need to slow down and regulate the technology with retaining a strategic competitive advantage over China, Smith said part of the solution is building strong partnerships with other nations to create a global framework for responsible AI. He also urged Congress not to move so slowly that it falls behind U.S. allies, saying that Microsoft hopes Congress will pass federal privacy legislation this year.
Smith noted the importance of addressing the national security risks posed by deepfakes, including their ability to aid foreign intelligence operations, and called for greater transparency about when AI is used to generate content. Smith said Microsoft is committed to producing an annual transparency report for its AI products.
Smith acknowledged lawmakers’ many concerns while also offering positive examples of the use of AI, including using the technology in real time to map 3,000 schools in Ukraine damaged by Russian forces and then providing that information to the United Nations as part of war crimes investigations.
Corrected May 25, 2023: This story has been corrected to note that Brad Smith called for the creation of a new independent agency to regulate AI firms.