At a US Senate hearing, OpenAI CEO Sam Altman suggested creating an agency to oversee AI models that operate "above a certain level of capability." Such an agency would issue licenses for advanced work in artificial intelligence and revoke them if companies violated the established rules.
Altman’s proposal came in response to senators’ concerns that the tech industry’s over-enthusiasm for AI could lead to uncontrollable consequences. He agreed with the senators’ observation that the agency could act like the Nuclear Regulatory Commission, which issues licenses for nuclear power plants and strictly monitors their operation.
Altman confirmed that AI can indeed get out of hand:
“I think if something goes wrong with this technology, it can go completely wrong. We want to speak out about this and work with the government to prevent this from happening.”
OpenAI’s language model, which underpins the ChatGPT chatbot, has become one of the most successful in the field and in many ways triggered the boom in artificial intelligence technologies at the end of last year. The community’s initial admiration has since given way to fear among many of its members: thousands of well-known industry figures and public personalities signed an open letter calling for a temporary pause on AI development until rules governing it are established.
Source: Insider