OpenAI forms a new team to assess the "catastrophic risk" of artificial intelligence
On Thursday, October 27th, artificial intelligence research company OpenAI announced the formation of a new team to assess and mitigate "catastrophic risks" related to artificial intelligence.
In a statement on Thursday, OpenAI said the new team is called Preparedness, and its main task is to "track, evaluate, forecast, and protect against" potential catastrophic risks posed by artificial intelligence, including nuclear threats.
In addition, the team will work to mitigate "chemical, biological, and radiological threats," as well as the "autonomous replication" behavior of artificial intelligence. Other risks the Preparedness team will address include AI systems deceiving humans and cybersecurity threats.
OpenAI wrote in an update: "We believe that frontier artificial intelligence models, which will exceed the capabilities of today's most advanced models, have the potential to benefit all of humanity. However, they also pose increasingly severe risks."
Aleksander Madry, director of MIT's Center for Deployable Machine Learning, will lead the Preparedness team.
OpenAI noted that the Preparedness team will also develop and maintain a "risk-informed development policy" outlining the company's approach to evaluating and monitoring artificial intelligence models.
Sam Altman, CEO of OpenAI, has repeatedly warned of potential catastrophic events triggered by artificial intelligence. In May this year, Altman and other prominent artificial intelligence researchers signed a brief statement declaring that "mitigating the risk of extinction from AI should be a global priority."
In an earlier interview in London, Altman also suggested that governments around the world should treat artificial intelligence as "seriously" as nuclear weapons.