
OpenAI will use new methods to train models to combat artificial intelligence "hallucinations"

Intelligent devices 2023-06-01 10:45:29 Source: Network




Cailian News Agency, June 1 (Editor: Niu Zhanlin). On Wednesday, OpenAI released a new research paper showing that the company is training artificial intelligence (AI) models with a new method to combat AI "hallucinations".

AI hallucinations occur when an AI model generates content that is not grounded in any real-world data but is instead fabricated by the model itself. People are concerned about the potential problems such hallucinations may cause, including moral, social, and practical issues.

AI hallucinations arise when OpenAI's chatbot ChatGPT or Google's rival Bard simply fabricates false information while presenting it as established fact. Some independent experts have expressed doubts about how effective OpenAI's approach will be.


For example, in Google's February promotional video for Bard, the chatbot made an untrue claim about the James Webb Space Telescope. More recently, ChatGPT cited fabricated cases in a filing in a New York federal court, which may result in penalties for the New York lawyer involved.

OpenAI researchers wrote in the report: "Even the most advanced AI models are prone to generating falsehoods, and they exhibit a tendency to fabricate facts in moments of uncertainty. These hallucinations are particularly severe in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution."

The company's newly proposed strategy is to reward each correct reasoning step when training AI models, rather than simply rewarding the correct final conclusion. The researchers call this approach "process supervision," as opposed to "outcome supervision," and say it may improve the performance and accuracy of AI, since it encourages models to follow a more human-like "chain of thought."
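The distinction can be illustrated with a minimal sketch. This is not OpenAI's actual training code; the step format, the `verify_step` checker, and the reward values below are hypothetical stand-ins for the human or reward-model labels described in the paper.

```python
# Hypothetical sketch: "outcome supervision" scores only the final answer,
# while "process supervision" scores every intermediate reasoning step.

def outcome_reward(final_answer, expected_answer):
    """Outcome supervision: one reward signal for the final conclusion only."""
    return 1.0 if final_answer == expected_answer else 0.0

def process_reward(steps, verify_step):
    """Process supervision: reward each correct reasoning step.

    verify_step(step) -> bool is a placeholder for a learned reward model
    or human label judging one step of the chain of thought.
    """
    rewards = [1.0 if verify_step(s) else 0.0 for s in steps]
    # A single logical error removes credit for all later steps,
    # reflecting how one mistake can derail a multi-step solution.
    if 0.0 in rewards:
        first_error = rewards.index(0.0)
        rewards[first_error + 1:] = [0.0] * (len(rewards) - first_error - 1)
    return rewards

# Toy arithmetic chain for 12 * 3 + 4, with a trivial symbolic checker.
steps = ["12 * 3 = 36", "36 + 4 = 40"]
checker = lambda s: eval(s.split("=")[0]) == int(s.split("=")[1])

print(outcome_reward(40, 40))          # 1.0 (only the end result is judged)
print(process_reward(steps, checker))  # [1.0, 1.0] (each step is judged)
```

The toy checker here verifies each arithmetic step mechanically; in the research setting, that per-step judgment comes from human labels or a trained reward model rather than a rule.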

Karl Cobbe, a mathematics researcher at OpenAI, noted that "detecting and mitigating a model's logical errors, or hallucinations, is a critical step toward building artificial general intelligence (AGI)." He said the motivation behind the research is to address AI hallucinations and make models more capable of solving challenging reasoning problems.

Cobbe added that OpenAI has released an accompanying dataset of 800,000 human labels used to train the model described in the research paper.

The day before, technology executives and AI scientists sounded the alarm on AI, saying the extinction risk posed by the technology is comparable to that of pandemics and nuclear war.

More than 350 people signed a statement released by the Center for AI Safety, which said that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.


