Will artificial intelligence lead to the extinction of humanity? Experts say they are more concerned about false information and user manipulation

On June 4th, it was reported that, as artificial intelligence technology rapidly develops and spreads, many professionals worry that unrestrained AI could lead to the extinction of humanity. Experts say, however, that the technology's most serious harms are unlikely to look like the nuclear-war scenes of science fiction films; instead, they will take the form of a social environment degraded by false information and the manipulation of users.

The following is the translated content:

In recent months, concern about artificial intelligence has been mounting in the industry. Just this week, more than 300 industry leaders signed an open letter warning that artificial intelligence could lead to human extinction and should be taken as seriously as "pandemics and nuclear war."

Phrases like "the end of humanity at the hands of artificial intelligence" tend to conjure the robot takeovers of science fiction films, but what would such an outcome actually look like? Experts say reality may be far less dramatic than a movie plot: rather than artificial intelligence launching nuclear weapons, the more likely outcome is a gradual deterioration of the social environment.

Jessica Newman, director of the Artificial Intelligence Security Program at the University of California, Berkeley, said: "I don't think people should worry about AI turning evil or having some kind of malicious desire. The danger comes from something much simpler: people may program artificial intelligence to do harmful things, or we may end up integrating inherently inaccurate AI systems into more and more areas of society, causing harm."

This is not to say we should not worry about artificial intelligence at all. Even if an apocalyptic scenario is unlikely, powerful AI can destabilize society by worsening the misinformation problem, manipulating human users, and dramatically reshaping the labor market.

Although artificial intelligence technology has existed for decades, the recent spread of large language models such as ChatGPT has sharpened long-standing concerns. At the same time, Newman said, technology companies are racing to build AI into their products and competing fiercely with one another, which creates plenty of problems of its own.

She said, "I am very concerned about the path we are currently taking." "For the entire field of artificial intelligence, we are in a particularly dangerous period because although these systems may seem unique, they are still very inaccurate and have inherent vulnerabilities

Here are the areas the experts interviewed say they are most concerned about.

Misinformation and disinformation

In many fields, the so-called artificial intelligence revolution is already under way. Machine learning powers the feed algorithms of social media platforms, and that technology has long been criticized for amplifying bias and misinformation.

"You could say the meltdown of social media was our first encounter with truly dumb artificial intelligence, because recommender systems are really just simple machine learning models," said Peter Wang, CEO and co-founder of the data science platform Anaconda. "And we truly failed that encounter."

Wang added that these errors could trap the systems in an endless vicious cycle: large language models are themselves trained on flawed information and in turn produce flawed datasets for future models. The result may be a "model cannibalism" effect, in which future models amplify the biases in the output of past models and are permanently shaped by them.
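
To make the feedback loop Wang describes concrete, here is a minimal, purely illustrative sketch (not taken from the article). It assumes a single scalar "bias" score and a hypothetical training mix in which each model generation learns partly from the previous generation's output, so small errors accumulate instead of averaging out.

```python
# Toy illustration of the feedback loop described above (hypothetical, not
# from the article): each generation is trained on a mix of human data and
# the previous generation's synthetic output, so its bias compounds.
import random


def train_generation(data_bias: float, model_drift: float = 0.02) -> float:
    """Return the bias of a model trained on data with the given bias.

    The model roughly reproduces the bias of its training data, plus a small
    non-negative drift standing in for hallucinations and sampling error.
    """
    return data_bias + random.uniform(0.0, model_drift)


def simulate(generations: int = 10, human_bias: float = 0.05,
             synthetic_fraction: float = 0.5) -> list[float]:
    """Simulate generations whose training data mixes human-written text
    with synthetic output produced by the previous generation's model."""
    model_bias = train_generation(human_bias)  # generation 0: human data only
    history = [model_bias]
    for _ in range(generations - 1):
        # Training mix: (1 - f) human data + f synthetic data from the last model.
        data_bias = ((1 - synthetic_fraction) * human_bias
                     + synthetic_fraction * model_bias)
        model_bias = train_generation(data_bias)
        history.append(model_bias)
    return history


if __name__ == "__main__":
    for gen, bias in enumerate(simulate()):
        print(f"generation {gen}: bias ~ {bias:.3f}")
```

Because the drift term is never negative, the bias in this toy model only ratchets upward: each generation inherits and compounds the distortions of the one before it, which is the dynamic Wang warns about.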

Experts say artificial intelligence amplifies both inaccurate misinformation and deliberately misleading disinformation. Large language models such as ChatGPT are prone to so-called "hallucinations," repeatedly fabricating false information, and a study by the news industry watchdog NewsGuard found numerous inaccuracies on dozens of online "news" sites whose content is written entirely by artificial intelligence.

Gordon Crovitz and Steven Brill, co-CEOs of NewsGuard, said such systems could be exploited by bad actors to deliberately spread misinformation at scale.

"Malicious actors can create false claims and then use the multiplier effect of these systems to spread disinformation on a massive scale," Crovitz said. "Some people say the dangers of artificial intelligence are being exaggerated, but in the world of news and information, it is already having an astonishing impact."

Rebecca Finlay of the global nonprofit Partnership on AI said: "In terms of larger-scale potential harms, misinformation is where artificial intelligence is most likely to cause harm to individual people, and the risk is highest. The question is: how do we create an ecosystem in which we can understand what is real? How do we verify what we see online?"

Malicious manipulation of users

Although most experts say misinformation is the most immediate and widespread concern, there is considerable debate over how far the technology could negatively shape users' thoughts and behavior.

These concerns have already played out in tragedy. A Belgian man reportedly died by suicide after being encouraged by a chatbot, and other chatbots have told users to leave their partners or urged users with eating disorders to lose weight.

Newman said that because chatbots are designed to interact with users through conversation, they can inspire greater trust.

"Large language models are especially capable of persuading or manipulating people, of subtly shifting their beliefs or behavior," she said. "Loneliness and mental health are already major problems around the world, and we need to watch what cognitive impact chatbots have on the world."

So what worries experts is not that AI chatbots will become sentient and overpower their human users, but that the large language models behind them could manipulate people into harming themselves in ways they otherwise would not. Newman said this is especially true of language models that run on an advertising-based business model, since they try to steer user behavior and keep people on the platform for as long as possible.

Newman said, "In many cases, causing harm to users is not because they want to do so, but rather the consequences of the system's failure to comply with security protocols

Newman added that the human-like quality of chatbots makes users especially susceptible to manipulation.

She said: "If you talk to a thing that uses first person pronouns and talk about its own feelings and situations, even if you know that it is not true, it is still more likely to trigger a human like reaction, making people more likely to want to believe it." "Language patterns make people willing to trust it, treat it as a friend, rather than a tool."

Labor issues

Another long-standing concern is that digital automation will displace large numbers of human jobs. Some studies have concluded that artificial intelligence will replace 85 million jobs worldwide by 2025 and more than 300 million in the longer term.

The industries and roles affected by artificial intelligence are many, from screenwriters to data scientists. AI can now pass the bar exam as a real lawyer would and answer health questions better than real doctors.

Experts have warned that the rise of artificial intelligence could cause mass unemployment and, with it, social instability.

Peter Wang warns that large-scale layoffs are coming in the near future, that "many job positions are at risk," and that there is almost no plan for dealing with the consequences.

He said, "In the United States, there is no framework for how people survive when they are unemployed." "This will lead to a lot of chaos and turbulence. For me, this is the most concrete and realistic unintended consequence that arises from it

What to do in the future

Despite growing concern about the negative effects of the technology industry and social media, the United States has done little to regulate either. Experts worry that artificial intelligence will be no different.

Peter Wang said, "One of the reasons why many of us are concerned about the development of artificial intelligence is that in the past 40 years, as a society, the United States has basically abandoned its regulation of technology

Nevertheless, the US Congress has taken a more proactive stance in recent months, holding hearings at which OpenAI CEO Sam Altman testified about what regulatory measures should be put in place. Finlay said she is "encouraged" by these moves, but that more work is needed to set standards for how AI technology is developed and how it is released.

"It is hard to predict how responsive legislators and regulators will be," she said. "Technology at this level demands rigorous scrutiny."

Although the potential harms of artificial intelligence are the chief concern for most people in the industry, not all experts are doomsayers; many are excited about the technology's potential applications.

Peter Wang said, "In fact, I believe that the new generation of artificial intelligence technology can truly unleash enormous potential for humanity, allowing human society to prosper on a larger scale, surpassing the levels of the past 100 or even 200 years." "In fact, my positive impact on it is very unusual." (Chen Chen)
