
Silicon Valley Debate: Will AI Destroy Humanity?

On May 22, it was reported that, with new technologies such as generative artificial intelligence sweeping the technology industry, the debate over whether AI will destroy humanity has grown increasingly heated. Some well-known technology leaders have warned that AI could take over the world; other researchers and executives dismiss that claim as science fiction.

At a congressional hearing last week, Sam Altman, CEO of the artificial intelligence startup OpenAI, pointedly reminded lawmakers that the technology his company has released to the public carries safety risks.

Altman warned that AI technologies such as the ChatGPT chatbot could fuel misinformation and malicious manipulation, and he called for regulation.

He stated that artificial intelligence may "cause serious harm to the world".

As Altman testified before the US Congress, the debate over whether AI will come to dominate the world was moving into the mainstream, with disagreement growing across Silicon Valley and among those working to push the technology out into the world.

The once-fringe idea that machine intelligence might suddenly surpass human intelligence and decide to destroy humanity is gaining support, and some respected scientists now believe their timelines for computers surpassing, and controlling, humans should be shortened.

But many researchers and engineers counter that, however widely people fear a real-life killer AI like Skynet in the Terminator films, the worry is not grounded in sound science. Instead, they say, it distracts from the very real problems the technology is already causing, including those Altman described in his testimony: today's AI is muddying copyright, heightening concerns about digital privacy and surveillance, and could be used to strengthen hackers' ability to breach network defenses.

The debate heated up after Google, Microsoft, and OpenAI publicly released breakthrough AI technologies that can hold complex conversations with users and generate images from simple text prompts.

"This is not science fiction," said Geoffrey Hinton, widely known as a godfather of artificial intelligence, who recently left Google. Hinton said AI smarter than humans could arrive within 5 to 20 years, down from his earlier estimate of 30 to 100 years.

"It's as if aliens have landed on Earth, or are about to land," he said. "We can't quite take it in, because they speak fluently, they're useful, they can write poetry and answer boring letters. But they really are aliens."

Within the big technology companies, however, many engineers who work closely with the technology do not believe that AI replacing humans is something anyone needs to worry about right now.

"Among researchers actively working in this field, far more are focused on the current, real-world risks than on whether humanity faces an existential risk," said Sara Hooker, a former Google researcher who directs Cohere for AI, the research lab of the AI startup Cohere.

There are plenty of practical risks right now. Chatbots trained on problematic content can deepen bias and discrimination. The vast majority of AI training data is in English and comes mainly from North America or Europe, which may skew the technology away from the languages and cultures of most of the world's people. The bots also routinely fabricate false information and present it as fact, and in some cases have fallen into endless loops of conversation with hostile users. Beyond that, no one fully understands the technology's ripple effects: every industry is bracing for the disruption AI may bring, and even high-paying jobs such as lawyers and doctors could be displaced.

Others believe that AI may one day harm humans, or even come to control society in some fashion. While those existential risks sound graver, many argue they are harder to quantify and far less concrete.

"One camp holds that these are just algorithms, that they only repeat what they have seen online," Google CEO Sundar Pichai said in an interview in April. "There is also the view that these algorithms are showing emergent properties: creativity, reasoning, the ability to plan. We need to approach this with caution."

The debate stems from a decade of steady breakthroughs in machine learning, the branch of computer science that builds software able to draw novel insights from large amounts of data without explicit human instructions. The technology is now ubiquitous, powering everything from social media algorithms to search engines to image recognition programs.
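
To make that idea concrete, here is a minimal, hypothetical sketch (not from the article) of the machine-learning pattern: instead of writing an explicit rule, the programmer supplies labeled examples and the model infers the rule itself. The data and the choice of scikit-learn's DecisionTreeClassifier are illustrative assumptions, not anything the companies named above use.

```python
# A minimal illustration of machine learning: no explicit rule is
# programmed; the model infers one from labeled examples.
# (Hypothetical sketch; requires scikit-learn: pip install scikit-learn)
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: [hours of daylight, temperature in C] -> season
X = [[8, 2], [9, 5], [14, 24], [15, 28], [12, 15], [11, 12]]
y = ["winter", "winter", "summer", "summer", "spring", "spring"]

model = DecisionTreeClassifier().fit(X, y)  # the "rule" is learned, not coded

# The model generalizes to an example it has never seen.
print(model.predict([[13, 22]]))  # likely ["summer"]
```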

Last year, OpenAI and a handful of smaller companies began releasing tools built on a new kind of machine learning: generative artificial intelligence. The so-called large language models behind them train themselves on trillions of images and sentences scraped from the internet, and can generate images and text from simple prompts, hold complex conversations with users, and write computer code.

Anthony Aguirre, executive director of the Future of Life Institute, said large companies are racing to build ever smarter machines with almost no regulation. The institute was founded in 2014 to study existential risks to society; with funding from Tesla CEO Elon Musk, it began studying the possibility of AI destroying humanity in 2015.

Aguirre said that if AI acquires reasoning abilities beyond our own, it will try to take control of its own fate, and that this deserves as much worry as today's practical problems.

"Keeping them on track will become ever more complicated," he said. "Plenty of science fiction has already made that very concrete."

In March of this year, Aguirre helped write an open letter calling for a six-month pause on training new AI models. The letter drew 27,000 signatures, including from senior AI researcher Yoshua Bengio, winner of the 2018 Turing Award, computer science's highest honor, and Emad Mostaque, CEO of Stability AI, one of the most influential AI startups.

Musk is easily the most conspicuous of the signatories. He helped found OpenAI, is now building an AI company of his own, and has recently invested in the expensive computing hardware needed to train AI models.

Musk has long argued that humanity should be more careful about the consequences of developing super-intelligent AI. In an interview at Tesla's annual shareholder meeting last week, he said he had funded OpenAI because he felt Google co-founder Larry Page was "careless" about the threat posed by AI.

Quora, the question-and-answer site, is also developing its own AI model. Its CEO, Adam D'Angelo, did not sign the open letter. "People have different motivations when they make this suggestion," he said of the letter.

OpenAI CEO Altman did not endorse the letter either. He said he agreed with parts of it but that, overall, it lacked "technical details" and that a pause was not the right way to regulate AI. At last Tuesday's hearing, Altman said his company's approach is to put AI tools in front of the public early, so that problems can be found and fixed before the technology grows more powerful.

Yet the technology community's debate over killer robots keeps intensifying, and some of the harshest criticism comes from researchers who have spent years studying the technology's flaws.

In 2020, Google researchers Timnit Gebru and Margaret Mitchell co-wrote a paper with University of Washington scholars Emily M. Bender and Angelina McMillan-Major. They argued that the growing ability of large language models to mimic humans heightens the risk that people will ascribe feelings to them.

Instead, they argued, the models should be understood as "stochastic parrots": systems that are very good at predicting, on probability alone, which word comes next in a sentence, without understanding what they are saying. Other critics have dismissed large language models as "autocomplete" or a "knowledge enema."
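
The "stochastic parrot" critique can be illustrated with a toy sketch (hypothetical, not from the paper): a bigram model that counts which word most often followed each word in its training text, then parrots that statistic back with no notion of meaning. Real large language models use neural networks rather than raw counts, but the objective of predicting the next word by probability is the same.

```python
# A toy "stochastic parrot": predict the next word purely from counts
# of which word followed which in the training text. No understanding
# is involved; this only parrots frequency statistics.
# (Hypothetical illustration; real LLMs use neural networks, not counts.)
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish the dog sat on the rug"

# Count bigram frequencies, e.g. follow_counts["the"] = Counter({"cat": 2, ...})
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" most often above)
print(predict_next("sat"))  # "on"
```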

The paper documented in detail how large language models reproduce harmful content such as sexism. Gebru says Google suppressed the paper and then fired her after she insisted on publishing it; a few months later, the company fired Mitchell as well.

The paper's four co-authors have since written a letter of their own responding to the open letter signed by Musk and others.

"It is dangerous to distract ourselves with fantasies of an AI utopia or an AI apocalypse," they wrote. "Instead, we should focus on the very real and very present exploitative practices of the companies developing these systems, which are rapidly centralizing power and exacerbating social inequality."

Google declined to comment on Gebru's dismissal at the time, but said many of its researchers continue to work on responsible and ethical AI.

"There is no doubt that modern AI is powerful, but that does not mean its threat to human survival is imminent," said Hooker, the Cohere for AI director.

Much of the discussion about AI breaking free of human control assumes it will rapidly overcome its own limitations, like Skynet in the Terminator.

"Most technology, and the risk within it, evolves gradually," Hooker said. "The technology's current limitations compound most of the risks."

Last year, Google fired AI researcher Blake Lemoine, who had said in an interview that he firmly believed Google's LaMDA model was sentient. At the time, Lemoine was roundly rebuked across the industry; a year later, many in the technology sector have begun to come around to his view.

Hinton, the former Google researcher, said he changed his long-held view of the technology's dangers only recently, after working with the latest AI models. He posed complex questions to the programs that, in his view, required the models to broadly understand his requests rather than simply predict plausible answers from their training data.

In March of this year, Microsoft researchers said that while studying OpenAI's latest model, GPT-4, they had observed "sparks of artificial general intelligence", referring to AI capable of thinking for itself the way humans do.

Microsoft has spent billions of dollars partnering with OpenAI on its Bing chatbot, and skeptics note that the company is building its public image around AI: the more advanced people believe the technology is, the more Microsoft stands to gain.

In their paper, the Microsoft researchers argued that the technology had developed a spatial and visual understanding of the world purely from the text it was trained on: GPT-4 could draw a unicorn unprompted and describe how to stack random objects, eggs among them, so that the eggs would not break.

"Beyond its mastery of language, GPT-4 can solve novel and difficult problems spanning mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting," the Microsoft team wrote, concluding that in many of these areas the AI's capabilities are comparable to a human's.

Still, one of the researchers acknowledged that while AI researchers have tried to devise quantitative benchmarks for machine intelligence, defining "intelligence" remains deeply contentious.

"Every one of them is problematic or controversial," he said.
