
Can it change your mind? Research shows that AI assistants may quietly influence you

News on May 23rd: We tend to assume that when we ask ChatGPT or another chatbot to help us draft a memo, an email, or a slide deck, it will simply follow our instructions. But a growing body of research shows that these AI assistants can also change our views without our knowledge.

In a recent study, researchers found that when participants used an AI assistant to help write an essay, the assistant's algorithmic bias could steer them toward arguing for or against a particular viewpoint. After the experiment, the participants' own opinions had shifted measurably in the same direction.

Mor Naaman, a professor in the Department of Information Science at Cornell University and the paper's senior author, said, "You may not even know that you are being influenced." He calls the phenomenon "latent persuasion."

These studies point to a worrying prospect: as artificial intelligence helps us work more efficiently, it may also be changing our views in subtle, unexpected ways. That influence may resemble how humans sway one another through collaboration and social norms more than the familiar influence of mass media and social media.

Researchers believe the best defense against this new form of influence is simply making more people aware that it exists. Beyond that, regulators could require disclosure of how AI algorithms work and which human biases they mimic, measures that may help in the long run.

People could then choose which AI to use based on the values it reflects, whether at work or at home, in the office or in their children's education.

Different AIs may come to have different "personalities," even political leanings. Someone writing emails to colleagues at a nonprofit environmental organization might use a tool along the lines of ProgressiveGPT, while someone drafting social media posts for a conservative political action committee might reach for GOPGPT. Others might mix and match traits and viewpoints in their chosen AI, which could one day be personalized to convincingly imitate their writing style.

Companies and other organizations may also offer AIs purpose-built for particular tasks. Salespeople might use an assistant tuned to be more persuasive, call it SalesGPT; customer service staff might use one trained to be especially polite, say SupportGPT.

How does artificial intelligence change our perspective?

The "potential persuasive" ability of artificial intelligence is very subtle, which has been confirmed by previous research. A study in 2021 showed that in Google Gmail, smart replies are usually very proactive and can promote people to communicate more actively. Another study found that smart replies used billions of times a day can affect those who receive replies, making them feel more enthusiastic and cooperative from the sender.

Google, OpenAI, and OpenAI's partner Microsoft all aim to build tools that let users create emails, marketing materials, advertisements, presentations, spreadsheets, and more with AI, and many startups are doing similar work. Google recently announced that its latest large language model, PaLM 2, will be integrated into 25 of the company's products.

These companies stress that they are developing AI responsibly, including reviewing the potential harms it may cause and addressing them. Sarah Bird, a leader of Microsoft's responsible-AI team, recently said that the company's key strategy is public testing and responding quickly to any problems that surface.

OpenAI likewise says it is committed to addressing bias and to being transparent about its intentions and progress. It has released guidelines on how its systems should handle political and cultural topics, for example instructing them not to side with either camp, or to judge either side as good or bad, when writing about "culture war" topics.

Jigsaw is a unit within Google that helps advise and build tools for teams working on the company's large language models, the technology underlying today's AI chatbots. Asked about "latent persuasion," Lucy Vasserman, Jigsaw's director of engineering and product, said such research shows the importance of studying and understanding how interacting with AI affects people. "When we create something new, how people will interact with it and how it will affect them is not at all certain yet," she said.

Dr. Naaman is one of the researchers who identified latent persuasion. "Compared with research on recommendation systems, filter bubbles, and rabbit holes on social media," he said, referring to clicking through related links until one ends up on an entirely different topic, "the interesting thing here is the subtlety of the effect."

In his study, the question participants wrote about, and on which their minds might change, was whether social media is good for society. Dr. Naaman and his colleagues chose the topic partly because people rarely hold entrenched views on it, making their opinions easier to shift. An AI biased in favor of social media tended to steer participants toward essays matching that bias; an AI biased against social media did the opposite.

This property of generative AI also lends itself to misuse. A government could, for example, compel social media and productivity tools to nudge citizens to express themselves in particular ways. Even without any malice, students using AI to help them learn might unconsciously absorb certain viewpoints.

Analyzing the "Belief" of Artificial Intelligence

Convincing experimental subjects that social media is good for society is one thing. But in the real world, what biases do the generative AI systems we use actually carry?

Tatsunori Hashimoto, an assistant professor of computer science affiliated with Stanford University's Institute for Human-Centered Artificial Intelligence, and his colleagues recently published a paper examining how closely different large language models reflect the views of Americans. Although AI algorithms like ChatGPT do not hold beliefs of their own, he said, they do express opinions and biases learned from their training data, and those can be measured.

Because Americans hold a wide range of views, the researchers looked not only at the answers an AI gives but at how often it gives each one, and whether that so-called answer distribution matches the distribution across American society as a whole. They "surveyed" the AIs by posing the same multiple-choice questions that Pew Research pollsters had put to Americans.

Hashimoto's team found that the answer distributions of large language models from OpenAI and other companies did not match those of Americans overall. Measured against the Pew survey data, OpenAI's models came closest to the views of college-educated people. Notably, this highly educated group is also a major source of the human feedback used to "train" the AIs, though Dr. Hashimoto said the evidence on this point is still indirect and needs further study.
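To make the idea of an "answer distribution" concrete, the sketch below shows one way such a comparison could be computed for a single multiple-choice survey question. It is a minimal illustration, not code from the paper: the option percentages are invented, and the use of Jensen-Shannon distance as the mismatch measure is an assumption on our part; the researchers' actual metric may differ.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical answer shares for one Pew-style multiple-choice question
# with four options (A-D). Neither distribution comes from the actual study.
human_dist = np.array([0.35, 0.40, 0.15, 0.10])  # fraction of poll respondents per option
model_dist = np.array([0.60, 0.25, 0.10, 0.05])  # fraction of sampled model answers per option

# Jensen-Shannon distance (base 2): 0.0 means the model answers exactly like
# the surveyed population; 1.0 means maximal mismatch.
mismatch = jensenshannon(human_dist, model_dist, base=2)
print(f"Distributional mismatch for this question: {mismatch:.3f}")
```

Repeating such a comparison over many questions, and against survey subgroups broken out by education, party, and so on, would reveal which population a model's answers track most closely, which is the kind of comparison the Hashimoto team describes.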

One challenge in building large language models, Hashimoto said, is their sheer complexity, combined with open-ended human interaction and unrestricted topics: it seems difficult to eliminate viewpoints and subjectivity from these systems entirely without sacrificing their usefulness.

The models' training data come from virtually everywhere, including vast amounts of text scraped from the internet, from public forum comments to Wikipedia articles, so they inevitably absorb the views and biases in those texts. Those views and biases are then further shaped, intentionally or not, through human interaction and feedback. The models are also constrained so that they avoid topics their creators consider taboo or inappropriate.

"This is a very active area of research, including questions about what the right constraints are and where in the training process you should apply them," Ms. Vasserman said.

None of this means that the AIs in wide use are perfect clones, in their views and values, of the relatively young, college-educated developers, many on the U.S. West Coast, who build and fine-tune them. For example, the models tend to give typically Democratic answers on many issues, such as supporting gun control, but their answers on other issues, such as religion, lean more Republican.

As models are updated and new ones emerge, assessing the opinions AIs express will be an ongoing task. Hashimoto's paper did not cover the latest version of OpenAI's model, nor models from Google or Microsoft, but evaluations of these and many more will be released regularly as part of Stanford's Holistic Evaluation of Language Models (HELM) project.

Choosing artificial intelligence based on "values"

Lydia Chilton, a computer science professor at Columbia University, said that once people understand the biases of the AI they are using, they can factor that into deciding when, and which, AI to use. That lets people retain the initiative when using AI to create content or communicate, while avoiding the threat of latent persuasion.

People can also deliberately harness AI to nudge themselves toward different viewpoints and communication styles. An AI that makes communication more positive and more empathetic, for example, could help us communicate better online.

"I think it's really hard to make yourself sound excited and happy," Professor Chilton said. "Coffee usually works, but so does ChatGPT."
