
After AI scams go viral, how can AI be turned to good? | Titanium Hot Review

Intelligent Devices | 2023-06-07 10:48:04 | Source: Network



Recently, the Telecom Network Crime Investigation Bureau of the Baotou Public Security Bureau disclosed a case of telecom fraud committed with AI technology. The victim was defrauded of 4.3 million yuan within 10 minutes, and "AI fraud" quickly became a trending topic.

As AI technology matures, its output is becoming both more realistic and more deceptive. When AI-generated singers and streamers can closely imitate celebrities in appearance and voice, how are people supposed to protect themselves?

How far has the standardization of generative AI progressed? How can applications of AI technology avoid legal risks?

Will the current wave of infringement and fraud incidents constrain the industry's development?

For this issue of Titanium Hot Review, senior media professionals were invited to discuss the trending topic of AI fraud and the question of how AI can be turned to good after such scams go viral. A selection of their views follows.

Regarding how people should protect themselves when appearance and voice can be closely imitated:

Jia Xiaojun, head of Beiduo Finance, argued that tools are neutral; what matters is how they are used, and "AI fraud" is a typical example. For ordinary people, fraud produced with this kind of AI technology is very hard to detect, especially when it simulates voice and video.

AI fraud is possible, at root, because of personal information leaks. The fraudsters hold sensitive information about their targets, can accurately cite details such as account numbers and addresses, and can even obtain contact lists, which they use to defraud users of their money.

For ordinary people, anything involving money calls for heightened vigilance: check carefully for anything suspicious and, ideally, cross-verify through a second channel to keep a close watch on your wallet.

From a regulatory perspective, it is necessary to bring in more technical talent, develop adversarial counter-technologies, and issue timely, effective warnings and prevention. At the same time, platforms should raise their access thresholds and adopt reasonable interception measures.

Guo Shiliang, an expert at the Whale Platform think tank, said that everyone is talking about AI, but people see more of its benefits and less of the harm it can bring. Someone was scammed out of 4.3 million yuan in 10 minutes; the key was that the fraudster used AI face-swapping technology, and the victim even verified the other party by video call before transferring the money. It still turned out to be a scam, and a sophisticated one that seemed flawless.

Later, with the bank's full assistance, 3.3684 million yuan in the fraudulent account was intercepted within 10 minutes, but 931,600 yuan is still being recovered. The scam was extremely clever, combining voice-synthesis and face-swapping technology; the fraudsters even hijacked a friend's contact account to carry out the theft.

The arrival of the AI era brings both opportunities and tests. Scammers are already using new technologies to cheat, so anti-fraud measures must keep pace, and everyone's fraud awareness needs to improve: anything involving sensitive actions such as transfers warrants vigilance. With the AI era fully upon us, laws and regulations need to catch up; as the technology advances, regulatory and law-enforcement capabilities and fraud-prevention technology must advance with it.

Jiang Han, a senior researcher at the Pangu Think Tank, said that as artificial intelligence develops, AI fraud has begun to emerge. AI fraud is fraud that exploits artificial intelligence; its "advantages" for criminals are higher-frequency attacks, more precise targeting, and more effective deception, making it another double-edged sword of technological progress. How should we view and respond to this kind of fraud?

First, frequent fraud incidents mean that everyone must keep learning and improve their ability to recognize fraud risks. AI has strengthened criminals' methods across data analysis, model training, automated decision-making, and more. To guard against such fraud, consumers need to improve their technical awareness and risk-identification ability: avoid answering unknown calls where possible, do not be gullible, and protect personal privacy and property.

Second, enterprises need to strengthen self-discipline, regulators need to guide market norms, and individuals should further strengthen their awareness of risk prevention. AI can bring efficiency and competitive advantage to enterprises, but innovation must be balanced against compliance, especially where user information and privacy are involved, so enterprises should strengthen self-discipline and compliance controls. Regulators should guide the development of market norms and prevent such crimes. Individuals should take an active part in public-safety governance, learn about new fraud techniques, and improve their ability to spot potential fraud.

In the long run, the wave of generative large models will continue, but turning AI into a tool for good requires everyone's effort. AI development must attend to compliance and security, and the technology should be deployed in line with law, ethics, and public morals. AI development cannot simply chase speed and efficiency; it should also be grounded in an understanding of human nature, hold to the original intention of using AI to better serve society, explore and refine AI paradigms, guard against abuse and harm, and support the healthy development of AI.

Bi Xiaojuan, editor-in-chief of the New Economic Observer Group, noted that, as the saying goes, technology has always been a double-edged sword. Over the past few years, big data and the sharing economy have delivered considerable economic and social benefits, but the by-product has been massive leakage of personal information, with telecom fraud and harassing text messages cropping up constantly. As regulatory gaps were identified and filled, policies and regulations followed up, and public awareness improved, such fraud was alleviated to some extent.

The same process is now unfolding in the AI field. The commercial value of AI continues to be unlocked, but it also gives fraudsters wings: by impersonating users' relatives and friends through face swapping, voice cloning, and other means, they can stage scams so lifelike that users can hardly tell the difference and are easily hooked. Moreover, with AI, criminals can target large numbers of users at once, widening the scope of harm and increasing victims' property losses.

But ordinary users are by no means defenseless against AI. First, strengthen awareness of protecting personal property and stay alert to this new type of fraud. Second, protect personal information and avoid registering on large numbers of unofficial or dating apps. Third, if the other party asks to borrow money or requests a transfer over audio or video, no matter how urgent they seem, verify through multiple channels and, where possible, confirm offline before transferring; for large transfers, go to a bank counter. Finally, if you are unfortunately deceived, report to the police immediately and contact the bank concerned to stop losses as far as possible.

Laws, regulations, and supervision should also follow up promptly, setting thresholds and firewalls for the development and application of AI technology and steering AI toward good. The good news is that in early April the Cyberspace Administration of China published the draft Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comment) for public consultation, aiming to promote the scientific, orderly, healthy, and standardized development of China's AI industry and to prevent chaos and infringement.

AI practitioners, for their part, should follow the direction of national regulatory policy, stay sensitive to AI ethics, and unite technological development with corporate responsibility.

Industry observer Wenzi noted that research on AI safety can be traced back to 2008 and now spans many fields. In a 2022 survey of the natural language processing (NLP) community, 37% of respondents agreed or weakly agreed that AI decision-making could lead to a catastrophe "at least as bad as an all-out nuclear war." There are dissenting voices, too: Andrew Ng, an adjunct professor at Stanford University, compared such fears to worrying about overpopulation on Mars before we have even set foot on the planet.

Dr. Samuel Bowman's view is a more pertinent reading of the issue: rather than halting AI/ML research worldwide, people should ensure that every sufficiently powerful artificial intelligence system is built and deployed responsibly.

Zhang Jingke, founder of Internet Beijing Diary, said that with AI producing massive amounts of information at high speed, humanity will inevitably face the problem of false information again, on a scale perhaps a hundred times greater than during the internet information explosion.

To protect most people from this, one can borrow from the way traditional news and copyright are protected: published information should be labeled with its source, and AI-written content should be labeled as such along with its author. This is a necessary cost for netizens and capital to bear in order to do away with bad intermediaries in information dissemination. In the long run, unsupervised free information will no longer be fair or efficient.

Regarding how the standardization of generative AI is progressing and how applications of AI technology can avoid legal risks:

Chu Shaojun said that, first, in the field of communication and public opinion it is often the case that "good news never leaves the house, while bad news travels a thousand miles." Using AI for fraud is not essentially new; it is only one of thousands of fraud methods. But because it is tied to popular AI technologies and applications, it easily ferments into a trending topic, draws attention from all corners of public opinion, and to some extent triggers a backlash against new technologies and fields, with calls for regulation erupting in a short time. For now, however, there is no need to fixate on new technologies and fields or to regulate them too early. The development of any new technology needs time and tolerance, and sometimes even a period of "wild growth."

Second, regulation and legislation often lag behind new fields and technologies, so more self-discipline is needed from enterprises and the industry. While developing their business and technology, enterprises should also weigh their social responsibilities and build prediction and prevention into the R&D and design stages. User education and popular-science outreach should be strengthened as well, so that users and the general audience understand new technologies and can take precautions. Users themselves should keep learning and stay alert: requests for money, loans, or transfers should be verified through multiple channels, and no one should believe that money falls from the sky.

Third, new technologies can be used for fraud, but from another angle they can also be used for fraud prevention. Technology is largely neutral; what matters is how it is applied and regulated. The current spate of infringement and fraud incidents will not restrict the industry's development; rather, it will push the industry toward standardized development. After a period of wild growth, frequent negative incidents not only invite public scrutiny but also prompt more self-discipline within enterprises and the industry, and regulation and legislation will certainly accelerate, ultimately pushing the industry toward more standardized development.

Finally, guard against extreme thinking and an over-regulatory mindset. Even when negative incidents occur, regulation and legislative norms are needed on the one hand, while on the other hand tolerance and time must be weighed so that the industry has enough room to develop. Looking at the bigger picture, the future of AI, and of AIGC in particular, concerns not only enterprises and the industry but also a country's voice in the technology field. All parties share the responsibility of making the development of new technologies better and steadier.

Internet technology blogger Pi Tui observed that AI technology is becoming ever more mature: are there risks, and how can they be avoided? The answer given by Tongyi Qianwen is that the development of AI has indeed brought many conveniences and opportunities, but it also brings risks and challenges.

Avoiding AI risks can start from the following points. First, data privacy: the collection and use of personal data must follow strict privacy-protection rules so that data is not abused or leaked. Second, transparency and interpretability: AI systems should be transparent and interpretable so that people can understand their decisions and behavior. Third, security: the security of AI systems must be ensured, including defense against malicious attacks, data backup and recovery, and error correction. Fourth, legal compliance: AI systems must comply with relevant laws, regulations, and ethical norms and avoid conflicts with the law. Fifth, technical quality: the technical soundness of AI systems must be ensured to prevent misuse or abuse of the technology. Finally, cooperation: work with other stakeholders so that the development and application of AI systems serve the public interest and social responsibility.

In short, avoiding AI risks requires ensuring that the development, use, and management of AI systems meet ethical, legal, and safety standards, and strengthening education and public participation to promote the healthy development of AI technology.

Zheng Yang, director of the strategic development department at Zhonghuo International, said that current AI fraud mainly involves voice synthesis, AI face swapping, and the use of big data and AI to screen and filter information in order to pick targets. Technology is inherently a double-edged sword, and the history of SMS and phone fraud, account theft, and P2P lending has repeatedly confirmed one truth: every iteration of new technology brings an endless stream of technology-enabled fraud.

Thinking in reverse: first, the core of AI fraud is its ability to pass off the fake as genuine, so prevention and regulation should focus on making AI-generated content easier to identify and on helping ordinary people better understand the risks AI technology may pose (popularizing basic knowledge). Second, the foundation of AI fraud is data, so prevention and regulation should also focus on stopping the leakage and abuse of personal information.

From a regulatory perspective, targeted education and prevention should be provided in advance to vulnerable groups such as empty-nest seniors and obsessive celebrity fans, while channels such as online matchmaking, dating, lending, and online games can be placed under strict supervision.

In the course of commercialization, any technology must strike a balance between its social and economic value. For AI today, the prominent social issues include privacy and data-protection risks, intellectual-property infringement risks, and moral and ethical risks. These issues will inevitably affect the pace at which AI technology is commercialized, but that is itself a necessary part of bringing the technology to market.

To fundamentally reduce legal risks, the companies that build AI technology and tools need to practice platform self-regulation. Google's move to label every AI-generated image created by its tools, for example, is a good start. At the same time, the relevant authorities need to establish regulatory laws, regulations, and standards as soon as possible and clarify legal boundaries and responsible parties; the measures released by the Cyberspace Administration in April, for instance, require organizations and individuals providing AI services to bear the responsibility of content producers.
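As an illustration of what such labeling can look like in practice (a minimal sketch only, not Google's actual mechanism; the tag names are hypothetical), the snippet below embeds a plain-text provenance label in a PNG's metadata with Pillow and reads it back:

```python
# Minimal sketch of provenance labeling for AI-generated images.
# This is NOT Google's labeling scheme; the tag names are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_image(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image and embed a plain-text provenance tag in the PNG metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical tag name
    meta.add_text("generator", generator)   # e.g. the model or tool that produced it
    img.save(dst_path, pnginfo=meta)

def read_label(path: str) -> dict:
    """Read back the text metadata so a platform can surface the label to users."""
    return dict(Image.open(path).text)      # .text holds the PNG text chunks

# Usage (paths and generator name are placeholders):
# label_ai_image("generated.png", "generated_labeled.png", generator="some-diffusion-model")
# print(read_label("generated_labeled.png"))
```

A metadata tag like this is trivially stripped by re-encoding or screenshots, which is why such labels are usually discussed alongside robust watermarking and platform-side detection rather than as a complete solution.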

Wei Li, founder of Dali Finance, said that judging from the AI fraud cases disclosed so far, the technology involved is mainly deep synthesis, including AI face swapping, voice synthesis, and generative text models. Criminals can use face-swapping technology to fabricate videos or photos in order to impersonate others and deceive their targets.

At the technical level, many technology companies and researchers are also actively building counter-technologies to identify such deeply synthesized content. These detection products are usually based on deep learning, for example identifying AI-generated video by analyzing visual features, traces of processing, and mismatches in facial features.
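As a rough illustration of that frame-level approach (a sketch under assumptions, not any vendor's product: the classifier below is untrained and would need fine-tuning on labeled real and fake face crops, and a recent torchvision is assumed), one common pattern is to crop faces from video frames and score each crop with a binary real/fake model:

```python
# Sketch of frame-level deepfake screening: crop faces from sampled video
# frames and score them with a binary real/fake CNN. The weights here are
# random; a usable detector would be trained on real/fake face datasets.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Face detector shipped with OpenCV (Haar cascade).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# ResNet-18 backbone with a 2-way head: index 0 = real, index 1 = fake.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fake_scores(video_path: str, every_n: int = 30):
    """Yield a 'fake' probability for each detected face, sampling every_n frames."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
                face = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logits = model(preprocess(face).unsqueeze(0))
                yield torch.softmax(logits, dim=1)[0, 1].item()
        idx += 1
    cap.release()

# Usage (file name is a placeholder): flag the video if most face crops score high.
# scores = list(fake_scores("suspect_call_recording.mp4"))
# print("suspicious" if scores and sum(s > 0.5 for s in scores) / len(scores) > 0.5 else "no signal")
```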

In terms of legal prevention and control, state agencies, platforms, and individuals need to work together. While platforms keep improving their review and monitoring capabilities in line with regulatory requirements, individuals also need to stay vigilant at all times. It may also be worth building shared, collaborative databases that collect and store confirmed fake video samples, or establishing enforcement practices such as anti-AI-fraud alliances.
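A shared sample database of that kind could start as simply as a registry of cryptographic fingerprints of confirmed fake media, which cooperating platforms check before allowing a file to spread. The sketch below is only an assumption about how such a registry might look (the schema and file names are illustrative), using SHA-256 hashes stored in SQLite:

```python
# Minimal sketch of a shared registry of confirmed fake media samples.
# The schema and exact-hash matching are illustrative assumptions; a real
# system would also need perceptual hashing to catch re-encoded copies.
import hashlib
import sqlite3

def file_sha256(path: str) -> str:
    """Compute the SHA-256 fingerprint of a media file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

conn = sqlite3.connect("fake_samples.db")
conn.execute("""CREATE TABLE IF NOT EXISTS fake_samples (
                    sha256 TEXT PRIMARY KEY,
                    source TEXT,          -- which agency or platform confirmed it
                    confirmed_at TEXT)""")

def register_confirmed_fake(path: str, source: str) -> None:
    """Record a confirmed fake sample's fingerprint in the shared registry."""
    conn.execute(
        "INSERT OR IGNORE INTO fake_samples VALUES (?, ?, datetime('now'))",
        (file_sha256(path), source))
    conn.commit()

def is_known_fake(path: str) -> bool:
    """Exact-match lookup a platform could run before distributing a file."""
    row = conn.execute("SELECT 1 FROM fake_samples WHERE sha256 = ?",
                       (file_sha256(path),)).fetchone()
    return row is not None
```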

How to make artificial intelligence a "technology for good" has become an urgent problem. First, data privacy and security are crucial: supervision and management of AI must be strengthened to prevent the abuse and misuse of personal information. Second, the security, reliability, transparency, and fairness of AI technology must be strengthened. Finally, education about AI should be broadened to improve the public's technological literacy and safety awareness and reduce the chance of people being deceived and victimized.

Regarding whether the current wave of infringement and fraud incidents will constrain the industry's development:

Veteran media figure Zu Tengfei argued that "technology itself is innocent"; the key lies in who applies it. AI is not a technology that appeared only today, nor are the "bad actors" who use it to cheat for profit, and we cannot give up eating for fear of choking. Globally, AI is a direction that nearly every country is developing vigorously; it can be applied across many fields, freeing up labor and improving productivity.

Tracing it to the source, AI fraud stems from leaks of personal information. In the past, leaked personal information brought harassing phone calls and floods of spam texts; now it enables AI fraud, and protecting personal information has become imperative. When we download all kinds of apps, they ask to read our phone numbers, photos, location, and more. Has that information been effectively protected by the software companies involved, or is it being resold for profit by people with ulterior motives?

To cope with this new type of AI fraud, people still need to put into practice the advice anti-fraud bloggers keep repeating: protect personal information, verify messages, and do not transfer money or make payments lightly. At the same time, the enterprises involved must strictly comply with relevant policies, improve AI ethics standards, and strengthen safety and regulatory measures.

Tang Chen, author of the account "Student Tang Chen", said that negative cases are inevitable in the development of AIGC; AI fraud, digital humans, and face swapping are all expressions of what artificial intelligence tools can do. Avoiding the negative effects of AI fundamentally depends on regulating who uses it and for what. A few days ago, Stefanie Sun responded to the copyright dispute over "AI Stefanie Sun", saying, "Anything is possible, nothing matters. I believe that thinking purely and being oneself is already enough." Her response won wide public praise. Beyond its plain literary grace and broad-mindedness, its greater significance is that it displayed the confidence that makes people human, a confidence that artificial intelligence cannot replace in the short term, and her mindset is a useful reference for the public.

The same logic applies to any industry that artificial intelligence is transforming. As OpenAI CEO Sam Altman has said, AI will reshape society as we know it, may be the "greatest technology humanity has yet developed", and will greatly improve human life; but facing it squarely also means acknowledging real dangers, and people should be glad to be a little afraid of it. Only with that awe can humanity put AI to good use as the technology develops. In this process, what people worry about is not the technology itself, just as Stefanie Sun is not overly worried about "AI Stefanie Sun", but the motives of the humans who use it. What everyone should pay more attention to is what they hope the technology will evolve into, rather than what the technology will turn humanity into. That may be the essence of the problem.



