FTC Reports Increase in AI-Related Scams on Social Media

Over the past year, the Federal Trade Commission (FTC) has recorded a significant increase in complaints about advertising materials created with artificial intelligence for fraudulent purposes.

The regulator also reported that in 2023 it received numerous complaints from consumers alleging that artificial intelligence had been used to generate marketing content. These complaints concern the alleged use of the advanced technology to commit fraud.

The FTC document, whose contents were reviewed by the media following a Freedom of Information Act request, notes that at least a third of these complaints related to advertisements posted on social media platforms, including Facebook and YouTube. This means that popular online resources, the vast multimedia spaces that make up a significant part of the digital world, are confronting a new category of misinformation. Notably, in this case the problem originates with the platforms’ own advertisers.

In February of last year, the FTC received two complaints about advertising content generated with artificial intelligence. Twelve months later, the number of such complaints had risen to 14. This dynamic is unsurprising given the rapid spread and adoption of generative artificial intelligence.

It is worth noting that the complaints received by the FTC in all likelihood reflect only part of the scale of AI-enabled fraud. Most users are likely to report AI-generated advertising used for criminal purposes directly to the social media platforms themselves, while only a smaller share will contact the regulator.

The FTC document contains examples of the harm caused by such content. One of them is the story of a 30-year-old Los Angeles resident. After watching a YouTube video, the victim clicked a link to a fake Tesla website, where a digital likeness of Elon Musk claimed that the automaker was offering a lucrative investment program promising significant returns in the shortest possible time, supposedly run jointly with cryptocurrency firms. Believing the statements of the digital likeness, which had no connection to the billionaire, the Los Angeles resident transferred $7,000. It later became clear that the user had fallen victim to scammers exploiting the capabilities of artificial intelligence.

A user from Florida reported a deepfake advertisement on YouTube featuring a virtual imitation of Brad Garlinghouse, chief executive officer of the Ripple payment network. The complaint also stated that the platform ignored reports about this content.

A resident of the Philippines complained about a video advertisement in Reels in which scammers claimed to use artificial intelligence to help people earn up to $1,500 a day on a part-time schedule. An Australian user reported an Instagram advertisement for an AI trading platform supposedly developed by Elon Musk, which likewise promised impressive financial results. The Australian invested $250 in the offer and got nothing in return except the loss of that money.

It is worth noting that criminals’ access to advanced technologies makes them harder to expose. Artificial intelligence significantly increases the realism of fraudulent schemes, and advanced tools also allow scammers to hide traces of their activity. In this context, user awareness is extremely important. For example, a search engine query such as “how to know if my camera is hacked” can help users detect signs of unauthorized access to a personal device.

A representative of YouTube’s parent company Alphabet told the media that the technology giant is aware of the deepfake advertising trend and is investing heavily in tools to detect such content. The company also highlighted its efforts to pursue legal action in these cases.

A spokeswoman for Meta Platforms said that the tech giant is cooperating with law enforcement agencies to investigate scammers’ operations and to cut criminals off from the company’s platforms.

More than a month ago, the FTC proposed a new set of rules prohibiting the impersonation of individuals. The regulator announced the proposal after an increase in the number of complaints about impersonation fraud committed with the help of artificial intelligence.

FTC Chair Lina M. Khan said that criminals use AI tools to impersonate people with striking realism and at scale. She separately noted that the growing number of voice-cloning cases and other AI-based crimes underscores the increasing importance of protecting US residents from impersonators. According to her, the proposed FTC rules would strengthen the agency’s tools for combating fraud committed by using AI to pose as individuals.

Farhad Farzaneh, chief product officer at Trustly, told media representatives last month that fraudsters had obtained several million dollars by using artificial intelligence to imitate the company’s executives on a video conference call.

Social media platforms have spent years fighting actors who use these corporate virtual spaces to spread manipulative ideas and narratives aimed at reinforcing false beliefs in the public consciousness. Now the industry faces a new challenge: scammers armed with AI-based deepfake technology.

Serhii Mikhailov

Serhii’s track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services, and he keeps a close eye on the latest developments and innovations in these fields, believing they will have a significant impact on the future direction of the economy as a whole.