US seizes domain names used by AI-powered Russian bot farms for disinformation

By news2source.com


The US Department of Justice (DOJ) said it seized two web domains and searched nearly 1,000 social media accounts that Russian threat actors allegedly used to covertly spread pro-Kremlin disinformation both inside and outside the country.

The DOJ noted, “The social media bot farm used elements of AI to create fictitious social media profiles – often claiming to belong to individuals in the United States – which the operators then used to promote messages in support of Russian government objectives.”

The bot network, comprising 968 accounts on X, is alleged to be part of an elaborate scheme hatched by an employee of Kremlin-funded, Russian state-owned media outlet RT (formerly Russia Today), and assisted by an officer of Russia’s Federal Security Service (FSB), who created and led an unnamed private intelligence organization.

Development efforts for the bot farm began in April 2022, when the individuals procured online infrastructure while concealing their identities and locations. According to the DoJ, the group’s goal was to further Russian interests by spreading disinformation through fictitious online personas representing various nationalities.

The fake social media accounts were registered using private email servers that relied on two domains – mlrtr(.)com and otanmail(.)com – purchased from the domain registrar Namecheap. X has since suspended the bot accounts for violating its terms of service.

The disinformation operation – which focused on the US, Poland, Germany, the Netherlands, Spain, Ukraine and Israel – was carried out using an AI-powered software package called Meliorator, which facilitated the “en masse” creation and operation of the social media bot farm.

Law enforcement agencies in Canada, the Netherlands and the US said, “Using this tool, RT affiliates disseminated disinformation to several countries, including the United States, Poland, Germany, the Netherlands, Spain, Ukraine and Israel.”

Meliorator includes an administrator panel called Brigadir and a backend tool called Taras, which were used to control the authentic-looking accounts, whose profile photos and biographical information were fabricated using an open-source program called Faker.
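The advisory does not detail exactly how the Faker program was used, but the general technique of fabricating persona records is straightforward. As a rough illustration of the concept only (a stdlib-only sketch, not the actual tool; the word lists and field names here are invented for the example):

```python
import random

# Hypothetical word lists standing in for a persona generator's locale data.
FIRST = ["Alex", "Jordan", "Sam", "Taylor", "Morgan"]
LAST = ["Carter", "Reed", "Hayes", "Brooks", "Sloan"]
CITIES = ["Austin", "Denver", "Tampa", "Boise", "Reno"]
INTERESTS = ["politics", "gardening", "hiking", "history", "photography"]

def fabricate_persona(rng: random.Random) -> dict:
    """Assemble a fictitious profile: a name, a location, and a short bio."""
    name = f"{rng.choice(FIRST)} {rng.choice(LAST)}"
    city = rng.choice(CITIES)
    hobbies = rng.sample(INTERESTS, 2)
    bio = f"{city} native. Into {hobbies[0]} and {hobbies[1]}."
    return {"name": name, "location": city, "bio": bio}

rng = random.Random(42)  # seeded so the output is reproducible
personas = [fabricate_persona(rng) for _ in range(3)]
for p in personas:
    print(p)
```

Real tools of this kind add profile photos and locale-consistent details, which is what makes the resulting accounts appear authentic at a glance.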


Each of these accounts is said to have had a distinct identity or “soul” based on one of three bot archetypes: those that propagate political ideologies favorable to the Russian government, those that amplify messaging already shared by other bots, and those that perpetuate disinformation shared by both bot and non-bot accounts.

While X is the only platform on which the toolkit has been identified, further analysis has revealed the threat actors’ intention to expand its capability to cover other social media platforms.

The toolkit was also designed to evade detection.

The agencies noted, “Bot persona accounts make obvious attempts to avoid bans for terms of service violations and avoid being noticed as bots by blending into the larger social media environment. Much like authentic accounts, these bots follow genuine accounts reflective of the political leanings and interests listed in their biographies.”

“Farming is a beloved pastime for millions of Russians,” RT told Bloomberg in response to the allegations, without refuting them.

The development marks the first time the US has publicly accused a foreign government of using AI in a foreign influence operation. No criminal charges have been publicly filed in the case, but the investigation is ongoing.

Doppelganger Lives On

In recent months, Google, Meta, and OpenAI have warned that Russian disinformation operations, including those run by a network known as Doppelganger, have repeatedly leveraged their platforms to spread pro-Russian propaganda.

“The campaign is still active, as are the network and server infrastructure responsible for distributing the content,” Qurium and EU DisinfoLab said in a report published Thursday.

“Surprisingly, Doppelganger does not operate from a hidden data center in a Vladivostok fortress or from a remote military bat cave, but from newly created Russian providers operating inside the largest data centers in Europe. Doppelganger works in close association with cybercriminal activities and affiliate advertisement networks.”

At the heart of the operation is a network of bulletproof hosting providers encompassing Aeza, Evil Empire, GIR, and TNSECURITY, which have also hosted command-and-control domains for various malware families such as Stealc, Amadey, Agent Tesla, Glupteba, Raccoon Stealer, RisePro, RedLine Stealer, RevengeRAT, Lumma, Meduza, and Mystic.


Additionally, NewsGuard, which provides a suite of tools to counter misinformation, recently found that popular AI chatbots are prone to repeating “fabricated narratives from state-affiliated sites masquerading as local news outlets in one-third of their responses.”

Influence Operations by Iran and China

The disclosure also comes as the US Office of the Director of National Intelligence (ODNI) noted that Iran is “becoming increasingly aggressive in its foreign influence efforts, seeking to stoke discord and undermine confidence in our democratic institutions.”

The agency further said that Iranian actors continue to refine their cyber and influence activities, using social media platforms and issuing threats, and that they are amplifying pro-Gaza protests in the US by posing as activists online.

Google, for its part, said it took down more than 10,000 instances of Dragonbridge (aka Spamouflage Dragon) activity on YouTube and Blogger in the first quarter of 2024 alone. Dragonbridge is the name given to a spammy-yet-persistent China-linked influence network that promoted narratives portraying the US in a negative light, as well as content about the Taiwan elections and the Israel-Hamas war targeting Chinese speakers.

By comparison, the tech giant disrupted no fewer than 50,000 such instances in 2022 and an additional 65,000 in 2023. In all, it has thwarted more than 175,000 instances over the network’s lifetime.

Threat Analysis Group (TAG) researcher Zak Butler noted, “Despite their continued profuse content production and the scale of their operations, DRAGONBRIDGE achieves practically no organic engagement from real viewers. In the cases where DRAGONBRIDGE content received engagement, it was almost entirely inauthentic, coming from other DRAGONBRIDGE accounts, not from authentic users.”
