Study: An Army of Twitter Robots Found Spreading Cryptocurrency Scam

Duo Security, Inc

LAS VEGAS, NEVADA – August 14, 2018

Jordan Wright and Olabode Anise of Duo Security have spent three months analyzing 88 million public Twitter accounts and over half a billion tweets. They presented their analysis of the problem at Black Hat 2018 in Las Vegas last week. Their study, ‘Don’t @ Me’, is now available from the Duo Security website.

What did the research uncover?

The main focus of the research was how to improve the detection of Twitter bots. To do this, Wright and Anise had to build their own dataset against which to run their tools. They have provided a link to the tools and scripts that they used on GitHub, along with instructions on how to use them.

“Malicious bot detection and prevention is a cat-and-mouse game. We anticipate that enlisting the help of the research community will enable discovery of new and improving techniques for tracking bots. However, this is a more complex problem than many realize, and as our paper shows, there is still work to be done,” Wright says.

As part of the research they discovered:

  • A sophisticated cryptocurrency scam botnet consisting of at least 15,000 bots. It was actively siphoning money from its victims by using multiple linked attacks.
  • Botnets that were deploying deceptive behavior to appear genuine. This means not tweeting in bursts, being online for longer periods and using random retweets.
  • The use of fake geolocation data in tweets to make bot accounts appear unrelated to one another.
  • How botnets create their own fake social networks with controlled interaction, so that the wider network is not easily identified if one part is taken down.
  • How tracking fake followers can uncover other fake followers and their networks. Fake followers can also persuade legitimate users to follow them, extending their impact.
  • The researchers also built their own anatomy of a fake account that looked at its account attributes, content and the content metadata.
  • The use of amplification bots that like tweets to increase a tweet’s popularity. This increases the likelihood of other users clicking links in the tweets. Those links often point to sites where malware is downloaded onto the user’s device.
  • The discovery and mapping of a scam hub: the cryptocurrency scam botnet’s three-tiered, hierarchical structure, consisting of scam publishing bots, “hub” accounts followed by the bots, and amplification bots.
  • Using unicode characters in tweets instead of traditional ASCII characters.
  • Adding various white spaces between words or punctuation.
  • Transitioning to spoofing celebrities and high-profile Twitter accounts in addition to cryptocurrency accounts.
  • Using screen names that are typos of a spoofed account’s screen name.
  • Performing minor editing on the profile picture to avoid image detection.

The latter five behaviors were seen as botnets began to evolve to avoid detection, according to the study.
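Two of those evasive behaviors, Unicode lookalike characters and typo-squatted screen names, lend themselves to a simple heuristic check. The sketch below is not Duo’s method and the account names are made up; it is a minimal illustration assuming that NFKC normalization (which folds fullwidth and many stylistic Unicode variants to ASCII) plus a small edit-distance threshold is enough to flag a suspicious handle.

```python
import unicodedata

def normalize(name):
    # NFKC folds fullwidth letters and many stylistic Unicode variants
    # toward their ASCII equivalents (it does not catch all confusables).
    return unicodedata.normalize("NFKC", name).lower()

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_spoofed(candidate, official, max_distance=1):
    if candidate == official:
        return False  # this is the official account itself
    a, b = normalize(candidate), normalize(official)
    return edit_distance(a, b) <= max_distance

# A single inserted character (typo-squatting):
print(looks_spoofed("elonnmusk", "elonmusk"))        # True
# Fullwidth Unicode lookalikes collapse to the original under NFKC:
print(looks_spoofed("ＥｌｏｎＭｕｓｋ", "ElonMusk"))  # True
```

In practice such a check would only be one weak signal among the many account attributes the researchers describe.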

The cryptocurrency scam botnet

In late May 2018 the researchers uncovered a cryptocurrency scam. It used spoofed accounts to give away cryptocurrency.

Right now, the Duo Security researchers say the bots are still functioning, imitating otherwise legitimate Twitter accounts, including news organizations, to bleed money from unsuspecting users via malicious “giveaway” links.

The researchers even found Twitter recommending some of the robot accounts in the ‘Who to follow’ section in the sidebar.

Typically, the bots first created a spoofed account for an existing cryptocurrency-affiliated account.

That spoofed account would have what appeared to be a randomly generated screen name – say, @o4pH1xbcnNgXCIE – but it would use a name and profile picture pilfered from the existing account.
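Whether a screen name “looks” randomly generated can be roughly quantified. The heuristic below is not from the paper; it is a minimal sketch assuming that per-character Shannon entropy separates machine-generated handles (mixed case, digits, few repeats) from human-chosen ones.

```python
import math
from collections import Counter

def char_entropy(s):
    # Shannon entropy in bits per character over the name's own alphabet.
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# The bot-style handle scores higher than a human-chosen one:
print(char_entropy("o4pH1xbcnNgXCIE") > char_entropy("jordanwright"))  # True
```

On its own this would misfire on short or unusual legitimate handles, which is one reason the researchers combine many attributes rather than relying on any single signal.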

Bolstered by all that genuine-looking window dressing, the bot would reply to real tweets posted by the original account.

The replies would contain a link inviting the victim to take part in a cryptocurrency giveaway.

Those who followed the links could be infected with malware or redirected to sites that harvested their data, either by asking them to create an account on the site or by requesting the details of a crypto wallet. The attackers would then use that data to defraud the victim.

Wright and Anise discovered more than 2,600 bots spreading similar links. The bots were connected in two separate networks, and enumerating them uncovered a larger botnet behind the scam.

Amplification Bots

One job of these bots was to like tweets, in order to artificially pump up a given tweet’s popularity.

The researchers noticed that these “amplification bots” were also used to increase the number of likes for the tweets sent by other robot accounts, to give the scam an air of authenticity.

“Amplification bots are accounts that exist to like tweets in order to artificially inflate the tweet’s popularity. To make the cryptocurrency scam appear legitimate, we noticed that amplification bots were used to increase the number of likes for the tweets sent by bots,” Wright and Anise wrote.

When the researchers mapped out the connections, they found clusters of bots that received support from the same amplification bots, thus binding them together.
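That mapping step can be sketched as a connected-components computation over a bipartite graph of scam bots and the amplification bots that like their tweets. This is an illustration, not the researchers’ actual pipeline, and the account names below are invented.

```python
from collections import defaultdict

# Hypothetical observations: (scam_bot, amplification_bot) "like" edges.
likes = [
    ("scam1", "amp_a"), ("scam2", "amp_a"),  # scam1/scam2 share amp_a
    ("scam2", "amp_b"), ("scam3", "amp_b"),  # scam2/scam3 share amp_b
    ("scam4", "amp_c"),                      # scam4 stands alone
]

def clusters_by_shared_amplifiers(edges):
    # Build an undirected bipartite graph, then return its connected
    # components restricted to the scam-bot side: bots supported by the
    # same amplifiers (directly or transitively) land in one cluster.
    graph = defaultdict(set)
    for scam, amp in edges:
        graph[scam].add(amp)
        graph[amp].add(scam)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            comp.add(cur)
            stack.extend(graph[cur] - seen)
        scam_side = {n for n in comp if n.startswith("scam")}
        if scam_side:
            components.append(scam_side)
    return components

print(clusters_by_shared_amplifiers(likes))
```

Here scam1, scam2, and scam3 fall into one cluster because they share amplifiers, while scam4 forms its own, mirroring how shared amplification bots bound the real clusters together.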


This is just the beginning, the researchers said in a post about the research.

They’ve open-sourced the tools and techniques they developed during their research and urged others to continue to build on the work and create new techniques to identify and flag malicious bots.

The paper goes into far more detail regarding how complicated it is to research bots in the first place – one vexing problem, for example, is an ongoing lack of data on how many bots are actually on Twitter.

Meanwhile, instead of purging millions of scam bots, Twitter has submitted to Congress a faked ‘Russian Bot List’ associated with accusations of meddling in the 2016 US presidential election, followed by the suspension of accounts belonging to real Americans, as USA Really reported earlier.

Author: USA Really