These Students Built The Anti-Bot Algorithm Twitter Desperately Needs

By Elena Boaghi

In recent Senate hearings, lawmakers grilled representatives from Facebook, Twitter, and Google about how the Russian state used their platforms to sow discord and influence the election. The tech giants were contrite but relatively close-lipped about what they are really going to do to prevent foreign governments from manipulating American voters online.

Twitter in particular has a problem with bots controlled by organizations affiliated with the Kremlin. But the company can’t seem to figure out which of the many accounts on its service are malevolent bots tweeting inflammatory political statements on both sides of the aisle, and which are real Americans exercising their freedom of speech. While the company’s congressional testimony included the fact that it has discovered 2,752 Russian-controlled accounts and 36,000 Russian bots, it still has not announced a surefire way of identifying such bots and removing them from its site.

But it may not be that difficult: a group of students recently took a big step toward solving it. Their Chrome plug-in, called Bot Check, identifies Twitter handles that behave like political propaganda bots. The extension places a tag that says “Botcheck.me” next to the user’s Twitter handle; click it and a pop-up box tells you whether it has found propaganda-bot-like patterns in that user’s activity, and lets you report whether you think the algorithm’s classification is accurate. You can also enter any Twitter handle on their website and it will tell you whether the account is likely a propaganda bot.

[Screenshot: botcheck.me]

The brains behind Bot Check–Ash Bhat and Rohan Phadte, both computer science students at UC Berkeley–were able to identify these propaganda bots by their behavior with 93.5% accuracy. Most bots share certain tendencies: They tend to tweet regularly every few minutes, retweet fake news, obtain a large number of followers very quickly, and retweet other similar accounts.
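Bhat and Phadte haven’t published the code behind Botcheck.me, but a minimal sketch of how behavioral signals like these might be computed for a single account could look like the following Python. The field names (created_at, is_retweet, followers_count) and the particular features are illustrative assumptions, not their actual feature set.

```python
from datetime import datetime, timezone

def behavioral_features(account, tweets):
    """Turn one account's recent activity into a few numeric bot-like signals."""
    # How often the account tweets: posting at a steady clip every few
    # minutes pushes this number far above what most humans produce.
    times = sorted(t["created_at"] for t in tweets)
    span_days = (
        max((times[-1] - times[0]).total_seconds() / 86400, 1e-6)
        if len(times) > 1 else 1.0
    )
    tweets_per_day = len(tweets) / span_days

    # Share of activity that is retweets (amplifying other accounts)
    # rather than original posts.
    retweet_ratio = sum(t["is_retweet"] for t in tweets) / max(len(tweets), 1)

    # Followers gained per day of account age: propaganda bots often pick up
    # a large following unusually quickly for a young account.
    account_age_days = max((datetime.now(timezone.utc) - account["created_at"]).days, 1)
    followers_per_day = account["followers_count"] / account_age_days

    return {
        "tweets_per_day": tweets_per_day,
        "retweet_ratio": retweet_ratio,
        "followers_per_day": followers_per_day,
    }
```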

Bhat and Phadte discovered these behaviors by analyzing 12,000 accounts and those accounts’ 250,000 tweets during September 2017. Compared with verified accounts, they found a spike in accounts with bot-like tendencies joining Twitter on Election Day and Inauguration Day, and they noticed that bot-like accounts tend to tweet about five times as much as human accounts. When they analyzed accounts tweeting #ImpeachTrump and #MAGA, they found that a significant share of the tweets carrying each hashtag came from bots: 49% for #ImpeachTrump and 66% for #MAGA.
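The hashtag breakdown itself is simple arithmetic once each account has a bot-or-not verdict: gather the tweets carrying a hashtag and count what fraction came from flagged accounts. A hedged sketch, assuming a list of tweet records with an author_id field and a per-account verdict lookup (both hypothetical names, not the students’ actual data format):

```python
def bot_share(tweets, is_bot):
    """Fraction of tweets in this sample posted by accounts flagged as bots.

    `tweets` is a list of dicts with an `author_id` field; `is_bot` maps an
    author_id to the classifier's True/False verdict for that account.
    """
    if not tweets:
        return 0.0
    flagged = sum(1 for t in tweets if is_bot[t["author_id"]])
    return flagged / len(tweets)

# e.g. bot_share(maga_tweets, verdicts) would land around 0.66 for the #MAGA
# sample the students describe, and around 0.49 for #ImpeachTrump.
```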

Once Bhat and Phadte had identified these behaviors, they built a database of accounts that fit the bill and trained a machine learning algorithm to recognize them. “It is a tough problem–but definitely doable,” Bhat tells Co.Design in an email. “There’s a lot of low-hanging fruit in terms of what Twitter can do to work toward a solution. If they were to build something like our machine learning model, they could also make pretty significant headway toward this problem.” Twitter did not immediately respond to a request for comment.
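The students’ model itself isn’t public, but the workflow they describe, hand-labeling a set of accounts and training a classifier on their behavioral features, can be sketched with scikit-learn. The random-forest choice, the feature list, and the labeled_accounts format below are assumptions for illustration, not Botcheck.me’s actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FEATURE_NAMES = ["tweets_per_day", "retweet_ratio", "followers_per_day"]

def train_bot_classifier(labeled_accounts):
    """Fit a classifier on hand-labeled accounts.

    `labeled_accounts` is a list of (features_dict, is_bot) pairs, where the
    features come from something like behavioral_features() above.
    """
    X = np.array([[feats[name] for name in FEATURE_NAMES]
                  for feats, _ in labeled_accounts])
    y = np.array([int(is_bot) for _, is_bot in labeled_accounts])

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    # Cross-validated accuracy is a rough analogue of the 93.5% figure the
    # students report, though their exact metric and model are not public.
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {scores.mean():.1%}")

    model.fit(X, y)
    return model
```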

Because some bot-behaving accounts do respond and comment like human accounts, Bhat and Phadte also theorize that some of the bots are moderated in order to appear more humanlike. They also think that some accounts that rarely tweeted, then suddenly started tweeting out divisive political content around and after the election, were originally human accounts that were hacked and converted into bots.

The students were even able to create mini botnets themselves–they bought old Twitter logins and passwords for 11 accounts, then turned the accounts into bots (which now tweet out the latest news stories). Each account cost about $4, and it only took them about 30 minutes. The ease with which they were able to accomplish this makes it seem even more plausible that Russian hackers are turning old accounts into propaganda bots. And if it’s this simple to hack the system, who’s to say they’ll ever stop?

“We’re pretty confused over Twitter’s inactivity and their statement,” Bhat tells Co.Design, referring to the company’s claim that the problem is simply too difficult to solve. After all, it took a couple of college students only six weeks to build the extension, and Twitter is a multibillion-dollar company. Since Botcheck.me launched on Halloween, users have identified more than 5,000 suspicious accounts. Twitter has work to do.
