In a surprising move against a chosenite last week, Twitter suspended the bot account of Jewish journalist Yair Rosenberg, the senior writer at American-Jewish news rag TabletMag.com.
According to a sympathetic “reporter” over at Mashable.com, poor Yair is a victim. You see, during his undoubtedly fair and balanced 2016 coverage of the Trump campaign, Rosenberg was attacked online by Nazi trolls who harassed him with holocaust memes.
As an act of revenge, Rosenberg enlisted the help of developer Neal Chandra to create a “Nazi hunting” twitter bot under the handle “Imposter Buster” that relentlessly spammed and harassed anyone on Twitter deemed antisemitic.
So Twitter permanently suspended the account. Yep. Apparently, one of the gatekeepers at Twitter didn’t get Zuckerberg’s memo stating that Jews and Jewish groups are the coronated controllers and regulators of all “hate speech” on social media. So, of course, this otherwise obscure story is considered newsworthy among Rosenberg’s fellow (((journos))).
In response to Mashable’s demand for answers from Twitter on why it suspended the Jewish journos “Nazi hunter” bot, the company issued the following statement, written by a Care Bear:
“Twitter welcomes the use of our service to counter hate speech and promote positivity, unity, and understanding. We believe this type of counterspeech is a healthy use of Twitter, and a necessary part of a vibrant democracy. Everyone on Twitter must follow the Twitter Rules, including our rules that prohibit hateful conduct, as well the rules that prohibit spammy behavior and automated mentions of other people. We are regularly in touch with developers to help ensure their work fully follows the Twitter Rules and our developer policies.”
You’d think that’d be the end of the story, but no. The New York Times allowed Rosenberg to kvetch about his experience Wednesday in a thousand-word op-ed called “Confessions of a Digital Nazi Hunter.”
The whole thing is really kind of laughable — but here’s a story that’s not so amusing. In fact, it’s downright chilling: “Shadowy Israeli App Turns American Jews Into Foot Soldiers In Online War.” More on that later.
For now, there are numerous articles on The New Nationalist about the plague of bots on social media. However, if you’re unfamiliar with the scourge, here’s a concise rundown on the issue from a January 2017 whitepaper from University College London:
Threats of Twitter bots
Twitter bots have attracted a lot of attention because they can pose serious threats to the health and security of Twitter as a popular public social and communication service.
Spamming: Spammer bots can send a large amount of unsolicited content to other users. The most common objectives of spam are getting users to click on advertising links of questionable value, or propagating computer viruses and other malware.
Fake trending topics: If bots are able to pass as humans through Twitter’s filters, their tweets would be counted when Twitter selects trending topics and hashtags. This would allow bots to create fake trending topics that are not actually popular on Twitter.
Opinion manipulation: A large group of bots can misrepresent public opinion. If the bots are not detected in time, they could tweet like real users while being coordinated centrally around a specific topic. They could all post positive or negative tweets, skewing the metrics that companies and researchers use to track opinion on those topics.
Astroturfing attack: Bots can orchestrate a campaign to create a false sense of agreement among Twitter users, masking the sponsor of the message so that it appears to originate from the community itself.
Fake followers: Fake followers can be bought and sold online. After receiving payment from a user, the botmaster of a botnet can instruct its bots to follow that user. Fake followers can make a user seem more important than he or she actually is. One would expect fake followers to try to appear like real users; however, people rarely verify whether someone’s followers are human or bots.
Streaming API contamination: Much research relies on analysing tweet data returned by Twitter’s streaming API. It has been reported that the API is susceptible to attack by bots, which can time their tweets so that they are included in the streamed sample with a probability far higher than the expected 1%, reportedly as high as 82%.