Tag Archive for Chatbot

Chatbots Taking Over Politics

Mercifully, the 2016 U.S. election cycle is coming to an end. Most people are talking about how terrible both candidates are, and the political conversation online is even worse. One reason it is so hateful: much of the political content on social media outlets like Facebook and Twitter comes from chatbots.

Researchers say that most election tweets come from political chatbots. Chatbots are computer programs that simulate human conversation or chat through artificial intelligence. Political chatbots engage with other users about politics, especially on Twitter (TWTR) and Facebook (FB).

Chatbots are rooting for Trump.

Recode reports that chatbots on both sides are pushing their candidates hard. According to a paper released by Oxford University's Project on Computational Propaganda, Republican bots are out-tweeting Democratic bots on the Web.

The researchers found that most bots root for Trump to win the election. During the third presidential debate, Twitter bots sharing pro-Trump content outnumbered pro-Clinton bots 7 to 1. Between the first and second debates, bots generated more than 33% of pro-Trump tweets, compared with 20% of pro-Clinton tweets.

Twitter bot

The Oxford team defines a Twitter bot as software that runs an account automatically and acts independently. Bots can retweet, like, and reply to tweets; they can also follow accounts and post tweets of their own.
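To make the behavior concrete, here is a minimal sketch in Python of the kind of rule-driven automation the Oxford team describes. The `Tweet` type and the keyword trigger are invented for illustration; this is not any real Twitter API, just the decision logic a simple amplification bot might run.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str

def bot_actions(tweet: Tweet, triggers: set[str]) -> list[str]:
    """Decide which automated actions to take on an incoming tweet."""
    actions = []
    text = tweet.text.lower()
    if any(word in text for word in triggers):
        actions.append("retweet")                 # amplify matching content
        actions.append("like")
        actions.append(f"reply:@{tweet.author}")  # canned reply to the author
    return actions

print(bot_actions(Tweet("voter1", "Watch the debate tonight! #debate"), {"#debate"}))
# → ['retweet', 'like', 'reply:@voter1']
```

A real bot would loop this over a live stream of tweets; the point is that a few lines of trigger logic are enough to retweet, like, and reply at machine speed.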

The researchers found that highly automated Twitter accounts (meaning they tweeted over 200 times during the Oct. 19-22 data collection period with a debate-related hashtag or candidate mention) accounted for nearly 25% of Twitter traffic surrounding the last debate.
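The researchers' threshold is easy to apply once tweets are grouped by account: flag anything over 200 matching tweets in the collection window. A sketch, with made-up account names and counts:

```python
from collections import Counter

# Per the Oxford study: over 200 debate-tagged tweets in the window
HIGH_AUTOMATION_THRESHOLD = 200

def flag_high_automation(tweets_by_account, threshold=HIGH_AUTOMATION_THRESHOLD):
    """Return accounts whose tweet count in the window exceeds the threshold."""
    return {acct for acct, n in tweets_by_account.items() if n > threshold}

# Hypothetical per-account counts of debate-related tweets, Oct. 19-22
counts = Counter({"@bot_4711": 512, "@maga_stream": 243, "@jane_doe": 7})
print(sorted(flag_high_automation(counts)))
# → ['@bot_4711', '@maga_stream']
```

Counting tweets is the easy part; the study's harder work was matching tweets to hashtags and candidates in the first place.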

The problem with the outpouring of automated engagement on Twitter is that campaigns often measure success (and decide where and how to invest in further outreach) by counting these retweets, likes, replies, and mentions.

Chatbots can give issues unwarranted clout.

The article states that it is hard to tell how many retweets and likes come from real supporters, and a proliferation of chatbots can give candidates and issues unwarranted clout. Throughout the race, Trump has discounted the value of polls; they're rigged, he says. Instead, his campaign points Americans to how viral he is on social media and to the size of his rallies.

The third debate came on the heels of the leaked tape of Trump bragging about sexually assaulting women, which went viral. The article speculated that Trump's uptick in automated Twitter fandom during the debate may have been intended to counteract the lingering outrage against the candidate on social media.

Increasingly, journalists use Twitter to report stories and demonstrate public interest. They believe it's an excellent way to bring audience voices into a political discussion, though more voices don't always make for a better conversation. The author warns that many of the engagement numbers aren't from real people, a sobering reminder that virality is no demonstration of genuineness.

Automated fake profiles that look real

Donald Trump likes to boast that he's more popular than Hillary Clinton on social media. After all, he has 12.9 million Twitter followers, while Clinton lags behind with a mere 10.1 million. But it's hard to say how much those numbers mean if many of them represent robots. Sam Woolley, a researcher at the University of Washington who studies the political use of social media bots, told Revelist "… that well over half of his [Trump] followers are automated, fake profiles made to look like real people."

Mr. Howard told CNN, "The takeaway is that we should be skeptical about social media … Politicians use bots to influence debate. It's often a form of negative campaigning because in many cases these bots can be very vicious."

rb-

Filippo Menczer, a computer scientist at Indiana University's School of Informatics and Computing, said botnets have been deployed in many countries to squelch dissent. "We've seen examples in other countries – in Russia, Iran, and Mexico – of bots used to destroy social movements. They would impede conversations. All of a sudden, you would see hundreds of thousands of junk tweets flooding your feed."

Notice the Trump-Russia tie.

This is one of the risks of automating work with bots, which I wrote about here. The pro-Trump bots keep retweeting one another, skewing their totals upward and burying the discussion points of actual voters under an avalanche of bot chatter.

Watch out: it won't be long before chatbots are granted rights under dubious SCOTUS rulings like Citizens United.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Chatbot Risks

Chatbots are the latest rage on social media. As Time explained, they have been around since the 1960s, when MIT professor Joseph Weizenbaum created a chatbot called ELIZA. Chatbots later found a home on desktop messaging clients like AOL Instant Messenger, then went dormant as messaging moved off the desktop and onto mobile devices.

But they're poised for a resurgence in 2016, for two reasons. First, artificial intelligence and cloud computing have gotten better thanks to improvements in machine learning. Second, bots could be big money.

Tech titans have chatbots on social media

All the tech titans have released social bots on the web: Apple's (AAPL) Siri, Facebook's (FB) "bots on Messenger", Google's (GOOG) Allo, and Microsoft's (MSFT) ill-fated Tay. They believe there's a buck to be made here, and they're scrambling to make sure they don't get left out.

The July issue of the Communications of the ACM included an article, "The Rise of Social Bots," which lays out social bots' impact on online communities and society at large. The authors define a social bot as a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.

Business Insider published this infographic about the social bot ecosystem.

Business Insider infographic

Chatbots can be deceptive

The ACM article argues that social bots populate techno-social systems; they are often benign, or even useful, but some are created to do harm by tampering with, manipulating, and deceiving social media users. The article offers several examples of how social bots can be harmful. The first involves the Twitter (TWTR) posts around the Boston Marathon bombing: the researchers' analysis found that social bots were automatically retweeting false accusations and rumors, and that forwarding these claims without verification granted the false information more influence.

The ACM article also discusses how social bots can artificially inflate political candidates. During the 2010 midterm elections, some politicians used social bots to inject thousands of false tweets smearing their opponents. This kind of activity, also called astroturfing or Twitter bombing, puts the integrity of the democratic process at risk.

Anti-vaxxer chatbots

The article offers another example of social bots being used to influence legislation in California. During the recent debate over a California law on vaccination requirements, there appears to have been widespread use of social bots by opponents of vaccination. This social bot interference puts an unknown number of people at risk of disease or death.

Greed is the most likely use of social bots. One example from the article is the April 2013 hack of the Associated Press's Twitter account. The Syrian Electronic Army used the hacked account to post a false statement that a terror attack on the White House had injured President Obama. The false story, amplified by social bots spreading the rumor, provoked an immediate $136 billion stock market crash.

Chatbots manipulate social media reality

Research has shown that human emotions are contagious on social media. This means social bots can be used to manipulate users' perception of reality without the users being aware of it. The article says the latest generation of Twitter social bots exhibits many "human-like" online behaviors that make it difficult to separate bots from humans. According to the authors, social bots can:

  • Search the web to fill in their profiles,
  • Post pre-collected content at defined times,
  • Engage in conversations with people,
  • Infiltrate discussions and add topically correct information.

Some bots work to gain status by seeking out and following popular or influential users, or by taking other steps to garner attention. Other bots are identity thieves, adopting slight variants of user names to steal personal information, pictures, and links.

Strategies to thwart bad chatbots

The authors review several attempts to thwart these increasingly sophisticated bots.

1. Innocent-by-association – This approach measures the number of legitimate links vs. the number of social bot (Sybil) links a user has. The method has proven flawed: researchers found that Facebook users are fairly indiscriminate when adding friends. The article says that 20% of legitimate Facebook users accept any friend request, and 60% accept requests from anyone with at least one contact in common.
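The innocent-by-association test boils down to a ratio over an account's links. A toy version in Python (the link labels are assumed to be known in advance, which is exactly why the method fails in practice):

```python
def sybil_link_fraction(links: dict[str, bool]) -> float:
    """Fraction of an account's links that point to known Sybil accounts.

    `links` maps a linked account name to True if that account is a
    known Sybil (bot), False if it is legitimate.
    """
    if not links:
        return 0.0
    return sum(links.values()) / len(links)

# A user who indiscriminately accepted friend requests
links = {"friend1": False, "friend2": False, "bot_a": True, "bot_b": True}
print(sybil_link_fraction(links))  # → 0.5
```

Since so many legitimate users accept requests from strangers, a high Sybil fraction says little about whether the account itself is a bot.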

2. Crowdsourcing – Another approach is to have users and experts review an account, with the reviewers reaching a majority decision on whether the account is a bot or legitimate. The authors pointed out some issues with crowdsourcing:

  • It will not scale to large existing social networks like Facebook or Twitter.
  • "Experts" need to be paid to check accounts.
  • It exposes users' personal information to unknown reviewers and "experts."
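The majority decision the authors describe is simple to state in code. A sketch where each reviewer votes "bot" or "legit" (the labels and tie-handling are my own choices, not from the article):

```python
from collections import Counter

def majority_verdict(votes: list[str]) -> str:
    """Return the majority label among reviewer votes; ties are 'undecided'."""
    if not votes:
        return "undecided"
    tally = Counter(votes)
    top = tally.most_common(2)
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "undecided"  # no majority either way
    return top[0][0]

print(majority_verdict(["bot", "bot", "legit"]))  # → bot
```

The code is trivial; the scaling problem the authors raise is the cost of getting enough trustworthy human votes per account.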

3. Feature-based detection – The third method noted by the authors uses behavior-based analysis with machine learning to separate human-like behavior from bot-like behavior. Some of the features these applications use include:

  • The number of retweets.
  • Age of account.
  • Username length.
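Feature-based detection in the article pairs behavioral features like these with a trained machine-learning model. As a minimal stand-in for such a model, here is a hand-weighted score over the three features listed above; the weights and thresholds are invented for illustration and are not from the ACM article:

```python
def bot_score(retweet_ratio: float, account_age_days: int, username_len: int) -> float:
    """Crude bot-likelihood score in [0, 1] from three behavioral features.

    Heavy retweeting, a brand-new account, and a long (often auto-generated)
    username each push the score up. Weights are illustrative only.
    """
    score = 0.0
    score += 0.5 * retweet_ratio                            # mostly retweets → bot-like
    score += 0.3 * (1.0 if account_age_days < 30 else 0.0)  # very young account
    score += 0.2 * (1.0 if username_len > 12 else 0.0)      # long, random-looking handle
    return min(score, 1.0)

print(bot_score(retweet_ratio=0.95, account_age_days=5, username_len=15))  # ≈ 0.975
```

A real detector would learn these weights from labeled accounts rather than hard-coding them, but the shape of the computation is the same: features in, bot-likelihood out.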

4. Sybil until proven otherwise – The Chinese social network RenRen uses the fourth method noted by the authors: every account is treated as a Sybil until proven otherwise. According to the article, this approach is better at detecting unknown attacks, like embedding text in graphics.

rb-

While no technology can match people's ability to critically assimilate information, the authors call for new ways to tell social bot-generated spam from real political discourse.

The researchers speculate that there will be no definitive solution to the social bot problem. The more likely outcome is a bot arms race, like the ongoing wars on spam and other malware.

Related articles
  • Man vs. Machine: What do Chatbots Mean for Social Media? (blogs.adobe.com)
