Tag Archive for Eliza

Russia Trolls Public Health

Hey, here is a surprise – things on Facebook are fake. GovInfo Security reports that social media trolls sponsored by Russia have been actively stirring up the mindless vaccination debates. Researchers from George Washington University and Johns Hopkins University published their findings on August 23, 2018, in a report, “Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate,” in the American Journal of Public Health. In the article, they studied tweets on the vaccine debate collected from 2014 to 2017.

Facebook profited from Russia-backed accounts trying to sway the 2016 U.S. presidential election

According to the research, the Internet Research Agency, a company backed by the Russian government, is at the center of the disinformation. The known Russian social media troll farm, which specializes in online influence operations, is linked to the spread of “polarized and anti-vaccine” misinformation via social media. The social media posts appear designed to undercut trust in vaccines. Such disinformation could lead to lower vaccination rates and contribute to a rise in outbreaks of measles, mumps, and rubella among children, among other viral infections.

How do anti-vaccine messages spread?

From 2014 to 2017, Twitter bots and Russian trolls disseminated anti-vaccine messages in an attempt to erode public consensus on vaccination in the U.S.


The researchers’ review of anti-vaccine messaging on Twitter found that the sources of disinformation are automated. There appears to be a steady stream of vaccine discussion being generated by social media bots – automated accounts. The researchers also identified social media “cyborgs,” hacked accounts taken over by bots, as well as social media trolls – people who often disguise their identity and seek to sow discord.

The researchers also identified “content polluters,” who used anti-vaccine messages as bait to entice their followers to click on advertisements and links to malicious websites. The researchers contend that content polluters contribute to high levels of anti-vaccine content. In the case of Russian trolls, however, their “messages were more political and divisive” and included both pro-vaccine and anti-vaccine content.

Trolls tied to Russia

To identify accounts controlled by Russian trolls, the researchers used previously published information on Twitter accounts that intelligence agencies have tied to Russian government disinformation campaigns. As an example, CNN reports that one Russian troll account in the researchers’ sample sent 253 tweets containing the #VaccinateUS hashtag. Among those tweets with the hashtag:

  • 43% were pro-vaccine,
  • 38% were anti-vaccine,
  • 19% were neutral.

By posting a variety of anti-, pro-, and neutral tweets and directly confronting vaccine skeptics, trolls and bots “legitimize” the vaccine debate, the researchers wrote in the study. The researchers noted,

This is consistent with a strategy of promoting discord across a range of controversial topics, a known tactic employed by Russian troll accounts … One commonly used online disinformation strategy, amplification, seeks to create impressions of false equivalence or consensus through the use of bots and trolls.

The prevalence of bot, troll, and cyborg accounts in online discourse about vaccines threatens to skew discussions, the researchers warn. “This is vital knowledge for risk communicators, especially considering that neither members of the public nor algorithmic approaches may be able to easily identify bots, trolls, or cyborgs.”

The researchers found that the trolls’, bots’, and cyborgs’ goal is to create open-ended discussions designed to amplify online debates and disagreements. One tactic cited in the article is rehashing discredited research published 20 years ago, with fake claims of risks that have led some parents to opt not to vaccinate their children.

Threats from online misinformation

The threat from online misinformation is that even fewer parents will vaccinate their children against measles, mumps, and rubella. The researchers wrote that vaccine-hesitant parents are more likely to turn to the internet for information and less likely to trust healthcare providers and public health experts on the subject … Exposure to the vaccine debate may suggest that there is no scientific consensus, shaking confidence in vaccination. The researchers warn,

Recent resurgences of measles, mumps, and pertussis and increased mortality from vaccine-preventable diseases such as influenza and viral pneumonia underscore the importance of combating online misinformation about vaccines.

Amplifying debates over vaccines appears to be part of what ambassador John B. Emerson described as the Kremlin’s 4D campaigns – dismiss, distort, distract, and dismay. In a 2015 speech, Mr. Emerson warned that the Russian government was becoming more expert at running these types of propaganda campaigns.

Intelligence experts in the U.S. and Europe have warned that these Kremlin campaigns continue. In February, U.S. Director of National Intelligence Dan Coats warned the Senate Intelligence Committee that the intelligence community expected Russia to attempt to amplify existing divisions in U.S. society to spread chaos for strategic effect. Director Coats warned,

At a minimum, we expect Russia to continue using propaganda, social media, false-flag personas, sympathetic spokespeople and other means of influence to try to exacerbate social and political fissures in the United States.

Anti-Bot research

Little research has gone into how to identify social media trolls or bots that influence online discussions. (rb- I covered some of the efforts underway to detect bots in 2016.) In 2015, DARPA ran a contest asking researchers to determine which of the accounts behind a stream of vaccine-related tweets it had harvested in 2014 were bots. Researchers were given a data set with more than 4 million messages from 7,000 accounts, of which 39 were bots.

MIT Technology Review reported that the winner, data science and social analytics firm SentiMetrix, correctly identified all the bots, with only one false positive. SentiMetrix used an algorithm to look for “linguistic cues” that a poster was fake, such as:

  • Tweets that used bad grammar,
  • Output was similar to other chatbots like Eliza,
  • Profile pictures that used stock images,
  • Numbers of tweets posted over time,
  • Unusual posting patterns,
  • Female username with a profile photo of a bearded man. (rb- Sound familiar? I wrote about some of these same steps in 2016)

The research led SentiMetrix to identify 25 bots, which enabled it to train a machine-learning algorithm to pinpoint 10 more. Despite such work, “the public health community largely overlooked the implications of these findings,” the Johns Hopkins and George Washington researchers say.
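Cues like these are easy to combine into a rough score. The sketch below is purely illustrative – the feature names and weights are invented for this example and are not SentiMetrix’s actual algorithm:

```python
# Hypothetical bot-likelihood score built from simple cues like those
# listed above. Features and weights are invented for illustration.

def bot_score(account):
    """Return a 0.0-1.0 score; higher means more bot-like."""
    score = 0.0
    if account.get("uses_stock_photo"):            # stock profile picture
        score += 0.3
    if account.get("grammar_errors_per_tweet", 0) > 2:
        score += 0.2                               # unusually bad grammar
    if account.get("tweets_per_day", 0) > 100:     # abnormal posting volume
        score += 0.3
    if account.get("posts_at_fixed_intervals"):    # machine-like timing
        score += 0.2
    return min(score, 1.0)

suspect = {"uses_stock_photo": True, "tweets_per_day": 150,
           "grammar_errors_per_tweet": 3, "posts_at_fixed_intervals": False}
print(bot_score(suspect))  # high score suggests a bot
```

A real detector, like the machine-learning step described above, would learn such weights from labeled examples rather than hard-coding them.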

The impact of social media bots on the vaccine debates is not an abstract concern. The U.S. Centers for Disease Control and Prevention reports it is investigating 124 cases of measles across 22 states and DC, including Michigan. That’s already more than the 118 cases counted in the U.S. during all of 2017.

Spreading measles in Michigan

WOODTV in Grand Rapids reports that cases of measles in Michigan have hit a two-decade high. Angela Minicuci with the MDHHS told WOODTV the state has “tallied 10 cases of measles so far this year — the highest case count since 1998.”

The CDC says low vaccination rates are to blame for recent measles outbreaks. They report the majority of those who contract measles, which is highly contagious, have not been vaccinated.

One reason so many are at risk of spreading measles is that 18 states allow parents to opt out of vaccinating their schoolchildren for non-medical reasons. In June 2018, researchers found multiple “hotspot” areas “at high risk for vaccine-preventable pediatric infection epidemics.” Included in these hotspots are Detroit, Troy, and Warren, Michigan. The DetNews reports these areas had more than 400 kindergartners receive non-medical vaccination exemptions.

In 2017, an outbreak of measles and whooping cough forced Grand Traverse Academy in Traverse City, Michigan to close for a week. Grand Traverse County has one of Michigan’s highest rates of schoolchildren opting out of vaccines — twice the state average and six times the national rate for kindergartners in 2013-14.

The problem is not limited to the United States. In Europe, there’s been a “dramatic increase” in measles infections. WHO says there were 23,927 cases of measles in Europe during 2017 and 5,273 in 2016.

rb-

Renée DiResta, who researches disinformation online at Data For Democracy, pointed out the obvious: “This isn’t just happening on Twitter. This is happening on Facebook, and this is happening on YouTube, where searching for vaccine information on social media returns a majority of anti-vaccine propaganda.”

She says, “The social platforms have a responsibility to start investigating how this content is spreading and the impact these narratives are having on targeted audiences.”

The Russians want us focused on our own problems so that we don’t focus on them. 


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Chatbot Risks

Chatbots are the latest rage on social media. As Time explained, they have been around since the 1960s. That’s when MIT professor Joseph Weizenbaum created a chatbot called ELIZA. Chatbots found a home on desktop messaging clients like AOL Instant Messenger. Chatbots went dormant as messaging transitioned away from desktops and onto mobile devices.

But they’re poised for a resurgence in 2016. There are two reasons for this. First, artificial intelligence and cloud computing have gotten better thanks to improvements in machine learning. Second, bots could be big money.

Tech titans have chatbots on social media

All the tech titans have released social bots on the web: Apple’s (AAPL) Siri, Facebook’s (FB) “bots on Messenger,” Google’s (GOOG) Allo, and Microsoft’s (MSFT) ill-fated Tay. They believe there’s a buck to be made here, and they’re scrambling to make sure they don’t get left out.

The July issue of the Communications of the ACM included an article, “The Rise of Social Bots,” which lays out social bots’ impact on online communities and society at large. The authors define a social bot as a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.

The Business Insider published this infographic about the social bot ecosystem.


Chatbots can be deceptive

The ACM article argues that social bots populate techno-social systems; they are often benign, or even useful, but some are created to do harm by tampering with, manipulating, and deceiving social media users. The article offers several examples of how social bots can be a hindrance. The first example involves Twitter (TWTR) posts around the Boston Marathon bombing. The researchers’ analysis found that social bots were automatically retweeting false accusations and rumors. The researchers argue that forwarding false claims without verifying them granted the false information more influence.

The ACM article also discusses how social bots can artificially inflate support for political candidates. During the 2010 mid-term elections, some politicians used social bots to inject thousands of false tweets to smear their opponents. This type of activity puts the integrity of the democratic process at risk. These types of attacks are also called astroturfing, or Twitter-bombs.

Anti-vaxxer chatbots

The article offers another example of the use of social bots to influence an election in California. During a recent debate in California over a law on vaccination requirements, there appeared to be widespread use of social bots by opponents of vaccination. This social bot interference puts an unknown number of people at risk of death or disease.

Greed is the most likely use of social bots. One example from the article is the April 2013 hack of the Twitter account of the Associated Press. In this case, the Syrian Electronic Army used the hacked account to post a false statement about a terror attack on the White House that injured President Obama. This false story provoked an immediate $136 billion stock market crash, an unwarranted result of the widespread use of social bots to amplify false rumors.

Chatbots manipulate social media reality

Research has shown that human emotions are contagious on social media. This means that social bots can be used to artificially manipulate social media users’ perception of reality without users being aware they are being manipulated. The article says the latest generation of Twitter social bots exhibits many “human-like” online behaviors that make it difficult to separate bots from humans. According to the authors, social bots can:

  • Search the web to fill in their profiles,
  • Post pre-collected content at a defined time,
  • Engage in conversations with people,
  • Infiltrate discussions and add topically correct information.

Some bots work to gain greater status by searching out and following popular or influential users or taking other steps to garner attention. Other bots are identity thieves, adopting slight variants of user names to steal personal information, pictures, and links.

Strategies to thwart bad chatbots

The authors review several attempts to thwart these growing sophisticated bots.

1. Innocent-by-association – This method measures the number of legitimate links vs. the number of social bot (Sybil) links a user has. This method was proven to be flawed: researchers found that Facebook users are pretty indiscriminate when adding friends. The article says that 20% of legitimate Facebook users accept any friend request and 60% accept friend requests with only one contact in common.
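The link-ratio idea can be sketched in a few lines; the function and its inputs are hypothetical simplifications, assuming a set of known-legitimate accounts is available:

```python
# Hypothetical sketch of innocent-by-association scoring: the fraction of
# an account's links that point to known-legitimate users. As the article
# notes, this fails in practice because real users befriend strangers
# (and bots) freely, inflating the "legitimate" link count of Sybils.

def legit_link_fraction(friends, known_legit):
    """Fraction of an account's friends that are known-legitimate."""
    if not friends:
        return 0.0
    return sum(1 for f in friends if f in known_legit) / len(friends)

print(legit_link_fraction(["ann", "bob", "cara", "sybil_42"],
                          {"ann", "bob", "cara"}))  # 0.75
```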

2. Crowdsourcing – Another approach to stop social bots is crowdsourcing. The crowdsourcing approach would rely on users and experts reviewing an account. The reviewers would have to reach a majority decision that the account in question was a bot or legit. The authors pointed out some issues with crowdsourcing.

  • It will not scale to large existing social networks like Facebook or Twitter.
  • “Experts” need to be paid to check accounts.
  • It exposes users’ personal information related to the account to unknown users and “experts.”

3. Feature-based detection – The third method noted by the authors, feature-based bot detection, uses behavior-based analysis with machine learning to separate human-like behavior from bot-like behavior. Some of the features these applications examine include:

  • The number of retweets.
  • Age of account.
  • Username length.

4. Sybil until proven otherwise – The Chinese social network RenRen uses the fourth method noted by the authors: a “Sybil until proven otherwise” approach. According to the article, this approach is better at detecting unknown attacks, like embedding text in graphics.

rb-

While people’s ability to critically assimilate information is beyond technology, the authors call for new ways to detect social bot-generated spam vs. real political discourse.

The researchers speculate there will not be a solution to the social bot problem. The more likely outcome is a bot arms race, like what we are seeing in the war on spam and other malware.

Related articles
  • Man vs. Machine: What do Chatbots Mean for Social Media? (blogs.adobe.com)

 
