Tag Archive for Security

2016’s Most Dangerous Online Celebrities

The 10th annual McAfee Most Dangerous Celebrities study, published by Intel Security, was recently released.  The yearly report uncovers which celebrities are the most dangerous to search for online.  These dangerous celeb results can expose fans to viruses, malware, and identity theft while searching for the latest information on today’s pop culture stars.  Intel (INTC) used its McAfee site rating software to find the number of risky sites generated by searches on Google, Bing, and even beleaguered Yahoo.

“Consumers today remain fascinated with celebrity culture and go online to find the latest pop culture news,” said Gary Davis, chief consumer security evangelist at Intel Security.  “With this craving for real-time information, many search and click without considering potential security risks.  Cyber-criminals know this and take advantage of this behavior by attempting to lead them to unsafe sites loaded with malware.”

Most Dangerous Online Celebrities

This year’s most dangerous celebrity online is Amy Schumer.  The comic joins recent most dangerous celebrity online alumni Jimmy Kimmel, Jay Leno, and Emma Watson.  According to Intel Security, a search for the “Trainwreck” actress has a 16.1% likelihood of returning results that direct fans to sites with viruses and malware.

Justin Bieber is the second most dangerous online celebrity.  As for the “Sorry” singer, there’s a 15% chance that Beliebers could connect with a malicious website.

The rest of this year’s Top 10 list included:
3.  Carson Daly 13.4%
4.  Will Smith 13.4%
5.  Rihanna 13.3%
6.  Miley Cyrus 12.7%
7.  Chris Hardwick 12.6%
8.  Daniel Tosh 11.6%
9.  Selena Gomez 11.1%
10.  Kesha 11.1%

Intel says there are two big truths that cyber-criminals exploit about celebrity fandom.  The first is that consumers want convenience.  As people rely less on cable and, instead, search for the content they want online, they’ll find many third-party sources for their favorite music or videos.

But unofficial sources are often dangerous.  Links can send users to unsafe sites, where sneaky tactics for stealing data and usernames lie in wait.  The popular torrent format for downloading files also lets cyber-criminals sneak viruses onto devices.

Social media-obsessed culture

The second truth attackers are exploiting is the desire for gossip – now.  In today’s social media-obsessed culture, fans want real-time information about their favorite celebrities.  It isn’t uncommon for a celebrity to share a photo, post, or comment around the world in a matter of seconds.  Those posts often spark a wave of searches.  With all that traffic, cyber-criminals can trick fans into visiting a faux-gossip website infested with malware to steal passwords, credit card information, and more.  This method is particularly effective on social media channels, like Facebook, Twitter, and WhatsApp, where the standards for trust are low.

How to protect yourself

In addition to recommending anti-virus software, Intel, whose products include McAfee software, urges consumers to be skeptical when surfing the web.  But don’t worry.  No one is asking you to give up your celebrity infatuation; here are a few things you can do to make sure you’re entertained safely:

  • Watch media from trusted sources.  Are you looking for the latest episode of Amy Schumer’s TV show, Inside Amy Schumer?  Stick to the official source at comedycentral.com or well-known and trusted video streaming services like Hulu to ensure you aren’t clicking on anything malicious.
  • Be wary of searching for file downloads.  Of all the celebrity-related searches conducted, “torrent” was the riskiest by far.  According to Intel, a search for “Amy Schumer Torrent” results in a 33% chance of connecting to a malicious website.  Cybercriminals can use torrents to embed malware within authentic files, making it tricky to distinguish safe downloads from unsafe ones.  It’s best to avoid using torrents, especially when so many legitimate streaming options are available.
  • Keep your personal information private.  Cybercriminals are always looking for ways to steal your personal information.  If you receive a request to enter information like your credit card, email, home address, or social media log-in, Intel says you should not give it out thoughtlessly.  Research the request and make sure it’s not a phishing or scam attempt that could lead to identity theft.
  • Use security protection while browsing.  Many software products can scan web pages you’re browsing, alerting you to malicious websites and potential threats.  This can keep you safe as you study the latest gossip.

rb-

The stars are new, but the game is the same.  In addition to applying some critical thinking to your web browsing, the same advice from 2015, 2014, 2013, 2012, etc. still stands.

Maybe I will get more hits after putting these pop names in here.

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005.  You can follow him on LinkedIn, Facebook, and Twitter.  Email the Bach Seat here.

FIDO

Since 2013, there have been nearly 5 billion data records lost or stolen, according to the Breach Level Index. The UN says there are 6.8 billion mobile phone accounts, which means globally 96% of humans have a cell phone. It would seem these factoids could interact to cut the pace of lost or stolen data records. An effort called FIDO is underway to use mobile devices to better secure data.

FIDO (Fast IDentity Online) is an open standard for a secure and easy-to-use universal authentication interface. FIDO plans to address the lack of interoperability among strong authentication devices. TechTarget says FIDO is developed by the FIDO Alliance, a non-profit organization formed in 2012. FIDO members include Agnitio, Alibaba, ARM (ARMH), Blackberry (BBRY), Google (GOOG), Infineon Technologies, Lenovo (LNVGY), MasterCard, Microsoft (MSFT), Netflix, Nok Nok Labs, PayPal, RSA, Samsung, Synaptics, Validity Sensors, and Visa.

The FIDO specifications define a common interface for user authentication on the client. The article explains that the goal of FIDO authentication is to promote data privacy and stronger authentication for online services without hard-to-adopt measures. FIDO’s standard supports multifactor authentication and strong features like biometrics. It stores supporting data in a smartphone to eliminate the need for multiple passwords.

The author writes that FIDO is much like an encrypted virtual container of strong authentication elements. The elements include biometrics, USB security tokens, Near Field Communication (NFC), Trusted Platform Modules (TPM), embedded secure elements, smart cards, and Bluetooth. Data from authentication sources are used for the local key, while the requesting service gets a separate login to keep user data private.

FIDO is based on public-key cryptography and works through two different protocols for two different user experiences. According to TechTarget, the Universal Authentication Framework (UAF) protocol allows the user to register an enabled device with a FIDO-ready server or website. Users authenticate on their devices with fingerprints or PINs, for example, and log in to the server using a secure public key.
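The UAF flow can be sketched in a few lines: the device generates a keypair, the server stores only the public half, and each login signs a fresh server challenge locally. This is an illustrative stand-in, not the FIDO protocol itself – real FIDO authenticators use standardized signature algorithms and attestation, while this sketch uses a minimal Lamport one-time signature built from SHA-256 so it runs with nothing but the Python standard library:

```python
import hashlib
import secrets

def keygen():
    # Lamport one-time keypair: 256 pairs of random secrets;
    # the public key is the SHA-256 hash of each secret.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[hashlib.sha256(half).digest() for half in pair] for pair in sk]
    return sk, pk

def _bits(message):
    # The 256 bits of the message digest select which secrets to reveal.
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message):
    # Reveal one secret per digest bit; the private key never leaves the device.
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(pk, message, signature):
    # The server checks the revealed secrets against the stored hashes.
    return all(hashlib.sha256(signature[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(message)))

# Registration: the device keeps sk; the server stores only pk.
sk, pk = keygen()
# Login: the server sends a random challenge; the device signs it locally
# after the user unlocks the key with a fingerprint or PIN.
challenge = secrets.token_bytes(32)
sig = sign(sk, challenge)
print(verify(pk, challenge, sig))        # → True
print(verify(pk, b"different challenge", sig))  # → False
```

Because the server stores only public material and a signature is bound to one challenge, a server breach or an intercepted login yields nothing an attacker can reuse.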

The Universal Second Factor (U2F) protocol, originally developed by Google, is an effort to get the Web ecosystem (browsers, online service providers, operating systems) to authenticate users with a strong second factor, such as a USB touch key or NFC on a mobile device.

FIDO’s local storage of biometrics and other personal identification is intended to ease user concerns about personal data stored on an external server or in the cloud. By abstracting the protocol implementation, FIDO also reduces the work required for developers to create secure logins.

Samsung and PayPal have announced a FIDO authentication partnership. Beginning with the Samsung Galaxy S5, users can authorize transactions to their PayPal accounts using their fingerprints. The system authenticates users by sending unique encrypted keys to their online PayPal wallets without storing biometric information on the company’s servers.

FIDO promises to clean up the strong authentication marketplace, making one-fob-fits-all products easier. The open standards shift some of the burden for protecting personally identifiable information to software on devices or biometric features, and away from stored credentials and passwords. ComputerWeekly described FIDO’s potential this way:

The FIDO method is more secure than current methods because no password or identifying information is sent out; instead, it is processed by software on the end user’s device that calculates cryptographic strings to be sent to a login server.

In the past, multiple-factor authentication methods were based on either a hardware fob or a tokenless product. These products use custom software, proprietary programming interfaces, and considerable work to integrate the method into your existing on-premises and Web-based applications.

ComputerWeekly says FIDO will divorce second-factor methods from the actual applications that depend on them. That means the same authentication device can be used in multiple ways for signing into a variety of providers, without one being aware of the others or the need for extensive programming for stronger authentication.

Integrating FIDO-compliant built-in technology with digital wallets and e-commerce can not only help protect consumers but also reduce the risk, liability, and fraud for financial institutions and digital marketplaces.

The big leap that FIDO is taking is to use biometric data – voiceprint, fingerprint, facial recognition, etc. – and digitize and protect that information with solid cryptographic techniques. But unlike traditional second-factor key fobs or even tokenless phone call-back scenarios, this information remains on your smartphone or laptop and isn’t shared with any application provider. FIDO can even use a simple four-digit PIN code, and everything will stay on the originating device. With this approach, ComputerWeekly says, FIDO avoids the potential for a Target-like point-of-sale exploit that could release millions of logins to the world – a big selling point for many IT shops and providers.

FIDO can also eliminate having to carry a separate dongle, since just about everyone has a mobile phone these days. This is a mobile world we live in, and we need mobile-compatible solutions; otherwise, you’re behind the curve right out of the gate.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Mind Readers Can Steal Your Biometric Info

By now, most people have come to the position that passwords suck. The momentum for alternate means of authentication is growing. Researchers are working on how to use biometric technology for mainstream login activities. As I have pointed out, there are a number of emerging biometric techniques, like iris scans, facial recognition, and behavioral characteristics. All of these methods have flaws, which pose a problem for authentication non-repudiation.

In a post at IEEE Spectrum, Megan Scudellari writes that fingerprints can be stolen, iris scans spoofed, and facial recognition software fooled. In the wake of these flaws, researchers have turned to brain waves as the next step in biometric identification. Biometric identification is any means by which a person can be uniquely identified by evaluating one or more distinguishing biological traits. Unique identifiers include fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves, DNA, and signatures.

The researchers are racing to prove how accurately and accessibly they can verify a person’s identity using electroencephalograph (EEG) data. An EEG is a test that detects electrical activity in the brain using electrodes attached to the scalp. The IEEE article explains that as your eyes skim over these pixels and turn them into meaningful words, your brain cells are flickering with a pattern of electrical activity that is unique to you. These unique patterns can be used like a password or biometric identification. In fact, researchers have taken to calling them “passthoughts”.

Using brainwaves to authenticate people goes back a while. Back in 2012, I wrote about the Muse headband sensor, which promised to “create a specific brainwave signature or a password they would never have to say out loud or type into a computer.” More recently, psychologists and engineers at Binghamton University in New York achieved 100 percent accuracy at identifying individuals using brain waves captured with a skullcap with 30 electrodes. Scientists at the University of California at Berkeley used a set of earbud sensors that worked with 80 percent accuracy.
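Under the hood, systems like these reduce a raw EEG recording to a numeric feature vector and compare it against a template captured at enrollment. A minimal sketch of that comparison step, using cosine similarity and made-up four-element feature vectors (real systems extract far richer features and use trained classifiers):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def authenticate(template, sample, threshold=0.95):
    # Accept the login only if the fresh sample closely matches
    # the enrolled template; the threshold is an illustrative choice.
    return cosine(template, sample) >= threshold

enrolled = [0.61, 0.12, 0.88, 0.34]   # made-up EEG feature template
fresh    = [0.60, 0.13, 0.86, 0.35]   # new reading from the same user
imposter = [0.10, 0.90, 0.20, 0.80]   # reading from someone else
print(authenticate(enrolled, fresh))     # → True
print(authenticate(enrolled, imposter))  # → False
```

The tension the article goes on to describe lives in those feature vectors: the same numbers that identify you can also leak other traits.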

The problem is our brains don’t produce a single, clear signal that can be checked like a fingerprint. The article says our brains emit a messy, vibrant symphony of personal information, including one’s emotional state, learning ability, and personality traits. The author contends that as EEG technology becomes cheaper, portable, and more ubiquitous—not only for identity authentication, but in apps, games, and more—there’s a high likelihood that someone will tap into that concerto of information for malicious purposes. Abdul Serwadda, a cybersecurity researcher at Texas Tech University, told Spectrum:

If you have these apps, you don’t know what the app is reading from your brain or what [the app’s creators are] going to use that information for, but you do know they’re going to have a lot of information

The Texas Tech team performed experiments to see if they could glean sensitive personal information from brain data captured by two popular EEG-based authentication systems. Surprise, surprise: they were able to capture sensitive personal information from brain data.

Mr. Serwadda presented his results at the IEEE International Conference on Biometrics. The Texas Tech researchers examined EEG-based authentication systems that claimed high levels of authentication accuracy. One system examined was the Berkeley model, and the second was based on the Binghamton model. The article explains that these EEG-based authentication systems utilize specific features, or markers, of brain activity to identify a person, like isolating the melody of a specific orchestra instrument to identify a song.

The researchers wanted to see if those markers also contained sensitive personal information – in this case, a tendency for alcoholism. They ran old EEG scans, which included alcoholics and non-alcoholics, through the systems. Using the brain wave data, they were able to accurately identify 25% of the alcoholics in the sample. That’s 25% of people who just lost their privacy. Mr. Serwadda said:

We weren’t surprised, because we know the brain signal is so rich in information … But it is scary. [Wearable brain measurement] is an application that’s just about to go mainstream, and you can infer a lot of information about users.

The researcher said that malicious third parties could mine brain data to make inferences about learning disabilities, mental illnesses, and more. He told Spectrum, “Imagine if you made these things public, and insurance companies became aware of them … It would be terrible.”

IOActive senior consultant Alejandro Hernández told The Register that dangerous vulnerabilities exist in EEG kits. EEG’s security problems are depressingly familiar results of bad software design, Hernández said. EEG devices are vulnerable to man-in-the-middle attacks, as well as less-severe application vulnerabilities and ordinary crashes. Mr. Hernández says:

… some applications send the raw brain waves to another remote endpoint using the TCP/IP protocol, that by design doesn’t include security, and therefore this kind of traffic is prone to common network attacks such as man-in-the-middle where an attacker would be able to intercept and modify the EEG data sent.

The IOActive consultant found that components like the acquisition device, middleware, and endpoints lack authentication, meaning an attacker can connect to a remote TCP port and steal raw EEG data. That same flaw lets attackers pull off more dangerous replay attacks.

Unfortunately, the researchers do not have a solution for how to secure such information—though in the study, compromising a little on authentication accuracy did reduce the ability to detect who was an alcoholic. Mr. Serwadda hopes other research teams will now take privacy, and not just accuracy, into account when optimizing such systems. Professor Serwadda concludes, “We have to prepare for the movement of brain wave [assessment] into our daily lives.”

Rb-

Given the willingness of app developers to sell or share any info with any third party, and the unwillingness of the public to take even basic steps to secure their info online, everyone’s deepest personal information can be hacked in the future.

UC Berkeley’s John Chuang identifies another problem with passthoughts: stress, mood, alcohol, caffeine, medicine, and mental fatigue could change the electrical signals that are generated.

Despite advances in logging in with your mind, there might always be a need for an old-fashioned eight-plus character phrase with no spaces. “Passwords will never go away,” says Berkeley’s Chuang. He reasons that for a computer, a typed password may be the easiest way to verify identity, while a finger swipe may be best for a touch screen.

But we need to think beyond those to future devices—wearables, for instance—for which there will be neither a keyboard nor a touch screen. “For each device, we must figure out what are the most natural, intuitive ways to tell the device that we are who we are,” Professor Chuang says. Going directly to the brain seems like an obvious choice.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Stop using SMS for Two-Factor Authentication

Followers of the Bach Seat know that passwords suck and no longer provide reliable security. Because automated mass cybercrime attacks are hammering businesses daily, the National Institute of Standards and Technology (NIST) is disrupting the online security status quo. According to InfoWorld, the US government’s standards body has decided that passwords are not good enough anymore. NIST now wants government agencies to use two-factor authentication (2FA) to secure applications, networks, and systems.

Two-factor authentication is a security process where the user provides two means of identification from separate categories of credentials. The first is typically something you have – a physical token, such as a card. The second is usually something you know, like a PIN.

The proposed standard discourages organizations from sending special codes via SMS messages. Many services offer two-factor authentication by asking users to enter a one-time passcode sent via SMS into the app or site to verify the transaction. The author writes that weaknesses in the SMS mechanism concern NIST.

NIST now recommends that developers use tokens and software cryptographic authenticators instead of SMS to deliver special codes. They wrote in a draft version of the DAG; “OOB [out of band] using SMS is deprecated and will no longer be allowed in future releases of this guidance.”

Federal agencies must use applications that conform to NIST guidelines. This means that for software to be sold to federal agencies, it must follow NIST guidelines. InfoWorld says this is especially relevant for secure electronic communications.

SMS-based two-factor authentication is considered insecure by NIST for a number of reasons. First, someone other than the user may be in possession of the phone. The author says an attacker with a stolen phone would be able to trigger the login request. In some cases, the contents of the text message appear on the lock screen, which means the code is exposed to anyone who glances at the screen.

InfoWorld says that NIST isn’t deprecating SMS-based methods simply because someone may be able to intercept the codes by taking control of the handset; that risk also exists with tokens and software authenticators. The main reason NIST appears to be down on SMS is that it is insecure over VoIP.

The author says there has been a significant increase in attacks targeting SMS-based two-factor authentication recently. SMS messages can be hijacked over some VoIP services. SMS messages delivered through VoIP are only as secure as the websites and systems of the VoIP provider. If an attacker can hack the VoIP servers or network, they can intercept the SMS security codes or have them rerouted to their own phone. Security researchers have used weaknesses in the SMS protocol to remotely interact with applications on the target phone and compromise users.

Sophos’ Naked Security Blog further explains some of the risks. There is malware that can redirect text messages, and there are attacks against Signalling System 7 (SS7), the protocol used to route SMS messages between carriers.

Mobile phone number portability also poses a problem for SMS security. Sophos says that phone ports, also known as SIM swaps, can make SMS insecure. SIM swap attacks are where an attacker convinces your mobile provider to issue a new SIM card to replace one that’s been lost, damaged, stolen, or that is the wrong size for your new phone.

Sophos also says that in many places it is very easy for criminals to convince a mobile phone store to transfer someone’s phone number to a new SIM, thereby hijacking all their text messages.

ComputerWorld highlights a recent attack that used social engineering to bypass Google’s two-factor authentication. Criminals sent users text messages informing them that someone was trying to break into their Gmail accounts and that they should enter the passcode to temporarily lock the account. The passcode, which was a real code generated by Google when the attackers tried to log in, arrived in a separate text message, and users who didn’t realize the first message was not legitimate would pass the unique code on to the criminals.

“NIST’s decision to deprecate SMS two-factor authentication is a smart one,” said Keith Graham, CTO of authentication provider SecureAuth. “The days of vanilla two-factor approaches are no longer enough for security.”

For now, applications and services using SMS-based authentication can continue to do so as long as it isn’t a service that virtualizes phone numbers. Developers and application owners should explore other options, including dedicated two-factor apps. One example is Google Authenticator, which uses a secret key and time to generate a unique code locally on the device for the user to enter into the application.
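Google Authenticator implements the open TOTP standard (RFC 6238), which layers a time-based counter on top of the HOTP algorithm from RFC 4226: an HMAC over the counter is truncated to a short decimal code. A minimal sketch using only Python’s standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter,
    # then dynamic truncation to a short decimal code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: the counter is the number of 30-second
    # intervals since the Unix epoch, so codes expire quickly.
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 Appendix D test vector: counter 0 for this secret is "755224".
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Because the secret key is shared once at enrollment and the code is computed locally on the device, nothing travels over the carrier network – exactly the property SMS delivery lacks.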

Hardware tokens such as RSA’s SecurID display a new code every few seconds. A hardware security dongle such as YubiKey, used by many companies including Google and GitHub, supports one-time passwords, public-key encryption, and authentication. Knowing that NIST is not very happy with SMS will push the authentication industry towards more secure options.

Many popular services and applications offer only SMS-based authentication, including Twitter and online banking services from major banks. Once the NIST guidelines are final, these services will have to make some changes.

Many developers are increasingly looking at fingerprint recognition. ComputerWorld says this is because the latest mobile devices have fingerprint sensors. Organizations can also use adaptive authentication techniques, such as layering device recognition, geo-location, login history, or even behavioral biometrics to continually verify the true identity of the user, SecureAuth’s Graham said.

NIST acknowledged that biometrics is becoming more widespread as a method for authentication, but refrained from issuing a full recommendation. The recommendation was withheld because biometrics aren’t considered secret and can be obtained and forged by attackers through various methods.

Biometric methods are acceptable only when used with another authentication factor, according to the draft guidelines. NIST wrote in the DAG:

[Biometrics] can be obtained online or by taking a picture of someone with a camera phone (e.g. facial images) with or without their knowledge, lifted from objects someone touches (e.g., latent fingerprints), or captured with high-resolution images (e.g., iris patterns for blue eyes)


At this point, it appears NIST is moving away from recommending SMS-based authentication as a secure method for out-of-band verification. They are soliciting feedback from partners and NIST stakeholders on the new standard. They told InfoWorld, “It only seemed appropriate for us to engage where so much of our community already congregates and collaborates.”

You can review the draft of Special Publication 800-63-3: Digital Authentication Guidelines on GitHub or on NIST’s website until Sept. 17. Sophos recommends security researcher Jim Fenton’s presentation from the PasswordsCon event in Las Vegas that sums up the changes.

VentureBeat offers some suggestions to replace your SMS system:

  • Hardware tokens that generate time-based codes.
  • Apps that generate time-based codes, such as the Google Authenticator app or RSA SecurID.
  • Hardware dongles based on the U2F standard.
  • Systems that use push notifications to your phone.

 


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Chatbot Risks

Chatbots are the latest rage on social media. As Time explained, they have been around since the 1960s. That’s when MIT professor Joseph Weizenbaum created a chatbot called ELIZA. Chatbots found a home on desktop messaging clients like AOL Instant Messenger. Chatbots went dormant as messaging transitioned away from desktops and onto mobile devices.

But they’re poised for a resurgence in 2016. There are two reasons for this. First, artificial intelligence and cloud computing have gotten better thanks to improvements in machine learning. Second, bots could be big money.

Tech titans have chatbots on social media

All the tech titans have released social bots on the web: Apple’s (AAPL) Siri, Facebook’s (FB) “bots on Messenger“, Google’s (GOOG) Allo, and Microsoft’s (MSFT) ill-fated Tay. They believe there’s a buck to be made here, and they’re scrambling to make sure they don’t get left out.

The July issue of the Communications of the ACM included an article, “The Rise of Social Bots,” which lays out social bots’ impact on online communities and society at large. The authors define a social bot as a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.

Business Insider published this infographic about the social bot ecosystem.

Business Insider infographic

Chatbots can be deceptive

The ACM article argues that social bots populate techno-social systems; they are often benign, or even useful, but some are created to harm by tampering with, manipulating, and deceiving social media users. The article offers several examples of how social bots can do harm. The first example involves the Twitter (TWTR) posts around the Boston Marathon bombing. The researchers’ analysis found that social bots were automatically retweeting false accusations and rumors. The researchers argue that forwarding those claims without verifying them granted the false information more influence.

The ACM article also discusses how social bots can artificially inflate political candidates. During the 2010 mid-term elections, some politicians used social bots to inject thousands of false tweets to smear their opponents. This type of activity puts the integrity of the democratic process at risk. These types of attacks are also called astroturfing or Twitter-bombs.

Anti-vaxxer chatbots

The article offers another example of the use of social bots to influence a policy debate in California. During the recent debate in California about a law on vaccination requirements, there appeared to be widespread use of social bots by opponents of vaccinations. This social bot interference puts an unknown number of people at risk of death or disease.

Greed is the most likely use of social bots. One example from the article is the April 2013 hack of the Twitter account of the Associated Press. In this case, the Syrian Electronic Army used the hacked account to post a false statement claiming a terror attack on the White House had injured President Obama. This false story provoked an immediate $136 billion stock market crash as social bots amplified the false rumor.

Chatbots manipulate social media reality

Research has shown that human emotions are contagious on social media. This means that social bots can be used to artificially manipulate social media users’ perception of reality without their being aware of it. The article says the latest generation of Twitter social bots has many “human-like” online behaviors that make it difficult to separate bots from humans. According to the authors, social bots can:

  • Search the web to fill in their profiles,
  • Post pre-collected content at defined times,
  • Engage in conversations with people,
  • Infiltrate discussions and add topically correct information.

Some bots work to gain greater status by searching out and following popular or influential users, or by taking other steps to garner attention. Other bots are identity thieves, adopting slight variants of user names to steal personal information, pictures, and links.

Strategies to thwart bad chatbots

The authors review several attempts to thwart these increasingly sophisticated bots.

1. Innocent-by-association – This theory measured the number of legitimate links vs. the number of social bots (Sybil) links a user has. This method was proven to be flawed. Researchers found that Facebook users are pretty indiscriminate when adding users. The article says that 20% of legitimate Facebook users accept any friend request and 60% accept friend requests with only one contact in common.

2. Crowdsourcing – Another approach to stop social bots is crowdsourcing. The crowdsourcing approach would rely on users and experts reviewing an account. The reviewers would have to reach a majority decision that the account in question was a bot or legit. The authors pointed out some issues with crowdsourcing.

  • It will not scale to large existing social networks like Facebook or Twitter.
  • “Experts” need to be paid to check accounts.
  • It exposes users’ personal information related to the account to unknown users and “experts.”

3. Feature-based detection – The third method noted by the authors uses behavior-based analysis with machine learning to separate human-like behavior from bot-like behavior. Some of the features these applications analyze include:

  • The number of retweets.
  • Age of account.
  • Username length.

4. Sybil until proven otherwise – The Chinese social network RenRen uses the fourth method noted by the authors: a “Sybil until proven otherwise” approach. According to the article, this approach is better at detecting unknown attacks, like embedding text in graphics.
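The feature-based idea can be sketched as a simple scoring rule over behavioral features like the ones listed above. The feature names, thresholds, and flag count here are illustrative assumptions, not the trained machine-learning classifiers the article surveys:

```python
def bot_flags(account: dict) -> int:
    # Count how many bot-like traits an account exhibits.
    # Thresholds are illustrative guesses, not from the ACM article.
    flags = 0
    flags += account["retweet_ratio"] > 0.8    # mostly retweets, little original content
    flags += account["account_age_days"] < 30  # very young account
    flags += account["username_length"] > 12   # long, auto-generated-looking name
    return flags

def is_probable_bot(account: dict, threshold: int = 2) -> bool:
    # Flag the account when enough bot-like traits co-occur.
    return bot_flags(account) >= threshold

suspect = {"retweet_ratio": 0.95, "account_age_days": 7, "username_length": 15}
human = {"retweet_ratio": 0.20, "account_age_days": 900, "username_length": 8}
print(is_probable_bot(suspect), is_probable_bot(human))  # → True False
```

A real detector would replace the hand-set thresholds with a classifier trained on labeled bot and human accounts, but the shape of the decision – many weak behavioral signals combined into one verdict – is the same.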

rb-

While people’s ability to critically assimilate information is beyond technology, the authors call for new ways to detect social bot-generated spam vs. real political discourse.

The researchers speculate there will not be a solution to the social bot problem. The more likely outcome is a bot arms race, like what we are seeing in the war on spam and other malware.

Related articles
  • Man vs. Machine: What do Chatbots Mean for Social Media? (blogs.adobe.com)

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.