Tag Archive for 2016

Mind Readers Can Steal Your Biometric Info

By now, most people have concluded that passwords suck. The momentum for alternate means of authentication is growing, and researchers are working on how to use biometric technology for mainstream login activities. As I have pointed out, there are a number of emerging biometric techniques: iris scans, facial recognition, and behavioral characteristics. All of these methods have flaws, which pose a problem for authentication non-repudiation.

In a post at IEEE Spectrum, Megan Scudellari writes that fingerprints can be stolen, iris scans spoofed, and facial recognition software fooled. In the wake of these flaws, researchers have turned to brain waves as the next step in biometric identification. Biometric identification is any means by which a person can be uniquely identified by evaluating one or more distinguishing biological traits. Unique identifiers include fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves, DNA, and signatures.

The researchers are racing to prove how accurately and accessibly they can verify a person’s identity using electroencephalograph (EEG) data. An EEG is a test that detects electrical activity in the brain using electrodes attached to the scalp. The IEEE article explains that as your eyes skim over these pixels and turn them into meaningful words, your brain cells are flickering with a pattern of electrical activity that is unique to you. These unique patterns can be used like a password or biometric identification. In fact, researchers have taken to calling them “passthoughts”.

Using brainwaves to authenticate people goes back a while. Back in 2012, I wrote about the Muse headband sensor which promised to “create a specific brainwave signature or a password they would never have to say out loud or type into a computer.” More recently, psychologists and engineers at Binghamton University in New York achieved 100 percent accuracy at identifying individuals using brain waves captured with a skullcap with 30 electrodes. Scientists at the University of California at Berkeley have adopted a set of earbud sensors that worked with 80 percent accuracy.

The problem is our brains don’t produce a single, clear signal that can be checked like a fingerprint. The article says our brains emit a messy, vibrant symphony of personal information, including one’s emotional state, learning ability, and personality traits. The author contends that as EEG technology becomes cheaper, portable, and more ubiquitous—not only for identity authentication, but in apps, games, and more—there’s a high likelihood that someone will tap into that concerto of information for malicious purposes. Abdul Serwadda, a cybersecurity researcher at Texas Tech University, told Spectrum:

If you have these apps, you don’t know what the app is reading from your brain or what [the app’s creators are] going to use that information for, but you do know they’re going to have a lot of information

The Texas Tech team performed experiments to see if they could glean sensitive personal information from brain data captured by two popular EEG-based authentication systems. Surprise, surprise: they were able to capture sensitive personal information from brain data.


Mr. Serwadda presented his results at the IEEE International Conference on Biometrics. The Texas Tech researchers examined EEG-based authentication systems that claimed high levels of authentication accuracy. One system examined was the Berkeley model, and the second was based on the Binghamton model. The article explains that these EEG-based authentication systems use specific features, or markers, of brain activity to identify a person, like isolating the melody of a specific orchestra instrument to identify a song.

The researchers wanted to see if those markers also contained sensitive personal information—in this case, a tendency for alcoholism. They ran old EEG scans from alcoholics and non-alcoholics through the systems. Using the brain wave data, they were able to accurately identify 25% of the alcoholics in the sample. That’s 25% of people who just lost their privacy. Mr. Serwadda said:

We weren’t surprised, because we know the brain signal is so rich in information … But it is scary. [Wearable brain measurement] is an application that’s just about to go mainstream, and you can infer a lot of information about users.

The researcher said that malicious third parties could mine brain data to make inferences about learning disabilities, mental illnesses, and more. He told Spectrum, “Imagine if you made these things public, and insurance companies became aware of them … It would be terrible.”

IOActive senior consultant Alejandro Hernández told The Register that dangerous vulnerabilities exist in EEG kits. EEG’s security problems are depressingly familiar results of bad software design, Hernández said. EEG devices are vulnerable to man-in-the-middle attacks, as well as less-severe application vulnerabilities and ordinary crashes. Mr. Hernández says:

… some applications send the raw brain waves to another remote endpoint using the TCP/IP protocol, that by design doesn’t include security, and therefore this kind of traffic is prone to common network attacks such as man-in-the-middle where an attacker would be able to intercept and modify the EEG data sent.

The IOActive consultant found that components like the acquisition device, middleware, and endpoints lack authentication, meaning an attacker can connect to a remote TCP port and steal raw EEG data. That same flaw lets attackers pull off the more dangerous replay attacks.
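To illustrate why a missing authentication step matters, here is a minimal sketch in Python. It is not IOActive’s test code; the “EEG device” is just a thread streaming made-up bytes over a local TCP socket, but it shows that a client who knows only the host and port receives the full data stream without ever being asked for credentials:

```python
import socket
import threading

# Toy "EEG device": streams raw samples to whoever connects first.
# Like the kits Hernández describes, there is no authentication
# handshake -- accept() hands the data to any client.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # OS picks a free port
srv.listen(1)
port = srv.getsockname()[1]

samples = bytes(range(16))      # stand-in for raw EEG readings

def serve():
    conn, _ = srv.accept()      # no credential check of any kind
    conn.sendall(samples)
    conn.close()

threading.Thread(target=serve).start()

# The "attacker" needs only the host and port to receive the stream.
attacker = socket.create_connection(("127.0.0.1", port))
chunks = []
while (chunk := attacker.recv(1024)):
    chunks.append(chunk)
stolen = b"".join(chunks)
attacker.close()
```

Requiring even a simple shared-secret handshake before `sendall` would force the attacker to know more than an address, which is exactly the check these components skip.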

Unfortunately, the researchers do not have a solution for how to secure such information—though in the study, compromising a little on authentication accuracy did reduce the ability to detect who was an alcoholic. Mr. Serwadda hopes other research teams will now take privacy, and not just accuracy, into account when optimizing such systems. Professor Serwadda concludes, “We have to prepare for the movement of brain wave [assessment] into our daily lives.”

rb-

Given the willingness of app developers to sell or share any info with any third party, and the unwillingness of the public to take even basic steps to secure their info online, everyone’s deepest personal information could be hacked in the future.

Another problem with passthoughts, UC Berkeley’s John Chuang points out, is that stress, mood, alcohol, caffeine, medicine, and mental fatigue can change the electrical signals the brain generates.

Despite advances in logging in with your mind, there might always be a need for an old-fashioned eight-plus character phrase with no spaces. “Passwords will never go away,” says Berkeley’s Chuang. He reasons that for a computer, a typed password may be the easiest way to verify identity, while a finger swipe may be best for a touch screen.

But we need to think beyond those to future devices—wearables, for instance—for which there will be neither a keyboard nor a touch screen. “For each device, we must figure out what are the most natural, intuitive ways to tell the device that we are who we are,” Professor Chuang says. Going directly to the brain seems like an obvious choice.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Stop using SMS for Two-Factor Authentication

Followers of the Bach Seat know that passwords suck and no longer provide reliable security. Because automated mass cybercrime attacks hammer businesses daily, the National Institute of Standards and Technology (NIST) is disrupting the online security status quo. According to InfoWorld, the US government’s standards body has decided that passwords are not good enough anymore. NIST now wants government agencies to use two-factor authentication (2FA) to secure applications, networks, and systems.

Two-factor authentication is a security process in which the user provides two means of identification from separate categories of credentials. The first is typically something you have, such as a physical token or card. The second is usually something you know, like a PIN.

The proposed standard discourages organizations from sending special codes via SMS messages. Many services offer two-factor authentication. They ask users to enter a one-time passcode sent via SMS into the app or site to verify the transaction. The author writes that weaknesses in the SMS mechanism concern NIST.

NIST now recommends that developers use tokens and software cryptographic authenticators instead of SMS to deliver special codes. They wrote in a draft version of the Digital Authentication Guideline (DAG): “OOB [out of band] using SMS is deprecated and will no longer be allowed in future releases of this guidance.”

Federal agencies must use applications that conform to NIST guidelines, which means that software sold to federal agencies must follow them. InfoWorld says this is especially relevant for secure electronic communications.

NIST considers SMS-based two-factor authentication insecure for a number of reasons. First, someone other than the user may be in possession of the phone. The author says an attacker with a stolen phone would be able to trigger the login request. In some cases, the contents of the text message appear on the lock screen, which means the code is exposed to anyone who glances at the screen.

InfoWorld says that NIST isn’t deprecating SMS-based methods simply because someone may be able to intercept the codes by taking control of the handset; that risk also exists with tokens and software authenticators. The main reason NIST appears to be down on SMS is that it is insecure over VoIP.

The author says there has been a significant increase in attacks targeting SMS-based two-factor authentication recently. SMS messages can be hijacked over some VoIP services, and SMS messages delivered through VoIP are only as secure as the websites and systems of the VoIP provider. If attackers can hack the VoIP servers or network, they can intercept the SMS security codes or have them rerouted to their own phones. Security researchers have used weaknesses in the SMS protocol to remotely interact with applications on the target phone and compromise users.

Sophos’ Naked Security Blog further explains some of the risks. There is malware that can redirect text messages, and there are attacks against Signalling System 7 (SS7), the carrier signaling protocol used to route SMS messages.

Mobile phone number portability also poses a problem for SMS security. Sophos says that phone ports, also known as SIM swaps, can make SMS insecure. In a SIM swap attack, an attacker convinces your mobile provider to issue a new SIM card to replace one that has supposedly been lost, damaged, stolen, or that is the wrong size for your new phone.

Sophos also says that in many places it is very easy for criminals to convince a mobile phone store to transfer someone’s phone number to a new SIM, thereby hijacking all their text messages.

ComputerWorld highlights a recent attack that used social engineering to bypass Google’s two-factor authentication. Criminals sent users text messages informing them that someone was trying to break into their Gmail accounts and that they should enter the passcode to temporarily lock the account. The passcode, which was a real code generated by Google when the attackers tried to log in, arrived in a separate text message, and users who didn’t realize the first message was not legitimate would pass the unique code on to the criminals.

“NIST’s decision to deprecate SMS two-factor authentication is a smart one,” said Keith Graham, CTO of authentication provider SecureAuth. “The days of vanilla two-factor approaches are no longer enough for security.”

For now, applications and services using SMS-based authentication can continue to do so as long as it isn’t a service that virtualizes phone numbers. Developers and application owners should explore other options, including dedicated two-factor apps. One example is Google Authenticator, which uses a secret key and time to generate a unique code locally on the device for the user to enter into the application.
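As a sketch of how such authenticator apps derive their codes, the underlying algorithm is the openly published TOTP standard (RFC 6238, built on RFC 4226’s HOTP): the code is an HMAC of the count of 30-second intervals since the Unix epoch, truncated to six digits. This is a minimal illustration of the standard, not any vendor’s production code:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    # Counter = number of 30-second steps since the Unix epoch (RFC 6238)
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step))
```

Because both sides derive the code locally from a shared secret and the current time, nothing ever has to travel over SMS.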

Hardware tokens such as RSA’s SecurID display a new code every few seconds. A hardware security dongle such as YubiKey, used by many companies including Google and GitHub, supports one-time passwords, public-key encryption, and authentication. Knowing that NIST is not very happy with SMS will push the authentication industry toward more secure options.

Many popular services and applications offer only SMS-based authentication, including Twitter and online banking services from major banks. Once the NIST guidelines are final, these services will have to make some changes.

Many developers are increasingly looking at fingerprint recognition. ComputerWorld says this is because the latest mobile devices have fingerprint sensors. Organizations can also use adaptive authentication techniques, such as layering device recognition, geo-location, login history, or even behavioral biometrics to continually verify the true identity of the user, SecureAuth’s Graham said.

NIST acknowledged that biometrics is becoming more widespread as a method for authentication, but refrained from issuing a full recommendation. The recommendation was withheld because biometrics aren’t considered secret and can be obtained and forged by attackers through various methods.

Biometric methods are acceptable only when used with another authentication factor, according to the draft guidelines. NIST wrote in the DAG:

[Biometrics] can be obtained online or by taking a picture of someone with a camera phone (e.g. facial images) with or without their knowledge, lifted from objects someone touches (e.g., latent fingerprints), or captured with high-resolution images (e.g., iris patterns for blue eyes)

Biometrics

At this point, it appears NIST is moving away from recommending SMS-based authentication as a secure method for out-of-band verification. They are soliciting feedback from partners and NIST stakeholders on the new standard. They told InfoWorld, “It only seemed appropriate for us to engage where so much of our community already congregates and collaborates.”

You can review the draft of Special Publication 800-63-3: Digital Authentication Guidelines on Github or on NIST’s website until Sept. 17. Sophos recommends security researcher Jim Fenton’s presentation from the PasswordsCon event in Las Vegas that sums up the changes.

VentureBeat offers some suggestions to replace your SMS system:

  • Hardware tokens that generate time-based codes.
  • Apps that generate time-based codes, such as the Google Authenticator app or RSA SecurID.
  • Hardware dongles based on the U2F standard.
  • Systems that use push notifications to your phone.

 



Labor Day 2016

On the first Monday in September, the U.S. celebrates Labor Day, which honors the contributions working men and women have made to America.

Labor Day 2016



What is Bitcoin?

Bitcoin is the name of probably the best-known cryptocurrency, also called digital currency, digital gold, or virtual money. A cryptocurrency is a medium of exchange, like the US dollar, but one that is digital and uses encryption techniques to control the creation of monetary units and to verify the transfer of funds. Blockchain is the technology that enables the existence of cryptocurrency.

The cryptocurrency has populist roots. It made its debut in relative obscurity at the start of 2009, when the financial crisis of the Great Recession was still raging. A person or group of people known as Satoshi Nakamoto purportedly created the bitcoin protocol and reference software. The populist ideology behind Bitcoin is to take power out of the hands of the central bankers and governments who usually control the flow of currency.

Bitcoin is both a digital currency and a payment system. The basic idea behind Bitcoin is that you can use it to pay for things without a third-party broker, like a bank or government. The value of a bitcoin depends on the bitcoin market at the time. One bitcoin = 100,000,000 satoshi, just as 1 dollar = 100 cents. There are no transaction fees and no need to give your real name. By contrast, merchants pay transaction fees of 2.5% to 3.5% on each credit card sale to the likes of Visa, MasterCard, or Discover.
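The satoshi-to-bitcoin relationship is a fixed power-of-ten conversion, which a pair of helper functions makes concrete. These helpers are purely illustrative; real wallet software does its accounting in integer satoshi precisely to avoid floating-point rounding:

```python
SATOSHI_PER_BTC = 100_000_000   # 1 bitcoin = 100,000,000 satoshi

def btc_to_satoshi(btc: float) -> int:
    # round() guards against float artifacts such as
    # 0.1 * 100_000_000 coming out a hair off an integer
    return round(btc * SATOSHI_PER_BTC)

def satoshi_to_btc(sats: int) -> float:
    return sats / SATOSHI_PER_BTC
```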

Think of Bitcoin like one big ledger shared by all the users: when you pay for something with bitcoin or get paid, the transaction is recorded on the ledger to ensure there is no double spending of the currency.

Members of the network collectively contribute processing power from their computers to maintain Bitcoin’s integrity. And every time a transaction is made, a record of it is sent out to be recorded in a public ledger where the transactions are effectively set in stone. Anyone can download and install the Bitcoin software for free so these records are distributed permanently across the entire network. This publicly distributed ledger is called the blockchain.

In order to get more bitcoins, computers running Bitcoin software compete to confirm transactions by solving a complex cryptographic equation, and the winner is rewarded with more bitcoins. Currently, a winner is rewarded with 25 bitcoins roughly every 10 minutes. The process is known as “mining”. Don’t get too wrapped up in Bitcoin mining, because only the computer powerhouses get their bitcoins this way.

The Consumerist explains that Bitcoin mining math is complicated and hard to forge, so the blockchain stays accurate. Because anyone can download and install the Bitcoin software for free, the payment processing and record-keeping for Bitcoin is done in a widely distributed way, and not on one particular server.
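The “complicated math” is a hash puzzle: miners search for a nonce that gives the block’s hash a rare property. Here is a toy Python version; real Bitcoin uses double SHA-256 over a structured block header compared against a numeric target, not a leading-zeros check on a string, but the asymmetry is the same:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce whose SHA-256 digest starts with `difficulty`
    hex zeros -- a simplified stand-in for Bitcoin's puzzle."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block #1: Alice pays Bob 0.5 BTC")
```

Finding a winning nonce takes tens of thousands of tries on average at this toy difficulty, but checking one takes a single hash. That lopsidedness is why the blockchain is hard to forge yet easy for everyone to verify.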

When new blocks are created, so are new bitcoins—but there’s a hard limit to how many will ever exist. The system was designed to create more bitcoins at first, with the rate dwindling exponentially over time. Each of the first set of blocks created 50 bitcoins. The next set each created 25 bitcoins, and so on. New blocks are created roughly every 10 minutes no matter what; when more computers are actively mining, the puzzle they are solving gets harder (and therefore slower) to compensate. The Bitcoin FAQ estimates that the last bitcoin will be mined in the year 2140, bringing the permanent circulation to just under 21 million. (Currently, there are roughly 15.8 million bitcoins in the world.)
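The “just under 21 million” figure follows directly from the halving schedule. Assuming Bitcoin’s actual parameters (a 50 BTC starting reward that halves every 210,000 blocks—a detail the paragraph above doesn’t spell out), the series can be summed in a few lines:

```python
def total_bitcoin_supply() -> float:
    reward_satoshi = 50 * 100_000_000   # initial block reward, in integer satoshi
    blocks_per_era = 210_000            # blocks between reward halvings
    total = 0
    while reward_satoshi > 0:
        total += blocks_per_era * reward_satoshi
        reward_satoshi //= 2            # reward halves (and is floored) each era
    return total / 100_000_000          # convert back to whole bitcoins

supply = total_bitcoin_supply()         # a bit under 21 million
```

The integer floor on each halving is why the total lands just under 21 million rather than exactly on it.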

In order to use Bitcoin, you’ll have to install a “bitcoin wallet” app on your phone or computer, and then buy bitcoins from a bitcoin exchange. A bitcoin digital wallet is a kind of virtual bank account that allows users to send or receive bitcoins, pay for goods, or save their money via an exchange of public and private security keys. Bitcoin wallets can exist either in the cloud or on a user’s computer, and they carry all the risks of any other app on your device or in the cloud. Unlike bank accounts, the FDIC does not insure bitcoin wallets. CNN Money points out some of the risks in using bitcoin.

In order to buy bitcoins, you have to use a marketplace called a “bitcoin exchange,” which allows people to buy or sell bitcoins using different currencies. These exchanges have a dubious history.

Bitcoin exchanges are vulnerable to hacking, collapse, or a “run on the bank.” A run on a bank occurs when frightened customers demand to withdraw their deposits so fast that the bank cannot make payments and shuts down. If something like that happens, good luck getting your money back: this isn’t like an FDIC-insured bank account.

Bitcoin can be used in a few places; Marketwatch says there doesn’t seem to be much rhyme or reason to where you can use Bitcoin.

rb-

The use of bitcoin in Michigan has not really taken off. Last summer, according to the FreeP, there were only a handful of businesses in metro Detroit that took bitcoin.



Chatbot Risks

Chatbots are the latest rage on social media. As Time explained, they have been around since the 1960s, when MIT professor Joseph Weizenbaum created a chatbot called ELIZA. Chatbots later found a home on desktop messaging clients like AOL Instant Messenger, then went dormant as messaging transitioned away from desktops and onto mobile devices.

But they’re poised for a resurgence in 2016, for two reasons. First, artificial intelligence and cloud computing have gotten better thanks to improvements in machine learning. Second, bots could be big money.

Tech titans have chatbots on social media

All the tech titans have released social bots on the web: Apple’s (AAPL) Siri, Facebook’s (FB) “bots on Messenger“, Google’s (GOOG) Allo, and Microsoft’s (MSFT) ill-fated Tay. They believe there’s a buck to be made here, and they’re scrambling to make sure they don’t get left out.

The July issue of the Communications of the ACM included an article, “The Rise of Social Bots,” which lays out social bots’ impact on online communities and society at large. The authors define a social bot as a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior.

Business Insider published this infographic about the social bot ecosystem.

Business Insider infographic

Chatbots can be deceptive

The ACM article argues that social bots populate techno-social systems; they are often benign, or even useful, but some are created to harm by tampering with, manipulating, and deceiving social media users. The article offers several examples of how social bots can be a hindrance. The first involves Twitter (TWTR) posts around the Boston Marathon bombing. The researchers’ analysis found that social bots were automatically retweeting false accusations and rumors. The researchers argue that forwarding these claims without verifying them granted the false information more influence.

The ACM article also discusses how social bots can artificially inflate political candidates. During the 2010 mid-term elections, some politicians used social bots to inject thousands of false tweets smearing their opponents. This type of activity puts the integrity of the democratic process at risk. These types of attacks are also called astroturfing or Twitter-bombs.

Anti-vaxxer chatbots

The article offers another example of social bots being used to influence policy in California. During the recent debate in California about a law on vaccination requirements, there appears to have been widespread use of social bots by opponents of vaccination. This social bot interference puts an unknown number of people at risk of death or disease.

Greed is the most likely use of social bots. One example from the article is the April 2013 hack of the Twitter account of the Associated Press. In this case, the Syrian Electronic Army used the hacked account to post a false statement claiming a terror attack on the White House had injured President Obama. Amplified by social bots retweeting the rumor, the false story provoked an immediate $136 billion stock market crash.

Chatbots manipulate social media reality

Research has shown that human emotions are contagious on social media. This means that social bots can be used to artificially manipulate social media users’ perception of reality without the users being aware they are being manipulated. The article says the latest generation of Twitter social bots exhibits many “human-like” online behaviors that make it difficult to separate bots from humans. According to the authors, social bots can:

  • Search the web to fill in their profiles.
  • Post pre-collected content at defined times.
  • Engage in conversations with people.
  • Infiltrate discussions and add topically correct information.

Some bots work to gain greater status by seeking out and following popular or influential users or taking other steps to garner attention. Other bots are identity thieves, adopting slight variants of user names to steal personal information, pictures, and links.

Strategies to thwart bad chatbots

The authors review several attempts to thwart these increasingly sophisticated bots.

1. Innocent-by-association – This approach measures the number of legitimate links vs. the number of social bot (Sybil) links a user has. The method was proven to be flawed: researchers found that Facebook users are pretty indiscriminate when adding friends. The article says that 20% of legitimate Facebook users accept any friend request, and 60% accept friend requests with only one contact in common.

2. Crowdsourcing – Another approach to stopping social bots is crowdsourcing, which would rely on users and experts reviewing an account. The reviewers would have to reach a majority decision on whether the account in question is a bot or legitimate. The authors pointed out some issues with crowdsourcing.

  • It will not scale to large existing social networks like Facebook or Twitter.
  • “Experts” need to be paid to check accounts.
  • It exposes users’ personal information related to the account to unknown users and “experts.”

3. Feature-based detection – The third method noted by the authors uses behavior-based analysis with machine learning to separate human-like behavior from bot-like behavior. Some of the features these applications analyze include:

  • The number of retweets.
  • Age of account.
  • Username length.

4. Sybil until proven otherwise – The Chinese social network RenRen uses the fourth method noted by the authors: a “Sybil until proven otherwise” approach. According to the article, this approach is better at detecting unknown attacks, like embedding text in graphics.
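The feature-based approach (method 3 above) can be sketched as a simple scoring function over the features listed. The weights and cutoffs below are invented for illustration, not taken from the ACM article; a real system would learn them from labeled data with a machine-learning classifier:

```python
from dataclasses import dataclass

@dataclass
class Account:
    retweet_ratio: float     # retweets / total tweets
    account_age_days: int
    username_length: int

def bot_score(acct: Account) -> float:
    """Combine a few behavioral features into a 0..1 bot likelihood.
    All thresholds here are hypothetical, for illustration only."""
    score = 0.0
    if acct.retweet_ratio > 0.8:       # mostly amplifies others' content
        score += 0.4
    if acct.account_age_days < 30:     # very young account
        score += 0.3
    if acct.username_length > 12:      # long, auto-generated-looking handle
        score += 0.3
    return score

def is_bot(acct: Account, threshold: float = 0.5) -> bool:
    return bot_score(acct) >= threshold
```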

rb-

While people’s ability to critically assess information is beyond what technology can replicate, the authors call for new ways to distinguish social bot-generated spam from real political discourse.

The researchers speculate there will not be a single solution to the social bot problem. The more likely outcome is a bot arms race, like what we are seeing in the war on spam and other malware.

Related articles
  • Man vs. Machine: What do Chatbots Mean for Social Media? (blogs.adobe.com)

 
