Updated 06/19/2020 – Redmond is reporting that the ACLU has uncovered evidence (PDF) that Microsoft was pursuing sales of its facial recognition technology after its vow to stop selling the software. The ACLU says Microsoft continued to pursue sales to the U.S. Drug Enforcement Administration (DEA) six days after the announcement. Microsoft president Brad Smith claimed the firm would stop selling facial recognition tech to U.S. police agencies until there is a national law in place that’s “grounded in human rights.”
The article describes MSFT's Smith's "stand" last week as "a bit hollow or misleadingly narrow" and an example of "opaque transparency."
—
Updated 06/12/2020 – CNN is reporting that Microsoft has fallen in line with IBM and Amazon. It has announced it will not sell facial recognition technology to police departments in the United States, at least until there is a federal law to regulate the technology.
Following IBM’s stand, Amazon has announced it will stop providing its facial recognition technology to police forces for one year. TechCrunch makes the point that the Amazon announcement did not say if the moratorium would apply to the federal government. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.
Both firms are calling for national regulation of the tech, as I predicted below.
—
IBM has made a step in the right direction in the fight against structural racism. IBM CEO Arvind Krishna sent a letter to the U.S. Congress citing concerns that artificial intelligence (AI) facial recognition software could be used for mass surveillance and racial profiling. As a result, IBM will no longer sell general-purpose facial recognition or analysis software.
IBM facial recognition changes
The company is not abandoning facial recognition. Reuters cites an IBM source who says IBM will "no longer market, sell or update the products but will support clients as needed." As Engadget points out, the move comes in the midst of protests over police brutality and discrimination capped by the apparent murder of George Floyd by Minneapolis police officers.
The use of AI and facial recognition has a history of privacy and bias problems. In 2019, Pew Research reported that 50% of U.S. adults said they did not trust tech companies to use facial recognition responsibly, and 27% of the same group did not trust law enforcement agencies to use it responsibly. There are good reasons for the distrust. Many reports have found that facial recognition systems can be biased, with systemic biases against non-white people and women. This is particularly true when the training data includes relatively few people from those groups.
The Verge documents some of the de facto biases in facial recognition. In 2018, AI researchers Joy Buolamwini and Timnit Gebru's Gender Shades project was the first to reveal the extent to which many commercial facial recognition systems (including IBM's) were biased. This work led to mainstream criticism of these algorithms and ongoing attempts to address bias.
Clearview AI Inc.'s facial recognition software identifies people by comparing their faces with 3 billion images, many scraped from social media sites. Clearview took the images from Facebook, YouTube, and Venmo without notifying the people pictured. The facial recognition tool is widely used by private sector companies and law enforcement agencies. Clearview has since been issued numerous cease and desist orders and is at the center of a number of privacy lawsuits. Facebook was also ordered in January 2020 to pay $550 million to settle a 2015 class-action lawsuit over its unlawful use of facial recognition technology.
The Verge points out that IBM is not without a share of the blame. In January 2019, IBM was found to be sharing a training data set of nearly one million photos taken from Flickr without the consent of the subjects. IBM told The Verge in a statement at the time that the data set would only be accessed by verified researchers and only included images that were publicly available. The company also said that individuals could opt out of the data set.
A December 2019 NIST study found:
"empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated."
Amazon’s facial recognition software
Notably, NIST's study did not include Amazon's facial recognition software, Rekognition. Rekognition has also been criticized for its accuracy. In 2018, the ACLU found that Rekognition incorrectly matched 28 members of Congress to faces picked from 25,000 mugshots.
Despite Amazon's system producing what the ACLU called a disproportionate number of false matches for members of Congress of color, Amazon posted a statement expressing concern over the "inequitable and brutal treatment of Black people in our country." But the richest man in the world, Jeff Bezos, and his company are part of the problem. Amazon is profiting off the racial profiling of Black people by police.
Amazon has built a nationwide surveillance network of our homes and communities using Amazon Ring cameras and its Neighbors app. The company collects the images and then hands the data over to police.
What Amazon does with the data:
- Amazon aggressively sells its facial recognition software, Rekognition, to police departments and ICE.
- Amazon stores footage from millions of Ring users and has over 1,330 surveillance partnerships with local police departments.
- Through these surveillance partnerships, law enforcement, including the FBI and ICE, can get unprecedented access to large swaths of footage and data without any oversight.
- The Neighbors app allows for the crowdsourced (mob) tracking of Black people.
rb-
Mr. Krishna should be applauded for his public stand. But call me cynical – this is also about business. Morgan Stanley predicts that AI and automation will be a one trillion dollar industry by 2050. Change is coming, and big tech – IBM, MSFT, GOOG, FB – is trying to get in front of it. The titans are pushing for reform, not abolition, for two reasons.
First, they want to use new regulations as a barrier to entry into this market. They want to keep upstarts like Clearview AI, and the 45+ other small to multinational firms who may have new ideas, out of the $1T market.
Second, big tech knows it can buy the politicians in DC more cheaply than it can fight off regulations in 50 different states. Big business has done this time and again. They will sit in front of a congressional hearing, say mea culpa, and maybe Congress will pass some lame regulation that the lobbyists wrote. Nothing will change, because there is too much money on the table to do the right thing to stop the structural racism that led to George Floyd's death.
Related article
- Black deaths at the hands of law enforcement are linked to historical lynchings (Economic Policy Institute)
Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.