In a matter of days, there was a sea change with one of the most cutting-edge and controversial technologies in Silicon Valley. Three tech giants — Amazon, Microsoft and IBM — all said this week they will not sell facial recognition technology to police.
The technology, which has long been criticized by justice and privacy advocates, faced new scrutiny in the wake of protests over the death of George Floyd. Suddenly, years of concern about the potential for facial recognition products to misidentify subjects and increase the risk of racial discrimination gained new urgency amid a national reckoning over race and police enforcement.
But the moves announced this week by Big Tech are limited and largely temporary. Amazon said it will stop selling its software, Rekognition, to law enforcement for a year, in the hope that Congress will take action. Skeptics were quick to criticize Amazon for imposing a time limit when congressional action may not happen at all, let alone within a year. (Microsoft said it will wait until Congress addresses the issue.)
Now comes the hard part. The tech industry is calling on a deeply divided Congress to replace a patchwork of state and local legislation with a nationwide law, during a heated election year and at a time of crisis — all of which create the perfect conditions for either inaction by distracted lawmakers or hasty corner-cutting.
Some of the companies have said they want to help craft the legislation. But that has critics of the tech industry worried. They believe companies could claim the moral high ground while simultaneously using their substantial lobbying power to push for light-touch policies that benefit their financial interests.
“I think they can undoubtedly make more money with reformed face recognition than banned face recognition,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, a digital rights group.
Amazon and Microsoft declined to comment.
The current state of play
The term “facial recognition” has come to describe a wide array of tools, which are often used in a range of settings. But in general, it refers to software that — with the help of artificial intelligence — compares the image of one human face to a database, in hopes of finding a match. Police can use the software to analyze stored video surveillance footage captured by CCTV or doorbell cameras, among other applications.
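The matching step described above — comparing one face against a database — can be illustrated with a toy sketch. This is an illustrative assumption of how such systems generally work, not any vendor's actual product: it presumes faces have already been converted by a neural network into fixed-length "embedding" vectors, and that a match is the database entry most similar to the probe. All names and numbers are made up for the example.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, 1.0 meaning identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.9):
    """Return the closest identity scoring above the threshold, or None."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical enrolled faces (real embeddings have hundreds of dimensions).
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # embedding of a face captured on camera
print(best_match(probe, database))  # → person_a
```

The threshold is where the misidentification concerns in this article live: set it too low and the system returns confident-looking false matches, which is the failure mode critics warn disproportionately affects people of color.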
Last May, San Francisco became the first major city in the US to ban facial recognition, a move that suggested the global headquarters of Big Tech was trying to send a message to the rest of the country. But a year later, the regulatory landscape is still a jumbled mix of state and local policies, all of which seemingly focus on limiting and restricting the technology rather than offering alternative guidelines for best practices.
“There’s not really a state that’s taken this on and said, ‘Yes, you should be using face recognition, and here’s how,’” said Neema Singh Guliani, senior legislative counsel at the American Civil Liberties Union.
At the local level, a number of cities have imposed outright bans on facial recognition. In addition to San Francisco, the list includes Oakland, Calif. as well as Cambridge and Somerville, Mass. The rules prohibit city agencies, including the police, from using facial recognition software. Meanwhile, a range of other cities are considering bans of their own. Portland, Ore. could go the furthest, with a measure that would prohibit even commercial uses of facial recognition.
At the state level, California Gov. Gavin Newsom signed a bill last fall banning facial recognition technology, though the ban is narrowly targeted at police body cameras and lasts only three years. Similar laws are already in effect in Oregon and New Hampshire.
There’s another category of laws that could come into play, which deal with facial recognition or policing only indirectly. For example, Illinois has a rare and expansive law governing the biometric information of consumers. States like New Hampshire, meanwhile, have laws on the books preventing facial recognition software from being used in connection with drivers’ license images.
Silicon Valley goes to Washington
Tech companies have seized opportunities to shape facial recognition laws at the state and local level. Amazon, for example, has spent thousands of dollars to fight Portland's proposed ban.
In Washington state, Microsoft opposed a bill that would have banned facial recognition until key requirements were met. The company endorsed an alternative — which was ultimately signed into law in April and goes into effect next year — allowing for facial recognition to be used in connection with a warrant and with testing and transparency measures. Among the bill’s lead authors was state senator Joseph Nguyen, who works at Microsoft as a senior program manager when he isn’t performing duties in Washington’s legislature.
As attention turns to Congress for a federal solution, some expect to see the tech industry try to flex its influence there, too.
Speaking of Amazon’s announcement specifically, Jacinta Gonzalez, field director of advocacy group Mijente, said: “That’s one year where they’re going to continue to pay lobbyists to push for what they think regulation should look like in Washington DC.”
Amazon ranks among the top 10 biggest corporate lobbyists; the company spent nearly $17 million on federal lobbying last year. At $10 million, Microsoft wasn’t far behind. Their push for legislation pits an industry with deep pockets against a Congress that more often than not displays its ignorance on some of the most pressing technology issues of the day.
At the moment, there are at least a dozen bills in Congress that either address facial recognition head-on or indirectly as part of a larger policy proposal.
Some of these bills, such as one by Rep. Rashida Tlaib, simply ban federal funds from being used to buy facial recognition software, or ban federal agencies from using the technology. Another bill by Sen. Chris Coons would limit law enforcement’s use of facial recognition tech by locking it behind a warrant requirement. Sen. Jeff Merkley has proposed banning warrantless facial recognition usage until a congressional commission can study the issue and develop policy recommendations.
Tlaib, in particular, has been outspoken on the issue. Last year, she made headlines when she said the Detroit police department should only use black officers to identify black suspects via facial recognition, out of concerns that “non-African Americans think African Americans all look the same.” Tlaib’s remarks attracted accusations of racism, but she stood by her argument, saying that even many of her congressional colleagues routinely misidentify black lawmakers.
Those are all early attempts at legislation, though, in what’s likely to become a policy battle that lasts long after the current protests.