Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.
And as technologists push to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, some of the companies that stand to benefit most from this expanding digital surveillance of the physical world are calling on government to provide guidelines for how they can use the incredibly powerful tools they’ve created.
That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.
In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays over users’ selfies.
Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement; Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration); and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.
Smith posits this nightmare scenario:
“Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.”
What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).
In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some guardrails. Late capitalism is weird.
But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.
“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.
The fact is: something does, indeed, need to be done.
As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”
All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.
In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtain on the industry’s huge, not-so-secret problem.
Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie.
And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.
There’s really no “nice” way to acknowledge these things.
Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast, and in the case of law enforcement they are not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach that individual.
Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.
As Smith himself notes: “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way,” he writes.
While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.
That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.
But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”
Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.
“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”
from TechCrunch https://ift.tt/2mhogiQ