Posted by Edy Semaan on November 25, 2019 in Blog
Glance, unlock. Accessing your beloved electronic companion has never been easier: the key is now your unique face. Simple enough? Not so fast. The convenient technology of facial recognition has not made it into our daily lives without controversy, and with good reason.
Leading electronics companies across the board have introduced the face scan-based unlock option in their newest flagships. Almost every top-end smartphone released in 2019 boasts the capacity to “learn” your facial characteristics from a one-time setup and grant you easy, continuous, and secure access with only a glimpse. However, uses of the new technology are not limited to the now-mainstream smartphone-unlock feature, and its risks outweigh its benefits.
Facial recognition has branched out into numerous areas of application: lock systems and physical access, attendance tracking, marketing insights and customer service, banking and fast payments, public security, healthcare and disease diagnosis, and social integration. While in some of these applications it can be helpful, like detecting signs of disease in someone’s face or helping the blind assess social situations, facial recognition gets especially tricky in security applications. It is feared that the convenience it brings will not be enough to counterbalance privacy invasion concerns. The technology has already made its way to several airports and ports of entry, where the stated goal is to speed up flight boarding and optimize the screening process. For passengers, this might mean shorter wait times, but privacy advocates worry people will grow too comfortable with the idea of being constantly monitored in public spaces.
The case of journalist MacKenzie Fegan went viral last spring after she tweeted about her unexpected experience ahead of an international JetBlue flight for which she didn’t need to present boarding documents. “I just boarded an international @JetBlue flight. Instead of scanning my boarding pass or handing over my passport, I looked into a camera before being allowed down the jet bridge,” she posted. “Did facial recognition replace boarding passes, unbeknownst to me? Did I consent to this?” Reliance on biometric data in travel procedures is not new, but Fegan’s case brings the issue of consent to the limelight. The American Civil Liberties Union (ACLU) warned that facial recognition is the most dangerous of all biometric systems since it can be exploited passively, without a person’s knowledge or consent, unlike fingerprints or DNA. Public video cameras can sneakily capture a face that, when compared against photo databases already held by government agencies, can be identified, or misidentified.
The practice of public data collection has existed for a long time but, according to experts, the newfound ability to analyze and correlate facial data and draw automated insights and conclusions from them raises greater privacy concerns. “We have really gotten used to the idea of being photographed constantly,” Gregory C. Allen, former adjunct senior fellow at the Center for a New American Security, said in an interview with WBUR. “What’s new in facial recognition technology is that we’re losing the anonymity that used to be associated with being recorded.”
And not only does the intrusive system prove problematic at a time when the technology hasn’t been mastered yet, but it also presents inherent race and gender bias. Several tests have shown that facial recognition works significantly better on Caucasians than on people of color. This proves particularly troubling as local law enforcement agencies expand their use of the technology. Proponents argue that its mere existence deters criminal behavior, since people become more self-aware and careful when they know they are being surveilled, but the fact of the matter is that the technology employed to tighten security is being rolled out mostly in poor and otherwise neglected areas whose residents are largely people of color.
Another deficiency is that the artificial intelligence algorithms that power facial recognition inherit the pre-existing institutional and human biases in the sample of data used to develop them. For example, when the main set of photographs used to create a facial recognition database consists mostly of white men, the algorithm will naturally be better at detecting them than darker-skinned women, as MIT Media Lab research has shown. “When the person in the photo is a white man, the software is right 99 percent of the time,” the New York Times reported.
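The pitfall described above — a headline accuracy number hiding large per-group gaps — is easy to see with a small calculation. The sketch below uses entirely made-up evaluation records (not data from the MIT study; only the 99-percent figure for white men echoes the reported result) to show how breaking accuracy down by demographic group exposes the disparity that an aggregate score conceals:

```python
from collections import defaultdict

# Toy evaluation records: (demographic group, was the match correct?).
# The numbers are hypothetical, constructed to mimic the kind of skew
# the MIT Media Lab study reported.
records = (
    [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

def per_group_accuracy(records):
    """Return accuracy broken down by demographic group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += correct  # True counts as 1, False as 0
    return {group: hits[group] / totals[group] for group in totals}

acc = per_group_accuracy(records)
overall = sum(correct for _, correct in records) / len(records)
# overall accuracy looks decent (0.82), but the per-group breakdown
# shows 0.99 for one group and only 0.65 for the other
```

An audit that reports only the overall figure would miss the 34-point gap entirely, which is why researchers insist on disaggregated evaluation.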
The global market value for facial recognition is set to exceed $8 billion in only three years, according to Forbes, so it’s no wonder big companies are moving fast to set the tech’s standards. As its pioneers, they are now dealing with a number of contentious issues facing their latest products. Amazon recently announced an updated version of Ring Doorbell, a surveillance product that has raised eyebrows for the way it is being marketed and operated. TechCrunch even reported on Nov. 7 that a now-resolved security flaw in Ring doorbells exposed many home Wi-Fi passwords to hackers until September.
A Ring representative denied that the company uses facial recognition technology in its doorbells, but Amazon’s broader work with law enforcement has been met with intense scrutiny, and BuzzFeed News reported in August that the company’s Ukraine arm has a “head of face recognition research.” Generally, Amazon’s work on facial recognition remains largely concentrated in its Rekognition software, which has attracted heavy backlash since its release in late 2016. Reports revealed last year that Amazon even met with U.S. Immigration and Customs Enforcement (ICE) officials to discuss using the software, which detects people in photos and videos and runs their images “against a collection of millions of faces in real-time.”
The rapid buildout of facial recognition and legislation’s failure to keep up have created a vacuum that Big Tech is exploiting to write its own industry-friendly rules. Amazon CEO Jeff Bezos recently confirmed that his public policy team has developed a set of proposed laws meant to regulate the use of facial recognition, to the outrage of civil and digital rights groups. “There’s also potential for abuses of that kind of technology, so you do want regulations. It’s a classic dual-use kind of technology,” Bezos explained. But critics, even within Amazon, worry the technology could be misused and have made it clear that Amazon leadership is choosing profit over people’s safety. “Facial recognition technology exacerbates racial discrimination by police departments, violates privacy rights, and makes the personal data of millions of people vulnerable to security breaches,” non-profit group Fight for the Future said. The group has advocated for a full ban on facial recognition surveillance, rather than regulation, as a preemptive measure. “Imagine if we could go back in time and prevent governments around the world from ever building nuclear or biological weapons. That’s the moment in history we’re in right now with facial recognition,” the group’s deputy director Evan Greer said in a press release.
After 15 months of glitches and controversy, Orlando canceled in July a pilot program that used Amazon’s Rekognition to “automatically identify and track suspects in real-time using facial recognition algorithms,” Orlando Weekly reported. But Orlando is only one example. Amazon is reported to have already started partnerships with 500 policing agencies across the U.S., and critics fear it has created a surveillance network with its video doorbells and “Neighbors” social media app — a network that could eventually acquire facial recognition, even as the platform was found to host racist behavior. In addition to petty crimes being heavily policed through the app, “video posts on Neighbors disproportionately depict people of color, and descriptions often use racist language.” The ACLU further exposed Rekognition’s faults in a test that used the software to compare images of Congress members with a database of mugshots and produced 28 incorrect matches.
And it’s not just Amazon. Other major companies have also faced backlash for embedding the nascent and faulty technology in their products. Google rushed to leap into facial recognition with smart-home device Nest Hub Max, which introduced a camera and a larger display than its predecessor. Even when training its technology for better results, Google resorted to “dubious tactics,” collecting face scans of darker-skinned students and homeless people in exchange for $5 gift cards and failing to disclose the motive, the New York Daily News recently reported.
U.S. politicians lambasted Microsoft for producing research on facial recognition with a military-run university in China. The company’s president also admitted to selling its facial recognition software to a U.S. prison before stopping the practice, citing human rights concerns and the technology’s proven inherent bias.
Facebook recently updated its controversial facial recognition settings after being hit in 2015 with an ongoing lawsuit that claims the social network violated a biometric privacy law in Illinois.
Apple is also facing court following a New York teenager’s $1 billion lawsuit that claims the company’s facial recognition system erroneously accused him of stealing from Apple stores.
Despite the polemic surrounding facial recognition, U.S. Customs and Border Protection (CBP) plans to expand its use of the technology in international vetting efforts. The upgrade aims to bring more “centric” biometric capabilities to existing systems to prevent threats “before travelers arrive to the U.S.,” according to a CBP document released in August. CBP has already relied on facial recognition for operations in many U.S. airports and checkpoints along the Mexican border, but security glitches at airports don’t seem to be going away anytime soon. Although the technology has proven helpful in some cases, like last June when authorities succeeded in identifying a suspect in the Annapolis mass shooting, it would be misleading to tout its overhyped short-term benefits without evaluating its applications and recognizing its dangers before it’s too late. And the future presents bigger challenges for the technology, like how and whether facial data will be stored, sold, and manipulated. Additionally, the New York Times recently reported that “Immigration and Customs Enforcement officials are using facial recognition technology to scan state driver’s license databases without citizens’ knowing.”
Local governments are now taking matters into their own hands. Lawmakers in California passed legislation in September prohibiting police from pairing facial recognition with body-worn cameras for three years. San Francisco, a longtime tech hub, became the first major U.S. city to ban the use of facial recognition tools by police, citing the technology’s significant weaknesses. Other cities, like Oakland, California, and Somerville, Massachusetts, are considering similar measures. Applauding these small efforts and pushing for more comprehensive action, civil liberty groups are united in condemning the premature adoption of the technology and warn of an oppressive, privacy-deprived surveillance state.
It’s surely not enough for a few local governments to ban the technology. It’s high time the federal government took serious, comprehensive action: a complete nationwide ban on the adoption of facial recognition by police and law enforcement agencies, at least while the technology remains in its current vulnerable state, to stop it from wreaking havoc on our society — especially on minorities and underprivileged communities, where the impact is heaviest. It is the legislature’s responsibility to investigate the all-too-powerful companies working on facial recognition and to hold both the executive branch and the private sector accountable, especially when technologies like facial recognition are adopted under the premise of law enforcement to incarcerate virtual identities and institutionalize mass surveillance. Before we reach a point of no return, Congress must set its priorities right and prevent, at all costs, anyone and anything from jeopardizing people’s freedom, safety, and privacy.