by Emily Caditz
Last week, Apple proclaimed “the future is here.” The future is, apparently, the latest generation of iPhone, the iPhone X. Although the iPhone X has several noteworthy features, FaceID has garnered the most attention. FaceID is Apple’s new security system based on facial-recognition software.
In the sense that facial-recognition software will become more ubiquitous after FaceID’s launch, Apple’s proclamation is correct: FaceID is the future. Facial-recognition software is becoming more accurate, cheaper to offer, and more widely applied across markets. However, as some reactions to FaceID’s rollout have highlighted, facial-recognition technology raises consumer security and privacy concerns. Although not an exhaustive list, these concerns include spoofing, easier access to sensitive data, surveillance, and technological normalization.
Most obviously, facial-recognition technologies are vulnerable because faces can be spoofed. FaceID tries to curb this threat: Apple designed the system so that it cannot be tricked by closed eyes, masks, or two-dimensional photos. Thus, a person whose face merely resembles a particular iPhone X owner’s is unlikely to fool FaceID.
However, while FaceID’s added security is a welcome innovation, it must be weighed against the fact that FaceID may be error prone, with serious consequences. These consequences may be even more severe because, unlike previous facial-recognition offerings, FaceID requires iPhone X users to opt out rather than opt in. Hence, FaceID will likely be more widely used than past facial-recognition software. And, given that Apple is expected to ship 40 million iPhone Xs before the end of 2017, a vulnerability would likely affect a large number of people.
Also, who is accessing facial-recognition technology, and why, will likely become increasingly relevant. Facial-recognition technology relies on sensitive personal information to work. For example, FaceID is compatible with apps that store payment and password information. As a result, accessing an iPhone X owner’s bank account could be as simple as pointing the phone at the owner’s face, and making a purchase could soon be as easy as smiling at a camera. Whether the actor is an identity thief, a common thief, or law enforcement performing a search and seizure, facial-recognition technology carries a real risk of abuse.
For others, facial-recognition security technology is threatening because it could be used for mass surveillance. For FaceID to work, the camera must be on whenever an iPhone X is awake, locked, or running a FaceID-enabled app. On average, we unlock our phones 110 times a day, so systems like FaceID collect and analyze a great deal of data. And unlike government facial-recognition networks, FaceID is connected to consumer platforms. While other companies have powerful facial-recognition technologies, Apple is the first to have “a facial recognition system with millions of profiles, and the hardware to scan and identify faces throughout the world.” Thus, FaceID is an attractive target for bad actors and government-surveillance orders.
In response to these concerns, some consumers have found “creative” solutions, but these cannot reasonably be sustained. And while an iPhone X user can always turn FaceID off, doing so will not comfort those who worry about being observed by others’ FaceID-enabled phones.
Nevertheless, it is important to keep in mind that FaceID stores its data on the physical device, not on a cloud server. To access FaceID’s data, one would have to hack into the iPhone X itself, and iOS devices have historically been notoriously difficult to hack. Even so, as cybersecurity experts have warned, systems like FaceID are hackable, just like any other piece of technology.
Finally, FaceID may present a subtler problem: normalization. Apple has implemented strong measures to protect its customers’ privacy, but there is no promise that other companies will be as protective. If consumers’ security and privacy are not breached while using FaceID, they may be slow to recognize the dangers of the next bio-scanning product. True, Samsung’s iris-scanning technology is at least as secure as FaceID, but other technologies may not be. So, in a FaceID world, consumers face added complexity in an old task: remaining vigilant against security and privacy threats.
*Disclaimer: The Colorado Technology Law Journal Blog contains the personal opinions of its authors and hosts, which do not necessarily reflect the official position of CTLJ.