“Pay with your face.” As threatening and sinister as this may sound, it isn’t a line from a new episode of the dystopian series Black Mirror. It’s a tagline for Apple’s newest technological innovation – the £999 iPhone X and its ‘Face ID’ feature. Apple’s marketing leans heavily on its development of facial recognition software. I’m quite attached to my face, in more ways than one, so this set off some alarm bells in the tin-hat conspiracy theorist deep inside me. The scientist in me is stronger, though, so I decided to give Apple the benefit of the doubt and get some questions answered. How does the technology work? If it doesn’t work as Apple promises, what will this mean for user security? Should we be worried about what information organisations – legal or criminal – may be able to glean with this software? Is it even worth it?
It’s clear why Apple felt the need to give facial recognition a serious update: so far, it has been notoriously easy to trick. Nguyen Minh Duc, manager of the application security department at Hanoi University of Technology, succeeded in fooling Lenovo, Asus and Toshiba laptops with a photograph of the user. Alibaba (‘China’s answer to Amazon’) attempted to solve this problem when developing a service that lets customers verify purchases by looking into their phone camera; the payment is only accepted if the software detects the user blinking. However, an impostor could simply play a video of the user blinking instead of holding up a photo, and successfully deceive the system.
So, how does Apple believe it has achieved its “revolution in recognition”? In September 2017, Apple released a document explaining Face ID security to consumers. When you want to unlock your phone, instead of comparing what the camera sees with an ordinary colour image, the iPhone projects a pattern of over 30,000 infrared dots onto your face and uses it to capture a sequence of 3D depth maps and 2D infrared images. Because the system uses light outside the visible spectrum, Face ID works even when the user is wearing sunglasses or standing in darkness. The device randomizes the sequence in a pattern specific to each phone, then transforms the captured data into a mathematical representation that allows your face to be recognised across a variety of expressions and poses – supposedly without being fooled by photos, videos or even 3D face replicas.
Image credit: Wikimedia Commons
This is all done using a piece of computer software called an ‘artificial neural network’ – ‘neural’ because its design is loosely inspired by biological brains. In a similar way to how the human mind develops, a neural network ‘learns’ by working through examples, adjusting a complex system of interconnected computing cells until it gets closer to a desired result. Apple trained its network on infrared images and depth maps of thousands of people of different genders, ages and backgrounds, so that it would work for a diverse range of customers.
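Conceptually, the final matching step can be pictured as comparing two lists of numbers: the ‘faceprint’ stored on the phone and the one produced by a fresh scan. The sketch below is a toy Python illustration, not Apple’s actual method – the faceprints, the distance measure and the threshold are all invented for the example.

```python
import math

# Hypothetical four-number 'faceprints' -- real Face ID representations are
# far richer, and Apple does not publish their format.
enrolled = [0.12, 0.87, 0.45, 0.33]  # saved when the user set up Face ID
scanned  = [0.11, 0.85, 0.47, 0.30]  # produced by a fresh scan

def euclidean_distance(a, b):
    """Distance between two faceprints: a small value means similar faces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# An illustrative cut-off: scans closer than this count as a match.
THRESHOLD = 0.1

distance = euclidean_distance(enrolled, scanned)
print(f"distance = {distance:.4f}, match = {distance < THRESHOLD}")
```

Tolerating small distances rather than demanding an exact match is what lets the same face be recognised across different expressions, angles and lighting.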
This all sounds very convincing, but if the saved data is such a close representation of my appearance, I would want to be certain that only the right people have access to it. In 2013, Apple changed the way its iPhones keep data secure by introducing a processor chip called the Secure Enclave – a dedicated piece of hardware for sensitive data such as biometrics. It is kept separate from the software you use every day, which is far more exposed to infiltration. The mathematical representation that allows Face ID to recognise your face is stored in this chip and isn’t sent to an external server for Apple – or anyone else – to access. Not only is the chip well-encrypted (protected), but the images initially taken of your face are cropped, minimising the amount of background information that is stored. This means that strangers won’t be able to find out where you live from a road name in the corner of an image, and you won’t get targeted advertising from the stack of Domino’s boxes in the corner of your room. Even if someone gets hold of your phone, extracting this data would be very difficult thanks to Apple’s level of encryption.
Face ID isn’t the only feature that has proved controversial. The iPhone X is the first iPhone without a home button, meaning Face ID effectively replaces Touch ID (which uses a fingerprint instead of your face). In terms of security, Face ID comes out on top: Apple puts the chances of a random stranger unlocking your phone at one in 50,000 for Touch ID, but one in a million for Face ID. But is this level of security even necessary, especially at the expense of convenience? Apple claims it makes using its products a more natural experience, yet the iPhone X requires the user to look at and engage fully with the device, when most of the time a quick tap of a finger to check the time would be sufficient. Considering the price tag and the resources involved, Face ID doesn’t seem justifiable for some animated emojis.
Face ID certainly isn’t a major security threat at the moment. However, there are a few things to keep an eye on. Apple will allow third parties to use the software in their own apps, so always check app permissions, even if you think you’re in the know. Biometric, infrared face recognition may one day be put to immoral uses, but that comes with the territory of any new technology. And although Apple may be known for manipulating consumers into a cult following, it is also known for its thorough approach to security. So, no real-life Black Mirror just yet.