Facing the facts: Is facial recognition getting too nosy?

As law enforcement and private firms start using image recognition, our privacy is ever-increasingly under threat

Matthew Field and Natasha Bernal


Criminals in London attempting to hoodwink the police have a new challenge to contend with: facial-recognition cameras.
Dotted around Soho, Piccadilly Circus and Leicester Square, the cameras continuously record the faces of thousands of citizens as part of a major trial for the Metropolitan Police.
Built by Japanese company NEC, the NeoFace system works by analysing the faces of people on a “watchlist”. It measures the structure of each face, including the distances between the eyes, mouth and nose.
If that face, or one similar, is spotted by cameras, police can instantly track the suspect and call them in for questioning.
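NEC does not disclose NeoFace’s internals, but the kind of watchlist matching described above generally boils down to comparing numeric face “templates” against a stored list and raising an alert when a similarity score clears a threshold. The Python sketch below illustrates that general pattern only; the templates, names and threshold are all invented for illustration.

```python
import numpy as np
from typing import Optional

# Hypothetical watchlist: each name maps to a face "template", a numeric
# vector summarising facial geometry (eye spacing, nose and mouth
# positions, and so on). Real systems derive these with trained models.
WATCHLIST = {
    "suspect_a": np.array([0.12, 0.85, 0.33, 0.47]),
    "suspect_b": np.array([0.90, 0.11, 0.64, 0.52]),
}

MATCH_THRESHOLD = 0.95  # illustrative cut-off, not NEC's actual setting


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two templates; 1.0 means an identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_frame(face_template: np.ndarray) -> Optional[str]:
    """Return the watchlist name whose template best matches the face
    seen by the camera, or None if nothing clears the threshold."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, template in WATCHLIST.items():
        score = cosine_similarity(face_template, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


# A face extracted from a camera frame, close to suspect_a's template.
seen = np.array([0.13, 0.84, 0.35, 0.46])
print("alert:", check_frame(seen) or "no match")
```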
The police are believed to have spent more than £200,000 on the technology. But as cash-strapped forces across the UK explore facial recognition as a way to cut costs, some believe the price citizens pay in lost privacy could be far higher.
“There has been concern about how facial recognition is being used,” says Ewa Luger, a fellow at the Alan Turing Institute and a consultant to Microsoft UK.
“Facial recognition involves identifying sensitive personal information. It needs to be treated in a particular way.”
Many of the major technology firms have come under fire from those who allege they are treating it in the wrong way.
In the US, the city of Orlando, Florida, is testing Rekognition, a controversial facial-recognition technology built by Amazon.
Earlier this month, a group of shareholders filed a letter demanding the company stop selling its system to government agencies, citing privacy concerns.
There are also concerns over its accuracy. Last week, a study by the Massachusetts Institute of Technology (MIT) claimed Rekognition mistook pictures of women for men 19% of the time.
Meanwhile, Facebook is facing a class-action lawsuit that alleges it gathered biometric information without users’ explicit consent.
With widespread fears that facial recognition breeds discrimination and fuels mass surveillance, and major tech firms under fire for their lack of concern, can businesses navigate the privacy issues to make a profit?
Husayn Kassai, the chief executive of British digital ID startup Onfido, believes so.
His firm is trying to use machine learning and artificial intelligence to give users more control of their digital identity, and he says putting privacy at the heart of the product is key to success.
Onfido’s technology uses image recognition to check for fraud. It can scan driving licences or bank cards and then use a smartphone camera to check for fraudulent attempts to access accounts.
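Onfido’s actual pipeline is proprietary, but the two-step check just described (is the document genuine, and does the live selfie show the same person?) can be sketched roughly as follows. Here inspect_document and compare_faces are hypothetical stand-ins for the trained models a real system would use.

```python
from dataclasses import dataclass


def inspect_document(document_image: bytes) -> bool:
    """Stand-in for a model that checks a document's security features
    (holograms, fonts, checksums). Trivially passes in this toy sketch."""
    return len(document_image) > 0


def compare_faces(document_image: bytes, selfie_image: bytes) -> float:
    """Stand-in for a face-matching model; returns a similarity score
    from 0.0 (different person) to 1.0 (same person)."""
    return 0.97 if document_image and selfie_image else 0.0


@dataclass
class CheckResult:
    document_valid: bool
    face_match_score: float


def verify_identity(document_image: bytes, selfie_image: bytes,
                    threshold: float = 0.9) -> bool:
    """Grant access only if the document looks genuine AND the selfie
    matches the photo on the document."""
    result = CheckResult(
        document_valid=inspect_document(document_image),
        face_match_score=compare_faces(document_image, selfie_image),
    )
    return result.document_valid and result.face_match_score >= threshold


print(verify_identity(b"licence-scan", b"selfie-frame"))  # True in this toy run
```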
It is being used by Revolut and Monzo, among others, and there are rumours it is in play at several large banks and social media companies.
The technology uses encryption and machine learning to protect data, but Onfido’s main aim is to decentralise online identity: all users could one day be linked to a unique ID that only they would own and could independently verify through a smartphone or online app. No other party would ever hold their data.
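The article does not detail how such a user-owned ID would work in practice, but the standard building block for decentralised identity is a public/private key pair: the user’s device keeps the private key, the ID is derived from the public key, and signing a random challenge proves ownership without any central party holding personal data. Below is a minimal sketch of that generic pattern (not Onfido’s published design), using the open-source cryptography package.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The user generates a key pair on their own device; no one else
#    ever sees the private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# 2. The unique ID is derived from the public key alone, so it can be
#    shared freely without exposing anything sensitive.
public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
user_id = hashlib.sha256(public_bytes).hexdigest()

# 3. A bank issues a random challenge; the user signs it to prove they
#    own the ID. The verifier needs only the public key, not a central
#    database of personal data.
challenge = b"login-challenge-42"
signature = private_key.sign(challenge)

try:
    public_key.verify(signature, challenge)
    print(f"verified holder of ID {user_id[:16]}...")
except InvalidSignature:
    print("verification failed")
```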
The approach appears to be winning over investors. In 2017, Onfido secured £22m from backers including Microsoft and Salesforce, and Microsoft has championed its software for biometric scanning in banking.
The Sunday Telegraph revealed at the weekend that an AI and blockchain fund linked to Japanese financial services giant SBI Holdings would be the next investor to add its funding to the British startup. Onfido declined to comment on the new funding round.
For those who get facial recognition right, the rewards could be huge.
The global market for the technology is expected to reach $9.6bn by 2022.
Megvii, an Alibaba-backed Chinese AI company working on facial recognition, is reportedly heading for a $1bn listing.
Another company placing privacy at the heart of its products is Aurora, a UK provider of facial-recognition technology. “Our approach has been not to get involved in surveillance-type applications,” says Nick Whitehead, head of strategic partnerships.
The company, founded in 1998, got its start in the construction industry, where its technology was used to verify that workers claiming to be on site were actually present, clamping down on fraudulent wage claims.
However, the rapid growth of artificial intelligence and machine learning has presented the company with new opportunities, and it has taken on bigger and bolder projects at places such as Heathrow and Manchester airports, where it has operated for about seven years.
Its technology is used for self-service processes to make passenger experiences faster, and will play a role in Heathrow’s biometrics rollout across multiple checkpoints in the next few years.
It is also used in terminals that host both international and domestic passengers, to prevent anybody who has not cleared the appropriate security from boarding a domestic flight.
Aurora says it is not trying to identify individuals from a “large population of people” and hold on to data about them.
Instead, it retains the biometric data of travellers for 24 hours after a plane has landed, at which point it is deleted.
“We’re not directly involved with retaining your biometric data for your next visit, so the next time you visit Heathrow you would start the process afresh. From a security perspective part of what we’re doing is saying it’s just temporary storage for that purpose,” says Whitehead.
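As a concrete illustration of that retention rule, a purge job needs only each record’s landing time: anything older than the 24-hour window is dropped. The sketch below uses invented records and field names, not Aurora’s actual schema.

```python
from datetime import datetime, timedelta

# The 24-hour retention window the article describes.
RETENTION = timedelta(hours=24)

# Hypothetical biometric records, each stamped with the flight's landing time.
records = [
    {"traveller": "t-001", "template": b"...", "landed_at": datetime(2019, 2, 1, 9, 30)},
    {"traveller": "t-002", "template": b"...", "landed_at": datetime(2019, 2, 2, 8, 15)},
]


def purge_expired(records: list, now: datetime) -> list:
    """Keep only records whose landing time is inside the retention window;
    everything else is deleted."""
    return [r for r in records if now - r["landed_at"] < RETENTION]


records = purge_expired(records, now=datetime(2019, 2, 2, 10, 0))
print(len(records), "record(s) retained")  # t-001 has expired; only t-002 remains
```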
Even for companies with the best of intentions, facial recognition may take some time to win over the staunchest privacy advocates.
“I think you are never in control of those data,” says Dr Heleen Janssen, a computer science expert at the University of Cambridge and a former Dutch government privacy regulator.
“What does control mean in this respect? Once it is out of your control as a company it’s very hard to enforce the ethics that you originally built into the system.”
While Onfido and Aurora may be taking privacy seriously, other companies have taken a different approach.
In China, SenseTime is now the world’s most valuable AI startup. It has raised more than $1.6bn from the likes of e-commerce giant Alibaba and big-name investors such as Fidelity, and is valued at more than $4.5bn.
But, like Amazon and Facebook, its facial ID technology has concerned privacy advocates.
SenseTime’s technology is used by various Chinese police departments to analyse security footage as part of futuristic surveillance systems.
A recent Bloomberg Businessweek report on a visit to the company described its office, filled with facial sensors that score visitors’ faces for happiness, as if one had “stumbled into a Philip K Dick novel”.
“We’re not really thinking very far ahead, you know, whether we’re having some conflicts with humans, those kinds of things,” SenseTime’s founder Tang Xiao’ou said last year.
“We’re just trying to make money.”
While modern technology, in particular facial recognition and online identity, is seen as presenting a trade-off between privacy and security, it may not always be that way.
“We are told we must surrender privacy for more security,” Kassai says. “But the tech is out there to offer both privacy and security.”
– © The Daily Telegraph
