Walking around without being constantly identified by AI could soon be a thing of the past, legal experts have warned.
The use of facial recognition software could signal the end of civil liberties if the law doesn’t keep pace with advancements in technology, they say.
Software already being trialled around the world could soon be adopted by companies and governments to constantly track you wherever you go.
Shop owners are already using facial recognition to track shoplifters and could soon be sharing information across a broad network of databases, potentially globally.
Previous research has found that the technology isn’t always accurate, and is especially prone to misidentifying women and people with darker skin tones.
If these predictions come to pass, your identity could be confused with that of a criminal, barring you from shops and other buildings.
Alarms have been raised by experts over the rapid spread of the technology, warning it’s a clear breach of privacy and arguing for new laws to combat the problem.
Neema Singh Guliani, the American Civil Liberties Union’s senior legislative counsel told CNET: ‘Unless we really rein in this technology, there’s a risk that what we enjoy every day – the ability to walk around anonymous, without fearing that you’re being tracked and identified – could be a thing of the past.’
Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation, also said: ‘So far, we haven’t been able to convince our legislators that this is a big problem and will be an even larger problem in the future.’
‘The time is now to regulate this technology before it becomes embedded in our everyday lives.’
The warning comes as reports emerged of the US Department of Commerce using photos of immigrants, abused children and dead people in their facial recognition systems.
The department’s Face Recognition Verification Testing program was first set up as a way to evaluate facial recognition technologies developed by companies, academic researchers and designers.
Tech giants ranging from Amazon to Microsoft have also faced growing scrutiny from human rights and privacy advocates over their facial recognition software.
Amazon in particular has received criticism over its decision to sell its ‘Rekognition’ software to government agencies.
Scientists at MIT also found Rekognition misidentified women, and darker-skinned women in particular, raising serious concerns about bias and discrimination as well as public safety.
When the software was presented with a number of female faces, it incorrectly labelled 19 per cent of them as male.
Rekognition incorrectly labelled 31 per cent of dark-skinned women as men.
By comparison, the software made no errors when it tried to identify pale-skinned men.
The proliferation of content uploaded to platforms like Facebook, Google and YouTube has made it that much easier for researchers to find data for their studies.
Many facial recognition systems are being trained using millions of online photos uploaded by everyday people and, more often than not, the photos are being taken without users’ consent, an NBC News investigation has found.
In one worrying case, IBM scraped almost a million photos from unsuspecting users on Flickr to build its ‘Diversity in Faces’ facial recognition database.
Only academic or corporate research groups can request access to the database, according to NBC News.
Once the photos are collected, they’re then tagged by age, measurements of facial attributes, skin tone, gender and other characteristics.
Many photographers were surprised to find their photos had been used to train IBM’s algorithms.
‘None of the people I photographed had any idea their images were being used in this way,’ Greg Peverill-Conti, who had 700 of his photos used in the dataset, told NBC News at the time.
IBM defended the database, with a spokesperson saying it helps ensure fairness in facial recognition technology and promising to protect ‘the privacy of individuals.’
‘Individuals can opt-out of this dataset,’ the spokesperson added.
IBM told NBC News it would assist anyone who wanted their photos removed from the training dataset.
Despite this, NBC News found that it was almost impossible for users to prevent their photos from being used.
To request removal, photographers have to email IBM with links to each photo they want taken down.
But the contents of the database aren’t publicly available, so it’s extremely difficult for photographers to know which of their photos have been swept up in the database.
Even celebrities’ security teams are using the technology to track stalkers.
Taylor Swift fans were reportedly surveilled with facial recognition technology without their knowledge at one of her concerts in a bid to track her stalkers.
Those who attended the concert at the Rose Bowl in California on May 18 had their photos taken when they looked at a kiosk screen showing footage of the singer rehearsing, Rolling Stone reports.
Their photographs were reportedly sent to a ‘command post’ in Nashville and cross-referenced against images in a database of hundreds of Swift’s known stalkers, Mike Downing, chief security officer at Oak View Group, an advisory board for concert venues, told the magazine.