Robert W. was in his office when the phone rang.
It was the Detroit Police Department telling him to come to the station.
He thought it was a joke and ignored it.
But when he pulled into his driveway a few hours later, a police car pulled in behind him.
Two officers arrested Robert on his front lawn as his wife and daughters watched.
The police officers wouldn’t say much about why he was being arrested.
But they did show him an arrest warrant for felony larceny.
The police took Robert to a detention center, where officers photographed and fingerprinted him and collected a DNA sample.
Then they locked Robert in a jail cell overnight.
The next day two detectives took him to an interrogation room.
They showed Robert a still image from a surveillance video of a theft at a jewelry store.
The detectives said the thief was him.
Robert knew that he had not committed the crime in question.
What he didn’t know was that facial recognition software used by police had matched his photo to the crime.
The problem is that police treated the facial recognition match as a smoking gun when it should have been just one clue in the case.
Before arresting Robert, detectives should have done more work.
For instance, they could have pulled his cell phone records to see where he was when the crime occurred.
Two weeks after his arrest, the case against Robert was dismissed.
His case is an example of flawed technology combined with poor police work.
The fact is, law enforcement agencies all over the U.S. are using facial recognition software.
Yet even the best software is only about 90% accurate.
And 90% is unacceptable when the software is the sole source of identification.
Even the company behind the facial recognition said, “A match using facial recognition alone is not a means for positive identification.”
And Brendan Klare, co-founder and CEO of Rank One Computing, said his company’s system had been misused.
“Rank One unreservedly opposes any misuse of face recognition technology including where a candidate match serves as a lone source of probable cause for arrest.”
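To see why a match alone is so unreliable, consider some back-of-the-envelope math. The numbers below are hypothetical, chosen only for illustration: even a per-comparison error rate far better than 90% accuracy almost guarantees a false hit when one photo is searched against a database of millions of faces.

```python
# Illustrative sketch with hypothetical numbers: when a probe photo is
# compared against every entry in a large database, even a tiny chance
# of a false match per comparison adds up quickly.

def prob_false_match(database_size: int, per_comparison_fpr: float) -> float:
    """Probability that at least one innocent entry is wrongly matched,
    assuming each comparison is independent."""
    return 1 - (1 - per_comparison_fpr) ** database_size

# Assume a 0.01% false-match rate per comparison (far better than the
# 90% accuracy cited above) and a mug-shot database of 1 million faces.
p = prob_false_match(1_000_000, 0.0001)
print(f"{p:.4f}")  # effectively 1.0: at least one false hit is near certain
```

This simplified model assumes every comparison errs independently, but it shows why a candidate match by itself cannot serve as probable cause.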
So, with facial recognition software spreading rapidly, here are a few ways to protect yourself from becoming a victim of mistaken identity.
Stop using photo-sharing websites: Facial recognition systems have to compile large photo databases in order to work.
Companies get these photos from image-sharing websites such as Flickr.
In fact, one company admitted it obtained over 1 million images from that photo-sharing website.
These sites' user agreements typically allow others to use your photos.
But most people never read the fine print, so they never realize this.
Social media: Sites like Facebook use facial-recognition software to analyze photos you upload.
This way Facebook can suggest which friends to tag in the photo.
By confirming those tags, you verify who is in each photo, which helps improve Facebook's facial recognition software.
This is something you should turn off.
Go to your Facebook settings and edit your preferences.
Select “no” when it comes to facial recognition.
Check all your social media accounts and turn off the facial recognition features.
Smart home assistants: One of Google's newest smart home devices has a camera that is always looking for your face.
Its Face Match feature uses facial recognition technology to store your faceprint.
So, once it’s learned to identify your face, it can offer you personalized data.
For example, it can tell you your Google messages and calendar appointments.
Other smart home devices with cameras have similar features.
They can tell who is in the room based on facial recognition and personalize the information.
Yet this means Google, or whichever company makes the assistant, holds a copy of your face.
And that is one more database that can share your image.
Think twice before bringing this into your home.
Facial recognition systems cannot work without photos.
The best way to stay off their radar is to be stingy with your face.