Biometrics: conceding privacy and civil rights for convenience and security?

Biometric security is being revolutionised without civil oversight

Biometrics, like all technology applications, is an ever-developing science. Although governments have used it for decades to detect criminal activity and to control immigration, strides in deep machine learning and increasingly sophisticated algorithms have transformed our lives in ways that would have been inconceivable a decade or so ago, and unfathomable even for Orwell’s prescient imagination.

Rem Darbinyan, CEO of Smartclick, an umbrella company that invests in and develops deep machine learning technologies, puts this succinctly:

“Computers see and interact with our world through countless applications, including image analysis, object recognition and sim recognition.”

In theory at least, governments know everything about each and every one of us, or could find it out if they chose to (biometrics has been used by the FBI, he points out, for decades). 

Billions globally have opted in and continue to share their data with Facebook, whose face recognition capabilities have allowed it to build the largest database of images in the world. And while the social media behemoth was the subject of a long-running legal dispute about the way it scans and tags people’s photos (it was ordered to pay $550m to a group of users in Illinois), this is akin to shutting the stable door after the horse has bolted.

In Europe, the introduction of GDPR legislation was a clumsy attempt to allow us to opt out of cookie collection. Opting out is, however, neither convenient nor entirely foolproof.

Which brings me neatly to the main point of the privacy discourse: an entire generation of internet users has come to accept The Deal, which is ‘I give you my data in exchange for convenience’.

Darbinyan believes that the technology allowing users to opt out will be developed further, not least because of the ethical implications in an industry that, at the time of writing, lacks government regulation.

“Every technology can be as useful as harmful and biometrics is in its teenage stage, rather than fully developed.”

Ethics are relative, and it may come as a total surprise to the casual reader that the deep machine learning/AI (artificial intelligence) industry is entirely self-regulated. In other words, it is left to each company developing its proprietary technology and algorithms to decide what is acceptable versus what is profitable – if, indeed, a conflict exists between the two.

Already, facial recognition technology is chillingly invasive: it can identify one face among several hundred thousand in just a second with great accuracy, even after plastic surgery, based on measurements, distances between features and bone structure.
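
To make the idea concrete, here is a minimal Python sketch of how a matcher might compare two face ‘templates’ (vectors of measurements) and declare a match when they are close enough. The feature vectors, the cosine-similarity measure and the 0.6 threshold are illustrative assumptions, not any vendor’s actual pipeline.

```python
# A minimal sketch (not any vendor's actual pipeline) of face matching:
# each face is reduced to a vector of measurements, and two faces "match"
# when the vectors point in nearly the same direction.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(template_a: list[float], template_b: list[float],
             threshold: float = 0.6) -> bool:
    # The 0.6 threshold is an illustrative assumption.
    return cosine_similarity(template_a, template_b) >= threshold

# Hypothetical templates derived from facial measurements
enrolled = [0.12, 0.87, 0.33, 0.54]
probe    = [0.10, 0.85, 0.35, 0.52]
print(is_match(enrolled, probe))  # True: the two vectors are very close
```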

Biometrics is, of course, much more than face recognition. Fingerprinting is being incorporated into passport chips, with voice, retina and gait recognition (the latter in China) mooted for the not-too-distant future.

Each time we use an ID, a fingerprint, our voice or a SIM on our smartphone, we leave a digital imprint that allows the machine to refine its analysis and knowledge, ultimately creating a perfectly recognisable ‘snapshot’ file of every user.

Deep machine learning allows for the development of products and solutions that can exponentially improve the quality of our lives, says Darbinyan, who is an active investor in scalable start-ups backed by great teams.

He is an optimist when it comes to the future of AI: the industry, he says, is not sufficiently developed to overcome the human race.

“We will find a way to protect ourselves because we already know the potential dangers and can anticipate the threats.”

One of the biggest players in facial recognition technology is Cognitec.

Cognitec is the only company worldwide that has worked exclusively on face recognition technology since its inception in 2002, and it has developed one of the leading algorithms in the world. The German company specialises in ID management, immigration, law enforcement, and security in stadiums and casinos.

Elke Oberg, a spokesperson for the company, explains that most countries have adopted a standardised model of ID pictures which makes the error rates “barely measurable”. 

Privacy, as a concept, has lost ground to practical application because, she reminds me, as soon as we cross a border, we consent to our information being checked.  

Governments do store data, and the USA has been storing private data for at least two decades. The EU’s entry-exit system is similarly being overhauled to create records of non-EU citizens and monitor visa overstays. The UK, which has left the EU, is considering following suit.

Are we marching towards a future when every person will be logged in ‘The System’? 

This is the case already, says Elke. The fine line between safety and privacy is already blurred, with most of us having already relinquished the latter voluntarily.

The current limitations of facial recognition technology are being addressed by an array of newcomers, eager to stake their claim by conquering the ultimate loophole in the algorithms: image morphing (using GAN technology to combine two images into a single one that could plausibly match either of the original image donors).
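
For illustration only, the sketch below shows the naive version of the idea: averaging two aligned face images produces a picture that sits ‘between’ both donors. Real morphing attacks rely on GANs and careful landmark alignment; the tiny arrays standing in for photographs here are made-up assumptions.

```python
# A deliberately naive illustration of the morphing idea: blend two aligned
# face images so the result resembles both donors. Real attacks use GANs and
# landmark alignment; the 2x2 "images" below are made-up stand-ins.
import numpy as np

def naive_morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two equally sized images; alpha=0.5 weights both donors equally."""
    return (alpha * face_a + (1 - alpha) * face_b).astype(np.uint8)

donor_a = np.array([[10, 200], [30, 120]], dtype=np.uint8)  # stand-in image A
donor_b = np.array([[50, 180], [90, 100]], dtype=np.uint8)  # stand-in image B
morph = naive_morph(donor_a, donor_b)
print(morph)  # pixel values halfway between the two donors
```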

NIST (the National Institute of Standards and Technology) has issued a challenge, in the form of a competition, to develop the best morphing detection tool, precisely because morphing is one of the last remaining limitations.

Lighting is another parameter that can lead to false positives or false negatives, and poor image quality in general is a further limitation. On the other hand, the algorithms are “as inclusive as possible for every race and ethnicity”.

Fingerprinting presents its own challenges when applied to individuals whose fingertips are calloused or otherwise damaged by the type of work they do (e.g. construction).

Combining different modalities of authentication should eventually eliminate most errors. As for the immediate future, we are almost at the point of self-authenticating through ID documents stored on our smartphones, and a number of countries have now introduced digital driving licences.
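
As a rough sketch of what combining modalities might look like in code, the snippet below fuses hypothetical face, fingerprint and voice match scores into a single decision. The weights and the threshold are illustrative assumptions, not values used by any of the companies mentioned here.

```python
# A minimal sketch of score-level fusion across biometric modalities.
# The weights and decision threshold are illustrative assumptions; the point
# is simply that combining independent checks reduces the overall error rate.
def fuse_scores(face_score: float, fingerprint_score: float, voice_score: float,
                weights: tuple = (0.5, 0.3, 0.2), threshold: float = 0.7) -> bool:
    """Each score is a match confidence in [0, 1]; return True to authenticate."""
    fused = (weights[0] * face_score
             + weights[1] * fingerprint_score
             + weights[2] * voice_score)
    return fused >= threshold

# A weak fingerprint read (e.g. calloused fingertips) can be offset by the others.
print(fuse_scores(face_score=0.92, fingerprint_score=0.40, voice_score=0.85))  # True
```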

Cognitec is well aware of the industry’s responsibilities regarding privacy and civil liberties protection, abuse of technology and self-regulation. 

If a certain technology is open to misuse or outright abuse, the company (any company) can decide not to make it available to a particular sector or country – or simply decide not to build it at all, even if it has the capability.

Cognitec is justly proud of its longevity in the sector – it is one of the oldest and most highly specialised firms, with a dedicated R&D team, catering to small retail customers and large governments alike.


Trueface provides a facial recognition platform intended to make immediate decisions based on identified patterns. The company’s platform applies advanced computer vision technology to camera footage and images, enabling users to identify persons of interest and objects easily.

Nezare Chafni, Trueface’s chief technology officer, does not equivocate when I put it to him that our lives are controlled by algorithms.

Everything – from online ads to financing to comparative quotes – is based on data and on algorithms analysing it, then delivering results and solutions, he says. If not already in use, AI/deep machine learning will soon be able to assess whether someone is likely to commit a crime.

Trueface’s software is used by air force, government and travel clients, and among other things, the company powers CCTV monitors with AI technology.

He sees “high specialisation and moving away from the common architecture” as the two main differentiators between players in the field of biometrics: using proprietary algorithms and, rather than reinventing the wheel, improving their performance.

The technology itself is only as good as the data: better and more balanced data makes for a more efficient system.

“Eventually everyone has to specialise.”

Facial recognition itself relies not only on matching an image to an ID chip, but also on hashed data, something we help create when, for example, we use an e-passport gate.

Hashing is the process of using an algorithm to transform data of any size into a fixed-size output (e.g., a string of numbers and letters). To put it in layman’s terms, a piece of information (e.g., a name) is run through an equation that produces an effectively unique string of characters.
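
A concrete example using Python’s standard library shows the point: however long the input, SHA-256 always returns a fixed-size digest, and the same input always produces the same digest.

```python
# Hashing with the standard library: any input, fixed-size output.
import hashlib

def hash_value(text: str) -> str:
    """Return the SHA-256 digest of a string as 64 hexadecimal characters."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(hash_value("Jane Doe"))        # 64 hex characters
print(hash_value("Jane Doe" * 100))  # a much longer input, still 64 characters
print(len(hash_value("Jane Doe")))   # 64
```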

In other words, the more we participate in the process of being tracked, the better the machine tracking us becomes.

In the works are technologies that identify us through body temperature and eye duct authentication, while document verification techniques will combine biometrics with photography and QR codes. Even if a face has been altered by plastic surgery, say, the AI may fail once, but it will learn after the process is repeated a few times.
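
As a purely hypothetical sketch of how a QR-based document check could work, the snippet below recomputes a digest of a document’s printed fields and compares it with the digest carried in the (already decoded) QR payload. The field names and the choice of SHA-256 are assumptions made for illustration, not a real passport or ID standard.

```python
# Hypothetical QR-based document check: the decoded QR payload carries a digest
# of the document's printed fields, and the verifier recomputes that digest.
# Field names and the use of SHA-256 are illustrative assumptions only.
import hashlib

def digest_fields(name: str, date_of_birth: str, document_number: str) -> str:
    canonical = "|".join([name, date_of_birth, document_number])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_document(fields: dict, decoded_qr_digest: str) -> bool:
    """True if the document's fields match the digest carried in the QR code."""
    return digest_fields(fields["name"], fields["date_of_birth"],
                         fields["document_number"]) == decoded_qr_digest

fields = {"name": "JANE DOE", "date_of_birth": "1990-01-01", "document_number": "X1234567"}
expected = digest_fields(**fields)  # what the issuer would have embedded in the QR code
print(verify_document(fields, expected))  # True
```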

Emerging biometric standards include multi-modality (special cameras, thermal sensors and the like), with the ability to identify one face in a billion; the only real difference is how the information is stored in different countries.

I asked Nezare if he wants his children to live in a world where technology is all-pervasive and all-invasive, even if it makes us super-safe.

He offered a version of an often-used quote, loosely attributed to Michel Glautier’s The Social Conscience:

“Capitalism will only survive if it develops a social conscience.”

People will reject AI if it becomes too invasive, he argues.

“Guidelines and standards are just a starting point. Ethical standards are currently applied at company level only. I expect biometrics to get regulated and an international standard to be adopted, with national opt-outs a possibility (people being asked for an explicit consent to their data being saved and used).”


About the companies featured in this article.

Cognitec welcomes proposals from investors. Please send serious inquiries to sales@cognitec.com and refer to this article.

Trueface is a private company funded by venture capital.