Facial Recognition Technology & Data Protection: Analysis of the Council of Europe’s New Guidelines
Posted on February 18, 2021
Authored by Melita Tessy*

Image Source: LA Times
Introduction
On 28 January 2021, the Council of Europe published the Guidelines for Facial Recognition (“Guidelines”), prepared by the ‘Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data’ (“Consultative Committee”). The Guidelines were issued under the ‘Convention for the Protection of Individuals with regard to the Processing of Personal Data’, also known as ‘Convention 108 on Data Protection’ (“Convention”).
Facial Recognition Technologies (“FRT”) were first developed in the 1960s. They have improved rapidly ever since and are currently used around the world for various purposes, both uplifting and concerning. An example of FRT being put to good use is the New Delhi police’s use of the technology to identify nearly 3,000 missing children within a span of four days, by matching pictures of the missing children against pictures of children found by the police.
Despite such success stories, however, there are valid reasons for concern with regard to FRTs. In June 2020, the United Nations reported that States are increasingly using FRTs to identify protesters, sometimes through live facial recognition. What makes this deeply troubling is that the technology is susceptible to errors and could lead to misidentification, misinterpretation and wrongful arrests. A landmark United States federal study has reported that current FRT systems misidentify people of color more often than white people: Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans and Pacific Islanders were also more prone to misidentification than white men. In addition, FRTs misidentify women 18% more often than men. According to the UN High Commissioner for Human Rights, Michelle Bachelet, these errors could intensify discrimination based on race and gender. An illustrative FRT error is the 2018 case of a famous Chinese businesswoman who was issued an automated ticket for jaywalking, when surveillance cameras had in fact captured her image from an advertisement on the side of a bus.
The above events make it clear that FRTs and their use need to be regulated. However, prevailing legislation across jurisdictions does not adequately address this emerging issue. Against this background, this article provides an overview and analysis of the Guidelines.
Council of Europe’s Guidelines on Facial Recognition
The ‘Convention 108 on Data Protection’ is a multilateral instrument on the protection of personal data. It has 55 parties, most of which are in Europe, and more than 25 observers, including the US, Canada, Brazil, and Australia; it has no Asian participation and extremely limited African participation. The Consultative Committee has produced various reference documents on artificial intelligence, big data, internet governance, media privacy, health-related data and data processing by law enforcement agencies.
According to the Guidelines, facial recognition refers to the automatic processing of digital images containing the faces of individuals, for the purpose of identifying or verifying those individuals using face templates. The Guidelines acknowledge that the uses of FRT are numerous and diverse, and that some of these uses may seriously violate the rights of data subjects. The objectives of the Guidelines are thus to protect human rights and uphold the rule of law by regulating the use of FRT. The Guidelines instruct the parties to Convention 108 to facilitate the development and use of FRT in their countries in a way that ensures the privacy rights of data subjects are upheld. Such facilitation is expected to strengthen human rights and fundamental freedoms by implementing the principles enshrined in the Convention in the context of FRTs.
Article 5 of the Convention provides for the protection of personal data that undergoes automatic processing, as is the case with facial recognition data. It stipulates that personal data must be obtained fairly and lawfully, and that the data so obtained must be adequate, relevant, and not excessive in relation to the purposes for which they are stored.
Article 6 provides for special categories of data. This category comprises personal data capable of revealing racial origin, political opinions, religious or other beliefs, as well as personal data concerning health or sexual life. Article 6 stipulates that such data must not be processed automatically unless national legislation provides appropriate safeguards; it also applies to data relating to criminal convictions. In the author’s opinion, Interpol’s use of facial recognition accords with Article 6: while Interpol makes use of automatic processing, it always carries out a manual process to verify the results of that automatic processing.
Article 9 provides for exceptions to Articles 5 and 6. It allows derogation from those Articles for the following purposes: protecting State security, public security or the monetary interests of the State, suppressing criminal offences, and safeguarding the data subject or the rights and freedoms of others. For example, the use of covert live facial recognition technologies by law enforcement agencies would be acceptable only if strictly necessary and proportionate to prevent imminent and substantial risks to public security that are documented in advance.
Public and Private Sector Uses of FRTs
The Guidelines are divided into four parts, covering both public sector and private sector uses of FRTs. They are as follows:
Guidelines for Legislators and Decision-Makers
The Guidelines state that legislation authorising extensive surveillance of individuals without proper safeguards can be found contrary to the right to respect for private life. The mass camera surveillance schemes implemented in China are a good example of such a contravention. The Guidelines further require that the different use cases of FRTs be categorised, and that a legal framework applicable to the processing of biometric data through facial recognition be established. It is interesting to note that, according to the Guidelines, consent should not, as a rule, be the legal ground for facial recognition performed by public authorities, or by private entities authorised to carry out tasks similar to those of public authorities. This may indicate that private companies should not be permitted to use facial recognition in uncontrolled environments, such as shopping centres, for advertising or private security purposes. Another concern addressed here relates to ‘affect recognition’ technologies, which purport to detect mental health conditions and analyse personality traits. The Guidelines hold that these should be prohibited, since they present significant risks in areas such as employment, education and insurance. Additionally, the Guidelines propose that the use of facial recognition for the sole purpose of ascertaining a person’s skin colour, religious or other belief, sex, racial or ethnic origin, age, health or social status should be banned.
Guidelines for Developers, Manufacturers and Service Providers
In this part, the Guidelines direct facial recognition software developers to develop accurate systems that do not exhibit bias based on race or gender. They further direct that due consideration be given to factors such as aging, lighting and the impact of face coverings, so as to ensure reliability.
Guidelines for Entities Using Facial Recognition Technologies
The term ‘entities’ refers to data controllers and data processors in both the public and private sectors. The Guidelines provide that entities using facial recognition technologies must be able to demonstrate that such use is strictly necessary and proportionate in the specific context of use, and that it does not interfere with the rights of the data subjects. The entities are also required to ensure the security of the data, as lapses in data security can have severe consequences for the individuals concerned.
Rights of Data Subjects
A data subject has the following rights with respect to their personal data:
Right to information;
Right of access;
Right to obtain knowledge of the reasoning;
Right to object; and
Right to rectification.
These rights can be restricted only when such restriction is provided by law with sufficient safeguards to protect the basic rights of the data subjects.
Conclusion
The European Commission’s White Paper on Artificial Intelligence also discusses the regulation of facial recognition technology and the use of artificial intelligence. Its views, particularly with respect to gender and race bias, are in line with the recommendations made by the Guidelines. Though the Guidelines are not a legally binding document, they remain the most comprehensive set of proposals for regulating facial recognition technology in Europe. In the author’s opinion, the adoption of these Guidelines as binding legislation in Europe and elsewhere would undeniably be a step towards protecting the fundamental rights and freedoms of people, and democracy, in light of the risks posed by facial recognition technology.
*Melita is a Researcher at IntellecTech Law and a law student at CHRIST (Deemed to be University), with a keen interest in IP and TMT law. She published her novel ‘Battle of the Spheres’ when she was 15 and is one of India’s youngest TEDx Speakers.