
Law Enforcement & Facial Recognition Systems: The Iron Hand Against Civil Liberty

  • Writer: Tanya Varshney
  • Sep 14, 2021
  • 7 min read

Authored by Milind Yadav*

Image Source: ISS Africa


An automated facial recognition system (“AFRS”) relies on complex algorithms that capture intricate datasets of a human face and convert them into codes, which are then stored in a central system for remote access and cross-referencing. Despite being relatively new, the technology has seen wide application in security and surveillance, and from welfare schemes to medical treatment, it has found relevance across society.
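To make the capture-encode-cross-reference pipeline concrete, the matching step can be sketched as follows. This is a deliberately simplified illustration, not any agency’s actual system: real AFRS encode a face into a high-dimensional numeric template, and the toy vectors, threshold, and record names below are all invented for demonstration.

```python
import math

def euclidean_distance(a, b):
    """Distance between two face templates (lists of floats)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, gallery, threshold=0.6):
    """Return IDs of stored templates within `threshold` of the probe.

    A lower threshold means fewer, more confident matches; a higher one
    means more matches but more risk of misidentification.
    """
    return [pid for pid, tmpl in gallery.items()
            if euclidean_distance(probe, tmpl) <= threshold]

# Hypothetical central store of templates (real systems use
# 128- to 512-dimensional embeddings, not 3-dimensional toys).
gallery = {"record_A": [0.1, 0.9, 0.3], "record_B": [0.8, 0.2, 0.5]}

print(match([0.12, 0.88, 0.31], gallery))  # → ['record_A']
```

The threshold parameter is where accuracy trade-offs enter: the same stored template can yield or withhold a “match” depending on how the operator tunes it, which is one reason unregulated deployment is contentious.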

That said, with the advent of any new technology come new risks, often in the form of cyber-crimes. Notably, AFRS is prominently used by law enforcement agencies across the world. In the United States, for instance, law enforcement agencies use the technology for mass surveillance, which reportedly promotes racial discrimination and against which Amnesty International has launched a campaign. While the technology is also gaining popularity in the Indian enforcement regime, the lack of safeguards and of a dedicated regulatory framework poses risks to the civil liberties of citizens. This article discusses the pitfalls and dangers of AFRS technology vis-à-vis the interests of society and democracy, and the ‘balanced’ approach that is the need of the hour.


Status quo in India


Law enforcement agencies in India have started using AFRS in pilot projects to maintain law and order. For instance, the Hyderabad Police launched “Operation Chabutra”, in which the police randomly stopped citizens on the road to collect their photographs, fingerprints, and Aadhaar details without due consent. This may be a consequence of the lack of an adequate legislative framework to regulate the use of AFRS, especially for surveillance purposes. The officials in this case reportedly relied on the Identification of Prisoners Act, 1920, but that legislation allows the collection of fingerprints and photographs only from persons arrested for or convicted of offences punishable with rigorous imprisonment of at least one year, not from law-abiding citizens at large.


Similarly, the Delhi Police has been using AFRS to identify people involved in recent political protests. The technology was also used at a Prime Minister-led political rally, where attendees’ faces were matched against police records. It is unclear whether the data recorded during the live stream remains stored with the authorities for future use. This use of the technology is based on the Delhi High Court’s order in Sadhan Haldar v. State of NCT[1], which allowed the police to use AFRS to find minors reported missing for a long time in the NCT region. However, the police extended its application to other surveillance purposes as well, including screening and identifying people at public places and peaceful demonstrations.


The above-mentioned projects are just a few of the many officially recognised ones, and it is likely that some AFRS-based surveillance operations remain undisclosed. The legislative framework certainly does not envisage the unrestricted use of such technology. Despite this, the unregulated application of AFRS is in full force, relying on the provisions of the Identification of Prisoners Act, 1920, which has no rational nexus with the intent of the enforcement agencies.


Legality of AFRS and Unauthorised Surveillance


Continued unregulated use of AFRS will only deepen the distrust between law enforcement agencies and society. Giving such advanced technology to law enforcement agencies, who may often be politically manipulated, tends to cause a chilling effect on the freedom of speech and the freedom of society at large. The use of AFRS against political protestors is one piece of evidence that the technology can be used to curb dissent, albeit indirectly. It is unclear how such use is constitutionally valid when freedom of speech and expression is protected as a fundamental right under Article 19(1)(a) and the Supreme Court has recognised dissent as a ‘symbol of vibrant democracy’.


This is problematic also because the technology is evidently inaccurate, and according to several reports its accuracy varies across technologies and demographics. In the United States, for instance, AFRS has been criticised for being racially biased: a 2018 MIT study found that leading facial recognition tools are significantly more prone to misidentify people of colour. Arguably, similar biases may be observed in India due to religious, caste, and regional bias. The National Crime Records Bureau has reported that Muslims, Dalits, and Adivasis make up half of the prison population despite constituting a relatively small part of the total population. Since AFRS is based on machine learning that relies on sample data for predictive decision-making, wide-scale use of this technology may inevitably misidentify marginalised sections of society.
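The disparity described above can be illustrated with a toy calculation. The numbers below are entirely hypothetical and are meant only to show how the false match rate, that is, the share of non-matching faces wrongly flagged as matches, can differ between demographic groups when a model performs unevenly across them.

```python
def false_match_rate(trials):
    """Compute the false match rate from a list of trials.

    Each trial is (predicted_match, actually_same_person) as booleans.
    The rate is false matches divided by the number of genuine non-matches.
    """
    false_matches = sum(1 for pred, truth in trials if pred and not truth)
    non_matches = sum(1 for _, truth in trials if not truth)
    return false_matches / non_matches if non_matches else 0.0

# Hypothetical outcomes for two groups under the same system:
# 100 genuine non-matches each, but the model errs more often on group B.
group_a = [(False, False)] * 98 + [(True, False)] * 2    # 2 wrong flags
group_b = [(False, False)] * 90 + [(True, False)] * 10   # 10 wrong flags

print(false_match_rate(group_a))  # → 0.02
print(false_match_rate(group_b))  # → 0.1
```

A fivefold gap of this kind, applied at the scale of mass surveillance, is precisely how a statistically skewed model translates into disproportionately frequent misidentification of particular communities.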


Another risk posed by AFRS is the threat to people’s privacy. For instance, Uttar Pradesh recently implemented a project in which AFRS reads the facial expressions of women for signs of distress and reports to the nearest police station automatically, without any consent mechanism. Such projects show flagrant disregard for privacy rights owing to the lack of consent, control, and knowledge over the personal data being collected by such agencies. Notably, the Puttaswamy[2] judgment established privacy as a fundamental right, which the State can infringe only through an action that is backed by law, serves a legitimate purpose, and is proportionate to that purpose. It is unclear how these thresholds are being met under the current use. Therefore, the use of AFRS under the current regime is not duly supported by the legislative framework and is arguably unconstitutional.


Way Forward


To limit the severe damage that law enforcement use of AFRS inflicts on society, several cities in the United States, such as San Francisco, Oakland, and Somerville, have banned enforcement agencies from using the technology, citing that it is invasive, inaccurate, and unregulated. The technology also promotes racial biases and invades individuals’ privacy under the garb of public safety, promoting an Orwellian society. [Also see here our article analysing the European Union’s guidelines on facial recognition technology]


In a recent PIL before the Delhi High Court, the petitioners argued that three surveillance systems, namely NATGRID, NETRA, and CMS, are being used without any legal authority. Prashant Bhushan, on behalf of the petitioners, argued that the Government’s actions are not proportionate to the citizens’ right to privacy, as established by the Puttaswamy judgment. While the matter is still pending, it clearly reflects that AFRS and related technologies have already penetrated the State surveillance mechanism and that urgent action against such unregulated use is needed. In the author’s opinion, a complete ban may not be the ideal solution, considering the benefits of the technology in improving the efficiency of the State in maintaining public safety and national security. However, given that the technology is already widely used for unauthorised surveillance by enforcement agencies, and that a comprehensive legislative framework would take considerable time, a temporary ban appears to be the only immediate way to protect the privacy rights of the people. This would ensure that AFRS is not arbitrarily used by the State against its own citizens and would also give the legislature the much-needed time to establish a legal framework regulating the use of AFRS, one that balances the legitimate use of the technology for public safety against the right to privacy and the risk of undue surveillance.


A framework that India may reflect upon is the United States’ Facial Recognition Technology Warrant Act of 2019, a federal bill that requires federal law enforcement to obtain a judge’s permission before using the technology, and only for a limited period. The judge allowing such use is required to report to the U.S. court administrators so that its usage can be tracked. This could serve as a blueprint for the Indian legislature to regulate the use of the technology instead of banning it entirely or leaving it unregulated as a long-term strategy.


Another significant issue arises with the admissibility of evidence based on AFRS. Section 65B of the Indian Evidence Act, 1872 deems ‘electronic records’[3] to be documents[4] and makes them admissible without further proof. It can be inferred that results derived from AFRS may be admissible as evidence. This is problematic because it rests solely on the Courts’ awareness of the scope of inaccuracy in the technology. A further issue with evidence jurisprudence in India is that it allows the admissibility of evidence even if it is collected illegally; in certain cases, Courts have refused to exclude evidence solely on the ground that it was obtained illegally. Thus, evidence obtained from the unauthorised use of AFRS for surveillance could potentially be admissible before the Courts. The law therefore needs to ensure that such use is treated not just as inadmissible but also as illegal, to protect the civil liberties of society.


Conclusion


Facial recognition technology, despite being useful in limited circumstances, can potentially give unlimited power to law enforcement agencies. Certainly, a democratic State cannot trade away the privacy rights and freedoms of society under the garb of maintaining law and order, without adequate checks and balances. Such unregulated use is against the spirit of democracy and would lead to a dystopian situation like that in China. India needs to ban the use of the technology as an immediate measure and should work on building a framework that standardises the accuracy and reliability of the system and clarifies the scope of its application. It remains to be seen whether the judicial system will step in to protect fundamental rights or whether the legislature will take the initiative itself.

[1] W.P.(CRL) 1560/2017

[2] (2017) 10 SCC 1

[3] Section 65B(1) of the Indian Evidence Act, 1872 identifies an “electronic record” as any information stored, recorded, or copied in optical or magnetic media produced by a computer.

[4] The Interpretation Clause (Section 3) of the Indian Evidence Act, 1872 identifies a “document” as any matter recorded on any substance by means of letters, figures, marks, or their combination.


*Milind is a penultimate year undergraduate student, studying at Jindal Global Law School.


Views of the author are personal and do not reflect the views of IntellecTech Law or any members associated with it.


