Explained | The use of facial recognition technology by the Delhi Police

When was FRT first introduced in Delhi? What are the concerns with using the technology on a large scale?

The story so far: Right to Information (RTI) responses received by the Internet Freedom Foundation, a digital rights organization based in New Delhi, reveal that the Delhi Police treats matches with more than 80% similarity generated by its facial recognition technology (FRT) system as positive results.

Why is Delhi Police using facial recognition technology?

Delhi Police first obtained FRT for the purpose of tracing and identifying missing children. According to RTI responses received from the Delhi Police, the procurement was authorized under a 2018 order of the Delhi High Court in Sadhan Haldar vs. NCT of Delhi. However, in 2018 itself, the Delhi Police submitted to the Delhi High Court that the accuracy of the technology it had acquired was only 2% and “not good”.

Things changed after multiple reports surfaced that the Delhi Police was using FRT to monitor the anti-CAA protests in 2019. In 2020, the Delhi Police stated in an RTI response that although it had obtained FRT as per the Sadhan Haldar direction, which related specifically to the search for missing children, it was using the technology for police investigations. This broadening of purpose is a clear instance of “function creep”, in which a technology or system gradually expands its scope beyond its original purpose to encompass and fulfill broader functions. According to available information, the Delhi Police has since used FRT for investigative purposes, specifically during the 2020 Northeast Delhi riots, the 2021 Red Fort violence and the 2022 Jahangirpuri riots.

What is facial recognition?

Facial recognition is an algorithm-based technology that creates a digital map of the face by identifying and mapping an individual’s facial features, which it then compares to the database it has access to. It can be used for two purposes: first, 1:1 verification of identity, where the face map is obtained in order to compare it with the person’s photograph in a database to authenticate their identity. For example, 1:1 verification is used to unlock phones, though it is increasingly being used to provide access to government benefits and schemes. Second, there is 1:n identification, where the face map is obtained from a photograph or video and then compared with the entire database to identify the person in the photograph or video. Law enforcement agencies such as the Delhi Police usually procure FRT for 1:n identification.
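To make the two modes concrete, here is a minimal, hypothetical Python sketch that compares numeric face maps (“embeddings”) using a cosine-similarity score: 1:1 verification checks a probe face against a single claimed record, while 1:n identification scores it against every record in a database. The embedding vectors, the database, the 0 to 100% score mapping and the 80% cut-off are all illustrative assumptions, not details of any actual FRT system.

```python
import numpy as np

# Hypothetical face "embeddings": fixed-length numeric face maps. A real FRT
# system would derive these with a trained model; random vectors stand in here.
rng = np.random.default_rng(seed=42)
database = {name: rng.normal(size=128) for name in ("record_a", "record_b", "record_c")}

def match_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face maps, rescaled to an illustrative 0-100% score."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float((cos + 1.0) / 2.0 * 100.0)

def verify_1_to_1(probe: np.ndarray, claimed_record: str, threshold: float = 80.0) -> bool:
    """1:1 verification: compare the probe face against only the claimed record."""
    return match_score(probe, database[claimed_record]) >= threshold

def identify_1_to_n(probe: np.ndarray) -> dict[str, float]:
    """1:n identification: score the probe face against every record in the database."""
    return {name: match_score(probe, emb) for name, emb in database.items()}

probe = database["record_b"] + rng.normal(scale=0.1, size=128)  # a noisy capture of record_b
print(verify_1_to_1(probe, "record_b"))   # phone-unlock style check against one record
print(identify_1_to_n(probe))             # scores against the whole database
```

In a real deployment the embeddings would come from a trained face-mapping model and the database would hold enrolled photographs rather than random vectors.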

For 1:n identification, FRT generates a probability or match score between the suspect who is to be identified and the available database of identified criminals. A list of possible matches is generated, ranked by the probability of each being the correct match, along with the corresponding scores. However, it is ultimately a human analyst who selects the final probable match from the list generated by the FRT. According to the Internet Freedom Foundation’s Project Panoptic, which tracks the spread of FRT in India, there are at least 124 government-sanctioned FRT projects in the country.
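As a rough illustration of that workflow, and not of any particular vendor’s software, the following sketch ranks the per-record scores from a 1:n search (such as the hypothetical identify_1_to_n above) into a shortlist for review; the final selection from that shortlist is left to the human analyst. The candidate identifiers and scores are invented for the example.

```python
# Rank the scores from a 1:n search and present a shortlist for a human analyst.
# Candidate identifiers and scores below are invented for illustration.
scores = {"record_0412": 86.5, "record_0077": 74.2, "record_1093": 69.8, "record_0256": 52.1}

def shortlist(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring candidates, best first."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

for rank, (candidate, score) in enumerate(shortlist(scores), start=1):
    print(f"{rank}. {candidate}: {score:.1f}%")
# The system only proposes candidates; the final "probable match" is a human decision.
```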

Why is the use of FRT harmful?

India has seen rapid deployment of FRT in recent years, both by the Union and state governments, without any law in place to regulate its use. The use of FRT presents two sets of issues: issues related to misidentification due to the inaccuracy of the technology, and issues related to mass surveillance due to misuse of the technology. Extensive research on the technology has revealed that its accuracy rates vary sharply based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. Cases of false positives can lead to bias against the individual who has been wrongly identified. In 2018, the American Civil Liberties Union revealed that Amazon’s facial recognition technology, Rekognition, incorrectly identified 28 members of Congress as people who had been arrested for a crime. Of the 28, a disproportionate number were people of color. Also in 2018, researchers Joy Buolamwini and Timnit Gebru found that facial recognition systems had higher error rates when identifying women and people of color, with the highest error rate when identifying women of color. The use of this technology by law enforcement authorities has already led to three people in the US being wrongfully arrested. On the other hand, cases of false negatives may result in the exclusion of individuals from essential schemes that use FRT as a means of providing access. An example of such exclusion is the failure of biometric-based authentication under Aadhaar, which has resulted in many people being excluded from essential government services and, in turn, has led to starvation deaths.

However, even if accurate, this technology can cause irreversible harm because it can be used as a tool to facilitate state-sponsored mass surveillance. India currently does not have a data protection law or any FRT-specific regulation to protect against misuse. In this legal vacuum, there are no safeguards to ensure that authorities use FRT only for the purposes for which they have been authorized, as the Delhi Police case shows. FRT can enable the constant surveillance of an individual, resulting in the violation of their fundamental right to privacy.

What did Delhi Police’s 2022 RTI responses reveal?

The RTI responses dated July 25, 2022 were shared by the Delhi Police after the Internet Freedom Foundation filed an appeal with the Central Information Commission, having been denied the information multiple times by the Delhi Police. In its response, the Delhi Police revealed that matches above 80% similarity are treated as positive results, while matches below 80% similarity are treated as false positives which require additional “corroborative evidence”. It is not clear why 80% was chosen as the threshold between positive and false positive. First, no justification is provided to support the Delhi Police’s claim that a match of more than 80% similarity is enough to assume that the result is correct. Second, classifying results below 80% as false positives rather than as negatives shows that the Delhi Police may still investigate results below 80%. Thus, people who share similar facial features, such as members of extended families or communities, could end up being targeted. This could result in the targeting of communities that have historically been over-policed and have faced discrimination at the hands of law enforcement authorities.
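For illustration only, the sketch below encodes the decision rule as described in the RTI responses: matches above the 80% similarity threshold are treated as positive results, while those below it are labeled “false positive” but kept open for further corroboration rather than being discarded. The data structure and field names are assumptions made for the example, not a description of the Delhi Police’s actual system.

```python
from dataclasses import dataclass

THRESHOLD = 80.0  # per cent similarity, the cut-off cited in the RTI responses

@dataclass
class Match:
    candidate_id: str   # illustrative identifier, not a real record format
    similarity: float   # 0-100% match score produced by the FRT system

def classify(match: Match) -> str:
    """Apply the decision rule as described in the RTI responses."""
    if match.similarity > THRESHOLD:
        return "positive"  # treated as a correct identification
    return "false positive (requires additional corroborative evidence)"  # not discarded

for m in (Match("record_0412", 86.5), Match("record_0077", 74.2)):
    print(m.candidate_id, "->", classify(m))
```

Notably, the rule as described never yields a plain negative: every result either counts as a positive or remains a lead that can be pursued with additional evidence.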

The responses also mention that the Delhi Police is comparing the photographs and videos it obtains with photographs collected under Sections 3 and 4 of the Identification of Prisoners Act, 1920, which has now been replaced by the Criminal Procedure (Identification) Act, 2022. This law allows wider categories of data to be collected from a wider section of people, i.e. “convicts and other persons for the purposes of identification and investigation of criminal matters”. It is feared that the Act will lead to excessive collection of personal data in violation of internationally recognized best practices for data collection and processing. This revelation raises multiple concerns, as the use of facial recognition can lead to wrongful arrests and mass surveillance resulting in privacy violations. Delhi is not the only city where such surveillance is underway. Several cities, including Kolkata, Bangalore, Hyderabad, Ahmedabad and Lucknow, are implementing ‘Safe City’ programs that install surveillance infrastructure to reduce gender-based violence, in the absence of regulatory legal frameworks to act as safeguards.

Anushka Jain is an Associate Policy Advisor and Gyan Prakash Tripathi is a Policy Trainee at the Internet Freedom Foundation, New Delhi

THE GIST

RTI responses received by the Internet Freedom Foundation reveal that the Delhi Police treats matches with more than 80% similarity generated by its facial recognition technology system as positive results. Facial recognition is an algorithm-based technology that creates a digital map of the face by identifying and mapping an individual’s facial features, which it then compares to a database it has access to.

Delhi Police first obtained FRT for the purpose of tracing and identifying missing children as directed by the Delhi High Court in Sadhan Haldar vs. NCT of Delhi.

Extensive research on FRT has revealed that its accuracy rates vary sharply based on race and gender. This can result in a false positive, where a person is misidentified as someone else, or a false negative, where a person is not verified as themselves. The technology can also be used as a tool to facilitate state-sponsored mass surveillance.
