Religion Unplugged

Special Report: India Using AI To Target Religious And Ethnic Minorities

NEW DELHI — Authorities in the Ambala district of Haryana, some 130 miles (210 kilometers) from India’s capital, New Delhi, announced in late February that they had begun procedures to revoke the passports and visas of people labeled “troublemakers” for breaking barricades or creating disruptions at the Punjab border during the recent farmers’ protests.

Officials confirmed that those individuals were identified using facial recognition technology (FRT), software that analyzes footage captured by CCTV cameras and drones. This was not the first time the technology had been used to identify people taking part in protests against the current Indian government, headed by Hindu nationalist Narendra Modi.

A similar pattern was seen in 2020, when clashes erupted in New Delhi and authorities said they had used facial recognition technology to identify and apprehend numerous individuals.

READ: Why India’s New Citizenship Law Excludes Muslim Migrants

The majority of those facing charges were Muslims, prompting criticism from human rights organizations and tech experts of India’s use of artificial intelligence to target impoverished, minority and marginalized communities in Delhi and other parts of the country.

According to a report released by the Vidhi Centre for Legal Policy, a think tank that works toward better lawmaking in India, FRT is defined as a technology that uses machine learning or other techniques to identify faces.

These techniques usually require large troves of facial images compiled into what is called a training database, which the software uses to “learn” how to match or identify faces. While the definition may sound straightforward, tech experts said there has been scant debate about the technology’s ethical implications or about who is using it.
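As a rough illustration of the mechanics the Vidhi report describes, the Python sketch below reduces each enrolled face to a numeric embedding and matches a probe face against a database by similarity score. The names, random vectors and 0.8 threshold are this article’s placeholder assumptions, not details of any system deployed in India:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score how alike two face embeddings are (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "training database": in a real system, each enrolled photo is
# run through a trained neural network that outputs a vector (embedding).
# Random vectors are used here purely as placeholders.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ("person_a", "person_b")}

def identify(probe: np.ndarray, threshold: float = 0.8) -> str | None:
    """Return the enrolled identity whose embedding best matches the
    probe face, but only if the score clears the threshold."""
    best_name, best_score = None, threshold
    for name, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means "no confident match"
```

With random placeholder vectors, no probe will reliably clear the threshold; the point is the structure. Everything hinges on a learned embedding and a chosen cutoff, and both can fail more often on faces underrepresented in the training database.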

Shivangi Narayan, a member of the Algorithmic Governance Research Network, told Religion Unplugged that when AI-backed technologies are handed to police personnel, it becomes imperative to ask whom the police have “traditionally” targeted.

“It's not like the elites who are being stopped by the police,” she said. “It's those marginalized people who are already on the streets, doing their job who are harassed and are caught.”

She added that the problem is not with using cameras to support police work but with the biases that law enforcement in India already holds, which have become very evident. The use of this technology, Narayan said, will only deepen those biases and further repress minorities.

Problem with inaccurate readings

The problem with FRT and other AI-backed technologies, such as the Crime and Criminal Tracking Network and Systems (CCTNS), is further exacerbated by inaccuracies in their results.

In response to a request filed by the Internet Freedom Foundation (IFF), the Delhi Police said that when using FRT, it treats all results above 80 percent similarity as positive identifications. The response also noted that accuracy depends on lighting conditions, distance and facial angles. It remains unclear, however, why this benchmark was chosen to separate positive matches from false ones.
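In code terms, the disclosed rule amounts to a single hardcoded cutoff. The sketch below is a minimal illustration, assuming similarity is scored between 0 and 1; the example scores are invented:

```python
MATCH_CUTOFF = 0.80  # the benchmark the Delhi Police disclosed to the IFF

def classify(similarity: float) -> str:
    """Apply the disclosed rule: any score above 80 percent similarity is
    treated as a positive identification. The rule itself takes no account
    of lighting, distance or facial angle, even though those factors
    change the scores a system produces."""
    return "positive" if similarity > MATCH_CUTOFF else "no match"

# Invented scores: the same person filmed in good and poor light can land
# on opposite sides of the cutoff.
print(classify(0.84))  # -> positive
print(classify(0.77))  # -> no match
```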

In a March 2020 statement, Union Home Minister Amit Shah said that “over 1,900 people have been identified through FRT for inciting violence in the national capital during Delhi riots.”

Disha Verma, associate policy counsel at IFF, said that 20 percent inaccuracy in identifying criminals is a huge concern and that the Delhi Police should be worried about the reliability of its results.

“An inaccuracy of 20 percent means that out of every 10 people that the police arrests through FRT, two misidentified innocent people are being implicated and presumed guilty,” Verma said. “Even if you ignore the rights violations here, this is not fruitful for the authorities either as the state may end up spending their time, capacity, and resources investigating people who have not committed any crime simply due to a faulty technology.”
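Read literally, Verma’s figure is a direct proportion: one in five positive matches is wrong. A quick check of that arithmetic, under the assumption (hers, as quoted) that the 20 percent rate carries over one-to-one to arrests:

```python
error_rate = 0.20  # 1 minus the 80 percent benchmark, as Verma reads it

# Applied at two scales: her 10-arrest example, and the roughly 1,900
# identifications Amit Shah cited in March 2020. Whether the rate maps
# directly onto arrests is an assumption, not a measured fact.
for identified in (10, 1900):
    print(f"{identified} identifications -> ~{identified * error_rate:.0f} misidentified")
```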

She added that the absence of any law on FRT makes it more dangerous.

“There is no rule, policy, guideline, standard operating procedure and law regarding the use of FRT in India,” Verma said.

Violation of human rights

Yash Giri, a criminal lawyer, said that while FRT has drawn domestic and international debate over its potential benefits, it also poses risks to basic human and fundamental rights such as privacy, equality, free speech and freedom of movement.

“India lacks an AI legislation, but it does possess a directive from the government think tank NITI Aayog, emphasizing that AI systems should refrain from discrimination based on factors such as religion, race, caste, sex, descent, place of birth or residence,” he said. “Additionally, there is an insistence on conducting audits to guarantee their impartiality and absence of bias.”

Commenting on the use of AI technologies in government surveillance, Verma said there needs to be more transparency on the part of India’s government.

“While union and state governments seem to be using AI as snake oil or a marketing gimmick, or to emerge as a global leader in AI deployment at some point, it’s necessary that the details of any AI systems they rely on for governance or policing be made public,” she said. “Governments must be transparent about the exact technology they are using and how it is arriving at decisions, what the inaccuracy rates and privacy risks are, and there must be proper multi-stakeholder consultations before jumping into integrating AI in the policing system.”


Rishabh Jain is an independent journalist based in Delhi. Follow him at @ThisIsRjain