Can Artificial Intelligence Predict — And Even Stop — Sin?
(ANALYSIS) Imagine a world where crimes are stopped before they take place. Science fiction has long imagined such a world, most famously in the 2002 film “Minority Report,” in which society can predict criminal acts, allowing authorities to intervene in advance.
Today, with the advent of artificial intelligence and predictive analytics, that once-distant dystopian fantasy is inching closer to reality — but with it comes a tangle of ethical, philosophical and practical dilemmas that make us question whether technology can truly stop sin.
The concept of pre-crime isn’t purely fictional. Across the globe, police departments and security agencies are experimenting with predictive algorithms that analyze past crimes, social patterns and online behavior to forecast who will commit crimes and where. A recent mass shooting in Canada renewed debate over whether pre-crime tools could thwart such attacks in the future.
READ: What’s The Most Sinful State In America?
On paper, it sounds like a public safety dream: Fewer crimes and less suffering. But the question is whether predicting wrongdoing is the same as preventing it — and whether it’s morally just to intervene based on what someone might do.
Artificial intelligence has a unique ability: Pattern recognition at a scale no human mind could match. Algorithms can flag patterns of behavior that may precede wrongdoing. This gives rise to a tantalizing notion: If AI can identify potential wrongdoers, can it prevent sin itself? In a world plagued by violence and moral lapses, the prospect raises deeper questions about free will and freedom.
Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff’s department there compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.
In 2021, four residents sued the county. In 2024, they reached a settlement in which the sheriff’s office admitted it had violated residents’ constitutional rights to privacy and equal treatment under the law. As a result, the program was discontinued.
The issue isn’t limited to Florida. In 2020, Chicago decommissioned its “Strategic Subject List,” a system where police used analytics to predict which prior offenders were most likely to commit new crimes. In 2021, the Los Angeles Police Department also discontinued its use of PredPol, a computer program designed to forecast crime hot spots.
Setting aside AI’s accuracy, there’s a philosophical question about sin. In the Judeo-Christian tradition, sin is not merely an action but a choice, one tied to conscience (or the lack of one) and impulse (a lack of self-control).
We live in a broken world where God gives us free will. You cannot fully erase moral error with computer code. You can certainly reduce opportunities for harm, but you cannot engineer a world without any moral failings without erasing freedom and privacy.
Supporters of pre-crime systems argue that they would reduce harm, but such systems would also breed a surveillance state, reshaping society so that fear, rather than conscience, governs behavior. In trying to stop sin, we risk creating a society that mistakes constraint for virtue.
The broader lesson here could be that technology is a wonderful tool, but not a panacea for our society to regain morality. AI can help manage consequences and identify risks, but it can’t change human nature.
Clemente Lisi serves as executive editor of Religion Unplugged.