The Moral And Ethical Challenges Posed By Artificial Intelligence


CAMBRIDGE, Mass. — Machines all around us are becoming more intelligent. Like all technological tools, they can be used for good or evil. 

How can artificial intelligence change humanity for the better? Can we rely on companies like Meta, Amazon and Google to do the right thing and put society ahead of profits? Can politicians and extremist groups be stopped when it comes to online disinformation campaigns?

These were some of the questions tackled during a two-day gathering at the Massachusetts Institute of Technology. The annual EmTech Digital meetup dedicated to AI — the signature conference put on by the MIT Technology Review — explored how to harness the power of artificial intelligence. 

READ: AI, The Rise Of Religious ‘Nones’ And The Artifice Of Intelligence

The speakers that took to the stage on May 22-23 addressed a series of issues surrounding AI, including how it impacts a number of areas such as communications, entertainment, healthcare, politics, climate change and the military. 

Speakers also discussed the numerous potential pitfalls in a world where AI is becoming more ubiquitous. Attendees asked questions regarding AI’s accuracy, efficiency and bias, as well as concerns around privacy as humans interact with these smart machines more frequently in the coming years. 

The need for human flourishing

Many of the world’s religions place an emphasis on human dignity and flourishing. The right of a person to be valued and respected is at the core of Judeo-Christian beliefs. There is the belief that all humans are made in God’s image. Following a long tradition that dates back to Aristotle, Catholic moral tradition, for example, teaches that humans are by nature social and communal beings.  

Most Christians in the U.S. don’t see a moral or spiritual benefit to artificial intelligence, the American Bible Society said earlier this month in the latest release from its 2024 State of the Bible.

A majority — 68 percent — said they don’t believe AI could be used to enhance their spiritual practices and thus promote spiritual health; 58 percent don’t believe the technology could aid in their moral reasoning, and 57 percent don’t believe AI can produce a sermon as well-written as a pastor’s original work. In other words, people of faith find that AI can’t help them live a good life in the Aristotelian tradition. 

The survey also found that 37 percent of respondents said they would view it unfavorably if a pastor used AI to prepare sermons. 

Last month, the Catholic advocacy group Catholic Answers released an AI priest called “Father Justin” — but quickly defrocked the chatbot after it repeatedly claimed it was a real member of the clergy. 

Given those bleak numbers and recent actions, can the belief that we are all made in the image of God survive in a future world dominated by AI?  

Asu Ozdaglar, who chairs MIT’s Department of Electrical Engineering and Computer Science, addressed human flourishing and how AI could be a benefit. 

“AI promises augmenting human creativity, making work more meaningful, entertainment more enriching and invention more dynamic,” she said. “In short, expanding human flourishing.” 

However, Ozdaglar added that history is “full of examples where promising technologies did not always deliver … or had unforeseen consequences.” 

As examples, she pointed to the internet, noting how filter bubbles have divided citizens and how viral content can spread misinformation. Some of these problems have already arisen among journalists with the increased use of AI tools like ChatGPT. 

In her talk, Ozdaglar said AI can “diminish human flourishing” if it is used to replace humans in areas such as making “key decisions for shaping their lives.” 

The two-day EmTech Digital conference was held in May at MIT’s campus. (Photo by Clemente Lisi)

Stopping online disinformation  

Nick Clegg, a former British politician and president of global affairs at Meta, led a much-anticipated session on elections and disinformation campaigns. During his time on stage, Clegg mounted a defense of how Meta’s products, most notably Facebook, handle disinformation on their platforms. 

“Just because I see it, doesn’t mean I believe it,” Clegg said when asked about misleading political posts. 

This year alone, there will be more than 40 national elections taking place in countries such as the U.S., U.K. and India, representing nearly half of the world's population. 

For more than two decades following 9/11, jihadists were the symbol of extremism, and they have continued to use technology — now including AI — to spread propaganda and threats. Religious extremists — from ultranationalist Israeli politicians to Hindu nationalists — have also grown their online footprint. The affordability and accessibility of generative AI, for example, has lowered the barrier to entry for disinformation campaigns.

But Clegg defended Facebook’s fact-checking system — one augmented with AI — which uses algorithms to stop posts featuring misinformation from spreading. Nonetheless, Clegg said recent research showed that “the link between social media and voter behavior is a lot weaker” than previously thought.  

This comes after experts said the 2016 and 2020 U.S. presidential elections were heavily influenced by social media posts and Russian-led disinformation campaigns. Meta products, which also include Instagram and WhatsApp, serve some four billion users worldwide. 

“Everybody’s shouting at the same phenomena from their own political vantage point,” Clegg said. 

Meta has previously faced criticism over its content moderation policies. Elections this year in the U.S. and India, the world’s largest democracy, have brought these issues back to the forefront. The use of AI – especially when it comes to audio and video posts – makes it an even more serious matter. 

Many voters, including faith-based ones, are often susceptible to AI-generated content that mimics news content and is meant to deceive. Disinformation has not only eroded the public’s trust, it is also used by politicians as a campaign tool. In India, for example, Prime Minister Narendra Modi, who is running for re-election, has been accused of using social media to spread hate speech against non-Hindus. 

Mounir Ibrahim, a vice president of strategic initiatives at Truepic, a technology company specializing in image provenance and authenticity, said the number of fake AI-generated images spread online has never been greater. 

These deepfakes require forensic analysis, but many newsrooms in India and other parts of the world are hard-pressed to find or afford detection tools needed to highlight fakes. Oftentimes, these fake video clips that appear online make it into news coverage. Some newsrooms are not leaving it up to the social media companies to police themselves. Nieman Lab reported this week that newsrooms across India have been trying to build out a process for deepfake detection.

Last year alone, Ibrahim noted in his talk, there were 15 billion fake images created using AI — more than every photo ever taken in the history of photography. 

“We live in a zero trust world,” he added. “We operate in this world every day.”  

But Clegg said AI is being used to aid fact-checkers, who work across 160 languages, in stopping extremists.  

“We will not allow ads or organic posts misleading people on where to vote or how to vote,” Clegg said, adding that Facebook’s systems are “pretty sophisticated, but not perfect.” 

As for upcoming national elections like the ones in India and the U.S., Clegg said recent voting in Indonesia, Taiwan and Bangladesh did not bring the problems with AI-generated content that many had first thought possible. 

“The interesting thing so far — I stress, so far — is not how much but how little AI-generated content,” he said, adding that disinformation on Facebook is “really not happening on … a volume or a systemic level.” 

Clegg, who is in constant talks with lawmakers and tech leaders, said AI can be a solution for Meta engineers and development teams to stop disinformation.  

“AI is a sword as well as a shield in all this,” he said.


Clemente Lisi is the executive editor of Religion Unplugged. He previously served as deputy head of news at the New York Daily News and a longtime reporter at The New York Post. Follow him on X @ClementeLisi.