Do the Ethical Dangers of AI Outweigh the Benefits?

by Yonason Goldson

Always do right; you’ll gratify some and astonish the rest. Of course, it’s rarely that simple. Despite our best intentions, the decisions we face often set our moral compass spinning in all directions. At every crossroads, we find self-interest and equivocation clouding our judgment and weakening our resolve to choose good over evil.

From Socrates to Immanuel Kant to Stephen L. Carter, moral philosophers have grappled with finding an absolute definition for good to guide us in making better choices. Perhaps the most widely debated is Utilitarianism:

The doctrine that an action is defined as right because it provides the greatest benefit to the greatest number of people.

Superficially, this sounds like an ideal definition. But it presupposes that we have a universal metric for defining benefit. It also fails to address the morality of imposing harm on the few in order to provide felicity for the many. If we could calculate that humiliating or torturing an individual would yield more amusement for the group than pain to the victim, classic utilitarianism not only allows the many to afflict the individual but morally obligates them to do so.

Indeed, superficiality may be the most relevant factor as we attempt to evaluate whether the rapid advances of artificial intelligence are good, neutral, or truly diabolical.

AND I LOOKED INTO THE FUTURE…
When used with discipline and discernment, technology enhances our productivity, our creativity, and the quality of our lives. When used irresponsibly, however, it can steal our soul. Caught in a battle between costs and benefits, what is the ethical approach for evaluating and regulating the ways in which AI infiltrates our lives and our work?

The lucid prose of ChatGPT leaves no doubt that we have entered a brave new world. The composition software performs better research, writes more articulately, and reasons more clearly than many of us do.

It is encouraging how classroom teachers have already begun implementing creative techniques for using the new processing tool. Some grade students on their collected research and their own notes before permitting them to feed the results into their computers to produce a final product. Others let students begin with AI-generated essays, then instruct them to validate the data and improve the prose.

Writers of articles and scholarly papers claim they can triple or quadruple their output by letting ChatGPT produce first drafts. Used thus, it’s hard to argue against the measurable benefits of AI.

LOOK INTO THE ABYSS
Ethics lives at the brink of the slippery slope. You might remember from high school the bold warning at the front of every copy of Cliffs Notes: These notes are not a substitute for the text itself . . . students who attempt to use them in this way are denying themselves the very education that they are presumably giving their most vital years to achieve.

Did you pay attention? I know I didn’t. And I went on to become an English major.

The problem with technology is that it easily becomes a crutch. And the problem with crutches is that once we start using them, we become dependent on them. Short-term gains for the sake of efficiency may lead to a long-term decline into learned helplessness.

If there’s one lesson we should take to heart, it’s the need to stop reactive thinking and begin developing a proactive approach to technology and culture. Does anyone remember the shoe-bomber who, post-9/11, attempted to board a passenger plane with explosives in his work boots? Thanks to him, we’re still required to remove our shoes going through airport security two decades later, despite little evidence that doing so makes us safer. We need to look forward, not just back.

History is a great teacher, but if we want to confront the challenges of the future we have to build on the past, not merely react to it. Artificial intelligence can help us only if we make sure that it serves us, and that we don’t start serving it.

Rabbi Yonason Goldson works with business leaders to build a culture of ethics, setting higher standards to limit liability while earning loyalty and trust. He’s host of the weekly Grappling with the Gray podcast and co-host of “The Rabbi and the Shrink: Everyday Ethics Unscripted.”
Visit him at ethicalimperatives.com.