Automation Column

AI: Good or Bad?

Klaus M. Blache | October 1, 2023

The human view of a future with artificial intelligence is a mixed bag with a healthy dose of skepticism.

Artificial intelligence (AI) is everywhere you look—internet, financial, transportation, agriculture, research, fashion, health care, writing, and manufacturing.

When I was writing a recent LinkedIn post, the first prompt was, “Let AI help you with the first draft.” (I turned it down).

Some people are excited to interact with AI; others are fearful, or at least concerned. According to a Pew Research Center survey (“How Americans think about AI”), “Americans lean toward concern over excitement when it comes to the increased use of AI in daily life.” Of the respondents, 37% were more concerned than excited, 18% were more excited than concerned, and 45% were equally concerned and excited. The top ten reasons Pew respondents gave for supporting increased use of AI were:

• Makes life, society better, 31%
• Saves time, more efficient, 13%
• Inevitable progress, is the future, 10%
• Handles mundane, tedious tasks, 7%
• Helps with work/labor, 6%
• AI is interesting, exciting, 6%
• Helps humans with difficult/dangerous tasks, 6%
• More accurate than humans, 4%
• Helps elderly/disabled, 4%
• Personal anecdotes, 2%

Philipp Skogstad, CEO of Mercedes-Benz R&D North America, introduced one of the first applications of generative AI (using ChatGPT) in the automotive sector, to power voice assistants in a beta program available on more than 900,000 vehicles. He stated, “People want to drive change, but they don’t want to be changed. So, the key here is to let people drive this transformation and give them access to generative AI so they can play with it themselves” (“Gen AI in high gear: Mercedes-Benz leverages the power of ChatGPT,” McKinsey).

From a recent Monmouth Univ. poll: “only 9% of Americans believe computer scientists’ ability to develop AI would do more good than harm to society. The remainder are divided between saying AI would do equal amounts of harm and good (46%) or that it would actually do more harm to society overall (41%). Nearly three of four (73%) Americans feel that machines with the ability to think for themselves would hurt jobs and the economy. Also, a majority (56%) say that artificially intelligent machines would hurt humans’ overall quality of life.

“These results are basically the same as eight years ago. However, existential fears about humanity’s relationship with artificial intelligence have increased. A majority (55%) of Americans are now worried at least somewhat that artificially intelligent machines could one day pose a risk to the human race’s existence” (“Artificial Intelligence Use Prompts Concerns,” Monmouth Univ. Polling Institute).

Artificial intelligence is already all around us: ChatGPT, the Alexa and Siri assistants, Netflix suggesting movies to watch, smart homes, facial recognition, online reservation systems and, of course, gaming.

On the positive side, businesses like AI since it can:

• work 24/7 non-stop, be more productive
• be faster and manage large volumes of data
• be more accurate and make fewer errors, provided its algorithms are correct
• create more work enthusiasm/interest
• stimulate more innovation
• provide enormous revenue streams.

On the negative side, AI can:

• be wrong (actually often in these early stages)
• result in errors of greater consequence when algorithms are incorrect
• optimize decisions in ways that are not always friendly or ethical
• cause employment reduction and labor issues
• be biased, cause mistrust
• spread disinformation.

A 2021 World Economic Forum study of 19,504 adults in 28 countries showed that emerging countries had a more positive outlook on AI than high-income countries. It also noted that 60% of adults expect that products and services using AI will profoundly change their daily lives in the next three to five years. The areas expected to improve because of increased use of AI were (global country average):

• Education/learning new things, 77%
• Entertainment, 77%
• Transportation, 74%
• Home, 73%
• Shopping, 70%
• Safety, 69%
• Environment, 62%
• Food/nutrition, 61%
• Income, 53%
• Personal and family relationships, 50%
• Employment, 47%
• Cost of living, 42%
• Freedom of rights, 37%

A 2019 study showed that “people are most likely to say they are concerned (32%), curious (30%), and hopeful (27%)” about artificial intelligence. According to the same study, “24% of respondents said AI will make our lives better, 41% of respondents think AI will make our lives both better and worse and only 10% of respondents think AI will only make our lives worse” (“We Asked People Around the World How They Feel About Artificial Intelligence. Here’s What We Learned,” Mozilla Foundation).

Roman Yampolskiy, a computer scientist at the Univ. of Louisville, stated in an AI research report:

“Less intelligent agents (people) can’t permanently control more intelligent agents (artificial superintelligences). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs, it is because no such design is possible, it doesn’t exist. Superintelligence is not rebelling; it is uncontrollable to begin with. Worse yet, the degree to which partial control is theoretically possible is unlikely to be fully achievable in practice. This is because all safety methods have vulnerabilities once they are formalized enough to be analyzed for such flaws. It is not difficult to see that AI safety can be reduced to achieving perfect security for all cyber infrastructure, essentially solving all safety issues with all current and future devices/software, but perfect security is impossible and even good security is rare.

“Regardless of a path we decide to take forward it should be possible to undo our decision. If placing AI in control turns out undesirable, there should be an ‘undo’ button for such a situation. Unfortunately, not all paths being currently considered have this safety feature.”

In the big picture, it appears that we are divided: curiosity, high hopes, and expectations on one side; on the other, deep concerns about ethics, misuse, and all the things that can go wrong with an intelligence that may eventually surpass humanity. EP

Based in Knoxville, Klaus M. Blache is director of the Reliability & Maintainability Center at the Univ. of Tennessee, and a research professor in the College of Engineering. Contact him at
