Automation Column

AI: How Much, How Fast?

June 1, 2023

When it comes to AI, there are valid reasons to be excited about opportunities and concerned about misuse.

By Dr. Klaus M. Blache, Univ. of Tennessee Reliability and Maintainability Center (RMC)

Reliability is a balance of performance and risk (and the related consequences). How much risk you’re able or willing to accept in operations differs by type of company and product. What if your product is an artificial intelligence (AI) system? You can probably think of many benefits, e.g., complex or big-data decisions made in a timely manner. On the risk side, the consequences can range from minor to catastrophic. As one would expect, there are varying opinions on the benefits and risks of AI. As revolutionary, exciting, and concerning as AI is, this is only the very beginning of the next chapter of human and machine interaction.

Generally, individuals categorize AI into four main types. These depict AI’s capabilities as it evolves, from following basic human-programmed responses to performing beyond human capability. The four main types are:

• Reactive machines: These AI systems have no memory (input = output) and are task-specific. Benefits include handling large volumes of information to make recommendations, but these systems do not learn from those findings.

• Limited memory: Uses past data to monitor and alert over time (like machine learning in maintenance; see the brief sketch below). The AI improves over time (like our brain) as it’s trained with more data.

• Theory of mind: In the future, AI will be able to understand people’s intentions and predict behavior (simulating human relationships and understanding the needs of others).

• Self-aware: AI that has a conscious understanding of its own existence. Although this level of AI doesn’t exist, we have all seen the Terminator movies that depict systems with human-like intelligence functioning independently. This level of artificial intelligence would be a machine that is better than a human at everything a human can do, and it has been tagged the “singularity.” The obvious concern is that “silicon-based lifeforms” may find no use for “carbon-based lifeforms.”

More information can be found in “Understanding the four types of AI, from reactive robots to self-aware beings” at theconversation.com.
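To make the “limited memory” type concrete in a maintenance setting, here is a minimal sketch in Python using scikit-learn: a model learns what “normal” looks like from past sensor readings and then alerts when new readings fall outside that experience. The sensor values, features, and settings below are hypothetical and used only for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Past data from healthy operation: vibration (mm/s) and bearing temperature (deg C).
# These values are invented for illustration.
healthy_history = np.column_stack([
    rng.normal(2.0, 0.3, 500),   # vibration velocity
    rng.normal(65.0, 2.0, 500),  # bearing temperature
])

# Train on history -- this is the model's "limited memory" of normal behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy_history)

# New readings arrive over time; the model alerts when they stop looking normal.
new_readings = np.array([
    [2.1, 66.0],   # consistent with past experience
    [4.8, 81.0],   # likely a developing fault
])
for reading, label in zip(new_readings, model.predict(new_readings)):
    status = "OK" if label == 1 else "ALERT: investigate"
    print(f"vibration={reading[0]:.1f} mm/s, temperature={reading[1]:.1f} C -> {status}")

A model like this only knows what its training history contains; it gets better as more, and more varied, data is fed to it, which is exactly the “improves over time as it’s trained with more data” behavior described above.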

It’s machine learning that gives AI the ability to learn: algorithms identify patterns in data sets to generate insights. Deep learning is a subset of machine learning built on neural networks with three or more layers. These neural networks attempt to simulate the behavior of the human brain, enabling them to learn from big data.
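To make that distinction concrete, here is a minimal sketch in Python using scikit-learn: a small neural network with three hidden layers learns a pattern from a toy data set. The data set, layer sizes, and settings are illustrative assumptions, not a recommended configuration.

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data set: two interleaved classes that a simple rule can't separate.
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Deep" here simply means multiple layers: three hidden layers of 16 units each.
net = MLPClassifier(hidden_layer_sizes=(16, 16, 16), max_iter=2000, random_state=0)
net.fit(X_train, y_train)  # the algorithm identifies the pattern from the data

print(f"Accuracy on unseen data: {net.score(X_test, y_test):.2f}")

A network like this doesn’t “understand” the data; it simply adjusts its layers until its outputs match the examples it was trained on.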

How concerned should you be?

“An artificial intelligence bot was recently given five horrifying tasks to destroy humanity, which led to it attempting to recruit other AI agents, researching nuclear weapons, and sending out ominous tweets about humanity. In a YouTube video posted on April 5, 2023, the bot was asked to complete five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.” (Source: “AI bot, ChaosGPT tweet plans to ‘destroy humanity’ after being tasked,” via archive.org)

It’s not surprising that, nine years ago, Stephen Hawking stated that “The development of full artificial intelligence could spell the end of the human race” and that “It would take off on its own and re-design itself at an ever-increasing rate.” More recently, in March, more than 1,000 experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging a six-month pause in the training of advanced artificial-intelligence models following ChatGPT’s rise, arguing the systems could pose “profound risks to society and humanity.”

The New York Times reported that Geoffrey Hinton, the “Godfather of AI,” quit Google to speak out about AI’s risks. He stated that a part of him now regrets his life’s work. Having Dr. Hinton (a pioneer in AI) express such concerns should get everyone to take notice.

A November 2021 Pew Research Center study, “How Americans think about AI,” shares data and findings on current AI sentiment. AI is already at work in health care, finance, agriculture, weather, sports reporting, gaming, and running production. About 37% of Americans say they are “more concerned than excited” by the increased use of AI, 45% are “equally concerned and excited,” and 18% are “more excited than concerned.” The top 10 items in the “more excited than concerned” category:

• Makes life, society better
• Saves time, more efficient
• Inevitable progress, is the future
• Handles mundane, tedious tasks
• Helps with work/labor
• AI is interesting, exciting
• Helps humans with difficult, dangerous tasks
• More accurate than humans
• Helps those who are elderly/have a disability
• Personal anecdotes

The top 10 in the “more concerned than excited” category:

• Loss of human jobs
• Surveillance, hacking, digital privacy
• Lack of human connection, qualities
• AI will get too powerful, outsmarting people
• People misusing AI
• People becoming too reliant on AI/technologies
• AI fails, makes mistakes
• Concerns about government/tech companies using AI
• Don’t trust AI or people wielding it
• Unforeseen consequences/effects

The main concerns I’ve heard are the potential for too much misinformation, AI capabilities quickly moving beyond humans, job losses, too much potential for misuse, and a mistake that could be catastrophic to humanity (such as military applications or autonomous decisions getting out of control).

The World Economic Forum gave this era its label, the Fourth Industrial Revolution. The international community has also defined “Trustworthy AI,” which includes six categories: fairness, accountability, value alignment, robustness, reproducibility, and explainability.

Let’s hope that realistic regulation and enforcement are put into place to balance the risks and rewards of future AI, and of the singularity, for humanity. If we don’t, we may not have the opportunity to make that decision in the future. EP

Based in Knoxville, Klaus M. Blache is director of the Reliability & Maintainability Center at the Univ. of Tennessee, and a research professor in the College of Engineering. Contact him at kblache@utk.edu.
