SSE teaching professor discusses the risks and rewards of artificial intelligence and machine learning.

Photo of Dr. Gregg Vesonder
Dr. Gregg Vesonder, Teaching Associate Professor & Program Lead

After Dr. Gregg Vesonder presented a webinar on artificial intelligence (AI) and machine learning for the Defense Acquisition University (DAU) and the Systems Engineering Research Center (SERC), he discussed the importance of viewing the evolution of this technology through an ethical lens to better advance its impact on society.

Tell us about advancements in AI and machine learning and its impact on how people use, wear and even talk to smart devices every day. Are these advances bringing us closer to what we once only watched in the movies?

Fiction has always been ahead of science. It has also served as a document of what we might aspire to. There are two approaches to AI: thinking and acting “humanly,” and thinking and acting “rationally.” The Turing Test gave us a criterion for judging whether a machine can think. Today, we have systems and technologies that can perform many tasks. However, what many are asking today is whether AI will ultimately think with emotion.

What are researchers finding? Is AI capable of having emotions? For that matter, is AI capable of being creative?

Well, those are two different things. Let’s start with creativity. We need to do a better job of understanding creativity in humans. AI isn’t creative if its output is simply drawn from other people’s work. However, we are seeing signs of emotion in AI. Consider how we interact with Alexa and Google Home. Researchers are teaching these devices about human behavior, and there are signs that the devices are beginning to understand human emotion. So, when I recently asked Alexa about the weather, Alexa didn’t just tell me that it was raining that day. Alexa added that with a lot of rain there are also rainbows. That could suggest that there’s an understanding of my (or someone’s) emotional response to a rainy day.

As researchers work through these advances, what is it that separates humans from AI?

We are social beings, and this makes us different from AI. What will be really interesting is when you react to a totally independent AI as if it were a person. If we embed true symbolic reasoning inside a neural engine, we will have a super-human and super-neural combination. This means we’ll have AI that’s not just perceiving or remembering information, but consciously thinking.

What steps can be taken toward ethics and safety? Where should future software engineers look to remain current on AI and ethics?

An open national and international forum for discussion is advisable. That’s where the Montreal Declaration for a Responsible Development of AI came into play in 2018. It serves as a guide so that we can work sensibly and sustainably worldwide. Among systems engineers, there is considerable concern about how we use AI as a tool in our work. There are efforts to have developers sign pledges stating that in designing AI systems, humans will always be in charge. A responsible and accountable AI framework means creating systems that are transparent, so that for any task, the system can explain to us how it reached its result.

A lot of SSE researchers are working to improve quality of life and safety through smart initiatives. Can you discuss positive developments in software engineering?

There are many hopeful applications today. There are smart home devices used for elder care, for aging in place. AI has been found to help people with Alzheimer’s disease get through the day. There’s AI in use for soldiers who are dealing with long-term effects of war. Yes, AI systems will help individuals lead better lives. As for creating a better world, the next 20 to 30 years are going to provide the answers. To use the systems we develop to make this world a better place, we must be conscious of how AI is evolving and we must make it as transparent as possible.

Dr. Gregg Vesonder is a teaching professor and the program lead of the undergraduate and graduate software engineering programs at the School of Systems and Enterprises at Stevens.

The software engineering programs at the Stevens School of Systems and Enterprises prepare students to work at any stage of the software development life cycle. Software engineers, developers and innovators are more essential than ever. As the pace of technological change accelerates, there is increasing demand for software systems that are reliable, responsive, safe and secure.

To learn more about the software engineering programs at Stevens email [email protected] or attend a graduate webinar.