Research & Innovation

Is ChatGPT Going to Eat (or Save) the World?

Stevens faculty experts weigh in on the white-hot bot that generates surprisingly human responses

ChatGPT, released to the public in November 2022 by the California tech firm OpenAI, has taken the world by storm nearly overnight.

Users are exploring and reporting on its uncannily human responses and its potential usefulness as a homework assistant, creative writer, answer key, recipe inventor, calendar scheduler and more.

The technology has even passed a Wharton MBA exam and the official three-part U.S. Medical Licensing Examination (USMLE).

We asked Stevens faculty experts in various fields to weigh in on the power — and potential pitfalls — of this white-hot AI.


Artificial Intelligence

ChatGPT has the potential to change the way we work, teach, learn and solve problems.

I imagine it having a similar impact to that of the increasingly high-performance computing resources we have at our disposal, which have changed the ways we use technology to solve problems.

The way we write software has already evolved thanks to languages that abstract away low-level details; perhaps we'll soon teach courses on how to prompt ChatGPT to write highly efficient programs that meet our needs.
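To make that scenario concrete, here is a minimal sketch of what "prompting for efficient programs" might look like, using OpenAI's Python client; the model name, prompt and parameters are illustrative assumptions, not a recommended recipe.

```python
# A minimal sketch of prompting ChatGPT for efficient code, assuming
# OpenAI's Python client (the model name here is illustrative).
import openai

openai.api_key = "YOUR_API_KEY"  # assumed: reader supplies a real key

prompt = (
    "Write a Python function that returns the n-th Fibonacci number "
    "in O(n) time and O(1) space, with a brief complexity note."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful Python engineer."},
        {"role": "user", "content": prompt},
    ],
    temperature=0,  # favor conservative, reproducible output
)

print(response.choices[0].message.content)
```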

Brendan Englot
Associate Professor and Director,
Stevens Institute for Artificial Intelligence (SIAI)


Healthcare & Medicine

People are definitely thinking about (and already using) it in healthcare.

There are challenges with transparency: it doesn’t cite its sources in a way a user can trace, and it may be used in conversations where people don’t realize they’re not talking to a person.

In some cases, like processing receipts for expense reports, people significantly prefer algorithms over humans to pore over that data. But in one recent mental health experiment, people were no longer satisfied once they realized they were talking to a bot rather than a person.

Another issue is the length of the replies. When I tried out ChatGPT with questions on topics I’m familiar with, like diabetes treatments, the responses were generally hundreds of words long.

As I’ve found in my research on health decision-making, more targeted information is more helpful.

Samantha Kleinberg
Associate Professor of Computer Science




Politics

The likelihood that this transforms politics is low, but the potential for systems like these to be used in some races is greater than in others.

National, well-funded or major-party-backed campaigns will probably not turn to ChatGPT or the like for content creation. However, small, local, shoestring campaigns may find opportunities to use ChatGPT for first drafts of slogans or campaign communications.

There is not much oversight of these sorts of things, and voters would likely not know or care either way, so campaigns and consultants could use the tools with very little risk of being found out.

Lindsey Cormack
Associate Professor of Political Science

Jobs & Careers

Looking forward a bit, there will be new jobs related to managing the AI. For example, jobs that cultivate AI by feeding in new data sets: text, images, video and structured data, including numbers. Data engineers and information systems analysts, as well as machine learning occupations, will be in demand.

Longer-term, if these tools really do increase productivity by a lot, then one would expect companies that embrace them will take market share from competitors. They may absorb the competitors’ workers, training them up in the tools, as long as the winning companies are growing.

At some point, if someone with these tools is much more productive than someone without them, there may even be less need for super-productive people. (When that happens — and how large the effect would be — is very hard to know.)

There will also be new inventions built on the backs of these tools, which will change current occupations and perhaps create new ones.

So I wouldn’t predict mass unemployment, but instead more gradual shifts in occupations, education, and training, with pockets of large growth as well as pockets of stagnation and contraction.

Jeff Nickerson
Professor and Steven Shulman '62 Endowed Chair for Business Leadership
School of Business


Misinformation

From the start, meaning the ‘50s and ‘60s, AI has been able to mimic human creativity. AI’ers built programs that could make art, music and poems, some of which were pretty cool. It's gotten better and better, and it's gotten harder and harder to distinguish products of human and artificial intelligence. ChatGPT is just the latest twist.

Does that mean AI is about to become truly creative, smarter than us and sentient, as sci-fi writers have been warning since I was a kid? No. I don't take those prophecies seriously.

But it means humans can use tools like ChatGPT to fool other humans. Students can use ChatGPT-type programs to "write" fake papers, but that's a trivial problem; the bigger problem is that freelance or government-sponsored troublemakers can invent and disseminate fake information more effectively.

We're already drowning in B.S., and it's going to get worse, because no AI program can reliably tell us the difference between truth and B.S. ChatGPT is fun, it's entertaining, but I'm more worried right now about the downside.

John Horgan
Director,
Stevens Center for Science Writings


Creative Fields

I think the implications of the technology will be widely experienced, and these are genuine breakthroughs for AI. There will definitely be implications at a high level.

Human creativity will almost certainly make significant adjustments based on what these systems can do; human creativity is always evolving in response to new cultural and technological situations, so that's to be expected.

Kelland Thomas
Dean, College of Arts and Letters

The tools are amazing, but still have limits. Good human writing is much better, and bad human writing is worse. GPT has a fairly vanilla style.

With respect to images, we are looking at their usage in video games. It appears that the big companies don’t want to use image generation tools from companies like OpenAI, because they can’t control the training sets; they may give away ideas to the vendors; and the legal attribution issues haven’t seen any major test cases yet.

But freelance illustrators, game concept designers and graphic artists are using the tools. In many cases, they use the tools in combination with their own manual skills and with other technical tools (like Photoshop) that allow them to touch up the images.

Jeff Nickerson
Professor and Steven Shulman '62 Endowed Chair for Business Leadership


Teaching & Learning

I think it’s possible increased plagiarism could occur using this new technology, and it may affect our teaching and learning in higher ed if it becomes prevalent.

I'm teaching courses, primarily based on Python, in AI, machine learning, NLP and data science to about 200 students each year. ChatGPT and similar tools present a new challenge because the outputs appear highly realistic, at least in early trials across various writing styles.

One solution for instructors might be to design a semantic analyzer that rates the semantic similarity between a submitted piece of writing and a pre-run ChatGPT output for the same assignment. This may work well for essays.
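As a rough illustration of that idea, the sketch below scores a submission against a pre-generated ChatGPT answer. It substitutes TF-IDF cosine similarity (via scikit-learn) for a true semantic model, and the sample texts and 0.8 threshold are hypothetical.

```python
# A rough sketch of the "semantic analyzer" idea, standing in TF-IDF
# cosine similarity (scikit-learn) for a richer semantic model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity(submission: str, chatgpt_reference: str) -> float:
    """Score 0-1 overlap between a student submission and a pre-run
    ChatGPT answer to the same assignment."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [submission, chatgpt_reference]
    )
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

student_essay = "Insulin therapy remains the cornerstone of type 1 diabetes care..."
reference_essay = "For type 1 diabetes, insulin therapy is the primary treatment..."

score = similarity(student_essay, reference_essay)
print(f"similarity = {score:.2f}")
if score > 0.8:  # illustrative threshold, not a validated cutoff
    print("Flag this submission for manual review.")
```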

Coding, however, is a very personal task; no two coders write the same code. Perhaps we could create a ChatGPT coding footprint and compare it with homework and exam submissions? I don’t know, but we are all beginning to think more seriously about this now, because ChatGPT doesn’t appear to be just a passing fad.
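One way to read the "coding footprint" suggestion: summarize a program as a handful of stylometric features and compare submissions against features measured on ChatGPT-generated solutions. The features below are toy choices for illustration only.

```python
# A toy sketch of a "coding footprint": a few crude style features that
# could be compared between a submission and ChatGPT-generated solutions.
import re

def code_footprint(source: str) -> dict:
    """Extract simple style features from Python source code."""
    lines = [ln for ln in source.splitlines() if ln.strip()]
    names = re.findall(r"[A-Za-z_]\w*", source)
    n_lines = max(len(lines), 1)
    return {
        "avg_line_length": sum(len(ln) for ln in lines) / n_lines,
        "comment_ratio": sum(ln.lstrip().startswith("#") for ln in lines) / n_lines,
        "avg_name_length": sum(len(w) for w in names) / max(len(names), 1),
    }

def footprint_distance(a: dict, b: dict) -> float:
    # L1 distance between footprints; smaller means more similar style.
    return sum(abs(a[k] - b[k]) for k in a)
```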

Carlo Lipizzi
Teaching Associate Professor and Director,
Center for Complex Systems & Enterprises

These programs have the potential to make learning more efficient and I see them becoming more and more integrated into higher education.

Much like the invention of the calculator did not destroy mathematics education, sophisticated chatbots will not destroy the value of humanities education. In fact, this new technology makes the skills gained from the study of the humanities, especially the interpretation and assessment of texts, whether human-made or artificially made, more important than ever.

ChatGPT and its successor programs could also function as personal librarians, making research on a new topic much easier than before.

Gregory Morgan
Associate Professor of Philosophy


Cybersecurity

ChatGPT is being used in a variety of ways to support cyberattacks, such as assisting less experienced attackers in writing malware code. Chatbot capabilities are also being used to generate more professional, more believable phishing attacks and social engineering scripts. The technology appears to be helping “novice” attackers become more effective very quickly.

We can expect emerging attacks will have fewer and fewer of the traditional suspicious indicators (misspellings, poor grammar, etc.) that we’ve learned to value as signals of likely bad actors.
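A toy example of why those indicators lose their value: a filter keyed to classic phishing tells has nothing to flag once the text is machine-polished. The token list below is invented for illustration.

```python
# Toy heuristic: count classic phishing "tells" in a message. An
# LLM-polished phishing email typically contains none of them, so the
# score collapses to zero and the signal becomes uninformative.
SUSPICIOUS_TOKENS = {"acount", "verifcation", "kindly do the needful", "urgent!!"}

def crude_phishing_score(message: str) -> int:
    text = message.lower()
    return sum(token in text for token in SUSPICIOUS_TOKENS)

print(crude_phishing_score("Dear customer, please verify your account today."))  # 0
```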

This is really just the latest example of the technology “leapfrog” effect, where a new capability suddenly makes an opponent more effective. Protection activities need to be improved accordingly, to compensate for the latest innovations of the adversary.

Paul Rohmeyer
School of Business

Human Rationality

Text-based AI has lots of potential to improve human rationality.

We’re all familiar with spelling and grammar checkers that identify potential errors in our writing and offer suggestions to improve them. In principle, we could also develop software and algorithms that identify logical errors in our arguments, surface additional evidence for or against our premises, and suggest ways to improve our arguments.
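Here is one hedged sketch of what such an argument checker might look like today, built by prompting a chat model through OpenAI's Python client; the prompt, model name and API shape are assumptions for illustration, not a description of any existing product.

```python
# A speculative sketch of an "argument checker" built on a chat model,
# assuming OpenAI's Python client (the model name is illustrative).
import openai

openai.api_key = "YOUR_API_KEY"  # assumed: reader supplies a real key

CRITIQUE_PROMPT = (
    "List the distinct claims in the argument below, point out any logical "
    "errors, and suggest evidence for or against each premise.\n\n{argument}"
)

def critique_argument(argument: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user",
                   "content": CRITIQUE_PROMPT.format(argument=argument)}],
    )
    return response.choices[0].message.content

print(critique_argument(
    "All AI tools make mistakes. ChatGPT is an AI tool. "
    "Therefore ChatGPT should never be used."
))
```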

I am not the only person who thinks this. The Intelligence Advanced Research Projects Activity (IARPA) wing of the U.S. Office of the Director of National Intelligence is seeking proposals for systems that can do this.

However, ChatGPT is not necessarily able to distinguish each claim and each piece of evidence (and how it relates to specific claims), then rank the reasons and their evidence. ChatGPT is also not yet integrated into the common word-processing apps that serve the average writer. And there are plenty of other challenges as well.

Nick Byrd
Assistant Professor of Philosophy