Research & Innovation

Dean’s Lecture Series: The Expansion of AI into the Realm of the Image, Predictive Models in Finance, and Beyond

Manuela Veloso’s talk dove deeply into what, exactly, constitutes intelligence, the potential of the image as a new focus of study, how to develop AI that follows a more holistic model of intelligent reasoning and action, and where the field can go from here

“Are you a robot?” has long been the litmus test used to distinguish humans from artificial intelligence (AI). While the bots still fall short, humans know how to pick out the images with stoplights, buses and crosswalks. Now, however, Dr. Manuela Veloso and her team at JPMorgan Chase have made image comprehension a central part of their leading-edge research, among many other fascinating areas of study including predictive financial models, machines trained to ask meaningful questions, and more.

Jean Zu, Dean, Schaefer School of Engineering and Science

Every fall, Schaefer School of Engineering and Science (SES) Dean Jean Zu invites a remarkable leader in research and technology to share their work with the Stevens community at the Dean's Lecture Series. This year, SES welcomed Veloso, head of AI Research at JPMorgan Chase and Herbert A. Simon University Professor Emerita at Carnegie Mellon University, where she previously served as faculty in the Computer Science Department and head of the Machine Learning Department.

In her talk, titled “Artificial Intelligence in Finance: Examples and Discussion,” Veloso chronicled her nearly four decades of work in artificial intelligence, from its earliest beginnings to recent challenges and the future of the field, through the lens of finance and her novel work with images in AI.

Veloso highlighted advances she has made during her tenure in academia and in her current role at JPMorgan Chase. She and her team have developed AI models to tackle enormous challenges, from public data standardization to symbiotic human-robot interaction and predictive models for the global stock exchange. As Veloso explained, what ties all these fields and applications together is the ability to turn one type of information — say language or letters or even pixels — into another. The power lies in AI's capacity to transform information at enormous scale.

“AI ends up being in one way or another a technique to change representation. To change from these social media and websites all in structure to some other type of data. We are always in the business of transforming one type of information into another type of information that makes things easier.” Veloso approaches the daunting field of AI not only with exacting mathematics and statistics, but also with a focus on what she called the “magic” of its potential. AI, she reminded the audience, is first and foremost a science. But it is also a field that both creates and seeks to understand consciousness.

As a leading global STEM institution, Stevens Institute of Technology is committed to charting new territory and collaborating with researchers at the top of their fields. To this end, Stevens leaders and academics have identified six areas to serve as critical pillars of research: AI, machine learning, and cybersecurity; biomedical engineering, healthcare and life sciences; complex systems and networks; data science and information systems; financial systems and technologies; and resilience and sustainability. This year marks the relaunch of the Stevens Institute for Artificial Intelligence, and for the Dean’s Lecture Series, Dean Zu asked Veloso to give a deep dive into this exciting new era of research and exploration, and into one of the school’s six pillars. Veloso’s lecture is important not only because of her expertise in AI, symbiotic human-robot autonomy, continuous learning systems, and AI in finance, but also because of her ability to speak to the history of the field, its challenges and its far-reaching implications for humanity.

The formal field of AI began back in the 1950s. “There was a proposal written to the U.S. government asking for money to fund ten men for a two-month study. In this time they thought they could identify every aspect of learning and feature of intelligence and describe it so precisely that any computer could solve it,” she said. “We are still struggling to solve this problem. We don’t know all the answers. We are in the discovery mode and the development mode now.”

Stock exchange screens with lines of numbers

Along the lines of new discoveries, Veloso described a sudden insight she had while observing Wall Street traders on the floor of the New York Stock Exchange. “I watched traders on the floor, like you see in movies, people stand behind screens and make decisions —‘Buy [or] don’t buy’ — they are just surrounded by images looking at plots of the assets guiding their decisions. So I looked at these images and the resulting decisions as deep neural nets. We [as AI scientists] have been in the object-detection and -classification business for a long time — tables, chairs, cats, dogs. I wondered if we could use the same technology to classify images, or signal images of plots, if we show these images to the algorithm.

“I decided to try this and created a system called ‘Mondrian,’ which pixelates a signal and transforms it into an image of some dimension. Then we scan through the whole signal window and we train the machine learning neural net with these images, so that when the machine looks at this now, the AI (and not the stock traders) can say ‘Buy — don’t buy.’ …[With] just the actual plot of the signal, a matrix of pixels and with that input we were able to predict successful stock trading with high accuracy. This was one of the first disruptive things we did, using images.”
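To make the idea concrete, the sketch below illustrates the general approach Veloso describes: rasterize a sliding window of a price signal into a small pixel image, then train a neural network to classify each image as “buy” or “don’t buy.” This is a hypothetical toy example on synthetic data, not JPMorgan's Mondrian system; the window size, labeling rule and network architecture are illustrative assumptions.

```python
# Toy sketch of signal-as-image classification (not the actual Mondrian system):
# render each window of a synthetic price series as a 32 x 32 pixel "plot" and
# train a tiny CNN to predict whether the price rises over the next few steps.
import numpy as np
import torch
import torch.nn as nn

def window_to_image(window, height=32):
    """Rasterize a 1-D signal window into a height x len(window) binary image."""
    w = (window - window.min()) / (window.max() - window.min() + 1e-8)  # scale to [0, 1]
    rows = ((1.0 - w) * (height - 1)).astype(int)                       # pixel row per column
    img = np.zeros((height, len(window)), dtype=np.float32)
    img[rows, np.arange(len(window))] = 1.0                             # one lit pixel per column
    return img

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 5000))                 # synthetic price series
WIN, HORIZON = 32, 5
ends = np.arange(len(prices) - WIN - HORIZON) + WIN - 1          # last index of each window
X = np.stack([window_to_image(prices[e - WIN + 1:e + 1]) for e in ends])
y = (prices[ends + HORIZON] > prices[ends]).astype(np.float32)   # "buy" if price rises later

model = nn.Sequential(                      # tiny CNN over the 32 x 32 plot images
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

xb = torch.from_numpy(X).unsqueeze(1)       # (N, 1, 32, 32) image batch
yb = torch.from_numpy(y).unsqueeze(1)       # (N, 1) buy / don't-buy labels
for epoch in range(5):                      # short full-batch loop, purely illustrative
    opt.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```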

According to Veloso, there are three pillars of intelligence: perception (taking in data and information through the senses), cognition (interpreting that data to reason and form new insights), and action (doing something intentional based on those insights). This is why so much of the field is still focused on language and speech processing; “but then there is this capability humans have to reason, to negotiate, which is still a mystery, and then we move, we execute, which is based on cognition. AI is a young science.”

Manuela Veloso, Ph.D., Head of AI Research, JPMorgan Chase & Co.

“Thinking” is not the complete picture, literally and figuratively, and what remains to be explored is the puzzle of visual processing. “Magically, the image does better than any mathematical function [in a predictive model]. The function is so nonlinear that we don’t have math. We don’t have any math that is able to learn that well. We need to open our hearts to see images as a source for decision-making, that is my only point.” Veloso’s question on the floor of the exchange sparked a new use of AI, and her Mondrian technology is currently the closest researchers have come to using images directly as inputs for financial decision-making. Mondrian is not yet in actual use at JPMorgan; it remains a proof-of-concept prototype that the team is still testing. As Veloso noted, skeptics will want to see proof of Mondrian’s predictions before it is put out into the market. “One day we will have the real robot moving around being like an actual human. We don’t have it yet. We have slices and pieces of things.”

Another area that Veloso and her team are exploring is public data discovery, interpreting vast amounts of digital information to predict not just which stocks to buy in the moment, but which assets will perform better or worse in the future. They have achieved this by creating AI that crawls the internet, mainly social media and conversational platforms such as Reddit and X (formerly Twitter), searches for language cues about overall sentiment on a particular asset, and then assigns that sentiment a value of -1 or 1. Time and again, this sentiment monitoring predicts that negatively scored assets will go down in value while positively scored assets will go up. This, she explained, is a perfect example of AI's capability to transform one type of information into another, in this case social media sentiment into a -1 to 1 scale. “Now we have an AI system that is able to convert whatever is available into something that the industry, that the operators of the finance world, can use.”
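As a rough illustration of that transformation, the toy sketch below scores hypothetical social posts about an asset as -1 or +1 using a small hand-made word list, then averages the scores into a single per-asset signal. It is a stand-in for the far more sophisticated language models a bank would actually use; the word lists, function names and sample posts are invented for illustration.

```python
# Toy sketch of turning social-media text into a -1 to 1 sentiment signal per asset.
# The word lists and sample posts are illustrative, not a production pipeline.
from collections import defaultdict

POSITIVE = {"beat", "growth", "strong", "upgrade", "bullish", "record"}
NEGATIVE = {"miss", "lawsuit", "weak", "downgrade", "bearish", "recall"}

def score_post(text: str) -> int:
    """Return +1 if positive cues outweigh negative ones, -1 if the reverse, 0 otherwise."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return 0 if pos == neg else (1 if pos > neg else -1)

def asset_sentiment(posts_by_asset: dict[str, list[str]]) -> dict[str, float]:
    """Average the per-post scores into one value in [-1, 1] for each asset."""
    signal = defaultdict(float)
    for asset, posts in posts_by_asset.items():
        scores = [score_post(p) for p in posts]
        signal[asset] = sum(scores) / max(len(scores), 1)
    return dict(signal)

posts = {
    "ACME": ["Record quarter with strong growth", "Analyst upgrade and bullish outlook"],
    "ZZZ":  ["Earnings miss again", "Product recall and a lawsuit"],
}
print(asset_sentiment(posts))   # {'ACME': 1.0, 'ZZZ': -1.0}
```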

Manuela Veloso delivers her talk in the DeBaun Auditorium

Veloso also touched on other areas, such as document efficiency, and ended with the pioneering research she is doing into symbiotic human-robot interaction. Her team is currently training AI to ask meaningful questions when it hits a snag or doesn't understand some aspect of its instructions. This could one day lead to the walking, talking, reasoning robot she alluded to earlier in her lecture, a longtime ambition of the field and one of the favorite topics of science fiction.

Despite the field's immense challenges, scientists like Veloso are ushering in a Renaissance in computing that could be led, not by the Medici patrons and Michelangelos, but by intelligent machines. Researchers at Stevens are excited to relaunch the Stevens Institute for Artificial Intelligence and to invite brilliant minds such as Veloso from all over the world to come here, collaborate, innovate and develop this incredible technology, which has the potential to solve some of the largest problems facing humanity today. As Dean Zu said, “Dr. Veloso certainly gave a fascinating talk. She was able to use a very simple concept of image that everyone understands to explain the core AI idea. The objective of the lecture series is to bring top-notch people from academia and industry to [share] cutting-edge research and development happening in society to help broaden our knowledge and horizon.”

Watch the video of the 2023 Dean's Lecture Series.