Stevens Institute of Technology professor K.P. Subbalakshmi is the founding director of the Stevens Institute of Artificial Intelligence (SIAI). Her goal? To make SIAI the place where academic researchers and industry partners go to solve complex problems too big for any single engineering discipline to tackle alone. This means not only applying the technology, but also clarifying exactly what AI can—and cannot—do.
"As scientists in this field, we know that choosing and designing models, and then tuning and adjusting algorithms to give better predictions, is much harder than it looks," Subbalakshmi says. "There's a lot of work, a lot of failing and going back and doing it again, in the process."
But Subbalakshmi believes that process will ultimately create better AI. "Eventually, I think computers will learn to survey the entire landscape of possible models and choose the best ones for a given problem, regardless of discipline."
Her perspective on the topic is well-founded—and world-class. She is both a Jefferson Science Fellow and a National Academy of Inventors Fellow. As a Jefferson Science Fellow, she spent a year as a senior science and technology advisor at the U.S. Department of State in Washington, D.C.
She shared that perspective with Stevens to help shed light on the future development of AI.
What is your research background?
I’ve worked with machine learning concepts and tools for a long time. Some of my earliest work in the area was on steganalysis, where the objective is to detect whether a video or image contains hidden messages. In more recent years, my collaborators and I have moved to problems involving language patterns. Our team created several natural language processing-based tools that can identify the gender of a writer. Our algorithms can also detect whether a person is lying based on just a tweet. This has applications in online child safety, where older predators pose as children to lure young people into revealing too much information about themselves or into meeting them in person. Most recently, we have been working on non-invasive tools that will detect Alzheimer’s disease in patients.
How would you describe AI?
AI is like that old story of blind men trying to describe an elephant—one thinks it’s a tree trunk, one thinks it’s a snake, and so on. That’s pretty much what AI is at this point: a lot of people with different ideas of what it is.
One of your most successful projects uses machine learning to detect early onset Alzheimer’s. Why did you think machine learning would be successful in this application?
One of the things Alzheimer’s patients deal with is a change in their language ability. Language ability means speech ability—changes in word choice, speech patterns, sentence structures and so on. Machine learning tools are very good at understanding patterns. Since the language use patterns of someone with Alzheimer’s or aphasia or dementia are different from those of a person who does not have those diseases, we hypothesized that machine learning would be able to catch those patterns. So far, our hunch has proven right.
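The kind of language-use patterns described above can be made concrete with a few simple lexical features. The sketch below is a minimal, hypothetical illustration—not the Stevens team's actual pipeline—showing three features (vocabulary diversity, sentence length, and filler-word rate) of the sort a machine learning model might use as input; the filler-word list is an assumption for the example.

```python
import re

def lexical_features(text):
    """Compute a few coarse language-use features from raw text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    fillers = {"um", "uh", "thing", "stuff"}  # assumed filler/vague words
    return {
        # Type-token ratio: lower values suggest reduced vocabulary diversity
        "type_token_ratio": len(set(words)) / len(words),
        # Average sentence length in words
        "avg_sentence_len": len(words) / len(sentences),
        # Rate of vague or filler words
        "filler_rate": sum(w in fillers for w in words) / len(words),
    }

sample = "I went to the store. I bought, um, the thing. The stuff was there."
feats = lexical_features(sample)
```

In a real system, features like these would be fed to a trained classifier; the point here is only that language change is measurable from text.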
What are you working on now?
We are now looking at detecting emotion through written text. Other researchers have looked at this, and typical models include about four or five different emotions, but we were able to create a more nuanced picture of these emotions using our algorithms. From this new picture, we're building a model of how a person’s emotional state changes over time. This can be used for something really good, like if the person has a known case of PTSD or some form of depression. We hope to catch early signs of the psychological change, and that might mean the difference between life and death for people suffering from these psychological conditions. Ideally, it would be great to connect this technology with mental healthcare professionals to create a system that can provide 24/7 care for these patients, filling in the gaps that human caregiving can have.
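Modeling how an emotional state changes over time could be sketched, in the simplest possible form, as smoothing per-message emotion scores and flagging a sustained shift. The snippet below is a hypothetical illustration only—the scores, smoothing factor, and alert threshold are all assumptions, not the researchers' actual model.

```python
def track_negativity(scores, alpha=0.3, threshold=0.7):
    """Smooth per-message negativity scores with an exponential moving
    average; report the trajectory and whether it crosses the threshold."""
    ema, trajectory = 0.0, []
    for s in scores:
        ema = alpha * s + (1 - alpha) * ema
        trajectory.append(ema)
    return trajectory, max(trajectory) >= threshold

# Illustrative per-message negativity scores in [0, 1],
# as might come from a text-based emotion classifier
daily_scores = [0.2, 0.3, 0.8, 0.9, 0.9, 0.95]
trajectory, alert = track_negativity(daily_scores)
```

Smoothing keeps a single bad day from triggering an alert while still surfacing a sustained downturn—the kind of early sign the interview describes catching.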
Why are these applications important?
Applications like these help people in very meaningful ways and make their quality of life exponentially better. That is maybe the most important reason for the research. Who wouldn’t want to do that?
But broadly speaking, foundational concepts like artificial intelligence and machine learning gain from diverse applications since these applications act like testing grounds for the theories. It’s not about taking the same tool and using it in a new application; it’s about learning what a tool can’t do, where it is failing and why, and finding ways to restructure our thinking about its foundations so we can create newer, better tools.
What does the future of AI look like?
I think researchers will begin to search for tools that "think" in a more meta way. For example, right now I am doing some work studying the viral spread of rumors in social networks, and the field borrows ideas and models from the ways epidemiologists describe the spread of diseases. So, there were humans involved in making that connection between diverse disciplines. What if a machine can do that? What if a machine were to augment the efforts of a team of experts? What if, in the future, we have a hybrid human-machine team working to solve the harder problems facing humanity now?
Parts of this Q&A were adapted from an upcoming feature in The Stevens Indicator.