Campus & Community

Stevens Dean Appearing at SXSW Tech Conference

AI expert will discuss the intersection of artificial intelligence and music

Kelland Thomas, Dean of Stevens Institute of Technology's College of Arts & Letters, will serve as a panelist at the annual South by Southwest (SXSW) technology conference in Austin in mid-March, discussing music and artificial intelligence. Here's a preview of what he'll discuss.

How will AI affect the way we experience music?

It already does. Recommendation engines such as Pandora's seem to know just what we like, but really they're drawing a statistical picture of our preferences, based on models developed via deep learning on large amounts of data gathered about user behavior. These companies say they're giving users what they want, but to what extent are they also driving that choice by limiting the options we're exposed to?
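To make that "statistical picture" concrete, here is a minimal, hypothetical sketch of item-based collaborative filtering, one common way recommendation engines score unheard tracks. The track names and ratings are invented, and production systems such as Pandora's are far richer than this.

```python
import numpy as np

# Hypothetical listening data: rows are users, columns are tracks.
# A 1 means the user liked the track; 0 means no signal.
tracks = ["Take Five", "So What", "Walking on Sunshine", "Blue in Green"]
ratings = np.array([
    [1, 1, 0, 1],  # user 0: likes the jazz tracks
    [1, 0, 0, 1],  # user 1: likes two jazz tracks, hasn't heard the rest
    [0, 0, 1, 0],  # user 2: likes the pop track
])

def cosine_sim(a, b):
    """Cosine similarity between two per-track rating columns."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / norm if norm else 0.0

def recommend(user, k=2):
    """Rank the user's unheard tracks by similarity to tracks they liked."""
    liked = np.flatnonzero(ratings[user])
    scores = {}
    for j, name in enumerate(tracks):
        if ratings[user, j]:
            continue  # already liked; nothing to recommend
        scores[name] = sum(cosine_sim(ratings[:, j], ratings[:, i]) for i in liked)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(1))  # ['So What', 'Walking on Sunshine']
```

Even this toy version shows the concern the dean raises: the system can only rank tracks already in its matrix, so what it "gives users" is bounded by what it chooses to expose them to.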

You're working on AI-generated live jazz played with real musicians. Can AI create meaningful music? Can it perform seamlessly with a group of human performers?

If you're talking about great cocktail-lounge background music, that should be doable in three years. As for great music, though, the kind the great jazz musicians in history have made, I don't really think that will be feasible. We look for certain things: for empathy, for the aura around an individual performance, and those won't necessarily be there.

As for composition, a statistical model may follow the one path that generates a passage that surprises us and may even be beautiful and unexpected; you might not know it was written by AI. That's pretty feasible. But the bigger question is that whatever produces that 'right' bit of music won't necessarily be able to evaluate itself as having produced something beautiful. It's producing from a model; it doesn't evaluate. How will it know it has come upon the perfect thing?
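The gap between generating and evaluating shows up even in the simplest generative model. The toy sketch below (purely illustrative, not Dean Thomas's research system) trains a bigram Markov chain on an invented melody and samples new passages; nothing in the sampling loop judges whether the output is any good.

```python
import random
from collections import defaultdict

# Invented training melody: the model learns which note tends to follow which.
melody = ["C", "D", "E", "G", "E", "D", "C", "D", "E", "C"]

transitions = defaultdict(list)
for prev, nxt in zip(melody, melody[1:]):
    transitions[prev].append(nxt)

def generate(start="C", length=8):
    """Sample a passage note by note from the learned transitions."""
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: this note never led anywhere in training
        out.append(random.choice(choices))
    return out

# The model happily emits passages, but no line of this code scores the
# result: there is no notion of 'beautiful' anywhere in the pipeline.
print(generate())
```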

Humans can do that. Take the Gallagher brothers of Oasis: when they heard a great riff they'd made, they'd tell the press they just knew it was going to be a hit. They were totally confident. AI can't do that. We're not there yet.

Do we want our moods reflected back to us by AI-generated playlists?

Thomas with Professor Rob Harari and CAL students in the CAL Recording Studio

Well, there are times you feel down, say, and you may want to hear music that reinforces that particular mood, but there are also times you want something that changes your mood. I would be somewhat skeptical of any statistical model or technology looking at your face, saying "this is a happy person," and then playing Katrina and the Waves' "Walking on Sunshine" and all those '80s hits that sound happy. I mean, the reasons we listen to music at any given time are actually a lot more complex than that.

https://www.vimeo.com/205865185