Shopping App, Parkinson's Detection Tech, Collision Avoidance System Net Stevens Top Prizes at NYU, Princeton, Stony Brook Hackathons

Graduate-student duo has more artificial intelligence-fueled innovation in the works

Parkinson's disease afflicts millions of people worldwide, but it can be difficult and expensive to confirm, particularly in its early stages. New ideas will be critical to faster diagnosis and treatment.

Now a Stevens Institute of Technology graduate-student team has tackled the problem in a way so innovative it took home top prize in Stony Brook University's annual Hack@CEWIT Hackathon, one of three recent wins notched by the duo.

"We are confident someone could use our methods to implement low-cost movement disorder detection," says graduate cybersecurity student Divyendra Patil, who teamed with fellow cybersecurity student Rahul Yadav to develop their prize-winning Park-Detect concept at the Long Island event.

Mining typing data with neural networks, hardware

To build their system, the pair — assisted by graduate students Sagar Jain and Poornima Pundir — relied upon a public database of keystroke data created from more than 200 international subjects (some confirmed with Parkinson's disease, some healthy) by researchers at Australia's Charles Sturt University.

Participants in the Australian project installed temporary 'keylogger' applications that automatically tracked and recorded their keystrokes as they worked on computers.

With that pool of keystroke data in hand, Patil and Yadav added an inexpensive camera to the mix. By breaking the typing videos into frames and training a convolutional neural network (a series of repeated mathematical operations that filter and classify data), their system learned to determine which hand a subject was using to press each key. (The keystroke dataset doesn't distinguish between hands, which could confuse detection algorithms.)
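
As a minimal sketch of that frame-classification idea, assuming PyTorch, 64-by-64 grayscale frames and a small illustrative architecture rather than the team's actual network:

    import torch
    import torch.nn as nn

    class HandClassifier(nn.Module):
        """Toy convolutional network that labels one video frame as a
        left-hand or right-hand keystroke (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # filter raw pixels
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)     # two classes: left, right

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # One 64x64 grayscale frame (batch of 1, untrained weights).
    frame = torch.rand(1, 1, 64, 64)
    probs = HandClassifier()(frame).softmax(dim=1)
    print(probs)  # probabilities for [left hand, right hand]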

They also ran Python code on the data to filter it further by hand, key, stroke duration, time lapse and relative direction to the next keystroke.
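
A rough sketch of that kind of filtering, assuming the keystrokes sit in a pandas DataFrame with hypothetical column names (hand, key, press_time, release_time) rather than the public dataset's actual schema, could be:

    import pandas as pd

    # Hypothetical column names; the real dataset's schema may differ.
    keystrokes = pd.DataFrame({
        "hand":         ["L", "R", "L", "R"],
        "key":          ["a", "k", "s", "l"],
        "press_time":   [0.00, 0.35, 0.80, 1.20],   # seconds
        "release_time": [0.12, 0.47, 0.95, 1.33],
    })

    # Hold time: how long each key stays pressed.
    keystrokes["hold_time"] = keystrokes["release_time"] - keystrokes["press_time"]

    # Flight time: lapse between releasing one key and pressing the next.
    keystrokes["flight_time"] = (
        keystrokes["press_time"].shift(-1) - keystrokes["release_time"]
    )

    # Per-hand summary of typing rhythm.
    print(keystrokes.groupby("hand")[["hold_time", "flight_time"]].mean())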

Finally, armed with their analyses, the pair ran repeated simulations, trying to predict Parkinson's cases with no other prior knowledge of a person except his or her age and typing results. After training the network for just two days, Patil and Yadav found they could correctly detect a confirmed Parkinson's patient in a case pulled blindly from the Australian dataset an impressive 72 percent of the time.
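
A blind evaluation of that sort, training on some subjects and scoring the held-out rest, can be sketched with scikit-learn; the features below are synthetic placeholders for real per-subject age and typing statistics, and the small MLP merely stands in for the team's trained network:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: per-subject age plus summary typing features
    # (e.g. mean hold and flight times). 0 = healthy, 1 = confirmed Parkinson's.
    X = rng.normal(size=(200, 3))
    y = rng.integers(0, 2, size=200)

    # Hold out subjects the model never sees during training.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # Fraction of blind test cases classified correctly (the 72 percent figure
    # refers to the real dataset, not to this synthetic example).
    print(accuracy_score(y_test, model.predict(X_test)))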

"We felt this idea might work because typing response time is known to be slower and less accurate in Parkinson's sufferers."

With assistance from friend and restaurant owner Suresh Patel, who contributed last-minute funds for additional hardware and drove the team from New Jersey to Stony Brook in time to make the competition, they hurriedly assembled a test setup, powered it up with a cluster of three small Raspberry Pi controllers, and topped a field of 100-plus competitors.

Helping the visually impaired shop for food, helping drivers avoid collisions

Patil and Yadav then took another top prize in the prestigious HackNYU Competition in late March.

For that competition, Stevens' hackathon heroes hatched SenseFood, an Android application that helps visually impaired shoppers identify foods in the market (similar fruit and vegetable shapes, for example, can be confusing) by rapidly scanning them with a smartphone camera in real time.

Again drawing on a convolutional neural network to classify new, unknown images, the app compares those scans with image libraries and instantly speaks the most likely result back to the user via text-to-speech.
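
SenseFood itself is an Android app, but the classify-then-speak flow can be sketched in Python; the label list, the untrained stand-in model and the pyttsx3 speech engine here are illustrative choices, not the app's actual code:

    import torch
    import pyttsx3  # offline text-to-speech; the Android app would use the platform's TTS

    # Hypothetical label set; the real app compares frames against larger image libraries.
    LABELS = ["apple", "banana", "orange", "potato", "tomato"]

    def classify(frame: torch.Tensor) -> str:
        """Return the most likely label for one camera frame. The untrained
        stand-in model makes the answer meaningless; it only shows the flow."""
        model = torch.nn.Sequential(
            torch.nn.Flatten(),
            torch.nn.Linear(3 * 64 * 64, len(LABELS)),
        )
        scores = model(frame.unsqueeze(0)).softmax(dim=1)
        return LABELS[int(scores.argmax())]

    frame = torch.rand(3, 64, 64)            # stand-in for a 64x64 RGB camera frame
    label = classify(frame)

    engine = pyttsx3.init()
    engine.say(f"This looks like a {label}")  # speak the most likely result
    engine.runAndWait()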

"It's already available on the Google Play app store and receiving positive feedback," says Patil. "It's also supported for multiple local languages."

Then they took a third straight prize, for "Best Hack for Social Good," at Princeton University's annual HackPrinceton hackathon in late March.

For that competition, they thought up RoadRash, a camera-and-sensor system trained on images that could potentially sense highway dangers just ahead, such as sharp curves, wet roads or animals in the road, and notify drivers in time to slow down.
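
RoadRash's internals weren't detailed; purely as an illustration, the decide-and-warn step, combining a hypothetical camera classification with a hypothetical speed reading, might reduce to logic like this:

    # Hypothetical decision logic only: the names, hazard classes and thresholds
    # are illustrative, not taken from the RoadRash implementation.

    def should_alert(hazard: str, confidence: float, speed_kmh: float) -> bool:
        """Warn when the camera sees a hazard with enough confidence and the
        car is moving fast enough that slowing down matters."""
        return hazard != "clear road" and confidence >= 0.6 and speed_kmh >= 40

    # One simulated reading: the classifier reports "wet road", the speed sensor 85 km/h.
    hazard, confidence, speed_kmh = "wet road", 0.82, 85.0

    if should_alert(hazard, confidence, speed_kmh):
        print(f"Warning: {hazard} ahead at {speed_kmh:.0f} km/h, slow down")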

More ideas, competitions ahead

Patil and Yadav, who both hope to work as system administrators post-graduation, will continue to pursue their prize-winning projects in odd hours during summer internships or after graduating from Stevens.

"The next step for Park-Detect, for example, would be to bring in facial reactions to see what the user is concentrated on, as well as gait recognition," says Patil. "Data for gait is available, but not yet for facial reactions. The same methodology could also be used to detect other movement disorders, with modifications and by understanding the various symptoms of those other diseases."

The pair have their sights set on creating more new healthcare innovations at future tech competitions.

"We work well together," notes Yadav.

"We are not doing this to make money," adds Patil. "We want to help people."

"If our systems can help people, we will keep pursuing them."