AIRS Fellowship Past Projects

2023 Projects

Eleven projects were selected for Summer 2023. The projects listed below were mentored by faculty from schools across Stevens.

Improving Epilepsy Diagnosis Accuracy with Advanced Deep Learning and Brain Computer Interface Technology

Faculty Mentor: Dr. Feng Liu

Project Description: To improve the accuracy and interpretability of detecting epileptogenic zones (EZ) from measured EEG or MEG, we propose to develop a Model-Based Deep Learning (MBDL) framework based on Unrolled Optimization Neural Networks (UONN) to solve the epilepsy source imaging (ESI) problem. The UONN framework combines the efficient and accurate reconstruction of brain sources offered by the deep learning paradigm with the interpretability and principled parameter tuning offered by the underlying optimization framework. We aim to integrate UONN reconstruction based on EEG or MEG and validate its effectiveness against traditional methods and end-to-end deep learning frameworks. In addition, we aim to develop an EEG/MEG multimodal UONN framework for ESI to further improve epileptogenic zone detection accuracy.
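
To give a flavor of the unrolled-optimization idea (this is a minimal sketch, not the project's actual UONN model), the code below unrolls a few ISTA-style iterations for a toy linear inverse problem y = A s, where each iteration becomes a network layer with a learnable step size and threshold. The dimensions, layer count, and sparsity prior are illustrative assumptions.

```python
# Minimal sketch of an unrolled-optimization (LISTA-style) network for a
# linear inverse problem y = A @ s, loosely analogous to EEG/MEG source
# imaging. Dimensions, layer count, and the soft-threshold prior are
# illustrative assumptions, not the project's actual UONN design.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, lead_field: torch.Tensor, n_layers: int = 5):
        super().__init__()
        self.register_buffer("A", lead_field)        # sensors x sources
        self.n_layers = n_layers
        # One learnable step size and threshold per unrolled iteration.
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: batch x sensors; s: batch x sources (initialized at zero).
        s = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for k in range(self.n_layers):
            residual = y - s @ self.A.T               # data-fidelity term
            s = s + self.steps[k] * residual @ self.A  # gradient step
            s = torch.sign(s) * torch.clamp(s.abs() - self.thresholds[k], min=0.0)  # soft threshold
        return s

# Toy usage: 64 sensors, 500 candidate sources.
A = torch.randn(64, 500)
model = UnrolledISTA(A)
sources = model(torch.randn(8, 64))
print(sources.shape)  # torch.Size([8, 500])
```

Because each layer corresponds to one optimization iteration, the learned step sizes and thresholds remain directly interpretable, which is the appeal of the unrolled approach over a generic end-to-end network.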

Desired Prerequisites: Python, data analysis, statistics


Designing Person-Centric XAI Health Coaching Systems Towards Empowering Athlete Engagement

Faculty Mentor: Dr. Sang Won Bae

Project Description: Smart health tracking systems have been adopted in sports to exploit athletes' metrics and optimize coaching; however, full automation and generic or irrelevant interventions have led to disengagement and distrust. To fill these gaps, we are exploring current health coaching strategies, gathering the perceived benefits and challenges of off-the-shelf wearables, and proposing a person-centric health coaching dashboard design coupled with explainable AI to enable athletes to engage in health monitoring and coaching strategies. We plan to interview collegiate athletes and 6 coaches at Stevens to gain insights into their experiences with wearable technologies, and to develop a person-centric XAI (PXAI) health coaching framework that better supports personalizing goals, understanding preferences, exploring algorithmic decisions, and integrating actionable behaviors. In our evaluations, we will assess model performance, participants' satisfaction with algorithm-generated explanations coupled with actionable suggestions, and the perceived usefulness of the PXAI coaching framework. We will further highlight lessons learned regarding feasibility and adaptability.
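
One simple way a coaching dashboard could surface algorithmic decisions is to pair a predictive model with feature attributions. The sketch below is only an illustration of that pattern: the wearable features, the toy "recovered" label, and the use of permutation importance as the explanation signal are all assumptions, not the project's chosen design.

```python
# Minimal sketch: fit a model on hypothetical wearable features and extract
# per-feature importances that a coaching dashboard could translate into
# plain-language explanations. Feature names and the target are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["sleep_hours", "resting_hr", "training_load", "hrv"]
X = rng.normal(size=(300, len(features)))
# Toy "recovered vs. not recovered" label driven mostly by the first two features.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Candidate content for an explanation panel: which signals drove the prediction.
for name, score in sorted(zip(features, importances.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```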

Desired Prerequisites: Python programming


Generative AI and the Future of News Work

Faculty Mentor: Dr. Jeff Nickerson

Project Description: As part of a larger project in collaboration with Columbia University and Syracuse University, we are designing and building tools based on large language models and image models (GPT, Bard, DALL-E, Midjourney, Stable Diffusion) to aid journalists. We are building tools, observing journalists, and iterating in the hope of better understanding the potential impacts of AI on journalism.

Desired Prerequisites: Python is a prerequisite. We will develop initially in Python in Jupyter or Colab notebooks and use either Plotly Dash or Flask to create web apps. Knowing all of these technologies is not necessary, but an interest in learning and building tools is. Familiarity with some foundation text or image models would be helpful, as would knowledge of journalistic writing, art direction, graphic design, or video production. Students from all four schools are encouraged to apply.
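
As a tiny sketch of the kind of web tool we have in mind, the Flask app below exposes one endpoint that would pass a journalist's prompt to a text model. The endpoint name is invented and the model call is stubbed out, since choosing and wiring in an actual API (GPT, Bard, etc.) is part of the project work.

```python
# Minimal Flask sketch of a journalist-facing tool: accept a prompt, return a
# draft. generate() is a placeholder; no specific LLM vendor API is assumed.
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate(prompt: str) -> str:
    # Placeholder for a call to a large language model.
    return f"[draft based on: {prompt}]"

@app.route("/draft", methods=["POST"])
def draft():
    prompt = request.get_json(force=True).get("prompt", "")
    return jsonify({"draft": generate(prompt)})

if __name__ == "__main__":
    app.run(debug=True)  # e.g., POST {"prompt": "summarize the city council meeting"} to /draft
```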


AI-Enabled Vision-Based Control of Multirotor Drones

Faculty Mentor: Dr. Hamid Jafarnejad Sani

Project Description: This project aims to develop and implement vision-based control algorithms for two Voxl development quadrotors in the SAS Lab (https://saslabstevens.github.io/) to enable them to navigate and coordinate their motion autonomously. The drones will use onboard cameras to localize, detect and track objects, and avoid collisions with their surroundings. Programs and algorithms such as ML-based object detection and tracking, perceptual control and collision avoidance, and waypoint tracking for autonomous operation will be developed and implemented on the drone hardware. A drone flight simulation environment will be established to test the algorithms before deployment on the actual hardware. The drones are equipped with a customized flight controller and computing board. Some parts of the drones, such as the frame, are designed and 3D printed in the SAS Lab. Flight tests will be conducted in the SAS Lab, which includes an enclosed flight area (drone cage) equipped with motion capture cameras.
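
As one small, self-contained piece of such a stack, the sketch below implements a proportional velocity controller for waypoint tracking. The gain, speed limit, waypoint, and 10 Hz toy simulation loop are illustrative assumptions; the real controllers would run under ROS on the Voxl flight hardware.

```python
# Illustrative waypoint-tracking sketch: a proportional controller that maps
# position error to a commanded velocity, with a speed limit. Gains, limits,
# and the integration loop are toy assumptions, not the project's controller.
import numpy as np

def velocity_command(position, waypoint, kp=0.8, v_max=1.0):
    error = np.asarray(waypoint) - np.asarray(position)
    cmd = kp * error                     # proportional response to position error
    speed = np.linalg.norm(cmd)
    if speed > v_max:
        cmd = cmd * (v_max / speed)      # saturate the commanded speed
    return cmd

# Toy simulation: integrate the commanded velocity at 10 Hz.
pos, waypoint, dt = np.array([0.0, 0.0, 1.0]), np.array([2.0, 1.0, 1.5]), 0.1
for _ in range(100):
    pos = pos + velocity_command(pos, waypoint) * dt
print(np.round(pos, 2))  # converges toward the waypoint
```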

Desired Prerequisites: Teamwork, Python programming, ROS, Linux OS


Power to the People: Using Machine Learning to Predict Resilience During Power Outages 

Faculty Mentor: Dr. Philip Odonkor 

Project Description: From electric cars to induction cooktops, Americans are embracing electricity as the more sustainable alternative to traditional fossil fuels. But this new era comes with a high-stakes tradeoff. By relying heavily on electricity, homes are sacrificing their resiliency and ability to withstand unexpected power disruptions. The harsh winter blizzard that ravaged Buffalo, NY, last year was a devastating reminder of this fact, as homes dependent solely on electricity were exposed to sub-zero temperatures without power or heat. Emergency responders were slow to help because they had no means of prioritizing the most vulnerable homes. We need your help to fix this. 

Your mission, should you choose to accept it, is to develop cutting-edge machine learning models capable of accurately predicting the types of appliances installed in homes and the fuel sources powering them.

How will we do it? By tapping into the power of prediction and classification, using historical electricity use data from over 20,000 homes. We'll be pushing the limits of what's possible, predicting everything from the presence of electric cars and solar panels in homes to induction cooktops and electric heating. Our goal is to build a dynamic and scalable ML model that can evaluate the resilience of US households to power outages, providing a roadmap for prioritizing aid efforts during climate disasters. Currently, the only way we can comprehensively answer all of these questions is through old-school surveys. With your help, we think we can do better.

So, join us on this electrifying project as we harness the power of machine learning to shape the future of disaster preparedness and response. 
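
For a feel of the core task, here is a minimal sketch of the classification step: predicting whether a home has electric heating from simple summary features of its load profile. The synthetic data, feature choices, and model are illustrative assumptions; the project itself will work with the real data from over 20,000 homes.

```python
# Illustrative sketch of the core classification task: predict an appliance /
# fuel attribute (here, "has electric heating") from summary features of a
# home's load profile. The data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_homes = 2000
winter_peak = rng.gamma(shape=3.0, scale=2.0, size=n_homes)      # kW
overnight_base = rng.gamma(shape=2.0, scale=0.5, size=n_homes)   # kW
has_electric_heat = (winter_peak + rng.normal(scale=1.0, size=n_homes) > 7.0).astype(int)

# Features: peak demand, base load, and their ratio.
X = np.column_stack([winter_peak, overnight_base, winter_peak / (overnight_base + 1e-3)])
X_train, X_test, y_train, y_test = train_test_split(
    X, has_electric_heat, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```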

Desired Prerequisites: Introductory knowledge of supervised and unsupervised ML, proficiency in Python programming, and basic data analysis and visualization skills


CDC Flu+COVID-19 Forecasting

Faculty Mentor: Dr. Nikhil Muralidhar

Project Description: The CDC conducts an annual competition for forecasting seasonal epidemics (e.g., influenza) and, more recently, pandemics (COVID-19). The competition, generally known as FluSight (https://www.cdc.gov/flu/weekly/flusight/index.html), involves developing data mining solutions to predict the weekly evolution of the current flu/COVID-19 season.

This project will involve developing novel machine learning solutions for flu/COVID-19 forecasting using spatiotemporal modeling strategies. Specifically, it will employ deep learning techniques such as Graph Neural Networks, transformers, and other time series modeling strategies to predict the evolution of the flu and COVID-19 season by learning from historical influenza and COVID-19 data.
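
As a stripped-down sketch of a spatiotemporal forecaster (not the project's final architecture), the model below mixes weekly counts across neighboring regions with a normalized adjacency matrix and then runs a GRU over the weekly sequence to predict the next week. The region count, adjacency, and layer sizes are illustrative assumptions.

```python
# Stripped-down spatiotemporal sketch: a fixed graph-smoothing step over
# regions followed by a GRU over weeks, predicting next week's case counts.
import torch
import torch.nn as nn

class SpatioTemporalForecaster(nn.Module):
    def __init__(self, adjacency: torch.Tensor, hidden: int = 32):
        super().__init__()
        # Row-normalized adjacency (with self-loops) for neighbor averaging.
        A = adjacency + torch.eye(adjacency.shape[0])
        self.register_buffer("A_norm", A / A.sum(dim=1, keepdim=True))
        self.gru = nn.GRU(input_size=adjacency.shape[0], hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, adjacency.shape[0])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch x weeks x regions
        x = x @ self.A_norm.T            # spatial smoothing across neighboring regions
        out, _ = self.gru(x)             # temporal modeling over weeks
        return self.head(out[:, -1])     # next-week prediction for every region

# Toy usage: 10 regions, 20 weeks of history.
adj = (torch.rand(10, 10) > 0.7).float()
model = SpatioTemporalForecaster(adj)
pred = model(torch.rand(4, 20, 10))
print(pred.shape)  # torch.Size([4, 10])
```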

Desired Prerequisites: A background in Python, machine learning, and deep learning is a prerequisite. Knowledge of time series forecasting is a plus.


Human-Robot Interaction

Faculty Mentor: Dr. Yi Guo

Project Description: The goal of the project is to study how social robots interact with humans to motivate older adults to exercise more. A humanoid robot can be programmed to imitate human motions and can also initiate verbal or non-verbal communication with humans. Machine learning methods such as learning from demonstration may be applied.

Desired Prerequisites: Python programming in the Robot Operating System (ROS) is expected. Experience with RGB-D sensors such as the Microsoft Kinect is preferred.


Hierarchical Representations for Deep Generative Models

Faculty Mentor: Dr. Tian Han

Project Description: In the era of big data, supervised learning becomes ineffective and potentially impractical as the data annotation requires domain expertise and can be costly. Generative modeling, as an unsupervised learning approach, has gained popularity in recent years and served as a unified framework for probabilistic reasoning and knowledge understanding. Using modern deep neural networks, generative models can be directly learned from big training data (without any annotations) and form a compact data representation. Such deep generative models have made promising progress in learning complex data distributions and achieved great success in image, video, and text synthesis. However, learning informative representations from such models that can facilitate AI-assisted reasoning and decision-making remains a challenge. In this project, we aim to explore the hierarchical inductive bias on the generative models to learn representations with multiple levels of abstraction. Such hierarchical representations have great potential for image and language understanding as well as robust data prediction.
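
As a compact sketch of one way to impose a hierarchical inductive bias (a two-level VAE, not the project's specific model), the code below uses a top-level latent that conditions a lower-level latent, which in turn generates the data. The dimensions, Gaussian assumptions, and reconstruction loss are illustrative.

```python
# Compact sketch of a two-level hierarchical VAE: a top latent z2 captures
# coarse structure, a lower latent z1 (conditioned on z2) captures detail,
# and x is decoded from z1. Sizes and priors are illustrative assumptions.
import torch
import torch.nn as nn

def reparam(mu, logvar):
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def gauss_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions.
    return 0.5 * ((logvar_p - logvar_q)
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                  - 1.0).sum(dim=1)

class HierarchicalVAE(nn.Module):
    def __init__(self, x_dim=784, z1_dim=32, z2_dim=8, h=256):
        super().__init__()
        self.enc2 = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z2_dim))
        self.enc1 = nn.Sequential(nn.Linear(x_dim + z2_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))
        self.prior1 = nn.Sequential(nn.Linear(z2_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))
        self.dec = nn.Sequential(nn.Linear(z1_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

    def forward(self, x):
        mu2, logvar2 = self.enc2(x).chunk(2, dim=1)                        # q(z2 | x)
        z2 = reparam(mu2, logvar2)
        mu1, logvar1 = self.enc1(torch.cat([x, z2], 1)).chunk(2, dim=1)    # q(z1 | x, z2)
        z1 = reparam(mu1, logvar1)
        mu1p, logvar1p = self.prior1(z2).chunk(2, dim=1)                   # p(z1 | z2)
        recon = self.dec(z1)                                               # p(x | z1)
        kl2 = gauss_kl(mu2, logvar2, torch.zeros_like(mu2), torch.zeros_like(logvar2))
        kl1 = gauss_kl(mu1, logvar1, mu1p, logvar1p)
        rec = ((recon - x) ** 2).sum(dim=1)                                # reconstruction surrogate
        return (rec + kl1 + kl2).mean()                                    # negative ELBO (up to constants)

# Toy usage
loss = HierarchicalVAE()(torch.rand(16, 784))
loss.backward()
print(float(loss))
```

The point of the hierarchy is that z2 is forced toward coarse, abstract factors while z1 absorbs finer detail, giving representations at multiple levels of abstraction.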

Desired Prerequisites: A solid understanding of probability and statistics (having successfully completed CS 583 Deep Learning is preferred), familiarity with deep learning packages (e.g., PyTorch, TensorFlow), and a basic understanding of computer vision (e.g., image/video representation, loading and processing images with PyTorch) are required. Basic knowledge of natural language processing (NLP) is a plus.


Object Detection and Image Segmentation for Optical Coherence Tomography 

Faculty Mentor: Dr. Yu Gan 

Project Description: Optical coherence tomography (OCT) has become increasingly essential in assisting the treatment of coronary artery disease. However, unidentified calcified regions within a narrowed artery can impair the outcome of the treatment. Fast and objective identification of the bounding area is paramount to automatically procuring accurate readings on calcifications within the artery. We aim to rapidly identify calcification in coronary OCT images using bounding boxes and to reduce the prediction bias in automated prediction models. In this project, we will adopt a deep learning-based object detection model to rapidly draw the calcified region from coronary OCT images using a bounding box. We will measure the uncertainty of predictions based on the expected calibration error, thus assessing the certainty level of detection results. In addition, we will compare the performance with a segmentation algorithm, in which the detailed boundary of the region, rather than a bounding box, is delineated. Finally, we will explore the possibility of applying object detection and image segmentation algorithms to other tissues and materials in optical coherence tomography imaging.
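
As a concrete illustration of the calibration measurement mentioned above, the snippet below computes expected calibration error (ECE) by binning predicted confidences and comparing each bin's average confidence to its accuracy. The bin count and the synthetic predictions are assumptions for illustration only.

```python
# Illustrative expected calibration error (ECE): bin predictions by confidence
# and average the |accuracy - confidence| gap, weighted by bin size.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap      # weight each bin by its share of samples
    return ece

# Toy example: an overconfident detector (true accuracy lags stated confidence).
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.uniform(size=1000) < (conf - 0.1)
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```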

Desired Prerequisites: Python, Matlab


Deep Representation Learning for Personalized and Interpretable Health Prediction 

Faculty Mentor: Dr. Yue Ning 

Project Description: In this project, we aim to design a personalized and interpretable health prediction system using deep learning and multimodal clinical data. Predicting individual health events will assist clinicians and healthcare providers in making preventive and treatment plans. The project has two goals: learning effective representations of medical concepts (e.g., diseases, procedures, medications) and predicting future health risks based on patients' historical health records. It also faces several challenges: 1) multiple modalities of clinical data (e.g., time series, text, tabular data) present complementary or supplementary information, and how to utilize limited and noisy multimodal data is an open research challenge; 2) different individuals may have records of different lengths, and the data distribution follows long-tail patterns that create sparsity and imbalanced-data issues; 3) data-driven approaches may introduce bias, which makes machine learning results less robust in healthcare predictions. This project focuses on designing a temporal learning model that utilizes historical multimodal data to make personalized predictions of future health risks (e.g., heart failure). We will use public electronic health records (EHR) to evaluate the proposed framework.
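
As a bare-bones sketch of the temporal modeling component (one modality only, with no interpretability machinery), the model below embeds each visit's medical codes, averages them into a visit vector, and runs a GRU over the visit sequence to predict a future risk. The vocabulary size, padding scheme, and dimensions are illustrative assumptions.

```python
# Bare-bones sketch of temporal EHR risk prediction: embed each visit's codes,
# mean-pool them into a visit vector, run a GRU over visits, and predict a
# binary risk (e.g., heart failure). Sizes and padding index 0 are illustrative.
import torch
import torch.nn as nn

class VisitRiskModel(nn.Module):
    def __init__(self, n_codes: int = 5000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_codes, dim, padding_idx=0)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, codes: torch.Tensor) -> torch.Tensor:
        # codes: batch x visits x codes_per_visit (0 = padding)
        visit_vec = self.embed(codes).mean(dim=2)     # batch x visits x dim
        out, _ = self.gru(visit_vec)                  # temporal model over visits
        return torch.sigmoid(self.head(out[:, -1]))   # risk after the last visit

# Toy usage: 4 patients, 6 visits, up to 10 codes per visit.
codes = torch.randint(0, 5000, (4, 6, 10))
print(VisitRiskModel()(codes).shape)  # torch.Size([4, 1])
```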

Desired Prerequisites: Python programming, some machine learning and deep learning knowledge, data processing techniques


Topic Modeling and TikTok/Instagram in Leadership Research

Faculty Mentor: Dr. Sibel Ozgen 

Project Description: This project may involve topic modeling of leadership-related documents and analysis of Instagram/TikTok content.
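
As a minimal sketch of the topic-modeling piece (the social media content analysis is separate), the snippet below fits LDA to a handful of toy documents with scikit-learn and prints the top words per topic. The corpus, topic count, and preprocessing are illustrative assumptions.

```python
# Minimal LDA sketch: vectorize a toy corpus and print the top words per topic.
# The documents and number of topics are placeholders; the project would apply
# this to leadership-related documents and social media content.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "leaders inspire teams with a clear vision",
    "transformational leadership motivates employees",
    "the coach posted a training video on social media",
    "short videos drive engagement on social platforms",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]   # five highest-weight words
    print(f"topic {k}: {', '.join(top)}")
```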

Desired Prerequisites: Skills in NLP, text or image analysis, content analysis