AI Research Summer Fellowship Program
Stevens Institute for Artificial Intelligence (SIAI) is organizing the second AI Research in Summer (AIRS) Fellowship Program. The AIRS Fellowship Program empowers Stevens undergraduates and master's students to embark on an exciting research journey on a topic proposed by faculty interested in mentoring them on AI-related research. If you are interested in taking flight with SIAI this summer, please fill out the application materials below.
Applications will be accepted on a rolling basis but no later than March 31, 2023.
2023 Artificial Intelligence Research in Summer (AIRS) Fellowship Program
Eleven projects have been selected for this summer, mentored by faculty from schools across Stevens. Below is information about the selected projects and the faculty members who will be leading them.
Improving Epilepsy Diagnosis Accuracy with Advanced Deep Learning and Brain Computer Interface Technology
Faculty Mentor: Dr. Feng Liu
Project Description: To improve the accuracy and interpretability of detecting epileptogenic zones (EZ) from measured EEG or MEG, we propose to develop a Model-Based Deep Learning (MBDL) framework based on Unrolled Optimization Neural Networks (UONN) to solve the Epilepsy Source Imaging (ESI) problem. The UONN framework enjoys both the efficient and accurate reconstruction of brain sources brought by the deep learning paradigm and the interpretability and principled parameter tuning brought by the optimization framework. We aim to synergistically integrate UONN reconstruction based on the EEG or MEG and validate its effectiveness against traditional methods and end-to-end deep learning frameworks. In addition, we aim to develop an EEG/MEG multimodal UONN framework for ESI to improve epileptogenic zone detection accuracy.
Desired Prerequisites: Python, data analysis, statistics
Designing Person-Centric XAI Health Coaching Systems Towards Empowering Athlete Engagement
Faculty Mentor: Sang Won Bae
Project Description: Smart health tracking systems have been deployed in sports to exploit athletes' metrics and optimize coaching; however, full automation and generic or irrelevant interventions have led to disengagement and distrust. To fill these gaps, we will explore current health coaching strategies, gathering the perceived benefits and challenges of off-the-shelf wearables, and propose a person-centric health coaching dashboard design coupled with explainable AI (XAI) to enable athletes to engage in health monitoring and coaching strategies. We plan to interview collegiate athletes and six coaches at Stevens to gain insight into their experiences with wearable technologies, and to develop a person-centric XAI health coaching framework that supports personalizing goals, understanding preferences, exploring algorithmic decisions, and integrating actionable behaviors. In our evaluations, we will assess model performance, participants' satisfaction with algorithm-generated explanations coupled with actionable suggestions, and the perceived usefulness of the PXAI coaching framework. We will further highlight lessons learned regarding feasibility and adaptability.
Desired Prerequisites: Python programming
Generative AI and the Future of News Work
Faculty Mentor: Dr. Jeff Nickerson
Project Description: As part of a larger project in collaboration with Columbia University and Syracuse University, we are engaged in the process of designing and building tools based on large language models and image models (GPT, Bard, DALL-E, Midjourney, Stable Diffusion) to aid journalists. We are building tools, observing journalists, and iterating in the hopes of better understanding the potential impacts of AI on journalism.
Desired Prerequisites: Python is a prerequisite. We will develop initially in Python in Jupyter or Colab notebooks and use either Plotly Dash or Flask to create web apps. Knowing all of these technologies is not necessary, but an interest in learning and building tools is. Understanding some foundation text or image models would be helpful, as would knowledge of journalistic writing, art direction, graphic design, or video production. Students from all four schools are encouraged to apply.
AI-Enabled Vision-Based Control of Multirotor Drones
Faculty Mentor: Dr. Hamid Jafarnejad Sani
Project Description: This project aims to develop and implement vision-based control algorithms for two Voxl development quadrotors in the SAS Lab (https://saslabstevens.github.io/) to enable them to navigate and coordinate their motion autonomously. The drones will use onboard cameras to localize, detect and track objects, and avoid collision with the surroundings. Programs and algorithms such as ML-based object detection and tracking, perceptual control and collision avoidance, and waypoint tracking for autonomous operation will be developed and implemented on the drone hardware. A drone flight simulation environment will be established to test the algorithms before deployment on actual hardware. The drones are equipped with a customized flight controller and computing board. Some parts of the drones, such as the drone frame, are designed and 3D printed in the SAS lab. Flight tests will be conducted in the SAS lab, which includes an enclosed flight area (drone cage) equipped with motion capture cameras.
Desired Prerequisites: Teamwork, Python programming, ROS, Linux OS
Power to the People: Using Machine Learning to Predict Resilience During Power Outages
Faculty Mentor: Dr. Philip Odonkor
Project Description: From electric cars to induction cooktops, Americans are embracing electricity as the more sustainable alternative to traditional fossil fuels. But this new era comes with a high-stakes tradeoff. By relying heavily on electricity, homes are sacrificing their resiliency and ability to withstand unexpected power disruptions. The harsh winter blizzard that ravaged Buffalo, NY, last year was a devastating reminder of this fact, as homes dependent solely on electricity were exposed to sub-zero temperatures without power or heat. Emergency responders were slow to help because they had no means of prioritizing the most vulnerable homes. We need your help to fix this.
Your mission, if you choose to accept it, is to develop cutting-edge machine learning models capable of accurately predicting the types of appliances installed in homes and the fuel sources powering them.
How will we do it? By tapping into the power of prediction and classification, using historical electricity use data from over 20,000 homes. We'll be pushing the limits of what's possible, predicting everything from the presence of electric cars and solar panels in homes, to induction cooktops and electric heating. Our goal is to build a dynamic and scalable ML model that can evaluate the resilience of US households to power outages, providing a roadmap for prioritizing aid efforts during climate disasters. Currently, the only way we can comprehensively answer all of these questions is through old-school surveys. With your help, we think we can do better.
So, join us on this electrifying project as we harness the power of machine learning to shape the future of disaster preparedness and response.
Desired Prerequisites: Introductory knowledge of supervised and unsupervised ML, proficiency in Python programming, basic data analysis and visualization skills
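To give a flavor of the prediction-and-classification approach described above, here is a minimal sketch, not the project's actual pipeline, that trains a classifier to flag a hypothetical home attribute from synthetic load features. The features, labels, and data are invented for illustration, and scikit-learn is assumed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-home features: mean load, peak load, overnight load (synthetic).
X = rng.normal(size=(200, 3))

# Toy label: "has electric heating", correlated with overnight load (column 2).
y = (X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In the real project the features would come from historical electricity use data and the labels from survey ground truth; the sketch only shows the supervised-classification framing.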
CDC Flu+COVID-19 Forecasting
Faculty Mentor: Dr. Nikhil Muralidhar
Project Description: The CDC conducts an annual competition for forecasting seasonal epidemics (e.g., influenza) and, more recently, pandemics (COVID-19). This competition, generally known as FluSight (https://www.cdc.gov/flu/weekly/flusight/index.html), involves developing data mining solutions to predict the weekly evolution of the current flu / COVID-19 season.
This project will involve developing novel machine learning solutions for flu / COVID-19 forecasting using spatiotemporal modeling strategies. Specifically, this project will employ deep learning techniques such as Graph Neural Networks, transformers, and other time series modeling strategies to predict the evolution of the flu and COVID-19 season by learning from historical influenza and COVID-19 data.
Desired Prerequisites: Background in Python, machine learning, and deep learning is a prerequisite. Knowledge of time series forecasting is a plus.
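The project targets Graph Neural Networks and transformers; as a far simpler point of reference for the time-series-forecasting framing, a classical autoregressive baseline on weekly case counts can be sketched as follows (a toy illustration assuming only NumPy, not the deep models the project will build).

```python
import numpy as np

def fit_ar(series, p=3):
    """Least-squares fit of an order-p autoregression y_t = sum_i a_i * y_{t-i}."""
    # Each row holds the p most recent lags for one target value.
    X = np.column_stack(
        [series[p - i - 1 : len(series) - i - 1] for i in range(p)]
    )
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # coef[i] multiplies lag i+1

def forecast(series, coef, steps=4):
    """Roll the fitted model forward, feeding predictions back in as history."""
    hist = list(series)
    p = len(coef)
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist[-1 : -p - 1 : -1]))
        hist.append(nxt)
        out.append(nxt)
    return out

# Illustrative weekly counts (synthetic); forecast the next two weeks.
weekly = np.array([5.0] * 12)
print(forecast(weekly, fit_ar(weekly, p=3), steps=2))
```

A spatiotemporal model would additionally couple such per-region series through a graph of regions, which is where Graph Neural Networks come in.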
Human-Robot Interaction
Faculty Mentor: Dr. Yi Guo
Project Description: The goal of the project is to study how social robots interact with humans to motivate older adults to exercise more. A humanoid robot can be programmed to imitate human motions and can also initiate verbal or non-verbal communication with humans. Machine learning methods such as learning from demonstration may be applied.
Desired Prerequisites: Python programming in the Robot Operating System (ROS) is expected. Experience with RGB-D sensors such as the Microsoft Kinect is preferred.
Hierarchical Representations for Deep Generative Models
Faculty Mentor: Dr. Tian Han
Project Description: In the era of big data, supervised learning becomes ineffective and potentially impractical as the data annotation requires domain expertise and can be costly. Generative modeling, as an unsupervised learning approach, has gained popularity in recent years and served as a unified framework for probabilistic reasoning and knowledge understanding. Using modern deep neural networks, generative models can be directly learned from big training data (without any annotations) and form a compact data representation. Such deep generative models have made promising progress in learning complex data distributions and achieved great success in image, video, and text synthesis. However, learning informative representations from such models that can facilitate AI-assisted reasoning and decision-making remains a challenge. In this project, we aim to explore the hierarchical inductive bias on the generative models to learn representations with multiple levels of abstraction. Such hierarchical representations have great potential for image and language understanding as well as robust data prediction.
Desired Prerequisites: A solid understanding of probability and statistics (successful completion of CS 583 Deep Learning is preferred), familiarity with deep learning packages (e.g., PyTorch, TensorFlow), and a basic understanding of computer vision (e.g., image/video representation, loading and processing images using PyTorch). Basic knowledge of natural language processing (NLP) is a plus.
Object Detection and Image Segmentation for Optical Coherence Tomography
Faculty Mentor: Dr. Yu Gan
Project Description: Optical coherence tomography (OCT) has become increasingly essential in assisting the treatment of coronary artery disease. However, unidentified calcified regions within a narrowed artery can impair the outcome of the treatment. Fast and objective identification of the bounding area is paramount to automatically procuring accurate readings on calcifications within the artery. We aim to rapidly identify calcification in coronary OCT images using bounding boxes and to reduce the prediction bias in automated prediction models. In this project, we will adopt a deep learning-based object detection model to rapidly locate the calcified region in coronary OCT images using bounding boxes. We will measure the uncertainty of predictions based on expected calibration errors, thereby assessing the certainty level of detection results. In addition, we will compare the performance with a segmentation algorithm, in which the detailed boundary of the region, rather than a bounding box, is delineated. Finally, we will explore the possibility of applying object detection and image segmentation algorithms to other tissues and materials in optical coherence tomography imaging.
Desired Prerequisites: Python, MATLAB
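The description mentions measuring prediction uncertainty via expected calibration error (ECE). A minimal sketch of the standard binned ECE computation follows, assuming NumPy; the bin count and the example confidences are illustrative, not from the project.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence; ECE is the weighted average of
    |accuracy - mean confidence| over the bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue
        acc = correct[mask].mean()        # fraction right in this bin
        avg_conf = confidences[mask].mean()
        ece += (mask.sum() / n) * abs(acc - avg_conf)
    return ece

# Illustrative: two overconfident predictions and two underconfident ones.
conf_scores = np.array([0.95, 0.95, 0.6, 0.6])
is_correct = np.array([1, 0, 1, 1])
print(expected_calibration_error(conf_scores, is_correct))
```

A well-calibrated detector would score near zero; a large ECE signals that the model's confidence scores cannot be trusted at face value, which is why the project uses it to assess detection results.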
Deep Representation Learning for Personalized and Interpretable Health Prediction
Faculty Mentor: Dr. Yue Ning
Project Description: In this project, we aim to design a personalized and interpretable health prediction system using deep learning and multimodal clinical data. Predicting individual health events will help clinicians and healthcare providers make preventive and treatment plans. There are two goals in this project: learning effective representations of medical concepts (e.g., diseases, procedures, medications) and predicting future health risks based on patients' historical health records. The project poses several challenges: 1) Multiple modalities of clinical data (e.g., time series, text, tabular data) present complementary or supplementary information, and how to utilize limited and noisy multimodal data is an open research challenge. 2) Different individuals may have records of different lengths, and the distribution of data follows long-tail patterns that create sparsity and data-imbalance issues. 3) Data-driven approaches may introduce bias, which makes machine learning results less robust for healthcare predictions. This project focuses on designing a temporal learning model that utilizes historical multimodal data to make personalized predictions of future health risks (e.g., heart failure). We will use public electronic health records (EHR) to evaluate the proposed framework.
Desired Prerequisites: Python programming, some machine learning and deep learning knowledge, data processing techniques
Topic Modeling and TikTok/Instagram in Leadership Research
Faculty Mentor: Dr. Sibel Ozgen
Project Description: This project may involve topic modeling of leadership-related documents and analysis of Instagram/TikTok content.
Desired Prerequisites: Skills in NLP, text or image analysis, content analysis
Past AIRS Fellowship Programs
Humans and AI: An Evolving Partnership
The theme for the 2021 AIRS program was "Humans and AI: An Evolving Partnership". AI is already having an impact across myriad aspects of society, from mobility to healthcare, from education to finance, and more. In many of these scenarios, we find humans at the center: humans getting from place to place, humans striving toward wellness, humans teaching, humans learning, etc. As a scholarly community, we are now beginning to ask questions like, “How should AI augment human capabilities?” rather than, “When will AI replace humans?” This relationship between humans and AI is evolving. Selected projects in the 2021 AIRS program targeted questions under this theme to better understand and enhance the relationship between humans and AI. The AIRS Fellowship Program solicits independent undergraduate research project proposals from Stevens undergraduates. Due to recent changes in the institute, the AIRS Fellowship Program will not be offered for Summer 2022. We look forward to launching more opportunities through SIAI during the 2022-2023 academic year, so please watch your email and the website for updates.
2021 AIRS Fellows and Projects
MODELING HOMELESS CHARACTERISTICS TO SUPPORT INTERVENTIONS
Jared Donnelly (Computer Science, SES, 1st Year) and Jolene Ciccarone (Software Engineering, SSE, 1st Year)
Mentor: Professor Samantha Kleinberg
Abstract: Homelessness is a widespread issue that has a ripple effect across communities, affecting everyone from the victims of homelessness to the communities struggling to help them.
The lifespan of a homeless person is 30 years below the average, and the mean lifespan of someone living on the streets is only 11 years. With more homeless people in NYC than the entire population of Hoboken, it is clear that there is a homelessness problem. As a result, nonprofits like Built for Zero and Project Renewal have taken it upon themselves to help homeless populations through healthcare, housing, and jobs. Since their efforts have had significant positive impacts on homelessness, especially veteran, youth, and chronic homelessness, we would like to build upon their work. We aim to approach homelessness from a preventative rather than a reactive perspective, which is highly beneficial in the long run, especially since responses to pressing issues tend to have a history of long delays and poor implementation. Our goal is to utilize AI to provide future predictions of homeless populations and their characteristics for nonprofits, government agencies, and policymakers so that they can make informed decisions and preparations.
IMPACT OF AI COMPANION ON NURSING-HOME RESIDENTS
Sakina Rizvi (Business and Technology, BUS, 3rd Year)
Mentor: Dr. D. N. Lombardi (Stevens Healthcare Educational Partnership)
Abstract: The goal of my research is to gain a comprehensive understanding of the emerging role of AI in healthcare, with a specific focus on identifying programs, opportunities and positive impacts on aging facility residents derived from individual interaction with a robot. Since the objective for this program is to better understand and enhance the relationship between humans and AI, I plan to specifically research AI in nursing homes contending with the phenomenon of aging isolation, with an intent of ascertaining if structured robot interaction can be a solution strategy.
My research will include the exploration of possible challenges and ethical concerns in order to maximize the benefits associated with AI technologies. A field survey of existing AI applications in the aging services sector, as well as the potential for improving the safety, quality, and efficiency of healthcare through robotics will also be explored.
While social isolation has been considered a dangerous health risk for older Americans, COVID-19 intensified the dynamic in the past year. By the end of my research project, I will have successfully analyzed how social robotic interaction can represent a potential solution to the social isolation of aging facility residents.
NATURAL LANGUAGE PROCESSING AI IN CUSTOMER FEEDBACK ANALYSIS
Pawan Perera (Electrical Engineering, SES, 2nd Year)
Mentor: Jia Xu
Abstract: Natural language processing (NLP) is a subfield of AI concerned with recognizing and processing human language data. A specialized branch of NLP, considered an AI-hard problem, is natural language understanding (NLU), which expands upon NLP by inferring, summarizing, and comprehending human language data. As of yet, there have been few practical applications of NLU, but its potential capabilities are of great significance. This study researches the development of an NLU system to allow customers and manufacturers to quickly extract relevant key insights from large amounts of customer feedback and reviews on products sold online. Prior research on NLU illustrates several common components that can be sourced from those studies. However, semantic theory will be the main hurdle of this study, since it is necessary to establish the method by which the system interprets the data, judges relevance, and comprehends overall meaning. In turn, the research will aim to tailor the semantic theory to meet our needs. This research on NLU development for analyzing customer feedback has profound implications as one of the first practical applications of this AI-hard problem, which can pave the way for future use of this revolutionary AI development.
ALPHAAI: NOT ALONE INVESTING
Ryan Finegan (Business and Technology, BUS, 3rd Year)
Mentor: Dragos Bozdog
Abstract: AlphaAI is a project targeted toward inexperienced investors in this new age of commission-free trading, with the purpose of decreasing wealth disparities by providing analytics and suggestions using artificial intelligence. Recently, there has been an influx of new market participants as retail investors flood into trading platforms that are more user-friendly and affordable. This progressive development has been met with obstacles, such as misguided advice delivered to oblivious market entrants regarding volatile or risky securities. This project seeks to apply machine learning models to aid investors' short- and long-term investment decisions through time series forecasting, security sentiment analysis, and equity screening and recommendations. The software pursues analysis through techniques such as clustering, ensemble methods, recurrent neural nets, and natural language processing to offer individual investors tools to increase investment returns and alpha. Investor intuition and AlphaAI's diverse set of models work cooperatively toward the goal of steadily increasing one's wealth through consistent, informed investment decisions. AlphaAI is meant to be a resource that accompanies green investors as they navigate the ever-changing and challenging equity markets.
SKINCARE AND AI: HOW AI TECHNOLOGY CAN DECOMMERCIALIZE LUXURY SKINCARE SERVICES
Serena Lee (Software Engineering, SSE, 2nd Year)
Mentor: Mukund Iyengar
Abstract: Millions of Americans struggle to afford health insurance, and it is difficult for them to find a doctor suited to their needs. Many of these people are teenagers suffering from acne-related conditions who cannot see a medical professional. This project aims to create a program that allows teens to “self-diagnose” their acne-related problems without the cost of seeing a dermatologist while receiving comparable results. By using convolutional neural networks (CNNs), we can create a system that finds similarities across data sets and applies them to real-life situations. Teenagers around the country can figure out their skin conditions without spending a penny and from the comfort of their homes.
VISUALLY ENHANCED PODCASTS
Burak Yesil (Computer Science, SES, 1st Year)
Mentor: Jason Corso
Abstract: Podcasts are an efficient way for people to gain insight into other people's views and current events. However, as listeners may not be familiar with a specific person or subject being discussed, they may have a hard time following the conversation and be discouraged from listening to future podcasts on the subject. My AI model will address this issue by automatically displaying pictures and article links, generated by analyzing keywords from podcasts, to help listeners get a better understanding of what is being discussed. In my research, I will use Google AI's TensorFlow platform and Facebook's PyTorch platform to obtain the data sets needed to train my model in the Jupyter Notebook environment. Some specific datasets include, but aren't limited to, “CelebA” (a celebrity faces data set) and “VoxCeleb” (a large-scale audio-visual dataset of human speech). Through the use of backpropagation, I will lower my AI model's margin of error. One major application of my AI model will be to visually augment podcasts, allowing listeners to be more engaged and knowledgeable, enhancing their overall experience. This model could be used by platforms such as Spotify.