
AI Research Summer Fellowship Program

Empowering the next generation of innovators to apply AI for the greater good

Now in its fourth year, the AI Research in Summer (AIRS) Fellowship at the Stevens Institute for Artificial Intelligence (SIAI) gives rising undergraduate and master’s students the chance to spend their summer immersed in groundbreaking AI research. Designed for students completing their first year of study, the program runs from early June to early August and pairs fellows with expert faculty mentors to address pressing societal challenges through innovative applications—each based on research topics proposed by Stevens faculty.

Meet the Faculty Behind the Fellowship

From cybersecurity experts to pioneers in machine learning, AIRS Fellows are guided by faculty mentors from across all schools at Stevens. These researchers not only lead innovative AI projects but also shape the next generation of problem-solvers through hands-on collaboration.

2026 Summer Research Fellowship Projects

Fifteen projects were selected for the 2026 summer program, each mentored by faculty from across all schools at Stevens. Information about each of the selected projects can be found below:

Algorithm for Building Mathematical Models

Faculty Mentor and Affiliation: Pavel Dubovski, Department of Mathematical Sciences

Project Description: The ultimate goal of the project is to construct the Algorithm for Building Mathematical Models (ABMM). To approach this goal, the researchers will apply the following twofold method:

(1) find common ideas, principles, and features shared by existing math models from different areas of science, engineering, technology, medicine, and social life;

(2) organize these common essentials and derive the algorithms to build new math models.

The research will rely on the broad use of AI to reveal these principal points, combined with joint human–AI efforts to construct the Algorithm for building mathematical models. The Algorithm is expected to assist many researchers in mathematical modeling.


Languages in/of Automation

Faculty Mentor and Affiliation: Sandeep Mertia, School of Humanities, Arts and Social Sciences

Project Description: This project investigates how multilingual generative AI systems are built, evaluated, and imagined by bringing computer science research on Natural Language Processing (NLP) and Large Language Models (LLMs) into direct conversation with Science and Technology Studies (STS). It approaches multilingual AI as a techno-social and infrastructural problem: languages become computational objects through concrete NLP pipelines involving data collection, annotation, tokenization, modeling, and benchmarking, while also reflecting longer histories of linguistic standardization, labor, and global digital inequalities. The research examines how pre-LLM NLP architectures and evaluation regimes shaped the category of “low-resource languages,” and how these inherited assumptions continue to structure contemporary multilingual LLM design. At the same time, it analyzes current modeling strategies such as cross-lingual transfer, parameter sharing, scaling, and fine-tuning to assess their promises and limits for building more robust multilingual AI systems. By situating technical design choices within their sociotechnical contexts, the project produces insights that are valuable to AI researchers seeking more inclusive models, as well as to STS scholars analyzing how computational infrastructures encode epistemic and socio-cultural assumptions. The project contributes to critical AI research not only by expanding the understanding of model performance across languages, but also by clarifying how infrastructural and historical factors shape what multilingual AI systems can and cannot do.


Machine Learning Based Classification of Chicken Embryo Developmental Stages from Laser Speckle Contrast Imaging

Faculty Mentor and Affiliation: Simon Mahler, Department of Biomedical Engineering

Project Description: This project extends existing published work on the non-invasive classification of chick embryo developmental stages using laser speckle contrast imaging (LSCI) and machine learning [1]. The primary goal of the summer project is to expand the imaging dataset and refine the existing machine learning pipeline to improve classification accuracy across a larger number of developmental stages. The expected outcome is a more accurate (> 90% targeted accuracy) and robust method (any developmental stage between day 2 and day 6 of incubation) for automated embryo staging in developmental biology research. Unlike the widely used traditional Hamburger–Hamilton staging method [2], which relies on invasive manual visual inspection (after breaking the eggshell), LSCI will enable rapid and fully non-invasive staging based on blood flow imaging.

Beyond its biological significance, this project contributes to AI research by developing robust classification methods for complex biomedical imaging data under variability. Master's student Sudhanshu Kakkar (Stevens '27) in Computer Science has been working in my lab this spring to expand the imaging dataset in preparation for the machine learning phase of the project (gathering around 2,000 images). Sudhanshu has demonstrated strong technical ability and enthusiasm for advancing this work and would be an ideal candidate. This fellowship will allow Sudhanshu to apply advanced machine learning techniques to non-invasive biomedical imaging, with the broader goal of improving experimental reproducibility and advancing AI-driven tools for biological and healthcare research.

[1] Z. Dong, S. Mahler, et al., "Non-invasive laser speckle contrast imaging (LSCI) of extra-embryonic blood vessels in intact avian eggs at early developmental stages," Biomed. Opt. Express 15, 4605–4624 (2024).

[2] Hamburger V, Hamilton HL, "A series of normal stages in the development of the chick embryo," Dev Dyn. 195(4):231–272 (1992).


Vision-Guided AI-Enabled Mini Drone

Faculty Mentor and Affiliation: Hamid Jafarnejad Sani, Department of Mechanical Engineering

Project Description: This 10-week summer research project in the Safe Autonomous Systems Lab (SAS Lab) will engage students in the end-to-end development of a vision-guided, AI-enabled mini aerial drone for autonomous navigation in cluttered indoor environments. Students will design and implement a learning-based vision controller—leveraging state-of-the-art perception and reinforcement learning techniques—initially developed and validated in a high-fidelity simulation environment to ensure safety and rapid iteration. The project will emphasize sim-to-real transfer, controller robustness, and safety-aware decision making, culminating in indoor flight experiments within an obstacle-filled test setup instrumented for precise motion tracking. Participants will gain hands-on experience with drone platforms, computer vision, autonomy software stacks, and experimental validation, while contributing to ongoing SAS Lab research on safe, reliable, and resilient autonomous systems.


Artificial Intelligence and Portfolio Management

Faculty Mentor and Affiliation: Jingrui Li, School of Business

Project Description: This project develops a general artificial intelligence framework for robust portfolio allocation under realistic market frictions and changing economic conditions. The objective is to evaluate when advanced machine learning and reinforcement learning methods can deliver economically meaningful improvements over simple, well-diversified benchmarks such as the equally weighted portfolio. The framework integrates dynamic policy learning, regime-sensitive state representations, and friction-aware constraints—including transaction costs, turnover limits, and position bounds—to ensure practical relevance. By emphasizing out-of-sample performance, risk-adjusted returns, and robustness to non-stationarity, the project seeks to advance the design of adaptive, economically grounded AI systems for real-world asset management.
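As a minimal sketch of the benchmark described above, the snippet below backtests an equally weighted portfolio under a simple transaction-cost charge. The simulated returns, cost level, and rebalancing scheme are illustrative assumptions, not the project's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily returns for 5 assets over ~2 trading years (assumed i.i.d. Gaussian).
n_days, n_assets = 504, 5
returns = rng.normal(0.0004, 0.01, size=(n_days, n_assets))

def equal_weight_backtest(returns, cost_bps=5.0):
    """Rebalance to equal weights daily, charging a turnover cost in basis points."""
    n_days, n_assets = returns.shape
    target = np.full(n_assets, 1.0 / n_assets)
    weights = target.copy()
    net = np.empty(n_days)
    for t in range(n_days):
        gross = weights @ returns[t]
        # Weights drift with realized returns before the end-of-day rebalance.
        drifted = weights * (1.0 + returns[t])
        drifted /= drifted.sum()
        turnover = np.abs(target - drifted).sum()
        net[t] = gross - turnover * cost_bps / 1e4
        weights = target.copy()
    return net

net = equal_weight_backtest(returns)
sharpe = net.mean() / net.std() * np.sqrt(252)  # annualized Sharpe ratio
```

A learned allocation policy would have to beat this net-of-cost Sharpe ratio out of sample to count as an economically meaningful improvement.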


Interpretable Machine Learning for Predicting Disinfection Byproduct Formation in Drinking Water

Faculty Mentor and Affiliation: Tao Ye, Department of Civil, Environmental and Ocean Engineering

Project Description: This project will develop an artificial intelligence framework to predict the formation and toxicity of emerging contaminants in drinking water systems. Students will work with curated experimental datasets from laboratory and literature sources to build machine learning models that link water chemistry conditions (e.g., disinfectant dose, natural organic matter characteristics, halide levels) with the formation of regulated and unregulated disinfection byproducts (DBPs). The project will integrate feature engineering, model interpretability techniques (e.g., SHAP analysis), and uncertainty quantification to identify key chemical drivers and improve prediction reliability for previously unseen conditions. The student will gain hands-on experience in data preprocessing, model development (e.g., multitask learning, ensemble models), validation, and scientific visualization, contributing to AI-enabled tools for safer and more sustainable water treatment. This research aligns with Stevens’ strategic focus on artificial intelligence and addresses critical challenges in environmental sustainability and public health.
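To illustrate the interpretability step described above, here is a minimal sketch that fits a model to synthetic water-chemistry data and ranks feature importance by permutation. The linear ground truth, value ranges, and permutation importance (used here as a simple stand-in for the SHAP analysis the project mentions) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic water-chemistry dataset (assumed linear ground truth, for illustration only).
n = 500
dose   = rng.uniform(1, 10, n)    # disinfectant dose (mg/L)
nom    = rng.uniform(2, 15, n)    # natural organic matter (mg C/L)
halide = rng.uniform(0, 0.5, n)   # bromide level (mg/L)
X = np.column_stack([dose, nom, halide])
# DBP formation driven mostly by NOM, then dose, plus noise.
y = 1.5 * dose + 3.0 * nom + 8.0 * halide + rng.normal(0, 1.0, n)

# Ordinary least squares as a stand-in for the project's ML models.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)

def predict(X):
    return beta[0] + X @ beta[1:]

def permutation_importance(X, y, n_repeats=10):
    """Importance of a feature = average increase in MSE when its column is shuffled."""
    base_mse = np.mean((y - predict(X)) ** 2)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            imp[j] += np.mean((y - predict(Xp)) ** 2) - base_mse
    return imp / n_repeats

importance = permutation_importance(X, y)  # expect NOM (index 1) to dominate
```

The same ranking logic carries over to SHAP values or tree ensembles; only the model and attribution method change.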


Building a Fully Autonomous AI Researcher Using Open-Source LLMs

Faculty Mentor and Affiliation: Aron Lindberg, School of Business

Project Description: The objective of this project is to build a fully autonomous AI researcher that runs entirely on local infrastructure using open-source large language models (LLMs). This system will independently take a high-level research idea and execute the full research process: formulate research questions, design a study, identify and collect relevant public data, perform statistical or computational analysis, and generate a complete academic research paper draft. We will build the system using agent frameworks such as OpenClaw (or similar autonomous agent architectures), integrate tool use (Python execution, data scraping, API access), and deploy open-source LLMs such as Llama 3, Mistral, Qwen, or DeepSeek models running locally. Over the summer, students will design, implement, and test this autonomous research pipeline, evaluating how well it can produce rigorous, reproducible scientific work. The project combines AI engineering, data science, and research design, giving students hands-on experience building a cutting-edge autonomous system from the ground up.


Agentic AI for Code Quality

Faculty Mentor and Affiliation: Eman Alomar, Department of Systems Engineering

Project Description: This project moves beyond simple "copilots" to build Agentic AI Systems: autonomous agents capable of diagnosing code smells, identifying complex code clones, and performing high-level refactoring. Instead of just generating code, our goal is to create a "self-healing" (detect, diagnose, and fix) codebase. You will develop agents that can reason about software design, invoke tools to run tests, and validate their own changes.


Knowledge-Grounded LLM for Interpretable Semiology-Based Epileptogenic Zone Localization in Drug-Resistant Epilepsy Presurgical Evaluation

Faculty Mentor and Affiliation: Feng Liu, Department of Electrical and Computer Engineering

Project Description: Epilepsy affects approximately 3.4 million people in the United States, and up to one third of patients are drug-refractory, for whom surgical resection of the epileptogenic zone (EZ) remains the most effective treatment. In current clinical practice, localization and lateralization based on seizure semiology are highly subjective, relying heavily on individual physician experience, implicit knowledge, and non-standardized interpretive workflows, which leads to inter-rater variability and limits reproducibility and scalability across centers.

To address this critical gap, this work proposes to develop a knowledge-grounded large language model that transforms semiology analysis from an experience-driven process into a structured and standardized reasoning framework. By constructing a seizure semiology-specific knowledge graph and coupling it with a large-scale external clinical and literature-derived knowledge base, and by aggregating multi-center semiology descriptions linked to surgically validated EZ outcomes, the model will perform retrieval-augmented, clinically informed inference that explicitly captures the relationships among semiological signs, temporal evolution patterns, anatomical networks, and their localizing and lateralizing significance.

Beyond generating seizure onset zone predictions, the framework will produce transparent and interpretable reasoning pathways that systematically reduce subjectivity, improve cross-center consistency, enhance generalizability across heterogeneous patient populations, and ultimately elevate the clinical utility of semiology-based presurgical evaluation.


AI-Enabled Repurposing of FDA Approved Drugs Using Boltz-2 Protein-ligand Modeling for Rapid Identification of Novel Drugs Inhibiting Therapeutic Targets with no X-ray Structure

Faculty Mentor and Affiliation: Sunil Paliwal, Department of Chemistry and Chemical Biology

Project Description: Drug repurposing of clinically validated compounds offers a powerful strategy to reduce the cost, time, and risk associated with traditional drug discovery by identifying new therapeutic uses for existing clinical molecules.[1] Identifying existing clinical compounds that bind to alternative enzyme targets involved in driving disease provides the advantage of having compounds with clinically acceptable pharmacokinetic properties. However, drug repurposing is not applicable to disease targets that lack X-ray structures of protein–drug complexes. This proposal seeks to develop and validate an AI-driven drug repurposing platform leveraging Boltz-2,[2] a state-of-the-art deep learning model for generating three-dimensional structures not only of protein structures (as AlphaFold does) but also of protein–ligand complexes, for any protein amino acid sequence and drug molecule.

The Boltz-2 model is unique in its ability to predict binding affinity (Kd values) and to rank order compound binding with high correlation to experimental values. The model can systematically identify and rank order novel drug–target interactions among FDA-approved[3] and clinical-stage compounds. The use of Boltz-2 to identify potential lead compounds to novel protein targets from FDA approved drugs could accelerate the rate of drug discovery reducing screening from thousands to only hundreds of compounds. We will build a rapid and reliable Boltz-2–driven structural screening pipeline and apply it to identify high-confidence repurposing candidates across priority disease domains. Boltz-2 predictions will be integrated and validated using established bioactivity resources, including ChEMBL.[4]
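The rank-ordering step described above can be sketched as follows: sort hypothetical candidates by predicted Kd (lower Kd means tighter binding) and check rank agreement with experimental values via Spearman correlation. The compound names and affinity values are illustrative placeholders, not Boltz-2 output.

```python
import numpy as np

# Hypothetical predicted vs. experimental binding affinities (Kd, nM) for six
# candidate compounds; all names and values are invented for illustration.
compounds    = ["A", "B", "C", "D", "E", "F"]
predicted_kd = np.array([12.0, 250.0, 3.5, 90.0, 1.2, 40.0])
measured_kd  = np.array([20.0, 300.0, 5.0, 60.0, 2.0, 55.0])

def spearman(x, y):
    """Spearman rank correlation, computed as Pearson correlation on the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Lower Kd = tighter binding, so candidates are screened best-first.
ranking = [compounds[i] for i in np.argsort(predicted_kd)]
rho = spearman(predicted_kd, measured_kd)
```

In the proposed pipeline, a high rank correlation between predicted and ChEMBL-derived bioactivities is what would justify prioritizing the top-ranked compounds for experimental follow-up.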

By combining structure-aware AI predictions with cheminformatics, biological annotation, and disease-relevance scoring, this work aims to generate mechanistically interpretable and experimentally actionable drug repurposing hypotheses for high-impact therapeutic areas.

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC9945820/
[2] https://www.biorxiv.org/content/10.1101/2025.06.14.659707v1
[3] https://www.fda.gov/drugs/development-approval-process-drugs/drug-approvals-and-databases
[4] https://www.ebi.ac.uk/chembl/


Hallucinations in Large Vision Language Models

Faculty Mentor and Affiliation: Koduvayur Subbalakshmi, Department of Electrical and Computer Engineering

Project Description: While vision-language models (VLMs) demonstrate remarkable comprehension of multimodal text and image data, they suffer from object hallucination errors, such as identifying objects that are not present or describing incorrect relationships or positions of objects within an image description. We will explore methods to detect, understand, and rectify such hallucinations.
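A simplified sketch of how object hallucination is commonly quantified (in the spirit of CHAIR-style metrics): count the objects a generated caption mentions that are absent from the image's ground-truth annotations. The scene and object lists below are illustrative assumptions.

```python
# Ground-truth objects annotated for a hypothetical image.
ground_truth = {"dog", "frisbee", "grass"}

# Objects extracted from a (hypothetical) VLM-generated caption.
caption_objects = ["dog", "frisbee", "ball", "child"]

# A mentioned object not in the annotations counts as hallucinated.
hallucinated = [obj for obj in caption_objects if obj not in ground_truth]
hallucination_rate = len(hallucinated) / len(caption_objects)
```

Detection and mitigation methods are then evaluated by how far they drive this rate down without suppressing correct object mentions.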


Integrating LLMs into Strategic Leadership Research: An Integrative Review

Faculty Mentor and Affiliation: Sibel Ozgen Novelli, School of Business

Project Description: This project integrates LLMs to augment interdisciplinary research on upper echelons and strategic leadership (CEOs, top management teams, etc.), helping to map the evolution of research over time across disciplines and to identify the most commonly examined relationships, theories, and so on. The target journal is Academy of Management Annals.

See Thau, S., & Katila, R. (2025). Large language models as tools for integrative reviews. Academy of Management Annals, 19(2), 435–439.


Hypersonic Vehicle Trajectory Optimization Under Adverse Weather Conditions

Faculty Mentor and Affiliation: Jason Rabinovitch, Department of Mechanical Engineering

Project Description: This project will develop an AI-enabled, risk-aware trajectory planning framework for supersonic and hypersonic vehicles (> Mach 5). When high-speed vehicles fly through adverse weather conditions, such as clouds, impacts with small water droplets and ice particles can damage the vehicle. These impacts can degrade sensor performance or, in extreme cases, cause catastrophic vehicle failure. This scenario is relevant both to commercial hypersonic transport and to high-speed vehicles designed for national security applications. Using real-time weather data (e.g., radar-informed precipitation fields and numerical weather forecasts), the selected student will build machine-learning models that translate evolving atmospheric conditions into probabilistic “hazard maps” and actionable risk metrics for high-speed flight. These results will then be embedded in trajectory optimization/model-predictive control models to compute routes that balance performance (time/fuel/thermal limits) against weather-induced damage risk. We will couple the systems-level planning to fundamental shock/droplet aerobreakup physics from ongoing ONR-supported research by Rabinovitch and Nicholaus Parziale. Reduced-order ML surrogate models will be used to predict potential damage as a function of weather (water droplet distribution and frequency) and vehicle speed. As hypersonic systems can be difficult to track and operate with compressed decision timelines, the framework emphasizes rapid inference under uncertain conditions and will also investigate the possibility of real-time trajectory replanning.
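The core trade-off described above, route performance versus weather-induced damage risk, can be sketched with a toy planner: Dijkstra's algorithm over a grid of hazard scores, where each step costs one unit of distance plus a risk penalty. The grid values, the risk weight `lam`, and the unit step cost are all assumptions for illustration, standing in for the learned hazard maps and trajectory optimizer.

```python
import heapq

# Toy hazard map: each cell holds a weather-hazard score in [0, 1]
# (a stand-in for the ML-predicted probabilistic hazard maps).
hazard = [
    [0.0, 0.1, 0.9, 0.1],
    [0.1, 0.8, 0.9, 0.1],
    [0.0, 0.1, 0.2, 0.0],
    [0.0, 0.0, 0.1, 0.0],
]

def risk_aware_path(hazard, start, goal, lam=10.0):
    """Dijkstra over a 4-connected grid; edge cost = 1 step + lam * hazard of the entered cell."""
    rows, cols = len(hazard), len(hazard[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        r, c = u
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + lam * hazard[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = u
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

path, cost = risk_aware_path(hazard, (0, 0), (0, 3))
```

With a large `lam`, the planner detours around the high-hazard cells even though the direct route is shorter; shrinking `lam` recovers the shortest path, mirroring how the real framework would tune performance against damage risk.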


Frequency, False Alarms and Missed Signals: The Temporal Calibration of Trust in AI Uncertainty Expressions

Faculty Mentor and Affiliation: Tiffany Li, Department of Computer Science

Project Description: Systems driven by language models, such as ChatGPT and Gemini, are imperfect: they sometimes output erroneous responses in a confident tone. Unfortunately, prior work has shown that humans struggle to distinguish between accurate and erroneous information from these models, especially when they have insufficient prior knowledge of the topic. Incorporating uncertainty expressions (e.g., “I am not sure, but”) in responses has shown promise for reducing human overreliance in a single, short interaction session. However, we do not yet know how humans’ trust in uncertainty expressions, and their decision to take action upon receiving them, change over time. Specifically, how will the system’s frequency of showing uncertainty, false alarms (showing uncertainty when it is correct), and missed alarms (not showing uncertainty when it is incorrect) affect users’ trust in the uncertainty expressions and the actions they take? This is critical for designing uncertainty expressions for language-model-driven systems that retain effectiveness in the long term. In this project, you will design a human-subjects study to answer this question and develop a chatbot-embedded platform to conduct the study.
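The false-alarm and missed-alarm quantities defined above can be computed per participant from trial logs. A minimal sketch, using invented trial data, where each trial records whether the model's answer was actually correct and whether it displayed an uncertainty expression:

```python
# Each trial: (answer_correct, showed_uncertainty). Data are illustrative.
trials = [
    (True, False), (True, True),  (False, True),  (False, False),
    (True, False), (False, True), (True, False),  (False, False),
]

correct   = [t for t in trials if t[0]]
incorrect = [t for t in trials if not t[0]]

# False alarm: hedging on a correct answer; miss: no hedge on a wrong answer.
false_alarm_rate = sum(1 for _, u in correct if u) / len(correct)
miss_rate        = sum(1 for _, u in incorrect if not u) / len(incorrect)
```

Tracking how these two rates relate to participants' trust ratings across sessions is exactly the temporal-calibration question the study is designed to answer.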


Safe Agentic AI for Care Management

Faculty Mentor and Affiliation: Yue Ning, Department of Computer Science

Project Description: This project focuses on developing intelligent, goal-directed AI agents that can assist with care coordination, personalized treatment planning, patient monitoring, and clinical decision support. Students will work on designing and evaluating AI systems that integrate large language models, reinforcement learning, and multi-agent coordination to improve healthcare workflows while maintaining safety, interpretability, and regulatory compliance. The project offers hands-on experience with real-world healthcare data, collaboration with interdisciplinary teams, and opportunities to contribute to publications and open-source tools. Ideal candidates have interests in machine learning, healthcare AI, or software development, and are eager to build next-generation AI systems that meaningfully impact patient outcomes.


Addressing Societal Challenges With AI

The 2023 AIRS Fellowship Program invited Stevens students from across disciplines to contribute to the advancement of AI research through 11 research projects, each mentored by a Stevens faculty member.