Designing Reliable Large Language Model Powered Agentic Workflow for Veracity Assessment and Cross-Domain Applications


Department of Electrical and Computer Engineering

Location: Burchard 102

Speaker: Yupeng Cao, Ph.D. Candidate in the Department of Electrical and Computer Engineering at Stevens Institute of Technology

ABSTRACT

Large Language Models (LLMs) are now widely used across domains for their generative and reasoning abilities, but they also introduce veracity risks: intentional misuse can spread misleading content, and hallucinations can yield confident but incorrect outputs. This talk first surveys how LLM-based systems can propagate misleading content. I then describe how to design reliable LLM agentic workflows for veracity assessment by integrating evidence retrieval with provenance, modular reasoning, and verification safeguards. Finally, I illustrate how the same reliability-focused workflow patterns can be adapted to other domains that require dependable, auditable decisions.
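
To make the workflow pattern in the abstract concrete, the sketch below shows one minimal way such a pipeline could be organized: an evidence-retrieval step that carries provenance, a separate reasoning step, and a verification safeguard before a verdict is returned. All names here (Evidence, retrieve_evidence, reason_over, verify, assess_claim) are hypothetical placeholders for illustration, not the speaker's actual system.

```python
# Hypothetical sketch of a veracity-assessment agentic workflow:
# retrieve evidence with provenance, reason over it, verify before answering.
from dataclasses import dataclass
from typing import List


@dataclass
class Evidence:
    text: str           # retrieved passage
    source: str         # provenance: where the passage came from
    retrieved_at: str   # provenance: when it was retrieved


def retrieve_evidence(claim: str) -> List[Evidence]:
    """Stand-in for a retrieval module (search engine, vector store, etc.)."""
    return [Evidence(text="Example passage related to the claim.",
                     source="https://example.org/article",
                     retrieved_at="2024-01-01")]


def reason_over(claim: str, evidence: List[Evidence]) -> str:
    """Stand-in for an LLM reasoning module that drafts a verdict with citations."""
    cited = "; ".join(e.source for e in evidence)
    return f"Draft verdict on '{claim}': unverified (sources consulted: {cited})"


def verify(draft: str, evidence: List[Evidence]) -> bool:
    """Verification safeguard: every evidence source must appear in the draft."""
    return all(e.source in draft for e in evidence)


def assess_claim(claim: str) -> str:
    """Run the modular pipeline; refuse rather than return an unsupported verdict."""
    evidence = retrieve_evidence(claim)
    draft = reason_over(claim, evidence)
    return draft if verify(draft, evidence) else "Insufficient verified evidence."


if __name__ == "__main__":
    print(assess_claim("LLMs never hallucinate."))
```

The design choice this illustrates is separation of concerns: because retrieval, reasoning, and verification are distinct modules, each step's inputs and outputs can be logged and audited independently, which is what makes the overall decision traceable.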

BIOGRAPHY


Yupeng Cao is a Ph.D. candidate in the Department of Electrical and Computer Engineering at Stevens Institute of Technology. His research interests include Natural Language Processing, Multimodal Learning, Trustworthy AI, and their applications. He has published related papers at top venues such as ACL, NeurIPS, Interspeech, and AAAI workshops.

At any time, photography or videography may be occurring on Stevens’ campus. Resulting footage may include the image or likeness of event attendees. Such footage is Stevens’ property and may be used for Stevens’ commercial and/or noncommercial purposes. By registering for and/or attending this event, you consent and waive any claim against Stevens related to such use in any media. See Stevens' Privacy Policy for more information.