Closing the Trust Gap: Logic-Guided Learning for Safe and Interpretable Autonomy
Department of Computer Science
Location: Gateway North Hall, Room 204 or via Zoom
Speaker: Aniruddh Puranic, Postdoctoral Researcher, University of Maryland, College Park
ABSTRACT
As Cyber-Physical Systems (CPS) transition into human-centric environments, a significant "Trust Gap" remains because ensuring safety currently requires expert-level formal verification. To address this, I am developing a Logic-Guided Learning framework that transforms formal safety from an expert-only requirement into an inherent property of the system. By integrating neurosymbolic architectures whose outputs carry formal guarantees of trustworthiness, this approach enables safe interaction with autonomous systems that are currently difficult to design and verify.
The framework integrates formal specifications directly within the learning loop to improve data efficiency and interpretability. By using Signal Temporal Logic (STL) to evaluate human demonstrations, the system learns neural reward functions that guide reinforcement learning toward STL-compliant behaviors, resulting in a 70% reduction in required demonstrations and a 30% improvement in behavior interpretability. This research extends to long-horizon and composable tasks, enabling the synthesis of complex behaviors that remain formally verifiable as task horizons expand. Furthermore, it facilitates autonomous logic discovery from unstructured or unlabeled data via self-supervised learning, which is critical for anomaly detection and identifying out-of-distribution behaviors when system dynamics are not known a priori.
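To make the core idea concrete, here is a minimal, hypothetical sketch (not the speaker's implementation) of how an STL specification can score a trajectory for use as a reward signal. It assumes a one-dimensional state trace and the STL "always" operator G(x >= c), whose quantitative robustness is the minimum margin over the trace; the function names are illustrative only.

```python
def robustness_always_ge(trace, threshold):
    """Quantitative robustness of the STL formula G(x >= threshold):
    the minimum margin (x - threshold) over the trace. Positive means
    the spec is satisfied; negative means it is violated."""
    return min(x - threshold for x in trace)

def stl_shaped_reward(trajectory, threshold=0.5):
    """Use STL robustness directly as a (sparse, episode-level) reward,
    so an RL agent is pushed toward STL-compliant behaviors."""
    return robustness_always_ge(trajectory, threshold)

# A trace that always stays above 0.5 gets positive reward;
# one that dips below gets negative reward.
safe_trace = [0.9, 0.8, 0.7]
unsafe_trace = [0.9, 0.3, 0.7]
```

In the actual framework, robustness scores over human demonstrations would instead supervise a learned neural reward function, but the sign and magnitude semantics shown here are the standard quantitative semantics of STL.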
In this talk, I will provide an overview of the Logic-Guided Learning platform, covering its foundational algorithms and recent applications in the coordination of heterogeneous multi-agent systems using hybrid optimization and reinforcement learning. I will conclude by outlining future research directions, including grounding foundation models in formal languages to ensure safety and low latency, addressing catastrophic forgetting in autonomous agents, and developing collaborative systems for high-stakes, real-world applications.
BIOGRAPHY
Aniruddh Puranic is a Postdoctoral Researcher in the Institute for Systems Research (ISR) at the University of Maryland, College Park. His research integrates formal methods with machine learning to develop safe and interpretable autonomous systems. During his doctoral studies at the University of Southern California, he developed the Logic-Guided Learning framework, combining Signal Temporal Logic with reinforcement and imitation learning to synthesize formally verified behaviors from limited demonstrations. His postdoctoral work extends this to long-horizon tasks, self-supervised specification learning, and multi-agent coordination.
Discrimination notice: Persons of all identities are invited to and included in this group. Stevens does not discriminate against any person on the basis of sex, race, religion, disability, sexual orientation, gender expression, or any other basis prohibited by law.
Photo and video notice: At any time, photography or videography may be occurring on Stevens’ campus. Resulting footage may include the image or likeness of event attendees. Such footage is Stevens’ property and may be used for Stevens’ commercial and/or noncommercial purposes. By registering for and/or attending this event, you consent and waive any claim against Stevens related to such use in any media. See Stevens' Privacy Policy for more information.
