In safety-critical application domains, it is crucial to assess the attackability of the machine learning models employed and to design models that are fault-tolerant against noise and attacks. The questions to answer are: What factors make a model more attackable (easier to fool)? Which instances are easier to manipulate in order to fool the classification model, and how can they be found? This talk will first introduce our study on characterizing the attackability of a targeted classifier on categorical sequences, and then of a targeted multi-label classifier under evasion attack. Fault-tolerant models for handling noisy input data will also be presented.
Dr. Zhang is currently an Associate Professor and directs the Machine Intelligence and Knowledge Engineering (MINE) Laboratory in the Department of Computer Science and Engineering at the University of Notre Dame, USA. She received her Ph.D. degree in computer science from INRIA-University Paris-Sud, France, in July 2010. She has authored or co-authored over 170 refereed papers in various journals and conferences. Her current research interests lie in designing machine learning algorithms for learning from complex, large-scale streaming data and graph data. She was invited to deliver an Early Career Spotlight talk at IJCAI-ECAI 2018. She regularly serves on the program committees of premier conferences such as SIGKDD (Senior PC), AAAI (Area Chair, Senior PC), and IJCAI (Area Chair, Senior PC). She also serves as an associate editor for IEEE Transactions on Dependable and Secure Computing (TDSC) and Information Sciences.
LOCATION: Babbio 122
DATE: Wednesday, December 8th
TIME: 2:00 PM
ATTENDANCE: Open to all