by Riyadh Baghdadi, Massachusetts Institute of Technology
This talk is about building compilers for high-performance code generation. It has three parts. The first part presents Tiramisu, a polyhedral compiler designed to generate highly efficient code for multicores and GPUs. It is the first polyhedral compiler to match the performance of highly hand-optimized industrial libraries such as Intel MKL and cuDNN. The second part is about applying Tiramisu to accelerate deep neural network (DNN) inference. In comparison to other DNN compilers, Tiramisu has two unique features: (1) it supports sparse DNNs; and (2) it can express and optimize general recurrent neural networks (RNNs). The third part presents recent work on automatic code optimization; in particular, it focuses on using deep learning to build a cost model for exploring the search space of code optimizations.
Riyadh Baghdadi is a postdoctoral associate at MIT. He works at the intersection of compilers and applied machine learning. More precisely, he develops compilers that take high-level code and optimize it automatically to generate highly efficient code. His research follows two directions: (1) using machine learning to automate optimizations in compilers, and (2) using these compilers to accelerate compute-intensive domains such as machine learning and high-performance computing. While at MIT, he has led the development of Tiramisu, the first polyhedral compiler to match the performance of highly hand-optimized industrial libraries such as Intel MKL and cuDNN. He has also successfully used deep learning to build cost models that enable automatic code optimization in compilers. Before joining MIT, Riyadh obtained his Ph.D. and master's degrees from INRIA, France (Sorbonne University, Paris VI). He has published more than 15 peer-reviewed papers and regularly serves as a reviewer for more than 10 journals and conferences.