As deep learning has achieved breakthrough performance in a variety of application domains, significant effort has been made to understand the theoretical foundations of deep neural network (DNN) models. Statisticians have devoted considerable attention to the statistical foundations of such models, for example by studying why DNN models outperform classical nonparametric estimators and by explaining, through the lens of statistical theory, why they perform well in practice. This workshop aims to bring together researchers in the field to discuss recent progress on the statistical theory and foundations of DNN models and to chart possible research directions.