Statistical Theory of Deep Neural Network Models

November 7, 2024 - November 9, 2024

Organizers:

  • Lizhen Lin, University of Maryland
  • Vince Lyzinski, University of Maryland
  • Yun Yang, University of Maryland

As deep learning has achieved breakthrough performance across a variety of application domains, a significant effort has been made to understand the theoretical foundations of deep neural network (DNN) models. Statisticians have devoted considerable effort to understanding the statistical foundations of such models, for example by studying why deep neural network models can outperform classical nonparametric estimators and by explaining, through the lens of statistical theory, why DNN models perform well in practice. This workshop aims to bring together researchers in the field to discuss recent progress in the statistical theory and foundations of DNN models, and to chart possible research directions.

Participants:

  • Minwoo Chae, POSTECH
  • David Dunson, Duke University
  • Jian Huang, The Hong Kong Polytechnic University
  • Yongdai Kim, Seoul National University
  • Sophie Langer, University of Twente
  • Wenjing Liao, Georgia Tech
  • Lu Lu, Yale University
  • Song Mei, University of California, Berkeley
  • Ilsang Ohn, Inha University
  • Rong Tang, Hong Kong University of Science and Technology
  • Matus Telgarsky, New York University
  • Jane-Ling Wang, University of California, Davis
  • Yixin Wang, University of Michigan
  • Yuting Wei, University of Pennsylvania
  • Tong Zhang, University of Illinois Urbana-Champaign
  • Yiqiao Zhong, University of Wisconsin
