The Math Behind the Magic
In my previous post, back in Summer 2020, I had intended to continue writing about learning PINNs. Alas, it is suddenly 2022 and that series of posts remains incomplete. Now is as good a time as any to point to my upcoming sequence of lecture notes on theoretical aspects of deep learning; hopefully I will get back to the unfinished PINN business later this year.
It has become clear over the last decade that progress in practical applications of deep learning has considerably outpaced our understanding of its foundations. Many fundamental questions remain unanswered. Why are we able to train neural networks so efficiently? Why do they perform so well on unseen data? Is there any benefit to choosing one network architecture over another?
These lecture notes are an attempt to sample a growing body of work in the mathematics of deep learning that addresses some of these questions. They supplement my graduate-level course on this topic taught at NYU Tandon in the Spring of 2022.
All pages on that site are under construction. Corrections, pointers to omitted results, and other feedback are welcome: just email me, or open a GitHub pull request at this repository.