Why Neural Networks can learn (almost) anything
Emergent Garden

 Published On Mar 12, 2022

A video about neural networks, how they work, and why they're useful.

My Twitter: / max_romana

SOURCES
Neural network playground: https://playground.tensorflow.org/

Universal Function Approximation:
Proof:   / hornik.pdf  
Covering ReLUs: https://proceedings.neurips.cc/paper/...
Covering discontinuous functions: https://arxiv.org/pdf/2012.03016.pdf
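As a small illustration of the universal approximation idea from the sources above, the sketch below fits a one-hidden-layer ReLU network to sin(x). It is a hypothetical toy, not code from the video: the hidden weights are random and fixed, and the output layer is solved by least squares as a shortcut standing in for the gradient-descent training the video describes.

```python
import numpy as np

# Toy sketch: approximate sin(x) on [0, 2*pi] with one hidden ReLU layer.
# Hidden weights are random and frozen; only the output weights are fitted
# (via least squares, a stand-in for backpropagation). All sizes/values
# here are illustrative assumptions.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 200)[:, None]  # 200 sample inputs, shape (200, 1)
y = np.sin(x).ravel()                        # target function values

n_hidden = 50
W = rng.normal(size=(1, n_hidden))           # random input-to-hidden weights
b = rng.normal(size=n_hidden)                # random hidden biases
H = np.maximum(0.0, x @ W + b)               # ReLU hidden activations

# Fit output weights v to minimize ||H v - y||^2
v, *_ = np.linalg.lstsq(H, y, rcond=None)
err = np.max(np.abs(H @ v - y))
print(f"max approximation error: {err:.3f}")
```

With more hidden units the error can be driven arbitrarily low on the sampled interval, which is the intuition behind the universal approximation results linked above.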

Turing Completeness:
Networks of infinite size are Turing complete: Neural Computability I & II (behind a paywall, unfortunately, but cited in the following paper)
RNNs are Turing complete: https://binds.cs.umass.edu/papers/199...
Transformers are Turing complete: https://arxiv.org/abs/2103.05247

More on backpropagation:
   • What is backpropagation really doing?...  

More on the mandelbrot set:
   • The Mandelbrot Set - Numberphile  

Additional Sources:
Neat explanation of universal function approximation proof:    • The Universal Approximation Theorem f...  
Where I got the hard coded parameters: https://towardsdatascience.com/can-ne...

Reviewers:
Andrew Carr   / andrew_n_carr  
Connor Christopherson

TIMESTAMPS
(0:00) Intro
(0:27) Functions
(2:31) Neurons
(4:25) Activation Functions
(6:36) NNs can learn anything
(8:31) NNs can't learn anything
(9:35) ...but they can learn a lot

MUSIC
   • It Was Here  
