22. Gradient Descent: Downhill to a Minimum
MIT OpenCourseWare

Published on May 16, 2019

MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning, Spring 2018
Instructor: Gilbert Strang
View the complete course: https://ocw.mit.edu/18-065S18
YouTube Playlist: MIT 18.065 Matrix Methods in Data Ana...

Gradient descent is the most common optimization algorithm in machine learning and deep learning. It uses only the first derivative (the gradient) when updating parameters: a stepwise process that moves downhill toward a local minimum.
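The update rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the lecture; the objective f(x) = x^2 + 3x, the learning rate, and the starting point are all illustrative choices.

```python
# Minimal gradient-descent sketch: minimize f(x) = x^2 + 3x
# using only its first derivative f'(x) = 2x + 3.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step downhill: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Starting point and learning rate are illustrative choices.
x_min = gradient_descent(lambda x: 2 * x + 3, x0=5.0)
print(x_min)  # approaches -1.5, the true minimizer of x^2 + 3x
```

Each iteration moves against the gradient, so the error shrinks geometrically for this convex quadratic; with a learning rate that is too large, the same update rule can overshoot and diverge.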

License: Creative Commons BY-NC-SA
More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu

