Neural Net and Backpropagation Basics

If you’re new to neural networks and artificial intelligence, one of the first (and most essential) steps is to (1) thoroughly understand the architecture and processing of a Multilayer Perceptron (MLP) and (2) master backpropagation. You can do this within a single weekend.
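
If you want something concrete to run while you watch, here is a minimal sketch of an MLP forward pass in NumPy. The layer sizes, the sigmoid activation, and the omission of bias terms are illustrative assumptions on my part, not necessarily the setup used in the vids:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes each value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative (assumed) sizes: 2 inputs, 3 hidden nodes, 1 output node
rng = np.random.default_rng(seed=0)
W_hidden = rng.normal(size=(3, 2))   # input-to-hidden weights
W_output = rng.normal(size=(1, 3))   # hidden-to-output weights

x = np.array([0.5, -0.2])            # one input vector

# Forward pass: input -> hidden layer -> output layer
h = sigmoid(W_hidden @ x)            # hidden-node activations
o = sigmoid(W_output @ h)            # output-node activation(s)
print("hidden:", h, "output:", o)
```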

Ideally, by the end of this weekend, you will have rounded out your understanding of the “old” AI methods and be ready to invest yourself in learning “generative AI” (and to position yourself for AGI) as your next step.

There are two playlists for you: one on “Neural Network Basics” and the other a detailed derivation of the backprop method, covering the FIRST PORTION ONLY of backprop for a simple three-layer MLP. That is, the vids take you as far as the derivation of backprop from the output nodes to the middle layer (the hidden nodes).
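
To preview where that derivation lands: assuming the classic setup of sigmoid activations and a sum-squared error (the playlist’s notation may differ), the output-to-hidden step works out like this:

```latex
% Sum-squared error over output nodes k, with targets t_k and outputs o_k:
E = \tfrac{1}{2} \sum_k (t_k - o_k)^2

% Chain rule for a hidden-to-output weight w_{jk}, where h_j is the
% j-th hidden activation and o_k = \sigma(\mathrm{net}_k):
\frac{\partial E}{\partial w_{jk}}
  = \frac{\partial E}{\partial o_k}
    \cdot \frac{\partial o_k}{\partial \mathrm{net}_k}
    \cdot \frac{\partial \mathrm{net}_k}{\partial w_{jk}}
  = -(t_k - o_k)\, o_k (1 - o_k)\, h_j

% Defining \delta_k = (t_k - o_k)\, o_k (1 - o_k), the gradient-descent
% weight update with learning rate \eta is:
\Delta w_{jk} = \eta \, \delta_k \, h_j
```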

The derivation matches the code that I wrote (currently available only within my Northwestern course; I’ll be porting it to an associated GitHub repository soon, and then I’ll link to it from here). The derivation also matches the chapter drafts in the book that I’ve had in progress for WAY too long. (Links will be added shortly.)

Playlist #1: “Neural Network Basics”

Please watch the FIRST TWO VIDS first – they introduce NNs. 

Then watch the second playlist, which takes you (at least partially) through backprop. 

Then come back to this playlist, and watch the LAST TWO VIDS – which help you understand how to assess your MLP’s learning. (Focus on evaluating “summed-squared-error” – is your network learning or is it stuck?)
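
As a concrete version of that check, here is a small Python sketch; the SSE numbers and the plateau threshold are made up purely to show the idea:

```python
import numpy as np

def sum_squared_error(targets, outputs):
    # SSE over all output nodes and training examples; the 1/2 factor
    # is a common convention that simplifies the backprop derivative
    return 0.5 * np.sum((np.asarray(targets) - np.asarray(outputs)) ** 2)

# Hypothetical per-epoch SSE values recorded during training
sse_history = [4.12, 3.07, 2.41, 2.39, 2.38, 2.38]

# Is the error still dropping meaningfully, or has it plateaued?
recent_drop = sse_history[-2] - sse_history[-1]
if recent_drop < 1e-3:
    print("SSE has plateaued -- the network may be stuck.")
else:
    print("SSE is still decreasing -- the network is learning.")
```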

Playlist #2: The “Backpropagation” Playlist

You will be SO BUSY, for the rest of your life, keeping up with “AI advances,” that NOW is the time to master the very basic discriminative-AI fundamentals.
