Introduction to Forward Propagation
Introduction
This is the second in a series of 3 deep learning intro posts:
Introduction to Deep Learning, which introduces the Deep Learning technology background and presents the network’s building blocks and terms.
Forward Propagation, which presents the mathematical equations of the prediction path.
In this post we will...
Back Propagation Example
In this post we will run a network Training (aka Fitting) example, based on the Back Propagation algorithm explained in the previous post.
The example will run a single Back Propagation cycle, to produce 2 outputs: \(\frac{\mathrm{d} C}{\mathrm{d}{b^{[l]}}}\) and \(\frac{\mathrm{d} C}{\mathrm{d}{w^{[l]}}}\) for \(1 < l \le L\)...
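The two gradients above can be sketched for a toy single-neuron layer. This is a minimal illustration, not the post's full example: it assumes a sigmoid activation and a squared-error cost, and the input, target, and parameter values are made up for demonstration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy single-neuron layer: a = sigmoid(w*x + b), cost C = 0.5*(a - y)^2.
x, y = 1.5, 0.0   # illustrative input and target
w, b = 0.8, 0.2   # illustrative current parameters

# Forward pass
z = w * x + b
a = sigmoid(z)

# Backward pass (chain rule)
dC_da = a - y             # dC/da for squared-error cost
da_dz = a * (1.0 - a)     # sigmoid derivative
delta = dC_da * da_dz     # dC/dz
dC_dw = delta * x         # dC/dw: one of the two Back Propagation outputs
dC_db = delta             # dC/db: the other output
print(dC_dw, dC_db)
```

Both outputs come from the same error term `delta`; only the factor `x` (the layer input) distinguishes the weight gradient from the bias gradient.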
Appendix: Activation Functions Derivation
Sigmoid
Figure 1: Sigmoid
Eq. 1a: Sigmoid Function
\[\sigma(z)=\frac{1}{1+e^{-z}}\]
Eq. 1b: Sigmoid Derivative
\[\frac{\partial } {\partial z}\sigma(z)=\frac{\partial } {\partial z}\frac{1}{1+e^{-z}}=
-\frac{-e^{-z}}{(1+e^{-z})^2}=-\frac{1-(1+e^{-z})}{(1+e^{-z})^2}=-\sigma(z)^2+\sigma(z)=\sigma(z)(1-\sigma(z))\]
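The identity \(\sigma'(z)=\sigma(z)(1-\sigma(z))\) is easy to verify numerically. The sketch below (function and variable names are our own) compares the closed-form derivative against a central-difference approximation:

```python
import math

def sigmoid(z):
    """Sigmoid activation: 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_derivative(z):
    """Derivative via the identity sigma'(z) = sigma(z) * (1 - sigma(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

# Central-difference check of the closed-form derivative.
z, h = 0.5, 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)
print(abs(sigmoid_derivative(z) - numeric) < 1e-8)  # True
```

Expressing the derivative in terms of \(\sigma(z)\) itself is what makes the sigmoid cheap during Back Propagation: the forward-pass activation can be reused, with no extra exponentials.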
Introduction to Deep Learning
This is the first in a series of 3 deep learning intro posts:
Introduction to Deep Learning, which introduces the Deep Learning technology background and presents the network’s building blocks and terms.
Introduction to Forward Propagation, which presents the mathematical equations of the prediction path.
Introduction...
Customization
Table of contents
Color schemes
Custom schemes
Define a custom scheme
Use a custom scheme
Switchable custom scheme
Override and completely custom styles
Color schemes
Just the Docs supports two color schemes: light (default) and dark.
To enable a color scheme, set the color_scheme ...
Batch and Minibatch
Introduction
Gradient Descent and its variations are the most common algorithms used for fitting the DNN model during the Training phase.
The basic formula of Gradient Descent parameter update is presented in Eq. 1:
Eq. 1: Gradient Descent
\(w_{t+1}=w_t-\alpha \cdot \nabla L(w)\)
Where \(L(w)\) is a Loss function. ...
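Eq. 1 can be sketched on a toy one-parameter problem. The loss below, \(L(w)=(w-3)^2\), is an assumption chosen only so the gradient and the minimum are obvious; the update line implements Eq. 1 directly.

```python
# Gradient descent on a toy quadratic loss L(w) = (w - 3)^2,
# whose gradient is dL/dw = 2 * (w - 3); the minimum is at w = 3.
def grad_L(w):
    return 2.0 * (w - 3.0)

w = 0.0       # initial parameter
alpha = 0.1   # learning rate
for _ in range(100):
    w = w - alpha * grad_L(w)   # Eq. 1: w_{t+1} = w_t - alpha * grad L(w_t)

print(round(w, 4))  # → 3.0
```

Each step moves `w` against the gradient; with this learning rate the error shrinks by a constant factor per iteration, so `w` converges geometrically to the minimizer.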
Back Propagation
Introduction
This is the third in a series of 3 deep learning intro posts:
Introduction to Deep Learning, which introduces the Deep Learning technology background and presents the network’s building blocks and terms.
Forward Propagation, which presents the mathematical equations of the prediction path.
Backward Propagation, wh...
24 post articles, 3 pages.