Introduction to Neural Networks
Types of Neural Networks
Activation Functions
$$ \sigma(x)=\frac{1}{1+e^{-x}} $$
$$ \mathrm{ReLU}(x)=\max(0,x) $$
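To make the two activations concrete, here is a minimal sketch in plain C++ using `float` (the fixed-point variants used on the FPGA come later; the function names are just illustrative):

```cpp
#include <cmath>

// Sigmoid: squashes any real input into the open interval (0, 1).
float sigmoid(float x) {
    return 1.0f / (1.0f + std::exp(-x));
}

// ReLU: passes positive inputs through unchanged, clamps negatives to zero.
float relu(float x) {
    return x > 0.0f ? x : 0.0f;
}
```

ReLU is the cheaper of the two in hardware: it is a single comparison, while the sigmoid's exponential is usually approximated with a lookup table or a piecewise-linear function.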
Training vs Inference
Forward Propagation Basics
Matrix Operations in NN
$$ a^{(1)}=\sigma(W a^{(0)}+b) $$
$$ \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{m-1} \end{bmatrix}^{(1)}=\sigma\left( \begin{bmatrix} w_{00} & w_{01} & \cdots & w_{0,n-1} \\ w_{10} & w_{11} & \cdots & w_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ w_{m-1,0} & w_{m-1,1} & \cdots & w_{m-1,n-1} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix}^{(0)} + \begin{bmatrix} b_0 \\ b_1 \\ \vdots \\ b_{m-1} \end{bmatrix} \right) $$
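As a sanity check on the matrix form, here is a minimal C++ sketch of one layer's forward pass, assuming `W` is stored row-major in a flat vector (the name `dense_layer` is illustrative, not taken from any particular library):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One dense layer: a1 = sigmoid(W * a0 + b).
// W is m x n (row-major), a0 has n entries, b and the result have m entries.
std::vector<float> dense_layer(const std::vector<float>& W,
                               const std::vector<float>& a0,
                               const std::vector<float>& b,
                               std::size_t m, std::size_t n) {
    std::vector<float> a1(m);
    for (std::size_t i = 0; i < m; ++i) {
        float z = b[i];                          // start from the bias b_i
        for (std::size_t j = 0; j < n; ++j)
            z += W[i * n + j] * a0[j];           // accumulate w_ij * a_j^(0)
        a1[i] = 1.0f / (1.0f + std::exp(-z));    // apply the sigmoid
    }
    return a1;
}
```

Each output neuron is one row of W dotted with the previous layer's activations, plus its bias; that multiply-accumulate loop is the operation that typically maps onto an FPGA's DSP blocks.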
For the arithmetic in the neuron, we will use fixed-point representation, which encodes real numbers as integers by allocating a fixed number of bits to the integer part and a fixed number to the fractional part. Unlike floating point, fixed-point operations map directly onto integer adders and multipliers, making them cheaper in area and power and therefore better suited to FPGA implementations where resources are limited.
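As an illustration of what that looks like in code, here is a minimal Q4.12 sketch in C++; the 16-bit width, the 12 fractional bits, and the helper names are assumptions chosen for this example, not a statement about any particular toolchain:

```cpp
#include <cstdint>

// Assumed Q4.12 format: 16-bit signed value with 12 fractional bits,
// so the representable range is roughly [-8, 8) with a step of 1/4096.
constexpr int FRAC_BITS = 12;

int16_t to_fixed(float x)   { return static_cast<int16_t>(x * (1 << FRAC_BITS)); }
float   to_float(int16_t q) { return static_cast<float>(q) / (1 << FRAC_BITS); }

// Addition keeps the format as-is; multiplication produces twice the
// fractional bits, so we widen to 32 bits and shift back down.
int16_t fx_add(int16_t a, int16_t b) { return static_cast<int16_t>(a + b); }
int16_t fx_mul(int16_t a, int16_t b) {
    int32_t wide = static_cast<int32_t>(a) * b;      // Q8.24 intermediate
    return static_cast<int16_t>(wide >> FRAC_BITS);  // truncate back to Q4.12
}
```

For example, `fx_mul(to_fixed(1.5f), to_fixed(2.0f))` yields 12288, which `to_float` turns back into 3.0. Note that neither helper guards against overflow; a real design has to budget the integer bits so that the weighted sums stay in range.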