Recurrent Neural Networks (RNNs)

RNNs are designed for sequential data like text, time series, or audio. Unlike feedforward networks, RNNs have loops that allow information to persist across time steps.

How RNNs Work

At each time step, an RNN takes the current input and the previous hidden state to produce an output and a new hidden state.

h(t) = tanh(W_h * h(t-1) + W_x * x(t) + b)
y(t) = W_y * h(t)

Here x(t) is the input at step t, h(t) is the hidden state, W_h, W_x, and W_y are learned weight matrices, and b is a bias vector.

Simple RNN Simulation

The code below shows how an RNN processes a sequence of inputs one step at a time, carrying the hidden state forward between steps.

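A minimal NumPy sketch of the recurrence above; the layer sizes, random seed, and toy five-step sequence are assumed purely for illustration:

```python
import numpy as np

np.random.seed(0)

input_size, hidden_size, output_size = 3, 4, 2

# Assumed randomly initialized parameters (any small values work for a demo)
W_x = np.random.randn(hidden_size, input_size) * 0.1
W_h = np.random.randn(hidden_size, hidden_size) * 0.1
W_y = np.random.randn(output_size, hidden_size) * 0.1
b = np.zeros(hidden_size)

# A toy sequence of 5 random input vectors
sequence = [np.random.randn(input_size) for _ in range(5)]

h = np.zeros(hidden_size)  # hidden state starts at zero
for t, x in enumerate(sequence):
    h = np.tanh(W_h @ h + W_x @ x + b)  # h(t) = tanh(W_h*h(t-1) + W_x*x(t) + b)
    y = W_y @ h                         # y(t) = W_y * h(t)
    print(f"t={t}: y(t) = {np.round(y, 3)}")
```

Note how h is the only thing carried between iterations: each step's output depends on every earlier input through the hidden state.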

Vanishing Gradient Problem: During backpropagation through time, the gradient is multiplied by one Jacobian factor per time step; with tanh activations those factors typically have magnitude below 1, so the gradient shrinks exponentially with sequence length. Standard RNNs therefore struggle to learn long-range dependencies, which led to the development of LSTMs and GRUs. The sketch below makes the shrinkage visible.
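A short demonstration, again with assumed small random weights and zero inputs for simplicity: it accumulates the per-step Jacobian diag(1 - h(t)^2) * W_h over 50 steps and prints the norm of dh(t)/dh(0).

```python
import numpy as np

np.random.seed(0)
hidden_size = 4

# Assumed small random recurrent weights (illustrative only)
W_h = np.random.randn(hidden_size, hidden_size) * 0.3

h = np.random.randn(hidden_size)
grad = np.eye(hidden_size)  # dh(0)/dh(0) is the identity

for t in range(1, 51):
    h = np.tanh(W_h @ h)             # recurrence with zero input, for simplicity
    jac = np.diag(1.0 - h**2) @ W_h  # dh(t)/dh(t-1)
    grad = jac @ grad                # chain rule: dh(t)/dh(0)
    if t % 10 == 0:
        print(f"step {t:2d}: ||dh(t)/dh(0)|| = {np.linalg.norm(grad):.2e}")
```

With weights this small the gradient norm collapses toward zero within a few dozen steps. LSTMs and GRUs counter this with gated, additive state updates that give gradients a more direct path through time.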