Autoencoders

Autoencoders are neural networks that learn to compress data into a lower-dimensional representation (encoding) and then reconstruct the original input from that representation (decoding). Because the input serves as its own training target, they are trained without labels, which makes them useful for dimensionality reduction, denoising, and data generation.

Architecture

Input → Encoder → Latent Space (Bottleneck) → Decoder → Reconstruction

Encoder

Compresses input into a compact latent representation. Learns the most important features.

Decoder

Reconstructs the original input from the latent code. Learns to reverse the encoding.

Simple Autoencoder Example

This shows how an autoencoder compresses and reconstructs data.

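As a concrete sketch, here is a minimal linear autoencoder written in plain NumPy. The specific sizes (8-D input, 2-D bottleneck), the synthetic data, and the training hyperparameters are illustrative choices, not prescribed by any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 8-D that actually lie on a 2-D subspace,
# so a 2-D bottleneck can reconstruct them almost perfectly.
latent_true = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent_true @ mixing

# Encoder and decoder weights (a linear autoencoder, no biases for brevity).
W_enc = rng.normal(scale=0.1, size=(8, 2))  # 8-D input -> 2-D code
W_dec = rng.normal(scale=0.1, size=(2, 8))  # 2-D code  -> 8-D reconstruction

lr = 0.02
for _ in range(2000):
    Z = X @ W_enc            # encode: compress to the bottleneck
    X_hat = Z @ W_dec        # decode: reconstruct from the code
    err = X_hat - X          # reconstruction error
    # Gradients of the mean squared error w.r.t. both weight matrices,
    # computed before either matrix is updated.
    grad_dec = (Z.T @ err) / len(X)
    grad_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
print(f"final reconstruction MSE: {mse:.6f}")
```

Because the data genuinely has only 2 degrees of freedom, the 2-D bottleneck loses almost nothing; with a smaller true subspace than the bottleneck, or nonlinear structure, reconstruction error would stay higher. Real autoencoders stack nonlinear layers in the encoder and decoder, but the encode/bottleneck/decode loop is exactly this.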

Types of Autoencoders

  • Vanilla Autoencoder: Basic compression and reconstruction
  • Denoising Autoencoder: Learns to remove noise from data
  • Variational Autoencoder (VAE): Learns a probability distribution over the latent space, so new samples can be generated by sampling from it
  • Sparse Autoencoder: Enforces sparsity in the latent representation
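To make the denoising variant from the list above concrete, the linear NumPy setup can be adapted with one change: the encoder sees a corrupted input, but the loss compares the reconstruction to the clean target. The noise level, shapes, and hyperparameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean data on a 2-D subspace of 8-D space, plus noisy copies of it.
latent = rng.normal(size=(300, 2))
mixing = rng.normal(size=(2, 8))
X_clean = latent @ mixing
X_noisy = X_clean + rng.normal(scale=0.5, size=X_clean.shape)

# Linear denoising autoencoder: same architecture as before.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.02
for _ in range(3000):
    Z = X_noisy @ W_enc                # encode the NOISY input
    X_hat = Z @ W_dec
    err = X_hat - X_clean              # ...but score against the CLEAN target
    grad_dec = (Z.T @ err) / len(X_noisy)
    grad_enc = (X_noisy.T @ (err @ W_dec.T)) / len(X_noisy)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

denoised = (X_noisy @ W_enc) @ W_dec
noise_mse = np.mean((X_noisy - X_clean) ** 2)      # error before denoising
denoised_mse = np.mean((denoised - X_clean) ** 2)  # error after denoising
print(f"noisy MSE:    {noise_mse:.4f}")
print(f"denoised MSE: {denoised_mse:.4f}")
```

The bottleneck is what removes the noise: random corruption is spread across all 8 dimensions, but the 2-D code can only keep the directions that explain the clean data, so most of the noise is projected away.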