A fully custom implementation of AutoEncoders and Variational AutoEncoders (VAEs) built
from the ground up in NumPy, with no TensorFlow and no PyTorch. Features a modular architecture with
encoder-decoder symmetry, stochastic sampling layers for VAEs, support for multiple optimizers
(SGD, Momentum, Adam), and a choice of loss functions (MSE, MAE, Cross-Entropy).
## Highlights
- Custom backpropagation engine with dropout regularization
- Font character reconstruction & latent space visualization
- Stochastic layer implementation for the VAE reparameterization trick (see the sketch after this list)
- Modular activation functions (ReLU, Sigmoid, Tanh) via Strategy pattern
- Complete with unit tests and training metrics tracking
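The stochastic layer is what makes end-to-end VAE training possible: rather than sampling `z` directly from `N(mu, sigma^2)`, it draws the noise separately so gradients can flow through `mu` and the log-variance. Below is a minimal NumPy sketch of that idea; the class and method names are illustrative, not this repository's actual API.

```python
import numpy as np

class StochasticLayer:
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    which keeps z differentiable with respect to mu and log_var."""

    def forward(self, mu, log_var):
        # Noise is drawn independently of the parameters, so the
        # randomness sits outside the differentiated path.
        self.eps = np.random.standard_normal(mu.shape)
        self.sigma = np.exp(0.5 * log_var)
        return mu + self.sigma * self.eps

    def backward(self, grad_z):
        # dz/dmu = 1; dz/dlog_var = 0.5 * sigma * eps
        grad_mu = grad_z
        grad_log_var = grad_z * 0.5 * self.sigma * self.eps
        return grad_mu, grad_log_var
```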
## Features
- AutoEncoder Architecture: Encoder-decoder symmetry with configurable hidden layers
- Variational AutoEncoder (VAE): Full implementation with KL divergence loss and the reparameterization trick (loss sketch below)
- Multiple Optimizers: SGD, Momentum, and Adam, the latter with adaptive per-parameter step sizes (optimizer sketch below)
- Loss Functions: MSE, MAE, and Cross-Entropy for flexible training objectives
- Regularization: Dropout layers to reduce overfitting (dropout sketch below)
- Visualization Tools: Latent space exploration and reconstruction quality analysis
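For reference, the standard VAE objective combines a reconstruction term with the closed-form KL divergence between the approximate posterior `N(mu, sigma^2)` and a standard normal prior. A minimal sketch, assuming 2D batch inputs with a per-sample sum reduced by a batch mean; the function name is illustrative:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """ELBO-style VAE loss: a reconstruction term (MSE here; MAE or
    cross-entropy fit the same slot) plus the analytic KL term
    -0.5 * sum(1 + log_var - mu^2 - exp(log_var))."""
    recon = np.mean(np.sum((x - x_recon) ** 2, axis=1))
    kl = np.mean(-0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + kl
```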
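The Adam update mentioned above keeps bias-corrected running estimates of the gradient's first and second moments and scales each parameter's step accordingly. A compact sketch with the Adam paper's default hyperparameters; the class shape is illustrative rather than this repository's exact interface:

```python
import numpy as np

class Adam:
    """Adam: per-parameter adaptive steps from bias-corrected
    first (m) and second (v) moment estimates of the gradient."""

    def __init__(self, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = self.v = None
        self.t = 0  # step counter for bias correction

    def update(self, w, grad):
        if self.m is None:
            self.m = np.zeros_like(w)
            self.v = np.zeros_like(w)
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        m_hat = self.m / (1 - self.beta1 ** self.t)
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```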
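Dropout can be sketched as the standard inverted variant: during training a random mask zeroes activations and the survivors are rescaled by `1/(1-p)`, so inference needs no adjustment. Again, the function name and signature here are assumptions:

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p during training
    and rescale the rest, so the expected activation is unchanged."""
    if not training:
        return x, None  # identity at inference time
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask, mask  # mask is reused to gate gradients in backward
```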
Built for the Artificial Intelligence Systems course at Buenos Aires Institute of Technology (ITBA).