Metadata-Version: 2.4
Name: pyautoencoder
Version: 1.0.11
Summary: A Python package offering implementations of state-of-the-art autoencoder architectures in PyTorch.
Author: Andrea Pollastro
License: MIT
Project-URL: Homepage, https://github.com/andrea-pollastro/pyautoencoder
Project-URL: Repository, https://github.com/andrea-pollastro/pyautoencoder
Keywords: autoencoder,pytorch,deep learning,machine learning,representation learning,dimensionality reduction,generative models
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Dynamic: license-file

# PyAutoencoder

A clean, modular PyTorch library for building and training autoencoders.

![logo](https://raw.githubusercontent.com/andrea-pollastro/pyautoencoder/main/assets/logo_nobackground.png)

<p align="center">
  <a href="https://pypi.org/project/pyautoencoder/"><img alt="PyPI Version" src="https://img.shields.io/pypi/v/pyautoencoder.svg"></a>
  <!-- <img alt="Python Versions" src="https://img.shields.io/pypi/pyversions/pyautoencoder.svg"> -->
  <img alt="Python" src="https://img.shields.io/badge/Python-3.10%2B-blue">
  <a href="https://github.com/andrea-pollastro/pyautoencoder/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/andrea-pollastro/pyautoencoder/actions/workflows/ci.yml/badge.svg"></a>
  <a href="https://github.com/andrea-pollastro/pyautoencoder/actions/workflows/publish.yml"><img alt="Publish to PyPI" src="https://github.com/andrea-pollastro/pyautoencoder/actions/workflows/publish.yml/badge.svg"></a>
  <a href="https://github.com/andrea-pollastro/pyautoencoder/blob/main/LICENSE"><img alt="License: MIT" src="https://img.shields.io/badge/License-MIT-blue.svg"></a>
  <a href="https://pepy.tech/project/pyautoencoder"><img alt="Downloads" src="https://pepy.tech/badge/pyautoencoder"></a>
  <a href="https://pyautoencoder.readthedocs.io/en/latest/"><img alt="Documentation Status" src="https://readthedocs.org/projects/pyautoencoder/badge/?version=latest"></a>
  <a href="https://github.com/andrea-pollastro/pyautoencoder/stargazers"><img alt="Stars" src="https://img.shields.io/github/stars/andrea-pollastro/pyautoencoder?style=social"></a>
</p>

---

## Highlights

PyAutoencoder is designed to provide **simple, direct access to autoencoder architectures**. Here's what it offers:

- **Minimal, composable API**  
  You don't have to inherit from complicated base classes or learn a new training loop. Simply provide your own PyTorch `nn.Module` encoder and decoder, and plug them into the ready‑to‑use autoencoder wrappers. This makes it easy to experiment with different architectures (e.g. MLPs, CNNs) while reusing the same training pipeline.

- **Ready‑to‑use autoencoders**  
  The library ships with working implementations of autoencoders, each paired with its loss function. You can start training in a few lines, without re‑implementing reconstruction likelihoods, KL divergence, or other boilerplate.

- **PyTorch compatibility**  
  The library is fully compatible with the PyTorch ecosystem, so models integrate naturally with modules, tensors, optimizers, and schedulers.

- **Lightweight, research‑oriented**  
  The library is intentionally minimal: no training loop frameworks, no heavy abstractions. This makes it well suited for research prototypes where you want control and transparency.
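
To make the plug-in idea concrete, here is a minimal sketch of the kind of plain-PyTorch encoder/decoder pair the wrappers accept. The module definitions and the manual wiring below are illustrative only (they use nothing beyond `torch.nn`); the library's actual wrapper classes would take these modules in place of the hand-rolled composition:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """A plain MLP encoder: flattened input -> latent vector."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class Decoder(nn.Module):
    """A plain MLP decoder: latent vector -> reconstruction."""
    def __init__(self, latent_dim: int = 32, out_dim: int = 784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Manual wiring for illustration; the library's wrappers replace this step.
encoder, decoder = Encoder(), Decoder()
x = torch.randn(8, 784)          # dummy batch of flattened 28x28 images
x_hat = decoder(encoder(x))      # reconstruction matches the input shape

# Because everything is a standard nn.Module, the usual PyTorch machinery
# (optimizers, schedulers, autograd) applies directly:
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
opt.step()
```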

> **Status**: The project is in an early but usable stage. Contributions, issues, and feedback are highly encouraged!

**Currently implemented**:
- Autoencoder (AE)
- Variational Autoencoder (VAE)
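
For reference, the VAE is trained by minimizing the negative ELBO: a reconstruction term plus the KL divergence between the approximate posterior and a standard-normal prior. The sketch below shows that loss in plain PyTorch for a diagonal-Gaussian posterior; it illustrates the math the library handles for you, and the function name is illustrative, not the package's API:

```python
import torch
import torch.nn.functional as F

def vae_loss(x_hat: torch.Tensor, x: torch.Tensor,
             mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Negative ELBO with an MSE reconstruction term, averaged over the batch."""
    # Reconstruction term: squared error summed over features.
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + kl

# Sanity check: a perfect reconstruction with mu = 0, logvar = 0 gives loss 0.
x = torch.rand(4, 10)
loss = vae_loss(x, x, torch.zeros(4, 3), torch.zeros(4, 3))
```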

---

## Documentation

Full documentation (installation, tutorials, API reference, and examples) is available at:

👉 **https://pyautoencoder.readthedocs.io/en/latest/**

In particular, see:

- **Examples → MNIST Autoencoder** – simple AE on MNIST  
- **Examples → MNIST VAE (Kingma & Welling 2013)** – reproduction of Fig. 2  

---

## Installation

```bash
pip install pyautoencoder
```

Or install from source for development:

```bash
git clone https://github.com/andrea-pollastro/pyautoencoder.git
cd pyautoencoder
pip install -e .
```

## Examples

The [`examples/`](examples/) directory contains runnable scripts, including:

- **`mnist_ae.py`** – standard Autoencoder on MNIST  
- **`mnist_vae_fig2.py`** – reproduction of the MNIST VAE experiment from **Kingma & Welling (2013), Fig. 2**

These examples are also documented and explained in the [online documentation](https://pyautoencoder.readthedocs.io/en/latest/).

---

## License

This project is released under the **MIT License**. See [LICENSE](LICENSE).

## Citation

If you use this package in academic work, please cite:

```bibtex
@misc{pollastro2025pyautoencoder,
  author       = {Andrea Pollastro},
  title        = {pyautoencoder},
  year         = {2025},
  howpublished = {GitHub repository},
  url          = {https://github.com/andrea-pollastro/pyautoencoder}
}
```
