Self-supervised variational auto-encoders

Ioannis Gatopoulos, Jakub M. Tomczak*

*Corresponding author for this work

Research output: Contribution to Journal › Article › Academic › peer-review

Abstract

Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework for achieving all three. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAE), which utilizes deterministic and discrete transformations of data. This class of models allows both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and we show its benefits compared to the VAE. The flexibility of the selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
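To make the mechanism in the abstract concrete, the sketch below builds a toy VAE in which a fixed, non-learnable downscaling transformation d(x) plays the role of a self-supervised representation y. This is a minimal sketch under stated assumptions, not the authors' implementation: the factorization q(z|y), p(x|y,z), the layer sizes, and the names d and SelfVAESketch are all illustrative choices, and the paper's actual model and objective differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def d(x):
    # Deterministic, non-learnable transformation: 2x average-pool downscaling.
    return F.avg_pool2d(x, kernel_size=2)

class SelfVAESketch(nn.Module):
    def __init__(self, c=3, h=32, z_dim=64):
        super().__init__()
        y_dim = c * (h // 2) ** 2                       # flattened size of y = d(x)
        self.enc = nn.Linear(y_dim, 2 * z_dim)          # q(z|y): mean and log-variance
        self.dec = nn.Linear(y_dim + z_dim, c * h * h)  # p(x|y,z)

    def forward(self, x):
        y = d(x).flatten(1)                              # self-supervised representation
        mu, logvar = self.enc(y).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        x_hat = torch.sigmoid(self.dec(torch.cat([y, z], dim=1))).view_as(x)
        # Negative ELBO: reconstruction term + KL(q(z|y) || N(0, I)), batch-averaged.
        rec = F.binary_cross_entropy(x_hat, x, reduction="sum") / x.size(0)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
        return rec + kl

model = SelfVAESketch()
loss = model(torch.rand(8, 3, 32, 32))   # toy batch of CIFAR-10-sized inputs in [0, 1]
loss.backward()
```

Because y = d(x) is computed deterministically and is much smaller than x, reconstruction can condition on it directly; plausibly this is what underlies the memory-versus-quality trade-off in compression that the abstract mentions, since storing y at different fidelities buys correspondingly different reconstruction quality.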

Original language: English
Article number: 747
Pages (from-to): 1-17
Number of pages: 17
Journal: Entropy
Volume: 23
Issue number: 6
Early online date: 14 Jun 2021
DOIs
Publication status: Published - Jun 2021

Bibliographical note

Funding Information:
The authors would like to thank Maarten Stol (BrainCreators) and Efstratios Gavves (University of Amsterdam) for their support and fruitful discussions.

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • Deep generative modeling
  • Deep learning
  • Non-learnable transformations
  • Probabilistic modeling
