Abstract
This thesis contributes to the field of deep latent variable generative models.
We provide a comprehensive background on deep generative modeling, covering different model types, neural network parameterization, and learning algorithms.
We propose advancements to latent variable models that enhance both their density estimation capabilities and the quality of their learned representations.
Based on these contributions, the thesis is structured into two parts.
The first part addresses research questions related to enhancing density estimation performance by exploring better probabilistic modeling approaches.
First, we focus on trainable prior distributions, building on the notion of an optimal prior established in prior work. Based on this concept, we develop a training framework for continual learning of variational autoencoders and propose a deep hierarchical VAE that achieves superior performance and improved training stability. We then analyze diffusion models, another class of latent variable models: we study their generative and denoising abilities and propose DAED, a combination of a diffusion model and a denoising autoencoder in which these two functions are explicitly decoupled.
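To make the notion of a trainable prior concrete, the following is a minimal NumPy sketch (not the thesis's implementation) of a single-sample ELBO estimate for a Gaussian VAE whose prior is a learnable mixture of Gaussians with uniform weights. All names, shapes, and the choice of a mixture prior are illustrative assumptions.

```python
import numpy as np

def gaussian_log_pdf(x, mu, log_var):
    """Elementwise log-density of a diagonal Gaussian."""
    return -0.5 * (np.log(2.0 * np.pi) + log_var
                   + (x - mu) ** 2 / np.exp(log_var))

def logsumexp(a):
    """Numerically stable log-sum-exp over a 1-D array."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def elbo_mixture_prior(x, enc_mu, enc_logvar, dec_mu, dec_logvar,
                       prior_mus, prior_logvars, rng):
    """One-sample ELBO: log p(x|z) + log p(z) - log q(z|x),
    with p(z) a uniform-weight mixture of K diagonal Gaussians."""
    # Reparameterized sample z ~ q(z|x)
    eps = rng.standard_normal(enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_logvar) * eps
    log_qz = gaussian_log_pdf(z, enc_mu, enc_logvar).sum()
    # Trainable prior: its means/log-variances would be optimized jointly
    K = prior_mus.shape[0]
    log_comps = np.array([gaussian_log_pdf(z, prior_mus[k],
                                           prior_logvars[k]).sum()
                          for k in range(K)])
    log_pz = logsumexp(log_comps) - np.log(K)
    log_px_z = gaussian_log_pdf(x, dec_mu, dec_logvar).sum()
    return log_px_z + log_pz - log_qz

# Toy usage with made-up shapes: 4-D data, 2-D latent, K=3 prior components.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
enc_mu, enc_logvar = rng.standard_normal(2), np.zeros(2)
dec_mu, dec_logvar = x.copy(), np.zeros(4)
prior_mus, prior_logvars = rng.standard_normal((3, 2)), np.zeros((3, 2))
print(elbo_mixture_prior(x, enc_mu, enc_logvar, dec_mu, dec_logvar,
                         prior_mus, prior_logvars, rng))
```

In a real model the mixture parameters would be updated by gradient ascent on this bound alongside the encoder and decoder, which is what distinguishes a trainable prior from the fixed standard-normal prior of a vanilla VAE.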
The second part explores properties of the learned latent representations. First, we show how the hidden data representation learned by the model can be made robust to adversarial attacks. Second, we explore the ability of latent variable models to preserve symmetries of the data and show how these symmetries can benefit downstream applications.
| Original language | English |
|---|---|
| Qualification | PhD |
| Awarding Institution | |
| Supervisors/Advisors | |
| Award date | 14 Jan 2026 |
| DOIs | |
| Publication status | Published - 14 Jan 2026 |