Wasserstein Dependence and Representation Learning
Speaker: Keenan Eikenberry (Dartmouth)
Date: 11/5/24
Abstract: In this talk, I’ll discuss recent developments in representation learning, focusing on information-theoretic approaches and disentangled representations. I’ll then introduce Wasserstein dependence, an optimal-transport-based analog of mutual information, as an effective loss for preserving information in latent spaces. Additionally, I’ll show that conditional Wasserstein dependence can be used as a disentanglement objective when conditioning on observable factors of variation in (potentially augmented) datasets.
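
For context, a common way to formalize such an optimal-transport analog of mutual information (the talk may use its own variant) is to replace the KL divergence between the joint distribution and the product of marginals with a Wasserstein distance, and, for the conditional case, to average that distance over an observed factor of variation Y. The symbols below, including the choice of the 1-Wasserstein distance, are illustrative assumptions rather than the speaker's definitions:

\[
I(X;Z) = D_{\mathrm{KL}}\!\left(P_{XZ} \,\big\|\, P_X \otimes P_Z\right),
\qquad
I_{\mathcal{W}}(X;Z) = \mathcal{W}_1\!\left(P_{XZ},\; P_X \otimes P_Z\right),
\]
\[
I_{\mathcal{W}}(X;Z \mid Y) = \mathbb{E}_{Y}\!\left[\mathcal{W}_1\!\left(P_{XZ \mid Y},\; P_{X \mid Y} \otimes P_{Z \mid Y}\right)\right].
\]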