KL divergence is used to compare two probability distributions. This article focuses on deriving a closed-form solution for the KL divergence term used in Variational Autoencoders.
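As a quick illustration of the result the article works toward, the KL divergence between a diagonal Gaussian q(z|x) = N(μ, diag(σ²)) and the standard normal prior N(0, I) has the well-known closed form −½ Σ(1 + log σ² − μ² − σ²). A minimal sketch (the function name is our own, not from the article):

```python
import numpy as np

def kl_diag_gaussian_vs_std_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    KL = -0.5 * sum(1 + log_var - mu^2 - exp(log_var))
    """
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# Sanity checks: KL is zero when q equals the prior, positive otherwise.
print(kl_diag_gaussian_vs_std_normal([0.0], [0.0]))  # → 0.0
print(kl_diag_gaussian_vs_std_normal([1.0], [0.0]))  # → 0.5
```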
The goal of this article is to understand and derive the ELBO (Evidence Lower Bound) cost function used in training Variational Autoencoders. The article assumes that readers have a basic understanding of generative modelling and Variational Autoencoders.
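For reference, the bound the article derives is the standard form of the ELBO, which lower-bounds the log-evidence by a reconstruction term minus a KL term:

```latex
\log p_\theta(x) \;\ge\; \underbrace{\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]}_{\text{reconstruction}} \;-\; \underbrace{\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)}_{\text{regularization}}
```

Maximizing the right-hand side over the encoder parameters $\phi$ and decoder parameters $\theta$ is what training a VAE amounts to.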
The article covers a step-by-step proof of the convexity of the mean squared error loss function. The ability to test the convexity of different loss functions can come in handy, especially with more and more exotic loss functions being proposed every day.
The convexity property of a function unlocks a crucial advantage: any local minimum of a convex function is also a global minimum. This guarantees that minimizing the loss function during training drives it to its global minimum. In this blog post, we shall work through the concepts needed to prove the convexity of a function.
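The convexity property described above can be spot-checked numerically via the midpoint definition of convexity, f((a+b)/2) ≤ (f(a)+f(b))/2. A small sketch for the MSE of a one-parameter linear model (the data and variable names here are our own illustration, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(size=100)

def mse(w):
    """Mean squared error of the one-parameter model y_hat = w * x."""
    return np.mean((y - w * x) ** 2)

# Midpoint convexity check: for a convex f, f((a+b)/2) <= (f(a)+f(b))/2.
# For MSE this holds because d^2(mse)/dw^2 = mean(2*x^2) >= 0.
a, b = -5.0, 7.0
print(mse((a + b) / 2) <= 0.5 * (mse(a) + mse(b)))  # → True
```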