Generating brain CT images using Disentangled Variational Autoencoders

Can we generate CT slices of a brain? Or interpolate between two different brains? Or… if we represent a brain image as a set of numbers, can we change the brain's size, rotation, or anatomy just by tweaking those numbers?

I asked these questions when I first read the paper about Disentangled Variational Autoencoders (β-VAE).

According to the paper, you can encode an image into a small numerical vector in such a way that each variable of the vector is responsible for one independent, interpretable visual feature. For faces, these can be skin color, age, gender, or image saturation. What will these features be for brain images? Let's find out.
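To make the idea concrete, here is a minimal β-VAE sketch: the only change from a plain VAE is a weight β > 1 on the KL term, which encourages the latent variables to stay independent and, hopefully, interpretable. The architecture, latent size, and the 128×128 single-channel slice size below are illustrative assumptions, not the model from the paper or from this research.

```python
# Minimal beta-VAE sketch in PyTorch (illustrative assumptions throughout).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, latent_dim=10, img_size=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Flatten(),
        )
        feat = 64 * (img_size // 4) ** 2
        self.fc_mu = nn.Linear(feat, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(feat, latent_dim)  # log-variance of q(z|x)
        self.fc_dec = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, img_size // 4, img_size // 4)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 32 -> 64
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 64 -> 128
            nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        recon = self.decoder(self.fc_dec(z))
        return recon, mu, logvar


def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    # Reconstruction term plus beta-weighted KL divergence to a standard normal prior.
    recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl


# Usage on a dummy batch of single-channel 128x128 "slices":
model = BetaVAE()
x = torch.rand(8, 1, 128, 128)
recon, mu, logvar = model(x)
loss = beta_vae_loss(recon, x, mu, logvar, beta=4.0)
```

After training, you can sweep one latent variable at a time while keeping the others fixed and decode the result, which is how the interpretable features mentioned above are usually found.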

Elena, one of our Machine Learning Engineers, shares a paper based on her recent research into whether neural networks can understand brain anatomy, even just a tiny bit. She tries to answer this question using a Disentangled Variational Autoencoder trained on brain CT slices.
