Conditional Probability Models for Deep Image Compression

Deep Neural Networks trained as image autoencoders have recently emerged as a promising direction for advancing the state of the art in image compression. The key challenge in learning such networks is twofold: to deal with quantization, and to control the tradeoff between reconstruction error (distortion) and entropy (rate) of the latent image representation. In this paper, we focus on the latter challenge and propose a new technique to navigate the rate-distortion tradeoff for an image compression autoencoder. The main idea is to directly model the entropy of the latent representation by using a context model: a 3D-CNN which learns a conditional probability model of the latent distribution of the autoencoder. During training, the autoencoder makes use of the context model to estimate the entropy of its representation, and the context model is concurrently updated to learn the dependencies between the symbols in the latent representation. Our experiments show that this approach yields a state-of-the-art image compression system based on a simple convolutional autoencoder.
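The core idea of the rate term can be sketched in a few lines: the rate of the quantized latent symbols is estimated as the cross-entropy of the symbols under the context model's conditional probabilities. Below is a minimal, hypothetical NumPy sketch (not the authors' code): the paper's context model is a causal 3D-CNN over a neighborhood of latent symbols, whereas here the "context model" is a toy probability table conditioned only on the previous symbol, just to make the rate estimate concrete.

```python
import numpy as np

L = 4  # number of quantization levels (assumed for this sketch)

def estimate_rate_bits(symbols, cond_probs):
    """Cross-entropy rate estimate in bits: -sum_i log2 p(s_i | context_i).

    symbols:    1-D int array of latent symbols in [0, L)
    cond_probs: (L, L) array where cond_probs[prev, cur] = p(cur | prev),
                standing in for the paper's 3D-CNN context model
    """
    # bits to code each symbol given its (toy, order-1) context
    bits = -np.log2(cond_probs[symbols[:-1], symbols[1:]])
    # code the first symbol with a uniform prior (assumption for the sketch)
    return np.log2(L) + bits.sum()

rng = np.random.default_rng(0)
symbols = rng.integers(0, L, size=100)
uniform = np.full((L, L), 1.0 / L)  # uninformative context model
print(estimate_rate_bits(symbols, uniform))  # → 200.0 (100 symbols * 2 bits)
```

During training, this scalar (computed with the real 3D-CNN probabilities) is differentiable in the context model's parameters, so it can be added to the distortion loss and minimized jointly, which is how the autoencoder navigates the rate-distortion tradeoff.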
by Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, Luc Van Gool
https://arxiv.org/pdf/1801.04260v1.pdf