Machine Learning

The information bottleneck and geometric clustering





    The information bottleneck (IB) approach to clustering takes a joint distribution $P\left(X,Y\right)$ and maps the data $X$ to cluster labels $T$ which retain maximal information about $Y$ (Tishby et al., 1999). This objective results in an algorithm that clusters data points based upon the similarity of their conditional distributions $P\left(Y\mid X\right)$. This is in contrast to classic "geometric clustering" algorithms such as $k$-means and Gaussian mixture models (GMMs), which take a set of observed data points $\left\{ \mathbf{x}_{i}\right\}_{i=1:N}$ and cluster them based upon their geometric (typically Euclidean) distance from one another. Here, we show how to use the deterministic information bottleneck (DIB) (Strouse and Schwab, 2017), a variant of IB, to perform geometric clustering, by choosing cluster labels that preserve information about data point location on a smoothed dataset. We also introduce a novel intuitive method to choose the number of clusters, via kinks in the information curve. We apply this approach to a variety of simple clustering problems, showing that DIB with our model selection procedure recovers the generative cluster labels. We also show that, for one simple case, DIB interpolates between the cluster boundaries of GMMs and $k$-means in the large data limit. Thus, our IB approach to clustering also provides an information-theoretic perspective on these classic algorithms.
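The recipe the abstract describes can be sketched in NumPy. The following is a minimal illustration, not the authors' reference implementation: it assumes a Gaussian smoothing kernel of width `s` to define $p(y\mid x_i)$ over data-point locations, a deterministic farthest-point initialization, and the standard DIB hard-assignment update $t(x) = \arg\max_c \, [\log q(c) - \beta \, D_{KL}(p(y\mid x)\,\|\,q(y\mid c))]$ from Strouse and Schwab; the function name and parameter choices are our own.

```python
import numpy as np

def dib_geometric(X, k=2, beta=10.0, s=1.0, n_iter=50):
    """Sketch of DIB-style geometric clustering.

    Y indexes data-point location: p(y|x_i) is a Gaussian kernel over
    the points themselves (the "smoothed dataset").  DIB then seeks a
    hard assignment t(x) minimizing H(T) - beta * I(T;Y).
    """
    N = len(X)
    # p(y|x): Gaussian smoothing of each point over the whole dataset
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    p_y_given_x = np.exp(-d2 / (2 * s ** 2))
    p_y_given_x /= p_y_given_x.sum(1, keepdims=True)
    p_x = np.full(N, 1.0 / N)

    # deterministic farthest-point seeding, then nearest-seed assignment
    seeds = [0]
    for _ in range(k - 1):
        seeds.append(int(np.argmax(np.min(d2[:, seeds], axis=1))))
    t = np.argmin(d2[:, seeds], axis=1)

    eps = 1e-12
    for _ in range(n_iter):
        # cluster weights q(t) and smoothed cluster distributions q(y|t)
        q_t = np.array([p_x[t == c].sum() for c in range(k)])
        q_y_t = np.vstack([
            (p_y_given_x[t == c] * p_x[t == c, None]).sum(0) / max(q_t[c], eps)
            if q_t[c] > 0 else np.full(N, 1.0 / N)
            for c in range(k)
        ])
        # DIB step: t(x) = argmax_c log q(c) - beta * KL(p(y|x) || q(y|c))
        kl = (p_y_given_x[:, None, :] *
              (np.log(p_y_given_x[:, None, :] + eps) -
               np.log(q_y_t[None, :, :] + eps))).sum(-1)
        new_t = np.argmax(np.log(q_t + eps)[None, :] - beta * kl, axis=1)
        if np.array_equal(new_t, t):
            break
        t = new_t
    return t

# Usage: two well-separated blobs should receive distinct labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, (20, 2)),
               rng.normal([5, 5], 0.3, (20, 2))])
labels = dib_geometric(X, k=2)
```

Note that because assignments are hard (a single argmax per point rather than a soft posterior), DIB can leave clusters empty, which is how the number of occupied clusters shrinks as $\beta$ decreases; the kinks in the resulting information curve are what the abstract proposes for model selection.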

    The information bottleneck and geometric clustering
    by D J Strouse, David J Schwab
