An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering
Assignment methods are at the heart of many algorithms for unsupervised learning and clustering, most notably the well-known K-means and Expectation-Maximization (EM) algorithms. In this work, we study several different methods of assignment, including the "hard" assignments used by K-means and the "soft" assignments used by EM. While it is known that K-means minimizes the distortion on the data and EM maximizes the likelihood, little is known about the systematic differences in behavior between the two algorithms. Here we shed light on these differences via an information-theoretic analysis. The cornerstone of our results is a simple decomposition of the expected distortion, showing that K-means (and its extension for inferring general parametric densities from unlabeled sample data) must implicitly manage a trade-off between how similar the data assigned to each cluster are and how the data are balanced among the clusters. How well the data are balanced is measured by the entropy of the partition defined by the hard assignments. In addition to letting us predict and verify systematic differences between K-means and EM on specific examples, the decomposition allows us to give a rather general argument showing that K-means will consistently find densities with less "overlap" than EM. We also study a third natural assignment method, which we call posterior assignment, that is close in spirit to the soft assignments of EM but leads to a surprisingly different algorithm.
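As a rough numerical illustration of this contrast (a sketch of ours, not code from the paper), the snippet below fits a two-component one-dimensional Gaussian mixture twice, once with hard winner-take-all assignments in the K-means style and once with the soft posterior-weighted assignments of EM, and then reports the entropy of the hard partition each fitted model induces. The data, initialization, iteration count, and helper names (`gauss`, `fit`, `partition_entropy`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: two overlapping Gaussians (an illustrative choice)
x = np.concatenate([rng.normal(-1.0, 1.0, 300), rng.normal(1.0, 1.0, 300)])

def gauss(x, mu, var):
    """Gaussian density, broadcasting over components and points."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit(x, hard, iters=100):
    """Fit a 2-component 1-D mixture with hard or soft assignments."""
    mu = np.array([-0.5, 0.5])   # illustrative symmetric initialization
    var = np.array([1.0, 1.0])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # Responsibilities: posterior probability of each component
        p = w[:, None] * gauss(x[None, :], mu[:, None], var[:, None])
        r = p / p.sum(axis=0)
        if hard:
            # Hard assignment: each point goes entirely to its
            # highest-posterior component (winner takes all)
            r = (r == r.max(axis=0)).astype(float)
        n = r.sum(axis=1)
        mu = (r * x).sum(axis=1) / n
        var = (r * (x - mu[:, None]) ** 2).sum(axis=1) / n
        w = n / n.sum()
    return mu, var, w

def partition_entropy(x, mu, var, w):
    """Entropy (nats) of the hard partition induced by assigning each
    point to its highest-posterior component under the fitted model."""
    p = w[:, None] * gauss(x[None, :], mu[:, None], var[:, None])
    counts = np.bincount(p.argmax(axis=0), minlength=len(w))
    q = counts / counts.sum()
    q = q[q > 0]
    return -(q * np.log(q)).sum()

for hard in (True, False):
    mu, var, w = fit(x, hard)
    print("hard" if hard else "soft",
          "means:", mu.round(2), "vars:", var.round(2),
          "partition entropy:", round(partition_entropy(x, mu, var, w), 3))
```

On data like this, one can compare the component means and variances recovered by the two procedures and the balance (entropy) of the resulting partitions, which gives a concrete handle on the cohesion-versus-balance trade-off described above.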