Publication Details
AFRICAN RESEARCH NEXUS
SHINING A SPOTLIGHT ON AFRICAN RESEARCH
Decision Sciences
A component-wise EM algorithm for mixtures
Journal of Computational and Graphical Statistics, Volume 10, No. 4, Year 2001
Description
Maximum likelihood estimation in finite mixture distributions is typically approached as an incomplete data problem to allow application of the expectation-maximization (EM) algorithm. In its general formulation, the EM algorithm involves the notion of a complete data space, in which the observed measurements and incomplete data are embedded. An advantage is that many difficult estimation problems are facilitated when viewed in this way. One drawback is that the simultaneous update used by standard EM requires overly informative complete data spaces, which leads to slow convergence in some situations. In the incomplete data context, it has been shown that the use of less informative complete data spaces, or equivalently smaller missing data spaces, can lead to faster convergence without sacrificing simplicity. However, in the mixture case, little progress has been made in speeding up EM. In this article we propose a component-wise EM for mixtures. It uses, at each iteration, the smallest admissible missing data space by intrinsically decoupling the parameter updates. Monotonicity is maintained, although the estimated proportions may not sum to one during the course of the iteration. However, we prove that the mixing proportions will satisfy this constraint upon convergence. Our proof of convergence relies on the interpretation of our procedure as a proximal point algorithm. For performance comparison, we consider standard EM as well as two other algorithms based on missing data space reduction, namely the SAGE and AECM algorithms. We provide adaptations of these general procedures to the mixture case. We also consider the ECME algorithm, which is not a data augmentation scheme but still aims at accelerating EM. Our numerical experiments illustrate the advantages of the component-wise EM algorithm relative to these other methods. © 2001 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
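To make the component-wise idea concrete, here is a minimal sketch (not the authors' code, and simplified to a one-dimensional Gaussian mixture) of an EM sweep that updates one component at a time. Each component's responsibilities are recomputed with the current parameters just before its own M-step, so the mixing proportions may transiently fail to sum to one mid-sweep, matching the behavior described in the abstract. All function and variable names are illustrative.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Univariate normal density, evaluated pointwise.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def componentwise_em_sweep(x, pi, mu, var):
    """One full sweep: refresh components k = 0..K-1 one at a time.

    pi, mu, var are updated in place; only component k's parameters
    change in iteration k of the inner loop.
    """
    K = len(pi)
    for k in range(K):
        # Partial E-step: responsibilities for component k only,
        # using the parameters as they stand *right now*.
        dens = np.array([p * gaussian_pdf(x, m, v)
                         for p, m, v in zip(pi, mu, var)])
        resp_k = dens[k] / dens.sum(axis=0)
        # Partial M-step: re-estimate component k's parameters.
        nk = resp_k.sum()
        pi[k] = nk / len(x)   # proportions need not sum to 1 mid-sweep
        mu[k] = (resp_k * x).sum() / nk
        var[k] = (resp_k * (x - mu[k]) ** 2).sum() / nk
    return pi, mu, var

# Synthetic two-component data to exercise the sweep.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
for _ in range(50):
    pi, mu, var = componentwise_em_sweep(x, pi, mu, var)
```

On this well-separated example the component means settle near the true values of -2 and 3, and the proportions return to (approximately) summing to one at convergence, consistent with the constraint result proved in the article.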
Authors & Co-Authors
Celeux, Gilles
France, Saint-Ismier
Inria Rhône-Alpes
Chrétien, Stéphane
France, Besançon
Université de Franche-Comté
Forbes, Florence
France, Saint-Ismier
Inria Rhône-Alpes
Mkhadri, Abdallah
Morocco, Marrakech
Faculté des Sciences Semlalia
Statistics
Citations: 166
Authors: 4
Affiliations: 3
Identifiers
DOI:
10.1198/106186001317243403
e-ISSN:
1537-2715