What is EM in machine learning?

Arthur Dempster, Nan Laird, and Donald Rubin introduced the Expectation-Maximization (EM) algorithm in 1977. It is an iterative method for estimating the parameters of statistical models that contain latent variables.



In the data sets used in machine learning applications, some relevant variables may never be observed directly during learning; these hidden quantities are called latent variables. The EM algorithm estimates these latent variables, together with the model parameters, from the observed data. A good way to understand it is to look first at what it is used for and then at the two steps it repeats.



What is the EM algorithm used for?



The EM algorithm is used to find (local) maximum likelihood estimates of the parameters of a statistical model in cases where the likelihood equations cannot be solved directly. Typically, these models involve latent variables in addition to unknown parameters and known data observations.
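As a concrete illustration, the sketch below fits a two-component Gaussian mixture, a classic latent-variable model in which the unobserved component label of each point is the latent variable. It assumes scikit-learn is available and uses its GaussianMixture estimator, which runs EM internally; the synthetic data and parameter choices are illustrative only.

import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1D data drawn from two Gaussian components (illustrative only).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 1.5, 200)]).reshape(-1, 1)

# fit() runs EM to find (locally) maximum-likelihood estimates of the
# mixture weights, means, and covariances.
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel())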






What are the steps of the EM algorithm?


The EM algorithm alternates between two steps, repeated until the parameter estimates converge (a short code sketch follows the list):


E-step: probabilistically assign each data point to a class, based on the current hypothesis h for the class distribution parameters.

M-step: update the hypothesis h for the class distribution parameters, based on these new soft assignments.
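To make the two steps concrete, the sketch below runs them by hand for a two-component, one-dimensional Gaussian mixture using NumPy. The data, the initial hypothesis h, and the fixed number of iterations are assumptions made for illustration; a practical implementation would also monitor the log-likelihood for convergence.

import numpy as np

# Illustrative data: two overlapping 1D Gaussian clusters (synthetic).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 200)])

# Initial hypothesis h: means, variances, and mixing weights for 2 classes.
means = np.array([-1.0, 1.0])
variances = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

def gaussian_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

for _ in range(100):
    # E-step: probabilistic assignment of each point to each class
    # (the "responsibilities"), given the current hypothesis h.
    dens = np.stack([w * gaussian_pdf(x, m, v)
                     for w, m, v in zip(weights, means, variances)])
    resp = dens / dens.sum(axis=0)   # shape (2, n): P(class k | x_i, h)

    # M-step: update the hypothesis h from the soft assignments.
    nk = resp.sum(axis=1)
    means = (resp * x).sum(axis=1) / nk
    variances = (resp * (x - means[:, None]) ** 2).sum(axis=1) / nk
    weights = nk / len(x)

print("means:", means, "variances:", variances, "weights:", weights)

Each pass through the loop performs one E-step (computing the responsibilities) followed by one M-step (re-estimating the parameters), and the estimates move toward a local maximum of the likelihood.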
