\documentclass[fleqn]{article}
\usepackage{mydefs}
\usepackage{notes}
\usepackage{url}
\usepackage{graphicx}
\begin{document}
\lecture{Machine Learning}{HW12: Expectation Maximization}{CS 689, Spring 2015}
% IF YOU ARE USING THIS .TEX FILE AS A TEMPLATE, PLEASE REPLACE
% "CS 689, Spring 2015" WITH YOUR NAME AND UID.
Hand in via moodle at: \url{https://moodle.umass.edu/course/view.php?id=20836}.
Remember that only PDF submissions are accepted. We encourage using
\LaTeX\ to produce your writeups. See \verb+hw00.tex+ for an example
of how to do so. You can make a \verb+.pdf+ out of the \verb+.tex+ by
running ``\verb+pdflatex hw00.tex+''. You'll need \verb+mydefs.sty+ and \verb+notes.sty+, which can be downloaded from the course page.
\bee
\i Aside from the fact that GMMs use soft assignments and k-means uses hard assignments, there are
other differences between the two approaches. What are they?
\i Prove Jensen's inequality using the definition of concavity and induction.
\i I have two coins, A and B. Your job is to figure out $\pi_A$ and $\pi_B$, the probability of heads for each of these coins. However, I'm evil and won't let you flip the coins yourself. What I will do, however, is
flip them on my own and tell you the results. In particular, I say something like: I picked one of the
coins, flipped it 10 times, and it came up heads 7 times and tails 3 times. Then I picked one of the
coins (perhaps the same one, perhaps not), flipped it 10 times, and it came up heads 5 times and tails
5 times. I tell you this information $N$ times (so you see the results of a total of $10N$ coin flips).
Set this up as an EM problem. What are the data, what are the parameters, and what are the hidden
variables? Derive the update equations for the E-step and the M-step.
\ene
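To make the setup of the coin problem concrete, here is a minimal sketch of the data-generating process in Python. The particular biases (0.7 and 0.5) and the uniform choice of coin are illustrative assumptions, not part of the problem statement; the point is to see which quantities are observed and which are hidden.

```python
import random

# Assumed (hypothetical) parameters for illustration only; in the
# problem, pi_A and pi_B are unknown and the coin choice is hidden.
PI = {"A": 0.7, "B": 0.5}   # probability of heads for each coin
FLIPS_PER_ROUND = 10

def simulate(n_rounds, seed=0):
    """Simulate the generative process: each round, pick a coin
    uniformly at random (the hidden variable) and report only the
    number of heads in 10 flips (the observed data)."""
    rng = random.Random(seed)
    observed = []
    for _ in range(n_rounds):
        coin = rng.choice(["A", "B"])  # hidden: which coin was used
        heads = sum(rng.random() < PI[coin] for _ in range(FLIPS_PER_ROUND))
        observed.append(heads)         # observed: only the heads count
    return observed

print(simulate(5))
```

Note that the output contains only heads counts; the per-round coin identities are discarded, which is exactly what makes this an EM problem rather than straightforward maximum likelihood.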
Congratulations on finishing all the homework assignments! Go eat some cake:
\begin{figure}[h]
\centering
\includegraphics[width=2in]{portal-cake.jpg}
\end{figure}
\end{document}