Graphical models, referred to in various
guises as ``Bayesian networks,'' ``Markov random fields,''
``factor graphs,'' or ``stochastic processes on graphs,'' are
an elegant marriage of graph theory and probability theory.
They provide a computational framework in
which to do probability and statistics, justifying and unifying many
classical techniques for inference, learning, and decision-making, and
form the basis for a host of applications ranging from speech recognition to
error-correcting codes. This class will cover the basic algorithms for
inference and parameter estimation in directed and undirected graphical models.
Syllabus:
Hidden Markov Models (HMMs): representation as a graph, inference, parameter
estimation
Semantics of Markov Random Fields and Bayesian Networks: Markov blanket,
d-separation, Hammersley-Clifford theorem
Exact inference on trees using belief propagation
Exact inference in Markov Random Fields and Bayesian Networks: Triangulation
and elimination, factor graphs, cluster trees.
Approximate inference in Markov Random Fields and Bayesian Networks: sampling
methods, loopy belief propagation, variational inference.
ML Parameter estimation in Markov Random Fields and Bayesian Networks:
fully observed, partially observed, EM, IPF
Parameter estimation beyond ML: discriminative training, maximum entropy.
Continuous graphical models: factor analysis, ICA, and the Kalman filter.
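As a taste of the inference material above, here is a minimal sketch (not course-provided code) of exact inference in an HMM via the forward algorithm, which computes the likelihood of an observation sequence and the filtered state distributions. The toy two-state weather model and its probabilities are invented purely for illustration.

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Forward algorithm for an HMM.

    Returns P(obs) and the unnormalized filtered distributions
    alpha_t(s) = P(o_1..o_t, X_t = s).
    """
    alphas = []
    # Base case: alpha_1(s) = P(X_1 = s) * P(o_1 | X_1 = s)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    alphas.append(alpha)
    # Recursion: alpha_t(s) = P(o_t | s) * sum_r alpha_{t-1}(r) * P(s | r)
    for o in obs[1:]:
        alpha = {
            s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
            for s in states
        }
        alphas.append(alpha)
    # Likelihood of the whole sequence: sum over final hidden state
    return sum(alpha.values()), alphas

# Hypothetical toy model: hidden weather states, observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

likelihood, _ = forward(("walk", "shop", "clean"),
                        states, start_p, trans_p, emit_p)
```

The same recursion, run backwards and combined with the forward pass, gives the smoothed posteriors needed for the E-step of EM (Baum-Welch), which connects this item to the parameter-estimation topics above.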
Grading
There will be a final project. The homework will count for 50% of your grade
and the final project for the remaining 50%.
Homework
There will be (roughly) weekly homework assignments, due one week after
they are handed out.
You should try to solve problems on your own. If you discuss problems
with another student (which is OK!), indicate on your writeup the
name of your collaborator(s).
In any case you must write up your own solutions.