Machine Learning 2

Course Description

Machine learning is a branch of artificial intelligence concerned with the design of algorithms that improve their performance based on empirical data. Computational learning theory studies the fundamental limitations of machine learning. The first part of the course covers the basic methods of computational learning theory and their application to standard classification and regression algorithms. The second part of the course is devoted to advanced machine learning concepts: Bayesian machine learning, sparse kernel machines, and semi-supervised machine learning.

Learning Outcomes

  1. Define the basic concepts of computational learning theory
  2. Explain the PAC and VC frameworks
  3. Apply computational learning theory methods to basic supervised machine learning algorithms
  4. Explain the theoretical assumptions, advantages, and limitations of Bayesian machine learning
  5. Define the Bayesian variants of classification and regression models
  6. Explain the theoretical assumptions, advantages, and limitations of sparse kernel machines
  7. Explain the main approaches to semi-supervised machine learning and list their advantages and disadvantages
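
As a worked illustration of the PAC framework from outcome 2 (an illustrative formula, not part of the official syllabus): for a finite hypothesis class in the realizable setting, the standard sample-complexity bound reads

```latex
% PAC sample complexity for a finite hypothesis class H (realizable case):
% with probability at least 1 - \delta, any hypothesis consistent with
% m i.i.d. training examples has true error at most \epsilon whenever
m \;\ge\; \frac{1}{\epsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)
```

Here $\epsilon$ is the accuracy parameter, $\delta$ the confidence parameter, and $|H|$ the size of the hypothesis class; the VC dimension generalizes the $\ln |H|$ term to infinite classes.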

Forms of Teaching

Independent assignments

Week by Week Schedule

  1. Machine learning approaches and paradigms
  2. Overfitting and model selection, empirical and structural risk minimization
  3. Probably approximately correct learning (PAC), U-learnability
  4. Probably approximately correct learning (PAC), U-learnability
  5. Vapnik-Chervonenkis dimension
  6. Online learning, Weak learning and boosting
  7. Self-training and co-training, Propagation methods (label propagation, random walks), Semi-supervised and transductive support vector machines, Graph-based algorithms, Multiview algorithms
  8. Midterm exam
  9. Beta-binomial model, Dirichlet-multinomial model
  10. Ridge regression, Bayesian linear regression, Bayesian logistic regression
  11. Sparse linear models (lasso, coordinate descent, LARS, group lasso, proximal and gradient projection methods, elastic net), Kernel functions (RBF, graph kernels, Mercer kernels, linear kernels), Kernel machines and sparse kernel machines
  12. Bayesian networks, Markov networks, Variational inference (variational Bayes, belief propagation), Monte Carlo inference (rejection sampling, importance sampling)
  13. Markov Chain Monte Carlo sampling methods: Gibbs sampling and the Metropolis-Hastings algorithm, Mixture models, Expectation maximization algorithm, Markov and hidden Markov models, Conditional random fields
  14. Bayesian ensembles (Bayesian parameter averaging, Bayesian model combination)
  15. Final exam
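
To give a flavour of the week 10 topics, the following is a minimal sketch (not course material; the synthetic data and parameter values are illustrative assumptions) showing that the posterior mean of Bayesian linear regression with a zero-mean Gaussian prior coincides with the ridge regression solution when the ridge penalty equals the noise-to-prior variance ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: y = 1 + 2x + Gaussian noise
X = np.column_stack([np.ones(50), rng.uniform(-1, 1, 50)])
w_true = np.array([1.0, 2.0])
y = X @ w_true + rng.normal(0.0, 0.1, 50)

# Ridge regression: w = (X^T X + lam I)^{-1} X^T y
lam = 0.1
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# Bayesian linear regression with prior w ~ N(0, tau2 I) and
# observation noise N(0, sigma2); the posterior is Gaussian with
# precision A and mean A^{-1} X^T y / sigma2
sigma2, tau2 = 0.01, 0.1           # chosen so sigma2 / tau2 == lam
A = X.T @ X / sigma2 + np.eye(2) / tau2   # posterior precision
w_post_mean = np.linalg.solve(A, X.T @ y / sigma2)
post_cov = np.linalg.inv(A)               # posterior covariance

print(w_ridge, w_post_mean)  # the two weight vectors agree
```

Algebraically, multiplying the posterior equation by sigma2 recovers the ridge normal equations with lam = sigma2 / tau2, so the two solutions are identical; the Bayesian treatment additionally yields the posterior covariance for uncertainty quantification.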

Study Programmes


Literature

  1. Shai Ben-David, Shai Shalev-Shwartz: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
  2. Bernhard Schölkopf, Alexander J. Smola: Learning with Kernels. The MIT Press, 2001.
  3. Daphne Koller, Nir Friedman: Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.

For students


ID 222787
Winter semester
L3 English Level
L1 e-Learning
30 Lectures
15 Exercises
15 Laboratory exercises