Machine Learning 2
Data is displayed for academic year 2024/2025.
Lecturers
Course Description
Machine Learning is a branch of artificial intelligence concerned with designing algorithms that improve their performance based on empirical data. Computational learning theory studies the fundamental limitations of machine learning. The first part of the course covers the basic methods of computational learning theory and their application to fundamental classification and regression algorithms. The second part is devoted to advanced machine learning concepts: Bayesian machine learning, sparse kernel machines, and semi-supervised machine learning.
Study Programmes
University graduate
[FER3-EN] Control Systems and Robotics - profile
Elective course
(3rd semester)
Learning Outcomes
- Define the basic concepts of computational learning theory
- Explain the PAC and VC frameworks
- Apply computational learning theory methods to basic supervised machine learning algorithms
- Explain the theoretical assumptions, advantages, and limitations of Bayesian machine learning
- Define the Bayesian variants of classification and regression models
- Explain the theoretical assumptions, advantages, and limitations of sparse kernel machines
- Explain the main approaches to semi-supervised machine learning and list their advantages and disadvantages
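As an illustration of the PAC framework named in the outcomes above, the classical sample-complexity bound for a finite hypothesis class in the realizable setting states that

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right)
```

examples suffice for empirical risk minimization over $\mathcal{H}$ to return, with probability at least $1-\delta$, a hypothesis with true error at most $\varepsilon$.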
Forms of Teaching
Lectures
Exercises
Independent assignments
Laboratory
Week by Week Schedule
- Machine learning approaches and paradigms
- Overfitting and model selection, empirical and structural risk minimization
- Probably approximately correct learning (PAC), U-learnability
- Probably approximately correct learning (PAC), U-learnability
- Vapnik-Chervonenkis dimension
- Online learning, Weak learning and boosting
- Self-training and co-training, Propagation methods (label propagation, random walks), Semi-supervised and transductive support vector machines, Graph-based algorithms, Multiview algorithms
- Midterm exam
- Beta-binomial model, Dirichlet-multinomial model
- Ridge regression, Bayesian linear regression, Bayesian logistic regression
- Sparse linear models (lasso, coordinate descent, LARS, group lasso, proximal and gradient projection methods, elastic net), Kernel functions (RBF, graph kernels, Mercer kernels, linear kernels), Kernel machines and sparse kernel machines
- Bayesian networks, Markov networks, Variational inference (variational Bayes, belief propagation), Monte Carlo inference (rejection sampling, importance sampling)
- Markov Chain Monte Carlo sampling methods: Gibbs sampling and the Metropolis-Hastings algorithm, Mixture models, Expectation maximization algorithm, Markov and hidden Markov models, Conditional random fields
- Bayesian ensembles (Bayesian parameter averaging, Bayesian model combination)
- Final exam
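To make one schedule topic concrete, here is a minimal NumPy sketch of Bayesian linear regression (a week-10 topic): with a Gaussian prior w ~ N(0, α⁻¹I) and Gaussian observation noise of precision β, the weight posterior has a closed form. Function and variable names are illustrative, not taken from the course material.

```python
import numpy as np

def bayesian_linear_regression(Phi, t, alpha=1.0, beta=25.0):
    """Closed-form weight posterior for a linear-Gaussian model.

    Prior: w ~ N(0, alpha^{-1} I); likelihood: t ~ N(Phi w, beta^{-1} I).
    Returns the posterior mean m_N and covariance S_N.
    """
    D = Phi.shape[1]
    S_N_inv = alpha * np.eye(D) + beta * Phi.T @ Phi   # posterior precision
    S_N = np.linalg.inv(S_N_inv)                       # posterior covariance
    m_N = beta * S_N @ Phi.T @ t                       # posterior mean
    return m_N, S_N

# Toy data: true slope 2, noise std 0.2 (matches beta = 1 / 0.2**2)
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
t = 2.0 * x + rng.normal(scale=0.2, size=50)
Phi = x[:, None]                     # identity basis, single weight
m_N, S_N = bayesian_linear_regression(Phi, t)
```

With 50 observations the posterior mean concentrates near the true slope, and S_N quantifies the remaining uncertainty; the same formulas underlie the predictive distribution discussed in the Bayesian part of the course.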
Literature
Shai Shalev-Shwartz, Shai Ben-David: Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
Bernhard Schölkopf, Alexander J. Smola: Learning with Kernels. The MIT Press, 2001.
Daphne Koller, Nir Friedman: Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
For students
General
ID 223750
Winter semester
5 ECTS
L3 English Level
L1 e-Learning
30 Lectures
0 Seminar
15 Exercises
15 Laboratory exercises
0 Project laboratory
0 Physical education exercises
Grading System
Excellent
Very Good
Good
Sufficient