Deep Learning 1

Data is displayed for academic year 2023/2024.

Laboratory exercises

Course Description

Deep learning is a branch of machine learning based on complex data representations obtained by a sequence of trained non-linear transformations. Deep learning methods have been successfully applied in many important artificial intelligence fields such as computer vision, natural language processing, speech and audio understanding, as well as bioinformatics. This course introduces the most important deep discriminative and generative models with a special focus on practical implementations. Part one introduces key elements of classical feed-forward neural networks and overviews basic building blocks, regularization techniques and learning procedures which are specific to deep models. Part two considers deep convolutional models and illustrates their application in image classification and natural language processing. Part three considers sequence modelling with deep recurrent models and illustrates applications in natural language processing. Finally, part four is devoted to generative deep models and their applications in vision and text representation. All concepts are accompanied by examples and exercises in Python. Most exercises will be implemented in a suitable deep learning framework (e.g. TensorFlow or PyTorch).
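To illustrate the kind of exercise the course describes, the sketch below trains a one-hidden-layer feed-forward network on XOR with manual backpropagation in plain NumPy. This is an illustrative assumption about the style of the exercises, not actual course material; the architecture, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

# Minimal sketch (illustrative, not course material): a feed-forward
# network as a "sequence of trained non-linear transformations",
# trained on XOR with manual backprop and gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(3000):
    # forward pass: two non-linear transformations
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # binary cross-entropy loss
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # backward pass (manual backpropagation)
    dz = (p - y) / len(X)             # gradient w.r.t. output logits
    dW2 = h.T @ dz; db2 = dz.sum(0)
    dh = dz @ W2.T * (1 - h ** 2)     # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(loss)  # loss after training
```

In the course itself, the forward pass, loss and update step would typically be expressed with a framework's automatic differentiation rather than hand-derived gradients.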

Study Programmes

University graduate
[FER3-EN] Control Systems and Robotics - profile
Elective course (3rd semester)
Elective courses (2nd semester)

Learning Outcomes

  1. Explain advantages of deep learning with respect to the alternative machine learning approaches.
  2. Apply techniques for training of deep models.
  3. Explain application fields of deep discriminative and generative models.
  4. Apply deep learning techniques in understanding of images and text.
  5. Distinguish kinds of deep models which are appropriate in supervised, semi-supervised and unsupervised applications.
  6. Analyze and evaluate the performance of deep models.
  7. Design deep models in a high-level programming language.

Forms of Teaching


The course does not include lectures in English.

Independent assignments

Students may receive additional points for presenting a technical seminar.


Laboratory

Two exercises in each half of the semester.

Grading Method

                         Continuous Assessment           Exam
Type                     Threshold  Percent of Grade     Threshold  Percent of Grade
Laboratory Exercises     50 %       20 %                 50 %       0 %
Mid Term Exam: Written   0 %        40 %                            0 %
Final Exam: Written      0 %        40 %
Exam: Written                                            50 %       80 %
Exam: Oral                                                          20 %

Week by Week Schedule

  1. Motivation for deep learning; course information; basics of machine learning.
  2. Fully connected feed-forward models; loss functions; activation functions; universal approximation theorem.
  3. Recovering gradients by backward propagation; learning with gradient descent; differentiable programming.
  4. Deep convolutional networks: layers, architectures, visualization, fine tuning, applications, implementation.
  5. Convolutional models for computer vision; convolutional layers; pooling layers; efficient implementation; classification architectures.
  6. Convolutional backprop; recovering gradient in 1D and 2D case; implementation details.
  7. Optimization with gradient descent; challenges in learning deep models; learning with momentum.
  8. Midterm exam
  9. Nesterov accelerated momentum, adaptive momentum; normalization of activations: LRN, batchnorm.
  10. Correspondence embeddings; Siamese models; triplet loss.
  11. Regularization; penalizing the parameter norm (weight decay); data augmentation and noise injection; early stopping; parameter tying and sharing; bagging and ensembling; excluding activations (dropout).
  12. Recurrent models; sequence modeling; applications in natural language understanding; formulation and optimization.
  13. RNN cell extensions; exploding gradient; advanced RNN cells; attention.
  14. Details of convolutional models; residual and dense connectivity; advanced techniques; model interpretation and prediction explanation; object detection; dense prediction; real time; challenges.
  15. Final exam
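The optimization topics of weeks 7 and 9 can be sketched in a few lines: plain gradient descent versus gradient descent with momentum on an ill-conditioned quadratic, where momentum accelerates progress along the slow direction. The quadratic, learning rate and momentum coefficient below are illustrative assumptions, not course-mandated values.

```python
import numpy as np

# Sketch of weeks 7 and 9: gradient descent with and without momentum
# on f(x) = 0.5 * (x1^2 + 100 * x2^2), an ill-conditioned quadratic.
def grad(x):
    return np.array([1.0, 100.0]) * x  # gradient of the quadratic

def run(momentum, steps=100, lr=0.015):
    x = np.array([1.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        v = momentum * v - lr * grad(x)  # velocity accumulates past gradients
        x = x + v
    return np.linalg.norm(x)             # distance from the optimum at 0

plain = run(momentum=0.0)
heavy = run(momentum=0.9)
print(plain, heavy)
```

With momentum 0.9, the iterate ends far closer to the optimum than plain gradient descent, whose progress is throttled by the small curvature along the first coordinate.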


Literature

Ian Goodfellow, Yoshua Bengio, Aaron Courville (2016), Deep Learning, MIT Press
Nikhil Buduma, Nicholas Locascio (2017), Fundamentals of Deep Learning, O'Reilly Media

For students


ID 223068
  Summer semester
L1 English Level
L1 e-Learning
45 Lectures
0 Seminar
0 Exercises
8 Laboratory exercises
0 Project laboratory

Grading System

89 Excellent
76 Very Good
63 Good
50 Sufficient
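The thresholds above map a final percentage to a grade; a small helper (an illustration, not official course code) makes the mapping explicit.

```python
# Hypothetical helper mapping a final percentage to the grade
# thresholds listed above (89 / 76 / 63 / 50).
def grade(percent):
    if percent >= 89:
        return "Excellent"
    if percent >= 76:
        return "Very Good"
    if percent >= 63:
        return "Good"
    if percent >= 50:
        return "Sufficient"
    return "Fail"

print(grade(77))  # -> Very Good
```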