Section outline

  • Course overview


    The course is divided into three main parts:

        1. Intro to Deep Learning and Multilayer Perceptrons (ca. 20h): we will study the basic concepts of Artificial Neural Networks (activation functions, backpropagation, neural network modules, regularization and normalization...)
        2. Deep Learning for images and sequences (ca. 18h): we will introduce Convolutional Neural Networks and use them to solve image- and audio-related tasks. Moreover, we will cover models, such as Recurrent Neural Networks, for analyzing more general sequential data. Finally, we will introduce the concept of attention and discuss Transformers.
        3. Self-Supervised Learning (SSL) (ca. 8h): we will see techniques for learning from the data itself, without using labels. We will cover generative models such as GANs and Variational Autoencoders, then move on to specific non-generative SSL techniques.

    If there is time left at the end of the course, we will also cover Graph Neural Networks (2h).

    Useful links


        Course Team:

    Evaluation

    The evaluation will be based upon two items:

       * Final Exam
       * Assignments

    Final Exam

    The final exam will be an oral examination. Students can choose either to present a project, which can also be done in teams of up to 3 people, or to take a "regular" oral examination. If a project is presented, questions will only concern the topics covered by the project itself, unless the student shows insufficient knowledge of some parts of the course during the presentation.

    Assignments

    There will be assignments, usually handed out after the labs. Two assignments will be "compulsory", while the others will be elective and can be seen as a way to practice PyTorch before the exam. For this reason, the assignments will be concentrated in the first half of the course, leaving time in the second half for completing the project.
    If the "compulsory" assignments are not completed before the deadline, additional theoretical questions will be asked during the exam, even if the student presents a project.
    The assignments will generally not lower the final mark; they will only be used to possibly increase it. However, honors (lode) will not be awarded if the "compulsory" assignments are not completed by the deadline.

    Lecture recordings and material

    The recordings will be uploaded after the lectures to the course Team, whose link can be found above.
    The materials (slides, papers, etc.) will be listed on the Team and also uploaded here.
    Unfortunately, we cannot upload the recordings here due to the limited storage space at our disposal.

    Books and references

    This course has no single official reference, as, at the moment, no comprehensive and up-to-date Deep Learning book with the required level of detail is available. Each lecture will have its own references, but we will mainly draw from the following:

        MIT Deep Learning Book, by Goodfellow, Bengio and Courville
        Michael Nielsen’s Deep Learning Book
        Kevin Murphy’s Probabilistic Machine Learning (new edition)
        Bishop’s Pattern Recognition and Machine Learning
        Parallel Distributed Processing (PDP), Volume 1 and 2, Explorations in the Microstructure of Cognition by David E. Rumelhart, James L. McClelland and PDP Research Group
        Jurafsky and Martin’s Speech and Language Processing

    The reference list for each lecture will be provided on this Moodle page.

    Class schedule and rooms

    • Tuesday 9-11 - Aula I, Building C1
    • Thursday 11-13 - Aula A, Building C9
    We also have a slot on Monday, whose hours and room are yet to be decided. It will seldom be used. Additional info will follow soon.