Course offered in the second semester of the M1.
This course gives a general introduction to Machine Learning, from algorithms to the theoretical
foundations of Statistical Learning Theory.
• General introduction to Machine Learning: learning settings, curse of dimensionality, overfitting/underfitting, etc.
• Overview of Supervised Learning Theory: true risk versus empirical risk, loss functions, regularization,
bias/variance trade-off, complexity measures, generalization bounds.
• Linear/Logistic/Polynomial Regression: batch/stochastic gradient descent, closed-form solution.
• Sparsity in Convex Optimization.
• Support Vector Machines: large margin, primal problem, dual problem, kernelization, etc.
• Neural Networks, Deep Learning.
• Theory of boosting: Ensemble methods, Adaboost, theoretical guarantees.
• Non-parametric Methods: k-Nearest Neighbors.
• Domain Adaptation.
• Metric Learning.
• Optimal Transport.
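To give a concrete flavor of the regression topic above, here is a minimal sketch (an illustrative example, not course material) comparing the closed-form solution of linear least squares with batch gradient descent on the squared loss; the synthetic data and learning rate are assumptions chosen for the demo.

```python
import numpy as np

# Synthetic regression data (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Closed-form solution of the normal equations: (X^T X) w = X^T y
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on the mean squared error
w = np.zeros(3)
lr = 0.1  # learning rate, chosen small enough for convergence here
for _ in range(500):
    grad = 2 / len(y) * X.T @ (X @ w - y)  # gradient of the MSE
    w -= lr * grad

print(np.allclose(w, w_closed, atol=1e-3))  # both recover nearly the same weights
```

With enough iterations and a suitable step size, gradient descent converges to the same minimizer as the closed form; stochastic gradient descent replaces the full-batch gradient with a per-example (or mini-batch) estimate.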
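As a second illustration, the non-parametric k-Nearest-Neighbors method from the list can be sketched in a few lines (an assumed toy example, not the course's implementation): predict the majority label among the k closest training points in Euclidean distance.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
    nearest = np.argsort(dists)[:k]              # indices of the k nearest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]             # most frequent label wins

# Tiny two-class toy dataset (assumed for illustration)
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.05, 0.1])))  # → 0
```

The method has no training phase and no parametric model; its behavior is governed entirely by the stored data, the distance, and the choice of k, which connects it to the bias/variance trade-off discussed in the theory part.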
Teaching methods: Lectures and Lab sessions.
Form(s) of assessment: written exam (50%) and project (50%).
Recommended reading:
– Statistical Learning Theory, V. Vapnik, Wiley, 1998.
– Machine Learning, T. Mitchell, McGraw-Hill, 1997.
– Pattern Recognition and Machine Learning, C. M. Bishop, Springer, 2006.
– Convex Optimization, S. Boyd & L. Vandenberghe, Cambridge University Press, 2004.
– Online Machine Learning courses: https://www.coursera.org/
Expected prior knowledge: basic mathematics and statistics; convex optimization.