
2 talks: "Graph Neural Networks on Large Random Graphs: Convergence, Stability, Universality" and "Hyperparameter Selection via Algorithmic Differentiation"

Nicolas Keriven (CNRS, Gipsa) and Samuel Vaiter (CNRS, Université Côte d'Azur)
When: Jun 17, 2022, from 01:00 to 02:30
Attendees: Nicolas Keriven; Samuel Vaiter

Nicolas Keriven: "Graph Neural Networks on Large Random Graphs: Convergence, Stability, Universality"

Abstract: In this talk, we will discuss some theoretical properties of Graph Neural Networks (GNNs) on large graphs. Most existing analyses of GNNs are purely combinatorial and may fail to reflect their behavior with respect to large-scale structures in graphs, such as communities of well-connected nodes or manifold structures. To address this, we assume that the graphs of interest are generated by classical models of random graphs. We first give non-asymptotic convergence bounds of GNNs toward "continuous" equivalents as the number of nodes grows. We then study their stability to small deformations of the underlying random graph model, a crucial property of traditional CNNs. Finally, we study their universality and approximation power, and show how some recent GNNs are more powerful than others. This is joint work with Samuel Vaiter (CNRS) and Alberto Bietti (NYU).
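To give a concrete feel for the setting of the abstract (not material from the talk itself), here is a minimal sketch, assuming graphs drawn from a two-community stochastic block model and a single mean-aggregation GNN layer; all function names, parameters, and data are illustrative assumptions.

```python
# Illustrative sketch only: a one-layer GNN with mean aggregation applied to
# graphs sampled from a 2-block stochastic block model (SBM), showing the kind
# of large-graph concentration the abstract refers to.
import numpy as np

rng = np.random.default_rng(0)

def sbm_adjacency(n, p_in=0.8, p_out=0.2):
    """Sample a symmetric adjacency matrix from a 2-block SBM on n nodes."""
    labels = rng.integers(0, 2, size=n)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper + upper.T).astype(float), labels

def gnn_layer(adj, features, weight):
    """Mean-aggregation GNN layer: x_i <- relu(mean_{j ~ i} x_j @ W)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    return np.maximum(((adj @ features) / deg) @ weight, 0.0)

# As n grows, the node-wise outputs concentrate around values determined by
# the SBM parameters ("continuous" limit) rather than by the sampled graph.
for n in (100, 500, 2000):
    adj, labels = sbm_adjacency(n)
    x = labels[:, None].astype(float)          # 1-d node features
    out = gnn_layer(adj, x, np.ones((1, 1)))
    print(n, out[labels == 0].mean(), out[labels == 1].mean())
```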

Samuel Vaiter: "Hyperparameter Selection via Algorithmic Differentiation"

Abstract: Setting regularization (hyper-)parameters for variational estimators in imaging or machine learning is notoriously difficult. Grid search requires choosing a predefined grid of parameters and scales exponentially with the number of parameters, which quickly becomes inconvenient or even impossible in imaging. Another class of approaches casts hyperparameter optimization as a bi-level optimization problem, typically solved by gradient descent. A key challenge for these approaches is the estimation of the gradient w.r.t. the hyperparameters. In this presentation, I will show how algorithmic/automatic differentiation can help to overcome this challenge, both for inverse problems with a differentiable Stein Unbiased Risk Estimator and in regression using a held-out loss.
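As a rough illustration of the bi-level, held-out-loss setting mentioned in the abstract (not the speaker's method), here is a minimal sketch, assuming a ridge regression inner problem with a closed-form solver and JAX automatic differentiation for the hypergradient; all names, step sizes, and data are hypothetical.

```python
# Illustrative sketch only: gradient of a held-out loss w.r.t. the ridge
# regularization parameter, obtained by automatic differentiation through the
# inner solver, then used for gradient descent on log(lambda).
import jax
import jax.numpy as jnp

def ridge_solution(lam, X_train, y_train):
    """Inner problem: closed-form ridge estimator beta(lambda)."""
    gram = X_train.T @ X_train + lam * jnp.eye(X_train.shape[1])
    return jnp.linalg.solve(gram, X_train.T @ y_train)

def held_out_loss(log_lam, X_train, y_train, X_val, y_val):
    """Outer (bi-level) objective: validation error of beta(lambda)."""
    beta = ridge_solution(jnp.exp(log_lam), X_train, y_train)
    return jnp.mean((X_val @ beta - y_val) ** 2)

# Hypergradient d(validation loss)/d(log lambda) via autodiff, instead of a
# grid search over lambda.
hypergrad = jax.grad(held_out_loss)

# Hypothetical synthetic data.
X = jax.random.normal(jax.random.PRNGKey(0), (100, 20))
y = X @ jnp.ones(20) + 0.1 * jax.random.normal(jax.random.PRNGKey(1), (100,))
X_tr, y_tr, X_val, y_val = X[:70], y[:70], X[70:], y[70:]

log_lam = jnp.log(1.0)
for _ in range(50):                      # plain gradient descent on log(lambda)
    log_lam = log_lam - 0.5 * hypergrad(log_lam, X_tr, y_tr, X_val, y_val)
print("selected lambda:", float(jnp.exp(log_lam)))
```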

More information: https://maximeferreira.github.io

Talk in room M7 101 (ENS de Lyon, Monod site, 1st floor, Research side of the M7 building) (to be confirmed)