Joris Bierkens Piecewise deterministic Monte Carlo in infinite dimensions [pdf]
In Bayesian inverse problems one is interested in performing computations with respect to an infinite-dimensional probability distribution. A modern computational approach consists in approximating this infinite-dimensional distribution by running a truncated version of a genuinely infinite-dimensional Markov chain. If a well-posed infinite-dimensional chain exists, then the truncated, finite-dimensional approximation may be expected to have desirable scaling properties with respect to dimension.
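The talk concerns piecewise deterministic samplers, but the design principle (define a well-posed sampler in infinite dimensions, then truncate) is perhaps most familiar from the preconditioned Crank-Nicolson (pCN) algorithm. Below is a minimal NumPy sketch of pCN on a truncated Karhunen-Loève expansion; the potential Phi and the eigenvalue sequence lam are hypothetical choices for illustration, not taken from the talk.

```python
import numpy as np

# Minimal sketch: pCN is a classical example of a well-posed
# infinite-dimensional MCMC method; here it runs on a d-term truncation
# of a Gaussian prior with Karhunen-Loeve eigenvalues lam_j, targeting
# pi(x) proportional to exp(-Phi(x)) times the prior N(0, diag(lam)).

def pcn(Phi, lam, n_steps, beta=0.2, rng=np.random.default_rng(0)):
    d = len(lam)
    x = np.sqrt(lam) * rng.standard_normal(d)        # draw from truncated prior
    for _ in range(n_steps):
        xi = np.sqrt(lam) * rng.standard_normal(d)   # independent prior draw
        y = np.sqrt(1 - beta**2) * x + beta * xi     # pCN proposal
        # acceptance involves only the likelihood potential Phi, so it
        # does not degenerate as the truncation level d grows
        if np.log(rng.uniform()) < Phi(x) - Phi(y):
            x = y
        yield x

# toy potential (hypothetical): observe the first coordinate with Gaussian noise
samples = list(pcn(Phi=lambda x: 0.5 * (x[0] - 1.0)**2,
                   lam=1.0 / np.arange(1, 51)**2, n_steps=1000))
```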
Claire Boyer Is interpolation benign in random forests? [pdf]
Statistical wisdom suggests that very complex models, interpolating training data, will be poor at prediction on unseen examples. Yet, this aphorism has recently been challenged by the identification of benign overfitting regimes, studied especially in the case of parametric models: generalization capabilities may be preserved despite high model complexity. While it is widely known that fully-grown decision trees interpolate and, in turn, have poor predictive performance, the same behavior is yet to be analyzed for random forests.
Elsa Cazelles A novel notion of barycenter for probability distributions based on optimal weak mass transport [pdf]
We introduce weak barycenters of a family of probability distributions, based on the recently developed notion of optimal weak transport of mass. We provide a theoretical analysis of this object and discuss its interpretation in the light of convex ordering between probability measures.
Alain Durmus Non-equilibrium sampling [pdf]
Sampling from a complex distribution \(\pi\) and approximating its intractable normalizing constant \(Z\) are challenging problems. In this talk, a novel family of importance samplers (IS) and Markov chain Monte Carlo (MCMC) samplers is derived. Given an invertible map \(T\), these schemes combine (with weights) elements from the forward and backward orbits through points sampled from a proposal distribution \(\rho\). The map \(T\) does not leave the target \(\pi\) invariant, hence the name NEO, standing for Non-Equilibrium Orbits. NEO-IS provides unbiased estimators of the normalizing constant and self-normalized IS estimators of expectations under \(\pi\) while NEO-MCMC combines multiple NEO-IS estimates of the normalizing constant and an iterated sampling-importance resampling mechanism to sample from \(\pi\).
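As a rough illustration of the IS part (a sketch under simplifying assumptions, not the exact NEO construction): for an invertible map \(T\), the change of variables \(y = T^k(x)\) shows that, for each \(k\), \(\tilde\pi(T^k x)\,|\det J_{T^k}(x)|/\rho(x)\) with \(x \sim \rho\) is an unbiased estimator of \(Z\), and so is any convex combination over \(k\). The affine map below is a hypothetical stand-in for the non-equilibrium dynamics used in the talk.

```python
import numpy as np

# Illustrative sketch of an orbit-averaged IS estimator of Z: any convex
# combination over k of the single-k unbiased estimators remains unbiased.
# T is a toy affine contraction with closed-form iterates and Jacobians.

rng = np.random.default_rng(0)
d, a, b = 2, 0.9, 0.3                # hypothetical map T(x) = a*x + b
K = 5                                # orbit length
w = np.full(K + 1, 1.0 / (K + 1))    # orbit weights, summing to 1

def log_pi_tilde(x):                 # unnormalized N(0, I): true Z = (2*pi)**(d/2)
    return -0.5 * np.sum(x**2, axis=-1)

def log_rho(x):                      # proposal: N(0, 4 I), evaluated exactly
    return -0.125 * np.sum(x**2, axis=-1) - 0.5 * d * np.log(8 * np.pi)

n = 100_000
x = 2.0 * rng.standard_normal((n, d))               # x ~ rho
est = np.zeros(n)
for k in range(K + 1):
    Tk_x = a**k * x + b * (a**k - 1) / (a - 1)      # closed form for T^k(x)
    log_jac = k * d * np.log(a)                     # log |det J_{T^k}|
    est += w[k] * np.exp(log_pi_tilde(Tk_x) + log_jac - log_rho(x))

print(est.mean(), (2 * np.pi) ** (d / 2))           # estimate vs. true Z
```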
Sébastien Gadat Stochastic Gauss-Newton algorithm for optimal transport [pdf]
Jean-François Jabir Penalization methods for diffusion processes with weak constraints [pdf]
Manon Michel Computational complexity reduction by factorization in MCMC [pdf]
In the context of high-dimensional and massive datasets in Bayesian inference, it has become crucial to develop Markov chain Monte Carlo sampling methods which exhibit better scaling properties, both in terms of dynamics and in terms of computational complexity: could an exact method be derived that only needs to compute a small number of terms per move? This presentation introduces the Clock Monte Carlo method and shows how it uses the factorization of the acceptance probabilities to produce moves at a constant O(1) complexity. The method is inspired by the factorization trick and the thinning method originally used in MCMC sampling based on piecewise deterministic Markov processes. The factorization trick can lead to a significant dynamical slowdown, for instance in the case of strong frustration in physical systems, and mitigating solutions will be discussed.
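For intuition, here is a minimal sketch of the factorized Metropolis filter that underlies this approach: instead of accepting with \(\min(1, e^{-\sum_j \Delta E_j})\), each factor \(j\) accepts independently with probability \(\min(1, e^{-\Delta E_j})\), and the move is taken only if every factor accepts. The O(1) clock/thinning bookkeeping is omitted, and the factors and proposal below are hypothetical.

```python
import numpy as np

# Factorized Metropolis filter: one independent accept/reject per factor.
# The full Clock Monte Carlo method additionally uses thinning with bounds
# on the factor rates so that only O(1) factors are evaluated per move.

def factorized_metropolis_step(x, propose, factor_energies, rng):
    y = propose(x, rng)
    for E_j in factor_energies:              # one energy term per factor
        dE = E_j(y) - E_j(x)
        if np.log(rng.uniform()) >= -dE:     # factor j rejects the move
            return x
    return y                                 # all factors accepted

# toy example (hypothetical): an anchored chain of pairwise quadratic factors
rng = np.random.default_rng(0)
d = 10
factors = [lambda z: 0.5 * z[0]**2] + \
          [lambda z, i=i: 0.5 * (z[i] - z[i + 1])**2 for i in range(d - 1)]
x = np.zeros(d)
for _ in range(1000):
    x = factorized_metropolis_step(
        x, lambda z, r: z + 0.5 * r.standard_normal(d), factors, rng)
```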
Pierre Monmarché Non-asymptotic analysis of HMC and Langevin diffusion for MCMC [pdf]
Clarice Poon Smooth bilevel programming for sparse regularization [pdf]
Nonsmooth regularisers are widely used in machine learning for enforcing solution structures (such as the \(\ell_1\) norm for sparsity or the nuclear norm for low rank). State-of-the-art solvers are typically first-order methods or coordinate descent methods which handle nonsmoothness by careful smooth approximations and support pruning. In this work, we revisit the approach of iteratively reweighted least squares (IRLS) and show how a simple reparameterization coupled with a bilevel resolution leads to a smooth unconstrained problem.
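As background, here is a minimal NumPy sketch of the classical IRLS iteration for the \(\ell_1\)-regularized least-squares problem (the standard scheme the abstract revisits, not the authors' bilevel reformulation): the \(\ell_1\) term is majorized by a weighted quadratic, so each step reduces to a weighted ridge solve.

```python
import numpy as np

# Classic IRLS for the Lasso problem  min_x 0.5*||A x - b||^2 + lam*||x||_1.
# |x_i| is majorized by x_i^2 / (2*(|x_i_old| + eps)) + const, so each
# iteration solves a ridge system with coordinate-wise weights.

def irls_lasso(A, b, lam, n_iter=100, eps=1e-8):
    n, d = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA + np.eye(d), Atb)   # ridge initialization
    for _ in range(n_iter):
        w = lam / (np.abs(x) + eps)             # quadratic majorant weights
        x = np.linalg.solve(AtA + np.diag(w), Atb)
    return x

# usage on synthetic sparse-recovery data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(irls_lasso(A, b, lam=0.1), 2)[:10])
```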
Gabriel Peyré Computational optimal transport [pdf]
Optimal transport (OT) has become a fundamental mathematical tool at the interface between the calculus of variations, partial differential equations and probability. However, it took much longer for this notion to become mainstream in numerical applications, in large part because of the high computational cost of the underlying optimization problems. There is a recent wave of activity on the use of OT-related methods in fields as diverse as computer vision, computer graphics, statistical inference, machine learning and image processing.
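As a concrete point of entry (a background illustration, not necessarily the methods covered in the talk): entropic regularization turns the discrete OT problem into one solvable by simple alternating scaling, the Sinkhorn iterations, sketched below.

```python
import numpy as np

# Entropy-regularized OT via Sinkhorn iterations: a, b are histograms,
# C is the cost matrix, eps the regularization strength.

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternate scaling updates
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # regularized transport plan
    return P, np.sum(P * C)              # plan and transport cost

# usage: transport between two discrete Gaussians on a 1D grid
x = np.linspace(0, 1, 50)
a = np.exp(-((x - 0.3) / 0.1) ** 2); a /= a.sum()
b = np.exp(-((x - 0.7) / 0.1) ** 2); b /= b.sum()
C = (x[:, None] - x[None, :]) ** 2
P, cost = sinkhorn(a, b, C)
print(cost)   # close to the squared W2 distance 0.4**2, up to entropic bias
```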
Rafael Pinot Demystifying Byzantine-robust machine learning [pdf]
Lukasz Szpruch Mean-field perspective on training neural networks [pdf]