Repository: Freie Universität Berlin, Math Department

Optimal sampling for stochastic and natural gradient descent

Gruhlke, Robert and Nouy, Anthony and Trunschke, Philipp (2024) Optimal sampling for stochastic and natural gradient descent. arXiv. (Submitted)

Full text not available from this repository.

Official URL: https://doi.org/10.48550/arXiv.2402.03113

Abstract

We consider the problem of optimising the expected value of a loss functional over a nonlinear model class of functions, assuming that we only have access to realisations of the gradient of the loss. This is a classical task in statistics, machine learning and physics-informed machine learning. A straightforward solution is to replace the exact objective with a Monte Carlo estimate before employing standard first-order methods like gradient descent, which yields the classical stochastic gradient descent method. However, replacing the true objective with an estimate incurs a "generalisation error". Rigorous bounds for this error typically require strong compactness and Lipschitz continuity assumptions while providing only a very slow decay with sample size. We propose a different optimisation strategy relying on natural gradient descent, in which the true gradient is approximated in local linearisations of the model class via (quasi-)projections based on optimal sampling methods. Under classical assumptions on the loss and the nonlinear model class, we prove that this scheme converges monotonically and almost surely to a stationary point of the true objective, and we provide convergence rates.
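As a rough illustration of the two strategies contrasted in the abstract (not the authors' implementation), the sketch below compares a plain stochastic gradient step with a natural-gradient-style step in which the residual is projected onto the local linearisation (tangent space) of the model via least squares. The toy model, target function, step sizes and uniform sampling are assumptions made for brevity; in particular, the paper's optimal sampling density is replaced here by uniform sampling.

```python
# Minimal sketch, assuming a toy least-squares regression problem.
# It contrasts classical SGD with a natural-gradient-style step that
# (quasi-)projects the residual onto the span of the local linearisation.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x):
    # Toy nonlinear model class: u_theta(x) = theta_0 * sin(theta_1 * x)
    return theta[0] * np.sin(theta[1] * x)

def jacobian(theta, x):
    # Partial derivatives of u_theta(x) w.r.t. theta (one column per parameter)
    return np.stack([np.sin(theta[1] * x),
                     theta[0] * x * np.cos(theta[1] * x)], axis=1)

def target(x):
    # Unknown function generating the data (illustrative)
    return 1.5 * np.sin(2.0 * x)

def sgd_step(theta, lr, n_samples):
    # Classical SGD: Monte Carlo estimate of the parameter-space gradient
    x = rng.uniform(-np.pi, np.pi, n_samples)
    residual = model(theta, x) - target(x)                 # u_theta(x) - f(x)
    grad = 2.0 * jacobian(theta, x).T @ residual / n_samples
    return theta - lr * grad

def natural_step(theta, lr, n_samples):
    # Natural-gradient-style step: project the residual onto the span of the
    # Jacobian by a least-squares fit.  Uniform sampling is used here, whereas
    # the paper draws the sample points from an optimal density.
    x = rng.uniform(-np.pi, np.pi, n_samples)
    J = jacobian(theta, x)
    residual = model(theta, x) - target(x)
    d, *_ = np.linalg.lstsq(J, residual, rcond=None)       # projected direction
    return theta - lr * d

theta_sgd = np.array([0.5, 1.0])
theta_nat = np.array([0.5, 1.0])
for _ in range(200):
    theta_sgd = sgd_step(theta_sgd, lr=0.1, n_samples=32)
    theta_nat = natural_step(theta_nat, lr=0.5, n_samples=32)
print("SGD estimate:", theta_sgd, " natural-gradient estimate:", theta_nat)
```

For the squared loss used here, the projected direction coincides with a Gauss-Newton step; the method analysed in the paper additionally controls the projection error through the choice of sampling density.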

Item Type: Article
Subjects: Mathematical and Computer Sciences > Mathematics > Applied Mathematics
Divisions: Department of Mathematics and Computer Science > Institute of Mathematics > Deterministic and Stochastic PDEs Group
ID Code: 3113
Deposited By: Ulrike Eickers
Deposited On: 20 Feb 2024 13:49
Last Modified: 20 Feb 2024 13:49