Sommer, D., Gruhlke, R., Kirstein, M., Eigel, M. and Schillings, C. Generative Modelling with Tensor Train approximations of Hamilton-Jacobi-Bellman equations. Preprint (WIAS). (Submitted)
Full text not available from this repository.
Abstract
Sampling from probability densities is a common challenge in fields such as Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in particular, reverse-time diffusion processes that depend on the log-densities of Ornstein-Uhlenbeck forward processes are a popular sampling tool. As Berner et al. [2022] point out, these log-densities can be obtained by solving a Hamilton-Jacobi-Bellman (HJB) equation known from stochastic optimal control. While this HJB equation is usually treated indirectly, e.g. by policy iteration or by unsupervised training of black-box architectures such as neural networks, we propose instead to solve it by direct time integration, using compressed polynomials represented in the Tensor Train (TT) format for the spatial discretization. Crucially, this method is sample-free, agnostic to normalization constants and, owing to the TT compression, can avoid the curse of dimensionality. We provide a complete derivation of the HJB equation's action on Tensor Train polynomials and demonstrate the performance of the proposed time-step-, rank- and degree-adaptive integration method on a nonlinear sampling task in 20 dimensions.
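For orientation, the following sketch illustrates the type of HJB equation the abstract refers to; it is not taken from the preprint and assumes a particular convention (unit mean-reversion drift, diffusion coefficient sqrt(2), log-density written as v rather than -v, which Berner et al. [2022] and the authors may scale differently). For an Ornstein-Uhlenbeck forward process with marginal densities p_t, the log-transform of the linear Fokker-Planck equation produces a nonlinear, HJB-type PDE for v_t = log p_t, whose gradient also drives the reverse-time sampler.

```latex
% Sketch under the stated assumptions; scalings and sign conventions may differ from the preprint.
\begin{align*}
  &\text{Forward OU process:}      && \mathrm{d}X_t = -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t ,
       \quad X_t \in \mathbb{R}^d ,\\
  &\text{Fokker--Planck:}          && \partial_t p_t = \nabla\!\cdot\!\big(x\,p_t\big) + \Delta p_t ,\\
  &\text{Log-transform } v_t=\log p_t:
                                   && \partial_t v_t = x\cdot\nabla v_t + d + \Delta v_t + \lvert\nabla v_t\rvert^2 ,\\
  &\text{Reverse-time sampler:}    && \mathrm{d}Y_\tau = \big[\,Y_\tau + 2\,\nabla v_{T-\tau}(Y_\tau)\big]\,\mathrm{d}\tau + \sqrt{2}\,\mathrm{d}B_\tau .
\end{align*}
```

The quadratic term $\lvert\nabla v_t\rvert^2$ introduced by the log-transform is what gives the equation its HJB character; in the proposed method, $v_t$ would be represented by a TT-compressed polynomial and propagated by direct time integration rather than learned from samples.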
| Item Type: | Article |
| --- | --- |
| Subjects: | Mathematical and Computer Sciences > Mathematics > Applied Mathematics; Mathematical and Computer Sciences > Mathematics > Engineering/Industrial Mathematics |
| Divisions: | Department of Mathematics and Computer Science > Institute of Mathematics > Computational PDEs Group |
| ID Code: | 3200 |
| Deposited By: | Sandra Krämer |
| Deposited On: | 06 Dec 2024 14:18 |
| Last Modified: | 06 Dec 2024 14:18 |