Document detail
Identifier

oai:arXiv.org:2410.23467

Subject
Computer Science - Machine Learning; Mathematics - Numerical Analysis
Author
Bolager, Erik Lien; Cukarska, Ana; Burak, Iryna; Monfared, Zahra; Dietrich, Felix
Category

Computer Science

Year

2024

Date indexed

05/02/2025

Keywords
using, approach, forecasting, training, neural

Abstract

Recurrent neural networks are a successful neural architecture for many time-dependent problems, including time series analysis, forecasting, and modeling of dynamical systems.

Training such networks with backpropagation through time is a notoriously difficult problem because their loss gradients tend to explode or vanish.

In this contribution, we introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.

The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
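
(For background, not part of this abstract: for a discrete-time dynamical system x_{k+1} = F(x_k), the Koopman operator K acts on observable functions g by (K g)(x) = g(F(x)). It is linear even when F is nonlinear, which is why a finite-dimensional approximation of K on a fixed dictionary of features can be fitted by ordinary least squares, as extended dynamic mode decomposition does.)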

The hidden parameters of a single recurrent block are sampled at random, while the outer weights are constructed using extended dynamic mode decomposition.
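
A minimal sketch of this two-step construction on toy data, using NumPy. This is an illustrative assumption of how random sampling and extended dynamic mode decomposition fit together, not the authors' implementation; all names here (W_in, features, width, the sine-wave trajectory) are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy trajectory: snapshots x_k of a dynamical system, split into pairs (x_k, x_{k+1}).
t = np.linspace(0.0, 20.0, 2001)
X = np.stack([np.sin(t), np.cos(2.0 * t)], axis=1)  # shape (N, 2)
X_now, X_next = X[:-1], X[1:]

d, width = X.shape[1], 300

# Step 1: sample the hidden parameters of the recurrent block at random; they stay fixed.
W_in = rng.normal(size=(d, width))
b = rng.uniform(-1.0, 1.0, size=width)

def features(x):
    # Random-feature map playing the role of the EDMD dictionary of observables.
    return np.tanh(x @ W_in + b)

# Step 2: construct the outer weights with extended dynamic mode decomposition,
# i.e. solve the linear least-squares problem features(x_k) @ K ≈ features(x_{k+1}).
Phi_now, Phi_next = features(X_now), features(X_next)
K, *_ = np.linalg.lstsq(Phi_now, Phi_next, rcond=None)

# Linear readout from feature space back to the state, also fitted by least squares.
C, *_ = np.linalg.lstsq(Phi_now, X_now, rcond=None)

# Forecast by iterating the linear map K in feature space and decoding with C.
phi = features(X[:1])
forecast = []
for _ in range(100):
    phi = phi @ K
    forecast.append((phi @ C)[0])
forecast = np.array(forecast)  # (100, 2) predicted states

The only fitted quantities are K and C, each obtained from a single linear least-squares solve, so no gradients are computed at any point.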

This approach alleviates all problems with backpropagation commonly associated with recurrent networks.

The connection to Koopman operator theory also allows us to start using results in this area to analyze recurrent neural networks.

In computational experiments on time series, forecasting for chaotic dynamical systems, and control problems, as well as on weather data, we observe that the training time and forecasting accuracy of the recurrent neural networks we construct are improved when compared to commonly used gradient-based methods.

Bolager, Erik Lien; Cukarska, Ana; Burak, Iryna; Monfared, Zahra; Dietrich, Felix, 2024, Gradient-free training of recurrent neural networks
