Document detail
ID

oai:arXiv.org:2410.23467

Topic
Computer Science - Machine Learning; Mathematics - Numerical Analysis
Author
Bolager, Erik Lien; Cukarska, Ana; Burak, Iryna; Monfared, Zahra; Dietrich, Felix
Category

Computer Science

Year

2024

Listing date

2/5/2025

Keywords
using, approach, forecasting, training, neural
Abstract

Recurrent neural networks are a successful neural architecture for many time-dependent problems, including time series analysis, forecasting, and modeling of dynamical systems.

Training such networks with backpropagation through time is a notoriously difficult problem because their loss gradients tend to explode or vanish.
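The difficulty can be stated precisely: the gradient of the loss with respect to an early hidden state contains a product of step-to-step Jacobians, whose norm typically grows or decays exponentially with the time horizon (a standard observation, sketched here in generic notation rather than the paper's):

\[
\frac{\partial \mathcal{L}}{\partial h_t}
  = \frac{\partial \mathcal{L}}{\partial h_T}
    \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}},
\qquad
\Bigl\| \prod_{k=t+1}^{T} \frac{\partial h_k}{\partial h_{k-1}} \Bigr\| \sim \rho^{\,T-t},
\]

where \rho is a typical spectral norm of the Jacobians: \rho > 1 yields exploding gradients and \rho < 1 vanishing ones.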

In this contribution, we introduce a computational approach to construct all weights and biases of a recurrent neural network without using gradient-based methods.

The approach is based on a combination of random feature networks and Koopman operator theory for dynamical systems.
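For context (this is standard extended dynamic mode decomposition, not notation taken from the paper): given snapshot pairs (x_k, y_k) with y_k = F(x_k) and a dictionary of observables \psi, the Koopman operator is approximated by the matrix K that minimizes the one-step error in the lifted space,

\[
K = \arg\min_{\tilde K} \sum_{k} \bigl\| \psi(y_k) - \tilde K\,\psi(x_k) \bigr\|^2 = A\,G^{+},
\qquad
G = \sum_k \psi(x_k)\,\psi(x_k)^\top,\quad
A = \sum_k \psi(y_k)\,\psi(x_k)^\top,
\]

so the fit reduces to a linear least-squares problem with a closed-form solution, and no gradient descent is needed.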

The hidden parameters of a single recurrent block are sampled at random, while the outer weights are constructed using extended dynamic mode decomposition.
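To make this concrete, below is a minimal Python sketch of the two-step construction: sample the hidden parameters of the recurrent block at random, then obtain the outer weights by least squares on lifted snapshot pairs. It is an illustration under stated assumptions, not the authors' implementation: the plain Gaussian/uniform sampling, the tanh dictionary, the logistic-map toy data, and all names (W_in, lift, ...) are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training trajectory: the chaotic logistic map (hypothetical stand-in data).
T, d, n_hidden = 500, 1, 200
x = np.empty((T, d))
x[0] = 0.3
for t in range(T - 1):
    x[t + 1] = 3.7 * x[t] * (1.0 - x[t])

# Hidden parameters of the recurrent block: sampled once, never trained.
# (Plain Gaussian/uniform draws here -- an assumption for illustration.)
W_in = rng.normal(size=(n_hidden, d))
b = rng.uniform(-1.0, 1.0, size=n_hidden)

def lift(state):
    """Random-feature dictionary: psi(x) = tanh(x W_in^T + b)."""
    return np.tanh(state @ W_in.T + b)

# EDMD step: the outer (Koopman) matrix K solves a linear least-squares
# problem on lifted snapshot pairs -- no gradients involved.
Psi_X = lift(x[:-1])                               # psi(x_k)
Psi_Y = lift(x[1:])                                # psi(x_{k+1})
K = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)[0]   # psi(x_{k+1}) ~ psi(x_k) K

# Linear readout from the lifted space back to the state.
C = np.linalg.lstsq(Psi_X, x[:-1], rcond=None)[0]  # x_k ~ psi(x_k) C

# Forecast by iterating the learned linear map in the lifted space.
z = lift(x[-1:])
forecast = []
for _ in range(10):
    z = z @ K
    forecast.append((z @ C).item())
print(forecast)
```

The only "training" is the pair of np.linalg.lstsq calls, which replace backpropagation through time entirely; forecasting then amounts to iterating a fixed linear map and reading out the state.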

This approach alleviates all of the backpropagation problems commonly associated with recurrent networks.

The connection to Koopman operator theory also allows us to bring existing results from that field to bear on the analysis of recurrent neural networks.

In computational experiments on time series, forecasting of chaotic dynamical systems, and control problems, as well as on weather data, we observe that the recurrent neural networks we construct both train faster and forecast more accurately than networks trained with commonly used gradient-based methods.

Bolager, Erik Lien; Cukarska, Ana; Burak, Iryna; Monfared, Zahra; Dietrich, Felix, 2024, Gradient-free training of recurrent neural networks
