Document details
Identifier

oai:arXiv.org:2410.08033

Subject
Mathematics - Optimization and Con... Electrical Engineering and Systems...
Authors
Agarwal, Aayushya; Pileggi, Larry; Rohrer, Ronald
Category

Computer Science

Year

2024

Date indexed

16/10/2024

Keywords
fast variables, methods, systems, convergence, optimization
Abstract

Second-order optimization methods exhibit fast convergence to critical points; in nonconvex optimization, however, these methods often require restrictive step-sizes to ensure a monotonically decreasing objective function.

In the presence of highly nonlinear objective functions with large Lipschitz constants, increasingly small step-sizes become a bottleneck to fast convergence.

We propose a second-order optimization method that utilizes a dynamic system model to represent the trajectory of optimization variables as an ODE.
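The abstract does not spell out the dynamic system model, but one standard instance of this idea is the continuous Newton flow, where the trajectory of the optimization variables obeys dx/dt = -[∇²f(x)]⁻¹ ∇f(x), and a forward-Euler discretization with step size h recovers a damped Newton update. A minimal sketch of that ODE view (the toy objective and the step size h = 0.5 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def newton_flow_step(x, grad, hess, h):
    """One forward-Euler step of the continuous Newton flow
    dx/dt = -H(x)^{-1} grad f(x); step size h = 1 recovers the
    classical Newton-Raphson update."""
    return x - h * np.linalg.solve(hess(x), grad(x))

# Toy objective f(x) = x0^4 + x1^2 with its minimum at the origin
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])

x = np.array([1.0, 1.0])
for _ in range(50):
    x = newton_flow_step(x, grad, hess, 0.5)
# x is now close to the minimizer at the origin
```

Following this flow with a variable step size, rather than a fixed one, is what allows the trajectory-based view to adapt to stiff (fast-rising) variables.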

We then follow the quasi-steady state trajectory by forcing variables with the fastest rise time into a state known as quiescence.

This optimization via quiescence allows us to adaptively select large step-sizes that sequentially follow each optimization variable to a quasi-steady state until all state variables reach the actual steady state, coinciding with the optimum.

The result is a second-order method that utilizes large step-sizes and does not require a monotonically decreasing objective function to reach a critical point.

Experimentally, we demonstrate the fast convergence of this approach for optimizing nonconvex problems in power systems and compare it to existing state-of-the-art second-order methods, including damped Newton-Raphson, BFGS, and SR1.
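For context, the damped Newton-Raphson baseline mentioned above typically enforces the monotone decrease the abstract refers to via a backtracking (Armijo) line search, which is exactly the restriction that can throttle step sizes on highly nonlinear objectives. A minimal sketch of that baseline (the toy objective, tolerance, and Armijo constant are illustrative assumptions, not from the paper):

```python
import numpy as np

def damped_newton(f, grad, hess, x, tol=1e-8, max_iter=100):
    """Damped Newton-Raphson with Armijo backtracking: the step
    is shrunk until the objective decreases, so f is monotone
    along the iterates."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = -np.linalg.solve(hess(x), g)  # Newton direction
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        x = x + t * p
    return x

f = lambda x: x[0]**4 + x[1]**2
grad = lambda x: np.array([4 * x[0]**3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0]**2, 0.0], [0.0, 2.0]])
x_star = damped_newton(f, grad, hess, np.array([2.0, -1.0]))
```

The quiescence-based method proposed in the paper removes the monotone-decrease requirement that this backtracking loop enforces.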

Agarwal, Aayushya; Pileggi, Larry; Rohrer, Ronald, 2024, Second-Order Optimization via Quiescence

