Document detail
ID

oai:arXiv.org:2409.02135

Topic
Computer Science - Machine Learning; Statistics - Computation; Statistics - Methodology; Statistics - Machine Learning
Authors

Ichikawa, Yuma; Arai, Yamato
Category

Computer Science

Year

2024

Listing date

10/9/2024

Keywords
function
Metrics

Abstract

Learning-based methods have gained attention as general-purpose solvers due to their ability to automatically learn problem-specific heuristics, reducing the need for manually crafted heuristics.

However, these methods often face scalability challenges.

To address these issues, the improved Sampling algorithm for Combinatorial Optimization (iSCO), using discrete Langevin dynamics, has been proposed, demonstrating better performance than several learning-based solvers.

This study proposes a different approach that integrates gradient-based updates through continuous relaxation, combined with Quasi-Quantum Annealing (QQA).

QQA smoothly transitions the objective function from a simple convex function that is minimized at half-integral values to the original objective function, in which the relaxed variables attain their minima only at discrete points.

Furthermore, we incorporate parallel run communication leveraging GPUs to enhance exploration capabilities and accelerate convergence.

Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers across various benchmark problems.

Notably, our method exhibits superior speed-quality trade-offs for large-scale instances compared to iSCO, learning-based solvers, commercial solvers, and specialized algorithms.
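The annealing scheme described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a toy MaxCut objective on a random graph and a quadratic penalty term gamma * sum x_i(1 - x_i), whose weight gamma is annealed from negative (the convex regime, pulling relaxed variables toward 1/2) to positive (pushing them toward binary values), with projected gradient descent on the relaxed variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MaxCut instance (hypothetical): symmetric 0/1 adjacency matrix.
n = 20
A = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A = A + A.T

def cut_value(x, A):
    # MaxCut objective for x in {0,1}^n (here relaxed to [0,1]^n):
    # counts edges crossing the partition, since x @ A @ (1 - x)
    # sums A[i, j] * x_i * (1 - x_j) over ordered pairs.
    return float(x @ A @ (1 - x))

def grad_neg_cut(x, A):
    # Gradient of -cut_value (we minimize the negative cut);
    # d/dx [x^T A (1 - x)] = A(1 - x) - A x for symmetric A.
    return -(A @ (1 - x) - A @ x)

# QQA-style schedule (sketch): anneal gamma from gamma < 0, where the
# penalty gamma * sum x_i(1 - x_i) is convex and minimized at x_i = 1/2,
# to gamma > 0, where it is minimized only at the binary points 0 and 1.
x = np.full(n, 0.5) + 0.01 * rng.standard_normal(n)
lr = 0.05
for gamma in np.linspace(-2.0, 2.0, 200):
    g = grad_neg_cut(x, A) + gamma * (1 - 2 * x)  # d/dx of gamma*x(1-x)
    x = np.clip(x - lr * g, 0.0, 1.0)             # project onto [0, 1]^n

# By the end of the schedule the relaxed variables are near-binary;
# round to obtain a discrete solution.
x_bin = (x > 0.5).astype(float)
print(cut_value(x_bin, A))
```

The paper's method additionally runs many such chains in parallel on a GPU with communication between them; the sketch shows only a single-chain version of the relaxation-plus-annealing idea.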

Comment: 21 pages, 3 figures

Ichikawa, Yuma; Arai, Yamato, 2024, Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling
