oai:arXiv.org:2406.18752
Computer Science
2024
7/3/2024
In the online knapsack problem, the goal is to pack items arriving online with different values and weights into a capacity-limited knapsack to maximize the total value of the accepted items.
We study \emph{learning-augmented} algorithms for this problem, which aim to use machine-learned predictions to move beyond pessimistic worst-case guarantees.
Existing learning-augmented algorithms for online knapsack consider relatively complicated prediction models that give an algorithm substantial information about the input, such as the total weight of items at each value.
In practice, such predictions can be error-sensitive and difficult to learn.
Motivated by this limitation, we introduce a family of learning-augmented algorithms for online knapsack that use \emph{succinct predictions}.
In particular, the machine-learned prediction given to the algorithm is just a single value or interval that estimates the minimum value of any item accepted by an offline optimal solution.
By leveraging a relaxation to online \emph{fractional} knapsack, we design algorithms that can incorporate such succinct predictions both in the trusted setting (i.e., with a perfect prediction) and in the untrusted setting, where we prove that a simple meta-algorithm achieves a nearly optimal consistency-robustness trade-off.
Empirically, we show that our algorithms significantly outperform baselines that do not use predictions and often outperform algorithms based on more complex prediction models.
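For intuition, below is a minimal sketch (in Python) of how a single succinct prediction might be used in the trusted setting: a simple threshold rule that accepts an arriving item only if its value is at least the predicted minimum value accepted by the offline optimum and it still fits in the remaining capacity. The Item class, function name, and acceptance rule are illustrative assumptions, not the algorithm from the paper.

# Illustrative sketch only; not the paper's algorithm.
from dataclasses import dataclass

@dataclass
class Item:
    value: float
    weight: float

def trusted_threshold_knapsack(items, capacity, v_hat):
    """Accept an arriving item if its value is at least v_hat (a prediction of
    the smallest value accepted by an offline optimal solution) and it fits in
    the remaining capacity. Decisions are made online and are irrevocable."""
    accepted, used, total_value = [], 0.0, 0.0
    for item in items:  # items arrive one at a time
        if item.value >= v_hat and used + item.weight <= capacity:
            accepted.append(item)
            used += item.weight
            total_value += item.value
    return accepted, total_value

# Hypothetical usage with a small instance and a perfect prediction.
if __name__ == "__main__":
    stream = [Item(3.0, 0.4), Item(1.0, 0.2), Item(5.0, 0.5), Item(4.0, 0.3)]
    picked, val = trusted_threshold_knapsack(stream, capacity=1.0, v_hat=3.0)
    print(f"accepted {len(picked)} items with total value {val}")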
Comment: 29 pages, 10 figures, Submitted to NeurIPS 2024
Daneshvaramoli, Mohammadreza; Karisani, Helia; Lechowicz, Adam; Sun, Bo; Musco, Cameron; Hajiesmaili, Mohammad (2024). Competitive Algorithms for Online Knapsack with Succinct Predictions.