oai:arXiv.org:2406.06459
Computer Science
2024
12-06-2024
Bayesian optimization (BO) is an integral part of automated scientific discovery -- the so-called self-driving lab -- where human inputs are ideally minimal or at least non-blocking.
However, scientists often have strong intuition, and thus human feedback is still useful.
Nevertheless, prior work on enhancing BO with expert feedback incorporates it either offline or online but in a blocking manner (feedback must arrive at every BO iteration), both of which are incompatible with the spirit of self-driving labs.
In this work, we study whether a small amount of randomly arriving expert feedback, incorporated in a non-blocking manner, can improve a BO campaign.
To this end, we run an additional, independent computing thread on top of the BO loop to handle the feedback-gathering process.
The gathered feedback is used to learn a Bayesian preference model that can readily be incorporated into the BO thread, to steer its exploration-exploitation process.
Experiments on toy and chemistry datasets suggest that even a few intermittent, asynchronous pieces of expert feedback can be useful for improving or constraining BO.
This is especially relevant for self-driving labs, e.g., making them more data-efficient and less costly.
Comment: AABI 2024.
Code: https://github.com/wiseodd/bo-async-feedback
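The repository above contains the authors' implementation. As a rough illustration of the non-blocking pattern the abstract describes, the following minimal Python sketch runs a feedback-gathering thread alongside a toy BO loop; all names here (gather_feedback, feedback_queue, the simulated expert and objective) are illustrative assumptions, not the paper's actual API.

# Minimal sketch of non-blocking expert feedback for a BO loop.
# The expert and the BO step are simulated stand-ins; in the paper,
# the drained feedback would train a Bayesian preference model that
# steers the acquisition function.
import queue
import random
import threading
import time

feedback_queue: "queue.Queue[tuple]" = queue.Queue()

def gather_feedback() -> None:
    """Background thread: waits for intermittent expert preferences.

    The expert is simulated here; a real system would block on a UI or
    messaging channel in this thread, never on the BO loop itself.
    """
    while True:
        time.sleep(random.uniform(1.0, 5.0))   # feedback arrives at random times
        x_a, x_b = random.random(), random.random()
        preferred = min(x_a, x_b)              # simulated pairwise preference
        feedback_queue.put((x_a, x_b, preferred))

threading.Thread(target=gather_feedback, daemon=True).start()

preferences = []  # training data for a (hypothetical) Bayesian preference model

for it in range(20):  # the main BO loop, never blocked by the expert
    # Drain whatever feedback happens to have arrived since the last iteration.
    while True:
        try:
            preferences.append(feedback_queue.get_nowait())
        except queue.Empty:
            break
    # A preference model refit on `preferences` would reweight the acquisition
    # function here; this sketch just proposes a random point instead.
    x_next = random.random()
    print(f"iter {it}: proposing x={x_next:.3f}, {len(preferences)} feedback items so far")
    time.sleep(0.5)  # stand-in for evaluating the expensive objective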
Kristiadi, Agustinus; Strieth-Kalthoff, Felix; Subramanian, Sriram Ganapathi; Fortuin, Vincent; Poupart, Pascal; Pleiss, Geoff, 2024, How Useful is Intermittent, Asynchronous Expert Feedback for Bayesian Optimization?