oai:arXiv.org:2406.06107
Computer Science
2024
6/12/2024
Reinforcement learning (RL) has proven to be a powerful tool for training agents that excel in various games.
However, the black-box nature of neural network models often hinders our ability to understand the reasoning behind the agent's actions.
Recent research has attempted to address this issue by using pretrained neural agents to guide the learning of logic-based policies, allowing for interpretable decisions.
A drawback of such approaches is the requirement of large amounts of predefined background knowledge in the form of predicates, limiting their applicability and scalability.
In this work, we propose a novel approach, Explanatory Predicate Invention for Learning in Games (EXPIL), which identifies and extracts predicates from a pretrained neural agent and then uses them in logic-based agents, reducing the dependency on predefined background knowledge.
Our experimental evaluation on various games demonstrates the effectiveness of EXPIL in achieving explainable behavior in logic agents while requiring less background knowledge.
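To make the core idea concrete, the following is a minimal, hypothetical Python sketch of predicate invention guided by a pretrained neural agent: roll out the agent to collect (state, action) traces, then score candidate predicates by how well their truth values track the agent's action choices. All names here (collect_traces, predicate_score, the object-centric state keys) are illustrative assumptions, not the authors' actual EXPIL implementation.

# Hypothetical sketch, not the authors' EXPIL code: invent predicates by
# checking how well candidate conditions track a pretrained agent's choices.
from typing import Callable, Dict, List, Tuple

State = Dict[str, int]  # illustrative object-centric game state, e.g. {"agent_x": 3, "key_x": 7}
Action = int

def collect_traces(policy: Callable[[State], Action],
                   reset: Callable[[], State],
                   step: Callable[[State, Action], Tuple[State, bool]],
                   episodes: int = 10) -> List[Tuple[State, Action]]:
    """Roll out the pretrained neural agent and record (state, action) pairs."""
    traces: List[Tuple[State, Action]] = []
    for _ in range(episodes):
        state, done = reset(), False
        while not done:
            action = policy(state)
            traces.append((state, action))
            state, done = step(state, action)
    return traces

def predicate_score(pred: Callable[[State], bool],
                    traces: List[Tuple[State, Action]],
                    action: Action) -> float:
    """Fraction of trace steps where the predicate's truth value agrees with
    whether the agent chose `action` (a crude stand-in for EXPIL's scoring)."""
    return sum(pred(s) == (a == action) for s, a in traces) / len(traces)

# Candidate predicates enumerated from simple templates over object coordinates;
# high-scoring candidates would become invented background knowledge.
candidates = {
    "key_left_of_agent":  lambda s: s["key_x"] < s["agent_x"],
    "key_right_of_agent": lambda s: s["key_x"] > s["agent_x"],
}

Under these assumptions, the highest-scoring candidates would serve as invented predicates for the logic-based agent, in place of hand-written background knowledge.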
Comment: 9 pages, 2 pages of references, 8 figures, 3 tables
Sha, Jingyuan; Shindo, Hikaru; Delfosse, Quentin; Kersting, Kristian; Dhami, Devendra Singh, 2024, EXPIL: Explanatory Predicate Invention for Learning in Games