Agent Learning in Relational Domains based on Logical MDPs with Negation
Title | Agent Learning in Relational Domains based on Logical MDPs with Negation |
Authors | |
Abstract | In this paper, we propose a model named Logical Markov Decision Processes with Negation for relational reinforcement learning, which allows reinforcement-learning algorithms to be applied in relational domains whose states and actions are in relational form. In the model, logical negation is represented explicitly, so that the abstract state space can be constructed from the goal state(s) of a given task simply by applying a generating method and an expanding method, and each ground state is represented by one and only one abstract state. A prototype action is also introduced into the model, so that the applicable abstract actions can be obtained automatically. Based on the model, a model-free Q(λ)-learning algorithm is implemented to evaluate the state-action-substitution value function. We also propose a state-refinement method, guided by two formal definitions (the self-loop degree and the common characteristic of abstract states), to construct the abstract state space automatically by the agent itself rather than manually. The experiments show that the agent captures the core of the given task and that the final state space is intuitive. |
Publisher | ACADEMY PUBLISHER |
Date | 2008-09-01 |
Source | Journal of Computers Vol 3, No 9 (2008) |
Rights | Copyright © ACADEMY PUBLISHER - All Rights Reserved. To request permission, please see: http://www.academypublisher.com/copyrightpermission.html. |
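The abstract mentions a model-free Q(λ)-learning algorithm for evaluating the value function. As a point of reference only, the sketch below shows generic tabular Q(λ) with accumulating eligibility traces on a toy environment; it does not implement the paper's relational abstract states or prototype actions, and the `env` interface (`reset`, `actions`, `step`) and the `Chain` task are illustrative assumptions.

```python
import random
from collections import defaultdict

def q_lambda(env, episodes=200, alpha=0.1, gamma=0.9, lam=0.8, eps=0.1,
             max_steps=1000):
    """Tabular Q(lambda) with accumulating eligibility traces.

    `env` is assumed to expose reset() -> state, actions(state) -> list of
    actions, and step(state, action) -> (next_state, reward, done); these
    names are illustrative, not taken from the paper.
    """
    Q = defaultdict(float)                      # Q[(state, action)]
    for _ in range(episodes):
        e = defaultdict(float)                  # eligibility traces
        s = env.reset()
        for _ in range(max_steps):
            acts = env.actions(s)
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda x: Q[(s, x)])
            s2, r, done = env.step(s, a)
            # one-step greedy (Q-learning) target
            if done:
                target = r
            else:
                target = r + gamma * max(Q[(s2, x)] for x in env.actions(s2))
            delta = target - Q[(s, a)]
            e[(s, a)] += 1.0
            # propagate the TD error along all traced pairs, then decay
            # (naive variant: traces are not cut after exploratory moves)
            for key in list(e):
                Q[key] += alpha * delta * e[key]
                e[key] *= gamma * lam
            if done:
                break
            s = s2
    return Q

class Chain:
    """Toy 4-state corridor: move right from state 0 to the goal state 3."""
    def reset(self):
        return 0
    def actions(self, s):
        return ['L', 'R']
    def step(self, s, a):
        s2 = min(s + 1, 3) if a == 'R' else max(s - 1, 0)
        return s2, (1.0 if s2 == 3 else 0.0), s2 == 3

random.seed(0)
Q = q_lambda(Chain(), episodes=300)
# after training, the learned values prefer moving right toward the goal
```

In the paper's setting, the table over ground state-action pairs would instead be a table over abstract states, abstract actions, and substitutions; the toy version above only illustrates the underlying Q(λ) update rule.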