Abstract: Task planners and goal recognisers often require symbolic models of an agent’s behaviour. These models are usually developed manually, which can be a time-consuming and error-prone process. Our work therefore transforms unlabelled pairs of images, showing the state before and after an action has been executed, into reusable action definitions. Each action definition consists of a set of parameters, preconditions and effects. To evaluate these action definitions, states were generated and a task planner was invoked. Problems with large state spaces were solved using the action definitions learnt from smaller state spaces. On average, the task plans contained 5.46 actions and planning took 0.06 seconds. Moreover, when 20% of transitions were missing, our approach generated the correct number of objects, action definitions and plans 70% of the time.
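As a rough illustration of the kind of symbolic model the abstract describes, the sketch below represents an action definition as a triple of parameters, preconditions and effects, in the style of classical planning action schemas. All names (`ActionDefinition`, the `move` action, the predicates) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical sketch: an action definition with parameters,
# preconditions and effects over symbolic predicates, as in
# classical planning. Names and predicates are assumed, not
# drawn from the paper itself.


@dataclass
class ActionDefinition:
    name: str
    parameters: list[str]   # typed variables bound at planning time
    preconditions: set[str]  # predicates that must hold before execution
    effects: set[str]        # predicates asserted after execution

    def applicable(self, state: set[str]) -> bool:
        # An action is applicable when every precondition holds in the state.
        return self.preconditions <= state


# Example: a "move" action of the kind that might be learnt
# from before/after image pairs.
move = ActionDefinition(
    name="move",
    parameters=["?obj", "?from", "?to"],
    preconditions={"at(?obj, ?from)", "clear(?to)"},
    effects={"at(?obj, ?to)"},
)

state = {"at(?obj, ?from)", "clear(?to)"}
print(move.applicable(state))  # True: both preconditions hold
```

A planner would ground the parameters against concrete objects and chain such definitions to build a plan; the learning step in the paper recovers the definitions themselves from image pairs.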