Consider an autonomous agent with a considerable (but finite) computational capacity operating in a large, complex world over a long period of time. To succeed, the agent must build knowledge that it can update and check automatically and independently, and it must continually learn to generalize its existing knowledge and behavior to novel situations throughout its lifetime.
The purpose of this workshop is to discuss, debate, and develop our joint understanding of the challenges that arise for reinforcement learning agents in this context, and of their potential solutions. Topics of interest include:
- online learning, plasticity, and stability
- generalization/abstraction, forgetting, graceful degradation, and transfer
- scalability (the more data and capacity the agent has, the better it should perform)
- exploration, exploitation, and intrinsic motivation
- learning and planning with partial or changing models