Reinforcement Learning with History Lists: Solving Partially Observable Decision Processes by Using Short Term Memory - Stephan Timmer - Books - Suedwestdeutscher Verlag fuer Hochschuls - 9783838106212 - April 1, 2009

Reinforcement Learning with History Lists: Solving Partially Observable Decision Processes by Using Short Term Memory

Price
$65.49
excl. VAT


A very general framework for modeling uncertainty in learning environments is given by Partially Observable Markov Decision Processes (POMDPs). In a POMDP setting, the learning agent infers a policy for acting optimally in all possible states of the environment while receiving only observations of these states. The basic idea for coping with partial observability is to incorporate memory into the representation of the policy. Perfect memory is provided by the belief space, i.e. the space of probability distributions over environmental states. However, computing policies defined on the belief space requires a considerable amount of prior knowledge about the learning problem and is expensive in terms of computation time. The author Stephan Timmer presents a reinforcement learning algorithm for solving POMDPs based on short-term memory. In contrast to belief states, short-term memory is not capable of representing optimal policies, but it is far more practical and requires no prior knowledge about the learning problem. It can be shown that the algorithm can also be used to solve large Markov Decision Processes (MDPs) with continuous, multi-dimensional state spaces.
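The short-term-memory idea described above can be illustrated with a minimal sketch: instead of learning values over hidden states (or over belief distributions), a tabular agent treats the list of its last k observations as the state. This is not the author's algorithm, just a generic history-window variant of Q-learning; the environment interface (`env_reset`, `env_step`) and all parameter names are assumptions for the example.

```python
import random
from collections import defaultdict

def history_q_learning(env_step, env_reset, actions, k=2,
                       episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning where the 'state' is a history list of the
    last k observations (a simple form of short-term memory).

    Assumed environment interface (hypothetical, for illustration):
      env_reset() -> initial observation
      env_step(action) -> (observation, reward, done)
    """
    Q = defaultdict(float)  # maps (history, action) -> estimated value
    for _ in range(episodes):
        obs = env_reset()
        history = (obs,) * k          # pad the history list with the first observation
        done = False
        while not done:
            # epsilon-greedy action selection over the current history
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(history, x)])
            obs, r, done = env_step(a)
            next_history = history[1:] + (obs,)   # slide the short-term-memory window
            best_next = max(Q[(next_history, x)] for x in actions)
            # standard Q-learning backup, keyed on histories instead of states
            Q[(history, a)] += alpha * (r + gamma * (0.0 if done else best_next)
                                        - Q[(history, a)])
            history = next_history
    return Q
```

With k = 1 this degenerates to memoryless Q-learning on observations; larger k lets the agent disambiguate aliased observations at the cost of an exponentially larger table, which is the trade-off the blurb alludes to when it notes that short-term memory cannot represent optimal policies in general.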

Media Books     Paperback Book   (softcover book with glued spine)
Published April 1, 2009
ISBN13 9783838106212
Publisher Suedwestdeutscher Verlag fuer Hochschuls
Pages 160
Dimensions 150 × 220 × 10 mm   ·   256 g
Language German