Sparse Bayesian reinforcement learning

dc.contributor.author: Lee, Minwoo, author
dc.contributor.author: Anderson, Charles W., advisor
dc.contributor.author: Ben-Hur, Asa, committee member
dc.contributor.author: Kirby, Michael, committee member
dc.contributor.author: Young, Peter, committee member
dc.date.accessioned: 2017-09-14T16:04:58Z
dc.date.available: 2017-09-14T16:04:58Z
dc.date.issued: 2017
dc.description: Zip file contains supplementary video.
dc.description.abstract: This dissertation presents knowledge acquisition and retention methods for efficient and robust learning. We propose a framework for learning and memorizing, and we examine how the resulting memory can be used for efficient machine learning. Temporal difference (TD) learning is a core part of reinforcement learning, and in large or continuous state spaces it requires function approximation. With function approximation, however, the most popular TD methods, such as TD(λ), SARSA, and Q-learning, lose stability and diverge, especially as the complexity of the problem grows and the sampling distribution becomes biased. Biased samples cause function approximators such as neural networks to adapt quickly to new data while forgetting what was previously learned. By systematically selecting the most significant experiences, our proposed approach gradually builds a snapshot memory. The memorized snapshots prevent forgetting of important samples and increase learning stability. Our sparse Bayesian learning model keeps the snapshot memory sparse for computational and memory efficiency. The Bayesian model extends and improves TD learning by using the state information captured in its hyperparameters to guide action selection and to filter out insignificant experiences, maintaining the sparsity of the snapshots. The resulting memory can be used to further improve learning. First, placing radial basis function kernels at the snapshot memories, which lie at peaks of the approximated value function surface, yields an efficient way to search a continuous action space for practical applications requiring fine motor control. Second, the memory serves as a knowledge representation for transfer learning. Transfer learning is a paradigm for generalizing knowledge in machine learning and reinforcement learning; it shortens training time by reusing knowledge gained from similar tasks. The dissertation examines a practice approach that transfers snapshots from non-goal-directed random movements to goal-directed reinforcement learning tasks. Experiments demonstrate the stability and efficiency of learning on 1) traditional benchmark problems and 2) the octopus arm control problem, without limiting or discretizing the action space.
dc.format.medium: born digital
dc.format.medium: doctoral dissertations
dc.format.medium: ZIP
dc.format.medium: MP4
dc.identifier: Lee_colostate_0053A_14302.pdf
dc.identifier.uri: https://hdl.handle.net/10217/183935
dc.language: English
dc.language.iso: eng
dc.publisher: Colorado State University. Libraries
dc.relation.ispartof: 2000-2019
dc.rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.
dc.subject: continuous action space
dc.subject: practice
dc.subject: sparse learning
dc.subject: knowledge retention
dc.subject: Bayesian learning
dc.subject: reinforcement learning
dc.title: Sparse Bayesian reinforcement learning
dc.type: Text
dcterms.rights.dpla: This Item is protected by copyright and/or related rights (https://rightsstatements.org/vocab/InC/1.0/). You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
thesis.degree.discipline: Computer Science
thesis.degree.grantor: Colorado State University
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy (Ph.D.)
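
The abstract combines three standard ingredients: TD learning with function approximation, a memory of the most significant experiences, and replay of that memory to stabilize learning. The minimal Python sketch below illustrates those ingredients on a toy one-dimensional task. It is an illustration only, not the dissertation's sparse Bayesian method: the RBF features, the toy dynamics, and the keep-the-largest-TD-error snapshot rule are all assumptions made for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    N_CENTERS = 25
    centers = np.linspace(0.0, 1.0, N_CENTERS)  # RBF centers tiling the state space
    WIDTH = 0.05                                # assumed kernel width

    def features(s):
        """RBF feature vector for a scalar state s in [0, 1]."""
        return np.exp(-((s - centers) ** 2) / (2.0 * WIDTH ** 2))

    w = np.zeros(N_CENTERS)   # linear value-function weights
    GAMMA, ALPHA = 0.95, 0.1  # discount factor, learning rate

    MAX_SNAPSHOTS = 10
    snapshots = []            # most significant transitions seen so far

    def td_update(s, r, s_next, terminal):
        """One TD(0) update of the value weights; returns the TD error."""
        global w
        v = features(s) @ w
        v_next = 0.0 if terminal else features(s_next) @ w
        delta = r + GAMMA * v_next - v
        w += ALPHA * delta * features(s)
        return delta

    for episode in range(200):
        s = rng.random()
        for _ in range(50):
            # Random-walk dynamics; reward 1 on reaching the goal region.
            s_next = float(np.clip(s + rng.normal(0.0, 0.1), 0.0, 1.0))
            terminal = s_next > 0.95
            r = 1.0 if terminal else 0.0
            delta = td_update(s, r, s_next, terminal)

            # Keep only the transitions with the largest |TD error| as snapshots.
            snapshots.append((abs(delta), s, s_next, r, terminal))
            snapshots.sort(key=lambda t: t[0], reverse=True)
            del snapshots[MAX_SNAPSHOTS:]

            if terminal:
                break
            s = s_next

        # Replay the small snapshot memory to counteract forgetting of rare,
        # important experiences under a biased sampling distribution.
        for _, ss, ssn, rr, tt in snapshots:
            td_update(ss, rr, ssn, tt)

    print("Learned value near the goal  (s=0.9):", features(0.9) @ w)
    print("Learned value far from goal  (s=0.1):", features(0.1) @ w)

Replaying a small, carefully chosen memory rather than all past data is the efficiency argument the abstract makes; the dissertation's sparse Bayesian model selects and prunes its snapshots via the model's hyperparameters rather than the raw TD error used in this sketch.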

Files

Original bundle (2 of 2 items)

Name: Lee_colostate_0053A_14302.pdf
Size: 4.07 MB
Format: Adobe Portable Document Format

Name: supplemental.zip
Size: 39.85 MB
Format: Zip File
Description: Supplementary video