Social Inductive Biases for Reinforcement Learning

Total Pages : 126
ISBN-10 : OCLC:1203140813

Book Synopsis Social Inductive Biases for Reinforcement Learning by : Dhaval Dhamnidhi Kumar Adjodah

Download or read book Social Inductive Biases for Reinforcement Learning written by Dhaval Dhamnidhi Kumar Adjodah and published by . This book was released on 2019 with total page 126 pages. Available in PDF, EPUB and Kindle. Book excerpt: How can we build machines that collaborate and learn more seamlessly with humans, and with each other? How do we create fairer societies? How do we minimize the impact of information manipulation campaigns, and fight back? How do we build machine learning algorithms that are more sample-efficient when learning from each other's sparse data, and under time constraints? At the root of these questions is a simpler one: how do agents, human or machine, learn from each other, and can we improve that learning and apply it to new domains? The cognitive and social sciences have provided innumerable insights into how people learn from data using both passive observation and experimental intervention. Similarly, the statistics and machine learning communities have formalized learning as a rigorous and testable computational process. There is a growing movement to apply insights from the cognitive and social sciences to improving machine learning, as well as opportunities to use machine learning as a sandbox to test, simulate and expand ideas from the cognitive and social sciences. A less researched and fertile part of this intersection is the modeling of social learning: past work has focused more on how agents can learn from the 'environment', and there is less work that borrows from both communities to examine how agents learn from each other. This thesis presents novel contributions on the nature and usefulness of social learning as an inductive bias for reinforcement learning.
I start by presenting the results of two large-scale online human experiments. First, I observe Dunbar cognitive limits that shape and constrain social learning in two different social trading platforms, with the additional contribution that synthetic financial bots that transcend human limitations can obtain higher profits even when using naive trading strategies. Second, I devise a novel online experiment to observe how people, at the individual level, update their beliefs about future financial asset prices (e.g. S&P 500 and oil prices) from social information. I model such social learning using Bayesian models of cognition, and observe that people make strong distributional assumptions about the social data they observe (e.g. assuming that the likelihood data is unimodal). I was fortunate to collect one round of predictions during the Brexit market instability, and find that during such volatile times social learning leads to higher performance than learning from the underlying price history (the environment). Having observed the cognitive limits and biases people exhibit when learning from other agents, I present a motivating example of the strength of inductive biases in reinforcement learning: I implement a learning model with a relational inductive bias that pre-processes the environment state into a set of relationships between entities in the world. I observe strong improvements in performance and sample efficiency, and find the learned relationships to be highly interpretable.
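The Bayesian belief-updating the synopsis describes can be sketched as a conjugate Gaussian update, adopting the unimodal-likelihood assumption the experiment found people make about social data. All function names and numbers below are illustrative, not taken from the thesis itself.

```python
# Conjugate Gaussian update of a belief about a future asset price,
# treating peers' predictions as draws from a unimodal (normal) likelihood.
def update_belief(prior_mean, prior_var, social_predictions, obs_var):
    n = len(social_predictions)
    sample_mean = sum(social_predictions) / n
    # Posterior precision is the sum of prior and data precisions.
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / obs_var)
    return post_mean, post_var

# Hypothetical example: a prior belief of 2100 for the S&P 500,
# revised after seeing three peers' higher predictions.
mean, var = update_belief(2100.0, 400.0, [2120.0, 2130.0, 2110.0], 100.0)
```

The posterior mean lands between the prior and the peers' average, and the posterior variance shrinks, capturing how social observations both pull and sharpen an individual's belief.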
I do so by creating a fully decentralized, sparsely communicating and scalable learning algorithm, and observe strong learning improvements with lower communication-bandwidth usage (between learning agents) when using communication topologies that naturally evolved through social learning in humans. Additionally, I provide a theoretical upper bound (which agrees with our empirical results) on which communication topologies lead to the largest learning performance improvements. Given a future increasingly filled with decentralized autonomous machine learning systems that interact with humans, there is a growing need to understand social learning in order to build resilient, scalable and effective learning systems, and this thesis provides insights into how to build such systems.
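The decentralized, sparsely communicating scheme described above resembles gossip averaging of model parameters over a fixed communication graph. The ring topology and update rule below are a generic sketch of that idea, not the thesis's actual algorithm.

```python
def gossip_step(params, topology):
    """One round of gossip: each agent averages its parameter vector
    with those of its neighbors in the communication topology."""
    new_params = {}
    for agent, vec in params.items():
        group = [vec] + [params[n] for n in topology[agent]]
        # Component-wise mean over the agent's own vector and its neighbors'.
        new_params[agent] = [sum(vals) / len(group) for vals in zip(*group)]
    return new_params

# A sparse ring topology over four agents: each talks to one neighbor,
# so per-round bandwidth stays low.
topology = {0: [1], 1: [2], 2: [3], 3: [0]}
params = {i: [float(i)] for i in range(4)}
for _ in range(50):
    params = gossip_step(params, topology)
# Repeated sparse averaging drives all agents toward consensus (here, 1.5).
```

Different topologies (ring, small-world, fully connected) trade communication cost against how fast this consensus spreads, which is the kind of trade-off the thesis's theoretical bound addresses.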


Social Inductive Biases for Reinforcement Learning Related Books

Machine Learning of Inductive Bias
Language: en
Pages: 180
Authors: Paul E. Utgoff
Categories: Computers
Type: BOOK - Published: 2012-12-06 - Publisher: Springer Science & Business Media

This book is based on the author's Ph.D. dissertation[56]. The thesis research was conducted while the author was a graduate student in the Department of Compu
Inductive Biases and Generalisation for Deep Reinforcement Learning
Language: en
Authors: Maximilian Igl
Type: BOOK - Published: 2021 - Publisher:

Inductive Biases in Machine Learning for Robotics and Control
Language: en
Pages: 131
Authors: Michael Lutter
Categories: Technology & Engineering
Type: BOOK - Published: 2023-07-31 - Publisher: Springer Nature

One important robotics problem is “How can one program a robot to perform a task?” Classical robotics solves this problem by manually engineering modules fo
Change of Representation and Inductive Bias
Language: en
Pages: 372
Authors: D. Paul Benjamin
Categories: Computers
Type: BOOK - Published: 1989-12-31 - Publisher: Springer
