Agent Tamer: The Secret Life of Algorithms
Abstract
Although this report deals with artificially intelligent agents rather than
intelligence agents, the former are no less a subject of fascination. My
research centred on an algorithm called Training an Agent Manually via
Evaluative Reinforcement (TAMER), which incorporates human feedback into a
reinforcement learning model. I ran several trials in the Mountain Car
environment provided by the OpenAI Gym library, altering the uniform value,
the credit assignment value, and the feedback budget of each trial to see
which changes produced the best performance for the agent. Ultimately, lower
credit assignment values, combined with uniform values slightly better than
those an average human trainer can provide, are most effective at improving
the agent's performance, while the budget has no significant effect on the
agent's efficiency.
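To make the mechanism concrete, the following is a minimal sketch of a TAMER-style update loop. It is not the report's code: the linear model of human reward, the toy features, the `credit_window` parameter, and the synthetic feedback rule that stands in for a real human trainer are all assumptions made for illustration.

```python
import random

# Sketch of a TAMER-style agent (illustrative; all names and the synthetic
# trainer below are assumptions, not taken from the report).

def features(state, action):
    """Toy feature vector for a scalar state and a discrete action in {0, 1, 2}."""
    return [1.0, state, state * (action - 1)]

class TamerAgent:
    def __init__(self, n_actions=3, lr=0.1, credit_window=4):
        self.n_actions = n_actions
        self.lr = lr
        self.credit_window = credit_window   # recent steps that share each reward
        self.w = [0.0] * 3                   # weights of the human-reward model
        self.history = []                    # recent (state, action) pairs

    def predict(self, state, action):
        return sum(wi * fi for wi, fi in zip(self.w, features(state, action)))

    def act(self, state):
        # act greedily with respect to the learned model of human reward
        return max(range(self.n_actions), key=lambda a: self.predict(state, a))

    def observe(self, state, action):
        self.history.append((state, action))
        self.history = self.history[-self.credit_window:]

    def feedback(self, h):
        # credit assignment: spread the scalar signal h over the recent window
        share = h / len(self.history)
        for s, a in self.history:
            phi = features(s, a)
            err = share - self.predict(s, a)
            for i in range(len(self.w)):
                self.w[i] += self.lr * err * phi[i]

def synthetic_feedback(state, action):
    """Stand-in trainer: approve 'push right' (2) when s > 0, else action 0."""
    return 1.0 if action == (2 if state > 0 else 0) else -1.0

random.seed(0)
agent = TamerAgent()
correct = 0
steps = 500
for _ in range(steps):
    s = random.uniform(-1.0, 1.0)
    a = agent.act(s)
    agent.observe(s, a)
    h = synthetic_feedback(s, a)
    agent.feedback(h)
    correct += h > 0
accuracy = correct / steps
print(f"agreement with trainer: {accuracy:.2f}")
```

The credit window mirrors the idea that a human's delayed feedback applies to several recent state-action pairs, not only the latest one; in the actual experiments this loop would drive the Mountain Car environment rather than randomly sampled states.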
