A Comparative Study on the Performances of Q-Learning and Neural Q-Learning Agents toward Analysis of Emergence of Communication

Open Access

Abstract: In this paper, we consider the gesture theory, one account of the origin of language, which holds that speech originated from gestures. Based on this theory, we assume that “actions” performed with some purpose can come to be used as “symbols” in communication through a learning process. The purpose of this study is to clarify which abilities of agents and which conditions are necessary for acquiring such symbolic use of actions. To investigate this, we adopt a collision avoidance game and compare the performance of Q-learning agents with that of Neural Q-learning agents. In our simulations, we found that the Neural Q-learning agent reaches the goal more reliably than the Q-learning agent, whereas its ability to avoid collisions is lower than the Q-learning agent’s. However, if the inconsistencies in the Neural Q-learning agent’s learning data sets can be resolved, the agent has sufficient potential to improve its collision avoidance. We therefore conclude that the most suitable agent for analyzing the emergence of communication is a Neural Q-learning agent whose feed-forward neural network is replaced with a recurrent neural network, which can resolve the inconsistencies in the learning data sets.

Keywords: Q-learning, Neural Q-learning, Collision Avoidance Game, Reinforcement Learning Agents, Multi-Agent System.
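The comparison in the abstract hinges on the difference between the tabular Q-learning update and its neural approximation. The following is a minimal sketch of that difference only; the toy state space, network size, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sizes and hyperparameters (assumptions, not the paper's values).
N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.1, 0.9
rng = np.random.default_rng(0)

# --- Tabular Q-learning: one table cell per (state, action) pair ---
Q = np.zeros((N_STATES, N_ACTIONS))

def tabular_update(s, a, r, s_next):
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])

# --- Neural Q-learning: a small feed-forward net maps a one-hot state
# --- to Q-values for every action; the TD error drives a gradient step.
HIDDEN = 8
W1 = rng.normal(0.0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def forward(s):
    x = np.eye(N_STATES)[s]          # one-hot encoding of the state
    h = np.tanh(x @ W1)              # hidden-layer activations
    return x, h, h @ W2              # Q-value estimates for all actions

def neural_update(s, a, r, s_next):
    global W1, W2
    x, h, q = forward(s)
    _, _, q_next = forward(s_next)
    td_target = r + GAMMA * q_next.max()
    err = td_target - q[a]           # TD error for the taken action
    # Backpropagate the TD error through both layers (grads before updates).
    grad_q = np.zeros(N_ACTIONS)
    grad_q[a] = err
    grad_h = (W2 @ grad_q) * (1.0 - h ** 2)
    W2 += ALPHA * np.outer(h, grad_q)
    W1 += ALPHA * np.outer(x, grad_h)

# One illustrative transition: state 0, action 1, reward 1.0, next state 2.
tabular_update(0, 1, 1.0, 2)
neural_update(0, 1, 1.0, 2)
```

The tabular agent stores each state-action value independently, so conflicting experiences only affect one cell; the neural agent shares weights across all states, which is why inconsistent training data, as discussed in the abstract, can degrade its collision avoidance.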

Takashi Sato and Fumiko Shirasaki


Volume 2 Issue 4
