Dr. Pablo Barros
Most current reinforcement learning solutions for competitive learning, although set in real-world-inspired scenarios, focus on a direct state-action-reward mapping between the agent's actions and the environment's state. This produces agents that can adapt to dynamic scenarios but, when applied to competitive settings against humans, fail to assess and deal with the impact of their fast-adapting opponents. In most cases, when these agents choose an action, they do not take into consideration how their opponents can affect the state of the scenario. In competitive scenarios, agents have to learn decisions that a) maximize their own goal and b) minimize their adversaries' goal. Besides dealing with complex scenarios, such solutions also have to handle the dynamics between the agents themselves. In this regard, social reinforcement learning still lags behind the mainstream applications and demonstrations of the last few years.
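The twin objectives above, maximizing one's own goal while minimizing the adversary's, can be illustrated with a zero-sum-style reward. The sketch below is a minimal, hypothetical example for a generic two-player turn-based game; the function name, score deltas, and weights are illustrative assumptions, not part of the actual Chef's Hat environment API.

```python
def competitive_reward(own_score_delta: float,
                       opponent_score_delta: float,
                       alpha: float = 1.0,
                       beta: float = 1.0) -> float:
    """Illustrative competitive reward: (a) reward the agent's own
    progress, (b) penalize progress made by the adversary.

    alpha and beta are hypothetical weights trading off the two goals.
    """
    return alpha * own_score_delta - beta * opponent_score_delta


# A move that gains the agent 2 points but lets the opponent gain 5
# yields a negative reward, discouraging actions that help the adversary.
r = competitive_reward(own_score_delta=2.0, opponent_score_delta=5.0)
```

Under this kind of shaping, an agent trained only on its own score delta (beta = 0) would ignore the opponent entirely, which is precisely the limitation of the direct state-action-reward mapping described above.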
We introduce here the Chef's Hat Cup: Revenge of the Agent! This is the second edition of our competition, which aims at the development of the most challenging artificial players.
This year's competition is separated into two tracks: a virtual agents' competitive scenario and, for the first time, a human track. In the first track, participants will use the already available simulation environment to develop the most effective agents to play the Chef's Hat card game and win. In the second track, humans will play the games against each other, generating a valuable dataset containing both in-game descriptors (moves, strategies) and social cue information, such as facial features and personality traits of the participants. To end the competition in high style, the winners of each track, virtual agents and humans, will play an exhibition match against each other, which will set the benchmark for agents in the years to come.