Combinatorial Multi-armed Bandits for Real-Time Strategy Games
Journal article · Open access · Peer reviewed

Santiago Ontañón
Journal of Artificial Intelligence Research, vol. 58, pp. 665–702
01 Jan 2017
DOI: https://doi.org/10.1613/jair.5398
Published Version of Record (VoR), Open Access (Publisher-Specific)

Abstract

Subjects: Computer Science · Computer Science, Artificial Intelligence · Science & Technology
Games with large branching factors pose a significant challenge for game tree search algorithms. In this paper, we address this problem with a sampling strategy for Monte Carlo Tree Search (MCTS) algorithms called naive sampling, based on a variant of the Multi-armed Bandit problem called the Combinatorial Multi-armed Bandit (CMAB). We analyze the theoretical properties of several variants of naive sampling and empirically compare them against the other existing strategies in the literature for CMABs. We then evaluate these strategies in the context of real-time strategy (RTS) games, a genre of computer games characterized by their very large branching factors. Our results show that as the branching factor grows, naive sampling outperforms the other sampling strategies.
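To make the idea concrete, the following is a minimal sketch of the naive-sampling principle, not the paper's implementation: a macro-arm is a combination of per-component choices (e.g., one action per unit in an RTS game), rewards are naively assumed to decompose additively across components, and the strategy alternates between exploring via independent per-component bandits and exploiting a global bandit over macro-arms sampled so far. The function name `naive_sampling` and the single `epsilon` parameter are illustrative assumptions.

```python
import random
from collections import defaultdict

def naive_sampling(components, reward_fn, iterations=1000, epsilon=0.1):
    """Illustrative sketch of naive sampling for a CMAB.

    components: list of lists; legal values for each variable of a macro-arm.
    reward_fn:  maps a macro-arm (tuple of component values) to a reward.
    """
    # Local MABs: per-component reward estimates (the "naive" assumption
    # that the global reward decomposes across components).
    local_sum = [defaultdict(float) for _ in components]
    local_cnt = [defaultdict(int) for _ in components]
    # Global MAB over the macro-arms actually sampled so far.
    global_sum = defaultdict(float)
    global_cnt = defaultdict(int)

    for _ in range(iterations):
        if random.random() < epsilon or not global_cnt:
            # Explore: choose each component independently, epsilon-greedy
            # on its local estimate.
            arm = tuple(
                random.choice(vals)
                if random.random() < epsilon or not local_cnt[i]
                else max(vals, key=lambda v: local_sum[i][v] / local_cnt[i][v]
                         if local_cnt[i][v] else float('-inf'))
                for i, vals in enumerate(components)
            )
        else:
            # Exploit: replay the best macro-arm seen so far.
            arm = max(global_cnt, key=lambda a: global_sum[a] / global_cnt[a])

        r = reward_fn(arm)
        # Credit the reward to every component value used (naive update).
        for i, v in enumerate(arm):
            local_sum[i][v] += r
            local_cnt[i][v] += 1
        global_sum[arm] += r
        global_cnt[arm] += 1

    # Recommend the macro-arm with the best empirical mean.
    return max(global_cnt, key=lambda a: global_sum[a] / global_cnt[a])
```

The key property this sketch illustrates is that exploration cost scales with the sum of the component domain sizes rather than with their product, which is why the strategy remains feasible at the very large branching factors of RTS games.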

Metrics

71 citations in Scopus

Details

Web of Science research areas
Computer Science, Artificial Intelligence