By W. Bradley Knox, Alessandro Allievi, Holger Banzhaf, Felix Schmitt, and Peter Stone
Published in Artificial Intelligence (2023)

AIJ (open access): https://doi.org/10.1016/j.artint.2022.103829
arXiv (2021): https://arxiv.org/abs/2104.13906


Summary: Reinforcement learning (RL) for autonomous driving (AD) holds promise for optimizing driving policies from the coming fire hose of driving data. But as an optimization algorithm, RL is only as good as the reward function it optimizes.

To aid reward design, we present 8 sanity checks for reward functions in any domain. We apply these checks to published reward functions for autonomous driving and find an alarming pattern of frequent failures. For example, the most risk-averse reward function we analyzed would approve deploying a policy with a collision rate 4000 times that of drunk 16-17-year-old US drivers. We suspect that applying these sanity checks to tasks beyond autonomous driving would unearth similarly frequent issues. Later in the paper, we explore obstacles to reward design for AD through initial attempts to design three attributes of an AD reward function. We also review some government-mandated performance metrics, discuss reward learning for AD, and propose designing reward functions for AD with a financial currency as their unit.
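To give a flavor of this kind of sanity check, here is a minimal back-of-the-envelope sketch in the spirit of the paper's analysis. It assumes a simple linear reward that pays per meter of progress and penalizes collisions; the weights and the human collision-rate baseline are illustrative assumptions, not values taken from the paper or from any reward function it analyzed.

```python
def max_tolerated_collision_rate(progress_reward_per_m: float,
                                 collision_penalty: float) -> float:
    """Collision rate (per meter) at which expected return per meter is zero.

    Expected return per meter is progress_reward_per_m - rate * collision_penalty,
    so any policy with a collision rate below the returned value earns positive
    expected return, and the reward function would favor deploying it.
    """
    return progress_reward_per_m / collision_penalty


# Hypothetical reward weights: +1 per meter driven, -5000 per collision.
implied_rate = max_tolerated_collision_rate(1.0, 5000.0)  # collisions per meter

# Illustrative human baseline: roughly 1 police-reported crash per 1e6 km driven.
human_rate = 1 / 1e9  # collisions per meter

print(f"This reward function tolerates up to {implied_rate:.2e} collisions/m, "
      f"about {implied_rate / human_rate:.0f}x the illustrative human baseline.")
```

Even with a seemingly large collision penalty, the implied tolerated collision rate can exceed human rates by orders of magnitude, which is the pattern of failure the sanity checks are designed to surface.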

This paper is for anyone thinking about RL for autonomous driving, or more broadly about designing reward, cost, or utility functions or performance metrics for any task domain.