Discovering and Understanding Algorithmic Biases in Autonomous Pedestrian Trajectory Predictions

Abstract

Pedestrian trajectory prediction is an important module in autonomous vehicles (AVs) for safe and effective motion planning. Many deep learning algorithms that achieve near real-time trajectory prediction have recently been developed. However, researchers in the artificial intelligence (AI) ethics community have raised critical concerns about the bias and fairness of deep learning algorithms in general. For example, most pedestrian trajectory data is collected from majority populations, and models trained on such data may not generalize to the heterogeneous needs and behavior patterns of different pedestrian groups, especially vulnerable pedestrians such as people with disabilities, the elderly, and children. Biases in trajectory prediction algorithms could mean that pedestrians from certain vulnerable demographics are more likely to be involved in vehicle crashes. In this work, we test two state-of-the-art pedestrian trajectory prediction models for age and gender biases across three datasets, designing and applying novel evaluation metrics to compare model performance. We find that both models perform worse on children and the elderly than on adults, while their performance is similar for men and women. We identify potential sources of these biases and discuss several limitations of our study. Our future work will test additional models, refine our evaluation metrics, further disentangle dataset bias from algorithmic bias, and mitigate the algorithmic biases we find.
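As a generic illustration only (the paper's own evaluation metrics are not specified here), a minimal sketch of how prediction error can be disaggregated by demographic group, using the standard average and final displacement errors (ADE/FDE) over predicted versus ground-truth trajectories; the group labels (e.g., "child", "adult", "elderly") are hypothetical annotations:

```python
import numpy as np

def ade_fde(pred, gt):
    """ADE/FDE for one trajectory; pred and gt have shape (T, 2)."""
    dists = np.linalg.norm(pred - gt, axis=-1)  # per-timestep Euclidean error
    return dists.mean(), dists[-1]

def groupwise_errors(preds, gts, groups):
    """Aggregate ADE/FDE per demographic group.

    preds, gts: arrays of shape (N, T, 2); groups: length-N list of
    hypothetical demographic labels for each pedestrian.
    """
    results = {}
    for g in set(groups):
        idx = [i for i, lbl in enumerate(groups) if lbl == g]
        ades, fdes = zip(*(ade_fde(preds[i], gts[i]) for i in idx))
        results[g] = {"ADE": float(np.mean(ades)), "FDE": float(np.mean(fdes))}
    return results
```

Comparing the per-group numbers (e.g., ADE for children versus adults) is one simple way to surface the kind of performance gap the abstract describes.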

Publication
Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems