As AI-based models take on an increasingly central role in our lives, concern for fairness grows with them. In recent years, mounting evidence has revealed how vulnerable AI models are to bias and how challenging its detection and mitigation can be. Our contribution is threefold. First, we gather name disparity tables across protected groups, allowing us to estimate sensitive attributes (gender, race). Using these estimates, we compute bias metrics given a classification model's predictions. Because we rely only on names and zip codes, our method is model- and feature-agnostic. Second, we offer an open-source Python package that produces a bias detection report based on our method. Finally, we demonstrate that the names of older individuals are better predictors of race and gender and that double surnames are a reasonable predictor of gender. We tested our method on publicly available data (US Congress members) and classifiers (COMPAS) and found its estimates to be consistent with the sensitive attributes recorded in those sources.
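To illustrate the core idea, the following minimal sketch (not the released package's API; all table and column names are hypothetical) shows how probabilistic gender estimates derived from a name disparity table could be combined with a classifier's predictions to compute a demographic-parity-style bias metric.

```python
import pandas as pd

# Hypothetical name disparity table: for each first name, the estimated
# probability of belonging to a protected group (illustrative values only).
name_table = pd.DataFrame({
    "first_name": ["maria", "john", "alex"],
    "p_female":   [0.92,    0.03,   0.45],
})

# Hypothetical classifier output: one row per individual, with the model's
# positive-outcome prediction and the individual's first name.
predictions = pd.DataFrame({
    "first_name": ["maria", "john", "alex", "maria", "john"],
    "y_pred":     [0,       1,      1,      1,       1],
})

# Attach the proxy group-membership probabilities to each prediction by name.
df = predictions.merge(name_table, on="first_name", how="left")

# Probability-weighted positive rates per group: rather than hard-assigning a
# gender, each prediction is weighted by the estimated membership probability.
pos_rate_female = (df["y_pred"] * df["p_female"]).sum() / df["p_female"].sum()
p_male = 1.0 - df["p_female"]
pos_rate_male = (df["y_pred"] * p_male).sum() / p_male.sum()

# Demographic parity difference: gap between the group-level positive rates.
dp_difference = pos_rate_male - pos_rate_female
print(f"Estimated demographic parity difference: {dp_difference:.3f}")
```

The same weighting scheme extends to other group-conditional metrics (e.g., true-positive-rate gaps) whenever the sensitive attribute is available only as a name- or zip-code-based probability estimate.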
Journal of Responsible Technology (Elsevier), Volume 9, 2022, 100020, ISSN 2666-6596.