Accepted Papers

(Spotlight) Estimating Skin Tone and Effects on Classification Performance in Dermatology Datasets by Newton Kinyanjui (CMU - Africa); Timothy Odonga (CMU - Africa); Celia Cintas (IBM Research); Noel C Codella (IBM Research); Rameswar Panda (IBM Research); Prasanna Sattigeri (IBM Research); Kush R Varshney (IBM Research)

(Spotlight) Understanding racial bias in health using the Medical Expenditure Panel Survey data by Moninder Singh (IBM Research); Karthikeyan Natesan Ramamurthy (IBM Research)

(Spotlight) Fair Predictors under Distribution Shift by Harvineet Singh (NYU); Rina Singh (NYU); Vishwali Mhasawade (NYU); Rumi Chunara (NYU)

Fair and Robust Treatment Effect Estimates: Estimation Under Treatment and Outcome Disparity with Deep Neural Models by (Author list retracted by request)

Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings by (Author list retracted by request)

Improving Subpopulation Miscalibration in Medical Risk Prediction by Gal O Yona (Weizmann Institute of Science); Noam Barda (Clalit Research); Noa Dagan (Clalit Research)

Fair treatment allocations in social networks by James Atwood (Google Brain); Hansa Srinivasan (Google); Yoni Halpern (Google); D Sculley (Google)

When your only tool is a hammer: The limits of computational solutions to bias in healthcare ML by (Author list retracted by request)

Validation of a deep learning mammography model in a population with low screening rates by Kevin Wu (Harvard University); Eric Wu (DeepHealth); Bill Lotter (Harvard University)

Enhancing Fairness in Kidney Exchange Program by Ranking Solutions by Golnoosh Farnadi (Polytechnique Montreal); Behrouz Babaki (Polytechnique Montreal); Margarida Carvalho (Université de Montréal)

Quantification of Bias in Machine Learning for Healthcare: A Case Study of Renal Failure Prediction by Josie V Williams (NYU); Narges Razavian (NYU Langone Medical Center)

Quantifying Fairness in a Multi-Group Setting and its Impact in the Clinical Setting by (Author list retracted by request)

Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination by Xiaojie Mao (Cornell University); Angela Zhou (Cornell University); Nathan Kallus (Cornell University)