Call for Papers

Fair ML for Health, NeurIPS 2019 Workshop

Important Dates

  • Submission deadline: Sun, Sept 15, 11:59pm Anywhere on Earth (AoE)
  • Travel grant applications: rolling, but suggested by Sun, Sept 15
  • Paper acceptance notifications: Mon, Sept 30, 11:59pm AoE
  • Travel grant notifications: Tues, Oct 1, 11:59pm AoE
  • Camera-ready deadline: Fri, Nov 1, 11:59pm AoE
  • Workshop: Sat, Dec 14, 9am - 6pm PST, Vancouver, BC, Canada

Topics

The goal of this workshop is to investigate issues around fairness that are specific to ML-based healthcare. Topics of interest include the following.

(Intersectional) Fairness Metrics in ML for Health

Fairness in ML for health is unique in that patients with different (protected) attributes, such as race and gender, may genuinely require different care (for example, dosage differences between men and women). In addition, systemic biases have prevented specific population groups from taking advantage of advances in medicine. For instance, women are underdiagnosed for heart attacks because they often present symptoms differently, and Black patients must be at significantly higher health risk before they receive the same level of care as advantaged subgroups. This adds complexity relative to traditional work on algorithmic fairness, where equity for protected groups is the primary goal.

Algorithmic fairness has been widely studied in machine learning in contexts such as predictive policing and criminal recidivism prediction. These problems focus heavily on classification, generally attempting to achieve parity of model performance across discrete groups that have been historically discriminated against. The associated definitions of fairness therefore codify relationships between protected attributes and outcomes as discrete categories. In analyzing healthcare diagnoses and spending, not only are the outcomes continuous, but it is also unclear whether discrete subgroup definitions capture the existing heterogeneity in the population that requires targeted attention (including intersectional subgroups). Are current definitions of algorithmic fairness applicable and sufficient for clinical ML? Are causal fairness frameworks directly applicable? We call for work around these questions.
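For concreteness, the classification-centric definitions above can be made explicit in a few lines of code. The sketch below is purely illustrative (it is not part of the call, and the data are synthetic): it computes a demographic parity gap and a true-positive-rate (equal opportunity) gap across a single discrete protected attribute, exactly the discrete-group framing that continuous clinical outcomes and intersectional subgroups complicate.

```python
# Minimal sketch of two standard group-fairness checks for a binary
# classifier. All data are synthetic; `group` is a single discrete
# protected attribute, the assumption questioned in the text above.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)    # protected attribute A in {0, 1}
y_true = rng.integers(0, 2, size=n)   # observed outcome Y
y_pred = rng.integers(0, 2, size=n)   # model decision Y_hat

# Demographic parity: P(Y_hat = 1 | A = a) should match across groups.
dp_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Equal opportunity: true-positive rates should match across groups.
def tpr(a: int) -> float:
    mask = (group == a) & (y_true == 1)
    return y_pred[mask].mean()

eo_gap = abs(tpr(0) - tpr(1))
print(f"demographic parity gap: {dp_gap:.3f}, TPR gap: {eo_gap:.3f}")
```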

Fairness in Clinical Decision Making

Most work on fairness in ML has focused on prediction, whereas clinical decision making is the ultimate avenue where ML might be used. This requires shifting the characterization of fairness toward fair decision making, where the cost of uncertainty can be extremely high.
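As a hedged illustration of why decisions differ from predictions, consider the standard cost-sensitive decision rule: treat when the predicted risk exceeds C_FP / (C_FP + C_FN). The sketch below is not prescribed by the call and uses made-up costs; it shows how the same risk estimate can yield opposite decisions once misclassification costs enter, which is where questions about whose errors are costly arise.

```python
# Illustrative sketch: the classic cost-sensitive decision rule. The same
# predicted risk leads to different decisions as the (hypothetical) costs
# of false positives (overtreatment) and false negatives (missed cases)
# change.
def treat(p_risk: float, cost_fp: float, cost_fn: float) -> bool:
    """Treat iff expected cost of withholding exceeds cost of treating."""
    threshold = cost_fp / (cost_fp + cost_fn)
    return p_risk > threshold

print(treat(p_risk=0.2, cost_fp=1.0, cost_fn=9.0))  # True: threshold 0.10
print(treat(p_risk=0.2, cost_fp=5.0, cost_fn=5.0))  # False: threshold 0.50
```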

Benchmark Datasets for Fair ML in Health

The lack of benchmark datasets representative of diverse populations is the most pressing obstacle to advancing equitable ML-based care. A primary aim of this workshop is to invite papers introducing better datasets that are more amenable to targeted fairness research in health. We are able and willing to work with authors to disseminate de-identified datasets to the larger ML research community. We also encourage concrete evaluations of issues around the equity and quality of existing data.

Characterizing Bias in Healthcare (Data, Models, and Systems)

Bias can manifest in models, data, and collected outcomes in a myriad of ways. One form of bias is encoded when high-dimensional embeddings are learned as intermediate feature representations, a popular practical tool for predictive ML. While the existence of such bias is well characterized for web data, the distinct nature of clinical notes and imaging data requires careful evaluation.
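For intuition, a common probe for embedding bias on web data measures differential cosine association between a target vector and two attribute word sets, in the spirit of WEAT-style tests; whether and how such probes transfer to clinical embeddings is exactly the open question. The sketch below is illustrative only, with toy hand-made vectors standing in for learned embeddings.

```python
# Illustrative WEAT-style association probe: does a target embedding sit
# closer to attribute set A than to attribute set B? Vectors here are toy
# 3-d stand-ins; a real probe would use embeddings learned from, e.g.,
# clinical notes.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(target, set_a, set_b) -> float:
    """Mean similarity to set A minus mean similarity to set B."""
    return (np.mean([cosine(target, a) for a in set_a])
            - np.mean([cosine(target, b) for b in set_b]))

concept = np.array([0.9, 0.1, 0.2])   # hypothetical clinical concept
attrs_a = [np.array([1.0, 0.0, 0.1]), np.array([0.8, 0.2, 0.0])]
attrs_b = [np.array([0.0, 1.0, 0.3]), np.array([0.1, 0.9, 0.2])]

# A score far from zero suggests the concept is encoded closer to one group.
print(f"association score: {association(concept, attrs_a, attrs_b):+.3f}")
```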

Rare Diseases Primarily Affecting Minorities

Can ML improve diagnosis of rare diseases and of conditions that primarily affect minorities? This may require novel methodologies, as sample sizes shrink exponentially for intersectional subgroups: crossing just five binary attributes already partitions a cohort into 2^5 = 32 subgroups.

ML for Targeting Inequities in Public Health Policy

Can ML inform policy and resource allocation in ways that reduce inequities in the system?

Discovering Population Subtypes for Improving Access and Care

Can novel unsupervised methods help discover population subtypes that require active attention for equitable care?
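As one possible starting point (an assumption on our part, not a prescribed method), simple clustering already illustrates the idea of surfacing candidate subtypes from unlabeled patient features; any discovered clusters would of course need clinical validation. The sketch below uses synthetic data and scikit-learn's KMeans.

```python
# Illustrative sketch: clustering as one unsupervised route to candidate
# population subtypes. The feature matrix is a synthetic stand-in for
# patient features (labs, vitals, utilization); cluster sizes can flag
# small groups that may need targeted attention.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 4)),  # majority-like synthetic group
    rng.normal(3.0, 1.0, size=(20, 4)),   # small, distinct synthetic group
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                # subtype sizes, e.g. [100 20]
```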

Submission Instructions

We invite extended abstracts of 2-8 pages (excluding references); there is no limit on the number of pages used for references. Supplementary material, including code, may be submitted as an additional zip file, though reviewers will not be required to assess it. Submissions should be PDFs in the standard NeurIPS format.

The reviewing process is double-blind: please submit anonymized versions and do not include any identifying information. Previously published work will be considered for review only if it has been meaningfully extended beyond its published version. Parallel submissions (to a journal, conference, workshop, or preprint repository) are, however, allowed.

Authors of all accepted abstracts will be required to present posters, and some accepted contributions will be invited to give spotlight talks. Accepted submissions will be linked from the workshop website, along with supplementary material and code. No formal proceedings will be provided.

Submission link: https://cmt3.research.microsoft.com/FMLHNIPS2019

Visa and Travel Grants: If you are in need of a travel grant, we will notify you of your eligibility by the acceptance notification deadline to avoid delays (students and underrepresented minorities presenting at the workshop will be given preference). We will continue to provide grants on a rolling basis thereafter. More details on procedures are coming soon.