Developing and Deploying Methodologies for Improving the Accuracy, Fairness, and Trustworthiness of Risk Assessment Models in Child Welfare
Child protection agencies across the US are estimated to receive over 4 million referral calls per year. This project aims to contribute toward developing and deploying methodologies for improving the accuracy, fairness, and trustworthiness of risk assessment models in child welfare. Researchers have shown that inferring high-consensus decisions and incorporating them into the model training process produces more accurate models that are also better aligned with decision objectives that are inferable from historical decisions but not captured by accuracy alone.
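To illustrate one way this idea can be operationalized (a simplified sketch, not the project's published method; the data, threshold, and variable names are hypothetical): fit a model of historical screening decisions, treat cases where that model is highly confident as proxies for decisions most screeners would agree on, and reuse those decisions as auxiliary training labels.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Hypothetical features and historical human screening decisions.
n = 2000
X = rng.normal(size=(n, 4))
decision = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Step 1: model the historical human decisions.
decision_model = GradientBoostingClassifier().fit(X, decision)
p_decide = decision_model.predict_proba(X)[:, 1]

# Step 2: infer "high-consensus" cases as those where the decision
# model is very confident. The 0.95 cutoff is an illustrative choice;
# in practice one would use held-out (cross-fitted) predictions.
high_consensus = (p_decide > 0.95) | (p_decide < 0.05)

# Step 3: for high-consensus cases, the near-unanimous human decision
# can serve as an auxiliary label alongside the observed outcome label.
aux_X, aux_y = X[high_consensus], decision[high_consensus]
print(f"Inferred {high_consensus.sum()} high-consensus cases out of {n}")
```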
By building risk assessment models using routinely collected administrative data, we can better identify the cases most likely to result in adverse child welfare outcomes. A key challenge is that the adverse outcome we choose to predict is often observed only for a particular subset of cases. For instance, in child welfare call screening, whether an allegation is substantiated is observed only if the screening staff decide to assign a case worker to investigate the referral.
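The sketch below makes this selective-labels issue concrete (synthetic data and hypothetical column names, not the project's actual pipeline): the outcome label exists only for screened-in referrals, so a model fit naively on labeled cases is trained on a selected subpopulation, and its error and fairness estimates need not generalize to the full set of referrals.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical referrals: features, a screening decision, and an
# outcome (substantiation) observed only when screened_in == 1.
n = 5000
X = rng.normal(size=(n, 3))
screened_in = (X[:, 0] + rng.normal(size=n) > 0.5).astype(int)
outcome = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 1.0).astype(int)

df = pd.DataFrame(X, columns=["f1", "f2", "f3"])
df["screened_in"] = screened_in
# The label is missing (unobserved) for screened-out referrals.
df["substantiated"] = np.where(screened_in == 1, outcome, np.nan)

# Naive approach: fit only on cases with observed labels. The model
# never sees screened-out referrals, which is exactly the selection
# problem described above.
observed = df.dropna(subset=["substantiated"])
model = LogisticRegression().fit(
    observed[["f1", "f2", "f3"]], observed["substantiated"].astype(int)
)
print(f"Labels observed for {len(observed)} of {len(df)} referrals")
```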
Publications
- Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova, “A case for humans-in-the-loop: decisions in the presence of erroneous algorithmic scores,” CHI 2020.
- Amanda Coston, Alan Mishler, Edward Kennedy, and Alexandra Chouldechova, “Counterfactual risk assessments, evaluation, and fairness,” ACM FAT* 2020.
- Riccardo Fogliato, Max G'Sell, and Alexandra Chouldechova, “Fairness evaluation in the presence of biased noisy labels,” AISTATS 2020.
STUDENTS:
Maria De-Arteaga, PhD Candidate in Machine Learning & Public Policy