
OP-ED: LA County Nixes Alarmingly Unreliable “Predictive Analytics” Foster Care Scheme—For Now

Written by WLA Guest

After Disturbingly High False Positive Rate, Head of LA County’s Office of Child Protection Urges Caution With Predictive Analytics

by Richard Wexler

In Los Angeles County, they called it Project AURA (Approach to Understanding Risk Assessment).

It was among the most highly touted experiments in the burgeoning fad for using predictive analytics in child welfare – a dystopian sci-fi nightmare-come-true in which computer algorithms predict who will abuse a child. (But, we are assured, child protective services agencies would never ever actually use that information to tear apart families.)

Project AURA was the subject of gushy news stories. It was an experiment particularly favored by those who believe that more, not fewer, children should be removed from their parents and placed in the foster care system.

And now, thankfully, it is reportedly dead.

Buried on page 10 of a report to the Los Angeles County Board of Supervisors by Michael Nash, executive director of the county’s Office of Child Protection, is the news that the Department of Children and Family Services (DCFS) “is no longer pursuing Project AURA.”

AURA was developed by the software firm SAS. Exactly what’s in it is a secret; no one outside the company knows how the algorithm works.

AURA was never used on any actual cases. Rather, it was tested on past reports alleging child abuse or neglect. SAS then looked to see what actually happened to those families.

As Nash’s report revealing the death of Project AURA explains:

While the tool correctly detected a high number of children (171 cases) at the highest risk for abuse, it also incorrectly identified a far higher number (3,829 cases) of false positives (i.e., children who received high risk scores who were not at risk for a negative outcome). [Author’s ital.]

In other words, AURA falsely flagged a staggering number of innocent families. Had it actually been in use, an astounding number of children would have been placed at risk of being needlessly torn from their homes and consigned to the chaos of foster care.
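To put those figures in rough perspective (and assuming the 171 and 3,829 cases cited in the report are the full tally of children the tool flagged as highest risk): of roughly 4,000 high-risk flags, only 171 panned out – a hit rate of 171 ÷ 4,000, or a little over 4 percent. On those numbers, more than 95 of every 100 families the algorithm flagged would have been flagged wrongly.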


What Finally Killed AURA?

The results of the AURA experiment – including the false positive rate – have been known for nearly two years. But that didn’t stop the county from pushing ahead. And it didn’t stop the gushy news coverage. It’s not clear what finally prompted DCFS to pull the plug.

Perhaps it’s because, as Nash points out, all those false positives would further overload the system. More likely, it was an initiative by the State of California to try to come up with a “better” predictive analytics model.

Unlike AURA’s developers, those building the new model are promising a completely open process, including consultation with various “stakeholders” and transparency about exactly which risk factors are used and how they are weighted – allowing anyone to “interrogate the algorithm.”


Tread “Cautiously and Responsibly”

It is also encouraging that Nash’s report—commissioned by the Board of Supervisors—is filled with warnings about the need to proceed “cautiously and responsibly.” Nash says a set of strict standards “to address the important operational, legal, and ethical considerations…” should be adopted “before considering the use of predictive-analytics models.” Those standards should include “understanding how racism and other biases may be embedded in systemic data and addressing these within the model.”

Nash noted that the independent journalism nonprofit ProPublica found exactly that bias in predictive analytics tools already in use in criminal justice.

All this means that, if nothing else, the nightmare of “Minority Report”-style policing in Los Angeles child welfare is at least another year or two away.

The bad news is that Nash’s report accepts the naïve view that once a good algorithm is created it can be properly controlled and limited.

He writes:

Determining [predictive analytics’] “right” use – to identify families most in need of supports, rather than to trigger any negative consequences for them – will be fundamental.

But Nash, himself a former juvenile court judge, must know that’s not how child welfare works in the real world.

Whatever controls are in place at the outset will disappear the moment a child “known to the system” dies and the caseworker handling the case says “DCFS had all this information about the family, and they knew it was ‘high risk’ but they didn’t tell me.”

Then, all bets – and all restrictions – are off, and it will be take-the-child-and-run in every family where the computer spits out a high “risk score.”


SDM Is Let Off the Hook

The other bad news concerns the other model of risk and safety assessment that the Supervisors asked Nash to study – the one currently used in Los Angeles: Structured Decision-Making.

Structured Decision-Making (SDM) is a model for child protection produced by the National Council on Crime and Delinquency (NCCD) that includes six different assessments conducted at different stages of a case.

Like predictive analytics, SDM also has been found to raise issues of racial and class bias. Nash acknowledges those issues in passing:

Users of the tool, in particular, fault it for not incorporating into its assessments the entire story of what is happening within a family, but instead focusing on a few broad strokes without giving weight to important nuances. Users additionally state that the tool is too narrowly focused on the caregiver and does not take into account the strengths of the family as a whole.

But immediately he adds this parenthetical aside:

(The latest version of SDM has been revised to try to be more strength-based in its approach.)

In my own experience, some version of “Yes, but the new version is different” is what developers of SDM have said for more than a decade, each time similar concerns are raised. That can only leave one wondering about all the “risk assessments” and “safety assessments” performed with old, unimproved versions of SDM.

The defeat of AURA shows that, contrary to what some predictive analytics proponents say, it is not inevitable that every legislative body and child welfare agency will embrace this latest fad in child welfare.

At a minimum, opponents in Los Angeles have more time to organize. And using predictive analytics in child welfare no longer has an AURA of inevitability.


Richard Wexler is a longtime child welfare advocate and executive director of the National Coalition for Child Protection Reform.


Editor’s Note: Check out another important view of this complicated issue here with Daniel Heimpel’s LA Examines The Risky Business Of “Predictive Risk Modeling.”
