NCCPR’s full report on the harm of predictive analytics in child welfare is available here.
John Kelly, senior editor of the Chronicle of Social Change (the Fox News of child welfare - and #1 cheerleader for the use of predictive analytics in child welfare)* argues that not only I, but also The New York Times, Forbes columnist John Carpenter, Republican strategist Mike Murphy and others are wrong to suggest that the presidential election revealed the emperor known as “predictive analytics” to be, at best, scantily clad if not stark naked.
First, Kelly argues that the presidential election predictions weren’t all that wrong; we just misunderstood them. Nate Silver of the website FiveThirtyEight tried to warn us that the 30 percent chance he gave Donald Trump of winning should be taken seriously. After all, Kelly writes, “Any baseball fan should have had some respect for that 30 percent number; it’s about the same probability that your team’s best batter is going to get a hit.” He adds:
In child welfare, would you take a kid away from his mother based solely on the fact that a predictive measure said there is a 30 percent chance the child would be severely maltreated? The answer should be, flatly, “No.”
There are several problems with this:
§ Silver didn’t just predict a 30 percent likelihood of a Trump win. He also predicted a 70 percent chance that Hillary Clinton would win. That “false positive” was by far the more likely outcome, according to Silver’s analytics. So the real question is: Would you take a kid away from his mother based solely on the fact that a predictive measure, however unreliable, said there is a 70 percent chance the child would be severely maltreated? Does anyone doubt the answer is yes?
And Silver’s prediction was the most conservative. The Times put Clinton’s chances of winning at 85 percent. The Princeton Election Consortium had it at 99 percent. What are the chances a child would be removed if analytics wrongly predicted those kinds of odds?
§ Even using Kelly’s example, in which the analytics say there’s only a 30 percent chance the child is in danger, the chances are excellent that a child would be removed from the home, particularly if there was a high-profile tragedy in the news at the time. That’s because no one has anything to lose from needless removals – except the children and their families.
CPS Workers Are Only Damned if They Don’t
Though child protective services workers often say they’re damned if they do and damned if they don’t, that’s not true. I’ve never seen a CPS worker fired, suspended, demoted, even slapped on the wrist – much less criminally prosecuted – for taking away too many children. All of those things have happened to workers who left one child in his own home and something went terribly wrong. When it comes to taking away children, CPS workers are only damned if they don’t.
I can hear the politician grilling the child welfare agency chief after a tragedy: “Any baseball fan should have had some respect for that 30 percent number; it’s about the same probability that your team’s best batter is going to get a hit. Yet your caseworker left the child in the home and look what happened!”
Kelly also argues that the massive failure of predictive analytics in the election has little to tell people in child welfare because the algorithms for election prediction are simpler. But that’s even more reason for concern. Precisely because child welfare algorithms have to consider so many more factors, the chances of human error, and human bias, are greater.
The experts consulted by the Times said the lessons extend to “every field.” I have not found any data experts saying “every field – except child welfare, where all the people are so much better and smarter than everyone else.”
Child Welfare Can’t Control Its Nukes
Kelly concludes:

Perhaps … the conversation the field needs to have as predictive analytics becomes a part of the field [is]: How do systems responsibly control the interpretation of valuable, but limited, predictive analysis?
But in the real world of child welfare, there is no way to limit the use of analytics once you start. Imagine this scenario: An enlightened child welfare agency leader decides to use analytics only as a guide to finding the families most in need of preventive services. Then a child “known to the system” dies. A caseworker (or maybe her lawyer) says: “The agency has data out there that could have told me this child was in danger and of course I would have acted on it – but my bosses wouldn’t let me see it.”
It is the rare child welfare agency leader who could resist the pressure to say: “OK, from now on, whenever the ‘risk score’ is above 70 percent (or 50, or quite possibly 30), you take away that child!”
The clear evidence that predictive analytics is racially biased, the fact that it produces astoundingly high false positive rates, as seen in New Zealand and Los Angeles, the fact that all those “false positives” will destroy children’s lives and overload child welfare systems – making it even harder to find children in real danger – none of that will matter.
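To see why those false positive rates balloon, consider a back-of-the-envelope sketch of the base-rate problem. Every number below is an illustrative assumption chosen for demonstration, not a figure from the New Zealand or Los Angeles studies:

```python
# Illustrative sketch of the base-rate problem in risk scoring.
# All numbers are hypothetical assumptions for demonstration,
# not figures from any actual child welfare system.

families_screened = 100_000   # assumed screening volume
base_rate = 0.02              # assume 2% of screened families involve severe maltreatment
sensitivity = 0.90            # assume the model flags 90% of the true cases
false_positive_rate = 0.10    # assume it also flags 10% of the safe families

true_cases = families_screened * base_rate          # 2,000 families
safe_families = families_screened - true_cases      # 98,000 families

true_positives = true_cases * sensitivity               # 1,800 correctly flagged
false_positives = safe_families * false_positive_rate   # 9,800 wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"Families flagged:    {true_positives + false_positives:,.0f}")
print(f"Wrongly flagged:     {false_positives:,.0f}")
print(f"Flags that are real: {precision:.0%}")  # roughly 16%

# Because severe maltreatment is rare among all families screened,
# even this seemingly strong model wrongly flags more than five
# safe families for every real case it catches.
```

Under these assumptions, most of the families the model sweeps in would be exactly the kind of “false positives” the New Zealand and Los Angeles findings describe.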
Predictive analytics is the nuclear weapon of child welfare. And child welfare can’t control its nukes.
*This paragraph was updated in March 2018 to more fully describe the Chronicle of Social Change.