For more recent information about the harm of predictive analytics in child welfare, see our publication Big Data Is Watching You.
Much as the National Rifle Association argues that “Guns don’t kill people, people do,” Joshua New defends the use of predictive analytics in child welfare by telling us, in effect, that computers don’t remove children, caseworkers do.
But there’s a corollary: Just as we should do more to keep big, powerful guns out of the hands of people who don’t know how to use them, so we should not trust the human beings who run child welfare systems with the nuclear weapon of big data.
This is illustrated by the way New himself handles data. In the
second sentence of his column, the champion of big data misunderstands the
first statistic he cites. He writes: “Consider that in 2014, 702,000
children were abused or neglected in the United States …”
But that’s
not true. Rather, the
702,000 figure represents the number of children involved in cases where
a caseworker, typically acting on her or his own authority, decided there was
slightly more evidence than not that maltreatment took place and checked a box
on a form to that effect.
For purposes of this particular statistic, there is no court
hearing beforehand, no judge weighing all sides, no chance for the accused to
defend themselves.
I am aware of only one study that attempted to second-guess
these caseworker decisions. It was done as part of the federal government’s
second “National Incidence Study” of child abuse and neglect. Those data
show that caseworkers were two to six times more likely to wrongly substantiate
maltreatment than to wrongly label a case “unfounded.”
I don’t blame the federal government for compiling the
data. I don’t blame the computers that crunched the numbers. My
problem is with how the human being – New – misinterpreted the numbers in a way
favorable to his point of view.
I’m not saying he did it on purpose (after all, he’s only
human); I use the example only to illustrate why, when he says that predictive
analytics systems are merely “decision support systems,” that’s not reassuring.
Nor is it reassuring that while New tells us how the Los Angeles predictive analytics experiment, called AURA, allegedly pinpointed a large proportion of cases that led to severe abuse (according to a study done by the same company that developed the software), he leaves out the fact that more than 95 percent of the cases AURA flagged apparently were false positives.
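To make that concrete, here is a minimal sketch of what a 95 percent false positive rate among flagged cases means in practice. The counts are hypothetical round numbers chosen only for illustration; the column itself gives only the percentage.

```python
# Hypothetical illustration of a 95 percent false positive rate
# among flagged cases; the counts are invented for the example.
flagged = 1000               # families the tool flags as "high risk" (illustrative)
false_positive_rate = 0.95   # share of flagged cases that turn out to be false alarms

true_positives = round(flagged * (1 - false_positive_rate))
false_positives = flagged - true_positives

print(f"Of {flagged} flagged families, roughly {true_positives} "
      f"involve the predicted outcome; {false_positives} do not.")
# Of 1000 flagged families, roughly 50 involve the predicted outcome; 950 do not.
```

In other words, for every family the tool flags correctly, roughly nineteen are flagged wrongly.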
New also accuses those of us who disagree with him not simply of
opposing predictive analytics, but “sabotaging” it, a word that conjures up
images of Luddites from the Vast Family Preservation Conspiracy sneaking into
offices to destroy computers. He offers no data to support his claim of
sabotage.
Then, he concludes by dredging up the latest version of the
classic canard: If you don’t support doing exactly what I want to do in exactly
the way I want to do it, then you don’t care about child abuse! His
version is to allege that those of us who disagree with him are “more fearful
of data than they are concerned about the welfare of children.”
Human fallibility intrudes in other ways as well.
Human beings decide which “risk factors” the computers seek
out. So, for example, in AURA, the computer looks at things like whether
a child has been taken to an emergency room often. But impoverished
parents rely more on emergency rooms, whether they also happen to abuse their children or
not.
Another alleged risk factor: changing schools a lot. But that
happens to impoverished families who are homeless or get evicted because they
can’t afford the rent – and to families of color whose children were
victimized by the well-known racial bias in school discipline – whether they
also happen to abuse their children or not.
Instead of compensating for human biases, AURA is likely to
magnify them.
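To see why, consider a deliberately simplified sketch. The feature names and weights below are hypothetical, not AURA’s actual inputs or model; the point is only that a score built from poverty-correlated proxies rises with poverty, not with maltreatment.

```python
# Toy risk score, NOT AURA's actual model: the features track poverty,
# so the score rises for poor families regardless of how they treat their children.

# Hypothetical weights, chosen only for illustration.
WEIGHTS = {"er_visits": 0.4, "school_changes": 0.6}

def risk_score(features):
    """Weighted sum of poverty-correlated 'risk factors'."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

poor_family = {"er_visits": 5, "school_changes": 3}       # uninsured, evicted twice
wealthier_family = {"er_visits": 1, "school_changes": 0}  # same behavior toward children

print(risk_score(poor_family))       # 3.8
print(risk_score(wealthier_family))  # 0.4
```

Nothing in those inputs measures abuse or neglect; the score simply ranks families by how poor their circumstances look.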
And let’s not kid ourselves. How many child protective
services caseworkers will dare to leave a child in his or her own home –
notwithstanding the 95 percent false positive rate – when it means that, in the
event of a tragedy, that caseworker will be on the front page for ignoring the
“sophisticated computerized risk assessment protocol” and “allowing” a child to
die? New’s own rhetoric makes clear what such a caseworker will face.
Right now,
AURA is going to be used to analyze reports received by the Los Angeles child
protective services “hotline” and passed on for investigation. But why wait for
reports? Once we have all that “big data” about where the “high risk” cases
are, will CPS workers simply be empowered to barge through the door of every home
that gives off an “aura” of child abuse? The federal commission on
child abuse fatalities may recommend something similar.
So no, we should not give child protective services agencies the
power to data-nuke our most impoverished families. Even if the data are up to
the challenge, the human beings are not.