Predictive analytics is the nuclear weapon of child welfare. Vast amounts of data are taken from people –
especially poor people – without their consent (like what Facebook does, only
worse). Then if someone alleges that one of those people has committed
child abuse or neglect, a secret, or perhaps only semi-secret, algorithm coughs
up a risk score. That score is an invisible scarlet number that can brand not
only parents, but their children, for life.
As author Virginia Eubanks explains in her book, Automating Inequality, rather than counteracting the racial and class biases of the human beings who run child welfare systems, predictive analytics magnifies those biases. She
calls it “poverty profiling.” And ProPublica
has documented how this has played out in criminal justice.
So what is the response from proponents of analytics in
child welfare – a field that is super secretive with no real accountability,
due process, or checks and balances?
Endless promises of self-restraint.
Sometimes they say: “We’ll only use it to target prevention programs.” But we already know how to target prevention
programs without an algorithm: Just put them where the poor people are, since
the overwhelming majority of cases involve “neglect” and child welfare systems
routinely label poverty as neglect.
Or they’ll say: “Child abuse hotline operators will know the ‘risk score,’ but we won’t even tell the people who actually go out to investigate the allegation.” But whoever
is going out to investigate knows that if they’re told to get out there in a
hurry it’s probably because the risk score was high. So whether they’re told or
not, they know.
Or they’ll say: “We’ll never, ever use the score to decide
whether to remove a child from the home.”
But again, the caseworker knows (whether explicitly told or not) when
the algorithm has rated a case high risk – and they can’t unknow it when the time comes to decide whether to remove the
child.
Or they’ll say: “Even if we get our fondest wish and get to slap a ‘risk score’ on every child at birth (and make no mistake, for some in the field, it is their fondest wish), we’ll only use it for prevention.” But – well, see all the problems cited above.
The limits of high-minded promises
But there’s an even bigger problem with all these
high-minded promises. What happens as soon as there’s pressure to be less
high-minded?
The amount of pressure needed to get politicians to abandon
their principles can be remarkably low – as is illustrated by the story of the
New York State Lottery. Yes, the Lottery.
New York State was among the first in the modern era to institute a lottery, in 1967. It took an amendment to the State Constitution – so there were lots of high-minded promises to allay the concerns of those who feared it would encourage compulsive gambling or entice those least able to afford it to waste their money.
The key selling point, it was promised, would be an appeal
not to greed but to generosity. Advertising
would emphasize that lottery proceeds would be used to help fund public
education. So the first lottery slogan
was "Your Chance of a Lifetime to Help Education." I grew up in New York and I recall an early
print ad that said “The New York State Lottery: It’s not the money; it’s the
principal. And the teachers. And the students.”
There was just one problem. Not enough people were buying
lottery tickets. Sales were way below
projections. So, by the 1980s, the lottery took a different approach, one that might best be summed up as: it’s not the principals and the teachers and the students – it’s the money! Money! Money!
Yes, the Lottery still sometimes produces commercials that take the high road, but the money pitch is the dominant theme.
If all it takes is revenue falling short of projections to
prompt this abandonment of principle (and principals), imagine what would
happen in a field where the stakes are a lot higher.
Imagine this scenario:
A child “known to the system” has died.
The media have found out the name of the caseworker who mistakenly
thought the home was safe. After being attacked in news accounts and/or by
politicians, she comes forward to tell her story.
Choking back tears, she says: “My bosses had an algorithm
that told them this family was high-risk. But they never told me. Of course, if
only I’d known I never would have left that child there.”
What are the odds that the leader of the child protective
services agency would stick to the policy of using predictive analytics with
only the utmost restraint? Even if s/he
wanted to, what are the odds that the political leadership in the state or
county would allow such restraint to continue?
I’d say you’ve got a better chance of winning the lottery.