Leave it to good old American private enterprise to find a
new, high-tech way to cash in on parental fears. In a front-page story, The
Washington Post reports that for
just $24.99 a firm called Predictim will pressure any potential babysitter into
giving the company permission to scour her or his social media feeds. (The
applicant can say no, but then he or she is a lot less likely to get hired.)
Then the company funnels all the data through an algorithm that
coughs up a series of “risk ratings” – on a scale of 1 to 5 – for everything
from drug abuse to bullying or having a “bad attitude.”
Of course it’s all justified in the name of “child safety.”
“If you search for abusive babysitters on Google, you’ll see
hundreds of results right now,” Predictim’s co-founder told the Post. “There’s people out there who
either have mental illness or are just born evil. Our goal is to do anything we
can to stop them.”
But the Post story
points out: “Even as Predictim’s technology influences parents’ thinking, it
remains largely unproven, largely unexplained and vulnerable to quiet biases…”
Jeff Chester, executive director of the Center for Digital
Democracy, told the Post there is a “mad
rush to seize the power of AI [artificial intelligence] to make all kinds of
decisions without ensuring it’s accountable to human beings. It’s like people
have drunk the digital Kool-Aid and think this is an appropriate way to govern
our lives.”
But at least the sitters give their data to Predictim in a
way that is voluntary – albeit hard to refuse – and the potential abuses are
limited to whether a sitter gets a job.
It’s not as if some giant government agency is going to take data that
recipients are forced to surrender, funnel it through an algorithm and use it
against them to make life-and-death decisions about their families – well,
except Pittsburgh, Pa., which is
already doing it, and all the other cities and states that want to follow suit.
I refer, of course, to the use of “predictive analytics” algorithms
to supposedly predict who is likely to
be a child abuser – setting in motion
the process that can lead to the child being taken from those parents and
consigned to the chaos of foster care. In Pittsburgh, they’ve even considered
assigning a secret risk score – sort of an invisible “scarlet number” – to every
child at birth, a number that then can come back to haunt the child when she or
he becomes a parent.
Proponents offer up bland boilerplate about how they will
never, ever abuse these screening tools and they’re always on guard for bias. But somehow those pesky biases keep
sneaking in.
And you may be sure all the alleged safeguards will go out
the window as soon as there’s a high-profile horror story and, just like a
marketer for Predictim, a politician demands that the algorithm be unleashed
because “our goal is to do everything we can to stop them.”
So no, we can’t count on self-restraint to stop those in
child welfare who have drunk the predictive analytics Kool-Aid. They can be restrained only by a public that realizes the
bigger threat to the children isn’t Mom and Dad – it’s Big Brother.
For a more detailed
discussion, see our publication Big Data is Watching You and
this article in Wired, excerpted from
Prof. Virginia Eubanks’ book, Automating
Inequality.