Monday, September 2, 2024

Predictive analytics: The Project 2025 of child welfare

“Yes, it’s Big Brother.  But we have to have our eyes open to the potential of this model.”

-- Rhema Vaithianathan, co-designer of child welfare “predictive analytics” algorithms, discussing the idea of assessing children’s risk of abuse – while they’re still in the womb.

The Lincoln Project is a group of disaffected Republicans appalled by what has happened to their party. They’re probably best known for their ads.  One that’s gotten a lot of attention is their vision of what would happen if the near-total ban on abortion envisaged by the Heritage Foundation’s Project 2025 becomes reality.  Among the proposals: a massive increase in “abortion surveillance.” 

In the Lincoln Project ad, a father and daughter are pulled over by a police officer and ultimately arrested, the father for trying to take the daughter across the state line for an abortion, the daughter for “evading motherhood.”  (And just to be clear, some states already are trying to do things like this.) 

Have a look at the ad and then I’ll explain what all this has to do with child welfare: 


Among the striking features of the ad: the creators’ understanding of how the data we routinely surrender every day can be turned into an Orwellian nightmare of omnipresent surveillance.  Fortunately, something like this would only be supported by certain elements on the far right.  Liberals would never stand for it, right? 

But that brings us to the Great Exception of the Left: the fact that some of my fellow liberals (though many fewer than in decades past) will discard everything they claim to believe about civil liberties as soon as somebody whispers the words “child abuse” in their ears.  It’s one reason there is so little due process in what should be called family policing. 

Consider the quote at the start of this post.  It’s from a Boston Globe story in 2015, the early, heady days of the movement to bring Orwellian hyper-surveillance, what amounts to computerized racial profiling, to child welfare in the form of “predictive analytics” algorithms.  

We were told algorithms would indicate who is most likely to abuse a child so the family police agency (a more accurate term than “child welfare” agency) could swoop right in!  Big, centrist media – news organizations such as The New York Times, which now are appalled by Project 2025 – swooned over it in 2015 and beyond.  (Almost the lone exception: a prescient warning from Prof. Virginia Eubanks in her book, Automating Inequality, excerpted in Wired. Many more, on the left and right, are opposed now.) 

At the time, the focus was on models like the Allegheny Family Screening Tool (AFST), used by  screeners in metropolitan Pittsburgh when they receive reports alleging “neglect.” Without even a pretense of informed consent, AFST harvests vast amounts of data originally surrendered, voluntarily or unknowingly, for entirely different purposes.  Then it coughs up a “risk score” – an invisible “scarlet number” -- to determine the urgency of sending out an investigator.  The questionable means by which it was sold, the questionable claims about how it would work, and the many problems that have arisen are outlined in detail, with sources, in our publication Big Data is Watching You. 

In the years since, some reliably blue states and localities rushed to adopt algorithms. Illinois, Los Angeles and Oregon all later backed away, in the first two cases after spectacular failures.  But blue-leaning metropolitan Pittsburgh presses on, even after an independent evaluation found problems with racial bias, and even as, the Associated Press reports, the U.S. Department of Justice investigates whether AFST is biased against the disabled. 

But the designers of AFST are proposing ever more dangerous algorithms. Even in 2015 and before, they had bigger plans, plans that sound remarkably like something out of that Lincoln Project ad. 

The Pittsburgh algorithms stamp an invisible “scarlet number” risk score on children. Once there, it can never be erased.
That Boston Globe story included a reminder that Emily Putnam-Hornstein, co-designer of AFST and a prominent advocate of taking away more children, had written as far back as 2011 that “prenatal risk assessments could be used to identify children at risk of maltreatment while still in the womb.” (Recall the officer in the video asking the daughter: “What are you, about eight weeks pregnant?”) 

“Prenatal risk assessments” are what Putnam-Hornstein’s co-designer, Vaithianathan, was talking about when she said: “Yes, it’s Big Brother.  But we have to have our eyes open to the potential of this model.” 

Yes, we do, but not in the way Vaithianathan has in mind. 

Pittsburgh has moved ahead with a model, called “Hello, Baby,” that stamps that invisible scarlet number risk score on every child, if not quite in the womb then immediately upon birth. Proponents say it’s only to target prevention, not policing.  That is the case – for now. But, just as  Vaithianathan says, and as is discussed in detail below, “We have to have our eyes open to the potential of this model.” 

Technically “Hello, Baby” is voluntary, but good luck taking advantage of the extremely limited opportunity to opt out – right in those first days after your child is born when, after all, it’s not like you have anything else to think about.  (Ever notice how often, when a corporation forces you to opt out of something, it’s because they know you won’t want to opt in?) 

Once infants are branded with those scarlet numbers, the numbers can never be erased.  If children are labeled as being at high risk of abuse as infants, it increases the chances that their own children will be labeled at high risk of abuse. If ever these “high-risk” children, as adults, are accused of child abuse, the “Hello, Baby” risk score could raise their AFST scores, making it that much more likely their own children could be torn from their arms. (Again, authorities in Pittsburgh say they’ll never, ever do such a thing – but, as Vaithianathan says …) 

Among those most at risk: anyone who actually has been in foster care – since, no matter what the algorithm, having once been in foster care is likely to increase their “risk score” as a potentially abusive parent. 

But even this isn’t enough for Putnam-Hornstein and Vaithianathan. They’re pushing still another algorithm, the Cross Jurisdiction Model Replication (CJMR) project.  

Like “Hello, Baby,” CJMR generates a risk score for every child at birth. Unlike “Hello, Baby,” there is no opt-out at all.  And while with AFST the developers bragged about not explicitly using race (while using poverty), and with “Hello, Baby” they claimed (falsely) that they weren’t using poverty, this time there’s no more let’s-pretend.  The use of race and poverty as risk factors is out in the open. 

But fear not, say Putnam-Hornstein and Vaithianathan: as noted above, unlike AFST, these algorithms are only meant to target prevention. The places that adopt them would never, ever use them as a tool to investigate families.  They promise!  

Right.  After all, to think these data would be abused is like imagining that a big company like Facebook would sell data without consent.  Oh, yeah, that.  Well, OK, but that’s private industry.  Let’s try again: 

The notion that all that “Hello, Baby” and CJMR data would be misused is like imagining a police force would misuse juvenile records that are supposed to be sealed.  Oh, yeah – that. 

And when a child “known to the system” dies and demagogic politicians condemn the family police agency for not using all that data, you can be sure that agency will stand firm and never, ever change its policy and use algorithms like “Hello, Baby” for investigations! 

The people who want us to believe that are the same people who repeatedly used questionable claims to sell their algorithms in the first place.  And that’s not the only problem. 

Putnam-Hornstein, co-designer of the Pittsburgh algorithms and the CJMR project – algorithms that supposedly have no racial bias problem – denies the field has a racism problem in the first place. She demeans Black activists and takes pride in being part of a group that defends a self-described "race realist" law professor who hangs out with Tucker Carlson. She has said, "I think it is possible we don’t place enough children in foster care or early enough," and she has signed onto an extremist agenda that proposes requiring anyone reapplying for public benefits, and not otherwise seen by "mandatory reporters," to produce their children for a child abuse inspection.  

She’d even taken to spewing weird personal attacks like this one against her betters on LinkedIn(!) – before deleting and replacing her profile.  (LinkedIn has rules about this, by the way.)

Would you trust an enormously powerful algorithm in these hands?  In fact, it’s too powerful for anyone’s hands. 

To see why, take another look at that Lincoln Project video.  But instead of a police officer pulling over a car, imagine a caseworker at the door of a newborn and her family: “The algorithm says you’re high risk,” she says to the mother.  “So we’re taking your baby.” 

Here’s the moral of the story: When you begin a sentence with “Yes, it’s Big Brother …” the only ethical way to end that sentence is: “… so we’re not going to use it.”