The news this week that councils are set to use people’s data to create predictive models to detect child abuse has caused a stir, but councils in London have been piloting predictive analysis models across their child protection services for the last three years. And it’s not just councils – the NHS and the Department for Work and Pensions have been using them for some time.

A September 2017 article in Apolitical describes how councils in London have been using data analytics to try to identify children at risk of harm since 2015, something The Guardian missed this week when it covered the issue. It also suggests that the predictive model used in one council had an 80% success rate, though it's not clear from the article exactly how success rates were measured. What's interesting about this article is that it highlights the financial advantages to councils of using data in this way. The article tells us:

“Councils are expected to save over $910,000 for early targeted interventions, $160,000 by replacing human-conducted screenings with an automated system, and $193,000 for improving access to multi-agency data.”
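As an aside on that 80% figure: how a "success rate" is measured matters enormously. The short Python sketch below uses entirely invented numbers (none of them come from the Apolitical article or from Xantura) to show how the same set of predictions can be reported as roughly 80% successful under one metric while missing most at-risk children under another.

```python
# Purely illustrative: toy numbers showing how one set of predictions yields
# very different "success rates" depending on the metric chosen.
# None of these figures come from the Apolitical article or from Xantura.

true_positives = 40    # children flagged who did go on to need intervention
false_positives = 10   # children flagged who did not
false_negatives = 160  # at-risk children the model missed
true_negatives = 790   # children correctly left unflagged

total = true_positives + false_positives + false_negatives + true_negatives

accuracy = (true_positives + true_negatives) / total             # 0.83
precision = true_positives / (true_positives + false_positives)  # 0.80
recall = true_positives / (true_positives + false_negatives)     # 0.20

print(f"Accuracy:  {accuracy:.0%}")   # "83% success" if you count all correct calls
print(f"Precision: {precision:.0%}")  # "80% success" if you only count correct flags
print(f"Recall:    {recall:.0%}")     # yet only 20% of at-risk children were found
```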

The article says that the company responsible for creating this predictive model is called Xantura, and that its pilot inside the child protection sector has been running since 2015. The software, which comes at an eye-watering cost of $1.25 million, was launched in January 2015.

According to the article, councils using Xantura's software have been slow to trust the model, which is not completely effective or accurate, though social work teams may be more concerned about the threat the software poses to jobs inside the sector. The fact that councils will have to pay for the software out of their own budgets has made uptake of the model slow, too.

Xantura is adamant that their software will save councils money in the long run, and some local authorities are getting on board as a result. Our favourite quote from the article comes from Steve Liddicott, Interim Assistant Director of Children and Young People’s Services at the London Borough of Hackney, who says:

“You actually don’t have to prevent that many children from going into care to make quite a significant saving.”

According to Apolitical, Xantura’s “Early Help Profiling System” (EHPS) uses stats from multiple agencies, including information about school attendance and attainment, families’ housing situations and economic indicators. The model then takes those stats and turns them into ‘risk profiles’ for each family.
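Xantura has not published how EHPS actually works, so purely as a hypothetical illustration, the Python sketch below shows the general shape of such a system: a handful of invented multi-agency indicators, invented weights, and a single number out the other end. Every feature name, weight and threshold here is made up.

```python
# A minimal, hypothetical sketch of how multi-agency indicators might be
# combined into a single family "risk score". Xantura has not published how
# EHPS actually works; the features and weights below are invented purely to
# illustrate the general approach described in the article.

from dataclasses import dataclass

@dataclass
class FamilyRecord:
    school_absence_rate: float       # 0.0-1.0, from education data
    attainment_below_expected: bool  # from school attainment data
    housing_in_arrears: bool         # from housing records
    household_on_benefits: bool      # crude economic indicator

# Invented weights; a real system would derive these from historical cases.
WEIGHTS = {
    "school_absence_rate": 3.0,
    "attainment_below_expected": 1.5,
    "housing_in_arrears": 2.0,
    "household_on_benefits": 1.0,
}

def risk_score(family: FamilyRecord) -> float:
    score = WEIGHTS["school_absence_rate"] * family.school_absence_rate
    score += WEIGHTS["attainment_below_expected"] * family.attainment_below_expected
    score += WEIGHTS["housing_in_arrears"] * family.housing_in_arrears
    score += WEIGHTS["household_on_benefits"] * family.household_on_benefits
    return score

family = FamilyRecord(0.25, True, True, False)
print(risk_score(family))  # 4.25 - flagged if above some arbitrary threshold
```

However the real model is built, the end result is the same: each family’s circumstances are flattened into a single score, and the decision to flag them rests on where the threshold is drawn.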

But here’s the thing. We know that doesn’t work. Remember the Troubled Families Programme? The one where a whistleblower blew the lid on the project, exposing its fraudulent activity, which included using stale data to assess families and massaging the figures to engineer outcomes, so that the team involved could cover up the programme’s failure and make it look like a success? They used big data, too.

The software the team used was Clear Core, and it was developed by a company called Infoshare. And that was as long ago as 2013.

While we are not against the use of technology when it is accurate and effective, the government’s drive to use predictive analytics inside the child protection sector, in the knowledge that these models do not deliver robust results, makes the software’s predictions highly dangerous and leaves the government vulnerable to costly litigation.

Is the answer better technology, or can big data never capture the human condition fully enough to make accurate predictions? We don’t know, but for those of you interested in this area, we’ve added some more information below:

Children At Risk: How different are children on Child Abuse Registers?

This is a piece of research from 1991, produced by Mark Campbell. It looks at whether a checklist of 118 items was able to identify children at risk of abuse and neglect. The checklist was applied to 25 different families, who were attending local authority centres at the time. Of those families, nine had children on the local child abuse register. The checklist scores of the families on the register were then compared with those of the families who were not, which served as the control group.

The research discovered something fascinating: there was little difference between the two groups on the factors studied (a rough sketch of this kind of comparison appears after the list below). Mark concluded that this could have been down to one of two reasons:

  1. Either there was little real difference between the characteristics of abusing and non-abusing families; or
  2. The process of registration was controlled by a series of events which were not solely related to the characteristics of the families themselves.
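
As promised above, here is a rough sketch of the kind of comparison the study made, using invented checklist scores for nine “registered” and sixteen “comparison” families (the real 1991 data is not reproduced here). It assumes SciPy is available for the significance test.

```python
# Illustrative only: invented checklist scores for the two groups Campbell
# compared. None of these numbers come from the 1991 study.
from statistics import mean
from scipy.stats import mannwhitneyu  # assumes SciPy is installed

registered = [42, 38, 51, 45, 40, 47, 39, 44, 50]          # 9 families on the register
comparison = [41, 36, 48, 44, 43, 39, 46, 37, 45,
              40, 42, 38, 49, 35, 44, 41]                   # 16 families not on it

print(f"Mean score (registered): {mean(registered):.1f}")   # 44.0
print(f"Mean score (comparison): {mean(comparison):.1f}")   # 41.8

stat, p_value = mannwhitneyu(registered, comparison, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_value:.2f}")  # well above 0.05: no clear difference
```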

This research deserves to be included in the discussion, as it represents the beginnings of data collection for predictive purposes in this area.

We have written before about the risks involved in using big data and technology as it stands today. In April 2015, we shared our concerns over New Zealand’s plans to use data to try to create predictive models for child abuse. The lack of sophistication in these processes at the moment means that families could be exposed to predictive models that stereotype individuals and create unhelpful biases, which could lead to large-scale errors. We also mentioned another article, published by WIRED in January 2018 and called “A Child Abuse Prediction Model Fails Poor Families”, which is noteworthy for the way it describes how this kind of software can automate inequality.

As always, these fights are never fair or clean. In an ideal world, debate around the rights and wrongs of predictive machinery inside the child protection sector would be conducted only by those who are truly independent, but it’s easy to spot the conflicts of interest if you look hard enough. We’ll let you decide about this lot.
