The news in September that councils are set to use people’s data to detect child abuse has caused a stir, but councils in London have been piloting predictive analysis models across their child protection services for the last three years. And it’s not just councils – the NHS and the Department for Work and Pensions have been using predictive technology for some time.

A September 2017 article in Apolitical describes how councils in London have been applying data analytics since 2015 to try to identify children at risk of harm. It also suggests that the predictive model used in one council had an 80% success rate, though it’s not clear from the article exactly how success rates were measured. What’s interesting about this article is that it highlights the financial advantages to councils of using data in this way. The article tells us:

“Councils are expected to save over $910,000 for early targeted interventions, $160,000 by replacing human-conducted screenings with an automated system, and $193,000 for improving access to multi-agency data.”

The article says that the company responsible for creating this predictive model is Xantura, whose pilot inside the child protection sector has been running since its launch in January 2015. The cost of the software is an eye-watering $1.25 million.

Councils using Xantura’s software have been slow to trust the model, which is not completely effective or accurate, though social work teams may be more concerned about the threat the software poses to jobs inside the sector. That councils will have to pay for the software out of their own budgets is perhaps another reason uptake of the model has been slow.

Xantura is adamant that its software will save councils money in the long run, and some local authorities are getting on board as a result. According to Apolitical, Xantura’s “Early Help Profiling System” (EHPS) uses data from multiple agencies, including information about school attendance and educational attainment, families’ housing situations, and economic indicators. The model then takes those statistics and turns them into risk profiles for each family.
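
Apolitical does not spell out how the EHPS combines these inputs, but the general technique – weighting indicators drawn from different agencies into a single score per family – can be sketched roughly as below. The indicator names, weights and example figures are invented for illustration; this is not Xantura’s actual model.

```python
from dataclasses import dataclass

@dataclass
class FamilyRecord:
    school_attendance: float   # proportion of school sessions attended, 0-1
    attainment_flag: bool      # below expected educational attainment
    housing_arrears: bool      # household in rent arrears
    benefit_claims: int        # number of active benefit claims

# Invented weights; a real system would derive these from historical data.
WEIGHTS = {"attendance": 40, "attainment": 20, "housing": 25, "benefits": 5}

def risk_score(f: FamilyRecord) -> float:
    """Combine multi-agency indicators into a single (hypothetical) risk score."""
    score = (1 - f.school_attendance) * WEIGHTS["attendance"]
    score += WEIGHTS["attainment"] if f.attainment_flag else 0
    score += WEIGHTS["housing"] if f.housing_arrears else 0
    score += min(f.benefit_claims, 3) * WEIGHTS["benefits"]
    return score

family = FamilyRecord(school_attendance=0.82, attainment_flag=True,
                      housing_arrears=False, benefit_claims=2)
print(risk_score(family))  # families above some threshold would be flagged for early help
```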

The now infamous Troubled Families Programme hides a Big Data secret of its own. The programme made the news after a whistleblower exposed the project’s fraudulent activity, which included using stale data to assess families. Members of the team also massaged the figures to engineer outcomes, so that they could cover up the programme’s failure and make it look like a success. What nobody mentioned, or knew at the time, was that the programme used big data, too. The software it used was called ClearCore, developed by the UK-based company Infoshare. That was as long ago as 2013.

Technology offers benefits when it is accurate and effective, but the government’s drive to use predictive analytics inside the child protection sector, knowing these models do not deliver robust results, makes the software’s predictions highly dangerous and leaves the government vulnerable to costly litigation.

Previous attempts at using this kind of data to try to predict child abuse took place as early as 1991. A paper by researcher Mark Campbell, published in The British Journal of Social Work in June 1991, looks at an experiment carried out within one local authority. The paper was entitled “Children At Risk: How different are children on Child Abuse Registers?”

The experiment used a checklist with 118 items on it, created to see if it could identify children at risk of abuse and neglect. The checklist was applied to 25 different families who were attending local authority centres at the time. Of those families, nine had children on the local child abuse register. The checklist scores of the families on the register were compared with those of the families who were not, who acted as a control group.

The research discovered something fascinating: there was little difference between the two groups in the factors studied. The report offered two possible explanations for the finding: either there was little real difference between the characteristics of abusing and non-abusing families, or the registration process was controlled by events which were not solely related to the characteristics of the families involved in the study.
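
The paper’s raw data are not reproduced here, but the kind of comparison the study made – checklist totals for registered families set against a control group – can be illustrated with hypothetical figures:

```python
from statistics import mean, stdev

# Hypothetical 118-item checklist totals, one per family; these numbers are
# invented and do not come from Campbell's paper.
register_group = [41, 38, 45, 36, 40, 43, 39, 37, 44]          # 9 families on the register
control_group = [39, 35, 42, 37, 40, 34, 41, 38, 36, 43,
                 39, 35, 40, 37, 42, 38]                        # 16 families not on the register

def summarise(name, scores):
    print(f"{name}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")

summarise("Register group", register_group)
summarise("Control group", control_group)

# If the two means sit close together relative to the spread, the checklist has
# little power to separate registered from non-registered families - which is
# essentially what Campbell's study reported.
```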

Other countries have also been exploring ways in which Big Data could be used to detect child abuse. In 2015, a call centre dedicated to child welfare concerns in New Zealand decided to collate data on the families who called in, to see if it could spot patterns that might predict child abuse before it happened. The pilot used 131 indicators, including the ages of mothers on benefits, the dates of their first benefit payments and the types of family units they came from.

Although predictive models like these claim success rates of anywhere between 76% and 80%, families risk being exposed to analytics that stereotype individuals and create unhelpful biases. One year after New Zealand’s pilot at the call centre, another call centre, this time in America, tried a similar experiment. The Allegheny County Office of Children, Youth and Families (CYF) child neglect and abuse hotline in Pennsylvania collected information from across the county, also drawn from 131 indicators, to see if the Office could detect child abuse before it happened or escalated. The predictive analysis model it used was called the Allegheny Family Screening Tool.

What the experiment revealed was unsettling. The problem, initially raised by a 2010 study of racial disproportionality in Allegheny County CYF, was that the vast majority of disproportionality in the county’s child welfare services stemmed from referral bias rather than screening bias. The research confirmed that people reporting child abuse called hotlines about black and biracial families three and a half times more often than they called to report on white families.
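
To see why referral bias matters for any predictive tool trained on hotline records, consider a deliberately simplified, hypothetical calculation in which two groups have identical underlying risk but one is referred three and a half times as often:

```python
# Invented figures throughout; only the 3.5x referral ratio comes from the
# Allegheny research described above.
population = 10_000                                   # hypothetical families per group
referral_rate = {"group_a": 0.035, "group_b": 0.010}  # group_a referred 3.5x as often

for group, rate in referral_rate.items():
    referrals = population * rate
    print(f"{group}: {referrals:.0f} referrals out of {population:,} families")

# Even with identical real risk, group_a generates 3.5x as many referral records.
# A model trained on those records inherits the imbalance before any screening
# decision is ever made.
```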

The implications of the findings for child protection services in the UK, and for the predictive models being used inside the sector, are enormous. Tensions between social workers and single mothers are at an all-time high, with research produced by groups like Legal Action For Women suggesting that social workers already have an inbuilt bias against poor single mothers, who in turn feel they are being targeted because of their economic circumstances. Where bias and prejudice may already be built into a system, and data offers part of a picture but may not be able to factor in nuance, predictive models as they stand today may just be automating inequality.

This article is adapted from a post Researching Reform published in September 2018.
