Big data tools used in children’s social care are to be reviewed by Oxford University’s Rees Centre and the Alan Turing Institute.

The review will offer recommendations for the ethical use of machine learning in children’s social care and propose solutions for some of AI’s current weaknesses, including bias, discrimination, inaccuracy and poor data quality.

The child protection sector already applies machine learning in a variety of ways, including software designed to detect child abuse, a practice that has raised concern among child welfare reformers, family law experts and social workers.

Several big data pilots inside the sector have been criticised for failing to interpret data effectively or offer credible methods for identifying child abuse and neglect.

The Troubled Families programme, launched in 2012, was shut down after the Public Accounts Committee condemned it in 2016 as ineffective and unethical.

Analysts on the programme were also exposed by a whistleblower in 2016 for fraudulently attempting to manipulate the software’s data after it failed to achieve the desired results. The Troubled Families programme used big data software called ClearCore to evaluate families around the country. Despite its terrible track record, the programme was revived in February.

Since 2015, councils across the country have been piloting software produced by a firm called Xantura, which claims to identify children at risk of harm. While the councils report a success rate of between 60% and 80% for the software, there is no published information on how that rate has been measured.

Feedback from some councils suggests that social workers using Xantura’s software have been slow to trust the model because it is not completely effective or accurate.

The review into the ethics of machine learning has been commissioned by the What Works Centre, a government-funded project which currently assists the new social care watchdog, Social Work England.

The press release announcing the review includes comments from those involved in the project.

Dr Lisa Holmes, Director, The Rees Centre (Department of Education, University of Oxford) said:

“I am pleased to be working on this project with colleagues at The Alan Turing Institute. We recognise the need for transparency in this area and welcome the opportunity to interrogate the issues related to the appropriateness of machine learning in children’s social care”.

Dr David Leslie, Ethics Fellow, The Alan Turing Institute added:

“This joint effort to assess prospects for the responsible design and deployment of machine learning systems in the children’s social care sector couldn’t be more timely or critical, given that these systems are currently in use across the UK. Our work will include engagement with researchers, practitioners, and other stakeholders to ensure we deliver results that will encourage well-informed and ethical decisions [to] be made about the application of these powerful technologies in real world scenarios.”

Explaining the current position on AI in the child welfare sector, Vicky Clayton, Senior Researcher at the What Works Centre for Children’s Social Care, said:

“Machine learning is already being used in the children’s social care sector in the UK but without a solid ethical framework to help practitioners make decisions about when and importantly when not to use machine learning. We are excited that The Rees Centre and The Alan Turing Institute will be combining their expertise to provide a map across what can be rocky terrain.”

The researchers will consult experts and stakeholders to produce the review, which is set to be published in autumn 2019.
