Colin Waterman considers the lessons learned from an independent evaluation of treatment foster care for adolescents in the UK
When the UK government brought Multidimensional Treatment Foster Care for Adolescents (MTFC-A), now known as Treatment Foster Care Oregon (TFCO-A), to England in 2003, the aim was to add to the range of services for troubled and challenging adolescents within the “looked-after children’s” care system. Outcomes for this population had long been accepted as being poor and it was hoped that an evidence-based intervention from the USA would have a positive impact.
There was a strong desire to evaluate the implementation of TFCO-A in England and the universities of Manchester and York were commissioned to research the efficacy of the intervention. The Care Placement Evaluation (CaPE) team attempted a randomised controlled trial (RCT), which was supplemented by a larger quasi-experimental study. A summary of the findings was published in the British Journal of Psychiatry in December 2013 and concluded that there was no evidence that the intervention had better outcomes than treatment as usual.
This dealt a significant blow to a programme that has otherwise shown great promise for children and young people presenting with a conduct disorder and high levels of anti-social behaviour. This article looks at the study, the timing of the research, and what might helpfully be learned from the experience.
Was the RCT element of the study successful?
In 2014, several of the authors from the CaPE team published a paper in the British Journal of Social Work. It provided a thorough critique of their own research methodology, including an explanation of why it was so difficult to assemble sufficient numbers of children for the RCT component of their evaluation.
In total, 18 local authorities participated in the study (2005-2008) with a further five being recruited to boost the comparison group. Only six provided young people for the RCT arm of the study. The researchers hoped for 130 young people – they got 34. Of these, 12 received the intervention and 13 received treatment as usual (TAU).
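The shortfall against the target of 130 can be put into perspective with a standard two-group power calculation. The figures below are illustrative assumptions only (a medium standardised effect size of 0.5, 5% two-sided significance, 80% power), not the CaPE team's actual calculation:

```python
# Illustrative two-group sample-size calculation using the normal
# approximation: n per group = 2 * ((z_alpha/2 + z_beta) / d) ** 2
from math import ceil
from statistics import NormalDist

d = 0.5             # assumed standardised effect size (medium)
alpha = 0.05        # two-sided significance level
power = 0.80        # desired statistical power

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # approx. 1.96
z_b = NormalDist().inv_cdf(power)           # approx. 0.84

n_per_group = ceil(2 * ((z_a + z_b) / d) ** 2)
print(f"{n_per_group} per group, {2 * n_per_group} in total")
# → 63 per group, 126 in total
```

Under these assumptions the required total is close to the 130 the researchers hoped for; with only 25 young people actually receiving an allocated condition, the trial had no realistic prospect of detecting an effect of this size.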
This reflects a significant problem within social care research. What is regarded as the gold standard of research methodology in other fields is far harder to carry out in the area of looked-after children. Many senior managers thought it unethical to run the risk of a child with high levels of need not being offered a placement in TFCO-A through the randomising process – despite the fact that the intervention had not yet been proven effective in the UK.
Randomisation did achieve its goal of eliminating significant baseline differences between the intervention and TAU groups. However, the numbers were so small as to make the extrapolation of any key findings to larger populations almost impossible.
The observational arm of the study was much more successful in recruiting higher numbers of children – 185 in total. Including the 34 in the RCT arm, this totalled 219. However, the two groups were quite different and the authors accept that the trimming of the samples using propensity scoring did not adequately mitigate all of these differences.
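To illustrate what trimming on propensity scores does (and what it cannot do), the sketch below uses entirely synthetic data: the scores, group sizes, and covariate are invented and bear no relation to the CaPE samples. Cases outside the region where the two groups' scores overlap are discarded, yet imbalance on a covariate unrelated to the score survives the trim:

```python
# Synthetic illustration of common-support trimming on propensity scores.
# All numbers are invented; this is not the CaPE analysis.
import random

random.seed(1)

# (propensity score, covariate such as age) for each hypothetical case
intervention = [(random.betavariate(4, 2), random.gauss(14.5, 1.0)) for _ in range(50)]
comparison = [(random.betavariate(2, 4), random.gauss(13.0, 1.5)) for _ in range(150)]

# Region of common support: scores that occur within both groups' ranges
lo = max(min(s for s, _ in intervention), min(s for s, _ in comparison))
hi = min(max(s for s, _ in intervention), max(s for s, _ in comparison))

def trim(group):
    """Keep only cases whose propensity score lies inside the overlap."""
    return [(s, x) for s, x in group if lo <= s <= hi]

def mean(values):
    return sum(values) / len(values)

trimmed_int, trimmed_cmp = trim(intervention), trim(comparison)

gap_before = mean([x for _, x in intervention]) - mean([x for _, x in comparison])
gap_after = mean([x for _, x in trimmed_int]) - mean([x for _, x in trimmed_cmp])

print(f"covariate gap before trimming: {gap_before:.2f}")
print(f"covariate gap after trimming:  {gap_after:.2f}")
```

Trimming only balances what feeds into the score; here the covariate was generated independently of it, so the gap persists. This is a deliberately simplified picture of why trimming alone can leave residual differences between non-randomised groups.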
The difficulty in recruiting sufficient numbers to the RCT arm and the noticeable differences between the intervention and TAU groups do not totally invalidate the findings of the research. However, caution must be exercised when interpreting the results of a study conducted in real-world settings with acknowledged design limitations. The RCT element of the CaPE study was not a success, even though the wider observational and qualitative aspects of the research contributed some helpful ideas about implementing TFCO-A in the UK.
A question of timing
The scale of the MTFC-A implementation was large, with 24 sites being established across England. With such a significant financial and professional investment, it seemed appropriate to gather service outcome data and to expose the national implementation of the model to an independent evaluation.
While it is tempting – even intuitive – to do this in the early days of implementing something new, it is worth waiting until the sector has an agreed level of experience and competence in the chosen model.
It has already been mentioned that most local authorities struggled with the idea of participating in the RCT. This lends weight to the idea that a 12-18 month lead-in may well be required within a social care environment to adequately prepare professionals and intended recipients for the intervention and the concept of randomisation. Similarly, many teams struggled to recruit sufficient numbers of foster carers. The recommended number is 8-10, but most had between one and four, which significantly reduced the number of placements they could offer. Having too few children, and taking in some who were poorly matched to the intervention (20% were under its age limit), meant that clinical teams simply did not get enough of the right kind of practice in delivering the model.
Rather than seeking to evaluate a new intervention in the early stages of implementation, it is arguably better to wait until a number of sites have proven competence. In 2008, no DfE-funded MTFC-A site was certified by the model developers. Nationally, there are now five, and it is these sites that would be the most appropriate focus for a formal evaluation of the model.
Levels of model adherence were not accurately assessed by the independent evaluators. Global Fidelity Rating (GFR) is a tool designed by the National Implementation Service using key characteristics from the model developers’ site accreditation criteria to ascertain a site’s level of functioning. It was only after a retrospective analysis of the participating sites’ GFR that a number of interesting patterns were detected.
For example, one site that contributed data on 16 young people to the study – 11 of whom received the intervention – had a GFR in the “moderately functioning” banding during two of the three years of the study. Given that they then progressed to the “generally well functioning” and then “superior functioning” categories between 2007 and 2012, it can be surmised that they went through a period of natural development. Not surprisingly, they became measurably better at delivering MTFC-A over a number of years. They didn’t start off being highly model-adherent – it was something that was achieved in time.
The same was true for some of the other early cohorts of sites that set up the treatment foster care programme. Even though all received ongoing consultation and support from the National Implementation Team, GFRs were generally higher for the sites that came later, and the sites that eventually achieved accreditation generally maintained consistently higher levels of model adherence throughout.
MTFC-A, or TFCO-A as it is now known, has suffered a negative impact from the CaPE study. Those who have yet to read the full research report are likely to be swayed by the shorter article that appeared in the British Journal of Psychiatry in December 2013, in which the authors suggested the intervention was not significantly better than treatment as usual, except for those with significant anti-social behaviour and/or a conduct disorder. However, the RCT arm of the study failed to generate the data that was expected. The study came too early in the implementation of the model in the UK, and there were insufficient checks on levels of model adherence at each site to say confidently who was (and was not) delivering the intervention with high levels of competence and model fidelity. None of the sites contributing to the evaluation were accredited in the model. The experience has provided useful learning to take forward into future evaluations within social care.
About the author
Colin Waterman is the Director of the National Implementation Service (NIS) and is based in Manchester, England. He has been with the team since July 2007, firstly as a Site Consultant for the Multidimensional Treatment Foster Care (MTFC) programme, then as the Project Manager and, since April 2014, as the Director.
References
Bergstrom M and Hojman L (2015), Is Multidimensional Treatment Foster Care (MTFC) More Effective than Treatment as Usual in a Three-year Follow Up? Results from MTFC in a Swedish Setting. European Journal of Social Work, DOI: 10.1080/13691457.2015.1030361.
Biehal N et al (2012), The Care Placements Evaluation (CaPE) of Multidimensional Treatment Foster Care for Adolescents (MTFC-A), Research Report DfE-RR194.
Dixon J et al (2014), Trials and Tribulations: Challenges and Prospects for Randomised Controlled Trials of Social Work with Children. British Journal of Social Work, 44(6), 1563–1581.
Green JM et al (2013), Multidimensional Treatment Foster Care for Adolescents in English Care: Randomised Trial and Observational Cohort Evaluation. The British Journal of Psychiatry, 204(3), 214–221.
Harold GT and DeGarmo DS (2014), Concerns Regarding an Evaluation of MTFC-A for Adolescents in English Care. The British Journal of Psychiatry, 205(6), 498.
Macdonald G and Turner W (2008), Treatment Foster Care for Improving Outcomes in Children and Young People (Review), The Cochrane Collaboration, Wiley, Chichester.