Published daily by the Lowy Institute

The alarming truth about Countering Violent Extremism programs: We don’t know what works

There is no rigorous evaluation or evidence to demonstrate that CVE programs have reduced radicalisation and extremism

Photo: Flickr / Albany Associates
Published 17 Apr 2017

In response to fears of terrorism and home-grown radicalisation in Australia, Countering Violent Extremism (CVE) programs have grown in number and cost. The federal Attorney-General's Department alone is funding more than 40 community-based programs, with many others funded by state and local governments. The vast majority aim to reduce radicalisation and to strengthen social cohesion and community resilience. Regrettably, despite over a decade of funding for CVE programs, no rigorous evaluation or evidence has been produced to demonstrate that these programs have reduced radicalisation and extremism. There is a critical need for rigorous impact evaluation of CVE programs in order to establish their effectiveness and to show whether government investment in such programs would be better spent elsewhere.

The absence of CVE program evaluations is reminiscent of the situation over a decade ago for community-based obesity prevention programs. The problem was addressed to some extent by requiring those obesity programs to have an evaluation framework that could yield evidence on their effectiveness, with funding for evaluation built into the prevention program. A similar approach is urgently needed for CVE programs in Australia.

Yet among Australian and international CVE experts, there is often reluctance to engage in rigorous impact evaluation because of supposed inherent barriers. For example, some argue that the concepts of extremism and radicalisation are contested, politically loaded and ultimately unclear, which impedes their precise operationalisation and measurement. Others argue that practical problems such as the social desirability bias in evaluation surveys, the difficulties in recruiting control groups and the presence of contextual factors that affect radicalisation and social cohesion (such as political events or structural inequalities) make CVE impact evaluation too difficult.

But those problems are not very different from the ones that any other social intervention has to deal with. Prevention programs that aim to reduce addiction, obesity, or gang and family violence must contend with measurement issues, stigma, politically loaded definitions, confounding variables and many other ethical and practical challenges. We cannot hide behind complexity.

Based on best practice in public health, I propose a list of evaluation components to lift the quality of impact assessment of CVE programs in Australia.

Firstly, impact evaluations should always focus on a list of concrete and measurable outcomes defined by the program stakeholders, to be measured in the target population before and after the intervention. This is a feasible evaluation practice that has also been proposed by American CVE experts. Examples of such outcomes include material or psychological resources, such as employment, social capital or self-esteem.

Secondly, evaluations should always aim to include comparison groups. I am aware of the difficulties of engaging communities and other target groups to be comparison populations. However, without including in the design a non-intervention comparison (that is, a target group that has not, or not yet, been delivered the program), it is not possible to know whether any change in the target population is due to the program or to other intervening factors (e.g. a new war in the Middle East that could increase pro-jihadist sentiment). Ideally, comparison groups are randomly selected from the same population, but they can also be selected from matched settings, such as regionally representative samples or other population data.
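As a minimal illustration of why a comparison group matters, the pre/post-with-comparison design described above can be expressed as a simple difference-in-differences calculation. The figures below are entirely hypothetical survey scores, not data from any actual CVE evaluation:

```python
# Hypothetical pre/post survey scores (e.g. on a social-cohesion scale)
# for an intervention group and a matched comparison group.
intervention_pre, intervention_post = 3.1, 3.8
comparison_pre, comparison_post = 3.0, 3.3

# A naive before/after change conflates the program's effect with
# background factors (e.g. political events) affecting everyone.
naive_change = intervention_post - intervention_pre

# Difference-in-differences: subtract the change observed in the
# comparison group, which captures those background factors.
background_change = comparison_post - comparison_pre
program_effect = naive_change - background_change

print(round(naive_change, 1))       # 0.7 apparent improvement
print(round(background_change, 1))  # 0.3 of it happened anyway
print(round(program_effect, 1))     # 0.4 attributable to the program
```

Without the comparison group, the full 0.7-point shift would wrongly be credited to the program; the sketch shows how the non-intervention group isolates the contribution of intervening factors.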

Thirdly, funding agencies should provide funding for evaluation as a key part of CVE programs, and the assessors should be independent of both the funding agency and the service provider. Moreover, donors should require that evaluation (i.e. data collection for evaluation purposes and staged delivery of the intervention) be embedded in the program design from its inception.

Finally, I argue that problems like the social desirability bias in evaluation surveys should not be a barrier to evaluation. Scholars have proposed practical ways to reduce the impact of those problems, and CVE experts should engage more systematically in a constructive dialogue with other fields of knowledge. High-quality research conducted by American and European scholars (see for example the excellent work of Paluck, Feddes and colleagues, and Williams and colleagues) demonstrates that rigorous evaluations of community-based CVE programs are doable, and Australian researchers should use those examples as the gold standard.

In conclusion, I argue that community-based CVE programs should move towards high-quality impact assessment because ultimately this is the only way to know what works, what doesn't work, and whether there are unintended consequences in the target population. The only way to develop evaluation capacity in the CVE field is to provide dedicated funding and to attract collaboration and expertise from disciplines, such as public health, that already meet high standards of program evaluation. Otherwise, costly programs with weak impact evaluations risk wasting public money.



