There is no single ‘right’ methodology for evaluating progress in public interest projects such as legislative reform, social change campaigns, and community services. There are core principles but each project needs special attention to its unique dimensions, limitations and potential, and the evaluator should beware of letting methodology obscure the intent of the program designers.
In this posting we highlight lessons for evaluating in the public interest, based on three projects:
- Full scope program evaluation of the Deep Sea Conservation Coalition’s international campaign to ban deep-sea ocean trawling (Pew Charitable Trusts, 2008)
- Internal evaluation of the global ‘TckTckTck’ initiative, which sought to spark a powerful citizens’ movement in favour of a new climate treaty at the UN’s 2009 climate summit in Copenhagen (Oak Foundation/Global Campaign for Climate Action, 2010)
- Five-year review of Ontario’s unique regulatory regime for paralegals and paralegal services (Law Society of Upper Canada, 2012). The full report was submitted to the provincial Attorney-General.
The desired impacts of these projects ranged from increasing access to justice for ordinary people (The Law Society of Upper Canada) to changing the global rules for one of the world’s oldest industries (Deep Sea Conservation Coalition) and one of its most dangerous (Global Campaign for Climate Action, or ‘tcktcktck’).
To varying degrees, these evaluations had to model and measure stakeholder engagement, policy reform, and the potential for long-term social and economic impacts in highly complex environments. For each case, we offer ‘lessons for evaluators’ arising from the application of various evaluation techniques to each circumstance, and welcome your comments or feedback based on your practice.
CASE 1: Deep Sea Conservation Coalition (2008)
From 2004 to 2006, The Pew Charitable Trusts (through The Pew Environment Group) had funded a campaign for a moratorium on destructive deep sea bottom-trawling by the world’s high seas fishing fleets. In a climactic UN meeting in late 2006 the campaign failed by only a few crucial votes, although it had garnered support from many fishing nations. An evaluation was undertaken to understand the reasons for failure on the main objective and what, if anything, had been achieved.
We (1) interviewed Pew staff, members of the project’s core unit, diplomats, politicians and policy makers who participated in the U.N. process, and knowledgeable observers of international marine issues; (2) reviewed relevant project products and documentation; and (3) analyzed external documentary evidence, including media coverage, background papers produced by the U.N. and key countries in the debate, and websites and publications from allied organizations not members of the coalition.
We concluded that:
- The campaign had been pivotal in advancing the issue of bottom trawling on the international fisheries agenda. As a direct consequence, in December 2006 the U.N. adopted Resolution 61/105, which the policy experts and diplomats we consulted viewed as a significant step forward in protecting marine biodiversity.
- According to experienced diplomats, in the period following the UN’s decision, the speed with which some countries moved to regulate bottom fishing was “surprising” and “unprecedented.”
- Evidence also suggested that the project’s efforts led to (1) Japan working with the United States, Russia and South Korea to establish a new North Pacific regional fisheries regime; and (2) the countries responsible for managing fisheries in the South Pacific taking interim measures on bottom trawling.
- The project was also credited by knowledgeable observers with helping to increase the capacity of nonprofit organizations to monitor the implementation of Resolution 61/105 and to carry out other international fisheries campaigns. The work attracted a larger international marine advocacy network, from eight original coalition members at the start of the campaign to more than sixty at the time of the evaluation.
Lessons for Evaluators:
This case illustrates the potential to reach beyond the ‘inner circle’ of project participants and secure cooperation even, at times, from a project’s opposition – in this case, we conducted over two dozen lengthy interviews with career diplomats representing governments. It also shows, we believe, the importance of investigating not only what has been achieved compared to stated objectives – the essential function of standard evaluation – but also the achievements that were not intended, or that could have been achieved had they been identified soon enough. This approach provides greater strategic value for forward planning.
CASE 2: Global Campaign for Climate Action, TckTckTck Campaign (2010)
In the immediate aftermath of the UN’s climate summit in Copenhagen in 2009, the funders, staff and partners of the Global Campaign for Climate Action (GCCA) asked whether there was any point in carrying on with its core mission — to activate civic organizations in developing countries (the global South) and people beyond the environmental movement to join together in pressing for a new and stronger post-Kyoto climate treaty.
The GCCA was a complex alliance, with numerous semi-autonomous teams overlapping around the world. It was launched only eleven months before the UN Summit, with a sophisticated critical path to define key milestones in its global strategy. Contextual factors evolved and membership grew rapidly. No surprise then that there had been confusion as well as optimism and solidarity among participants during the period being evaluated.
We had only ten weeks to develop and conduct the evaluation and report findings to the GCCA’s staff leadership and Board of Directors. We did so by taking full advantage of our opinion research capabilities:
- Detailed background document review, including the GCCA’s communications, political, and organizational plans and reports;
- Twenty-five key informant interviews, in English, French and Spanish;
- Six overlapping online ‘bulletin board’ focus groups for the various internal constituencies from around the world (policy committee, outreach committee, etc.);
- An international online survey of 300+ member organizations.
We found that although the GCCA effort had not achieved its primary objectives, it had managed to build a new and vibrant profile for the climate movement around the world, engage more young people in key countries, put new pressure on some governments, and demonstrate a workable model of North-South partnership for environmental justice.
Among our main recommendations was that GCCA adopt new objectives – echoing the views of many participants – that were not anchored so firmly to the achievement of a new international treaty. The Board did just that, following discussion of the evaluation findings.
Lessons for Evaluators:
This case example illustrates that very rapid data collection and analysis can support effective and actionable recommendations. It also shows that an assessment, even if it is primarily done with internal stakeholders, can provide a strong foundation for a new consensus to emerge from what had been significant disagreements.
We feel that this case also highlighted the importance of the evaluators having ‘Organizational Intelligence’—knowing how to accentuate positive judgments without overlooking the underlying concerns of Board members, partners, and staff, and how to work with the leadership to find a new basis of unity that is informed by diverse perspectives rather than hobbled by them.
CASE 3: Law Society of Upper Canada, 5-year Review of Paralegal Regulation in Ontario (2011-12)
This multi-modal evaluative study investigated policy and regulation around a specialized profession (paralegal services) over a five-year period (2007-2012), and was based on the perceptions and knowledge of legal experts, current practitioners, and a challenging-to-recruit segment of the public affected by the issues at hand.
Ontario’s regulator of the legal profession, The Law Society of Upper Canada, assumed responsibility for the regulation of paralegals in 2006 – the first time in Canada that paralegals were regulated. The Law Society was required to conduct a review of the ‘manner of implementation’ and the impacts of the new regulations on the public and the profession, five years after they went into effect on May 1, 2007.
Stratcom designed an evaluative research project that included a detailed contextual scan; focus groups and depth interviews with experts, policy-makers, and legal practitioners; and online surveys of paralegals and members of the public who had used paralegal services in the past two years.
In addition to analyzing and reporting on paralegals’ impressions of the impact of regulation on their profession and on the public, we also investigated:
- Paralegal, lawyer and adjudicator perspectives on the process of establishing regulations, and the extent to which it has provided:
  - Fair and transparent processes for applicants to obtain a paralegal license;
  - Reasonable standards of competence and conduct for paralegal members of the Law Society; and
  - Fair and transparent discipline processes for situations where it is alleged that licensed paralegals have failed to observe Law Society standards.
- Public awareness and knowledge of paralegal regulation and services.
- The public’s experience of using paralegal services and impressions regarding the impact of regulation on individuals seeking and using the services of paralegals.
- The extent to which Law Society regulation has succeeded in establishing:
  - Reasonable standards of competence such that the public has access to competent services;
  - Accessible information about legal services available in Ontario;
  - Fair and transparent complaint procedures for the use of members of the public who have concerns about the conduct or competence of paralegals; and
  - An accessible, transparent discipline process to address breaches of Law Society standards.
We found that:
- Paralegal regulation is viewed as beneficial and effective by the paralegal profession and the public who use paralegal services.
- Both groups view regulation as contributing to increased consumer protection and higher professional standards of paralegal service.
- Both groups are satisfied with most of the aspects of regulation and paralegal services they were asked to assess.
- Although a significant minority of paralegals and those using paralegal services viewed regulation as making ‘no difference’ to some aspects of the provision of paralegal services, a comparatively small number of both groups expressed outright dissatisfaction or identified negative impacts arising from the regulation of paralegal services.
Our conclusions were submitted to the Law Society as interim reports (after each phase) and a final, integrated report which has now been forwarded to the government as part of the Law Society’s review.
Law Society Treasurer Thomas G. Conway said “the results [of the review] show that paralegals are well on their way to establishing a prestigious and well-regarded profession. Paralegal regulation has provided consumer protection while maintaining access to justice.”
Lessons for Evaluators:
Paralegal practitioners are far more aware of and knowledgeable about the regulation of their profession in Ontario than are individuals who use paralegal services. So at first, it seemed this would have to be simply an ‘internal’ review, i.e. one that would have to rely on the knowledge and judgments of those being regulated rather than those for whom the regulations were established. For this reason the research design included extensive input from informants whom we surmised could provide reliable feedback about the public impact of the regulations:
Provincial Adjudicators and Justices of the Peace, who deal regularly with paralegal representation of members of the public;
Lawyers who employ paralegals to deliver services to clients, within their law practice;
Policy experts and former elected officials with specific knowledge of Ontario’s civil tribunals and boards (where paralegals mainly practice).
Despite the inherent limitations, the views of public users of paralegal services were essential to verify the evaluative input from experts, paralegals, lawyers, and adjudicators and to establish baseline data for later studies. The challenge was that locating and recruiting individual users of paralegal services could be time-consuming or, more importantly, unrepresentative if we relied on the most obvious channels for recruitment, such as word-of-mouth in paralegals’ own practices. The approach we adopted was to think creatively about the use of contemporary quantitative research tools, in this case a reputable consumer research panel (Vision Critical). We asked a single screening question of the entire panel in Ontario to identify appropriate individual respondents. This application of market research techniques to public policy evaluation allowed us to add a robust province-wide survey of service users to the traditional depth-interview and focus group methods of investigation.
The Limitations of Program Evaluation
The cases covered in this post used a modified version of a traditional program evaluation, meaning that an external evaluator used documentary and stakeholder data to draw independent conclusions about the projects’ outcomes and impact in the world. This way of evaluating can be very powerful for generating logical and credible findings; but it has the drawback that it often fails to embed new learning, because it is ‘external’ to the proponent’s creative process.
Traditional evaluations of this kind also depend very much on being able to generate a complete and transparent (to ourselves and our clients) evaluative framework before starting to test perceptions and ask questions of informants. Unless the logic of the intended outcomes and impacts, and their relationship to strategies and resources can be described in ordinary language, it is very challenging to generate insights and recommendations for action using ‘traditional’ program evaluation methods.
In further posts, we will discuss alternative methods for evaluating projects with fuzzy outcomes, or projects that are just starting up or still in progress.
We’d be interested to hear your feedback and what your experiences have taught you about evaluating advocacy and social change projects.