Atlantic’s Approach to Evaluation: What Is Important to Learn, and How Do We Put It to Use?
Gara LaMarche
When I was named President of Atlantic last year, I doubt that a rousing chorus of cheers went up in the offices of the American Evaluation Association. Atlantic takes evaluation very seriously, but in my philanthropic and activist life before coming here I didn’t have much experience with it, and indeed I was and remain skeptical about some of the metrics mania that, in the last few years, has overtaken many donors always looking for the next hot thing.
But in the ten months I’ve been at Atlantic, I have had the opportunity both to learn a lot and to reflect on what I’ve learned. We’re one of a relatively small number of foundations – Annie E. Casey, the California Endowment, and the Robert Wood Johnson Foundation are a few others – with an in-house staff and budget devoted to what we call “Strategic Learning and Evaluation.”
Evaluation in some form, however, is something every organisation of any size has to think about and plan for. But I would like to redefine the term a bit. Whether a grantmaker is evaluating its grants or a non-profit is evaluating its own impact, evaluation means, most simply, to learn. That’s why the first two letters of Atlantic’s SLAE acronym stand for “strategic learning.”
We try to make evaluation part of a holistic process, putting the programme officer at its centre, as she or he should be, while also adding the benefits of a team drawn from our finance, communications, SLAE and other departments. These essential sources of expertise – in reviewing budgets, assessing public education strategies, and strengthening organisational development and assessment mechanisms – are involved in key grants from the get-go, not well after the fact, as is too often the case in philanthropy. And they often work closely with grantees and applicants as well as our own programme staff.
This interdisciplinary approach to grantmaking – pursued not just as the grant is being considered but throughout its full life, and in close partnership with the grant recipient – is distinctive and worth studying and replicating. Many things go into our learning process; what we usually call evaluation is only one part of it.
At Atlantic, there are three main ways we go about this.
First, with a number of Atlantic’s direct service grantees, such as the Children’s Aid Society, we work together to combine an internal evaluation system focused on quality with an external evaluation focused on effectiveness. An integrated system designed for continuous improvement provides a range of analyses, letting staff examine trends as well as pursue a number of targeted questions.
For example, when participation patterns among kids enrolled in the Children’s Aid Society Carrera programme were tracked, Carrera learned that boys, and in particular older boys, were more challenging to engage than girls. The programme rethought its strategy for reaching and keeping boys involved, beginning with engaging them at earlier ages. The programme also learned that the attendance of older participants dropped substantially because they needed jobs after school and on weekends. Carrera responded by creating paid internships and integrating outside employment into the jobs component of the programme. Instead of losing these kids because they couldn’t attend, the programme figured out how to stay connected to them. Having good data about why kids weren’t attending allowed them to do this.
A second evaluation approach is to use an “embedded” outside evaluator – someone trusted by both the grantee and the funder, who stays with the initiative over a period of time and provides regular reports that can affect the course of the work in real time. In 2004, Atlantic funded a two-year campaign in five states to restore voting rights for people with felony convictions, and soon afterwards added an evaluation and learning component to the project.
A key question for this evaluation was coordination: among the national groups involved, and between the national and state groups. States play a key role in setting policies that affect disadvantaged people and in informing federal policy. Lessons from this work are of interest across Atlantic’s programmes in the US, since they not only enhance the Right to Vote Campaign but could potentially strengthen other efforts involving state campaigns.
Something similar was done in Northern Ireland, where outside experts reviewed Community Restorative Justice Ireland (CRJI) and Northern Ireland Alternatives (NIA) and their work with local communities. Those grantees work to facilitate and promote non-violent community alternatives to paramilitary punishment attacks and exclusions relating to alleged localised crime and anti-social behaviour.
Case studies are a third form of evaluation, and they are particularly useful in advocacy campaigns. Atlantic’s work in this area includes our Ageing programme’s North Carolina campaign for better jobs for direct care workers serving older adults, and our Rights programme’s support of indigent defense reform in Texas and other states.
In policy change work there is often less documented information than in other fields about which strategies are more successful than others, or which seem to work best in particular contexts or situations. Given this gap, we decided that case studies of “successful” policy efforts could provide useful models for others doing this type of work, and indeed buttress the case for other funders to join us in public policy advocacy, which is often seen as too edgy – or too soft – for many donors to feel comfortable with. Our thinking was that building a library of case studies could provide a kind of “checklist” for assessing strategy and serve as a growing resource for advocates to draw on.
What kinds of things have we learned through the use of case studies? One is that the ability to engage “strange bedfellows” can be an extremely effective tool in certain campaigns. In death penalty work, it can be abolition advocates from law enforcement working with abolition advocates who are family members of victims. In North Carolina, the coalition working for legislative policy to improve long-term care for older adults included a consumer group and long-term care associations – groups that had previously encountered one another only in adversarial roles. By developing cases across areas, we should be able to pull out strategies that are consistently useful as well as those that work well within a particular policy area but perhaps not in others.
These three approaches – evaluation integrated into the work of an organisation, an “embedded” outside evaluator, and retrospective case studies – do not constitute an exhaustive list. In many instances Atlantic also supports intensive data collection – for example, in our efforts to reduce smoking in Viet Nam – aimed at improving quality and enabling a successful programme to grow in scale and reach.
Evaluation and learning have a special resonance for Atlantic, in no small part because we are spending down our assets and going out of business in eight or nine years. We believe it is part of our mission to share learning – and not just internally or with our grantees.
To that end we are about to launch a series of publications to make these lessons accessible to a broad audience: non-profits, funders, social justice organisations and NGOs among them. I look forward to telling you more about those publications in the future, and to making your feedback a key part of our strategic learning process.
This column is adapted from a keynote speech delivered at the Better Business Bureau of Metropolitan New York Symposium: Building Capacity for Maximum Impact, on February 28 at the Baruch College School of Public Affairs.