Pay attention to implementation
The importance of having goals and strategies in philanthropy has been getting a lot of attention of late. But goals and strategies alone are not enough. Once a foundation decides on a goal and develops a strategy for achieving it, the next order of business is to implement that strategy and to monitor progress toward that goal. While foundations tend to spend a good deal of time (and often a good deal of money) developing their strategic plans, implementation often gets short shrift. This is unfortunate, since poor execution of a good strategy is not likely to yield positive results, no matter how many hours and foundation dollars have been devoted to its development. (Of course, good execution of a poor strategy is no recipe for success either.)
One key requirement for successful implementation of a strategy is that it receives the full attention of the CEO and the board. All too often, there seems to be an assumption that, once the foundation’s goals and strategies have been agreed to, implementation is largely a mechanical process that can be left to program staff without much real discussion or follow-up. Indeed, board meetings often focus primarily on “new” business, with little time for discussion of ongoing programs. This tells the staff that program implementation is not really valued. The staff needs to know that the foundation’s leadership truly cares about effective implementation.
Monitor to stay on track
Closely related to effective implementation is the need for careful monitoring, both of individual grants and programs and of the strategy as a whole. Are the grants meeting their specific objectives, and if not, is there anything the foundation can do to help get them back on track? Likewise, are the intermediate benchmarks for the strategy as a whole being met, and if not, are there any adjustments that the foundation needs to make—either to the strategy itself or to its timeframe—in order to achieve the desired goals? Foundations frequently rely on reports from the grantees themselves to monitor the progress of individual grants, but use existing data sources (such as government statistics) and independent evaluators to assess the progress of their overall strategy.
While monitoring can help a foundation determine how well its grants and programs are being implemented, it will not tell the foundation what impact those grants and programs are having or what lessons may be emerging along the way. Answering those questions will require an evaluation, which may be conducted by the foundation itself, by the grantee, or by an independent researcher.
Be clear about the purpose of evaluation
Evaluations vary greatly in terms of their scope, rigor, and cost. They may include everything from an informal qualitative self-evaluation carried out by the grantee to a randomized clinical trial conducted by an experienced independent evaluation researcher. In general, the more rigorous and credible the evaluation, the greater the cost and the greater the level of effort required. A foundation should think carefully about why it wants to evaluate a particular program before deciding what kind of evaluation to support, and it should make that decision at the beginning of an initiative, not as an afterthought.
As a rule, if the foundation just wants a general sense of what was accomplished through its grant and is not trying to use the evaluation results to convince anyone else to replicate or learn from the program, an informal self-evaluation by the grantee may be sufficient. This is what many foundations do, especially with their smaller grants.
Evaluation as leverage
If, on the other hand, the foundation wants to leverage its investment in the program by convincing other funders to replicate the program on a larger scale, a more rigorous and costly independent evaluation may be required. The David and Lucile Packard Foundation invested almost two million dollars in a rigorous evaluation by Mathematica, a highly regarded independent evaluation research firm, of a model program to expand children’s health insurance that it was funding in Santa Clara County, California. The positive findings from that evaluation persuaded other California counties to adopt the Santa Clara model, and ultimately helped to leverage hundreds of millions of federal, state, local, and foundation dollars in increased coverage for children.
Interestingly, an evaluation does not always have to yield positive results to have a big impact. The documented failure of a particular intervention or strategy can itself serve as an important wake-up call to the field. A large-scale evaluation of an intervention funded by the Robert Wood Johnson Foundation in the late 1980s to improve end-of-life care for terminally ill patients found that the intervention had no real impact on patient care.
Instead of giving up on end-of-life care when the evaluation results were announced, the foundation intensified its efforts on the grounds that the evaluation had clearly revealed that ordinary interventions would not suffice. The foundation actively communicated the negative results of the evaluation to the public and to the medical profession as a kind of call to arms on the issue. Together with The Open Society Institute, it followed up with a multifaceted campaign to change deeply ingrained medical norms and values that has gradually resulted in widespread improvements in end-of-life care.
These and many other examples demonstrate the potential of evaluations to amplify the impact of a foundation’s programs. In addition, evaluations—even relatively low-cost, informal evaluations conducted by the grantees themselves—can yield valuable insights and lessons to the grantees and to the foundation itself.
Unfortunately, some evaluation reports—particularly those prepared by outside academics—are completed so long after the program itself has ended that the foundation has long since moved on by the time the final reports are submitted. As a result, too many potentially valuable evaluation findings languish unread in the closed-grant files of the nation’s foundations.