Relevant reading

Rome was not built in one day

A host of recent publications illustrates how little we know about successful development cooperation. At the same time, the demand for more aid effectiveness is at the core of recent multilateral debate. Imminent recession in most rich nations, moreover, means that the pressure will grow to prove the worth of official development assistance (ODA). What is needed is coherent and methodologically sound assessment of results. [ By Niels Keijzer and Gwénaëlle Corre ]

“Management for results” is one of the principles of aid effectiveness as defined in the Accra Agenda for Action (AAA) and the Paris Declaration on Aid Effectiveness, the final documents of the High-Level Forums of 2008 and 2005 respectively. The principle implies that assessment is not only of technical or bureaucratic interest. Rather, it is a process-related issue which is supposed to shape future cooperation and policymaking.

So far, two monitoring surveys on the implementation of the Paris Declaration have been published (OECD/DAC, 2006 and 2008). Both clearly show that donors and developing countries alike are lagging behind on practically all targets agreed. On top of that, the surveys show that the assessment processes themselves need to be improved.

But what exactly does “assessment” mean, and why is there so little progress? In this article, we define assessment as comprising both “monitoring” and “evaluation”. The two are distinct but interrelated. Monitoring relates to the supervision of ongoing projects and programmes, whereas evaluation is about appraising the state of affairs ex-ante or ex-post. Both are normally part of any development intervention. Recent years have also seen a shift from asking “what has been done?” to asking “what has been achieved?”

A public good

As the Centre for Global Development (CGD) argued in an influential paper (Savedoff, Levine and Birdsall, 2006), there has been serious underinvestment in the particular area of assessing results. The reason is that various and diverse factors have an impact on the ground reality of poor countries, so many development agencies do not invest in assessing that reality and their impact on it. Rather, they limit themselves to assessing results within the direct scope of their own, individual action.

The CGD proposes to consider impact evaluations public goods: they involve considerable costs for any individual agency but, once made public, are of use to anyone at little additional cost. This is precisely the reason why public goods typically attract too little investment.

On top of the fundamental public-goods challenge, there are many practical constraints to assessing development results. First of all, it is always very difficult in methodological terms to measure impacts on complex systems. Such difficulties are compounded by the fact that, in development policy, the data describing the situation before an intervention is normally quite weak. Sometimes, there is no such baseline data at all. Moreover, monitoring data often remains limited and/or inappropriate.

Pooled resources

In international development cooperation, there has been a move away from projects to programme-based approaches such as budget support and basket funding. In these cases, several donors pool funds in support of a government in a developing country. Holvoet and Renard (2007) have pointed out that these aid modalities lead to new information gaps: increasingly, there is a dearth of information on what is being done in terms of tangible activities and outputs. The authors also state that such problems tend to relate to which stakeholder bears ultimate responsibility for which part of the result chain.

In recent years, there have been successful attempts at joint evaluations. Several donors teamed up around a shared topic. Convincing examples include the 2006 EU-led joint evaluation of general budget support, and the 2008 evaluation of the Paris Declaration (IDD and Associates 2006, Wood et al. 2008). Both studies show that coordinated and coherent action of several donors can be of great value.

Nonetheless, the High-Level Forum in Accra stressed the need for more alignment in the area of assessment, and for good reason. If developing countries are to be in charge of their own development – as is generally agreed they must – they must also be the ones to determine whether they are on track or not. Accordingly, both the AAA and the Paris Declaration emphasise the use of countries’ own systems.

Alignment matters

It was only logical, therefore, for donors to commit to strengthening developing countries’ statistical capacities and information systems in Accra. They thus implicitly acknowledged daunting challenges in this area. Many basic requirements have yet to be met in many countries. However, one should not ignore the progress some countries have made in increasing their monitoring capacity – particularly with respect to the Millennium Development Goals.

A recent study by Mokoro Ltd (2008), however, shows that the use or non-use of country systems is by no means a binary choice. Nor does the use or non-use of country systems simply follow from any specific kind of aid modality. There are many possibilities of using country systems at various levels, looking at government plans, budgets, parliamentary action, procurement, auditing et cetera.

The Mokoro study, moreover, reaffirmed a finding of the Accra summit: donors sometimes do not use country systems even where such systems are reliably operational: “There is not a strong correlation between the use of country systems and ratings for quality of public finance management.”

The AAA set the goal of using country systems for 50 % or more of all government-to-government assistance. Of course, this does not constitute an agreement to align regardless of capacity and quality, but rather to move towards more informed decision-making. While donors are interested in quickly establishing positive track records in relation to this target, it should be clear that achieving development objectives will depend on an attitude and practice of “critical alignment” (ECDPM, forthcoming).

Such critical alignment will be an ongoing exercise in deliberation and calibration. All too often, the data that inform decision makers in development cooperation hardly transcend the realm of intuition and personal observation. If that is to change, it will not suffice to define new rules and procedures. Rather, it will be necessary to strengthen assessment procedures and use them systematically in all phases of cooperation.

Experience shows that different donors often have quite different ideas about what degree of alignment is desired in any given developing country. Therefore, more exercises in joint evaluation may actually serve the cause of making more use of developing countries’ own systems, and do not necessarily conflict with the goal of alignment.

Diverse tasks

A paper by Carlsson and Engel (2002) of the European Centre for Development Policy Management (ECDPM) looks at the changing role of the evaluator, taking account of the relationships between various stakeholders in development cooperation. Traditionally, the typical evaluator was a “distant, research-oriented person trying to systematise the known and unearth the hidden”. The evaluator’s job, however, is increasingly becoming that of a “process facilitator whose greatest skill is to design and organise others’ learning effectively”.

Generally speaking, assessment of development-cooperation results serves a wide variety of – sometimes conflicting – purposes. They include
- accountability to primary stakeholders,
- public relations including fundraising,
- the drafting of new policies and
- organisational learning.

Obviously, part of the challenge lies in the fact that special interests of various stakeholders are likely to leave a mark on assessment results. Solid methodology and procedural transparency should help to limit such distortions.

Accordingly, there are two different approaches to improving assessment capacities. The first prioritises rigour and methodological standards; the second is about ensuring that assessment procedures become more accessible, better aligned and are used in a more inclusive and participatory manner. Both approaches are relevant to improving aid effectiveness. It is difficult – but necessary – to strike a balance. It does not help to have an approach that looks good on paper, but cannot be implemented with a similar level of rigour without external expertise.

Telling examples of such challenges are to be found in the context of the European Union’s partnerships with countries in Africa, the Caribbean and the Pacific (ACP). The relations between the EU and the 78 ACP countries are presently defined in the Cotonou Partnership Agreement that was signed in 2000. The Agreement states that the “ACP states shall determine the development strategies for their economies and societies in all sovereignty”. It recognises two categories of “actors of cooperation”:
- state actors, including actors at local, national and regional levels; and
- non-state actors: the private sector, economic and social partners, including trade unions, and civil society in general.

The Agreement has led to the development of particular mechanisms for delivering and assessing development cooperation through the European Development Fund (EDF), guided by the principles of “co-management”, “co-decision” and “joint programming”. An annex of the Agreement lays out an elaborate performance review process consisting of annual reviews, a mid-term review (MTR) and an end-of-term review (ETR).

The ECDPM has assessed the mid-term review processes of the ninth EDF (Mackie, 2007). The study indicates that the reviews focussed on how resources were being managed, but did not sufficiently assess the progress of national development. Moreover, the official counterparts in the ACP countries were not very actively involved. Often, their contribution was limited to making comments after the European Commission Delegations had presented draft reports to them. Moreover, the participation of national parliaments and civil-society organisations seems to have been ad hoc at best, which reflects badly on the transparency and accountability of the process. Though the practice of mutual accountability in the context of the Cotonou Agreement may thus look disappointing, the fundamental ideas are sound. It is to be expected that their implementation will improve in the years to come.

Conclusion

There is a need for much more support to improve assessment capacities in developing countries (Carden, 2007). There are no fast, short-term solutions. Donor agencies must fulfil the demands of developing countries in this area. Where there is no such demand, however, they must be aware of the risk of externally-driven assessments undermining local ownership of development cooperation. Wherever key stakeholders agree that assessment processes are inadequate and do not reflect the principles of equal partnership, they will have to look into whether sufficient resources have been made available to tackle the challenges.

Lowering ambitions in the field of development cooperation is not an option; the stakes are too high. If aid effectiveness is to improve, assessment practices will certainly have to improve too. One must understand, however, that Rome was not built in one day.