SHO Terms of Reference

Meta-evaluation and synthesis of the 2010 Floods in Pakistan

June 18, 2012

1) Introduction

On August 12, 2010, the SHO started a co-ordinated effort for the victims of the floods in Pakistan. The participating organizations in this initiative were Cordaid (Mensen in Nood), ICCO & Kerk in Actie, the Netherlands Red Cross, Oxfam Novib, Save the Children, Stichting Vluchteling, Tear, UNICEF Nederland and World Vision. The funds raised, 27.5 million euros, were spent on housing, medical care, food, water, sanitation, education and livelihood opportunities. The initiative officially ended in August 2011.

SHO participants agreed to carry out meta-evaluations for each action. The SHO follows the ALNAP[1] definition for meta-evaluations, adapted from Lipsey (2000): ‘systematic synthesis of evaluations providing the information resources for a continuous improvement of evaluation practice.’ This ToR provides the guidelines and expected outcomes for the Pakistan meta-evaluation.

The meta-evaluation will therefore assess the quality of the evaluations carried out by the individual SHO members with regard to their implemented activities in Pakistan. The report will be based on existing information, i.e. the available project evaluations of the SHO members.

In addition to systematically analysing the available evaluations to improve our evaluation practice, the SHO members want to broaden the scope of this exercise by summarizing and synthesizing the main results and recommendations of the individual project evaluations. This summary/synthesis will be used to inform the Dutch public and donors about the outputs and outcomes of our interventions at an aggregated level (accountability). Whether such a summary is feasible depends on the actual quality of the project evaluations (the meta-evaluation component), as well as on the level of information they provide.

The expected added value of this meta-evaluation is thus twofold:

  • Facilitate joint learning by examining the quality of evaluations;
  • Provide accountability towards back donors and the general Dutch audience.

Each SHO member applies its own PME system; all members account, through external project or programme evaluations, for at least 50% of the SHO budget received (with a minimum of 250,000 euros). These evaluations may cover direct humanitarian response and/or rehabilitation activities. Since the SHO did not agree in advance on the contents of the ToR for the individual evaluations, the evaluation reports of the participating organisations are likely to differ in objectives, methodology, focus, layout and depth. Hence, it might be challenging to draw comparable data from the individual evaluation reports.

This ToR covers the following sections: main purposes and specific objectives (section 2), methodology and deliverables (section 3), responsibilities of the participating SHO organizations (section 4), the expected workload (section 5), the timeline (section 6), the selection process for the consultant (section 7) and applications (section 8). Annex 1 provides the quality assessment grid.

2) Main purposes of the evaluation

As indicated in the introduction, the main purpose of this evaluation is to provide an independent assessment of the project evaluations of the individual SHO members with regard to the effectiveness and impact of the SHO-funded responses. It serves not only joint learning by the SHO members, but also the accountability of the SHO.

The meta-evaluation takes into account the agreements on evaluations as stated in the Organisation Protocol (“Organisatiereglement SHO”, January 2010, chapters 1.11 to 1.19). This refers to:

  • The OECD DAC criteria for disaster response evaluations, in accordance with the ALNAP Quality Proforma;
  • The Sphere minimum standards in Humanitarian Response during the intervention;
  • The Principles of Conduct for The International Red Cross and Red Crescent Movement and NGOs in Disaster Response Programmes during the intervention.

The main purposes and abovementioned standards and criteria are covered by the following three specific objectives:

Objective 1: Assess the quality of the evaluations of the SHO-members.

Objective 2: Provide an overview of the findings of the different project or programme evaluations that were undertaken/funded by the SHO-members.

Objective 3: Formulate lessons learnt and recommendations for SHO regarding future humanitarian interventions and its methodology for evaluations of SHO-funded projects/programs.

These objectives are further elaborated below.

Objective 1: Assess the quality of the evaluations of the SHO-members.

SHO members will provide their relevant evaluations to the consultant. Annex 1 contains the quality assessment grid that will be used to assess the quality of these evaluations. This grid covers elements such as whether the objectives indicated in the ToR were met, impact, the quality of the analysis and the reliability of the data. The consultant will provide the rationale for the scores given and indicate strengths and weaknesses. The grid will be applied to all project evaluation reports shared. Based on the outcomes, a synthesis will be written with specific attention to each component of the assessment grid. Evaluations of sufficient quality will provide the basis for objective 2. The exact minimum score will be defined in a session between the SHO members and the consultant.
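
To illustrate how such a minimum score could work, the sketch below (in Python) maps the four ratings of the Annex 1 grid to numeric values and averages them over the eight criteria. It is a minimal sketch only: the numeric mapping, the cut-off of 2.5 and all names are hypothetical assumptions, since the actual scoring rule remains to be agreed between the SHO members and the consultant.

  # Hypothetical sketch: aggregating Annex 1 grid ratings into a pass/fail
  # decision for objective 2. The numeric mapping and the cut-off are
  # illustrative assumptions, not values defined in this ToR.

  RATING_VALUES = {"excellent": 4, "good": 3, "weak": 2, "unacceptable": 1}
  MIN_AVERAGE = 2.5  # assumed minimum score, to be agreed with the consultant

  def passes_quality_check(ratings, minimum=MIN_AVERAGE):
      """Return True if the average score over all grid criteria meets the cut-off."""
      scores = [RATING_VALUES[rating.lower()] for rating in ratings.values()]
      return sum(scores) / len(scores) >= minimum

  # Example: one report rated on the eight Annex 1 criteria.
  example_report = {
      "meeting needs": "good",
      "appropriate design": "weak",
      "reliable data": "good",
      "sound analysis": "excellent",
      "valid findings": "good",
      "impartial conclusions": "good",
      "useful recommendations": "weak",
      "clear report": "good",
  }
  print(passes_quality_check(example_report))  # True: average 2.875 >= 2.5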

Objective 2: Provide an overview of the findings of the different project or programme evaluations that were undertaken / funded by the SHO-members.

The overview of the findings will consist of two parts.

Part I refers to conclusions drawn in the project evaluations in relation to the following specific topics; for each topic a specific question is defined:

  • Appropriateness/Relevance - Was the response tailored to local needs, enhancing ownership, accountability, and cost-effectiveness?
  • Efficiency - Was the response efficiently implemented?
  • Effectiveness and impact - Was the response effectively implemented?
  • Coverage - Did the response reach major population groups facing life-threatening risk wherever they were?
  • Connectedness - Did the response ensure that activities of a short-term emergency nature were carried out in a context that takes long-term and interconnected problems into account?
  • Coherence - Did the response take into account humanitarian and human-rights considerations (assess security, developmental, trade and military policies, as well as humanitarian policies)?

Fact-finding and the analysis to answer these questions will be based on the evaluations that have proved to be of sufficient quality. The SHO quality group will decide on actions to take if the overall information and quality are inadequate.

Part II will look at conclusions in the evaluations with regard to compliance with the standards agreed upon by the SHO, cross-cutting themes and organizational issues. More specifically, the following topics and related questions are expected to be addressed:

  1. Standards
  • Did the response adhere to the Code of Conduct and the Sphere Standards?
  2. Cross-cutting themes
  • Gender - Was gender considered in the agencies’ emergency assessments? Did relief provision include special components for women, men, girls and boys and, if so, were these systematically monitored?
  • Vulnerable groups - Were the special needs of acutely vulnerable groups (e.g. children, the elderly, people with disabilities) considered in the agencies’ emergency assessments, and were these groups consulted in the same way as other groups? Did relief provision include special components for them and, if so, were these appropriate and systematically monitored?
  • Strengthening of responses - Did the response of the SHO agencies strengthen and complement the response of local organizations and local coping mechanisms, or hinder them?
  3. Organizational issues
  • Co-operation - What was the level of co-operation in the field between the SHO agencies? Could more have been done to improve the effectiveness of the SHO agencies’ responses in terms of co-ordination, joint logistics, communications packages, and information flows between the key relief players?
  • Lessons learned - To what extent did responses reflect lessons learned from previous similar disasters, and more specifically from the SHO humanitarian aid action in East Asia in 2005[2]?
  • Geographical coverage - Was there appropriate geographical coverage within the affected region?

Objective 3: Formulate lessons learnt and recommendations for SHO regarding future humanitarian interventions and its methodology for evaluations of SHO-funded projects/programs.

Based on the findings, results and conclusions of objectives 1 and 2, the meta-evaluation shall present the lessons learnt and recommend future actions for SHO or its members to improve the evaluation of future humanitarian interventions. The recommendations should include ideas or proposals for learning activities for SHO.

3) Methodology and deliverables

In line with its main purposes, this meta-evaluation consists of three phases:

  • Phase 1: Quality check
  • Phase 2: Accountability phase
  • Phase 3: Learning among SHO members

Phase 1: Quality check

The findings and recommendations of the meta-evaluation will be based on a desk study of the reports of external evaluations and reviews. Based on this desk study, a first draft report will be developed and discussed during a meeting with the SHO members. This draft report should consist of:

  1. An analysis of the quality of the evaluation reports, based on the assessment grid mentioned under objective 1;
  2. A section with the conclusions and lessons learned based on the above analysis;
  3. Recommendations for the continuation of this meta-evaluation (phases 2 and 3) and, if necessary, an adapted work plan.

The main outcomes and recommendations will also be used in the final meta-evaluation report.

During a meeting with the SHO directors, the consultant will present the deliverables of the first phase. Based on the information collected on the evaluations, the quality of the content and the consultant’s presentation, the SHO directors will decide whether to proceed to the second phase.

Phase 2: Accountability

During the accountability phase a synthesis report will be developed containing the following main components:

  1. Background information on the emergency context in Pakistan;
  2. A short reflection on the evaluation methodologies used;
  3. A synthesis presenting the main outcomes of the analyses, based on the elements mentioned under objective 2.

This synthesis report will be used to inform the Netherlands Ministry of Foreign Affairs, interested NGOs and the Dutch public on the outcomes of the questions mentioned under objective 2.

Phase 3: Learning

The outcomes of the meta-evaluation as well as the synthesis report will be shared among the SHO members. The need to further revise processes and procedures, the concepts and axioms of disaster response (first-, second- and third-loop learning), and our evaluation system will be discussed. The consultant, in cooperation with ICCO & Kerk in Actie, will organize a learning event that integrates the findings and recommendations of this meta-evaluation.

Overall report

Finally, an overall report will be produced containing the reports of phases 1 and 2, including a short one-pager on the learning session. Besides the reports mentioned above, the final report will contain:

  1. An executive summary, including key recommendations (max. 3 pages);
  2. A section with the lessons learned, conclusions & recommendations;
  3. Appendices, including the Terms of Reference, maps, sample framework, summary of agency activities, sub-team report(s), end notes (where appropriate) and bibliography.

After being selected, the consultant will develop a detailed work plan at the beginning of the process. All materials collected in the course of the evaluation should be lodged with the SHO Secretariat prior to termination of the contract.

4) Responsibilities of the participating SHO organizations

The process of developing and finalizing the meta-evaluation and synthesis is steered by representatives of the SHO quality group. ICCO & Kerk in Actie is responsible for the overall co-ordination of the meta-evaluation, supported by Stichting Vluchteling and UNICEF through a Reference Group. This Reference Group[3] will:

  • Select the evaluator(s) and facilitate communication with them;
  • Facilitate the administrative steps around this evaluation in coordination with the SHO back office (contract, payment, etc.);
  • Support the consultant in assembling the relevant documentation (i.e. evaluation reports and reviews);
  • Organise preparation and progress meetings and a final learning event.

The members of the quality group will:

  • Provide relevant information within their own respective organisations and send it to the evaluator(s);
  • Provide input on the draft report;
  • Agree on the final report;
  • Participate in knowledge exchange and the final learning event.

Participating SHO agencies are required to submit the following material to the SHO secretariat as input to the meta-evaluation (preferably in both hard copy and electronic format):

  • Key documents on the agency’s response to the emergency and their use of SHO funds (general project reports, overviews, SHO reports, etc.)
  • Reports of external and internal evaluations and reviews

5) Expected workload

Activity | Number of days
Preparation and reading (basic documents) | 2
Desk study (analysis of the different evaluations, +/- 20) | 10
Writing the meta-evaluation report | 3
Presentation of the report for the SHO and processing final adjustments | 2
Writing the synthesis report | 3
Contribution to the facilitation of the learning process | 2
Total working days consultant | 22

NB: this table shows the expected workload for the consultant. The final workload might decrease depending on the go/no-go decision for phase 2 of this meta-evaluation.

6) Timeline

Activity | Week(s) | Date
Approval of ToR | 25 | June 18
Deadline for application | 26 | June 27
First meeting Reference Group / consultant | 27 | July 5 or 6
First draft meta-evaluation report | 35 | August 27
Go/no-go phase 2 | 36 | September 3
Writing of draft and final synthesis report; learning event | to be discussed | Nov/Dec 2012
Finalization of the process | - | December 2012

7) Selection process and minimum requirements

SHO members will propose possible candidates. These candidates should be independent and must not have been involved in any of the project evaluations. The SHO quality group will draw up a shortlist of 3 to 5 candidates. The Reference Group will cross-check the independence of the candidates and make a selection after consulting the quality group. The shortlisted candidates will be invited to prepare a work plan and budget.

The evaluator should have relevant skills and a proven background in humanitarian emergency project implementation and/or evaluation. No specific sector expertise is required. The evaluator(s) should be gender-aware. The selection procedure will be elaborated in a specific document. In summary, the selection criteria are:

  • Experience in humanitarian aid
  • Knowledge of the country/region
  • Gender awareness
  • Availability

8) Applications

Applications can be sent to martijn.marijnis@icco.nl by June 27 at the latest. The application should contain a short motivation, relevant past experience, a curriculum vitae and the consultant’s daily rate. Only shortlisted candidates will be contacted.


Annex 1 - Quality assessment grid

Assessed report:
Version (draft/final):
Date of report:
Assessor:
Date of assessment:

Each criterion below is rated on a four-point scale: Excellent / Good / Weak / Unacceptable.

1. Meeting needs

The report adequately meets the information needs expressed in the terms of reference in a way that reflects the stated priorities. Demands made during the evaluation process are mentioned and, where possible, satisfied.

2. Appropriate design

Key concepts and criteria are precisely defined. The method is described clearly. It is adequate for addressing the questions. Methodological limitations are explained, as well as their consequences on the strength of conclusions, and on the substance of recommendations.

3. Reliable data

Data are sufficiently reliable with respect to the conclusions that are derived from them. Data collection tools have been applied in accordance with standards. Sources are quoted and their reliability is assessed. Potential biases are discussed.

4. Sound analysis

Data are cross-checked, interpreted and analysed systematically and appropriately. Underlying assumptions are clarified. The main external factors are identified and their influence taken into account.

5. Valid findings

The findings are based on evidence through a clear chain of reasoning. The limitations to validity are clearly stated.

6. Impartial conclusions

The conclusions are based on explicit criteria and benchmarks. They are free of personal and partisan considerations. Points of disagreement are reported truthfully. Lessons of wider interest are identified.

7. Useful recommendations

Recommendations stem from conclusions. They are applicable and detailed enough to be implemented by the addressees. The level of recommendations (political, strategic, managerial, …) reflects that of the questions.

8. Clear report

The style of the report is interesting for and accessible to the intended users. A short summary stresses the main findings, conclusions, lessons and recommendations in a balanced and impartial way.

Overall assessment

Taking into account the contextual constraints on the evaluation, the report satisfies the above criteria.


[1] ALNAP is the Active Learning Network for Accountability and Performance in Humanitarian Action and was established in 1997. ALNAP is a collective response by the humanitarian sector, dedicated to improving humanitarian performance through increased learning and accountability.

[2] See meta-evaluation 2009

[3] Members are: Wim Stellingwerk (St. Vluchteling), Martijn Engels (Unicef), Martijn Marijnis (ICCO & Kerk in Actie)