Review of SSR Process in Burundi: From Arusha to present

30/04/2013 - 30/03/2014

Target country

Burundi

This evaluation mandate, jointly commissioned by the Netherlands, the Government of Burundi and the BNUB, analyses the efforts made by Burundians and their international partners to reform the security and justice sectors since the signing of the Arusha Peace Agreement in 2000. The main objectives of this evaluation were:

  • to provide an overview of the evolution of the reform process from 2000 to 2013;
  • to identify the results obtained and the remaining challenges; and
  • to formulate recommendations to the Burundian Government and its international partners in order to improve the SSR process.

Mandating organisation / agency / department / ministry

The Netherlands, the Government of Burundi and the BNUB

Mandate outputs / products

Evaluation report with the main conclusions and recommendations on the SSR process in Burundi from 2000 to 2013

Outcome objectives of mandate

The outcome objectives were the following:

  • to provide an overview of the evolution of the reform process from 2000 to 2013;
  • to identify the results obtained and the remaining challenges; and
  • to formulate recommendations to the Burundian Government and its international partners in order to improve the SSR process.

Start date

30/04/2013

End date

30/03/2014

Summary

Specific Lessons Identified

Lessons Identified

What worked well

What was challenging

Recommendations

The presence on the team of the ISSAT PO who developed the ISSAT evaluation methodology ensured that this methodology was actively applied when designing the mandate.

This was a large and complex assignment, and the workload was correspondingly heavy.

In the PO’s opinion, this type of evaluation takes between a year and a year and a half and costs around 300,000 to 400,000 EUR. In this case it was completed in three months. As a result, the team had to make decisions about what level of detail to go into.

 

The scoping mission should have narrowed down the scope of the evaluation even further.

Ensure there is always a PO from the newly formed “methodology cell” collaborating with mandate teams on the methodology design for each mandate.

 

ISSAT should better assess the workload of such evaluations and plan accordingly.

 

Although there was enough time overall during the design phase, the team needed more of it to integrate the necessary working sessions for the methodology design.

 

 

One more week in the field would have been useful for the team: it would have allowed them to plan meetings properly at the outset and to have sufficient time to get through all meetings without rushing or skipping any.

There was also no time for informal meetings, which was a loss for the team, since these are where fruitful discussions happen and where networks and relationships are built.

 

 

The CD did not set up a schedule of meetings prior to the team’s deployment.

Ensure that a preliminary version of the meeting schedule is ready before deploying, so as not to risk having to set up meetings while in the field.

 

Methodology design

What worked well

What was challenging

Recommendations

After initial surprise among team members at the methodology’s scope, everybody was willing to work within this framework. Given the complex and wide-ranging nature of this evaluation, having an explicit, robust and detailed methodology reassured the team that certain issues would not be omitted or overlooked.

 

 

The methodology was very useful for team members who had thematic expertise in certain sectors but lacked, or had limited, technical evaluation skills. As a result, the learning from this experience was very high within the team.

 

 

Having a clear and explicit methodology helped keep every team member on the same page about what kind of information to look for, what the gaps were, what phase the evaluation was at, and so on.

For team members who were not part of the evaluation methodology development process, it was more difficult to get up to speed on how and why the mandate’s methodology had been designed.

Ideally, ISSAT should organize a working session bringing together all team members (ISSAT, Roster and Mandator) to present the ISSAT evaluation methodology and explain how it will be adapted to the mandate, so that all team members feel ownership of the design phase.

 

This recommendation should be applied with additional emphasis regarding non-ISSAT core team staff.

All team members were aware of all of the evaluation questions even if they were working on only one. This helped keep the big picture in everybody’s minds, including awareness of remaining gaps and of overlaps and complementarities.

 

 

The methodology was very important for justifying the results, in particular because this evaluation was very political. Recommendations had to be tied to robust data collection methods and objective processes.

However, because references to the methodology were made only in the annex, readers could not instantly see the origin of a given recommendation.

References to evaluation questions or other methodological guidelines should, where possible, be made in the text as well as in an annex, so that the reader can instantly see the origin of a recommendation without having to go back to the annex or to the methodology section at the beginning.

 

This is even more significant given that most readers may not read the methodology section at the beginning or the annexes, and will find it more useful to see the linkages between the methodology and the recommendations in the text itself.

Questions, sub-questions and indicators were approved by the Comité Directeur which gave the approach a lot of legitimacy.

However, the fact that these had been signed off by the CD made them less flexible in the team’s view. Once in the field, the team would have preferred some margin to adapt the sub-questions and to add or remove certain indicators in light of the new information they had collected.

Going forward, evaluation teams should be aware that sub-questions and indicators can be amended for valid reasons after data collection starts, once those reasons are explained to partners. However, the main questions cannot be changed once in the field and must remain as they are.

This aspect has to be clearly explained to partners and even included in ToRs if need be.

 

This was the first time an ISSAT mission used the in-house methodology. Until this mission, each ISSAT evaluation had used a different methodology.

There is a risk that future mandates will design their own methodologies rather than fully use the ISSAT in-house one. This would mean each evaluation mission having its own methodology, encouraging inconsistency in ISSAT’s approach.

 

Security-justice-governance linkages were the focus of this evaluation.

 

Scoping mission

What worked well

What was challenging

Recommendations

The scoping mission was very useful in order to narrow down the scope of the methodology. Since the Mandator was not clear on this point, a scoping mission was necessary to move forward.

 

The scoping mission was also very useful for informing the partners of the objectives of the mission and ensuring they were aware of the big picture.

 

However, time permitting, an activity beyond a merely informative scoping mission would have been very useful. A workshop with the local partners presenting the methodology and how it would be used would have helped secure more support and buy-in from the Comité Directeur. The partners would have been better able to comment on the substance, and to understand where the recommendations came from and how they were formulated.

If time permits, plan a working session with local partners to present the methodology approach, get their views and adapt as necessary. This should be more than just information sharing: a capacity-building activity enabling the partners to contribute more meaningfully to the mandate’s methodology.

 

 

Implementation

What worked well

What was challenging

Recommendations

 

 

If the team is under time pressure and unable to type up notes from meetings, a good solution is to fill in the evaluation grids directly after each interview.

 

The desktop review and research phase was underestimated. The team should have given more time to that phase, and could even have started filling in the evaluation grids and then tested and updated them once in the field. This would have saved a lot of time.

Try to carry out an exhaustive research and desktop review phase before deploying. By the end of it, the team should be able to fill in the grids on a tentative basis; those grids would then be re-evaluated and updated in light of the information collected in the field.

 

The main findings of the evaluation should have been presented to the CD before the report was produced, to get their views and to ensure there were no surprises when the report came out.

Present main findings to local stakeholders before typing up the report to avoid any surprises.

The team overcame this challenge by grouping certain sub-questions and indicators where possible.

The methodological framework was very large, which resulted in a huge scope for the evaluation. This could not have been avoided, however, given the nature of the evaluation. Had it targeted a limited programme with known objectives and outputs, the team would have been better able to select what was relevant and what was not.

If pressed for time, and only where applicable, try to group certain sub-questions and indicators where possible.

 

Working with Comité Directeur

What worked well

What was challenging

Recommendations

 

The CD did not supply comments on the methodology approach once it was sent to them. This might be due to several reasons, among others:

  • the CD was unable to fully comprehend the methodology and fully understand its links to the evaluation report and recommendations; or

  • the CD was unwilling to provide comments for reasons examined here.

Same recommendation as above pertaining to workshop with local stakeholders is applicable here.

The team should have made a greater effort to push the CD to react to the methodology, and deployed to the field only once the CD’s insights had been received.

The CD was set up to oversee the evaluation, was consulted by the team on the objectives of the evaluation, and endorsed the methodology.

 

 

 

The Evaluation report

What worked well

What was challenging

Recommendations

 

This evaluation targeted a local process, but the report and methodology were drafted by internationals. The report should have been shared with Burundian stakeholders for a first read, to get a sense of how the recommendations would be interpreted on the Burundian side. This step would have been very useful for fine-tuning the language and making sure the recommendations were clear enough and could not be misinterpreted.

It might be useful in some cases to have the evaluation report read and commented on by a local stakeholder, in particular if the evaluation is highly political. The language and presentation of certain ideas by internationals might be misinterpreted by nationals.

 

The team did not thoroughly discuss who would be the owner of this report. They had decided that the CD would be the owner, but would have benefited from further discussion of what that would entail. The team should have clarified among themselves certain boundaries or red lines beyond which they would refuse to make concessions. There is a risk that the report is altered completely by the local stakeholders, and the team should be aware of and ready to react to that.

It is useful for the team to engage in a detailed discussion to identify the owner of the report and the consequences of that choice.

 

The team lacked a clear vision of the end use of the report. Why was such a report drafted, to be used by whom, and for which ends?

Going forward, evaluation teams should clarify the use of the end report: why is it being produced, by whom will it be used, and how?

 

Political Coverage of ISSAT on evaluations

What worked well

What was challenging

Recommendations

 

While the recommendations of the report were being presented, ISSAT did not benefit from the full political coverage by the Mandator that should be standard practice.

ISSAT should not take on the political responsibility of justifying evaluation recommendations to national stakeholders, particularly in such a highly political situation. This responsibility should lie with the mandator, and this should be made explicit in the ToRs.

    Evaluation de la réforme de la sécurité et de la justice au Burundi

    This evaluation report, jointly commissioned by the Government of Burundi, the Government of the Kingdom of the Netherlands and the United Nations Office in Burundi (BNUB), covers the concerted efforts of Burundians and all their international partners to reform the security and justice sectors (mainly the criminal justice chain) since the signing of the Arusha Agreement in 2000. The main objectives of this evaluation were:

    • to provide an overview of the evolution of the reform process from 2000 to 2013;
    • to identify the results obtained, the gaps, the challenges and the deficits; and
    • to formulate recommendations to the Burundian authorities and their national and international partners aimed at improving the current and future activities of the reform process.

    To carry out this evaluation, the Government of the Netherlands requested the support of a team from ISSAT (International Security Sector Advisory Team), which began its work in May 2013. A Comité Directeur was established to accompany the stages of the evaluation and to review and approve its interim, draft and final reports. This committee was composed of representatives of the Government of Burundi (executive, legislative and judicial branches), of the technical and financial partners (PTF) and of civil society.
