Whose reality counts? The donors' or the beneficiaries'?

I have spent over 15 years in teaching, teacher training and school management. Throughout, I have been interested in observing and understanding how educational initiatives make a difference in people's lives. When I decided to pursue my PhD research, it was clear to me that I would study issues related to the impact of education on people's lives, well-being and status. This opened my eyes to the concept of "empowerment": a buzzword that has been used, and sometimes overused, especially in the field of international development. To clear my mind of the dozens of competing definitions, I decided to stick to the World Bank definition: "the capacity of individuals or groups to make purposeful and effective choices in the interest of pursuing a better life for themselves" (Walton, 2003, p.3).
Empowerment is a complex term with endless interpretations, owing to its multi-contextual (political, socio-cultural, economic…) and multi-dimensional (individual, community, national…) nature. To add to these convoluted complexities, "empowerment" can be viewed as a process, as a product and as an approach; on the ground, however, it is most commonly viewed as an aim.


As I became more and more interested in understanding the empowerment of marginalized and disadvantaged populations in developing countries, especially children and youth, I found myself drawn to research within the "Education in Emergencies" realm. There I realized that most of its rhetoric, concepts, and even programme approaches have been adopted from the broader 'International Development' arena. In fact, what has happened over the last four decades has largely been a borrowing and adaptation of techniques developed by the big players in the international development community, such as the World Bank and the Organisation for Economic Co-operation and Development (OECD). Evaluation and evaluation techniques are just one example: most practitioners assigned to 'monitoring and evaluation (M&E)' tasks rely primarily on OECD principles and approaches. I am not trying to underestimate the value of the DAC Principles for Evaluation of Development Assistance, which have been the main evaluation tool for decades. There is, however, always a risk of bypassing the most important aspect, 'people's reality', by failing to observe, incorporate and analyze the relational and categorical (or group-based) inequalities that play an instrumental role in people's (dis)empowerment.


On the ground, practitioners undertaking evaluative assignments are usually under pressures that hinder them (even when they would like to) from incorporating and counting people's reality: the pressure of time, a lack of resources, and often a lack of documentation and meaningful data. Under these conditions, practitioners usually resort to ready-made 'impact evaluation techniques' that count outcomes against planned objectives and goals. Inadvertently, the real questions get omitted: how much did the project (programme or initiative) affect people's reality? How much did they benefit from it? How much did it influence their ability to make appropriate decisions and choices for their well-being? In short, how much did the project under evaluation 'empower' them?


Evaluation of empowerment can create tension in relationships. On the ground, programme managers are accountable to their own constituents (parliaments, managers, taxpayers…) as well as to the programme beneficiaries. To satisfy the first group, they need to provide measurable outcomes, which sometimes come at the expense of empowering processes that are usually complex and difficult to predict. This means the process of evaluating someone else's empowerment is potentially disempowering. By 'disempowering' I mean that the process of evaluation or project assessment becomes (mostly unintentionally) an exercise of power over the less powerful: the marginalized and disadvantaged programme recipients. One reason behind this 'bad' practice is the selection of frameworks and evaluation criteria that are often political, based on the interests of donors and project implementers, and that flow from domination by the more powerful. These top-down, donor-centric practices overlook the capabilities, needs and aspirations of programme recipients, preferring instead to provide assistance to states and to those working within centers of power.


To avoid these top-down evaluation practices, it is strongly advised to undertake sincere 'participatory approaches' when assessing projects and programmes directed at marginalized and disadvantaged populations. Participatory monitoring and evaluation (PM&E) approaches tend to appreciate the realities of poor people, which are local, complex, diverse and dynamic. Unlike the top-down conventional development agenda, PM&E does not focus primarily on 'income poverty' but extends its vision to other, broader aspects of deprivation, e.g. humiliation, social inferiority, isolation, physical weakness, vulnerability, powerlessness… PM&E approaches differ from conventional approaches in that they seek to actively engage project stakeholders in assessing the project's progress and results, aiming to shift power from programme implementers to the intended programme recipients and beneficiaries. This allows people (i.e. the intended programme recipients) to set the direction for change, plan their priorities, and decide whether the intervention has made progress and delivered relevant change. It is not just a matter of using participatory techniques; it is about radically rethinking who initiates and undertakes the process, and who learns or benefits from the findings.


If practitioners in the evaluation arena are sincerely interested in 'counting people's reality', they have to start recognizing the necessity of providing local people with the space and opportunity to establish their own analytical frameworks. According to participatory approaches, it is the local, complex and dynamic reality of people that counts. These complex realities cannot be counted if data needs and instruments are designed in remote donor offices, in Geneva or the USA. These complex realities will stay missed if the whole process of data extraction takes place while beneficiaries are kept passive, uninformed or ill-informed. These complex realities will be missed if practitioners, experts and project implementers insist on imposing top-down investigation, assessment and evaluation procedures.
