23 research outputs found

    Realist Evaluation : an overview

    This report summarises the discussions and presentations of the Expert Seminar ‘Realist Evaluation’ with Gill Westhorp, which took place in Wageningen on March 29, 2011. The Expert Seminar was organised by the Wageningen UR Centre for Development Innovation in collaboration with Learning by Design and Context, international cooperation.

    Hot issues on the M&E agenda

    This report summarises the discussions and presentations of the Expert Seminar ‘Hot Issues on the M&E Agenda’, which took place in Wageningen on March 23, 2012. The Expert Seminar was organised by the Wageningen UR Centre for Development Innovation in collaboration with Learning by Design and Context, international cooperation. The report describes the hot issues on the M&E agenda, divided into a global top 10 of hot issues, the politics of evaluation, the battlefield of rigour, and the future of evaluative practice.

    Developmental evaluation : applying complexity concepts to enhance innovation and use

    This report summarises the discussions and presentations of the Expert Seminar ‘Developmental Evaluation’, which took place in Wageningen on March 22, 2012. The Expert Seminar was organised by the Wageningen UR Centre for Development Innovation in collaboration with Learning by Design and Context, international cooperation.

    Improving the Use of Monitoring & Evaluation Processes and Findings : Conference report

    This report summarises the discussions and presentations of the Conference ‘Improving the Use of Monitoring & Evaluation Processes and Findings’, which took place on March 20 and 21, 2014. The conference is part of our series of yearly ‘M&E on the cutting edge’ events. It looked particularly at the conditions under which the use of M&E processes and findings can be improved. Report CDI-14-01.

    Seeking surprise : rethinking monitoring for collective learning in rural resource management

    No full text
    Common sense says that monitoring systems should be able to provide feedback that can help correct ineffective actions. But practice shows that when dealing with complex rural development issues that involve collaborative action by a changing configuration of stakeholders, monitoring practice often falls short of its potential. In this thesis, I describe my search to understand why practice is so limited and what might be needed to design monitoring processes that foster learning in concerted action around equitable and sustainable development. I examine the contradiction between monitoring as the basis for learning in ‘messy partnerships’ and the reality of monitoring driven by a concern for upward financial accountability. The environment – natural, organisational and socio-political – constantly gives feedback. But feedback needs to be perceived and interpreted for learning in rural resource management. Monitoring can be viewed as designing and implementing the feedback loops necessary to ensure that collective learning is fed by ongoing information flows within and among members of ‘messy partnerships’ and enables concerted action. However, neither monitoring nor learning is, by and large, described in terms comprehensive or precise enough for implementation as part of sustainable resource management. The promising potential of more participatory approaches, if based on the same logic as mainstream M&E, as is commonly the case, does not provide sufficient innovation. In Chapter 1, I introduce the focus of the thesis via a metaphor that emerged during fieldwork in Brazil – ‘tiririca’ (Cyperus rotundus), a pernicious weed that sprouts back more ferociously the more it is cut back. ‘Tiririca’ represents the complexity of developing a learning process based on monitoring concerted action, as well as the need for structural solutions.
In this chapter, I introduce concepts – institutional transformation, messy partnerships, monitoring and (collective) learning – that have spawned my quest for monitoring alternatives. I outline the growing relevance of the topic, which brings me to my research questions:
1. How is ‘monitoring’ viewed by rural development and resource management discourses that advocate more adaptive forms of rural resource management? On what assumptions and presuppositions about processes of monitoring, collective learning and improved action are these discourses based? What practical orientation do they give for learning-oriented monitoring?
2. What is the underlying logic – with related presuppositions – of mainstream monitoring approaches, and hence what is the monitoring theory that is expected to guide practice?
3. What can practical experience from small-scale rural change processes in Brazil and from a large rural development organisation show about what is needed for monitoring to contribute to collective learning?
4. What insights are offered by studies on cognition and organisational learning that can help fill the theoretical gaps and overcome the practical challenges of learning-oriented monitoring?
5. Given these empirical and theoretical insights, what would an alternative monitoring approach require so that it can trigger the forms of learning needed to ensure adaptive and collaborative rural resource management?
In Chapter 2, I explain how the thesis evolved from questions emerging from my involvement since 1994 in diverse interventions and organisations. I have sought to fuse the strands of experience into a cohesive argument by discussing the key experiences and theories on which I draw in this thesis, and the methodologies used. I make use of four aspects of theory to interpret experiences: contextualising discourses, the espoused theory and theory-in-use of monitoring, theoretical building blocks, and methodological theory.
In Chapter 3, I argue why the focus of this thesis is so critical. I examine three key discourses that are currently guiding much of the thinking and practice in rural resource management – adaptive management, collaborative resource management, and sustainable rural livelihoods. These discourses are concerned with adaptive behaviour, collective learning and interactive decision-making. They are value-driven and focus on environmental conservation, equitable resource use, and poverty alleviation. The adaptive management discourse highlights four features in monitoring for resource management: the hypothesis-refining effect of models by using simulated monitoring data; the role of indicators in making tangible the visions, targets and resource states; the importance of investing in long-term data collection and deliberative processes on that data; and the focus on scientific experimentation and surprise. However, in practice various problems occur, including the time and expense of the necessary data; inadequate ecological monitoring; difficulty of agreeing on what merits experimentation and should be monitored; and naivety about the challenges of jointly designing monitoring systems and information analysis. For collaborative resource management (CRM), monitoring efforts should combine a logic model perspective and hypothesis testing. The logic model perspective is used to plan initiatives and structure their monitoring. Such models focus on monitoring indicators for specific pre-determined results to prove progress and ensure accountability. Joint articulation and continual assessment of indicators is central to monitoring CRM. Criticisms of CRM include naivety about ‘community’ and consensus, and oversimplification of the complexity of collective monitoring. The sustainable rural livelihoods approach (SLA), or framework, calls for an M&E system, with accompanying indicators, to assess progress towards livelihood sustainability.
Livelihood approaches rely on mainstream M&E practice, which, in the case of externally-driven/initiated development interventions, means using programme logic models. The role of monitoring is couched in general terms, such as using the livelihoods framework to structure M&E processes. The livelihoods literature offers a set of desirable monitoring practices, which constitutes an idealised and overly simplified perspective, and refers uncritically to existing methods and approaches, perpetuating the problems they bring and offering no guidance on their integrated use. Notwithstanding these limitations, ‘learning’ with and by stakeholders is an important principle in all three approaches and is expected to help identify actions that, in turn, are expected to be more effective for goal achievement. Such learning is assumed to require systematic seeking and sharing of information, hence the need for feedback loops for which monitoring is considered the prime vehicle. However, none of the discourses identifies how these feedback loops need to be constructed. Monitoring is expected to provide raw data and spaces for reflection to create insights. How learning should occur is articulated mainly in terms of intentions and principles, with practical references being made to existing logic models or hypothesis-testing approaches and to participatory methods. The discourses rely on an unclear mix of monitoring as a research process and monitoring of set objectives based on programme logic models. In Chapter 4, I discuss programme-logic based monitoring by drawing on several monitoring guidelines, representative of those widely used in the development sector. I identify 13 presuppositions that underpin the espoused theory of mainstream monitoring. These presuppositions relate to: the definitional boundaries of monitoring, how information is viewed, and how monitoring processes are perceived to be constructed and implemented.
A definitional boundary is created between ‘monitoring’ and ‘evaluation’, presumed to be a useful enough distinction to construct feedback mechanisms and information systems (Presupposition 1). A link is assumed to exist between monitoring and how it is to serve management (Presupposition 2). Strategic analysis and sense-making are presumed not to require explicit inclusion when developing a monitoring process (Presupposition 3). The second cluster of presuppositions relates to how information is viewed. Monitoring systems are designed to fill information needs, rather than to interpret information (Presupposition 4). Stakeholders are expected to be able to anticipate their information needs adequately, in terms of a comprehensive and fairly stable set of indicators, with related methods and processes, irrespective of the diversity of actors or issues at stake (Presupposition 5). Monitoring guidelines overwhelmingly ignore processes to analyse, reflect critically, interpret, and communicate information (Presupposition 6). Indicators are considered an appropriate form in which to express and convey a balanced picture of information that enables learning (Presuppositions 7 and 8). The third set of presuppositions relates to how monitoring processes are expected to be constructed and implemented; these are summarised as a series of standardised steps. Stakeholders are presumed to have sufficient time, expertise, clarity and willingness to follow the basic steps in sufficient detail for effective results (Presupposition 9). Mainstream monitoring presumes that the steps have a generic validity, irrespective of socio-cultural context (Presupposition 10). Power relations between those involved in monitoring are ignored other than, at most, to say they matter (Presupposition 11).
Mainstream monitoring presumes that people will know how to deal with and effectively use informal monitoring that occurs through daily interactions outside the prescribed formal processes and channels (Presupposition 12). Monitoring systems are not viewed as needing to learn from, or adapt to, the environment in which they are being implemented (Presupposition 13). Mainstream monitoring based on these presuppositions is expected to provide the feedback or information that is supposed to trigger learning in development initiatives. No distinction is made in terms of the validity of this model of monitoring for different types of development processes or for different types of organisational configurations. Empirical material is discussed in Chapters 4, 5 and 6: M&E efforts in 36 IFAD projects operating on the basis of mainstream monitoring, and a three-year action research project with a ‘messy partnership’ in Brazil based on participatory monitoring as a possible alternative. Evidence from the 36 IFAD projects indicates that the presuppositions on which mainstream monitoring is premised are problematic. Two types of difference can be found: between monitoring theory's assumptions about the operational context and the surrounding realities, and between monitoring theory and monitoring practice. Difficulties result from insufficient attention being given to the ‘fit’ of monitoring processes and their underlying logic with the operational contexts of IFAD projects. Furthermore, the linear cause-effect perspective and procedural focus on how to construct and implement monitoring does not recognise the reality of dynamic partnerships having to construct a shared understanding of the initial intentions of a development intervention. Finally, monitoring practice is not based on a clear understanding of what learning is, how it can be designed and how it occurs in relation to monitoring. The action research work in Brazil showed that participatory monitoring is not necessarily the answer.
Five important issues need to be addressed if more participatory forms of monitoring are to contribute to collective learning. First, learning must be seen to result both from the process of developing monitoring and from the data. Valuing both is important for ‘messy partnerships’, which must continually articulate, refine and (re)align understandings and priorities. Second, messy partnerships require finding an interpretation of ‘participation’ that fosters concerted action, yet respects the uniqueness of partners and their own cultures and rhythms of reflection. Third, dialogue between partners is critically important if data are to be useful. Therefore, participatory monitoring requires shifting from a view of monitoring as a data system to monitoring as a communication process. Fourth, approaching all monitoring through one type of data process (i.e. indicators stacked in an objective hierarchy) and a static image of partnership in concerted action does not fulfil the need for the diverse learning processes that occur in institutional transformation (e.g., technical innovation, dissemination, organisational change). Finally, setting up the participatory monitoring process proved more costly and less sustainable than initially expected. The dynamics within and between the partners, and the shift in strategic focus as understanding emerged (in part as a result of monitoring), mean that activities come and go, and so does the related monitoring. Participatory monitoring provides only some advantages, as it replicates, at least in part, several of the questionable presuppositions of mainstream monitoring. The empirical material brings me to suggest that programme-logic based monitoring – whether as mainstream or participatory practice – might benefit from insights drawn from other theoretical areas.
Chapter 7 offers a set of ideas drawn from two fields: one, cognitive studies, that has not yet influenced monitoring practice in the development sector, and another, organisational learning, that is slowly being ‘courted’ as potentially interesting. Monitoring constitutes a deliberate and collective attempt to guide our ‘knowing’ or ‘cognition’ by seeking and processing information. Organisational learning examines how a group of people communicate and deal with information that is vital for the survival of their organisation, and in so doing draws on cognitive science. Therefore, both fields have potential to help reconsider beliefs about monitoring. Drawing insights from the two fields together brings me to four ideas with thought-provoking potential for rethinking monitoring: (1) messy partnerships as collective cognitive agents; (2) distributed cognition; (3) sense-making; and (4) cognitive dissonance. The ideas can be summarised as follows. Messy partnerships must maintain coherence in their organisational and collective cognition, and correspondence with the external environment. Cognition in a messy partnership is distributed, which requires convergence in order to come to effective concerted action. Sense-making is critical for convergence, and different sense-making approaches are needed depending on the complexity of the circumstances and issues faced. Cognitive dissonance, or ‘surprise’, is an important indicator of where coherence or correspondence is awry. Monitoring systems could be more purposively designed by valuing cognitive dissonance as an important trigger for learning. Monitoring requires innovation if it is to fulfil its much-lauded potential to enable learning.
A shift is needed to see monitoring as: dialogical (not only a singular rationality), multi-ontological (not only assuming an ordered universe), distributed (not centralised), functioning through relationships and heuristics (not only through data and the hope of omniscience), essential for impact (not just a contractual obligation), sustaining collective cognition (not only the tracking of implementation), and seeking surprise (not only documenting the anticipated). Chapter 8 integrates the empirical and theoretical strands of the thesis by suggesting a set of eight design principles needed for collective learning in adaptive rural resource management. These design principles have been identified to offset the limitations found in the dominant paradigm of mainstream monitoring and in participatory monitoring; they are not a comprehensive set of design principles for learning-oriented monitoring. The first three principles relate to the purpose of monitoring, the next three to operational concerns, and the last two to sustaining monitoring practice.
1. Understand the nature of the institutional transformation being pursued as a social change process, in order to know the degree of complexity one is dealing with, and the extent to which information needs can be anticipated and learning functions will be significant.
2. Recognise the bearing of the nature of actors and partnerships on monitoring, by analysing the commitment of partners to concerted action, the governance structures and decision-making processes of each partner, the allocation of responsibilities in the partnership, the degree of overlap of information needs, the way in which information is shared, and monitoring capacities. The reality of ‘messy partnerships’ forces a questioning of the hierarchical, intra-organisational model that underpins mainstream monitoring.
3. Specify distinct monitoring processes in terms of learning purposes to enable a more precise definition of tasks, protocols, responsibilities, time frames, formality and degree of ‘collectiveness’. For institutional transformation on the basis of deliberate concerted action undertaken by a messy partnership, nine learning purposes are likely to be relevant (though not all necessarily simultaneously or equally prominently): financial accountability; operational improvement; strategic adjustment; contextual understanding; capacity strengthening; research; self-auditing; advocacy; and sensitisation.
4. Plan for sense-making as well as information. The sense-making process must be appropriate for the type of situation and issue being considered (i.e. multi-ontological). Seek to understand what is needed for critical reflection to be possible among and between the partners, how insights are best communicated, which capacities must be built to make this possible, and which additional communication processes are needed, and allocate resources to this end.
5. Balance formal protocols and informal processes, incorporating everyday interactions of sharing and debate into the monitoring system, and linking the informal sphere to formal processes and channels. Informal processes are not only crucial for ongoing sense-making but also a source of information sharing.
6. Value and seek diverse types of information, related specifically to the nature of development (principle 1) and the learning function (principle 3) that has to be met, and understand which processes exist and/or are needed to ensure that such information is shared, debated and informs decisions.
7. Ensure the institutionalisation of learning-oriented monitoring. Concerted efforts are needed to ensure that policies, practices, methodologies, responsibilities and incentives all help make monitoring as discussed in this thesis possible.
8. Approach monitoring as an evolving practice, allowing it to become a dynamic knowledge production process which, when subjected to regular critical review and adaptation, retains its relevance and usefulness.
These design principles must be translated into practice by the key actors in development if the future of monitoring is to be more useful. Development implementers, facilitators, funding agencies and academics have distinct roles to play in transforming the ‘DNA’ of monitoring. The issues discussed in this thesis have relevance far beyond the approaches and initiatives examined here. The notion of development-as-project is being replaced by the recognition that social injustices require institutional transformation. ‘Messy partnerships’ and other types of alliances are the new configurations through which institutional transformation increasingly must unfold. Monitoring, when conceived as a socially negotiated, evolving methodology for structuring information flows and knowledge production and use, offers an approach to help construct ‘pathways to sustainability’. However, mainstream beliefs and practices about how monitoring can create feedback need significant revision if monitoring is to harness its potential to deepen and sustain the learning that societies need to deal with ‘wicked problems’. This requires reassessing the epistemic and ontological perspectives and principles that underpin monitoring and determine its feasibility, relevance and, ultimately, usefulness.

    The myth of community? Implications for Civil Society Organizations and Democratic Governance

    No full text

    Gender and participation: bridging the gap

    No full text