Monitoring and evaluation (M&E) are essential for accountability and learning from the UNDAF. They are the basis on which the UN system assesses and makes transparent its contribution to the achievement of national priorities and the SDGs. They help the United Nations ensure that it is delivering on the commitment to leave no one behind, and that its support is primarily reaching those who are most disadvantaged. Anticipated M&E activities during the UNDAF cycle are laid out in a costed M&E plan.

Monitoring takes place continuously to track progress towards anticipated results, and checks if the theory of change identified at the design stage is still valid or needs to be reviewed. Building on identified data needs and baselines established during the CCA, monitoring helps the UN system and partners to prioritize, learn, make course corrections and communicate these to stakeholders. It incorporates attention to programme and operational bottlenecks.[23]

The UNDAF should be regularly monitored against the programming principles and approaches in each stage of the programming cycle. As part of the annual review process, the One UN Country Results Report, based on existing evidence, demonstrates how the UNDAF:

  • Contributes to the implementation of the 2030 Agenda, the SDGs and recommendations by UN human rights mechanisms;
  • Reaches those left furthest behind first, and contributes to the reduction of inequalities and discrimination;
  • Is inclusive, participatory and transparent, and enables stakeholders to hold the UN system accountable for results;
  • Addresses risks and resilience;
  • Is based on a valid theory of change, whereby assumptions on how UN programmes affect development change are confirmed and revised in light of changes in the context;
  • Contributes to developing the capacity of duty-bearers to meet their obligations and rights-holders to claim their rights;
  • Enhances coherence between the development, humanitarian, human rights, peace and security, and environmental agendas;
  • Contributes to fostering new and effective partnerships between national stakeholders and international actors, including through South-South and triangular cooperation;
  • Promotes integrated and coherent policy support to partners;
  • Contributes to strengthening national capacities to collect and analyse data for policy-making and reporting.

UNDAF evaluations are external and a minimum requirement of a quality UNDAF process. They are conducted once in the UNDAF life cycle, with timing coordinated among UN entities so that organizational or programme evaluations can contribute to them. UNDAF evaluations assess whether planned UNDAF results were achieved; whether they made a worthwhile and durable contribution to national development processes and delivered on the commitment to leave no one behind; whether this was done in a cost-efficient manner; and whether results built coherently on the United Nations' collective comparative advantage (rather than that of individual agencies). UNDAF evaluations also assess the extent to which UN interventions contribute to the four UNDAF programming principles.

An UNDAF evaluation supports institutional learning on what works and does not work, where, when and why, and provides information that contributes more broadly to the evidence base for policy approaches backed by the UN system. It serves as the foundation for subsequent UNDAF planning processes. UNDAF evaluations and management responses issued by the UNCT are prepared in line with the UNEG Norms and Standards on Evaluation.

At the country level, an inter-agency M&E group supports the planning and coordination of joint monitoring and evaluation efforts, including the coordination of data collection, provision of coherent M&E advice, capacity strengthening, and sharing of monitoring and evaluation information. In doing so, it draws upon expertise from across the UN system, acknowledging that organization-specific monitoring and evaluation practices will complement the UNDAF monitoring and evaluation work. The M&E support group works closely with the Results Groups and in some cases is an integral part of them. In UN mission settings, M&E groups work with mission staff to ensure coherence. In humanitarian settings, the groups link as much as possible with humanitarian response monitoring frameworks and systems.

Monitoring and evaluation of the UNDAF contribute to strengthening national data collection systems, including by improving data quality, analysis and use with regard to monitoring progress on national SDG targets, and consistency with global SDG monitoring. Building on and strengthening existing national data and information systems helps ensure national ownership as well as sustainability.

Increasingly, the United Nations undertakes joint real-time monitoring activities to support data collection, gauge perceptions from national stakeholders on progress towards UNDAF outcomes, monitor risks and test the continued relevance of the theory of change. A monitoring platform such as DevInfo (UN Info) can support the transparency of data and provide information for reporting. The companion guide on monitoring and evaluation lays out the different steps in detail.


[23] Bottlenecks are blockages that may be related to supply or demand (e.g., knowledge of services, behavioural factors that influence people’s ability to access available services), the quality of services, or social values, legislative frameworks, finances or management influencing a sector or area. For more, see the UNDG Guidance on Frequent Monitoring for Equity.

Related Blogs and Country stories

Silo Fighters Blog

Innovation scaling: It’s not replication. It’s seeing in 3D

BY Gina Lucarelli | September 12, 2018

My brother is a mathematician, and on family vacations he talks about data in multiple dimensions. (Commence eyes glazing over.) But as the family genius, he's probably on to something. Lately, in my own world, where I try to scale innovation in the UN to advance sustainable development, I am also thinking in 3D, or, if properly caffeinated, multi-dimensionally. As new methods, instruments, actors, mutants and data start to transform how the UN advances sustainable development, the engaged manager asks: when and how will this scale? To scale, we need to know what we are aiming for. This blog explores the idea that innovation scaling is more about connecting experiments than the pursuit of homogeneous replications.

Moving on from industrial models of scaling innovation

In the social sector, the scaling question makes us nervous because the image of scaling is often a one-dimensional, industrial one: let's replicate the use of this technology, tool or method in a different place, and that means we've scaled. This gives us social development people pause, not only because we can't ever fully replicate anything across so many moving economic, social and cultural elements. Even if we could replicate, it would doom us to measuring scale by counting the repeated application of one innovation in many places. Thankfully, people like Gord Tulloch have given us a thoughtful scaling series that questions the idea that scaling social innovation is about replicating single big ideas many times over. [Hint: he says scaling innovation in the public sector is less about copy-pasting big ideas and more about legitimizing and cultivating many "small" solutions and focusing on transforming cultures.]
Apolitical's spotlight series on scaling social impact reaches a related, insightful conclusion: looking at Bangladesh's Graduation Approach as one of the few proven ways out of poverty, they suggest that while personalized solutions work best, they may be replicable yet too bespoke to scale. So if scaling ≠ only replication, how do we strategize for scale? I've got a proposal: what if we frame the innovation scaling question as more about going deep than broad? The scaling question becomes: how will we move from distinct prototypes managed by different teams at the frontier of our work to a coherent, connected use of emergent experiments in programme operations?

Scaling also means moving from fringe to core

Scaling innovation in a large organization like the UN has a glorious serendipity to it. Did you hear that we are looking into impact bonds in Armenia? What about the food security predictor in Indonesia? Nice collective intelligence approach in Lesotho. Blockchain is being used for cash transfers in Pakistan and Jordan. Check out the foresight work in Mauritius. UNICEF is using machine learning to track rights enshrined in constitutions. UNHCR is using it to predict migration in Somalia. UNDP is testing social impact bonds for road safety in Montenegro. These organic innovations are beautiful and varied and keep us learning, but we as a UN system are not yet scaling in 3D.

Towards a connected kind of 3D

These days, I've been talking to people (my brother's eyes glaze over at this point) about how to see the various innovation methods not as distinct categories of experiments, but rather as connected elements of an emergent way of doing development. Yes, innovation is more an evolving set of disruptions than a fixed taxonomy of new methods, but if we narrow our scope for a moment to the subset of innovations that have passed the proof-of-concept stage, can we start thinking seriously about how they connect?
[As an important side note, thinking in terms of taxonomies of innovations is not a panacea. Check out @gquagiotto's slides for a more thorough story on how classification is trouble for public sector innovation, because it means we limit our vision and don't see unexpected futures that are already among us.]

Projectizing innovation without keeping an eye on the links among the new stuff won't get us far, and might even be counter-productive. Instead, what would it be like if innovations were deployed in an integrated way? A bit like Armenia's SDG innovation lab, where behavioral insights, innovative finance, crowd-sourced solutions and predictive analytics [among others] are seen as a package deal. I am looking for collaborators to learn more about how all these methods and tools are related. Do they help or hinder each other? Are there lessons that can be learned from one area and applied to others? Should some new tech and methods not be combined with others?

9 elements of next practice in development work

A few of us UN experimenters came together in Beirut in July to pool what we know on this. We had a pretty awesome team of mentors and UN innovators from 22 countries. We framed our reflections around the nine elements of innovation that I see as approaching critical mass in the field. This is by no means exhaustive, but it's a start to moving these methods from fringe tests led by various teams to core, connected operations. Here are the "nine elements of next practice UN" we are working with:

  • Tapping into ethnography, citizen science and amped-up participation for collective intelligence to increase the accuracy, creativity, responsiveness and accountability of investments for sustainable development.
  • Using art, data, technology, science fiction and participatory foresight methods to overcome short-termism and make sustainable futures tangible.
  • Complementing household survey methods with real-time data and predictive analytics to see emerging risks and opportunities, and to design programmes and policies based on preparedness and prevention.
  • Moving from the "superman dashboards" built for decision makers to helping real people use their own data for empowerment, entrepreneurship and accountability.
  • Leveraging finance beyond ODA and public budgets by finding ways to attract private capital to sustainable development.
  • Evolving the way we do things, and even what services we offer, by managing operations through new technologies.
  • Applying psychology and neuroscience for behavioral insights to question assumptions, design better campaigns and programmes, and generate evidence of impact when it comes to people's behavior.
  • Carving out space for science and technology partnerships within the UN's sustainable development work.
  • Improving how we support our national partners in managing privacy and ethical risks.

Moving from "that's cool" to "aha, it's all connected"

We need to start thinking of these nine elements as connected. It might be that they reinforce each other, whereby focusing on data empowerment gives meaning, context and legitimacy to the use of big data to understand behaviors and online activity. Or that they undermine each other, in the way that citizen science can undermine innovative finance pay-outs, or behavioral insights help companies get around privacy regulations. Looking for the practical connections, here's what we've got so far: collective intelligence methods that listen to people organically can help determine whether your behavioral campaigns are resonating. Because people's intel is often more granular than statistics, it could also be used to test whether new forms of finance are making an impact on health, education and other development issues.
Small-scale and/or internal experiments in the UN to manage operations with new technology help us understand what the next generation of privacy and ethics risks will be. Experiments in gray zones can then inform future-oriented regulatory frameworks. Keeping a focus on helping people use data for empowerment is a good north star when using new data and predictive analytics, ensuring that cultivating real-time data sources isn't deepening the digital or data privacy divide. Using foresight methods or predictive analytics can point to signals of where to invest with innovative finance instruments. [Follow Ramya from IFRC innovations for more on this.] So some early connections are forming a budding conspiracy theory! If you are thinking multi-dimensionally too, or using a few of these methods and see where this line of thinking can be improved, help me draw more lines on the innovation conspiracy board! [Or tell me why this is the wrong tree to be barking towards... That's always helpful too.] We're working on a playbook to codify what we know so far in terms of principles and methods for each of these nine elements. Stay tuned for that, and please do get in touch to throw your own knowledge in!

Silo Fighters Blog

Promise to data: What the SDGs mean for persons with disability in China

BY Marielza Oliveira, Elin Bergman | August 29, 2018

China has strong and capable statistical systems, no surprises there. After all, China is known for its ambitious Five-Year Plans, which have shifted focus from economic growth to policy planning, environmental protection and social programmes for its population of 1.4 billion. What's different about its 13th Five-Year Plan is that it's very much aligned with the 2030 Agenda for Sustainable Development. Even so, China faces a daunting challenge in implementing the 2030 Agenda. For starters, it has official data for less than 30 percent of the Sustainable Development Goal (SDG) indicators, and much less when considering data that covers vulnerable groups, such as persons with disabilities. With more than 85 million persons with disabilities, China has the largest such population in the world. The good news is that China keeps a record of persons with disabilities, so the official data sources are up to date. To support the Chinese government's efforts to improve monitoring of the SDGs addressing persons with disabilities, we at UNFPA, UNESCO, UNRCO, UN Women and WHO came together to test innovative approaches to collecting focused and disaggregated data.

Starting in Qinghai

We selected Qinghai Province in Northwest China as the pilot location to test new ways of collecting data. In Qinghai, persons with disabilities are an estimated five percent of the total population, of whom about 70 percent live in rural areas. About 150,000 people are registered with the Qinghai Disabled Persons' Federation, the local chapter of the China Disabled Persons' Federation. It was therefore important for us to look at their administrative data, which are key for crosslinking data from various sectors, including public services data.
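The crosslinking this post describes, linking records from different sectors through a shared disability ID, can be sketched in a few lines. This is purely illustrative: the field names, IDs and the left-join-style merge below are my assumptions, not the actual platform's schema.

```python
# Purely illustrative sketch of crosslinking sector records through a
# shared disability ID. All field names and IDs below are invented.

# Registry of persons holding a disability ID (hypothetical)
disability_registry = {
    "D001": {"disability_type": "visual"},
    "D002": {"disability_type": "physical"},
    "D003": {"disability_type": "hearing"},
}

# Records from another sector, keyed by the same ID (hypothetical)
education_records = {
    "D001": {"enrolled_in_school": True},
    "D003": {"enrolled_in_school": True},
}

def crosslink(registry, sector_data):
    """Left-join sector records onto the registry by disability ID.

    Every registered person is kept; a person with no sector record
    simply lacks those fields, which itself flags a coverage gap.
    """
    linked = {}
    for pid, base in registry.items():
        merged = dict(base)
        merged.update(sector_data.get(pid, {}))
        linked[pid] = merged
    return linked

linked = crosslink(disability_registry, education_records)

# Share of registered persons with a matching education record
coverage = sum(
    "enrolled_in_school" in rec for rec in linked.values()
) / len(linked)
```

Linking further sectors (health, employment, social security) is repeated application of the same join, which is why a stable, shared ID is the key asset for this kind of disaggregated SDG monitoring.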
To demonstrate how data collection in underdeveloped regions can be operationalized in a smart way, we collected, analyzed and crosslinked all the administrative data of people holding a disability ID with the following big data sources:

  • Data from the national survey of basic services and needs for persons with disabilities, which is developed and updated by the China Disabled Persons' Federation, the National Bureau of Statistics and local Disabled Persons' Federations;
  • Data from public services and various sectors, including health, education, employment, social security, poverty alleviation and community services, gathered by crosslinking disability ID data with public services data;
  • Data from internet-based platforms.

It's possible to use big data to integrate and crosslink all data from the disability ID system, the administrative data of disability services from the China Disabled Persons' Federation and the administrative data of public services. By expanding the existing official data with information from other sources, China can not only monitor additional SDG indicators, but also compile disaggregated views of SDG progress to monitor specific groups and locations in need of support, while strengthening "real-time" monitoring and analytics. During this process, we engaged the vulnerable groups in the analysis and interpretation of data. For us, knowing what people living with disabilities think and need is key. We carefully examined their views to highlight the SDG indicators that could directly benefit their well-being.

The hindrances of data collection

We experienced a few setbacks throughout the process, but we adopted coping mechanisms to address them. Quality control of data: the disability data available from different sectors use very different standards and follow different collection approaches.
Moving forward, we propose to check and purify the data using standard disability datasets and a data crosslink approach. We also optimized the timeliness of the data and the mechanisms to update it. Sharing data among sectors: the disability ID served as the key identifier for persons with disabilities, and data across sectors were crosslinked using the disability ID and other key identifiers.

What we discovered

The administrative data platform for persons with disabilities was recently updated with the results from the nationwide annual survey of unmet needs and services for persons with disabilities. This platform provides timely data for monitoring the SDGs that address persons with disabilities. Other sectors have developed big data platforms using citizens' IDs. To continue enhancing the administrative data records, it's important to collaborate with other stakeholders, such as health care and educational departments, to extend the existing data sources. Household surveys can also be used to fill the gaps in official disability statistics. We shared our discoveries with an expert panel, which included representatives from the Chinese government, the National Bureau of Statistics, the China Disabled Persons' Federation and its Qinghai branch, the Qinghai Department of Commerce, the Institute of Rehabilitation Information/WHO Family International Classifications Collaborating Center China, the China Disability Data Research Institute, Soochow University, Nanjing Special Education Teachers College, UN agencies, as well as Chinese IT giants.

What's next

The methodology implemented in Qinghai Province can easily be extended to other vulnerable groups, since they face similar challenges. Stakeholders can also adopt similar tactics to develop specific SDG indicators, data collection and analysis to evaluate their progress. As for next steps, the UN country team will continue to research protocols and methods to monitor disability-inclusive SDGs.
We will also develop a knowledge platform in Chinese to promote capacity building for the implementation of the 2030 Agenda, and conduct an international comparative study of technical approaches to data collection and analysis. Data and internet-based surveys will also be developed to learn more about the needs of persons with disabilities and improve services for them, while at the same time using those statistics to make sure that we leave no one behind. What methods are you using to disaggregate the SDGs and ensure data for action with people living with disabilities? If you have some tips, do tell!

Photography: Jonathan Kos-Read. Licensed under Creative Commons.