A values-based approach to evaluating the role of tech in social change projects: starting with a broader canvas

Our field is growing up. All around us, colleagues and friends who, like us, specialize in using tech for social change are developing nuanced, practical and helpful guides for practitioners. These tools are a far cry from the simplistic checklists that the sector produced five years ago - they are wide-ranging, practical walkthroughs of the challenges that tech-enabled projects face. For example, check out the Tool Selection Assistant from the Engine Room - the latest of many excellent contributions they’ve made to our field - and the Data Starter Kit developed by the Cash Learning Partnership’s Electronic Cash Transfer Learning Action Network. What’s great about them is that they go beyond purely technical considerations to cover the enormous range of success factors that come together to make an impactful, sustainable tech-enabled project, from legal implications, to organizational information management processes, to fit with existing community habits and capacity. We’re proud to be collaborators on these tools, and we’ve taken a similarly open approach with one of our latest products - our Monitoring and Evaluation Framework.

About the SIMLab M&E Framework

The Framework was written for SIMLab staff, to help our team develop monitoring and evaluation (M&E) plans and evaluation criteria that would get at the breadth of considerations that we felt went into successful tech-enabled project design. Developed with Linda Raftree, it was previewed in this blog post way back in 2014. From the outset, we wanted to make it available in draft format to others working on similar challenges - so that they could help make it better, and so that they could pick up and use any and all of it that would be helpful to them. It’s currently housed in a Google doc linked from our website, which anyone can read and edit, and released under an open license. It includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample logframes and evaluator terms of reference.

What good looks like: criteria as value statements

One of our toughest challenges while writing the Framework was to recognize the breadth of factors that we see as contributing to success in a tech-enabled social change project, without accidentally writing a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators - the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and adapt the criteria for technology-enabled social change projects. You can read summaries of the criteria we've developed at the end of this post.

So what now?

Like all of our products, the Framework will be continually revised and updated as it's used and as we learn, and we hope that our community will support this too. It's not currently designed as a quick guide to evaluating tech-enabled projects, because it's intended as an end-to-end guide for our staff - so if you're interested in helping make a resource like that happen, get in touch. That said, we've already heard from colleagues in the governance and transparency space who don't currently have an internal M&E approach, and who are keen to experiment with the whole Framework. For others with established approaches, we hope that elements can be lifted and incorporated into relevant projects - Oxfam GB has committed to using the adapted OECD-DAC criteria in three project evaluations in 2016, for example. More broadly, we hope that the thinking we've done on why this matters can help inform better practice and scrutiny of tech-enabled projects throughout the project cycle.

One thing I would love to see in future is meta-evaluations of tech-enabled social change projects, comparing similar projects, channels, technologies, design approaches and regions - but this is impossible unless more organizations share their evaluations publicly, as we have. Remember, folks: sharing is caring! If you've shared yours, or have any other comments, do leave them in the comments section below or in the Framework document itself.

The adapted criteria

Read more about them in the Framework.

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design - particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored so that early snags and breakdowns were identified and fixed? Was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach possible (including both the tech itself, and what it takes to sustain and use it) in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
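The cost-comparison questions above can be made concrete with some arithmetic. Below is a minimal sketch of a total-cost-of-ownership comparison between an existing channel and a new one; all figures, channel names and cost categories are invented for illustration, not drawn from any real SIMLab project.

```python
# Hypothetical total cost of ownership (TCO) comparison for two
# communication channels over one project year. Every number here is
# illustrative only.

def total_cost(setup, per_message, messages, support_per_month, months):
    """One-off setup cost plus per-message usage and monthly support."""
    return setup + per_message * messages + support_per_month * months

# Existing paper-based process vs. a hypothetical new SMS channel
paper = total_cost(setup=0, per_message=0.50, messages=10_000,
                   support_per_month=200, months=12)
sms = total_cost(setup=1_500, per_message=0.05, messages=10_000,
                 support_per_month=300, months=12)

print(f"paper: ${paper:,.2f}  sms: ${sms:,.2f}")
print("cheaper option:", "sms" if sms < paper else "paper")
```

Even a rough model like this surfaces the trade-off the criterion asks about: a higher setup cost can still win once per-unit and support costs are counted over the project's lifetime.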

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP's complex emergencies criteria include 'coverage' as well as impact: 'the need to reach major population groups wherever they are.' They note: 'in determining why certain groups were covered or not, a central question is: "What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?"' This is very relevant for us. For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channel or tool(s) enable wider and/or higher quality of participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?
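One concrete responsible-data measure that the questions above point to is pseudonymizing identifiers before data leaves a platform. The sketch below shows a salted hash of a phone number; the salt value and function names are hypothetical, and a real project would need a full protocol (consent, retention, access control), not just this step.

```python
# Illustrative sketch: replacing a phone number with a stable,
# non-reversible token so records can be linked without exposing the
# raw identifier. The salt is a placeholder; in practice it must be
# kept secret and managed securely.

import hashlib

SALT = b"project-specific-secret"  # hypothetical value for illustration

def pseudonymize(phone_number: str) -> str:
    """Return a short, deterministic token derived from the identifier."""
    return hashlib.sha256(SALT + phone_number.encode()).hexdigest()[:12]

token = pseudonymize("+254700000001")
# The same input always yields the same token, so longitudinal analysis
# remains possible even though the phone number itself is never stored.
assert token == pseudonymize("+254700000001")
```

The design choice here is determinism: unlike random IDs, a keyed hash lets an evaluator count repeat users across datasets without ever handling the underlying personal data.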

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

DAC does not have a sixth criterion. However, we've riffed on ALNAP's additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market - local, national and international. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project complies with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? E.g., in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?
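The data-export question above is easy to demonstrate: if a platform can serialize its records to a standard format, another tool in the ecosystem can pick them up. Below is a minimal sketch using CSV; the record fields and function name are invented for illustration.

```python
# Illustrative sketch of data portability: serializing hypothetical
# platform records to CSV, a format most other tools can import.

import csv
import io

records = [
    {"id": "r1", "channel": "sms", "message": "water point repaired"},
    {"id": "r2", "channel": "voice", "message": "clinic stock low"},
]

def export_csv(rows):
    """Write records out as CSV with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "channel", "message"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(export_csv(records))
```

The point is not the format itself but the commitment: choosing open, documented formats (CSV, JSON, or a sector data standard) is what keeps a tool usable within a wider ecosystem after the project ends.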

*Thanks to Amy O'Donnell at Oxfam for this excellent observation. Basing our vision of what good looks like on values, rather than bald factual statements, allows the evaluating team to decide what's appropriate for their context, and the criteria to evolve as our understanding does.