The Evidence Agenda: appealing for rationality in tech for social change
We don’t have a culture of evidence-informed action in tech for social change. Instead of being seduced by the lure of the new, we have to start building a body of evidence; professionalizing our action; and focusing on incremental improvement in practice over mindless innovation. Our CEO Laura Walker McDonald charts a way forward.
Image: a Winter White Russian Dwarf Hamster (Phodopus sungorus), about six months old, in a hamster wheel. Photo by Doenertier82 at the German-language Wikipedia [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/)], via Wikimedia Commons.
As a field, technology for social change is at least a decade old. We can and should be working to professionalize: setting standards, improving practice, and developing a body of evidence on which we can all draw.
But get any group of practitioners together and similar stories will emerge:
- Challenges in moving from pilots to effectively ‘scaling’ interventions
- Duplicated effort, with multiple platforms and tools funded and built to perform the same functions
- Technology interventions that are disconnected from the context, real needs and capacities in the target environment, the implementing organization’s existing infrastructure and systems, and the wider field’s existing efforts
- Conflating the availability of a consumer technology in a market (access), or household-level ownership of a technology (ownership), with meaningful use of that technology (use), leading to the exclusion of burdened groups such as women, people with disabilities, and older people from tech-enabled services and accountability mechanisms
These problems persist because technology for social change has a weak culture of evidence and accountability:
- No requirement or culture of demonstrating appropriateness, or of conducting empirical research, before technology is implemented in a given project
- Few, if any, evaluations of technology projects, and very few published or shared
- A culture of reporting by blog post (encouraged in part by frequent pilot, innovation, and core funding, which lacks the rigor attached to programmatic funds)
- A technical field that non-technologist colleagues, including donors, find relatively intimidating, which blunts critique of our proposals and reporting compared to more mainstream areas of work
- No industry-standard criteria for monitoring and evaluation (M&E) of technology projects
- Scarce funding for M&E of aid, and even less for tech-specific inquiry
- Lower rates of follow-on or repeat funding for technology work to the same organizations or for the same project, in contrast to the strong relationships donors build with humanitarian agencies or around particular development issues in particular places
There is evidence that this is starting to change. The Principles for Digital Development, for example, propose best practice that could be used as a standard at the project level - although at present, implementers are encouraged only to make a corporate commitment to the Principles by adopting them. The growing M&E technology field meets several times a year at MERLTech conferences - although the focus there is usually on using technology for M&E, not on how to tease out the contribution technology makes to social change.
Requiring evidence-based working in technology-enabled projects would mean overcoming cultural, resource, and technical constraints, some of which are summarized in SIMLab’s Monitoring and Evaluation Framework. But sustained investment in changing the way we work could allow:
- Improved project-market fit, with appropriate technology used more of the time
- Better value for money, as tech-enabled social change projects become less risky propositions
- Published results that let future projects iterate on past learning, contributing to a global understanding of what works
- Systematic sharing of learning, allowing knowledge to be collated across thematic, geographic, temporal, or other dimensions through meta-evaluations and similar exercises
- Increased ownership by target populations through feedback and participatory governance measures
- Improved impact.
It is possible to build a culture of evidence in a field not previously known for one. The development of the accountability agenda in humanitarian aid after the catastrophic response to the 1994 Rwandan genocide shows how a culture of self-reflection, improvement, and holding each other to account - however imperfectly realized - can grow from a sector’s recognition of its own failure. Now accountability is a given: a requirement on all aid projects, built into most organizations’ daily work. Every three years, ALNAP produces the State of the Humanitarian System report, based in part on shared evaluations. The Sphere Project and the Core Humanitarian Standard set clear benchmarks for quality. Although there is always more to do, the humanitarian sector’s progress on evidence and accountability can be a model for us to follow.
For now, though, ICT4D professionals can still operate more on conviction than evidence. Ultimately, the populations targeted by our interventions may pay the price, in wasted time or resources or, worse, in actual harm caused by inappropriate and exclusionary practice.
In the months remaining to us, SIMLab will try to make meaningful progress toward a better future with the resources we have. We’re working with DIAL to finalize openly licensed Framework approaches to Monitoring and Evaluation and to Context Analysis. Both are open for consultation RIGHT NOW on our website - head over to our resources page to find links to the Frameworks in Google Docs, ask questions, suggest improvements, and get right in there and edit.
But there’s more we’d love to see done. Both Frameworks could be improved well beyond what we’re able to do now, with more design work, improved sample tools and resources, and translations. I’d like to see better partnerships between academic researchers and practitioners in the ICT for Development field. Most importantly, I’d like to see a repository for practitioner evidence (such as evaluations) of tech for social change projects: hosted by an impartial body or jointly by a network of organizations, donor-funded and supported to solicit and manage contributions, and resourced to conduct analysis on the evidence that arrives so that it can become open knowledge. ALNAP, the humanitarian body I mentioned earlier, already does this; there is a precedent. Can we challenge ourselves to be as generous with our learning as humanitarian agencies working in some of the toughest contexts in the world?
Without this kind of investment in improving our practice, we’re just experimenting without learning anything - and that isn’t ethical when the projects we work on affect people’s lives. We are bound by our ethical codes - be they the Digital Principles, human rights, or humanitarian principles - to do better.
__
SIMLab is closing in early 2018. We’re behind on staff salaries and some bills. If you liked what we did, or ever used our resources, please donate to help us close as gracefully as possible. Hire our team! And keep working with us until we close - we’re still consulting! Get in touch and find out how we can help make technology part of what you do.