How Can New Technologies Innovate and Demonstrate Evidence of Impact?
Evaluating the use of ICTs can hardly be reduced to a checklist. These tools support projects as varied as legal services, coordinating responses to infectious diseases, media reporting in repressive environments, transferring money among the unbanked, and voting. At SIMLab, our past nine years with FrontlineSMS have taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove unsustainable.
For these and many other reasons, it's critical that we know which tools do and don't work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which to consign to history. Even for widely used platforms, adoption doesn't automatically constitute evidence of impact. Increasingly, donors and funding programs like USAID's Development Innovation Ventures require evidence of impact as a prerequisite for larger funding.
FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of how the platform has affected the projects or organizations it was implemented in. Evaluations tend to rely on anecdotal data, or focus on the impact of the intervention as a whole without isolating how the technology contributed to it. Many do not consider whether the rollout of the software was well designed, the training effectively delivered, or the project sustainably planned.
As part of our DFID-supported SIMLab:Credit project in Kenya, we're developing our own monitoring and evaluation framework, which will help us understand the role and impact of ICTs, including the Frontline platforms, in different organizations and on different kinds of programs. We'll be at the M&E Deep Dive workshops in New York and Washington, DC, working through some of these issues with the help of our friends and colleagues.
The framework will be licensed under Creative Commons and published in draft form on our website in a few months, for others to use freely and give feedback on. We hope to identify additional research questions and next steps that will help us understand the impact of technology on social change work.
Some questions we’ve been asked, and have asked ourselves, include:
- Why is robust evidence of the impact of these platforms in such short supply? Is this about conflicting incentives and a funding model that is broken beyond repair, or is it simply that we aren't thinking specifically enough about the technology?
- How should you evaluate the impact of technology on your program when it is a means to an end and only a small part of a larger project, or when you simply don't have the budget or time to tease out the impact of one element of your work with something like a control group?
- What are clever ways to layer effective evaluation of technology as a separate thread into an existing or broader monitoring and evaluation effort?
- What are the biggest differences in how large multilateral organizations and small, shoestring-budget NGOs should consider using and evaluating ICTs in their programs?
- When is technology simply disguising other, more challenging, non-technological issues?
Let us know if you know the answers. And of course, keep an eye on the hashtag today and tomorrow: #mandetech.
Update: Sean McDonald, CEO of FrontlineSMS, has written in to highlight the importance of skilled tool-users. He's given us permission to reproduce his comments below:
Just wanted to contribute a quick thought about the recent M&E of ICT blog post: there's an enormous difference between whether a tool works and whether the way a person uses a tool is good/smart.
For example, FrontlineSMS may work, but if someone never publicizes the phone number, it won’t matter at all.
I know this is stuff that you have one eye on, but the phrasing makes it seem like we need a better approach to knowing what tools “do” and “don’t” work. You wouldn’t use a crane to eat soup, nor would you use a spoon to construct a building. People having a sense of fitness for purpose and appropriate use isn’t something that can be overlooked or assumed, and it’s not the spoon’s fault that someone doesn’t take the time to understand what it’s for, or even hurts themselves trying to construct a building with it.
Obviously, this could be a more complicated and/or nuanced discussion, but I think it’s important in discussions about M&E of ICT to make the distinction between the tool and the effective use of the tool.