Mobile Money in the last mile: the independent evaluation
Seán Ó Siochrú (Nexus Research, Evaluator, left) and Christopher O. Juma (Mombasa SACCO, Manager, right)
In December, SIMLab underwent an independent project evaluation of our two-year Mobile Money in the Last Mile Project in Kenya. In this post we share how the evaluation went and some key takeaways, along with, as always, a link to the full evaluation.
Our organizational principles (more on these soon) ask us to model both openness and a commitment to learning from our work, so we’re sharing this evaluation under an open license and encouraging your comments! We hope that others will be able to learn, as we have, from this challenging but very rewarding project.
The Project was funded by a grant from the UK Department for International Development (DFID)’s Global Poverty Action Fund (GPAF) Innovation stream. The grant allowed us to work with FrontlineSMS to build a mobile money management tool and support a transition from cash to mobile money for small and medium-sized enterprises (SMEs), microfinance organizations and savings groups, and NGOs serving vulnerable communities throughout Kenya. For more on the project you can read the case study we launched last year, which we’ll be updating soon with the learning from the evaluation.
We were eager for the evaluation to see how this project, grounded in SIMLab’s approaches and principles, had performed against its objectives. We were also curious to learn how our partners, who we had worked very closely with over the last two years, would evaluate the overall project, our performance and the quality of our trainings and support. And we wanted to see how our newly-developed internal monitoring and evaluation framework stood up (the Framework will also be out for public review and discussion next month, and launched at a webinar in April at the OpenGov Hub in Washington, DC).
We’re glad to say, the evaluation went well and was a very rewarding experience. You can read the full report here. What follows is a short summary of some of the key learnings.
How it all worked
We developed the Terms of Reference (ToR) for the evaluation from a template provided by DFID, to extend the enquiry a little. The ToR stated:
‘[f]or SIMLab, the evaluation should provide an independent statement on the quality and outcomes of the project and help to identify and codify learning from the project and lessons that can be taken forward in the future.’
Two dozen or so evaluators responded to the ToR, which we posted on our website and advertised in technical communities of practice and through partners. We interviewed a very strong shortlist to test their approach to the task and their ability to grasp both the technical and the human challenges the project had posed. While all the candidates were excellent, we selected Seán Ó Siochrú for his long-standing interest in the use of mobile technologies for development, his strong understanding of how introducing technology interacts with human complexities, and because his location in Ireland meant that our budget could go further and support a longer engagement in Kenya.
So how did the project go?
We were pleased with the outcome of the evaluation, though not because the project achieved its targets; in fact, it failed on most of the indicators in our logframe. Rather, the evaluation shows that we adapted nimbly to challenges in the project, and worked with DFID and our Grant Managers at Triple Line to proactively reorient our approach and focus as new needs emerged. When our partners turned out to have differing needs, we tailored our training and advice to suit them, rather than promoting just the product we built through the project (FrontlineSMS’ Payments module). We focused on our partners and their needs, and not on the technology we were piloting (although this was also successfully developed, delivered and deployed by a number of our partners).
It also became clear that a major contribution of the project has been learning. The evaluation notes:
“The learning aspect, much of it achieved through overcoming obstacles encountered, is among the most important of the outcomes. The Project was working at the leading edge of mobile money management, and lessons emerging concerning how rural and marginalized institutions can benefit from it, the obstacles involved and the solutions found are invaluable and deserve to be fully explored and disseminated.”
As this is central to SIMLab’s model and ethos, we’re glad to have a lot of material to work with— and you’ll hear a lot more from us as we work through the criteria, diagnostic tools, training approaches and other frameworks we developed while working on this project.
Key Takeaways
Some of the successes and challenges are highlighted below. The full list can be found in the evaluation report.
What helped the project
- FrontlineSMS with Payments was released on time and with the right functionality according to the project plan (although in hindsight it could have been helpful to have released earlier in the project)
- The agile, user-centered project management approach we followed to determine upcoming needs and activities worked well for our partners
- Some highly-enthusiastic partners were willing to dedicate significant time to working with SIMLab to make major institutional changes
- Thorough documentation and dissemination of the project’s challenges and decisions
- Critical and intimate understanding of the context and culture of Kenya
- On-demand support for partners’ technical, communications and operational needs, available via WhatsApp, SMS, email, phone and in-person visits, although this was relatively high-touch and makes the approach harder to scale or replicate (see the case study for more on this)
What hurt the project
- Partners overall had much lower organizational and technical capacity than predicted, and this significantly impacted implementation
- The technical solutions on the market were complicated, and partners using different combinations of technical applications and mobile network operator products decreased SIMLab’s ability to remotely troubleshoot effectively
- The implicit theory of change, and the immediate and long-term objectives, were not spelled out entirely clearly in the initial Proposal and subsequent alterations— this requirement is now a feature of our project management methodology
- When choosing partners to work with, recruitment was not based on the total number of beneficiaries (end-users) that could benefit from mobile money. Some of our partners are community based organizations (CBOs) and have only a small number of beneficiaries, while others only have the need to send/receive payments from a small proportion of their total beneficiaries. Had we aimed to maximize the number of beneficiaries we would have excluded some of the CBOs.
- Some of the hardware components necessary for the system to work are difficult to source, resulting in delays, and in a few cases they did not function as expected.
Changes as a result of the evaluation
- All SIMLab projects will draft inception reports which reconfirm project deliverables, theory of change and high-level project plan, including monitoring and evaluation plans, an explanation of how the project fits into SIMLab principles and a risk identification and mitigation exercise. Regular Steering Group meetings (quarterly or monthly during high-intensity periods for the project) will review progress against plans, real-time monitoring data and financial information, as well as the risk register. The rest of the SIMLab team and other key stakeholders will be consulted for feedback.
- SIMLab needs to do more to address key areas of importance to donors and other stakeholders that do not map neatly onto projects (for example, for DFID both Value for Money and gender were significant cross-cutting areas of interest across their portfolio, but neither resonated strongly with this project, and for this reason both were perhaps under-considered by our team).
- SIMLab has added a requirement for some form of context analysis to be carried out at the beginning of all projects. Our internal Framework for this is forthcoming and will be shared, as ever, under an open license. Context assessment should be reviewed and updated throughout the project lifecycle in order to account for market changes in products and services, access and affordability of tools etc.
- If technology plays a significant role in a project, there will need to be an assessment of the sustainability of the technology, to ensure that proposed products are readily available and accessible beyond the period of the project.
- Relevance will be better monitored in future projects by setting cost-benefit and cost-effectiveness thresholds that participation is expected to meet. In the end it became clear that some of our partners, though fully using the systems, were not saving significant time or costs: their need fell below a useful threshold, so they had little to gain. In future programs we will set these thresholds in advance, to ensure that only those who stand to benefit participate.
- An even stronger commitment to disseminating learning. The evaluation has reaffirmed that a great deal of what we learned will be valuable to donors and other practitioners, so we will include the final six months of learning in the Last Mile case study, along with upcoming blog posts and an output sharing the organizational capacity criteria we developed as part of the project.
Conclusion
The learning from this Project covers how small and under-resourced organizations operate and make change, their capacity for adopting technology and absorbing change overall, and what influences success. There’s more to examine here: how software projects evolve over time in a changing marketplace; how technology projects in the last mile can offer capacity-building but still be scalable and replicable (and not cost-prohibitive); and lessons relating to adaptive programming and how to do it thoroughly and well. Stay tuned for more writing coming out of the Mobile Money in the Last Mile Project in Kenya, and especially our updated case study.
In the meantime, we hope you’ll take some time to read through the evaluation report and share your thoughts—let us know what you’d like to know more about!