M&E of ICTs4D: Project Level Learning in an Evolving Sector



ICT4D and M4D as fields are both exciting and clumsy, still in their infancy and most consistent in how quickly they are changing. The M&E Tech Deep Dive in New York in early October 2014 stirred up familiar conversations around monitoring and evaluation, as well as around ICT4D. Beyond the use of ICTs for M&E, finding common ground on the evaluation of ICTs was challenging. There is wide consensus that technology is not a silver bullet, and further, that it simply can’t be separated from the context in which it operates or is applied. How then, can we think about monitoring and evaluating the role of a particular ICT or digital platform in the context of a larger initiative?

The Deep Dive affirmed that one organization’s innovation is another organization’s absence of evidence. In other words, participants from different sectors and backgrounds had similar questions yet seemed to be speaking different languages. Two participants summed up the range of sentiments well. The first recounted that ‘off-the-shelf’ platforms had been mentioned several times throughout the conference, yet no one there could point to an experience of using one without needing extended assistance, additional tools, or resources diverted from other programs (usually in short supply). Another attendee relayed that in the early 1990s, companies and non-profits alike needed to strategize about how the Internet could be used in their daily work. In 2014, the Internet is inseparable from daily operations in many cities around the world: email, search engines, travel arrangements, daily news, and more. The first commenter conveyed that while ICTs (including mobile) are put forward to simplify and transform, they require a lot of work and can prove far more complex and costly (in both ethical and material terms) than their proposed benefits suggest. We need evaluation! The second commenter highlighted that the global shift toward digital is increasingly inevitable (in some form or another), and that investing significant organizational resources to prove that ICTs are useful may seem like a futile exercise (with possibly high costs) to affirm the obvious. Is evaluation really worthwhile?

The gap between the immediate utility and the longer-term impact, for better or worse, of ICTs[1] can be attributed to any number of factors, including politics or competing priorities, neither of which is unique to ICTs. It is clear, though, that this gap reflects another rift: between how the use of ICTs is conceived for specific projects and programs (often aligned with proposals and funding timelines) and the realities of a sector for which the infrastructure is still being laid, with many different forces and factors shaping accessibility, support, service areas, civil and political rights, and information regulation, among others.

Appropriate uses of technology are found somewhere between trial and error and a growing, albeit very incomplete, repository of experiences and evidence. There is a clear bias in the stories and experiences that are circulated (and thus those that get noticed and counted): toward organizations well resourced enough to synthesize and publish their findings, toward individuals who leverage social media, and toward those who attend global conferences. These groups’ stories and experiences often drive, and even become, the conversation. Those without the resources for visibility, though with important insights and perspectives on the same applications, are often left out of the conversation entirely.

There’s a lot we don’t know about ICT4D. There are many voices that remain peripheral, or altogether invisible, in the conversation. What we do know is that there are big differences in thinking about ICTs at the project level and thinking about them more widely as part of an evolving sector. We also know that all programs (whether in the public, multilateral or non-governmental sectors) involve power dynamics and decision-making challenges during organizational planning and program execution.

As we continue listening to, learning about, and engaging with various approaches to M&E of ICT for development, here’s a summary of what we discussed in one session at M&E Tech NYC.

Separate monitoring of ICT tools/platforms into three phases:

Applying a tool or platform in ‘development’ involves different phases and approaches. In one discussion at M&E Tech NYC, we distinguished between three phases of ICT application and evaluation, with the intention of better assessment and learning for continuous improvement. These three phases were:

  • Rollout. The rollout phase is the initial window when a tool or platform is first adopted or applied, whether that tool or platform is completely new, a new component of an existing system, or an effort to integrate two or more existing systems or platforms. It is a window when active observation and adjustment are prioritized. Rollout is contained, almost always messy, and things often go wrong. (In other words, there is no ‘formula’.) In technology, it is accepted practice to ‘fail fast’, a counterintuitive approach for development practitioners, who think about failure in terms of trade-offs whose cost is borne largely by the most vulnerable. In the rollout of new technologies, something breaking or dropping out mid-action is not only normal but expected. This recognition can alleviate some of the pressure on how evaluations count what is ‘good’ and what ‘needs to improve’. While it doesn’t minimize the importance of understanding the cost (and who bears it), evaluation of a rollout might focus more on organizational learning than on the longer-term ‘impact’ of activities. In every phase, risks to the people involved or affected should be calculated, minimized, and mitigated.

  • Implementation. Implementation takes place after rollout, when lessons have been learned and a tool or product has been adapted. Evaluating implementation normally focuses on how people, context, decisions, and outcomes interact with the technology. How to do this, unsurprisingly, varies and is often a source of great debate.

  • Long-term use, adoption, and sustainability. Sustainability of ICTs is similar to that of other programs or projects. Resources, training, buy-in, and legitimacy issues are some of many factors that affect the long-term use and sustainability of ICTs. System requirements, upgrades, and interoperability considerations add new sustainability questions to consider. (These technical considerations are another blog post, or series, entirely.) Regardless, evaluating sustainability and rollout without distinguishing between the two phases may compress different kinds of activities into an unhelpful average that undermines, rather than enhances, learning and shared benefit.

Evaluative questions

Along with the different phases mentioned above, it may be useful to answer three kinds of questions when evaluating the role of technology tools and platforms in development programming:

  • What is the role of technology in organizational processes, and how easily is it adopted by those using it? Here we can try to identify how and where technologies and platforms change (or are intended to change) organizational processes, as well as how receptive team members are to using them. If multiple technologies and platforms are used at different points in an organizational process (say, prioritizing service delivery according to need and geography), the goal may not be to isolate the impact of just one so much as to identify points for greater compatibility between and among the systems and tools used.

  • What is the role of tech in decision-making and program outcomes? Here we may want to ask how the data collected (whether through SMS, maps, sensors, etc.) change the way decisions are made, how they transform working or reporting relationships (whether due to shifts in cost, power dynamics, or anything else), and how the ICT’s role has shifted from rollout to implementation to long-term adoption.

  • What level and type of tech support is provided or needed? ICT4D and M4D ‘solutions’ are rarely simple solutions to problems. Most tools and platforms require ongoing support for the people using them, for data analysis, and for integrating a tool’s functions with other tools or systems being used for similar purposes. Keeping an eye on how much support and learning is needed (and supplied) as part of ICT evaluations can help assess how sustainable, user-friendly, and cost-effective a technology tool or platform is.


  1. There are many open questions about how to understand and separate the data integrity, security, surveillance, and consent considerations of any ICT application, program, or platform. Guides, indices, and regulatory approaches are emerging to help navigate these considerations, which were explored in greater detail during several other sessions at the NYC Deep Dive.