Is this really the first ever independent evaluation of a piece of open-source software? SIMLab begins the Sigmah platform evaluation.


Exciting times at SIMLab: we’re delighted to share that we’re working with Groupe URD, a humanitarian think tank focused on evaluation, innovation, training and strategy for the sector, on the first ever evaluation of their open-source project management platform, Sigmah.

Created in 2011, Sigmah is multilingual, open-source information management software for international aid projects. It helps users organize and share project information, like documents, timelines and indicators, which often gets stuck in documents and spreadsheets. At the end of a second large phase of funding, and before considering what’s next for the platform, the team behind Sigmah wanted to take stock of its performance, governance, impact, and sustainability, and chose to commission an independent, external evaluator to help them do so.


Evaluating the creation of a non-profit open-source platform is groundbreaking

As far as I know, this is the first time that the makers of a non-profit software platform have commissioned such an evaluation.* There are a number of evaluations focusing on the implementation of platforms (such as Ushahidi in Kenya and Haiti, or RapidSMS in Rwanda), but none that I’ve been able to find focuses on the effort to build the software in the first place. There is no evaluation of our work to develop and maintain FrontlineSMS, although we have published the evaluation of our DFID Credit project, which touches on the development of FrontlineSMS’ Payments platform. We started our work on monitoring and evaluation of technology in social change projects because we recognized that we weren’t able to gather the monitoring data we needed to understand the outputs the software was generating, let alone the outcomes, any idea of impact, or any insights that might help us do a better job of building the platform. This is bad news: in business, any experiment whose results you don’t thoroughly understand is just a risky innovation you’re not learning from.

The lessons learned are not only for the producers. It’s also important to understand the issues and pitfalls that these types of platforms have to overcome to be successful, and this evaluation will allow SIMLab to look in depth at how users, emergencies, donor funding, open source and business models intersected for Sigmah.

Testing out our tech-focused evaluation criteria

The evaluation is also the first time we’re using the criteria we adapted from the OECD-DAC Criteria for the Evaluation of Development Projects, which seek to tease out the contribution tech makes to the social good projects in which it is implemented. So far, all I’ve done is tailor the criteria to the questions thrown up by the project. Already, because this evaluation focuses on the development of a platform rather than its implementation, a number of things have needed to shift—but the criteria are still very relevant, and, taken together, seem likely to produce a rounded picture of the story of Sigmah. Below are some of the themes we’ll cover:

  • Does the way the software is coded and the choice of license make sense for the goals of the project and the sustainability of the project?
  • Is the software easy to use, and does it have the features its users need?
  • How did the Sigmah project perform against the intended outcomes and outputs?
  • How effective was the Sigmah team’s support to organizations adopting the software?
  • Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled?

Our next steps are interviews and an open-access survey with Sigmah users, the Steering Group and staff, and a workshop in Paris later this month. Groupe URD are committed to openness and transparency, so in addition to being supportive of blogging the outcome of the evaluation, they will be sharing the report online. We’ll share the link when we have it.

* If anyone knows different, let us know in the comments!