Data collection nightmare: administrative costs



[Image: stacks of administrative records with no digital counterpart]

SIMLab, in a former life, developed software tools to help organizations do what they already do, but better. We have long pointed to increased efficiency and savings as strong value propositions of low-end digital capacity-building - for example, using text messages. We tend to think that using SMS as a communication channel (appropriately and smartly) can save organizations time and money compared with other forms of communication, such as phone calls, informational pamphlets, on-the-ground mobilizers, or tools with high barriers to entry, such as apps. There’s a similar efficiency argument for digitizing financial processes; for example, moving from cash payments to mobile money. This potential has prompted donors to fund hundreds of grants since 2007 to get organizations to drop cash and adopt mobile money or other digital payments. In 2014, USAID issued an executive bulletin requiring e-payments as the default payment mechanism for implementing partners.

In markets with a well-developed mobile money ecosystem, our experience has shown that time and cost savings are achievable by even the smallest of organizations, and all the more so by larger ones. Huge time and cost savings, especially for beneficiaries or end-users, are reported anecdotally, and satisfaction surveys and feedback show similar gains for administrators, project coordinators, and accountants. The often un(der)stated hope in digital transitions is that the time and cost savings realized through more efficient operations in finance, outreach, coordination, and management will be cycled back into the organization and put to use in other ways - including increasing social impact.

So, where’s the data? How much money does mobile money save an organization? How much time does it save, and what is that saved time worth?

This blog post tries to do two things: first, make the case for more investment in a body of evidence around the repeated claim that digital payments save organizations time and money. Second (and probably step one in building that body of evidence), we offer some reflections on why the quantitative evidence of operational savings is so hard to gather.

Yes, but where’s the data?

Time and cost savings make sense - organizations, and I’d venture to say people, everywhere strive for them. But if you don’t know how much you’re spending now, how will you know how much you’re saving later - and if you’re already sure you’ll save, why would you even begin to track the changes?

As implementers working with partner organizations to change operational processes and evaluate the impacts of those changes, we have access to very little operational data. We had this problem as a tool provider, too. When we asked partners to share what they knew, we found that they had no data to give us. Staff felt that they were saving time, but hadn’t tracked it quantitatively - and anecdotes, past a point, don’t help you to fundraise.

There are multiple reasons for this.

We don’t track that

At the inception of our Last Mile Mobile Money project, funded by the UK Department for International Development (DFID), we conducted baseline assessments with each of the organizations we worked with, in order to better track the impacts of our intervention over time. We realized almost immediately that getting baseline data was going to be difficult.

“How much do you spend monthly on making payments to beneficiaries?”

Of course organizations track how much is being sent to beneficiaries, but how does it get there? What tasks are repeated each time payments are made, how often do they happen, and how much do they cost? Are those costs fixed, or do they vary with the volume of payments? Fees for bank transfers, trips to the bank, the cost of delivering cash, staff hired to make deliveries - all of these should be included in calculating the cost of making payments, but none of the 40 organizations we worked with were prepared to give us these figures. It was up to us, as outsiders, to define what the costs associated with making payments included. However, that was only the beginning.
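To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical categories and figures) of the kind of breakdown we had in mind - the amount disbursed is the easy number to report; the cost of disbursing it is a mix of fixed and per-payment items that rarely gets totalled up:

```python
# A sketch of a monthly payment-cost breakdown. Every category and
# figure here is hypothetical; the point is that the visible transfer
# amount is only one line among several fixed and per-payment costs.

MONTHLY_PAYMENTS = 120      # number of beneficiary payments per month
PAYMENT_AMOUNT = 50.00      # average payment, in local currency

fixed_costs = {
    "bank_trips": 4 * 15.00,      # four trips to the bank at ~15 each
    "delivery_staff": 200.00,     # staff hired to deliver cash
}

per_payment_costs = {
    "transfer_fee": 1.20,         # bank or mobile money fee per payment
    "recording_time": 0.50,       # estimated staff cost to record each payment
}

fixed_total = sum(fixed_costs.values())
variable_total = MONTHLY_PAYMENTS * sum(per_payment_costs.values())
disbursed = MONTHLY_PAYMENTS * PAYMENT_AMOUNT

print(f"Disbursed to beneficiaries: {disbursed:,.2f}")
print(f"Cost of making those payments: {fixed_total + variable_total:,.2f}")
print(f"Overhead per payment: {(fixed_total + variable_total) / MONTHLY_PAYMENTS:.2f}")
```

Nothing about this arithmetic is hard; the problem is that the inputs were never recorded in the first place.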

Every organization has these costs, but for overburdened and understaffed organizations, tracking operational indicators in addition to programmatic ones is not a priority. Making payments to beneficiaries is not optional - it’s part of the program - so why would it be critically tracked and documented? It’s seen as ‘either we do it or we don’t,’ not placed on a spectrum from expensive to inexpensive. This may be a feature of non-profit organizations, as even large international NGOs don’t subject their internal processes to the same monitoring they employ around programmatic outputs. Financial process costs for NGOs receiving funding are often written off as a bulk “administrative” cost, pushed into budgets without any real indication of how the money was spent. This practice puts them - and us, as project implementers (or tool providers) - at a standstill, unable to claim any tangible time or cost savings as a result of a transition to new, more efficient processes.

In our project, ongoing monitoring of how costs and time spent changed with the introduction of mobile money was next to impossible. We would visit a partner six months or so after they had begun using FrontlineSMS’s mobile payments software, be welcomed into their office, and hear that there had been huge savings. We couldn’t wait to document the changes, so we’d ask the basic questions from the baseline right away.

“How many hours are spent monthly recording payments? How much does it cost to make payments monthly?”

And more often than I’d like to admit, we got the same number we’d been given six months earlier. And we’d say: OK, so you’re telling us you’re saving money, yet the numbers you’ve given us show no change. The response would be that the figures were estimates, “but I know we’re saving money.”

It’s important, we should track it

Throughout the two-year grant, we tried to encourage organizations to monitor the costs and time associated with payment and communications tasks. We came up with lists of all the parts of transactions and administrative costs they should track on a monthly basis. Yet this was difficult to do as outsiders, and time-consuming for an overburdened organization, so success came only in isolated and simplified cases. For more on this, read our independent evaluation, in which the evaluator expands on the data collection difficulties we encountered throughout the 24 months and offers his own reflections on the difficulties he faced when attempting to reconcile our reporting.
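For illustration, here is a sketch of the sort of monthly log we were asking for - the column names and figures are hypothetical, but the structure (one row per recurring task, with its count, cash cost, and staff time) reflects the lists we kept drawing up:

```python
import csv

# A hypothetical monthly tracking log: one row per recurring
# payment-related task, with its count, cash cost, and staff hours.
FIELDS = ["month", "task", "count", "cost", "staff_hours"]

rows = [
    {"month": "2015-03", "task": "bank transfer fees", "count": 120, "cost": 144.00, "staff_hours": 0},
    {"month": "2015-03", "task": "trips to the bank",  "count": 4,   "cost": 60.00,  "staff_hours": 8},
    {"month": "2015-03", "task": "recording payments", "count": 120, "cost": 0.00,   "staff_hours": 10},
]

# Write the log to a CSV file that can be appended to each month.
with open("payment_costs.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Even a log this simple, kept every month, would have allowed a like-for-like comparison of figures before and after the switch to mobile money.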

So, as implementers, we were left struggling to collect indicators for our logframe. Certain organizations did at times offer evidence beyond our logframe indicators, but a documented success in one case is hardly something to do more than smile about so long as others aren’t reporting similar ‘wins.’ Luckily, we were able to get strong data from a few partners and from beneficiaries, who are better suited to track their own cost and time savings on a per-transaction basis than an organization is to track all of its beneficiaries’. With organizations, though, it’s a lot harder. And at the end of the day, if operational cost savings don’t translate into an easily transferable benefit (for example, donors matching cost savings with additional investment), there’s very little incentive to document the savings and no possibility of passing them on, except indirectly when budget remains at the end of the year. And then those funds will probably be recycled back into the general budget, and yet another year or program cycle will persist with estimates in the administrative line.

What about us?

The truth is, even we don’t do a good job of tracking this type of data when it comes to internal processes. As a small organization, costs - and, sometimes even more so, opportunity costs - matter. One recent change we made was moving most internal conversations from Gmail and Skype to Slack, and task management to Asana. We know this has cut down on our email load; it’s obvious to each of us… anecdotally. But by how much? We don’t routinely track our time, so we don’t know - but we feel good about it. Our team has reported feeling more connected to teammates, better aware of projects outside their own, and generally more part of a team, which is important for a team of 7 operating on 3 different continents. I wonder if I’ll be contacted by someone at Slack asking for our organization’s time and cost savings since the transition. I sure hope not, because I’d have some pretty anecdotal tales and very little data.

Can anecdotes be powerful evidence of change?

At the end of the day, it’s not all about numbers - it almost never is. The Most Significant Change technique is a participatory approach to collecting stories of the significant changes that have occurred throughout a project. It can be a laborious method, best done through interviews and focus group discussions, but it can be quite useful for understanding a project’s impacts when quantifiable data is inaccessible or inadequate. Paired with a quantitative approach, the two methodologies together can better represent the story.

What needs to change?

Not everything is quantifiable, and it shouldn’t need to be. Throughout our DFID project, data was often hard to come by, and we often resorted to estimates, which we had to use to update the logframe reporting required by our donor. Once we let up our focus on achieving logframe targets and concentrated more on learning beyond the numbers, collecting new data became much easier and we gained critical insight into last-mile organizations’ needs. We were not always able to collect the indicators we had hoped for, but by understanding more intimately the changes occurring at the organizational level, we were able to start demonstrating the project’s impact and explaining what had happened through narrative reports. In retrospect, we should have written a logframe that allowed for more qualitative impact assessment methodologies, but at the time we had no idea what a nightmare it would be to collect administrative data.

We also need to improve the incentives inherent in our grants and reporting for better data monitoring. Perhaps non-profits need to be reminded to see themselves as businesses, with bottom lines that can be powerfully impacted by efficiency savings - leaving more funds for the organization’s mission and for investment in people and facilities.

We’d love to hear reflections from other implementers and tool providers - weigh in in the comments below.