Elise Shea and Meg Sattler
While we are proud of the progress made[1] in pushing for perceptions to be a recognised metric for humanitarian performance, we are constantly refining our methods and questioning whether we are doing things ‘right.’ We are working to make sure crisis-affected people’s views are heard, but we are not always sure that our research is based on their priorities. We also worry that our work itself, easy for response leaders to list as an ‘accountability mechanism’, risks being used as a check-box for accountability, whether or not concrete change is happening as a result. Is our work perpetuating the very behaviour we seek to change?
The worrying does not stop there. When it comes to decolonising aid, what are the responsibilities of an independent accountability organisation like ours, working at multiple levels for global systemic change? Headquartered in Austria, we are aware that we could perpetuate perceptions of ‘helicopter research’[2]: studies in which researchers fly in, collect data, fly out, analyse the data elsewhere, and then publish results with little local involvement. What is the ideal structure, approach, and mix of methods to help us influence change in response management and at the highest levels of humanitarian policy? For an aid-adjacent organisation, the questions surrounding systematically shifting power are different from those facing implementing organisations, relating to how we design research, collect and analyse data, disseminate results, and with whom we advocate for humanitarian reform.
This type of self-reflection has always been, and will continue to be, the impetus for innovation at the margins of our projects. We invite you to join us in taking a critical look at our work to find new possibilities.
Pushing accountability? Yes. Shifting power? Maybe not.
GTS was founded to fill a glaring gap in the way responses were monitored. That gap was people’s views. We felt it would be useful to know how people were experiencing a response – to what extent they felt it was effective, participatory, inclusive, and well-managed. We knew that if this was somehow quantifiable, it could feed into the language of humanitarian monitoring: numbers. The methodology drew on customer satisfaction research, and research themes were developed around a mix of country-specific humanitarian objectives and accepted normative frameworks like the Core Humanitarian Standard (CHS). These frameworks drew on substantial consultations, such as the much-referenced Time to Listen report[3].
But that is not to say the research themes are always of utmost importance to crisis-affected people. We are quick to point out when humanitarians fail to consult affected people about aid programming before its implementation, but we rarely meaningfully include affected people in our initial research design processes. This matters because we can miss things that might unearth important information. Take Haiti, where, on a hunch, we included an extra question theme focused on transparency. It turned out this was not only more important to those we spoke to than any other theme, but it also led directly to relevant action points from Haiti’s civil protection and coordination leaders.
In Burkina Faso, to collect children’s perceptions about humanitarian assistance, we began with an ‘exploratory phase,’ holding focus group discussions with children across the response to understand what they find important in their daily lives and to explore their thoughts about humanitarian aid. Because the discussion guides are broad, children’s initial reactions steer the conversation, and their priorities inform the design of a later phase of the project with child leaders. Our Ukraine project consulted people about their aid priorities to ensure that the quantitative and qualitative research was based on what people found most important. Further, our user journey research in the Central African Republic[4], Iraq[5], Lebanon[6] and elsewhere consists of a series of qualitative interviews to understand people’s perspectives and experiences. Inspired by human-centred design, these users’ experiences guide the research, rather than predetermined research objectives. Ideally, these processes ensure that our studies push forward people’s priorities, so that the actions taken as a result are as relevant to improving people’s experiences as possible.
We have learned over time that consulting humanitarians during the inception phase is undoubtedly important to establish relationships and ensure buy-in (especially because our research rarely shines a flattering light on a response). At times, we have almost let that drive us too much and have nearly fallen into the trap of doing commissioned research for specific clusters that strays from our mission. This is tricky for us, because our organisation places primary importance on crisis-affected people, not organisations. Our research has shown that no matter how close to a community aid actors may be, they rarely represent its views. Many of the humanitarians we speak to are from affected communities and can share valuable insight, but we cannot assume they represent the views of crisis-affected communities. Local humanitarians will speak from their own positionality, which is likely a position of power in comparison to affected communities. When designing our questionnaire in one country, consultants resisted the plan to test it with affected communities because they thought the population was not smart enough to understand it, emphasising their own ‘local knowledge’ and expertise in the sector as the priority.
Closing the loop can be tokenistic, too
Our analysis and dialogue processes strive to be cyclical: we share preliminary analysis, gathering and integrating feedback in the hope that the final conclusions are a nuanced and accurate reflection of what affected people think while also addressing the constraints humanitarians face. We discuss our data with affected communities to make sense of it and gather recommendations. For example, in early 2022, we partnered with Fama Films[7] in Burkina Faso to facilitate a community gathering and have an open dialogue about whether the results of an earlier quantitative study accurately represented people’s thoughts. Participants were quick to tell us when they thought our data was outdated or wrong, or when it resonated, which helped us to correct or clarify our analysis.
In the past, we simply shared data back with communities, assuming that was ‘good practice’ and knowing that most researchers did not do it. But we realised that unless we had a clear purpose (like equipping local actors with data they could use), or could report the concrete changes made based on people’s feedback, simply telling them what they had told us in the first place was redundant.
Even a more involved dialogue process can be extractive. Although such processes gather rich qualitative input to bolster our analysis with the aim of making the data more actionable (and thus increasing the chances that people’s views will be listened to and aid will improve), communities may gain relatively little from their participation in these sessions. This fact hit hard in a recent meeting with Fama Films, where they noted that community leaders wanted to know what became of their engagement. “Nothing changed,” said the community leaders. People have long told us that they do not need us to “share back what we learned.” They know what they said. They want to know what was concretely done based on the feedback they gave.
After conducting quantitative and qualitative studies in Haiti, our team discussed the findings with humanitarians, and participants developed recommendations for how to act on the data. Local consultants then held community dialogue sessions with diverse community representatives to share the quantitative and qualitative data, as well as the recommendations from humanitarians. Participants in these dialogue sessions said that they not only felt respected but also emphasised how they would use the data and recommendations in their own community work. “The very fact of sharing this information for us means a lot because at least we see that there are some organisations that respect people. Coming to us is a sign of respect!” said a leader of a motorbike taxi association. A representative of the Haitian Red Cross added, “I will use these recommendations to get closer to the community. When we have to carry out activities, we will be more attentive to the comments of the community.” Returning to people with more than just data was critical: it showed them how their previous engagement had been discussed and how it had contributed to the advocacy process.
Equal and empowering partnerships
We are proud of the fact that we do not have a growth model, nor do we have one method that we roll out everywhere. Rather than set up shop in all the contexts where we work, spending precious time and resources registering and establishing costly sub-branches, we typically contract local research organisations or local data collection companies to support our sampling design and collect data. Project teams develop strong relationships with local research teams, looking to them to help contextualise research tools and methodologies. Data collection partners are, of course, paid according to their determined rates, and their company name is acknowledged in methodology descriptions, but their support is typically noted as the ‘data collection service provider’ or ‘enumerator team.’ We have not systematically acknowledged (or in fact always utilised) their knowledge contributions[8] to design and implementation. Further, we have not often co-authored reports with research teams. Report writing is our forte, but involving local partners in the drafting process would ensure that ‘knowledge production’ is shared, not attributed solely to GTS staff.
We broke this cycle in Bangladesh where, following Covid-19 research co-led by the Bangladesh Red Cross Society, we have partnered with the International Centre for Climate Change and Development[9], a Bangladesh-based research institute, on a climate adaptation project. Similarly, our newest project in Afghanistan was co-designed and is co-led by Salma Consulting, a local research agency. And we are in the process of applying for a new project in Nigeria with local research partners as co-leads.
While pursuing our goal to elevate affected people’s perceptions in decision-making, we must not forget their own agency to advocate for themselves. Assuming the role of ‘advocate’ without sharing the data with community groups misses a step. We have a responsibility to ensure that our data can be used by community members, not just discussed with them. For our Haiti project, we were enthused to hear that, by translating our detailed report into Creole and building relationships with civil society, we had given local actors a useful tool to advocate and take action.
All countries have their own accountability ecosystems, involving a range of systems, people and institutions: academia, local media, civil society organisations, activists, think tanks, and more. A hyper-focus on ‘humanitarians’ has often seen us miss opportunities to partner with those most likely to elevate community views or hold the humanitarian system to account. This has been a strategic priority for a while, but the rising pressure to feed the humanitarian beast and ensure enough buy-in from response leaders that they commit to listening to communities at all has left our project teams with little time. We know that this is something we can do better.
Cooperating or co-opted?
After years of resistance to taking affected people’s perceptions seriously as an indicator of the effectiveness of a response, 2018 saw the system turn a corner. In Chad, the Office for the Coordination of Humanitarian Affairs (OCHA) ensured that the perception data we collected was integrated into the 2019 Humanitarian Response Plan (HRP) and tied to the strategic objectives for the response. Embedding perception data in the planning document for a humanitarian response was a big step for transparency, lauded as a massive step forward in ensuring affected people’s views drive the response. Affected people’s perceptions were, finally, on the map. This uptake was revolutionary. We thought the needle might move. This accomplishment prompted us to advocate for perception data to be integrated in all HRP documents, to see all responses living up to their Humanitarian Programme Cycle (HPC) commitments. Slowly but surely, we have checked this box in almost all the contexts where we work. In many responses, coordination teams ask us to implement perception surveys year after year, so that the data can feed into annual HRP documents.
But consistently negative data indicated a dismal reality: nothing was really changing. Coordination teams and humanitarians could ask for our data, invite us to present at meetings, and plop percentages into glossy planning documents, but no one was ever held accountable for acting on the data. Worse still, our data became expected and risked being co-opted. People started to expect our data to appear among many other data sets feeding the HRP, which diminished its shock value. It was about ticking a box, not highlighting community views. Meanwhile, integrating perception data in HRPs – even if the responses were damning, and even if nothing was done to improve them – enabled coordination teams to create the illusion that they were listening to communities. Suddenly, we realised that we might be facilitating a façade. Rather than reforming the system, what if we were enabling it to stay the same? Our perception surveys, alongside accountability to affected people (AAP) working groups and activities, served to create a mask for country responses to wear year after year, pretending to be accountable.
Nauseated by how our research enables apathy, we started frantically searching for reasons why no one was acting on perception data. Like many in the sector who were scratching their heads, wondering why nothing was improving, we concluded that incentives were a large part of the problem. Humanitarians at global, national, and organisational levels are not incentivised to act on people’s feedback. We find that humanitarian country teams (HCTs) are rarely motivated to coordinate clusters or organisations to act on the perceptions published. Convening the Inter-Cluster Coordination Group (ICCG) is about as good as it gets. And forget about any follow-up to that meeting. HCTs sometimes even throw their hands up, claiming that they have no authority to hold operating organisations to account once our data is on the table. Meanwhile, organisations like to point fingers at their donors, claiming that short-term and/or inflexible funding prevents them from adapting to affected people’s preferences and inhibits accountability.
To counter the risks of co-opted data and pushback, we have increasingly bolstered our country and global advocacy to ensure our reports do not just pile up on ReliefWeb. Long gone are the days when publishing reports and sharing them with response teams was the extent of our advocacy. Better advocacy is not rocket science. We find that closed-door conversations allow us to hear the challenges clusters, agencies, and organisations face in being accountable, and the support they need. This approach allows us to be privy to all sides of the argument: where organisations point fingers at donors, there are normally three fingers pointing back at them. Armed with data on everyone else’s excuses for not being accountable, we use these secluded, ‘safer’ spaces – where more people are actually listening – to push hard for uptake of results.
We are also dipping our toes into more public advocacy. We do this with caution, because our behind-closed-doors advocacy remains our most successful, and we need a trusting relationship with decision-makers for public advocacy to work; we need to shock, not alienate. But we cannot help but notice that for Protection from Sexual Exploitation and Abuse (PSEA), another humanitarian priority, it was the media that propelled the theme’s objectives forward. We wonder if the same could be true for accountability and effectiveness. A few of our efforts have shown promising results. A collaboration with The New Humanitarian[10] helped us ‘go public’ with Haiti data, raising awareness at a national and international level in a way we have struggled to do elsewhere. Similarly, advising Mark Lowcock on his final statements[11] as the departing Emergency Relief Coordinator helped to reinvigorate a global conversation about real accountability at the highest level.
Disrupting means working within
At times, we might feel limited by the system we are trying to change. Relying largely on project funding from humanitarian allocations can limit our ability to plan long-term research and advocacy, just as it can limit humanitarians from moving beyond life-saving aid to durable solutions. Yet our funding partners are key strategic allies, enabling affected people’s voices to be heard across the system and to influence policy decisions. To disrupt the system, we need to keep working within it while remaining independent enough to provide an ‘audit’ function. This certainly keeps us on our toes.
While we do have reasons to question whether we are doing things ‘right,’ knowing that humanitarian action can increasingly appear accountable while lacking incentives for real change is powerful motivation to keep pushing at all levels for community voices to be heard. We refuse to see our work perpetuate the façade of accountability and hope that by countering stagnant reform with rigorous, multifaceted advocacy, we can influence real, incremental change, until humanitarian action is determined by crisis-affected people’s agency, preferences, and priorities.
Elise Shea is Policy Coordinator and Meg Sattler is CEO of Ground Truth Solutions.
1. https://groundtruthsolutions.org/2022/09/28/a-decade-in-the-trenches-of-accountability-and-so-much-still-to-accomplish/
2. https://theconversation.com/helicopter-research-who-benefits-from-international-studies-in-indonesia-102165
3. https://www.cdacollaborative.org/publication/time-to-listen-hearing-people-on-the-receiving-end-of-international-aid/
4. https://groundtruthsolutions.org/wp-content/uploads/2021/10/CAR-GTS-CASH-report-ENG-1.pdf
5. https://groundtruthsolutions.org/wp-content/uploads/2021/10/Falling-through-the-cracks-_-GTS-_-CCI-2021.pdf
6. https://groundtruthsolutions.org/wp-content/uploads/2021/10/GTS_CAMEALEON_user_journeys_report_052021.pdf
7. https://www.facebook.com/famafilms226/
8. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009277
9. https://www.icccad.net/
10. https://www.thenewhumanitarian.org/analysis/2022/04/04/haiti-wide-gap-between-aid-promise-and-reality
11. https://www.theguardian.com/global-development/2021/apr/21/humanitarian-failing-crisis-un-aid-relief