Reflections on whether and how our partnership is working in Bangladesh

Effective partnership working is crucial to producing quality results, especially when working on complex problems such as the worst forms of child labour (WFCL). This is even more true when the mechanism for achieving results is the participatory generation of innovations. The consortium partnership in CLARISSA is a key aspect of our process innovation. Understanding whether and how the partnership is working is therefore central to our learning-focused monitoring, evaluation and learning (MEL) system.

Our evaluative rubric

The rubric is a qualitative self-assessment tool to monitor the functioning of the partnership. It was developed with all consortium partners during the co-generation phase and contains evaluative descriptions of what performance or quality looks like at three levels (well-functioning, emerging and needs help). Based on the agreed principles and values that underpin the partnership, the partners agreed on seven aspects of performance (or elements) and evaluative descriptors at each of the three levels.

We apply the rubric during six-monthly After Action Reviews (AARs). The CLARISSA consortium partners in Bangladesh have now applied the rubric twice and have further contextualised and refined it to improve its relevance and applicability.

Building co-ownership and shared understanding

CLARISSA is committed to regularly generating, sharing, identifying, documenting and nurturing learning related to the partnership, and to using that learning to fuel adaptive management. For this reason, proper understanding and appropriate use of the rubric become crucial. By ‘proper’ and ‘appropriate’ we mean to emphasise not only understanding each element of the rubric but also following the same methodology when applying it.

Self-assessment using the rubric was completed in the first AAR, held in Dhaka in February 2020, by participants grouped by their organisation (Terre des hommes Foundation (Tdh), Grambangla Unnayan Committee (GUC) and BRAC Institute of Governance and Development (BIGD)). At that time the tool was new for most of the staff, and a sense emerged that not everyone fully understood the meaning of the elements of the rubric, nor what to do as part of the self-assessment exercise. We learned that the usefulness of the rubric depends on how critically partners apply their understanding of it. Participation in the assessment exercise is intended to generate dialogue across diverse experiences of how the partnership is working.

At the same time, people may have been less willing to mention negative points about their organisation because the exercise was done in groups of people working in different positions within the organisation. Not all group processes are equally safe for everyone sitting around the table. Power dynamics can create blockages to open dialogue.

We concluded that there was scope to contextualise the rubric, to build co-ownership and make it more meaningful. In response, we contextualised it through a participatory process with colleagues from Tdh and Grambangla – the two main consortium partners in Bangladesh. Through this process people were able to get to grips with the nitty-gritty of the elements and could explain and reflect on the rubric afresh. The team discussed the definitions of each element in Bangla, their native language, which made the discussion more effective. The process generated an increased sense of ownership of the contextualised partnership rubric among the Bangladesh team.

Adapted use of the rubric to enhance rigour

For the second AAR, in August 2020, we adapted our process to produce more diverse and in-depth evidence of our shared partnership performance. We added individual assessments to be completed prior to the facilitated group session. The facilitation team asked each partner to have their team members complete the rubric exercise individually, and for the key contact to then collate the answers for their organisation. In practice, the instructions were interpreted differently.

In one partner organisation, team members individually assessed the partnership and the key contact then compiled everyone’s work into one document. Another partner did it as a group exercise, much like in the first AAR. In the third partner organisation, staff worked together in a shared online document so that all individuals could provide their views, and then discussed and agreed on the assessment before submission. This partner also understood that they were required to provide evidence for each of the levels of performance, rather than to rate the level at which the partnership was performing.

As a result, diverse types of evidence, and discrepancies in the volume of information provided by partners, were brought to the facilitated group session. Further, because the second AAR was held virtually due to the Covid-19 pandemic, the quality of the dialogue that was possible, and consequently the quality of the collective assessment, was also affected. We know that some staff find it easier to contribute their views through a virtual platform, whereas others may have found this harder due to weaker internet connections or simply feeling less confident using technology.

Nonetheless, the exercise did generate evidence of positive progress in building the partnership, as well as surfacing some challenges, particularly a need for increased communication and coordination within the team as it approaches field-level implementation. Compared to the first application, the adapted process generated critical, actionable learning about the partnership, which enabled a focus on a few elements of the rubric, such as how to improve communication and integration.

Shifting mindsets as we apply the evaluative rubric

We identified further learning to increase the usefulness of the rubric and to ensure we do not fall into common traps with participatory methods, such as failing to navigate power and not enabling all voices to be heard. Central to overcoming these tensions is the mindset with which we engage.

The evaluative process can build this mindset, yet it might also require an enabling environment to be cultivated more broadly. There is a danger that we create a sense of competition between partners and reinforce a dichotomous view of big versus small and, by association, strong versus weak. We know that every partner is unique in its scope, capacity and nature of work, and this is what makes the partnership function. The rubric will be most effective when we nurture a culture of learning and mutual understanding. Honesty, transparency, commitment and mutual respect are all essential, as is learning to accept criticism.

The path to greater consistency

If we are to maximise learning and use the learning and evidence generated for adaptive management, it is important to build greater consistency in the use of the rubric across partners. Building on the co-ownership generated, supporting greater rigour and consistency will allow: (i) diversity in reflection, which will unearth and help us understand the multidimensional aspects of how the partnership is working; (ii) greater scope to compare self-assessment findings between partners; and (iii) a check on individual-level understanding of the elements and how they are explained.

Our commitment now is to use the positive evidence and learning generated through the assessment of the partnership and to take collective action to strengthen it. Critical to this is the framing, and associated mindset, that a meaningful and effective partnership is one where everyone can contribute from their own space to bring change for children working in the worst forms of child labour, which is the ultimate and shared goal of all CLARISSA partners.

December 14, 2020
Authors:
Sukanta Paul, Mieke Snijder & Marina Apgar
Country:
Bangladesh