By Tom Dichter
We live in a meta age. We talk about meta humor (jokes about jokes), meta cognition (knowing about knowing), and meta data. In the capacity development field we appear to have arrived at the meta-framework stage – there are so many frameworks for organizational capacity, organizational capacity assessment, and capacity development indicators that we are beginning to produce frameworks that frame the frameworks.[1]
There are at present at least fifteen capacity assessment frameworks (e.g., MSI’s ISF, MSH/USAID’s OCA, the World Bank’s CDRF, McKinsey & Co.’s OCAT [2]) as well as a number of past and ongoing attempts to create frameworks for the monitoring and evaluation of capacity development (CD) interventions.[3]
If we in the development industry were better at reflection (not to mention meta reflection) we might wonder whether we have gone too far – beyond usefulness into a zone where the framework has become an end in itself; a zone where we have the false comfort of believing we have corralled many unruly elements into a neatly configured set that we can control.
For the gap between what we have been learning over the years about capacity and capacity development and how we do our work in this arena remains as wide as ever. Indeed our own imperatives as donors or practitioners of capacity development require us to willfully ignore the implications of what we increasingly know to be the case – capacity, and in turn capacity development, is (just as Development itself is) an unruly process, and some of its most important aspects – those that may count the most for organizational effectiveness – may not even be knowable.
The literature on organizational capacity has been converging for some time now on a view that amounts to a modified black box. In this view much of organizational capacity is about soft factors, somewhat hidden, hard to grasp; a view in which CD is a non-linear process, inherently unpredictable, hard to assess; a view that CD is so intertwined with context and the enabling environment in which organizations exist that sorting out attribution in CD evaluation is not possible. And to the extent that the literature tries to see a bit into the black box, it moves well beyond the “1.0” level of organizational capacity, with its idealized view of what an organization “should” be like, where tools and technically rational systems are the core of the work (administrative control systems, governance structures, human resource policies, strategic planning, board composition), and converges around what we might call Capacity “2.0” – the capacity to adapt, to question and reflect, to change by trial and error – and on to Capacity “3.0,” the even softer level we call organizational “culture.”[4] And all of this is added on to what CD thinkers have been saying for years – capacity development takes time, and moreover may not require others to do it – many organizations can and do develop their own capacities.[5]
Because of the growing implicit acceptance of this modified black box, those who address the practical question of how we as practitioners can be of help recommend muddling through – quite the opposite of basing our actions on templates or frameworks:
“This often implies that incremental “muddling through” is the best alternative; testing, trying and adapting approaches along the road, and accepting that the risk of failure is high….That implies sometimes doing less, sometimes doing more for CD. First of all, it demands a more managerial, strategic and dynamic look at CD and change, requiring that country and development partners change the mental mode in which they traditionally dialogue about and deal with capacity issues as if it was mainly a technical issue.”[6]
This way of thinking about CD is reflected in works like Merilee Grindle’s “Analytics of Next Steps,”[7] Andrews, Pritchett and Woolcock’s “Problem Driven Iterative Adaptation,”[8] Owen Barder’s call for wholesale experimentation,[9] Donald Schon’s work on reflective practice, Chris Argyris’s discussion of single- and double-loop learning, David Ellerman’s work on horizontal learning, and so on. And of course if we go back a century to John Dewey, we are reminded that not much is new, for in his work we see the notion of learning by doing, of the oscillation between adaptation, reflection, action, failure, and trying again.
Hopefully these convergences mean that the real world — complex, messy and unpredictable — is making a comeback. Indeed, it is largely because we are so uncomfortable with (and more important, organizationally unprepared for) messiness that we keep hoping for templates and frameworks to ease our unhappy sense of disorder. But that temptation ought to be avoided. And to be sure that we do not get too hopeful about the prospects of being rescued by frameworks and templates, there is the growing literature on complexity to remind us to live with the mess, indeed to embrace it. Here, for example, are Kurtz and Snowden, who question three assumptions that seem to underlie our old habits in capacity development:
“The assumption of order: that there are underlying relationships between cause and effect in human interactions and markets, which are capable of discovery and empirical verification. In consequence, it is possible to produce prescriptive and predictive models and design interventions that allow us to achieve goals. This implies “best practice”…. It also implies that there must be a right or ideal way of doing things.
“The assumption of rational choice: that faced with a choice between one or more alternatives, human actors will make a “rational” decision based only on minimizing pain or maximizing pleasure.
“The assumption of intentional capability: that the acquisition of capability indicates an intention to use that capability, and that actions from competitors, populations, nation states, communities, or whatever collective identity is under consideration are the result of intentional behavior. In effect, we assume that every “blink” we see is a “wink,” and act accordingly. We accept that we do things by accident, but assume that others do things deliberately.”
They conclude that
“…in decision-making at both policy-making and operational levels, we are increasingly coming to deal with situations where these assumptions are not true, but the tools and techniques which are commonly available assume that they are.”[10]
Therein lies the gap we need to mind. Frameworks and templates are ill-suited to realms such as capacity development where assumptions like the three above (and probably others) are simply not true.
What to do? Less may be more.
The good news is that there may be less for us to do (as practitioners of CD, promoters of CD, donors of CD), and thus less for us to worry about, and less money that we need to spend. The bad news is that this might mean we don’t need everyone who now works in this field to continue to do so. But then again the other good news is that we really now can begin working ourselves out of a job. [That is what we set out to do 50 or more years ago, isn’t it?]
If we can reflect on why we feel the need for frameworks, and acknowledge the roots of our discomfort and the ways in which our own organizational imperatives drive the need for frameworks, we may become freer to recognize that we have been tearing our hair out unnecessarily; not so much that there has been ‘much ado about nothing,’ but that there has been more ‘ado’ than is warranted.
If we stand back from the task of defining, codifying, and “delivering” capacity development to others, we can begin to see ways to support CD that acknowledge the messy real world and indeed embrace it. Hence new roles and a new script, perhaps as follows:
- “We don’t need to tell you what capacities count for you, you tell us.
- “We don’t need to tell you how to acquire capacities you feel you need, you tell us.
- “We don’t need to devise indicators for your improvements in capacity, you devise them and then tell us.
- “And if you are not ready to do all this, fine, then come back when you are and we’ll begin talking about what we can do together.”
For example, we (as donors) can:
- introduce you to others who seem to face the same dilemmas;
- fund knowledge exchanges such as study tours, cross visits, and organizational “twinning;”
- fund research on enabling-environment matters you say are critical to your evolution;
- rent you space (or give you space) that you and two or three others could share to exchange problems and solutions;
- play the role of curmudgeon, disturber, or sounding board, stimulating you to meet your own objectives according to your own timeframe;
- support your experiments in x or y;
- or play “tough love” with you and offer you some money for your endowment, but only if you raise as much or more on your own first.
[1] Jerry VanSant, “Frameworks for Assessing the Institutional Capacity of NGOs,” Duke Center for International Development, Duke University, revised December 2008.
[2] See for example Maria Carrasco, “Monitoring and Evaluation of Organizational Capacity Building Interventions for Civil Society Organizations,” Draft – Limited Internal Distribution, USAID, July 2012.
[3] See Wendy Stickel, “Part III: Suggested Guidelines for Evaluating Capacity Development under IPR; M&E Approaches to Capacity Development,” in Working Paper on …, August 6, 2012. See also Charles Lusthaus, Marie-Helene Adrien, Gary Anderson, and Fred Carden, “Enhancing Organizational Performance: A Toolbox for Self-Assessment,” International Development Research Centre, Ottawa, 1999. Or UNDP, “General Guidelines for Capacity Assessment and Development in a Systems and Strategic Management Context,” January 1998.
[4] Merilee S. Grindle, “Divergent Cultures? When Public Organizations Perform Well in Developing Countries,” World Development, Vol. 25, No. 4, 1997. This study focused on 29 organizations in 6 countries and determined that “organizational culture” was the key variable in accounting for good performance.
[5] “Echoes from the Field, Proven Capacity-Building Principles for Nonprofits,” A collaboration between The Environmental Support Center and Innovation Network, Inc. funded by the David and Lucile Packard Foundation, no date.
[6] LenCD Perspectives Note prepared in 2011 for the Busan High-level Forum. See also ECDPM, 2006, and Baser & Morgan, 2008.
[7] See Merilee Grindle, “Governance Reform: The New Analytics of Next Steps,” Governance: An International Journal of Policy, Administration, and Institutions, Vol. 24, No. 3, July 2011, pp. 415-418.
[8] Matt Andrews, Lant Pritchett, Michael Woolcock “Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA),” CID Working Paper No. 239, June 2012.
[9] Owen Barder, “Complexity, Adaptation and Results,” blog post, Global Development: Views from the Center, September 7, 2012.
[10] C. F. Kurtz and D. J. Snowden, “The new dynamics of strategy: Sense-making in a complex and complicated world,” IBM Systems Journal, Vol. 42, No. 3, 2003.