Organizing Dialogue, Experience and Knowledge for Complex Problem-Solving

The Complexity of Planning and Design in Social Innovation

January 7th, 2012

The Architecture of Complex Plans

Planning works well for linear systems, but it often runs into difficulty when we encounter complexity. How do we make use of plans without putting too much faith in their anticipated outcomes, while still designing for change? And can developmental design and developmental evaluation be a solution?

It’s that time of year when most people are starting to feel the first pushback to their New Year’s resolutions. That strict budget, the workout plan, the make-time-for-old-friends commitments are most likely encountering their first test. Part of the reason is that most of us plan for linear activities, yet in reality most of these activities are complex and non-linear.

A couple of quotes speak to planning for complex environments:

No battle plan survives contact with the enemy – Colin Powell

In preparing for battle I have always found that plans are useless, but planning is indispensable – Dwight D. Eisenhower

Combat might be the quintessential complex system, and both Generals Powell and Eisenhower knew how to plan for it and what limits planning had, yet that didn’t dissuade them from planning, acting and reacting. In war, the end result is what matters, not whether the plan for battle went as outlined (although the costs and actions taken are not without scrutiny or concern). In human services, there is a disproportionate amount of concern about ‘getting it right’ and holding ourselves to account for how we got to our destination, relative to what happens at the destination itself.

Planning presents myriad challenges for those dealing with complex environments. Most of us, when we plan, expect things to go according to what we’ve set out. We develop programs to fit the plan, set up evaluation models to assess its impact, and envisage entire strategies to support its delivery and full realization. For those working in social innovation, what is actually realized often falls short of what was outlined, which inevitably causes problems with funders and sponsors who expect a certain outcome.

Part of the problem is the mindset that shapes the planning process in the first place. Planning is designed largely around the cognitive rational approach to decision making, which is based on reductionist science and philosophy. A plan is often treated as a blueprint laying out how a program or service is to unfold over time. Such a model of outlining a strategy is quite suitable for building a physical structure like an office, where everything from the materials to the machines used to put them together can be counted, measured and bounded. It is much less relevant for services that involve interactions between autonomous agents whose actions influence the outcome of that service, and whose results may vary from context to context as a consequence.

For evaluators, this is problematic because it reduces the control (and increases the variance and ‘noise’) in models that are designed to reveal specific outcomes using particular tools. For program implementers, it is troublesome because rigid planning can drive activities away from where people are and force them into actions that are no longer contextually appropriate because something in the system has changed.

For this reason, the twin concepts of developmental evaluation and developmental design deserve some attention. Developmental evaluation is a complexity-oriented approach to feedback generation and strategic learning, intended for programs with a high degree of novelty and innovation. Programs where the evidence base is thin or non-existent, the context is shifting, and there are numerous strong and diverse influences are those where developmental evaluation is not only appropriate, but perhaps one of the only viable models of data collection and monitoring available.

Developmental design is a concept I’ve been working on to describe the need to incorporate ongoing design and re-design into programs even after they have launched. A program thus evolves over time, drawing on feedback gained through processes like evaluation to tweak its components to meet changing circumstances and needs. Rather than remaining static, a developmentally designed program systematically incorporates design thinking into the evolutionary fabric of its activities and decision making.

Developmental design and developmental evaluation work together to provide the data program planners need to constantly adapt their offerings to changing conditions, avoiding the problem of outcomes becoming decoupled from program activities and working with complexity rather than against it. For example, developmental evaluation can identify the key attractors shaping program activities, while developmental design can work with those attractors, amplifying or dampening them depending on the beneficial coherence they offer the program. Taken together, these two joined processes acknowledge complexity while creating more realistic and responsive plans.
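For readers who think in code, here is a minimal, purely illustrative sketch of that adaptive loop. It is not from the original post, and the Attractor class, the developmental_cycle function, and the numbers are all hypothetical; the point is only to show how each evaluation cycle surfaces attractors and the design step nudges helpful ones up and unhelpful ones down, rather than holding the program fixed.

```python
# Toy sketch (hypothetical names and values) of a developmental
# design/evaluation cycle: evaluation surfaces "attractors" (recurring
# patterns pulling program activity in a direction); design amplifies or
# dampens each one depending on whether it coheres with the program's purpose.

from dataclasses import dataclass

@dataclass
class Attractor:
    name: str          # e.g. "peer-to-peer support", "paperwork burden"
    strength: float    # how strongly it shapes current activity (0..1)
    beneficial: bool   # does it cohere with the program's purpose?

def developmental_cycle(attractors, step=0.2):
    """One iteration: amplify helpful attractors, dampen unhelpful ones."""
    adapted = []
    for a in attractors:
        delta = step if a.beneficial else -step
        new_strength = min(1.0, max(0.0, a.strength + delta))
        adapted.append(Attractor(a.name, new_strength, a.beneficial))
    return adapted

if __name__ == "__main__":
    # Feedback from a (hypothetical) developmental evaluation
    observed = [
        Attractor("peer-to-peer support", 0.4, True),
        Attractor("paperwork burden", 0.7, False),
    ]
    # Each program cycle the design is tweaked rather than held static
    for cycle in range(3):
        observed = developmental_cycle(observed)
        print(cycle, [(a.name, round(a.strength, 2)) for a in observed])
```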

Such approaches to design and evaluation are not without contention among traditional practitioners, raising questions about the integrity of the finished product (for design) and the robustness of the evaluation methods. But without alternative models that take complexity into account, we are simply left with bad planning, rather than planning as Eisenhower wanted it: indispensable.


Filed under: complexity, design thinking, evaluation, Systems science, systems thinking Tagged: complexity, design, developmental design, developmental evaluation, evaluation, health, human services, Social media, systems thinking


