Organizing Dialogue, Experience and Knowledge for Complex Problem-Solving

Developmental Thinking and Evaluation

June 20th, 2011

Think developmentally before evaluating developmentally

Developmental evaluation is difficult to initiate, largely because the thinking behind it is so foreign to normal program planning and reporting. It appears that developmental thinking needs to be in place before one can hope to implement a DE project successfully.

Over the next few days I will be meeting with colleagues working with the Social Innovation Generation Group, Michael Quinn Patton and others who share an interest in, and wrestle with, developmental evaluation (DE) in practice.

Over the course of the last year we have been meeting monthly to discuss our experiences, challenges and learning on the issue of developmental evaluation. Although our group members come from diverse fields — government, academia, non-profit and others — and are focused on projects that range in scope, we all share one common experience: frustration with implementing DE.

Reading through a case study the other night, I couldn’t help but see something I’d seen before: the principal barrier to the implementation of DE is that the program, its partners, or the stakeholders associated with the program didn’t individually or collectively function in a manner that supported DE. Whether they actually bought into DE in the first place is also unknown, but it seems to me that the two are related.

Developmental thinking about social issues has shown itself in my work to be a linchpin for any progress on developmental evaluation. Commiserating with colleagues in this area, it seems evident to me that assessing readiness for DE is a critical step in the pre-work that needs to come before any evaluation takes place. Without developmental thinking, developmental action and evaluation are hard to reasonably achieve.

If you do not see your program as one that evolves, but rather as one that just gets bigger, better, stronger, or weaker, then having real-time evaluation tools will be less useful, or perhaps even harmful, in the absence of a thinking framework to make sense of the data. Real-time, consultative evaluation and its utilization-focused actions make DE stand apart from other approaches to evaluation, even if the methods and tools are similar.

The implications of this assertion for practice are enormous. It means that a DE practitioner cannot be just an evaluator, or at least must find others who can work with a program to collaboratively educate, inspire, and reflect on developmental thinking and what it means for a program. It also brings the evaluation function far closer to program planning than evaluators (and program planners) might be used to. And it means maintaining a willingness to think differently, not just implement different thinking. To that end, knowledge of motivation and some sense of how one provokes or creates space for change is also important.

Taken together, we have ourselves a real challenge. The “core competencies” for DE already include qualities like people skills, knowledge of complexity, and communication skills (in addition to fundamental skills in evaluation methods and process implementation), but now we are adding more. Systems thinking, behaviour change, and program planning and design are all skills that would reasonably assist an evaluator in this work. Nice in theory, but what about in practice? Can we reasonably expect that there are enough people out there with these skills to do it well? Or is this a call for more of a team-science (or rather, team-evaluation) approach to evaluation?