Evaluating Planned Change

Evaluating Implementation and Outcomes

Before we can begin to institutionalise planned change, we have to evaluate two things. 

    1. Was the implementation itself successful?
    2. Has the change had the desired impact on the outcomes that we intended?

There are several ways in which things could turn out differently from what we expected.  I am going to use the example of a sales manager wanting to increase the sales of his team.  The manager has read about team building, and he has read about the power of collaborative sales teams.  His conclusion is that, through team building, he will create a collaborative sales team (they are currently very competitive), and his desired outcome is increased sales.

Unsuccessful Implementation, Outcomes not Achieved

The implementation could have been unsuccessful, and there could have been no positive change in the outcomes. 

In other words, the manager might have used an instrument to measure sales team collaboration.  For now, let us assume that this instrument has been tested and found to be reliable and valid for measuring collaboration in sales teams; I will get to this in more detail shortly.

He did a difference test by applying the instrument right before and after the team-building event, and again about six months later.  The results showed almost no increase in collaboration after the event and, in fact, a decrease in collaboration six months later.
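As a rough illustration, the comparison might look something like the sketch below.  The scores and the simple mean-difference calculation are invented for illustration; a real evaluation would use the actual instrument scores and an appropriate statistical test.

```python
# Hypothetical collaboration-instrument scores for each team member at three
# measurement points: before the event, right after it, and six months later.
pre        = [2.8, 3.1, 2.9, 3.0, 2.7]
post       = [2.9, 3.0, 3.1, 3.0, 2.8]
six_months = [2.5, 2.6, 2.8, 2.7, 2.4]

def mean(scores):
    return sum(scores) / len(scores)

print(f"Change right after the event: {mean(post) - mean(pre):+.2f}")
print(f"Change after six months:      {mean(six_months) - mean(pre):+.2f}")
```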

The manager also looked at sales measures, and found that these showed no significant improvement.

Now, if the manager had only measured outcomes, he could have made the mistake of believing that the intervention does not work in principle.  In other words, he would have concluded that collaboration does not increase sales, because he would not have realised that collaboration had not actually increased.

Successful Implementation, Outcomes not achieved

The implementation could have been successful, but the changes that we made to the organisation did not have the desired effect. 

Using the same instrument, the manager might have found a significant increase in collaboration, but no significant increase in sales, or maybe even a decrease in sales. 

The manager can now conclude that he chose the wrong intervention for his team and situation.  In his situation, improving collaboration did not have the desired effect.

Unsuccessful Implementation, Outcomes achieved anyway

The implementation could have been unsuccessful, but the desired outcomes were achieved anyway.

In this case, a possible cause is that the variables selected for measurement were based on an incorrect theory of causation.  It is probably inevitable that an intervention will have more effects than were intended.  For example, the team building programme could have been aimed at improving team collaboration, but because of the team's strong competitive paradigm, the activities failed to change collaboration within the team.  They did, however, bring about a positive change: the sales people began to see their relationships with clients as collaborative.  The instrument only measured collaboration within the team, not collaboration across organisational boundaries, and so failed to reveal the real cause of the result.

Successful Implementation, Outcomes achieved

The implementation could have been successful, and the desired outcomes achieved.

Typically, when this happens, we simply celebrate.  However, as in the previous case, it is possible that the success was not as directly related to the things we were measuring as it appears.  What we have shown is that this specific implementation, in this situation, produced certain results, provided that nothing else changed.  The risk is that someone might later assume, on the basis of these results, that the intervention will work in a different situation, and only a failure in that situation will reveal that the causation was not what it seemed.

Designing Good Measures and Selecting Right Variables

So we see from the above example that, in order to evaluate the success of an intervention accurately, we have to design good measures and select the right variables to inform those measures.

Designing Good Measures

There are three important characteristics to consider when designing or selecting measures:

  • They need to be operationally defined
  • They need to be reliable
  • They need to be valid

Operationally Defined

An operationally defined measure has a clearly defined way of being taken, and a clearly defined way of converting the raw measurement into useful information.

In the example above, the manager might have wanted to measure the relationship between collaborative emails and the value of sales transactions.  He could have defined this by saying that it is measured by counting the number of collaborative emails sent and received by a given sales person, and comparing that with the average value of that person's sales in the same period.

To convert this into useful information, he might create a ratio, for example by dividing the number of collaborative emails by the average sales transaction value.

To measure this, a data collector would be given access to the sales person’s emails, scan through them, and count all the ones that are collaborative.  The data collector would also use the sales report to calculate the average transaction value over the same period.
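A minimal sketch of this operational definition, with invented numbers, might look as follows.  The variable names and figures are hypothetical; only the calculation mirrors the definition above.

```python
# Number of emails the data collector classified as collaborative (hypothetical).
collaborative_email_count = 42

# Transaction values for the same sales person over the same period (hypothetical).
sales_values = [12000.0, 8500.0, 15300.0, 9900.0]

average_transaction_value = sum(sales_values) / len(sales_values)

# The ratio as defined above: collaborative emails per unit of average transaction value.
ratio = collaborative_email_count / average_transaction_value

print(f"Average transaction value:    {average_transaction_value:.2f}")
print(f"Collaboration-to-sales ratio: {ratio:.4f}")
```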

Reliable

Reliability refers to how consistently and dependably the measure produces the same result.  So, in the above example, the sales report could probably be considered close to 100% accurate; it is likely to be very reliable.

However, the classification of emails as collaborative or not is a lot more subjective.  It is going to be influenced by what the data collector considers to be collaborative.

When considering the reliability of measures, we therefore also have to consider how we can compensate for this.  In this example, we could have used three data collectors to scan the same email box, agreed on an acceptable deviation between them, and averaged their findings.  Provided that they were within the acceptable deviation of each other, we could consider the data reasonably reliable.
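A small sketch of that agreement check, assuming each collector's count of collaborative emails for the same mailbox and an agreed maximum deviation (the numbers and the deviation rule are purely illustrative):

```python
# Counts of collaborative emails from three data collectors for the same mailbox.
counts = {"collector_a": 40, "collector_b": 44, "collector_c": 42}
max_acceptable_deviation = 5  # agreed up front, in number of emails

spread = max(counts.values()) - min(counts.values())
if spread <= max_acceptable_deviation:
    agreed_count = sum(counts.values()) / len(counts)
    print(f"Counts agree within tolerance; using the average: {agreed_count:.1f}")
else:
    print("Counts deviate too much; the measure is not reliable enough as it stands.")
```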

We could also work on defining the measure more clearly.  For example, instead of simply using the word “collaborative”, we might say that a collaborative email is part of a conversation in which certain elements must be present, for example a request for assistance as well as an offer of assistance.
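One way of operationalising that tighter definition is sketched below.  The marker phrases are invented for illustration; a real rule would have to be agreed with the data collectors.

```python
# A conversation counts as collaborative only if it contains both a request for
# assistance and an offer of assistance. The marker phrases are hypothetical.
REQUEST_MARKERS = ("could you help", "can you assist", "please advise")
OFFER_MARKERS = ("happy to help", "i can assist", "let me take that")

def is_collaborative(conversation: list[str]) -> bool:
    text = " ".join(conversation).lower()
    has_request = any(marker in text for marker in REQUEST_MARKERS)
    has_offer = any(marker in text for marker in OFFER_MARKERS)
    return has_request and has_offer

# Example: a two-message conversation that meets both criteria.
print(is_collaborative([
    "Could you help me with the pricing for the Jones deal?",
    "Happy to help, I can assist with the discount structure.",
]))  # True
```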

Valid

Valid measures measure what we really want to measure.  So, in the above example, we might, in an attempt to make the data more reliable, define a collaborative email conversation.  However, we could make a mistake in that definition, which means that we might be accurately counting a number of emails, and get a very low deviation – but the emails we are counting are not truly a reflection of collaboration.

So, to increase validity, we have to ask ourselves critically whether what we are measuring really translates into what we want to know.

Selecting the right Variables

We could apply the principles of designing good measures, but if we select the wrong variables, we are going to measure the wrong things very accurately, and we are going to understand something that is of no value to us very well.  More dangerously, we will understand one thing and think that we understand another.

Our chances of selecting appropriate variables are significantly improved if we keep the following principles in mind:

  • Base the variables on a proven underlying model
  • The model should clarify the cause and effect relationships
  • Ensure that the variables include outcome as well as implementation success elements

Base the variables on a proven underlying model

The underlying model should accurately and closely reflect our current reality.  We need to understand that models are simplifications of reality, and therefore they inevitably contain, to a certain extent, an element of distortion.  We need to understand how the simplification process skews reality, and make sure that the variables we select are appropriate to our REAL situation.

Clarifying the Cause and Effect Relationships

One of the criticisms of planned change is a lack of understanding of causal relationships.  The intervention will typically be carried out by, for example, an OD consultant and a manager working together.  The model should help them understand how their actions are going to affect the motivational levers of the people in the organisation.  That is the first level of causal understanding.  Secondly, the model should help them understand how affecting those motivational levers will change the behaviour of organisational members.  Thirdly, it should help them understand the likely results and outcomes of those changed behaviours.

There is a dearth of research in this area, so continually evaluating activity, measurement and outcomes against each other is critical.  Just as the intervention sometimes needs to be adjusted based on continual evaluation, so the evaluation criteria and methods themselves sometimes need to be adjusted.

Outcomes and Implementation Variables

The underlying model should help us understand the implementation of the intervention, as well as the outcomes required from it, and it should help us understand how these relate to each other.  From this, we can select variables that will indicate whether the implementation was successful, as well as variables that will indicate whether the desired outcomes were achieved.

Three Levels of Change

No, this is not “Organisation”, “Group” and “Individual” – which is what “three levels” refers to most of the time when talking about OD.

The three levels of change are Alpha, Beta and … you guessed it:  Gamma!

Understanding these three levels of change is critical to understanding your measures.  Not understanding them can cause you to come to conclusions that are the exact opposite of reality.

Alpha Change

Alpha change is pretty straightforward. 

Cummings and Worley put it in a way that I suspect no human being would understand without an explanation – but I cannot think of a better way to put it.

“… movement along a measure that reflects stable dimensions of reality.”

What that means is that the fundamentals of what I am measuring have stayed the same, but something measured against them has changed.  That may not make much more sense, so let me give you an example.

When I am travelling towards a certain city, Alpha change is involved as I travel.  The road, the city and the way I measure my progress remain constant; my position on the road changes.

In performance measurement, if I produced ten hats yesterday, and I produce twenty of the same hats today, my performance has doubled.

Beta Change

In Beta change, some of those stable things change.  It often happens through a redefinition of our understanding of the things we are measuring.

So, for example, if I were travelling towards a city and measuring the distance to that city on a map with a ruler, my distance could have been 50 km.  However, along the way, someone explains to me that the road is not straight and that the real distance I have to travel must be measured along the winding road between here and there.  So I measure the distance again, and find that I am now 75 km from the city.  In reality, I have not moved further from the city, but my perception of the distance has changed.

In performance measurement, I might have been measuring the quality of my hats in a certain way.  However, after some hat-making training, I realise that a good hat has certain quality characteristics I had been unaware of before.  So previously I might have considered my average hat to meet 80% of my quality standard, and now my average hat meets only 50% of my quality standard.  Simply looking at the quality measure suggests that my hats have declined in quality, whereas comparing two hats directly might show that the later products are actually of higher quality.
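The distortion is easy to see in a small sketch.  The checklists and the hat's attributes are invented; only the 80%-to-50% shift mirrors the example above.

```python
# The same hat, scored against the old checklist and against an expanded
# checklist that includes characteristics learned about in training.
old_checklist = ["strong brim", "even stitching", "straight seams", "clean lining", "good fit"]
new_checklist = old_checklist + ["balanced crown", "breathable band", "colour-fast dye"]

# Characteristics this particular hat actually has.
hat_has = {"strong brim", "even stitching", "straight seams", "clean lining"}

old_score = len(hat_has & set(old_checklist)) / len(old_checklist)  # 4/5 = 80%
new_score = len(hat_has & set(new_checklist)) / len(new_checklist)  # 4/8 = 50%
print(f"Against the old standard: {old_score:.0%}; against the new standard: {new_score:.0%}")
```

The hat itself has not changed; only the standard it is scored against has – which is exactly why the raw before-and-after numbers mislead.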

Gamma Change

In Gamma Change, the very definition of what we are looking at, changes.

For example, I might have been measuring the distance to the city.  We have a strategy meeting along the way and realise that we need to be travelling to a different city, which is 300 km away.  So, from one moment to the next, the distance to my goal has jumped from 75 km to 300 km.  However, my actual position has not changed.

With hat quality, I might previously have defined quality in terms of things such as the strength of my hats, the detail of my stitching and their stylishness.  However, I might run a workshop with my customers and discover that, for them, quality is judged by the suitability of the hat to the occasion and to the rest of the outfit. 

I adjust my measurements, and now I measure quality by asking customers and their peers about the hats' suitability to the occasion and the rest of the outfit.  So the measure of quality before and after this intervention reflects something totally different.

In Conclusion

It is clear, therefore, that determining the success of an intervention is not as simple as asking a few questions and measuring a few measurable items.  You have to start by identifying exactly what it is that you want to achieve.  You then have to consider the possible ways of achieving that, and either find an existing model or develop your own model and theories as to what changes in the organisation would bring those outcomes about.  You have to choose variables that will indicate both whether your intended changes have been successfully brought about, and whether those changes resulted in the required outcomes.

Apart from that, when measuring these, you have to consider the operational definition of your chosen variables, and understand and compensate for weaknesses in their reliability and validity.

Having done all that, you not only have to implement your evaluation process, but you also have to consider how the intervention you are performing is adjusting the very things that you are measuring.  If you are redefining existing constants, such as the definition of quality, then remember that your measurement of success will be skewed if you simply continue reporting the results of the same variable and measure.

Planning your evaluation is as important as planning the intervention itself, because evaluation will drive institutionalisation.

Apart from planning it up front, you have to evaluate your evaluation process, and adjust it if needed.

 

The information in this article is based on “Organisation Development and Change,” by Cummings & Worley, International Student Edition, 8th Edition

 
