Too often, quality management programmes produce contrived results. Putting a huge amount of time and effort into designing a quality scorecard is worthless if the form doesn't yield accurate scores.
Here's an example of how this can happen. A team had spent hours reviewing scorecards from other departments and organisations, and settled on 23 copycat questions borrowed from another group.
Why Replicate an Existing Form?
The logic was: it has the highest average score, at 97%+ monthly, so it's obviously the best. That would make me raise an eyebrow and want to know where the 97% was coming from.
Investigate Your Suspicions
If your suspicions are raised, start to investigate. Where there are extremely high marks, drill through and review the scores in detail. Listen to the call yourself, or read through the chat messages, before looking at the scorecard, and ask yourself: was this an extremely good customer interaction? Next, review the actual form: the questions, the weightings, and any non-applicable questions, and consider whether all the important questions that should have been asked were asked and graded appropriately. Carry out a calibrated score yourself... do you agree with the score you have given, based on the criteria in the scorecard?
Look under the Hood to See What's Really Going On
In this scenario once the hood was lifted, there were three takeaways that exposed the truth:
1. The majority of questions were scored Yes/No/NA where the NA value carried the same point value as a Yes answer.
2. Six questions, which together accounted for 35% of the overall score, were answered NA 75-80% of the time.
3. Organisational focus was on improving the overall score, and therefore very few people were aware of the data behind the score or even the overall logic used to calculate the score.
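To see why these findings matter, here is a minimal sketch (with hypothetical weights and answers, not the team's actual data) of how treating NA the same as Yes pads a score. An agent who never even faces the heavily weighted questions still collects their full 35%:

```python
# Hypothetical illustration: NA earns the same credit as Yes,
# so unobserved behaviours still add to the score.

def score_na_as_yes(answers, weights):
    """Score a form where NA carries the same point value as Yes."""
    earned = sum(w for a, w in zip(answers, weights) if a in ("yes", "na"))
    return 100 * earned / sum(weights)

# Six questions together carry 35% of the score; the agent is marked
# NA on all six and answers "no" on two of the remaining ten.
weights = [35 / 6] * 6 + [65 / 10] * 10
answers = ["na"] * 6 + ["yes"] * 8 + ["no"] * 2

print(round(score_na_as_yes(answers, weights)))  # 87 - looks healthy despite six ungraded behaviours
```

With most forms answering those six questions NA three-quarters of the time, a monthly average in the high nineties stops being surprising.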
A discussion of the findings, and a deeper look into the questions and answer options on this new form, were in order, and we tackled the refresh with two thoughts in mind:
1. Why are there questions on the scorecard that rarely apply to our customer interactions?
2. Why are points awarded for behaviours that are not displayed?
A Change of Heart: A Custom-Designed Scorecard Comes Up Trumps
Upon review, the team opted to remove questions that were unlikely to be relevant or important for the majority of customer interactions. New questions were written to reflect key behaviours. In addition, the team consciously decided to award points for Yes answers only, removing non-applicable behaviours from the scoring formula entirely.
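The refreshed approach can be sketched the same way (again with hypothetical numbers): only Yes earns points, and NA questions drop out of both sides of the fraction, so the score reflects only behaviours that were actually observed:

```python
# Hypothetical illustration of Yes-only scoring: NA questions are
# excluded from the calculation entirely rather than earning credit.

def score_yes_only(answers, weights):
    """Award points for Yes only; exclude NA questions from the formula."""
    applicable = [(a, w) for a, w in zip(answers, weights) if a != "na"]
    if not applicable:
        return None  # nothing on this form could be graded
    earned = sum(w for a, w in applicable if a == "yes")
    possible = sum(w for _, w in applicable)
    return 100 * earned / possible

# Same hypothetical form as before: six NA answers, eight Yes, two No.
weights = [35 / 6] * 6 + [65 / 10] * 10
answers = ["na"] * 6 + ["yes"] * 8 + ["no"] * 2

print(round(score_yes_only(answers, weights)))  # 80 - NA no longer pads the result
```

The same form that scored 87 when NA counted as Yes now scores 80, because the grade is based purely on the ten behaviours the reviewer could actually assess.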
When reviewing your current quality standards programme, take a good look at the driving forces behind the score. If underlying methodologies are falsely inflating your scores and misrepresenting performance, you need a system refresh. The goal is a healthy quality system that inspires lasting change, and that happens only in the presence of accurate results.
If redesigning quality management scorecards is a difficult task, why not use Scorebuddy's best-practice sample scorecards, provided in our free trial?
30 Day FREE Trial on All Accounts
Sign up in 60 seconds. No credit card required.