Many call centres have a love/hate relationship with quality assurance monitoring: Management loves it but agents can hate it. Management has a legitimate interest in monitoring the quality of service their teams are providing, but monitoring can nevertheless be a hard sell to agents. And, if you look at it from their perspective, it’s pretty easy to see why. The good news is that you will have much greater success getting your call centre agents on board if you address their concerns ahead of time.
In conversations with Scorebuddy clients, we have identified some of the primary concerns agents have about being monitored.
Loss of Privacy
How would you feel if someone were looking over your shoulder and judging you during your entire shift? It is hard to be passionate about doing your job if you think someone is listening in on every conversation. Monitoring can make agents self-conscious, and that’s not conducive to good performance.
Give agents a thorough explanation of how monitoring works. Is someone listening live to every call? Or are calls recorded and spot-checked? Removing the mystery and giving agents a realistic expectation of the likelihood that any one call is being monitored can go a long way toward reducing anxiety.
Lack of Transparency
This is another situation where it helps to put the shoe on the other foot: How would you like to be constantly graded with no understanding of what you’re being graded on? Agents need insight not only into how they’re being measured, but also into who is doing the measuring, which metrics are used, and why they matter.
Train agents on the quality assurance program: who, what, why, when, and how. Why not run internal focus groups and ask your agents what they think you should measure?
Contradictory Metrics
Scoring agents on metrics that are, by nature, contradictory undermines the credibility of the entire program. A good example is measuring both first call resolution and number of calls per hour. Measuring agents on the number of calls they handle per hour encourages them to end calls quickly, which works directly against resolving issues on the first call.
Choose metrics carefully. If it’s necessary to include metrics that may appear contradictory, explain how that apparent conflict will be handled.
Poor Implementation
Even the best quality assurance monitoring can go awry if it’s implemented poorly. Things can go wrong in several ways, ranging from scores being used punitively to metrics being scored differently by different managers.
Make sure managers and anyone else involved in quality assurance understand why and how monitoring is to be used. For instance, it should never be used as a pretext for getting rid of an employee with whom a manager has a personal conflict. Explain your calibration process and make sure it supports consistent scoring. In addition, evaluators should receive regular training on how to interpret customer interactions and be encouraged to annotate their evaluations with coaching tips.
Measurement on Things Out of the Agent’s Control
Sometimes an agent can do everything right, but the call still goes wrong.
- Some customers have unreasonable demands that agents don’t have the authority to grant.
- Some customers don’t follow the anticipated script, so the agent must make things up as they go along.
- Sometimes call volume can be unexpectedly light, driving down the metric of how many calls an agent handles during their shift.
- Sometimes agents call in sick, causing a spike in the time it takes other agents to answer a call.
- Management decisions on things like staffing can have a direct impact on agent effectiveness.
Train managers to consider extenuating circumstances before reacting to agents’ scores, and give them the flexibility to do so. In addition, give agents a sense of control by granting them direct access to their own scores, so that they can see the same information management sees and challenge a score if necessary.
Quality assurance is an important part of any call centre operation: Management has every right to expect agents to do the job they’re paid to do in the way they’re paid to do it. But any quality assurance program needs some built-in flexibility if you want the support of the agents. A pattern of behaviour that does not follow established procedures may be a cause for increased coaching, but a one-time incident where the agent did the best they could under the circumstances should not be used punitively.
Neither should agents be punished for failing to meet targets that are mutually exclusive, e.g. keeping talk time to a minimum while fully exploring the customer’s problem. The bottom line is that agents are far more accepting of call monitoring when they trust management, understand the goals, and know the program will be administered with a little compassion and common sense.
That’s where Scorebuddy outshines other quality assurance solutions. Not only does Scorebuddy aggregate and analyse the behaviours that lead to success, but its dashboard also presents insights in an accessible, highly visual way, making it easier for both managers and agents to turn insights into action and collaborate to deliver a quality customer experience.