Standardization of Calibrations and Reporting Procedures
The Classic Concept of Calibrations
Many contact center staff understand calibration to mean a meeting at which a group of team leaders or reviewers examines an interaction, either listening to a call or reading an e-mail or chat, and then sits around the meeting room table to discuss its “good” and “bad” points. The objective of the calibration session is to ensure that all reviewers perceive the interaction they are reviewing in the same way and come to the same conclusions. Each team member expresses an opinion in turn and is, in turn, influenced by the senior manager's reactions to the opinions of the reviewers who spoke earlier.
Many contact center staff question the purpose of such meetings since, in many cases, the senior manager selects a call at random and then imposes his or her standards on the team. If a team member expresses an alternative point of view, it is treated as an incorrect perception of the standard.
Widening Their Appeal
Calibrations play a crucial role in the quality process. They ensure that all members of the quality team are aligned and apply the same understanding of the quality standards to the conversations they are reviewing.
Gaining the quality team’s enthusiastic participation will make calibrations more effective. Here are a few techniques that can be used to increase “buy-in”.
- Allow team members to select the conversation to be reviewed. This may be a call or other conversation that proved difficult to review and for which the team member would appreciate a second opinion. This makes the calibration session more relevant to the needs of the participants and reinforces the idea of working as a team.
- Select conversations for a specific reason and make this reason clear. If a call, e-mail or chat session has been identified as “excellent,” the purpose of the calibration session is to confirm this judgment and identify as a group what makes it “excellent,” so that this excellence can be defined, taught to the other agents and made standard practice. Similarly, an interaction may have been identified as being of such poor quality that it warrants disciplinary action. In this case, a fully documented calibration session will go a long way toward justifying disciplinary action if the case ends up in litigation. Again, selecting calls for specific reasons makes the calibration sessions more practical and relevant to the quality team members.
- Allow team members to review the interactions privately before the meeting so that they form their own opinions before joining the discussion. This way the meeting organizer will have a better idea of the thought processes the team members use to arrive at their conclusions.
- Allow other members of the quality team to run the calibration meetings. These meetings have a clear structure and clear aims, which makes them easy to delegate. There is also potential for disagreement among participants, which gives the person chairing the meeting an opportunity to develop and practice his or her leadership skills in a controlled environment. Once again, this will make the calibration sessions more relevant and interesting for the quality team members.
How Technology Contributes to the Calibration Process
Many quality management solutions include some form of built-in calibration process and reporting, which can be used to facilitate more effective calibration sessions. The technology lets participants review the same conversation as their colleagues, using the same review form, separately, as they would a normal evaluation. In this way each participant's thoughts and decisions are captured as a sample that is not influenced by the opinions expressed by the rest of the group.
Reporting tools can calculate the variance from the average or even from a designated “base” score. This can give the quality team leader an indication of how closely aligned his or her team members are. Where the calibration is testing a new review form this variance can give an indication as to the usability and quality of the review form and where it needs to be improved.
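As a rough illustration of the kind of calculation such reporting tools perform, the sketch below computes each reviewer's deviation from the group average and from a designated "base" score. The function, field names, and sample scores are invented for illustration; the actual metrics a given product reports may differ.

```python
# Hypothetical illustration of calibration-report metrics: each reviewer's
# deviation from the group mean and from a designated "base" score.
# All names and data here are invented for illustration.

def calibration_deviations(scores, base=None):
    """Return per-reviewer deviation from the group mean (and base score)."""
    mean = sum(scores.values()) / len(scores)
    report = {}
    for reviewer, score in scores.items():
        entry = {"score": score, "from_mean": round(score - mean, 2)}
        if base is not None:
            entry["from_base"] = round(score - base, 2)
        report[reviewer] = entry
    return report

scores = {"Alice": 85, "Bob": 78, "Carol": 92}
result = calibration_deviations(scores, base=88)
# The group mean is 85, so Alice deviates by 0 from the mean
# while Bob and Carol deviate by -7 and +7 respectively.
```

A large spread in the `from_mean` values signals that the team is not yet aligned; consistently negative or positive `from_base` values suggest the team as a whole scores harder or softer than the reference reviewer.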
Further details on how to select a suitable review for calibration are available on the page named Selecting a Suitable Review for Calibration.
Trainer's Tips
Quality Calibration
Regular calibration of quality assessment forms is an effective way to avoid potential subjectivity within the team of people conducting evaluations. Each individual team member may interpret the questions on the forms slightly differently. When this happens, agents may feel confused or find that their final scores are inconsistent and unclear. Our Trainers have prepared several methods for conducting calibrations with Eleveo Quality Management, as well as suggested best practices.
Note that calibrations are not one-time processes, but should be performed at regular intervals, preferably frequently, to achieve optimal results.
Questionnaire for Calibration Best Practice:
Calibrations are often an internal exercise that ensures the review process is objective. For this reason, the results of the process should not count for or against the agent. One way to accomplish this is to select the Questionnaire that will be used for reviews, change its name so that it is obvious it is for calibration purposes (e.g., "Calibration Customer Service Review Form"), and then select "Save Copy."
When performing calibrations, using either of the selection methods described below, select this new Questionnaire dedicated to this purpose, instead of the Questionnaire normally used. Since the system stores and trends scores at the Questionnaire level, all calibrations performed will be directly linked to the scorecard created specifically for the calibration.
Note: When creating reports and trends, be sure to select the appropriate questionnaires so that calibration data is excluded.
Calibration Methods
System Selected Call Calibrations:
With this method, the Review Scheduler can be used to select a random call for a group of evaluators to review. The advantage of this method is that the evaluation process can be performed at regular intervals, with selection criteria built into the process (e.g., minimum call duration, call direction, etc.). A disadvantage is that the selected calls cannot be replaced, so they may turn out to be overly simplistic or may not even meet the requirements of the evaluation. Another point to keep in mind is that if you set up the Review Scheduler to repeat regularly, you will either have to select the same agents each time, or select all agents and remove the ones you do not want to include in a given time period.
Instructions on how to use the Review Scheduler can be found on the page named Review Scheduler.
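The selection logic described above can be sketched in a few lines: filter the recorded calls by the configured criteria, then pick one at random for all reviewers to score. The call records, field names, and criteria below are invented for illustration and are not the Review Scheduler's actual implementation.

```python
import random

# Hypothetical sketch of system-selected calibration: filter recorded
# calls by criteria (minimum duration, call direction) and pick one at
# random for every reviewer to evaluate. Data and names are invented.

def select_calibration_call(calls, min_duration=120, direction="inbound",
                            rng=random):
    """Return one random call that satisfies the selection criteria."""
    eligible = [c for c in calls
                if c["duration"] >= min_duration
                and c["direction"] == direction]
    if not eligible:
        raise ValueError("no call meets the selection criteria")
    return rng.choice(eligible)

calls = [
    {"id": 1, "duration": 45,  "direction": "inbound"},
    {"id": 2, "duration": 300, "direction": "outbound"},
    {"id": 3, "duration": 240, "direction": "inbound"},
]
chosen = select_calibration_call(calls)  # only call 3 meets both criteria
```

Note that the criteria filter only guards against short or out-of-scope calls; it cannot guarantee the selected call is interesting, which is exactly the disadvantage described above.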
Some additional notes to consider:
- If you have created a Questionnaire specifically for Calibrations, make sure you select this questionnaire when prompted to do so.
- Select the "For Calibrations" box when prompted in the Review Scheduler. This option will ensure that all reviewers receive the same agents/content to review.
- Once the Review Scheduler has been run, have all reviewers perform the evaluations independently of each other.
- Run the Compare Reviewers report to review the results (see Calibration Reporting and Interpretation below).
- Note: The last day of the review period is the day the data appears in the Compare Reviewers Report.
Ad Hoc Call Calibrations:
This method uses the Conversation Screen to select a specific call to assign to multiple reviewers for calibrations. The advantage of this method is that complex, nuanced calls can be selected for calibration. However, one disadvantage is the manual nature of the process.
For instructions on how to perform an ad-hoc review using the Conversations Explorer screen, refer to the dedicated page named Selecting a Suitable Review for Calibration.
Some additional notes to consider:
- If you have created a Questionnaire specifically for Calibrations, make sure you select this questionnaire when prompted to do so.
- Have all reviewers perform the evaluations independently of each other.
- Run the Compare Reviewers report to verify the results (see Calibration Reporting and Interpretation below).
- Note: The date of the review is the day the data appears in the Compare Reviewers Report.
- Repeat these steps at regular intervals (e.g., monthly, quarterly, etc.)
Calibration Reporting and Interpretation:
Regardless of whether you use the Review Scheduler or the ad-hoc method of selecting calls for calibration, run a Compare Reviewers report after the reviewers have completed their assessments. This report is a useful tool that allows you to quickly identify where reviewers may have differing opinions on questionnaire items and the conversation being reviewed.
Instructions on how to use the Compare Reviewers Report can be found on the page named Compare Reviewers Report.
Note: When selecting the date, choose either the specific date of the ad-hoc calibration or the last date of the period for a system-selected calibration.
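As a rough illustration of what such a comparison surfaces, the sketch below flags questionnaire items where the spread of reviewer scores exceeds a tolerance. The data layout, function, and threshold are invented for illustration and do not describe the report's actual internals.

```python
# Hypothetical sketch of a "compare reviewers" style analysis: for each
# questionnaire item, measure the spread of reviewer scores and flag the
# items where reviewers disagree beyond a tolerance. Data is invented.

def flag_disagreements(scores_by_question, tolerance=1):
    """Return questions whose max-min score spread exceeds the tolerance."""
    flagged = {}
    for question, scores in scores_by_question.items():
        spread = max(scores.values()) - min(scores.values())
        if spread > tolerance:
            flagged[question] = spread
    return flagged

scores = {
    "Greeting used":      {"Alice": 5, "Bob": 5, "Carol": 4},
    "Empathy shown":      {"Alice": 5, "Bob": 2, "Carol": 3},
    "Correct resolution": {"Alice": 4, "Bob": 4, "Carol": 4},
}
flagged = flag_disagreements(scores)  # only "Empathy shown" is flagged
```

Items flagged this way are the natural starting point for the discussion questions listed under Interpretation of Results.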
Interpretation of Results:
When differences are discovered, it is important to find out what may be driving those differences:
- Is the question too subjective?
- Are the answer choices appropriate for the specific question?
- Is there confusion about what the goal of the question might be?
- Could the standards for calculating the results of the question (especially if the question might include granular scores) be clarified?
For each question where there are differences, pinpoint the specific part of the call or the agent behavior that the question targets. Discuss the goal of the question and the end goal (through coaching and training) it is trying to achieve. The variance may stem from the reviewer's interpretation, or adjustments may need to be made to the questionnaire and/or its answer options.
TIP: It is recommended that you use a specific questionnaire for calibrations. If you use the same Questionnaire to perform the actual reviews, consider whether those calibration results should count for or against the agent.
To exclude a review from reporting, the Reviewer must complete the following steps:
- Select the completed evaluation on the Reviews screen
- Select "More Actions" from the top ribbon menu
- Uncheck the "Include in Reports" checkbox in the drop-down menu.