Performance assessment should start with the employee's self-appraisal. Even though the manager may know the reportee well, to be fair to the employee he should have details of every one of his accomplishments, and the self-appraisal provides them. When assessing performance, the manager should evaluate not just technical accomplishments but also associated skills such as innovation, quality, project management, mentoring, and knowledge sharing. Beyond that, the assessment should include soft skills such as initiative, independence, and communication. The weightages of these different skills could vary based on the grade/band of the employee. In my experience, I have seen several biases in assessment that should be avoided.
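The idea of grade-dependent weightages can be made concrete with a small sketch. The skill names, bands, and weight values below are hypothetical illustrations, not a prescribed scheme; the point is only that the same ratings combine differently per band.

```python
# Minimal sketch of weighted skill assessment. Bands, skills, and
# weights are hypothetical placeholders; each band's weights sum to 1.0.
WEIGHTS = {
    "junior": {"technical": 0.5, "quality": 0.2, "mentoring": 0.1, "initiative": 0.2},
    "senior": {"technical": 0.3, "quality": 0.2, "mentoring": 0.3, "initiative": 0.2},
}

def weighted_score(ratings, band):
    """Combine per-skill ratings (e.g. on a 1-5 scale) using the band's weightages."""
    weights = WEIGHTS[band]
    return sum(ratings[skill] * w for skill, w in weights.items())

ratings = {"technical": 4, "quality": 3, "mentoring": 5, "initiative": 4}
print(round(weighted_score(ratings, "junior"), 2))  # 3.9
print(round(weighted_score(ratings, "senior"), 2))  # 4.1
```

Note how the same employee scores higher under the senior weights, where mentoring counts for more: the band determines which strengths matter most.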
1. Broad-brush bias - The manager has to critically evaluate every skill of the employee. Just because an employee is good at a few skills does not make him good at all skills. Yet because of this bias, the assessment is often skewed: an employee who is good at a few skills is rated as good across all skills, or an employee who is weak in one skill is rated as weak in others as well. Beware of this bias.
2. Recency bias - Recent accomplishments or misses are fresh in one's memory, so at the time of assessment one should not get carried away by them alone. To avoid recency bias, look at the accomplishments of the whole year; here again, the employee's self-appraisal will be useful.
3. Promotion bias - I have seen managers assess an employee in the top percentile even though the employee was promoted only in the last cycle. A recently promoted employee should be critically evaluated against the new grade/band, as the expectations of the promoted grade/band are higher than those of the previous grade.
4. Group accomplishments vs. individual accomplishments - Organisational goals are achieved through teamwork among many employees, and some project accomplishments have many owners. For the performance assessment of an individual, however, one has to be critical about his exact contribution. Be careful not to attribute a group accomplishment to an individual.
5. Group bias - When calibrating the performance of employees across the organisation, managers should be open and sensitive to the assessments of employees in other groups. A manager should not have an unfair bias towards employees in his own group.
Calibrating the performance of employees across groups is a challenge for the organisation head. Having all managers use the same template for evaluation helps in calibration. However objective the performance template is, there is still subjectivity involved in the assessment. So one should not get caught up in the exact ranking of employees, but instead look at clusters of employees of similar performance. For a fair calibration, the manager of one group should know the employees of the other group reasonably well. Active participation of managers in cross-functional project reviews would help them assess employees in the other group. As a manager, you should also create opportunities for your employees to have visibility outside the group.
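The "clusters, not exact rankings" idea can be sketched mechanically: instead of forcing a strict order, start a new cluster only when the score gap exceeds a tolerance. The scores and the gap threshold below are illustrative assumptions.

```python
# Rough sketch: group scored employees into clusters of similar performance
# rather than a strict ranking. The gap threshold is an assumed tuning value.
def cluster_performers(scores, gap=0.3):
    """Start a new cluster whenever the score drops by more than `gap`."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    clusters, current = [], [ranked[0]]
    for name, score in ranked[1:]:
        if current[-1][1] - score > gap:   # gap too large: close this cluster
            clusters.append(current)
            current = []
        current.append((name, score))
    clusters.append(current)
    return clusters

scores = {"A": 4.6, "B": 4.5, "C": 3.8, "D": 3.7, "E": 2.9}
for group in cluster_performers(scores):
    print([name for name, _ in group])   # ['A', 'B'] / ['C', 'D'] / ['E']
```

Within a cluster, arguing over who is "first" versus "second" adds noise, not information; the meaningful distinctions are between clusters.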
Performance assessment is also an excellent tool for the manager to identify the employee's gaps. He can work with the employee on an action plan to close those gaps, so that the employee improves his performance by the next review cycle.
Why This Matters
Flawed performance assessments corrupt every downstream talent decision: compensation equity, promotion choices, succession planning, and retention of top performers. When bias distorts evaluations, organizations systematically reward the wrong behaviors, promote the wrong people, and watch their best talent leave for competitors who recognize their true contribution. One widely cited study found that 95% of managers express dissatisfaction with their performance review process, yet the consequences extend far beyond dissatisfaction: biased assessments cost companies millions in lost productivity, wrongful termination litigation, and the replacement costs of talent who leave when they're undervalued or watch lesser performers advance.
Leadership in Practice
A major software company's transformation of performance management in 2012 provides a compelling case study in assessment discipline. The company eliminated annual performance reviews and stack rankings, the forced-distribution system that required managers to rate employees on a curve regardless of actual performance. The company's research revealed that their traditional process consumed tens of thousands of manager hours annually while generating minimal value and significant resentment. More critically, they discovered that recency bias and the halo effect were systematically distorting ratings, with Q4 performance disproportionately determining annual assessments and employees skilled at self-promotion receiving inflated evaluations.

The company replaced annual reviews with "Check-In" conversations: ongoing discussions between managers and employees focused on feedback, development, and expectations. Critically, they trained managers extensively on cognitive biases and required structured documentation of specific contributions across multiple dimensions throughout the year. They separated compensation decisions from developmental conversations and implemented calibration sessions where leadership teams collectively reviewed assessments to identify and correct bias patterns before finalizing ratings.

The results validated the approach: voluntary attrition decreased significantly following implementation, particularly among high performers who previously felt underrecognized. Manager satisfaction with the performance process increased dramatically, and exit interview data showed that departing employees cited performance management concerns far less frequently.
Most significantly, the company found that by eliminating recency and halo biases through structured ongoing assessment, they made better promotion decisions: tracking data showed that employees promoted under the new system performed substantially better in their new roles than those promoted under the old annual review system, suggesting more accurate identification of true capability.
Leadership Framework
**The BIAS-FREE Performance Assessment Framework**
**Step 1: Document Continuously, Not Retrospectively**
Implement a system where you log significant contributions, challenges, and observations throughout the evaluation period, monthly at minimum. Use a structured template that captures technical delivery, professional skills, and soft skills separately. This practice defeats recency bias by creating a contemporaneous record, and provides concrete examples that prevent vague, impression-based assessments.
**Step 2: Demand Multi-Dimensional Evidence**
For each evaluation dimension (technical, professional skills, soft skills), require at least three specific examples with measurable outcomes. Force yourself to answer: "What concrete evidence supports this rating?" If you cannot cite specific instances, your rating reflects impression rather than performance. Apply the "courtroom test": could you defend this assessment with evidence if challenged?
**Step 3: Calibrate Against Role Expectations, Not Peers**
Evaluate each employee against the defined competencies and expectations of their current grade level, not against team members. Newly promoted employees must be assessed against their new grade's standards, even if this means rating them as "developing" in their elevated role. This eliminates promotion bias and sets honest expectations. Create explicit competency matrices for each level to remove ambiguity.
**Step 4: Conduct Pre-Assessment Bias Audits**
Before finalizing ratings, systematically review your assessments for patterns. Are ratings clustered around one dominant trait? Do recent achievements disproportionately influence overall ratings? Have you rated individual contributions in team accomplishments, or simply rewarded everyone equally? Challenge yourself: "If this person's most visible quality were removed, would my assessment change across other dimensions?" If yes, you're experiencing halo effect.
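Part of this audit can be automated as a rough first pass. The sketch below flags employees whose ratings barely vary across dimensions, which can signal a halo (or horns) effect rather than genuinely uniform performance; the dimension names and the spread threshold are assumptions, and a flag is a prompt to re-examine the evidence, not a verdict.

```python
import statistics

# Rough halo-effect screen: suspiciously uniform ratings across dimensions
# deserve a second look. The 0.5 threshold is an assumed tuning value.
def audit_flags(ratings_by_dimension, spread_threshold=0.5):
    """Return names whose cross-dimension rating spread is below the threshold."""
    flags = []
    for name, ratings in ratings_by_dimension.items():
        if statistics.pstdev(ratings.values()) < spread_threshold:
            flags.append(name)
    return flags

team = {
    "A": {"technical": 5, "quality": 5, "mentoring": 5},  # uniform: flagged
    "B": {"technical": 4, "quality": 2, "mentoring": 3},  # varied: not flagged
}
print(audit_flags(team))  # ['A']
```

A flagged employee may genuinely excel everywhere; the check simply forces the manager to produce dimension-specific evidence before accepting a flat ratings profile.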
**Step 5: Implement Structured Calibration**
Before finalizing performance ratings, conduct calibration sessions with peer managers or your leadership team. Present your assessments with supporting evidence and invite challenge. Calibration exposes inconsistent standards, reveals hidden biases, and ensures organizational fairness. The goal isn't consensus; it's consistency in how evidence translates to ratings.
**Critical Success Factors:** Self-appraisal must precede manager assessment; assessment conversations must separate performance evaluation from development planning; and organizations must train managers explicitly on cognitive bias recognition, not just evaluation mechanics. **Warning:** Without ongoing documentation, even well-intentioned frameworks collapse into recency-biased, impression-driven assessments during review season.
Leadership Takeaway
Your performance assessments shape your organization's culture more powerfully than your mission statement ever will: they signal what you truly value versus what you merely claim to prioritize. Starting tomorrow, implement a simple practice: spend 15 minutes weekly documenting specific observations about each direct report across technical, professional, and interpersonal dimensions. This single discipline will transform your assessment accuracy, eliminate recency bias, and ensure your best performers receive the recognition that keeps them engaged. Remember: in talent management, your assessment credibility directly determines your retention effectiveness.
"If you can't measure it, you can't manage it, but if you measure it badly, you'll manage it badly." — attributed to Peter Drucker
Ramu Kaka's Wisdom
The wise gardener knows each plant by watching it through all seasons, not just at harvest time. Those who judge the tree by its last fruit alone will mistake a strong year for a strong tree, and wonder why their orchard fails.
Reflection Questions
- When I review my last performance evaluation cycle, can I identify specific examples where recency bias or halo effect influenced my ratings—and more importantly, what concrete system will I implement to prevent this next cycle?
- If I were to anonymize my team's performance assessments and present only the evidence without names, would my ratings remain consistent, or would I discover that personal affinity and visibility are influencing my judgment more than actual contribution?
- How effectively do I distinguish between employees who drive team accomplishments versus those who benefit from being on successful teams—and what specific behavioral evidence do I use to make this critical distinction?