Why Do We Have “Performance Evaluations”, Anyway?

In the follow-up discussion on my recent posts about performance management and judgement vs. metrics, someone commented:

And in fact, the whole idea of routine performance evaluations is suspect here. Why is performance being evaluated? Do we mistrust our management staff? Was there a problem? Was there a huge success? Or are we doing it because we’ve always done it or business magazines say we should do it? What’s wrong with defaulting to paying people market value for the work they do, then dealing with exceptionally good or bad performances as they occur?

These questions really sank their hook into me. As an engineering manager, formal performance evaluations are my least favorite part of the job (my second least favorite is interviewing, which is something like a formal performance evaluation, but for someone you’ve only known for an hour). Formal evaluations feel very different from one of the parts of the job I love best: supporting someone to grow in the ways that fit them. You’d think that encouraging employee growth and helping your team level up is what performance evaluations are about, but it doesn’t feel that way — the formal process tends to focus more on the broader organization than on the individual and their needs (or even their team’s needs), and it comes with big overheads.

One thing companies care a lot about with formal performance evaluations is that they be done as systematically and uniformly as possible across the organization, the idea being to stay as objective as possible. Getting all of these evaluations discussed, honed, and equalized takes a fair amount of work — up to and including long meetings where managers debate evaluation scores down to the decimals, as in Google’s “calibration” meetings. The resulting scores feed into things like compensation formulas and promotions, but they have little bearing on what really matters: improving how we work.

So, could we do without this process? Maybe.

We could delegate monitoring for exceptionally good or bad performance to individual managers, as the comment above suggests. One possible pitfall is a lack of consistency across managers, which in turn could lead to unequal treatment of people on different teams. We would want some level of transparency and checks and balances: a way for managers to share their assessments with each other and align on a shared standard. We’d probably want to document this, so we would write down what we mean by “exceptionally good” and “exceptionally bad” and share that with everyone on the team. Suddenly we have something that looks a lot like a nascent performance review system — perhaps a kinder one, with no decimal-point scores, but still something eerily familiar.

In my ideal world, we would work in a way where we could set compensation based on criteria as objective as possible, including market rates and some kind of measure of expertise (and a way of adjusting these over time). It would be amazing if we could trust managers to do a good job of helping their teammates grow, to recognize exceptionally good performance when they see it, and to correct poor performance as soon as it appears. This requires, first and foremost, trust, and a caring management team that is working hard to be as unbiased as possible. Unfortunately, most executive-level leaders I’ve worked with in tech seem to have trouble extending this level of trust to their managers (as I’ve recently written about here).

Some younger companies are experimenting with these ideas. Buffer, for example, says they no longer do performance reviews. Others are playing with tweaking the format and frequency of performance evaluations. However, I have yet to see a company with more than a couple of hundred employees that has figured out a way to get rid of some sort of periodic systematic performance assessment. I am hopeful, though, that over time more companies will experiment in this space and come up with better ways to handle employee performance and growth.