Fair? Does anyone care about fair? Well, outside of politics, yes, people do. But it really has much less to do with fairness per se than with self-interest.
Let’s say you are the CEO of a company. You have four division heads. They all work hard and believe they do their jobs well, and it is your job to evaluate them. If you evaluate them incorrectly, say giving a better grade (and therefore bonus and promotion) to one over another, they will view it as unfair. But it is not just about fairness. If the manager who feels treated unfairly is a good executive, they are likely to leave for better opportunities. Similarly, if the one you promoted “unfairly” receives greater authority, they are likely to hurt your bottom line and, eventually, your own job performance.
What if manager A had a profit margin of 20% and a market share of 8%, while manager B had a profit margin of 15% and a market share of 6%? Clearly, manager A is doing a better job, and should get higher compensation and better promotion, right? On the other hand, what if you started the year with A at a profit margin of 22% and a market share of 10%? Your “better” manager actually lost two points of margin and two points of market share. In the meantime, manager B started the year with a margin of 10% and a market share of 3%. This “worse” manager grew his margin by half, and doubled his market share. Who’s the star now?
On the other hand, what if the market for A is fragmenting dramatically: new entrants everywhere, your own division disrupting it, and all the other competitors dropped market share by half, with margins falling to 12%? Your manager A limited his drop in market share to a fifth (from 10% to 8%), and kept margins well above the market average. Everyone was suffering; he made sure you suffered a lot less!
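The point is easiest to see when you tally the raw figures against the year-over-year changes. A quick sketch, using the hypothetical numbers from the example above:

```python
# Hypothetical year-start and year-end figures from the example above.
managers = {
    "A": {"margin_start": 22.0, "margin_end": 20.0,
          "share_start": 10.0, "share_end": 8.0},
    "B": {"margin_start": 10.0, "margin_end": 15.0,
          "share_start": 3.0, "share_end": 6.0},
}

for name, m in managers.items():
    margin_delta = m["margin_end"] - m["margin_start"]    # percentage points
    share_growth = m["share_end"] / m["share_start"] - 1  # relative change
    print(f"{name}: margin {margin_delta:+.1f} pts, share {share_growth:+.0%}")
# A: margin -2.0 pts, share -20%
# B: margin +5.0 pts, share +100%
```

Ranked on end-of-year numbers, A wins; ranked on change, B wins. Neither ranking is “the” right one until you decide what each division was supposed to achieve.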
So should you evaluate on market share and profit? On revenues? On changes? On position relative to the market? How about on change in position relative to changes in competitors’ markets? The simple answer is all and none. The more complicated (but more correct) answer is that a few simple metrics that work across all your direct reports only work with task-based employees. You can measure software engineers based on functioning productive output, customer service agents based on tickets resolved, and salespeople based on pure revenue. But at a higher level, you must be aware of the unique circumstances of each group. More specifically, you need to decide what the goal of the particular group is, and only then craft an evaluation plan. One manager’s goal is to grow a business by 50% to a grand total of $10MM in revenue, the other’s to slow the slide in a business with $100MM in revenue, and yet you might pay the first manager more than the other.
Does this mean that your evaluation plan for manager A might be different from that for manager B? Sure; it probably should be. Does it mean that you might have both managers excel, or both fail? Of course it does. If you claim to be hiring A players, then be ready for it. The success of one does not come at the expense of the other, and you had better have enough in your compensation budget to pay out rewards to all of them.
And yet, human nature is to look for a single metric to compare individuals (and everything else). An article in a recent WSJ showed that MBA admissions officers favoured candidates with a higher GPA, even over those with a lower GPA from a better school. They will select a 3.4 GPA candidate from a class with a 3.6 GPA average (i.e. someone in the bottom half of the class) over one with a 3.2 GPA from a class with a 3.0 GPA average (i.e. someone in the top half of a very demanding class). Similarly, in an experiment, executives preferred to promote airport managers based on the on-time performance of their airport, completely ignoring what the performance was before the candidate arrived. It isn’t just the performance that matters; it is also the change wrought by the manager.
People prefer simple metrics, but they are dangerous. I personally have seen companies reward software engineers based on the number of lines of code written or modules shipped. You get what you pay for: they shipped buggy products with too many lines of code. Modify the metrics, and you modify the behaviour.
My favourite metric is one I designed at a client, called “Velocity.” The engineering and operations staff get +1 for every deployment that works. They get -1 for every deployment that either creates a new bug, has to be rolled back, or does not complete the specifications targeted for the release. The metric drives the engineers to release small bits of safe code as often as possible… which is exactly what we wanted.
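The scoring rule itself is trivial to express. A minimal sketch, assuming each deployment is recorded with flags for the three failure conditions (the class and field names here are illustrative, not the client’s actual system):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    new_bug: bool = False          # introduced a new bug
    rolled_back: bool = False      # had to be rolled back
    incomplete_spec: bool = False  # missed the specs targeted for the release

def velocity(deployments):
    """+1 for every clean deployment, -1 for any that hits a failure condition."""
    score = 0
    for d in deployments:
        if d.new_bug or d.rolled_back or d.incomplete_spec:
            score -= 1
        else:
            score += 1
    return score

# Ten small, safe releases beat three ambitious, risky ones:
safe = [Deployment() for _ in range(10)]
risky = [Deployment(), Deployment(rolled_back=True), Deployment(new_bug=True)]
print(velocity(safe))   # 10
print(velocity(risky))  # -1
```

Because every failed deployment costs exactly what a clean one earns, the only way to raise the score is more frequent, lower-risk releases, which is the behaviour the metric was built to reward.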
You will always get what you measure, and what you pay for. Make sure your metrics encourage precisely the behaviour you want.