Gaming engineering metrics doesn’t always have to be a bad thing. Here’s a model for cooperative gaming to drive positive technology change.
Many engineering leaders might shy away from the phrase “engineering metrics.” Software development is hard to quantify, and many metrics that have been elevated in the past – such as “lines of code” or “number of commits” – do a poor job of measuring the quality, impact, or efficiency of a team’s output.
Worse, these metrics are gameable. If engineers know their output is being assessed solely in terms of volume or speed, it incentivizes bad behavior: inefficient code, unnecessary changes, and quantity at the expense of quality.
In truth, every metric is potentially gameable, but that doesn’t mean the outcome of gaming each will be negative. The key is to use metrics that incentivize best practices. If your reporting strategy is sound, “gaming” the metrics should enable your team to progress. Emphasizing measurements that focus on how your team is working together, instead of what each individual is doing, will lead to better results.
It’s also important to look at multiple, complementary measurements, so that no single behavior is overly incentivized. Finally, your reporting strategy should be transparent: everyone should know and agree to the rules of the game.
It’s the work, not the workers
Agile software development is a collaborative activity. It makes sense for developers to be suspicious, or even resentful, of metrics that focus on individual outputs, because their work is highly interdependent. Metrics that track the outcomes of collaborative efforts will give you better insight into the health of an engineering organization – and “gaming” such metrics requires teams to communicate and collaborate effectively.
Take, for example, Pull Request Throughput – the total number of PRs merged over a defined period. The most effective way to “game” this metric is to break the work into smaller segments. Smaller PRs tend to get reviewed faster, and they result in a more maintainable codebase. Indeed, when we noticed a trend of increasing Cycle Times among our remote developers, we were able to trace the issue to bloated PRs. Coaching on this and a few other issues allowed us to increase our productivity by 83% month-to-month. The “cheat” for this metric was instituting a CI/CD best practice.
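To make the metric concrete, here is a minimal sketch of how PR Throughput and an oversized-PR check might be computed from merge data. The `PullRequest` record and the 400-line threshold are illustrative assumptions, not a Code Climate API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PullRequest:
    merged_on: date      # date the PR was merged
    lines_changed: int   # additions + deletions in the diff

def throughput(prs, start, end):
    """PR Throughput: count of PRs merged within [start, end]."""
    return sum(1 for pr in prs if start <= pr.merged_on <= end)

def oversized(prs, max_lines=400):
    """Flag PRs whose diff exceeds a (hypothetical) size threshold --
    bloated PRs tend to review slowly and inflate Cycle Time."""
    return [pr for pr in prs if pr.lines_changed > max_lines]
```

Tracking the oversized list alongside throughput is one way to confirm that a rising merge count reflects smaller PRs rather than rushed work.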
Balance, not tunnel-vision
Even when you’re optimizing for a metric that reinforces a best practice, it shouldn’t be done in a vacuum. If you’re focused on increasing throughput, you should still be keeping an eye on key code quality metrics, to ensure that you’re not sacrificing quality for the sake of quantity or speed.
Too much emphasis on nearly any single metric can have unintended effects, and a “good” number can hide a problem. This is exactly what one of our clients in the insurance industry experienced. Their engineering team was moving quickly, but not prioritizing the high-impact projects, and expending too many resources on basic configurations.
The engineering managers uncovered this problem because they were looking at multiple metrics to gain a more holistic view of their processes. If the company had taken a single metric at face value, they might never have discovered the opportunity to improve their engineering team’s health and process. In the end, the engineering team was able to automate some common mundane tasks, freeing up engineers for more complex work.
You can always guard against over-optimization by intentionally tracking complementary sets of measurements, and by choosing metrics that keep your team in balance.
Playing by the rules
With any reporting strategy, it’s crucial that your team knows what you’re monitoring and why. Your reporting strategy should be tied directly to goals that are meaningful to your organization, and metrics should be used as a tool to help your team meet their objectives. In particular, metrics should never be used to single an engineer out for punishment, though they can be used to help managers provide individualized, actionable coaching.
For example, one of our customers, a SaaS company in the HR space, was experiencing a lack of communication between their product and engineering teams. Engineers were prioritizing and developing features without input from sales or their users. This had a negative impact on the company’s product development. To address this issue, they restructured teams and paired product managers and engineering team leads together, giving the new teams a lot of operating autonomy, while standardizing their reporting structure to measure outcomes.
With an emphasis on flexibility and accountability, they looked at how many PRs were tied to Jira issues to demonstrate how the engineering teams’ work was supporting product development, and measured progress toward the outcomes most important to the organization.
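A simple way to approximate this measurement is to check PR titles (or branch names) for Jira issue keys. The sketch below assumes the standard Jira key format (an uppercase project key, a hyphen, and a number); the function name and inputs are hypothetical:

```python
import re

# Jira issue keys look like "PROJ-123": an uppercase project key,
# a hyphen, then an issue number.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def jira_linked_ratio(pr_titles):
    """Fraction of PRs whose title references a Jira issue key."""
    if not pr_titles:
        return 0.0
    linked = sum(1 for title in pr_titles if JIRA_KEY.search(title))
    return linked / len(pr_titles)
```

Reporting this as a ratio rather than a raw count keeps the metric comparable across teams with different throughput.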
Ultimately, you cannot – and should not – measure everything your team does. If you select and roll out metrics in service to your most important goals, choosing metrics that are primarily focused on teams’ outcomes and balanced to prevent over-optimization, you won’t have to worry about engineers trying to game them.
If your reporting structure is right, gaming the system might even become synonymous with improving your team.
At Code Climate, we regularly help customers identify the right data strategy for their objectives. Visit www.codeclimate.com to learn more about data-driven engineering management best practices or to book a consultation.