

Picking the right software developer metrics isn't as easy as it sounds.

As the demands on engineering leaders evolve, engineering metrics are a key tool for improving processes and aligning with business objectives. 

While this concept isn’t new, the sophistication and intentionality with which leaders approach these metrics have evolved significantly over the past few years. Leaders have learned that some metrics can be toxic, and that no single metric can capture the full picture on its own.

Engineering metrics provide insights into the software development process, helping to identify inefficiencies, inform resource allocation, and demonstrate value to stakeholders. Used correctly, they can drive continuous improvement and enhance predictability. These objective measures allow leaders to identify inefficiencies that might otherwise go unnoticed. 

For instance, tracking cycle time can highlight delays in the code review process or testing phases that weren’t apparent through anecdotal evidence alone. Similarly, deployment frequency metrics can shed light on the agility of an organization’s release process by pointing out when work is making it to production more slowly than expected.
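
As a rough illustration, here’s a minimal sketch of how you might break cycle time down by stage, assuming your delivery tooling can export per-change timestamps. The field names and figures below are hypothetical; substitute whatever your own platform provides.

```python
from datetime import datetime

# Hypothetical export from delivery tooling: one record per change, with
# ISO timestamps for when each stage started and finished.
changes = [
    {"opened": "2024-05-01T09:00", "review_started": "2024-05-02T10:00",
     "review_done": "2024-05-03T16:00", "deployed": "2024-05-03T18:00"},
    {"opened": "2024-05-02T11:00", "review_started": "2024-05-05T09:00",
     "review_done": "2024-05-05T12:00", "deployed": "2024-05-06T08:00"},
]

def hours(start, end):
    """Elapsed hours between two ISO-format timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

for c in changes:
    print(
        f"total {hours(c['opened'], c['deployed']):5.1f}h | "
        f"waiting for review {hours(c['opened'], c['review_started']):5.1f}h | "
        f"in review {hours(c['review_started'], c['review_done']):5.1f}h | "
        f"review to production {hours(c['review_done'], c['deployed']):5.1f}h"
    )
```

Even a crude breakdown like this makes it obvious whether time is going into writing code, waiting for review, or getting changes to production.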

Which metrics?

However, choosing the wrong metrics can create unintended incentives. There’s always a risk of overemphasizing easily quantifiable factors, such as PR cycle time or deployment frequency, at the expense of equally important qualitative aspects of software engineering, such as developer satisfaction and time spent waiting on other people.

DORA (DevOps Research and Assessment) metrics have become a widely adopted framework for assessing software delivery performance. These include deployment frequency, lead time for changes, mean time to recovery, and change failure rate. Together, they provide a high-level view of how quickly and reliably an organization can deliver software changes.
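
If you already collect deployment and incident records, the four DORA metrics can be approximated with very little code. Here’s a minimal sketch, assuming hypothetical record shapes pulled from CI/CD and incident tooling:

```python
from datetime import datetime
from statistics import median

# Hypothetical records pulled from CI/CD and incident tooling over one week.
deployments = [
    {"deployed_at": "2024-05-01T12:00", "committed_at": "2024-04-30T15:00", "caused_failure": False},
    {"deployed_at": "2024-05-03T09:00", "committed_at": "2024-05-02T17:00", "caused_failure": True},
    {"deployed_at": "2024-05-06T14:00", "committed_at": "2024-05-06T10:00", "caused_failure": False},
]
incidents = [{"opened": "2024-05-03T09:30", "resolved": "2024-05-03T11:00"}]

def hours_between(start, end):
    """Elapsed hours between two ISO-format timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

days_in_window = 7
deployment_frequency = len(deployments) / days_in_window  # deploys per day
lead_time_for_changes = median(hours_between(d["committed_at"], d["deployed_at"]) for d in deployments)
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)
mean_time_to_recovery = median(hours_between(i["opened"], i["resolved"]) for i in incidents)

print(f"{deployment_frequency:.2f} deploys/day, lead time {lead_time_for_changes:.1f}h, "
      f"change failure rate {change_failure_rate:.0%}, MTTR {mean_time_to_recovery:.1f}h")
```

Treat the numbers as directional rather than precise; the definitions (what counts as a failure, when the clock starts on recovery) matter more than the arithmetic.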

Process efficiency metrics provide a second layer of information, focusing on the flow of work through the development pipeline. Cycle time, throughput, and flow efficiency can all help you identify bottlenecks here. Flow efficiency – the proportion of time a piece of work spends actively being worked on versus waiting – is particularly useful for highlighting delayed impact and outright waste.
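
Here’s a minimal sketch of the flow efficiency calculation, assuming your tracker can export how long each work item spent in each status (the status names and data shape here are hypothetical):

```python
# Flow efficiency: the share of elapsed time a work item was actively being
# worked on rather than waiting. Map these statuses from your own tracker.
ACTIVE_STATUSES = {"in progress", "in review"}

def flow_efficiency(status_history):
    """status_history: list of (status, hours_spent_in_that_status) tuples."""
    active = sum(h for status, h in status_history if status in ACTIVE_STATUSES)
    total = sum(h for _, h in status_history)
    return active / total if total else 0.0

# One ticket spent 10 of its 60 elapsed hours being actively worked on.
ticket = [("to do", 30), ("in progress", 6), ("waiting on QA", 20), ("in review", 4)]
print(f"flow efficiency: {flow_efficiency(ticket):.0%}")  # ~17%, i.e. mostly waiting
```

Low flow efficiency usually points at queues (review backlogs, handoffs, environments people wait on) rather than at how fast anyone is working.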

How do I choose?

One common pitfall is the temptation to track too many metrics at once. This can lead to information overload, making it difficult to discern which insights are truly meaningful and actionable.

"When a measure becomes a target, it ceases to be a good measure.” – Goodhart's Law.

Some metrics – such as code churn, cyclomatic complexity, and lines of code – are easily quantifiable but don’t reflect the value or complexity of software work. For example, velocity metrics based on story points or the number of tickets completed can incentivize breaking work into smaller, less meaningful chunks, or prioritizing easy tasks over more impactful but challenging work.

As an engineering leader, you can turn this concern to your advantage. If a focus on metrics leads to more, smaller pull requests, that’s not gaming the system. That’s a throughput win.

Using metrics effectively

  • When it comes to picking metrics, start small and focus on a few areas of improvement. 

As you gain experience and insight, you can evolve your metrics based on the team’s needs and learnings. For example, DORA metrics might reveal that a team has a high mean time to recovery. That might prompt you to start tracking test coverage and to set better team goals around it.

Achieving organization-wide goals against DORA-type metrics can be challenging if you’re trying to get everyone to a “good” number. Instead, set goals around steadily improving the numbers and not backsliding. Think of DORA metrics like a thermometer: they tell you if you have a fever, but they don’t tell you why.

  • Make metrics visible and accessible to all relevant stakeholders, promoting transparency and accountability. 

No engineering metrics should be tracked at a leadership level that aren’t also visible to teams and individuals. This transparency helps build trust and ensures everyone understands what’s being measured and why. The same transparency helps teams quickly recognize when the data is wrong or misleading, improving overall data quality.

  • Metrics are for insight and improvement, not punishment. 

Encourage team-level ownership of metrics and improvement initiatives. When team members feel a sense of ownership, they’re more likely to drive change actively. Encourage teams to use the metrics to tell stories about the challenges they’re facing – this can help them embrace the idea of capturing this kind of data.

  • Regularly reassess the relevance and impact of chosen metrics. 

As processes change and evolve, so too should your metrics. Be prepared to retire metrics that no longer serve you well or introduce new ones that better reflect your current goals and challenges.

  • Remember that metrics are indicators, not ends in themselves. 

They should inform decision-making rather than dictate it. Consider the broader context when interpreting metrics, and don’t neglect qualitative insights like feedback from engineers.

Connecting metrics with business outcomes, not outputs

While engineering metrics provide valuable insights into the software development process, engineering leaders need to connect these metrics to meaningful business outcomes. This means working with business stakeholders to align their goals with broader company objectives. 

It’s also important to measure the impact of engineering improvements on business metrics: for example, tracking how reductions in change failure rate correlate with lower customer support costs or improved Net Promoter Scores.
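
As an illustration, a simple correlation check can show whether two such series move together. The monthly figures below are placeholders, not real data:

```python
from statistics import correlation  # Python 3.10+

# Placeholder monthly figures, purely illustrative; real numbers would come
# from your own incident and support reporting.
change_failure_rate = [0.18, 0.15, 0.14, 0.11, 0.09, 0.08]  # share of deploys causing failures
support_tickets = [420, 390, 400, 330, 290, 270]             # customer support tickets per month

r = correlation(change_failure_rate, support_tickets)
print(f"Pearson r = {r:.2f}")  # a strongly positive r means the two series move together
```

Correlation isn’t causation, but a consistent relationship like this gives you a concrete, business-facing story to tell alongside the engineering numbers.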

For example, rather than simply tracking deployment frequency, consider how this metric relates to the ability to respond quickly to customer needs or market changes. A higher deployment frequency might enable the business to iterate on features more rapidly, leading to increased customer satisfaction and retention. 

Similarly, when measuring lead time for changes, consider how this metric impacts the company’s ability to deliver value to customers. Shorter lead times might allow the business to capitalize on market opportunities more quickly, or address critical issues faster, resulting in improved customer experiences and potentially increased revenue.

By connecting engineering metrics to business outcomes, leaders can demonstrate the tangible value of their teams’ efforts and make more informed decisions about where to focus improvement initiatives. This approach helps justify investments in engineering effectiveness and ensures that technical improvements directly contribute to the company’s success.

A careful approach is worth it

When you’re thoughtful about how you implement and use engineering metrics, the data can be a powerful catalyst for improvement in software organizations. Engineering metrics provide valuable insights into processes, productivity, and quality, enabling continuous improvement and data-driven decision-making.

Remember – and help your stakeholders remember – that the goal of metrics isn’t to hit arbitrary targets. Use data to inform decisions that lead to better software, more effective teams, and delivery that is more directly tied to business outcomes.