How to foster data-driven tech leadership

Using metrics to empower teams and improve delivery
August 19, 2021

Translating a project’s technical needs to a client is no small task, especially when you need to be there for your team at the same time.

How do you strike a balance? Is it essential to know how to align engineering metrics with business goals? How can we empower engineers to push features into production backed by valuable metrics?

Tech leaders are beginning to realize the need for more alignment between developers and business goals. As team leads, we need to develop metrics that bring visibility to product owners, such as time to merge, cycle time, and other delivery pipeline-related metrics. Aligning these metrics with business KPIs can help teams understand how their work impacts company objectives. With growing visibility, your team can perform better and deliver more value to the product. Additionally, with information like cycle time in hand, problems in features still under development can be caught early. Finally, if product owners have a greater understanding of how their product is performing from a technical perspective, they can make decisions backed by reliable data.

In this article, I’ll talk about how metrics helped the engineering team at Vinta ship more reliable features to our customers, and helped the team become more knowledgeable and productive.

Collecting initial metrics

In its early years, Vinta was more focused on web development and MVP construction. We concentrated on development pipelines without going too deep into product consultancy. As a result, early metrics were more basic, such as the number of tasks developed in a sprint or code coverage. However, as the number of developers and the complexity of the projects grew, we started to dive into other areas such as product management and tech consultancy. In addition, we felt the growing need for data that would give a more detailed insight into how users were interacting with the product and how we could improve the quality and frequency of deliverables.

As the nature of Vinta’s projects changed, our desire to improve the quality of the products we were delivering grew with it. Understanding our burndown chart and how much of our code was covered by unit and integration tests had been helpful so far, but we wanted to dig deeper. Was there other data we hadn’t considered that could help us further?

Diving deeper into development data

Once we started to understand more about product metrics and how to build better products, we moved to development metrics. The first step was to construct an internal product that evaluated each of the company’s areas, including product management, engineering, and design. This ‘State of’ research allowed us to see more accurately how each project was performing and whether it was conforming to or diverging from the company’s standard practices.

Once we defined the areas we wanted to measure, we needed to determine which topics would comprise these areas. Again, we worked to ensure that this tool would give proper visibility and rank each team practice against a company-defined standard, derived from a combination of our experience, the literature, and available reports, such as the State of DevOps.

With this in mind, we had a set of engineering-related metrics to track (a sketch showing how a few of them can be computed follows the list):

  • Time to Merge
  • Time to Review
  • Time to Open
  • Pull Request Size
  • Change Lead Time
  • Merge Rate
  • Time To Restore Service
  • Change Failure Rate
  • Code Coverage
  • Code Documentation Coverage
  • Code Quality
  • Deployment Frequency
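
For illustration, here is a minimal Python sketch of how a few of these cycle-time metrics could be computed from pull request timestamps. The PullRequest fields and the stage boundaries (for instance, measuring Time to Merge from the moment a PR is opened) are assumptions for the example rather than Vinta’s actual tooling or definitions.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median


@dataclass
class PullRequest:
    first_commit_at: datetime  # first commit pushed to the branch
    opened_at: datetime        # PR opened for review
    first_review_at: datetime  # first review submitted
    merged_at: datetime        # PR merged


def hours(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600


def cycle_time_report(prs: list[PullRequest]) -> dict[str, float]:
    """Median hours pull requests spend in each stage of the cycle."""
    return {
        "time_to_open": median(hours(pr.first_commit_at, pr.opened_at) for pr in prs),
        "time_to_review": median(hours(pr.opened_at, pr.first_review_at) for pr in prs),
        "time_to_merge": median(hours(pr.opened_at, pr.merged_at) for pr in prs),
    }


prs = [
    PullRequest(
        first_commit_at=datetime(2021, 8, 2, 9, 0),
        opened_at=datetime(2021, 8, 2, 15, 0),
        first_review_at=datetime(2021, 8, 3, 10, 0),
        merged_at=datetime(2021, 8, 3, 16, 0),
    ),
]
print(cycle_time_report(prs))
# {'time_to_open': 6.0, 'time_to_review': 19.0, 'time_to_merge': 25.0}
```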

These metrics helped us understand our delivery pace more fully and see where there was room for improvement. In addition to organizing what we wanted to track, giving the team visibility into these metrics was also essential to get to the next level.

Development metrics ‘side-effects’

Knowledge of our development cycle time (Time to Merge, Time to Review, Time to Open, PR Size, Merge Rate) and DevOps-related metrics (Change Lead Time, Deployment Frequency, Change Failure Rate, Time to Restore Service) was fundamental. It helped us properly define processes and practices that made both engineering and product teams happier. Pain points and bottlenecks in the development process became evident once we started paying attention to its cycle time. With this in hand, we successfully implemented rules for pull request size, feature flag usage, code review policies, and more.
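
As an illustration of what a pull request size rule might look like when enforced in CI, the sketch below fails the build when a PR’s diff grows too large. The 400-line threshold and the use of git diff --numstat here are assumptions for the example, not the actual policy or tooling Vinta used.

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # hypothetical team-defined threshold


def pr_size(base_ref: str = "origin/main") -> int:
    """Count lines added plus removed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base_ref}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for line counts
            total += int(added) + int(removed)
    return total


if __name__ == "__main__":
    size = pr_size()
    if size > MAX_CHANGED_LINES:
        sys.exit(f"PR changes {size} lines; please split it below {MAX_CHANGED_LINES}.")
    print(f"PR size OK: {size} lines changed.")
```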

As part of these changes, we made sure that the cards we were working with were small enough to be continuously integrated through small pull requests behind feature flags. This decreased our change failure rate, as incomplete features sat behind disabled flags and were only activated once validated. By the time customers started using a feature, it had already been through many small QA iterations. It is also important to highlight that this significantly changed our deployment routine, as we began to deploy on demand. The result was favorable to our Change Lead Time, Deployment Frequency, and Merge Rate metrics.
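
To make the pattern concrete, here is a minimal sketch of the feature-flag approach described above, with a plain in-memory dict standing in for whatever flag store or service a real project would use. The flag and function names are hypothetical.

```python
# Hypothetical in-memory flag store; a real project would back this with
# a database or a feature-flag service so flags can flip without a deploy.
FLAGS = {
    "new_checkout_flow": False,  # merged and deployed, but still dark
}


def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)


def legacy_checkout(cart: list[str]) -> str:
    return f"legacy checkout: {len(cart)} items"


def new_checkout(cart: list[str]) -> str:
    return f"new checkout: {len(cart)} items"  # incomplete work, still under QA


def checkout(cart: list[str]) -> str:
    # The unfinished path ships to production behind the disabled flag
    # and is only activated once QA has validated it.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)


print(checkout(["book", "pen"]))  # -> legacy path while the flag is off
```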

Conclusion

Previously, we had been oblivious to how we were performing in terms of cycle time, so these changes were a great improvement. Additionally, reviewers were glad that there weren’t any insanely huge pull requests anymore, and merge conflicts became rare as code was continuously integrated. The whole team also felt more confident and motivated seeing their work rapidly hitting production.

One final point to highlight is that these metrics also help tech leads execute technical assessments more confidently, as they don’t need to rely solely on intuition. Instead, there is a solid data-based foundation behind their decisions.
