From cycle time, to story points, to lead time… finding effective ways to measure team performance is a well-known point of contention for managers of engineering teams. And as with most engineering leadership challenges, there is no one-size-fits-all approach.
To try and get to the bottom of an almost universally shared pain point in the industry, we brought together five brilliant engineering leaders to share their thoughts on how they understand and measure engineering velocity, as well as examples of ways to improve it.
In software engineering, developer velocity is a crucial indicator of productivity and efficiency. However, engineering velocity isn’t just about analyzing the numbers – there are many other human and technical elements to consider.
When measuring velocity, the right metrics provide an important window into tracking progress and pinpointing obstacles. A robust plan for tracking velocity should consider both quantitative data and qualitative factors. While tangible metrics like sprint speed, lead time, and throughput provide valuable insights into team performance, it’s also critical to consider aspects such as code quality, customer satisfaction, and team morale.
Tailoring metrics to a team’s specific needs and goals is also crucial. The expectations for developer velocity at a 10-person startup will differ greatly from those at a large tech company. This requires setting clear objectives, establishing benchmarks, and fostering an environment where teams can innovate while meeting their commitments.
Once you’ve identified the metrics that are right for your team, consider how you’ll use them on a daily basis. Metrics should act as guides for improvement, not tools for reprimand. If applied correctly, regular review of these metrics, consistent communication, and adaptation will foster a culture of ongoing improvement.
Measuring velocity is about identifying what’s working well and what isn’t. That means treating data as feedback, so engineering teams can understand how best to pivot and take on new or additional work. When a customer or end user is unhappy with a new feature – feedback remains consistently negative, usage drops, or sales decline – stakeholders look to engineering managers to investigate the root cause of these outcomes.
An operational investigation typically leads to “engineering work” – often taking the form of a feature fix, implementation, or verification. That work is frequently blocked by a combination of technical debt, other feature development, and slow time to delivery. Technical debt can come in the form of poorly or hastily written code, or dependency debt, where a codebase relies on insecure or out-of-date third-party libraries, frameworks, and components. Inadequate build processes, or architecture decisions that block future feature development, can also result in build debt.
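For the dependency flavor of that debt, even a crude report can make the problem concrete. Below is a minimal sketch that assumes a Python codebase with pip-managed dependencies; it simply counts packages that have fallen behind their latest release. Other ecosystems have equivalent commands.

```python
import json
import subprocess

# A rough first pass at quantifying dependency debt: list packages that are
# behind their latest release. Assumes pip is on the PATH for this project.
result = subprocess.run(
    ["pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
outdated = json.loads(result.stdout)

for pkg in outdated:
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")

print(f"\n{len(outdated)} dependencies behind their latest release")
```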
When improving engineering velocity to enable engineering work, it’s important to look carefully at the various dimensions and be very specific about what contributes to, or takes away from, a team’s engineering velocity. I’ve led workshops where teams map out every step in their development workflow to better understand where time is spent and to identify bottlenecks. For example, if a team cannot deliver a feature fix on time because of technical debt, being specific about what that technical debt actually is can provide clarity and promote efficiency.
Understanding the type of blocker associated with your engineering velocity and outlining the specific fixes with the team can ultimately help others prioritize and take action accordingly in roadmap plans, agile workflows, other project planning, and general team orientation.
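To make that workflow-mapping exercise concrete, here is a minimal sketch that breaks each ticket’s journey into per-stage durations so the slowest stage stands out. The stage names, field names, and data are hypothetical; in practice they would come from your issue tracker’s history.

```python
from datetime import datetime
from statistics import mean

# Hypothetical ticket records: each maps a workflow stage to the date the
# ticket entered that stage. Stages and values are illustrative only.
tickets = [
    {"created": "2024-01-02", "in_progress": "2024-01-03", "in_review": "2024-01-08",
     "in_qa": "2024-01-09", "done": "2024-01-12"},
    {"created": "2024-01-04", "in_progress": "2024-01-04", "in_review": "2024-01-10",
     "in_qa": "2024-01-11", "done": "2024-01-15"},
]

STAGES = ["created", "in_progress", "in_review", "in_qa", "done"]

def days_between(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Average days spent in each stage; the largest number is the likely bottleneck.
for start, end in zip(STAGES, STAGES[1:]):
    durations = [days_between(t[start], t[end]) for t in tickets]
    print(f"{start} -> {end}: {mean(durations):.1f} days on average")
```

Whichever stage dominates the averages is where being specific about the blocker – review wait time, QA environment contention, an outdated dependency – starts to pay off.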
As an engineering leader, my focus is on enhancing team performance and delivering exceptional-quality work.
When discussing engineering “velocity,” we refer to the speed at which a team delivers software. A few key metrics I focus on to improve that velocity are cycle time, build time, and average story size:
- Cycle time measures the duration from task creation to completion, encompassing stages like code review and QA. It helps identify bottlenecks at specific stages of development, allowing teams to streamline processes and increase development speed.
- Build time is crucial to continuous integration/delivery (CI/CD). It measures the time taken for code changes to progress through the build pipeline. Reducing build time is essential for swift development, timely feedback to developers, and rapid iteration.
- Average story size reflects how work is broken down: splitting stories into small, manageable tasks boosts velocity, because small, well-defined tasks are completed faster and generate positive momentum within the team. A rough sketch of how all three metrics might be calculated follows this list.
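As a minimal sketch, assuming task and pipeline data exported from an issue tracker and CI system (the records and field names below are invented for illustration), the three metrics could be calculated like this:

```python
from datetime import datetime
from statistics import mean

# Hypothetical exports: completed tasks from an issue tracker, plus durations
# of recent CI pipeline runs in minutes.
tasks = [
    {"created": "2024-03-01T09:00", "completed": "2024-03-05T17:00", "points": 3},
    {"created": "2024-03-02T10:00", "completed": "2024-03-04T12:00", "points": 2},
    {"created": "2024-03-03T09:30", "completed": "2024-03-08T16:00", "points": 5},
]
build_durations_min = [11.2, 9.8, 14.5, 10.1]

def days(start: str, end: str) -> float:
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 86400

cycle_time = mean(days(t["created"], t["completed"]) for t in tasks)
build_time = mean(build_durations_min)
avg_story_size = mean(t["points"] for t in tasks)

print(f"Cycle time:         {cycle_time:.1f} days")
print(f"Build time:         {build_time:.1f} minutes")
print(f"Average story size: {avg_story_size:.1f} points")
```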
While these metrics offer valuable insights, their effectiveness depends on wider workflow optimization, including clearly outlined goals and deliverables from product managers, the team’s use of effective estimation, and having clear definitions of “done”. All are crucial for maximizing efficiency and, therefore, velocity.
Ultimately, improving engineering velocity is indispensable for organizations striving to remain competitive and adaptable. On a more micro level, it also positively impacts team health; it’s motivating for everyone involved when things are running as efficiently as they should!
Missed deadlines, slow lead time, and constant bug fixes are signals that you may need to improve velocity. Instead of placing pressure on your engineers for speed, look to understand why development is slow, what changes you can make to improve it, and if those changes are working.
How do you find the right metric for you and your team?
Talk to your stakeholders and your team to understand what isn’t working for them and what they find frustrating.
- If it takes a long time for each story to be released, look at cycle time, and identify how long each step takes. Set a plan to improve it.
- If you feel like you are fixing bugs after every release, look at your change failure rate: the percentage of deployments that cause issues in production. Identify why bugs are not being spotted before shipping.
- If your team is constantly firefighting in production, look at the mean time to resolution: the average time from an incident being reported to it being fixed in production. Identify where the time is going. A sketch of how change failure rate and mean time to resolution might be calculated follows this list.
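As a minimal sketch, assuming you keep simple records of deployments and production incidents (the data and field names below are invented for illustration), the two metrics could be calculated as follows:

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: one entry per deployment and per production incident.
deployments = [
    {"id": "d1", "caused_incident": False},
    {"id": "d2", "caused_incident": True},
    {"id": "d3", "caused_incident": False},
    {"id": "d4", "caused_incident": False},
]
incidents = [
    {"reported": "2024-05-02T08:15", "resolved": "2024-05-02T11:45"},
    {"reported": "2024-05-10T22:00", "resolved": "2024-05-11T01:30"},
]

# Change failure rate: share of deployments that caused a production issue.
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

def hours(start: str, end: str) -> float:
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

# Mean time to resolution: average hours from report to fix.
mttr_hours = mean(hours(i["reported"], i["resolved"]) for i in incidents)

print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to resolution: {mttr_hours:.1f} hours")
```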
You’ve picked your metric – what next?
Set out to gather current data on your preferred metrics. Track the three metrics mentioned above and set a goal to work towards in the next sprint, incorporating team feedback into your calculations. At the end of the process, brainstorm with your team about how you might improve your metric. What worked and what didn’t work? Vote on some changes and try to implement them in the next sprint.
Avoid altering too many things at once – it will be difficult for the team to keep track of it all, and you won’t know which change had the highest impact. Make sure the metric is visible to the team, so everyone is aware of how they’re doing. If you find that you just can’t seem to move the needle, you may be using the wrong metric for you and your team. Go back to square one, evaluate the data, and pick the next metric to improve.
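One lightweight way to keep the metric visible, and to connect each sprint’s single change to its effect, is a simple sprint-over-sprint report. The sketch below is hypothetical: the values, target, and per-sprint changes are invented, and cycle time in days is assumed as the chosen metric.

```python
# Sprint-over-sprint view of one chosen metric against a team-agreed target.
# Values are illustrative; replace them with your own exported data.
target = 4.0  # goal: cycle time of 4 days or less
sprints = {
    "Sprint 21": 6.2,  # baseline, no change yet
    "Sprint 22": 5.4,  # change: smaller stories
    "Sprint 23": 5.5,  # change: stricter WIP limit
    "Sprint 24": 4.1,  # change: automated QA environment setup
}

previous = None
for name, value in sprints.items():
    delta = "" if previous is None else f" ({value - previous:+.1f} vs last sprint)"
    status = "on target" if value <= target else "above target"
    print(f"{name}: {value:.1f} days, {status}{delta}")
    previous = value
```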