3 simple productivity metrics for every software team
If you want to know how well a business is doing, you usually ask about its revenue and its customers. These two metrics are easy for everyone to understand, and they generally give you a good idea of how successful an organization is.
But when it comes to measuring the performance of a team, things can quickly get murky. Velocity, throughput, lead time, cycle time, MTTR, FCR, coverage, churn... There's no shortage of metrics, but they're often technical, domain-specific, and hard to get out of the box.
As a result, we tend to rely only on tracking sales at the end of the funnel while hoping that productivity stays the same. And it's only after customers have started to leave us that we look back at the way we work to understand what went wrong.
I want to offer three simple metrics that every software team can use to measure and improve how they work. It doesn't matter what tech stack you have, whether you practice continuous integration or continuous delivery, or whether you run a monolith or microservices. These metrics are designed to be easy to adopt and share with your team, and by focusing on them you will be far more likely to increase productivity and end up with happier customers.
Release frequency: how often do you deliver improvements to your customers?
I need to start by making a distinction between the act of deploying to production and the act of releasing to your customers. The code for a feature might be in production, but the feature is only released once people can actually use it.
So the first question to ask your team is how often you put changes in the hands of your customers. It matters for several reasons. First of all, it's about staying competitive. Technology has completely changed the pace of innovation, and companies can now disrupt decades-old industries within a couple of years (Uber, Airbnb, Netflix). The more often you get feedback on your ideas, the more data you have to improve your product.
The other benefit is that it reduces development risk. Your team becomes more familiar with your deployment workflows, and you get a natural push to automate your processes. It also means shipping smaller batches of changes to production, so you're less likely to release conflicting changes and it's easier to troubleshoot problems.
Finally, a rapid development cycle accelerates decision-making. Your team can reach consensus much faster when they know they can iterate within weeks instead of months. But if your product team knows they'll be committed to a version for months, you'll start to see them debating even the most trivial change of color or text for days.
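If you want to put a number on this, here is a minimal sketch of how you could compute it, assuming you can export release timestamps from your deployment tool. The dates below are made up for illustration.

```python
from datetime import datetime

# Hypothetical release timestamps exported from your deployment tool.
releases = [
    datetime(2024, 1, 3),
    datetime(2024, 1, 17),
    datetime(2024, 2, 2),
    datetime(2024, 2, 20),
]

# Release frequency: releases per week over the observed period.
span_days = (max(releases) - min(releases)).days
releases_per_week = len(releases) / (span_days / 7)
print(f"Release frequency: {releases_per_week:.1f} releases/week")
```

Tracking the trend week over week matters more than the absolute number: the goal is to see the frequency go up, not to hit a magic target.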
Cycle time: how long does it take for a commit to reach production?
Sometimes teams ship code often, but the changes going to production have been soaking in a staging environment for several weeks beforehand. There can be good reasons for that, but I would advocate reducing the time it takes a developer to know how their work is affecting production. Not only will you minimize context switching for developers, but it's also a great way to foster a better DevOps culture.
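One rough way to measure this, sketched below under the assumption that you can join commit timestamps from version control with the time each commit reached production, is to compute the median of those gaps. The pairs here are invented for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit time, time that commit reached production) pairs,
# joined from your version control history and deployment logs.
deployments = [
    (datetime(2024, 2, 1, 9, 30), datetime(2024, 2, 1, 15, 0)),
    (datetime(2024, 2, 2, 11, 0), datetime(2024, 2, 5, 10, 0)),
    (datetime(2024, 2, 6, 14, 0), datetime(2024, 2, 7, 9, 0)),
]

# Cycle time per change; the median keeps one slow outlier from dominating.
cycle_times = [deployed - committed for committed, deployed in deployments]
print(f"Median cycle time: {median(cycle_times)}")
```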
The DevOps movement is about recognizing that teams are more efficient when they share responsibility for the production environment instead of throwing code over the wall to the Ops team. A fast cycle time lets you change the definition of done from "the code reached staging" to "the code is in the hands of customers". It's a subtle but critical difference that drives better customer focus and increases the quality of the work.
Bugs/user: how many bugs make it to your customers?
So far we've focused on increasing the velocity of the team. But we need a control metric to make sure we're not sacrificing quality for speed. I'm a big advocate of continuous integration and continuous delivery, but it can take time to build a good CI culture. In the meantime, there's a simple formula you can use to make sure you're moving fast without breaking things (too much): take the number of bugs that have been caught or reported in production and divide it by the total number of users in your system.
Once you know your baseline, you need to make sure you don't see spikes after a release. It's virtually impossible to ship bug-free code, but you can check that the quality of your releases isn't dropping significantly. Don't mistake this for me saying that you shouldn't try to catch bugs before production. Au contraire! Knowing how many bugs get past your team, and having a goal to reduce that number, will most likely push you to implement tests that catch regressions early.
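As a rough sketch, assuming you can pull the number of production bugs from your issue tracker and your user count from your analytics, the calculation and a naive spike check might look like this (the counts and the 1.5x threshold are made up):

```python
# A simple bugs/user calculation with a naive spike check. The counts and
# the 1.5x threshold are illustrative; pull real numbers from your issue
# tracker and your user analytics.
def bugs_per_user(production_bugs: int, total_users: int) -> float:
    """Bugs caught or reported in production, divided by total users."""
    return production_bugs / total_users

baseline = bugs_per_user(production_bugs=12, total_users=4800)
after_release = bugs_per_user(production_bugs=21, total_users=5000)

print(f"Baseline:      {baseline:.4f} bugs/user")
print(f"After release: {after_release:.4f} bugs/user")

if after_release > baseline * 1.5:  # arbitrary threshold for illustration
    print("Quality dropped after the release; worth investigating.")
```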
The whole is greater than the sum of its parts
You will need to work on all three metrics together to improve your productivity. If you focus on one and forget the others, you risk making tradeoffs that are detrimental overall. For instance, you can quickly reduce your bugs/user count by not shipping new features, but there's a high chance your product will become stale. Or you can start releasing every day by shipping broken code to your customers, which will quickly drive people to abandon your product.
But if you focus on driving all three metrics together, you will automatically have an incentive to improve your workflows, adopt better automation, and get closer to your users. Then, as you get comfortable, you will start bringing in more sophisticated metrics to help you optimize things even more.
Squadlytics is a productivity analytics platform for software teams. Try it for free today.
(Photo by Himesh Kumar Behera on Unsplash)