# Benchmarks & comparisons

Understanding whether your team’s performance is on track isn’t always straightforward. Are your pull request cycle times good enough? Should you be concerned about your change failure rate? And which teams might need extra support?

Swarmia provides benchmarks for the most important engineering metrics, along with tools to compare teams within your organization.

## **Why benchmarks matter**

Swarmia benchmarks help you identify improvement areas by giving you a proven point of reference. Instead of guessing what “good” looks like, you can see how your metrics compare to industry-validated performance levels.

[Benchmarks](https://www.swarmia.com/blog/engineering-benchmarks/) are most useful when you treat them as **directional guidance**, not targets to optimize blindly. They help you:

* Spot the parts of the workflow that are holding teams back
* Prioritize improvement efforts
* Create a shared understanding of what healthy engineering performance looks like

## **Why team-to-team comparisons matter**

Comparing teams within your organization often reveals more actionable insights than looking at benchmarks alone.

High-performing teams show what’s *already possible* in your own context. When some teams consistently perform better than others, improving the organization’s overall performance usually means helping struggling teams adopt similar ways of working.

To improve performance across the organization:

* Focus on **closing the gap between the best and worst performing teams**
* Look for differences in processes, tooling, ownership, or how work is broken down
* Use strong teams as learning examples — not as pressure mechanisms

The goal isn’t to rank teams, but to understand where support, coaching, or structural changes will have the biggest impact. [Swarmia’s working agreements](https://help.swarmia.com/continuous-improvement/working-agreements) can also help teams adopt proven practices and build better habits, once you’ve identified where improvement is needed.

## **How to use benchmarks and comparisons in Swarmia**

When viewing metrics in Swarmia, click the **Previous period** button above the table to select from three options:

* **Previous period (default).** See how metrics have changed over time and whether your improvement efforts are working.
* **Organization.** Compare individual teams against your organization’s baseline to identify outliers, improvement opportunities, and teams that may need extra support.
* **Swarmia benchmark.** See how your organization and teams compare to industry standards based on proven benchmarks.

<figure><img src="https://2772466312-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FMa8uBmGhQgR7MTPq9yh7%2Fuploads%2FQwra64KixxFM4RBMeaXf%2FScreenshot%202026-02-06%20at%2011.29.51.png?alt=media&#x26;token=41852c27-14db-4565-912b-25a4e160a224" alt=""><figcaption><p>Select from the different comparison options. Previous period is the default.</p></figcaption></figure>

When using Swarmia benchmarks, you’ll see a color-coded label — **great**, **good**, or **attention** — that indicates how your performance compares to benchmark ranges. These labels help you quickly spot which metrics are at a healthy level and where you should consider digging deeper.

For organization comparisons, you’ll see the difference displayed next to each metric. This makes it easy to spot which teams are ahead or behind the org average.

Swarmia benchmarks are available for **code metrics** and **DORA metrics**, while organization comparisons work across all normalized code, DORA, and issue metrics.

|                         | Great                                  | Good                           | Needs attention                          |
| ----------------------- | -------------------------------------- | ------------------------------ | ---------------------------------------- |
| Pull request cycle time | < 24 hours                             | < 5 days                       | ≥ 5 days                                 |
| Review rate             | ≥ 90%                                  | ≥ 80%                          | < 80%                                    |
| Time to first review    | ≤ 4 hours                              | ≤ 24 hours                     | > 24 hours                               |
| Time to merge           | ≤ 4 hours                              | ≤ 24 hours                     | > 24 hours                               |
| Batch size              | < 200 lines                            | < 500 lines                    | ≥ 500 lines                              |
| Deployment frequency    | <p>≥ 10 per week<br>(Continuously)</p> | <p>≥ 5 per week<br>(Daily)</p> | <p>< 5 per week<br>(Less than daily)</p> |
| Change lead time        | < 24 hours                             | < 5 days                       | ≥ 5 days                                 |
| Time to deploy          | < 15 mins                              | < 60 mins                      | ≥ 60 mins                                |
| Change failure rate     | < 5%                                   | < 15%                          | ≥ 15%                                    |
| Mean time to recovery   | < 1 hour                               | < 3 hours                      | ≥ 3 hours                                |
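The thresholds in the table above can be expressed as simple range checks. The sketch below is a hypothetical illustration of how two of the benchmark labels map to values; the function names are made up for this example and are not part of Swarmia's product or API:

```python
# Illustrative only: maps metric values to the benchmark labels from the
# table above. Thresholds match the table; everything else is hypothetical.

def label_pr_cycle_time(hours: float) -> str:
    """Pull request cycle time: < 24 hours is great, < 5 days is good."""
    if hours < 24:
        return "great"
    if hours < 5 * 24:
        return "good"
    return "attention"

def label_change_failure_rate(percent: float) -> str:
    """Change failure rate: < 5% is great, < 15% is good."""
    if percent < 5:
        return "great"
    if percent < 15:
        return "good"
    return "attention"
```

For example, a team with a 48-hour pull request cycle time would land in the **good** range, while a 10% change failure rate would also be labeled **good** but leave room to improve toward the < 5% **great** threshold.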
