Stable Teams Should Be Non-Negotiable

How can delivery vary so much in time and quality?

A few years ago, we had a house built in rural Maryland. Working through a builder, we designed the layout and painlessly moved into our newly constructed home about 6 months later. After we took full ownership of the house, there were minimal construction issues (a few nail pops). Fast-forward a year. A family purchased the land next to ours and went through the same builder. It took about 9 months for the builder to complete the house and for them to move in. After they moved in, they had major construction issues (plumbing, electrical, HVAC). We were surprised and delighted by the builder and our overall experience. Our new neighbors were not. Since our houses had the same builder, we wanted to understand why theirs took an additional 3 months to complete and had so many quality issues. After comparing notes, we had our answer.

The root cause? The building foreman left the company after our house was completed, and there was nearly 100% turnover of the main construction team and the subcontractors (electricians, plumbers, HVAC, cabinet makers…). All of the optimizations the team had accumulated from building homes together were gone. Instead of taking the usual 6 months to build the house, they had to train new people (which slowed the remaining team members down), and mistakes were made along the way (requiring rework).


A stable team is a team that stays together (no adds or drops) over a period of time, ideally for at least 3 months. The construction team for our neighbors’ house had drops and adds throughout the 9-month construction, most importantly the building foreman. We had none.

Stable Teams have a few characteristics:

  1. Individuals on the team only belong to one team.

  2. The team has one backlog or queue of work.

  3. The team has limited dependencies on other teams.

  4. The team stays together (unchanged) for at least 3 months.


Stable teams can exist in any organization and at any level, from delivery teams to program teams to portfolio teams. If they meet the 4 criteria listed above, they are stable teams.


Plenty of surveys and studies have noted how stable teams are “better” than non-stable teams.

In particular, I’m going to reference a 2014 research study by Larry Maccherone, with over 50,000 respondents. A few of its key conclusions:

  1. Teams that stay together are more productive.

  2. Teams that stay together are more predictable.

  3. Teams that stay together are more responsive.


If we modify teams regularly or staff teams with people who are merely temporary members, we will never allow ourselves the opportunity to perform at a high level. If we consider Tuckman’s stages of group development (forming, storming, norming, and performing), we will continuously be forming and storming. We’ll never have the opportunity to norm and perform at the level of a stable team.


Having unstable teams hurts productivity, which makes sense. If we shift people around (think re-orgs), we have to train the new team members in the norms of the existing teams. While we are ramping up new teams or team members, we’re also not getting work done. The changes, though well-intended, will have negative impacts in the short term.

When trying to improve an organization, some start with a practices-first approach, installing Scrum, SAFe, Kanban, or some other framework. But whatever your framework, stable teams should be non-negotiable if you want a predictable system. Have a backlog of work, build and maintain a team, and then deliver value. As noted in the Maccherone research, stable teams resulted in throughput improvements (as much as 60%), increased predictability (variability of throughput improved as much as 40%), and quicker responsiveness (time in process improved as much as 60%).

Good organizational design should include stable teams. Leadership should be incentivized to keep teams together and incrementally improve them. Remember, if you make too many organizational changes too often, it will be at the expense of customers and the business.

Take a fresh look at LinkedIn Connections

Image Credit: Pictofigo

I'm slowly pruning my LinkedIn network. I recently noticed a lot of junk in my LinkedIn feed. It's like the second coming of Facebook or Twitter. That's when I noticed that a lot of the junk is from people I don't really know in real life (IRL). If I actually know you, I'm probably more tolerant of the weird stuff you might share. But if I don't know you, chances are you're going to get unfollowed and the connection removed. It's hard to maintain real relationships.

After a deeper review, I believe LinkedIn uses "Connections" as a KPI and does not want us to remove them. In the last week, I lost the ability to remove a connection from a user's post. Click on the ... in the upper right corner of a post in your feed. You can still easily unfollow the person (which keeps the connection but hides them from your feed). But a week ago, there was also a "remove connection" option. You'll now have to take the extra step of clicking on their profile to remove the connection (which notifies them that you're looking at their profile).

The connection pruning continues.

I believe that for LinkedIn to remain a relevant business networking site, people need to start focusing on having quality connections, not just accepting every connection request. Nobody should care if you have 500+ connections to people you don't know. It's like having 500+ followers on Twitter when 99% of them are bots.

What are your thoughts? What's your policy for accepting or not accepting connections on LinkedIn?

Outcomes > Outputs

[Table and graph: possible time allocations between Choice 1 and Choice 2]

Today, I want to challenge people to think differently about what they do every day.  It's not enough to just be busy.  That's called output.  I believe we need greater focus on outcomes.

Time is a resource that can't be saved. It can only be spent.  In the end, we're exchanging a limited resource for something of value. All things being equal, in the table and graph above, we're asked to split our time between creating Choice 1 and Choice 2. If we spend all of our time creating Choice 1, we'll have no time available for Choice 2.  The graphical curve provides a visualization of what combinations are possible.  Anything to the left of the curve (red) is possible. Anything to the right of the curve (blue) is not possible. We have to decide.
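As a minimal sketch of this trade-off (the capacity and per-unit costs are hypothetical numbers, purely for illustration), a feasibility check against the curve might look like:

```python
# Production possibilities sketch: a team with a fixed capacity splits its
# time between two choices. All numbers are hypothetical.
CAPACITY_HOURS = 40
HOURS_PER_UNIT = {"Choice 1": 4, "Choice 2": 5}

def is_feasible(units_choice_1, units_choice_2):
    """A combination is feasible (on or left of the curve) if its total
    time cost fits within the team's fixed capacity."""
    cost = (units_choice_1 * HOURS_PER_UNIT["Choice 1"]
            + units_choice_2 * HOURS_PER_UNIT["Choice 2"])
    return cost <= CAPACITY_HOURS

print(is_feasible(10, 0))  # True -- all time spent on Choice 1 (on the curve)
print(is_feasible(5, 4))   # True -- a mix, left of the curve
print(is_feasible(8, 4))   # False -- right of the curve, not possible
```

Every combination that passes the check costs us the alternatives we could have produced with the same hours; that opportunity cost is the decision the curve forces on us.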

The graph is something some may remember as a principle of economics: the Production Possibilities Curve.

An economy’s factors of production are scarce; they cannot produce an unlimited quantity of goods and services. A production possibilities curve is a graphical representation of the alternative combinations of goods and services an economy can produce. It illustrates the production possibilities model. In drawing the production possibilities curve, we shall assume that the economy can produce only two goods and that the quantities of factors of production and the technology available to the economy are fixed.
— University of Minnesota - Principles of Economics

You and your delivery team are the economy, and you have to choose what you are going to work on, given that you have limited time.  The choice above is unnecessarily difficult because it only focuses on output.  I challenge you to think about the combinations on the curve, leading with an outcome perspective. Given that the economy is fixed, which choices would give you disproportionate value?  Which outcome(s) provide the greatest benefit to your customers?

So, next time you prioritize your work, consider both the amount of time it takes to complete the work and the value it will provide.

Velocity Metric and the Cobra Effect

The cobra effect occurs when an attempted solution to a problem makes the problem worse, as a type of unintended consequence.


According to reports from the 1800s, the British Empire wanted to reduce deaths caused by cobra bites in Delhi. Thinking the best solution was to reduce the number of cobras, it offered the locals a financial incentive for every cobra skin. Some locals saw an opportunity to earn money by farming cobras. When the government became aware of this, it removed the incentive; cobra farmers released their snakes, and the overall cobra population increased.

Applying to Metrics

Let's use a metric like velocity as an example. Management wants the teams to deliver more product so it can get a greater return on its investment. Management begins to measure how much the teams are completing, linking velocity to productivity (rather than using it to understand capacity). To make things more interesting, management not only asks the teams how much they can produce but then offers some kind of incentive for reaching the velocity goal.  What do you think will happen?


At the end of each iteration, sprint, or planning increment, a team adds up the estimates associated with work that was completed during that timeframe. This total is called velocity.  If you're using a flow-based approach, you may call this throughput or something else.  Either way, we're trying to measure the stuff we got done and could potentially ship.  If you have a stable team and stable velocity, you can better understand team capacity for future commitments.
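That calculation can be sketched in a few lines (the item fields here are hypothetical, for illustration only):

```python
# Velocity: sum the estimates of items actually completed in the iteration.
# Incomplete work contributes nothing, no matter how much effort it absorbed.
items = [
    {"title": "Login page",     "points": 5, "done": True},
    {"title": "Password reset", "points": 3, "done": True},
    {"title": "Audit log",      "points": 8, "done": False},  # not counted
]

velocity = sum(item["points"] for item in items if item["done"])
print(velocity)  # 8 -- only completed, potentially shippable work counts
```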

The Effect

If you're not careful, incentivizing teams around velocity may have a cobra effect. First, they may throttle back their commitments.  Week one, they commit to 100 points. The next week, they commit to 80 points.  If they get everything done that they committed to, they get the reward and management gets 20% less delivered.  Alternatively, the team may begin to pad its estimates.  Week one, they commit to 100 points. The next week, they pad every estimate by 20%.  They still commit to 100 points; they've just inflated the estimates for the same effort.  Again, they are rewarded.
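The padding scenario can be made concrete with hypothetical numbers:

```python
# Week one: the team commits 100 honestly estimated points.
# Week two: every estimate carries 20% padding, but the commitment stays
# at 100 points -- so less real work fits behind the same number.
commitment = 100
padding = 1.2

real_work_week_one = commitment            # 100 points' worth of effort
real_work_week_two = commitment / padding  # ~83 points' worth of effort

print(round(real_work_week_two, 1))  # 83.3 -- same "velocity", less delivered
```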

What can you do?

First, if you can't link your metric back to the key results you're looking for or the outcomes you desire, it's probably a bad idea to use it. Know why you need this information!  I use velocity to understand team capacity.  That's it!  Remember that choosing to use this metric comes with risks.  When you're trying to measure people, there has to be a certain level of trust and safety.  You trust that the people giving you the estimates are being honest and forthcoming. You trust that they understand why (you believe) the data is important to you and the company. You will get bad information if the team does not feel safe. Don't use this metric as a stick to punish them. If you do, it can most certainly come back and bite you.

How to Use Metrics to Reach Better Business Outcomes

Most organizations I deal with think that the more metrics they have, the better off they are. On the contrary, collecting metrics data takes time.  If the metrics you're reporting on are not valuable, then the time you spent collecting the data is wasted.  So, the following is not intended to be an exhaustive list of indicators or instructions. Rather, the intent is to provide context on how anyone can glean information from metrics to demonstrate results toward better business outcomes.

This post focuses on an organization's desire to have its Scrum delivery team predictably meet its release or milestone commitments.

I will lay out the process in 5 steps.

  1. Know what outcomes look like.
  2. Know how to demonstrate key results toward an outcome.
  3. Know what metrics should be used to measure key results.
  4. Know where metrics data will originate from.
  5. Know collection methods and frequency of the metrics data.

Theory: Monitoring Goals

A journal article* from 2016 supports the suggestion that monitoring goal progress is a crucial process between setting and attaining a goal.  Additionally, the article stated that progress monitoring had a larger effect on goal attainment when the outcomes were reported or made publicly available, and when the information was physically recorded.

I believe leading indicators are the mechanism by which we monitor goal progress, and reviewing that progress frequently by way of dashboards provides the larger effect on goal attainment.

*Psychological Bulletin, Vol 142(2), Feb 2016, 198-229


For this post, I will assume you are already familiar with LeadingAgile's Compass. Comparing the planning characteristics of a company with the planning expectations and needs of the client allows you to put your company, your division, or even just your product area into one of four quadrants. The four quadrants help you understand more clearly some of the challenges you might experience meeting customer expectations, why you are struggling (or might struggle) to adopt agile, and how to talk about the steps necessary to safely and pragmatically move your organization forward.

For this blog post, I am focusing on organizations that are actually in alignment with their customers' expectations but may struggle to make and meet commitments due to lack of clarity around requirements, poor estimation, and extreme variability in the rate at which individuals can actually complete work against the estimates. These organizations often have highly structured PMOs, very disciplined governance, and long-term planning and tracking processes, but they still struggle to make and meet commitments on a regular basis.

Understand Goals and Outcomes of Going Agile

Your journey toward greater business agility starts by identifying what outcomes are most important to your company’s success. This knowledge helps you lay a foundation for making decisions about how to tailor your approach to measurably show progress toward your critical business objectives.  For this blog post, our goal is Greater Predictability.

Outcomes → Key Results → Metrics

Step One: Define the Desired Outcome

In our example, our desired outcome is to have teams plan, coordinate, and deliver predictably enough to meet a release level commitment.  

But what do we need for this outcome?


Ready Backlog

Backlog items need to be appropriately sized, and they need to be ordered and prioritized to capture the work needed to develop the product or deliverables.

Stable Team

A stable team has everyone and everything (skill sets, tools, etc.) needed to deliver working, tested, documented, deployable product.

Working Tested Product Increment

This means deliverables meet defined acceptance criteria, have been reviewed and approved by the product owner/stakeholders, have been tested, and are shippable.

Step Two: Align Key Results to Desired Outcomes

For our example, I will list two key results:

  1. The team delivers the committed functionality each sprint.

  2. There is nothing preventing the team from meeting its commitments.

Step Three: Identify Metrics to Measure the Key Results

  1. The team delivers the committed functionality each sprint.

    1. Story Point Completion Percentage

    2. User Story Completion Percentage

    3. Velocity Variance

  2. There is nothing preventing the team from meeting its commitments.
    1. Team Stability

    2. Depth of Ready Backlog

    3. Outstanding Blockers

The key results are also known as lagging indicators.  The metrics used to measure them are known as leading indicators.  You can read more about leading and lagging indicators here.  The important distinction is that a lagging indicator usually follows an event. You can use leading indicators to make changes to your behavior or environment, resulting in more positive key results and better outcomes.

Step Four: Know Where to Get The Data

Any of the major Agile ALM tools will provide leading indicators; you just need to know what a meaningful indicator is. Atlassian JIRA, CA Agile Central, VersionOne, VSTS... they all work. I also don't discount the value of physical boards and spreadsheets.  But since most of the people I'm working with are struggling with many teams working at once, an ALM tool can pay for itself through less duplication of effort and higher-quality data.

Step Five: Collection Method and Frequency 

Let's use Story Point Completion Percentage as an example leading indicator.  Capture key information about the metric to ensure there is a shared understanding.  The metric description, how it's calculated, when to measure, acceptable thresholds, and the source location are all critical for a shared understanding.

An example would look like this:


Metric Description

Story Point Completion Percentage measures delivery team predictability.  It is strongly correlated to the team's stability and to governance's ability to manage dependencies.

Metrics Calculation

Total Story Points (Accepted) / Total Story Points (Committed)

When to Measure

Committed Story points are calculated at Sprint Planning; Accepted at Sprint end.


Acceptable Thresholds

>=90% (Green) | 80-89% (Yellow) | < 80% (Red)
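A minimal sketch of this calculation and the thresholds above (the numbers in the usage line are hypothetical):

```python
def story_point_completion(accepted, committed):
    """Total Story Points (Accepted) / Total Story Points (Committed),
    expressed as a percentage."""
    return 100.0 * accepted / committed

def rag_status(pct):
    # Thresholds from the example: >=90 Green, 80-89 Yellow, <80 Red.
    if pct >= 90:
        return "Green"
    if pct >= 80:
        return "Yellow"
    return "Red"

pct = story_point_completion(accepted=85, committed=100)
print(pct, rag_status(pct))  # 85.0 Yellow
```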

Source

ALM tool (CA Agile Central)

Story Point Completion Ratio in CA Agile Central

I have documented collection methods in all of the major ALM tools, but I'm just going to use CA Agile Central for our example.  Within an active CA Agile Central project, navigate to the Iteration Status page.  One of the helpful elements of this page is the ability to see whether the team has committed to an appropriate level at Sprint Planning (Planned Velocity graph) and how it is progressing toward the end of its Sprint (Accepted Velocity graph).  The Story Point Completion Ratio is calculated using the Accepted Velocity graph. (N of N points)


As noted earlier, leading indicators are the mechanism by which we monitor goal progress, and by reviewing that progress frequently via dashboards, we increase the odds of reaching the goal. Use dashboards (with meaningful metrics) to monitor your progress.