An IT program is a large IT delivery team composed of two or more sub-teams (also called squads). The purpose of program management is to coordinate the efforts of the sub-teams to ensure they work together effectively towards the common goal of producing a consumable solution for their stakeholders. In some ways “program coordination” is a more accurate term than “program management,” but “program management” is a far more common term within the IT community so we have decided to stick with it.
This article addresses several topics:
- Why program management?
- Team structure of a large program
- The program (team of teams) lifecycle
- Workflow within the program
- Choosing your WoW
- External workflow with other teams
- How tactical scaling affects program management
- Program management and DevOps
Why Program Management?
There are several reasons why large IT delivery teams exist in the first place:
- Some endeavours are inherently big. Sometimes an organization will decide to take on a complex effort, such as developing an operating system, an air traffic control system, or a financial transaction processing system at a bank.
- Overly-specialized staff promote larger teams. When IT staff are narrowly focused it requires many people to work, at least part time, on a team so that the team has sufficient skills to get the job done. When people are generalizing specialists your teams become much smaller and more collaborative.
- Overly bureaucratic processes promote larger teams. Sometimes the systemic bureaucracy in the organization requires large numbers of people to address that bureaucracy. Scott once assessed an eighty-person project team where only ten to fifteen people were needed to do the “real work”; everyone else was there to conform to the overhead of their traditional CMMI-compliant process. Sadly they didn’t rework the team and failed to produce anything after three years and many millions of dollars of investment. As an aside, it is possible and highly desirable to effectively combine CMMI and disciplined agile approaches, but you need to overcome the cultural dissonance between the two paradigms. Similarly, we’ve seen teams misjudge their situation and adopt a scaling framework such as SAFe when it wasn’t warranted – this motivated them to create a much larger team than they actually needed.
- Working on large teams can lead to greater rewards. Sometimes someone is “empire building” and purposefully creates a large team so that they will be rewarded for doing so. We have worked in two organizations where, before their agile transformation, the pay grade of a manager was determined by the number of people the person managed. Worse yet, in one organization the people on the larger teams tended to get better bonuses, and quicker promotions, than people on smaller teams regardless of the teams’ actual ability to deliver value to the organization.
In our opinion only the first reason is a valid one for building a large agile team. The other reasons reflect aspects of organizational culture that need to be fixed over time. Luckily, there are several strategies that you can employ to reduce the size of a team:
- Reorganize the problem into a collection of smaller problems. Disaggregation of a large problem is performed through a combination of agile envisioning and agile business analysis. This is a key responsibility of your product management efforts: to feed reasonably-sized portions of work to IT delivery teams.
- Reduce the problem. Sometimes a large problem can be shrunk down through pruning features out of the vision, or at least by deferring them until later.
- Address your organization’s culture. As we discussed earlier, most of the reasons that organizations build large IT delivery teams are the result of cultural challenges. Fix the real problem by adopting agile and lean ways of thinking and working.
- Organize the large team into a collection of smaller teams. In other words, create a program.
- Adopt a Disciplined Agile approach. Enable your teams, regardless of size, to choose their own way of working (WoW) and thus have a process that addresses the needs of the situation that they face. Some people call this process right-sizing, de-scaling, or process disaggregation – we simply call it pragmatic.
When you find yourself in a situation where you genuinely need a large IT delivery team – and those situations do exist in many organizations – and you can’t find a way to reduce the size of the team, then you will need to adopt strategies to coordinate that team.
Team Structure of a Large Program
We described the team structure of a large agile program in detail in Large Agile Teams. The key ideas are that a large team is organized as a team of teams and that structures are required to coordinate people, requirements, and technical concerns within the overall program. Where a “scrum of scrums” may suffice for this coordination on small-to-medium sized programs (say up to five or six sub-teams), it quickly falls apart for larger programs. As a result large programs will find that they need:
- A Product Management (or Product Ownership) strategy where the Product Owners coordinate their activities
- An Architecture (or Architecture Ownership) strategy where the Architecture Owners coordinate their activities
- A Product Coordination (or Management) strategy where the Team Leads coordinate their activities.
- An optional Program Coordinator/Manager, a specialist role, who is responsible for coordinating the overall leadership team
Figure 1. Team structure for a large program (click to expand).
The following figure shows how the Product Owners of each sub-team are also members of the Product Management/Product Owner team for the program. Similar structures, described in Large Agile Teams, also exist for Product Delivery and Architecture.
Figure 2. The team structure for the Product Owner team on a large program (click to expand).
The Program (Team of Teams) Lifecycle
Disciplined Agile Delivery (DAD)’s Program lifecycle, shown below, describes how to organize a team of teams. Large agile teams are rare in practice, but they do happen. This is exactly the situation that scaling frameworks such as SAFe, LeSS, and Nexus address.
Figure 3. DAD’s Program lifecycle for a team of teams (click to expand).
There are several critical aspects to this lifecycle:
- There’s an explicit Inception phase. Like it or not, when a team is new we need to invest some up front time getting organized, and this is particularly true for large teams given the additional risk we face. We should do so as quickly as possible, and the best way is to explicitly recognize what we need to do and how we’ll go about doing so. SAFe has a similar concept called program increment (PI) planning, although Inception is more robust as it includes potential activities around forming teams, explicit requirements and architecture modeling, test strategizing, and more.
- Subteams/squads choose and then evolve their WoW. Subteams, sometimes referred to as squads, should be allowed to choose their own WoW just like any other team would. This includes choosing their own lifecycles as well as their own practices. We may choose to impose some constraints on the teams, such as following common guidance and common strategies around coordinating within the program.
- Subteams can be either feature teams or component teams. A feature team works on vertical slices of functionality, implementing a story or addressing a change request from the user interface all the way through to the database. A component team works on a specific aspect of a system, such as security functionality, transaction processing, or logging. Our experience is that both types of team have their place – each is applicable in certain contexts but not others – and the two strategies can be, and often are, combined in practice.
- Coordination occurs at three levels. When we’re coordinating between subteams there are three issues we need to be concerned about: coordinating the work to be done, coordinating technical/architectural issues, and coordinating people issues. This coordination is performed by the Product Owners, the Architecture Owners, and the Team Leads respectively. The Product Owners of each subteam will self-organize and address work/requirements management issues amongst themselves, ensuring that each team is doing the appropriate work at the appropriate time. Similarly the Architecture Ownership team will self-organize to evolve the architecture over time and the Team Leads will self-organize to manage people issues occurring across teams. The three leadership subteams are able to handle the small course corrections that are typical over time. The teams may find that they need to get together occasionally to plan out the next block of work – a technique that SAFe refers to as program increment (PI) planning and suggests occur quarterly. We suggest that you do it when and if it makes sense.
- System integration and testing occurs in parallel. The lifecycle shows that there is a separate team to perform overall system integration and cross-team testing. Ideally this work is minimal and, in time, entirely automated. We often need a separate team at first, typically due to a lack of automation, but our goal should be to automate as much of this work as possible and push the rest into the subteams. Having said that, we’ve found that usability testing across the product as a whole, and similarly user acceptance testing (UAT), requires a separate effort for logistical reasons.
- Subteams are as whole as they can be. The majority of the testing effort should occur within the subteams just like it would on a normal agile team, along with continuous integration (CI) and continuous deployment (CD).
- We can deploy any time we want. We prefer a CD approach to this, although teams new to agile programs may start by releasing quarterly (or even less often) and then improve the release cadence over time. Teams who are new to this will likely need a Transition phase – some people call these “hardening sprints” or “deployment sprints” – the first few times. The Accelerate Value Delivery process goal captures the various release options for delivery teams, and the Release Management process blade does so for organizations as a whole.
Workflow Within the Program
Let’s consider the workflow implied by the Program lifecycle in Figure 3 above given the team structure of Figure 1. Someone in the role of Program Manager coordinates the three leadership teams (described in greater detail in Large Agile Teams):
- Product Coordination Team. This team is responsible for dealing with cross-team “management issues” such as moving people between teams, resolving disputes that cross team boundaries, and any coordination issue that doesn’t fall under the purview of the other two leadership teams. The Program Manager often leads the Product Coordination team, which is made up of the Team Leads from the delivery sub-teams, and may even be a Team Lead of one of the delivery teams as well.
- Product Owner Team. This team is responsible for requirements management, prioritizing the work, and assigning work items to the various sub-teams. This team is led by a Chief Product Owner (CPO), not shown, who is often also a Product Owner for one or more sub-teams.
- Architecture Owner Team. The AO team is responsible for facilitating the overall architectural direction of the program, for evolving that vision over time, and for negotiating technical dependencies within the architecture. This team is led by a Chief Architecture Owner (CAO), also not shown, who is often an Architecture Owner on one or more delivery sub-teams.
An important difference between the Disciplined Agile approach and SAFe is that the delivery sub-teams may be following different lifecycles. The Disciplined Agile (DA) toolkit supports several delivery lifecycles, including the Scrum-based agile lifecycle, the Kanban-based lean lifecycle, a Continuous Delivery: Agile lifecycle, a Continuous Delivery: Lean lifecycle, and the Lean Startup-based exploratory lifecycle. Even when the sub-teams are following the same lifecycle they may be working to different cadences – in the section on choosing your WoW we describe several strategies for sub-team cadences.
The lifecycle diagram of Figure 3 also shows that some programs may include a parallel independent testing effort in addition to the whole team testing efforts of the sub-teams. The delivery sub-teams will make their working builds available to the testers on a regular basis, who will integrate all of the builds into their testing environment. This independent testing effort often addresses end-to-end system integration testing as well as other forms of testing that make better economic sense when done in a centralized manner. Independent testing is common for large programs that are tackling complex domains or complex technologies or that find themselves in a regulatory environment that requires independent testing. The SAFe equivalent to a parallel independent test team would be called a system team, in this case one doing system integration plus independent testing. Same basic concept, slightly different wording.
Choosing Your WoW!
The following process goal diagram overviews the potential activities associated with disciplined agile program management. Because every team, including a program, is different, each needs to be able to choose (and later evolve) its own way of working (WoW). The goal diagram indicates the decision points critical to program management, such as how to allocate work across teams and how to coordinate between teams, and then potential options for doing so. Your team will need to choose the best way that it can work right now, given the skills, culture, and situation that you face. And of course you should strive to learn and evolve your WoW over time.
Figure 4. The process goal diagram for Program Management.
The process factors that you need to consider for program management are:
- Allocate work. Work items must be allocated to delivery teams, or to open source efforts in the case of programs that include internal open source components, throughout the lifecycle. The type of work and the focus of the sub-team are the primary determinants of how work is allocated. However, team capacity and load-balancing concerns – for example, a team has run out of work or currently has too much work – will also be considered when allocating new work. Work allocation is the responsibility of your product owners, although team capacity planning and monitoring is typically performed by the program manager and team leads. Regardless, these activities should be performed collaboratively by the people available at the time.
- Prioritize work. The work performed by the teams – including new requirements and fixing defects – needs to be prioritized. There are several ways to prioritize the work, such as by business value, by risk, by severity (in the case of production defects), or by weighted shortest job first (wsjf) to name a few strategies. Prioritization is an ongoing activity throughout the lifecycle and is the responsibility of your product owners.
- Plan program. Traditional programs are often planned on an annual or even ad-hoc basis. Agile programs, at least the disciplined ones, tend to plan on a rolling-wave basis.
- Organize teams. There are three common strategies for how you can organize delivery teams within a program – feature teams, component teams, and internal open source – each of which has advantages and disadvantages. In addition to delivery teams, in a large program you are likely to find the need for leadership teams – the Product Owner team, the Architecture Owner team, and the Product Coordination/Management team – made up of the product owners, architecture owners, and team leads from the delivery teams respectively. These leadership teams are responsible for work/requirements coordination, technical coordination, and management coordination within the program respectively.
- Coordinate teams. There are several ways that the sub-teams can coordinate with one another. For example they could choose to have cross-team coordination meetings (also called a Scrum of Scrums (SoS)); they could visualize the work through task boards, team dashboards, and other information radiators such as a modeling wall; they could choose to have “big room” planning sessions where all team members are involved or “small room” agile modeling sessions where a subset of people are involved; or even traditional (or agile) checkpoint meetings. All of these strategies have their advantages and disadvantages, and all can be applied by the various types of teams mentioned earlier.
- Coordinate schedules. There are several strategies that a program can adopt to coordinate the schedules between sub-teams. The easiest conceptually, although often the hardest to implement in practice, is to have all sub-teams on the same cadence (e.g. every sub-team has a two-week iteration). This is what both SAFe and LeSS prescribe. Another option is to have multiplier cadences where the schedules of sub-teams align every so often. For example, we once worked with a large program where some sub-teams had a one-week iteration, some had a two-week iteration, and a few had a four-week iteration. We’ve also seen a program where sub-teams had one, two, or three-week iterations, which aligned iteration endings every six weeks. Most common, although rarely discussed, is for sub-teams to have disparate cadences. This is guaranteed to occur when teams are following different lifecycles (remember, the DA toolkit supports several). For example, when some sub-teams are following the Scrum-based agile/basic lifecycle that has iterations, yet other sub-teams are following the lean or continuous delivery lifecycles that have no iterations, then you have an alignment challenge. Or if sub-teams adopt any iteration length they like (we’ve seen programs with sub-teams running two, three, four, and sometimes even five-week iterations) then they also in effect have disparate cadences.
- Schedule solution releases. Programs need to schedule their own releases, in accordance with your organization’s release management strategy, which involves coordination between the sub-teams. When the cadences of the sub-teams are (reasonably) aligned it is easier to coordinate production releases. For example, when all sub-teams have two-week iterations (or at least the sub-teams with iterations do) then they could potentially release into production every two weeks. In the case of multiplier cadences, there is the potential to release into production each time the iteration endings align.
- Negotiate functional dependencies. An important responsibility of the Product Owner team is to manage the functional dependencies between the work being performed by various sub-teams. There are strategies to manage dependencies between two agile sub-teams, between an agile sub-team and a lean sub-team, and even between an agile/lean sub-team and a traditional sub-team (this isn’t ideal, but sometimes happens).
- Negotiate technical dependencies. Similarly, an important responsibility of the Architecture Owner team is to work through technical dependencies within the solution being developed by the program.
- Govern the program. The program must be governed, both internally within the program itself and under the aegis of your organization’s overall IT governance strategy. Program-level metrics, particularly those tracking the progress of sub-teams and the quality being delivered, are vital to successful coordination within the program. Sub-teams should also work to common conventions, ideally those of the organization but in some cases specific to the program itself (perhaps your solution is pioneering a new user interface look-and-feel or new data storage conventions). Programs, because of their size and because they are usually higher risk, often have more rigorous reporting requirements from senior management so as to provide greater transparency to them. The implication is that a program’s dashboard often has a more robust collection of measures on display.
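To make the Allocate work decision point above concrete, here is a minimal sketch of a capacity-aware allocation heuristic in Python. The team names, point values, and the greedy most-remaining-capacity rule are illustrative assumptions rather than anything prescribed by the DA toolkit – in practice your product owners and team leads apply far richer judgment.

```python
def allocate(work_items, teams):
    """Greedy sketch: route each work item to a sub-team whose focus
    matches, preferring the team with the most remaining capacity."""
    for item in work_items:
        candidates = [t for t in teams if item["focus"] in t["focus"]]
        team = max(candidates, key=lambda t: t["capacity"] - t["load"])
        team["backlog"].append(item["name"])
        team["load"] += item["points"]
    return teams

# Hypothetical sub-teams: two feature teams plus a component team
# that can pick up work from either area.
teams = [
    {"name": "Payments", "focus": {"payments"}, "capacity": 20, "load": 0, "backlog": []},
    {"name": "Accounts", "focus": {"accounts"}, "capacity": 20, "load": 0, "backlog": []},
    {"name": "Platform", "focus": {"payments", "accounts"}, "capacity": 10, "load": 0, "backlog": []},
]
work = [
    {"name": "Refund flow", "focus": "payments", "points": 13},
    {"name": "Statement export", "focus": "accounts", "points": 5},
    {"name": "Pay by wallet", "focus": "payments", "points": 8},
]
allocate(work, teams)
# "Pay by wallet" lands on Platform because, by the time it is allocated,
# Payments has less remaining capacity (7) than Platform (10).
```

Note how the load-balancing concern from the decision point shows up directly: the same item goes to a different team depending on how much work the teams already carry.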
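One of the strategies listed under the Prioritize work decision point, weighted shortest job first (WSJF), is simple enough to sketch: an item's score is its cost of delay divided by its job size, and the backlog is ordered by descending score. The work items and numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class WorkItem:
    name: str
    cost_of_delay: float  # e.g. business value + time criticality + risk reduction
    job_size: float       # relative estimate of effort/duration

def wsjf(item: WorkItem) -> float:
    # Higher cost of delay and smaller job size both push an item
    # toward the front of the queue.
    return item.cost_of_delay / item.job_size

def prioritize(backlog: list[WorkItem]) -> list[WorkItem]:
    return sorted(backlog, key=wsjf, reverse=True)

backlog = [
    WorkItem("Audit logging", cost_of_delay=8, job_size=5),
    WorkItem("Checkout fix", cost_of_delay=13, job_size=2),
    WorkItem("Report export", cost_of_delay=5, job_size=8),
]
for item in prioritize(backlog):
    print(f"{item.name}: WSJF = {wsjf(item):.2f}")
# Checkout fix (6.50) outranks Audit logging (1.60) and Report export (0.62)
```

The cheap, urgent fix jumps the queue even though a larger item has more absolute value – which is exactly the trade-off WSJF is designed to surface.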
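The cadence arithmetic behind the Coordinate schedules and Schedule solution releases decision points is a least-common-multiple calculation: with multiplier cadences, iteration endings (and therefore easy release points) align every lcm of the sub-team iteration lengths. A small sketch, assuming iteration lengths measured in whole weeks:

```python
from math import lcm  # variadic lcm requires Python 3.9+

def alignment_weeks(iteration_lengths):
    """Weeks until every sub-team's iteration ending lines up again."""
    return lcm(*iteration_lengths)

def release_points(iteration_lengths, horizon_weeks):
    """Candidate production-release weeks within a planning horizon."""
    step = alignment_weeks(iteration_lengths)
    return list(range(step, horizon_weeks + 1, step))

# One-, two-, and three-week sub-teams align every six weeks;
# one-, two-, and four-week sub-teams align every four weeks.
print(alignment_weeks([1, 2, 3]))     # 6
print(alignment_weeks([1, 2, 4]))     # 4
print(release_points([1, 2, 3], 24))  # [6, 12, 18, 24]
```

This also shows why disparate cadences are painful: poorly chosen lengths push the alignment point far out (for example, two-, three-, and five-week iterations only align every thirty weeks), leaving a program with effectively no natural shared release points.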
Workflow With Other Teams
The following diagram overviews the major workflows that a disciplined agile program is associated with. Note that feedback is implied in the diagram. For example, where you see the Technology Roadmap and Guidance flow from Enterprise Architecture to Program Management there is an implied feedback loop from the program to the enterprise architects. Also note that the workflows do not necessarily imply that artifacts exist. For example, the data guidance workflow from Data Management could be a conversation with a data management person, it could be a concise description of data standards, or it could be detailed meta data – or combinations thereof. A second example would be a program providing their development intelligence to the Governance team through automated rollup of metric data via your organization’s dashboard technology.
Figure 5. The relationship of program management to other aspects of your process.
The following table summarizes the workflows depicted in the diagram.
|Process Blade||Process Blade Overview||Workflow with Program Management|
|Continuous Improvement||Addresses how to support process and organizational structure improvement across teams in a lightweight, collaborative manner; how to support improvement experiments within teams; and how to govern process improvement within your organization.||Your continuous improvement efforts should result in improvement suggestions gleaned from other teams that the program can learn from.|
|Data Management||Addresses how to improve data quality, evolve data assets such as master data and test data, and govern data activities within your organization.||The data management group will provide data guidance, such as naming conventions and meta data regarding legacy data sources, to all delivery teams.|
|Enterprise architecture||Addresses strategies for the collaborative and evolutionary exploration, potential modeling, and support of an organization’s architectural ecosystem in a context-sensitive manner.||The enterprise architects will produce a technology roadmap that delivery teams should follow and be a good source of development guidance (such as programming guidelines, user interface conventions, security guidelines, and so on). Delivery teams will provide development intelligence (metrics) and feedback pertaining to the usage of key architectural components and frameworks to help inform the decisions of the enterprise architects.|
|Governance||Addresses strategies for consolidating various governance views, defining metrics, taking measurements, monitoring and reporting on measurements, developing and capturing guidance, defining roles and responsibilities, sharing knowledge within your organization, managing risk, and coordinating the various governance efforts (including EA governance).||The governance team will provide guidance to all teams, including large delivery teams. This guidance typically focuses on financial and quality goals as well as any regulatory constraints where appropriate. Delivery teams will provide development intelligence to the governance team to enable them to monitor your team and provide informed guidance to it.|
|Operations||Addresses how to run systems, evolve the IT infrastructure, manage change within the operational ecosystem, mitigate disasters, and govern IT operations.||Your operations group will provide operations intelligence (metrics) to delivery teams, in particular around the usage of systems and features that a team is responsible for. This enables the delivery teams to make informed decisions regarding the value of delivered features.|
|Portfolio Management||Addresses how to identify potential business value that could be supported by potential endeavours, explore those potential endeavours to understand them in greater detail, prioritize those potential endeavours, initiate the endeavours, manage vendors, and govern your portfolio.||Your organization’s portfolio management activities will provide the initial vision and funding required to initiate a program, as well as ongoing funding for the program. It will also provide guidance, often around management and governance conventions, to the team. Delivery teams will make their development intelligence (metrics) available to the portfolio management team to help inform their decisions.|
|Product Management||Addresses strategies for managing a product, including allocating features to a product, evolving the business vision for a product, managing functional dependencies, and marketing the product line.||The Product Management team will provide a business roadmap and stakeholder priorities to all delivery teams, including programs.|
|Release Management||Addresses strategies for planning the release schedule, coordinating releases of solutions, managing the release infrastructure, supporting delivery teams, and governing the release management efforts.||Your program will release solutions into production via your organization’s release management strategy.|
|Reuse Engineering||Addresses how to identify and obtain reusable assets, publish the assets so that they are available to be reused, support delivery teams in reusing the assets, evolve those assets over time, and govern the reuse efforts.||All delivery teams should reuse existing assets – such as services, frameworks, and legacy data sources – whenever appropriate.|
|Support||Addresses how to adopt a support strategy, escalate incidents, effectively address those incidents, and govern the support effort.||Your support/help-desk team will provide change requests, including defect reports, identified by end users to all delivery teams. These change requests are in effect new requirements.|
There is clearly overlap with some of the activities in the portfolio management, enterprise architecture, release management, product management, and governance process blades. The issue is one of scope. Where these process blades address activities across all of IT, the scope of the related activities within program management is the program itself. For example, where enterprise architecture addresses architectural issues for the entire organization, the architecture activities encompassed by program management relate only to the architecture of the solution being produced by the program.
How Tactical Scaling Affects Program Management
Although program management primarily addresses the team size scaling factor, your tailoring decisions will still be affected by the other scaling factors:
- Geographic distribution. Chances are very good that large teams will also be geographically distributed in some way. There are two flavors of this: teams may be geographically distributed (e.g. in different physical locations) and people within a team may be geographically dispersed (e.g. people working in cubes, on different floors, in different buildings, or from home). Both add risk. Coordination within the program becomes more difficult the more distributed the teams are, and more difficult within teams the more dispersed the people are. Distribution hits the leadership teams (the product owner team, the architecture owner team, and the team lead/product coordination team) particularly hard because their members should be located with their delivery sub-teams but also need to work regularly with their counterparts located elsewhere. The implication is that the team may require more sophisticated tooling to enable collaboration and, more importantly, should be prepared to invest in regular travel to foster better communication between disparate locations. Furthermore, when your stakeholders are geographically distributed your Product Owners may require support from agile Business Analysts in the various locations to help elicit requirements.
- Compliance. Compliance, either regulatory compliance required by law or self-imposed compliance (e.g. CMMI compliance), will definitely have an effect on your approach to program management. In fact, the larger the program the more likely it is to fall under regulatory compliance due to the greater risk involved. Regulatory compliance generally requires greater governance, both within the program and outwards facing as well. Under some regulations your coordination efforts will require proof that they occurred, such as some form of meeting minutes that capture who was involved, the decisions made (if any), and the action items taken on by people. Compliance may also motivate more sophisticated approaches to capturing requirements by your Product Owners and to documenting technical concerns by your Architecture Owners.
- Organizational distribution. The larger the team, the more likely you are to involve contractors or consultants, or even to outsource portions of the work. When external organizations are involved the Program Manager will likely be involved in the contract management effort, which in turn may require assistance from the team leads.
- Solution complexity. The larger the team, the more likely it is that they are taking on greater solution complexity. Or, another way to look at it, greater complexity often motivates the creation of larger teams to deal with that complexity. Greater solution complexity will motivate greater attention to architecture and design, thereby motivating more regular collaboration of the Architecture Owners.
- Domain complexity. Similarly, team size and domain complexity tend to go hand-in-hand. Greater domain complexity will require the Product Owners to work in a more sophisticated manner and may even motivate them to get support from agile Business Analysts (or junior Product Owners as the case may be).
- Skills availability. The larger the program, the less likely it is that you will immediately have sufficient numbers of people with the requisite skills to do the job. The implication is that you will need to work with your People Management group to help existing staff learn those skills or to hire people who have them. You may also need to work with Vendor Management to partner with external firms that can provide people, or even entire teams, with the needed skills.
Program Management and DevOps
A common question that we’ve gotten is how program management is affected by DevOps. For example, you see in the diagram that Operations, Support, and Release Management (amongst others) are shown as things that are external to Program Management. Remember that the focus here is on process, not on team organization. For example, in organizations with a Disciplined DevOps strategy in place it is very common to see program teams taking on the responsibilities of operating and supporting their own systems in production, and of doing the work to release their solutions into production. In organizations without a strong DevOps mindset (yet), you are likely to find that operations, support, and release management are done by separate groups outside of your program team. Context counts, and it’s good to have a process framework that is flexible enough to support the situation that you find yourself in.