Taming software projects – controlling and tracking development with metrics


Introduction

In this first year of the new millennium, it is still amazing how reliant we are on technology solutions in all aspects of our lives. Some form of automation affects everything we do, from purchasing groceries to driving home. It is difficult to imagine what our day would be like without air conditioning in Florida, heat, banking, television, and mail delivery, all of which rely heavily on hardware and software. Despite this pervasiveness, the information technology (IT) industry is still in its infancy. The Software Engineering Institute (SEI) of Carnegie Mellon University confirms this in its research findings: immature, nonstandard development processes dominate information system development. Software engineering is a different animal from other engineering disciplines such as construction and manufacturing; however, financial issues and competitiveness dictate that it must be just as efficient.

Additionally, threats of outsourcing and downsizing, together with standards such as ISO 9000 and the Software Engineering Institute's Capability Maturity Model for Software (CMM), are further pushing organizations to improve. No longer is it acceptable for IT divisions to feign ignorance when asked to quantify improvements in productivity and quality. We can no longer hide behind acronyms and technology: the business world has started to realize that its spending on IT is really an investment in the business, and it is demanding Return on Investment (ROI) ratios that support those expenditures. It is the opportune time to implement a formal software metrics program to prove the benefits of a formal process improvement program. We can illustrate our worth to the business, in terms that the business understands.

It may surprise you that the Internal Revenue Service (IRS) is one of the most advanced organizations in its adoption of software metrics, both internally (to track the results of its improvement initiatives) and externally (by applying metrics to measure outcomes). In fact, since 1993 the IRS has accepted a valuation of depreciated software based on the sizing metric of function points.

Despite these collective pressures, it is only recently (in the last 10 years) that the IT industry has responded by collecting and quantifying metrics and reporting on process improvement results. While management based on measurement has been fundamental in accounting and production departments for decades, measures for system development (beyond the computer operations’ MIPS and CPU usage data) are still relatively new to the IT industry. The evolution of software metrics can be profiled as: “Over the years, the application of software metrics has evolved from tentative experiments to accepted best practices based on repeatable successes.” (Grady 1992, p. 3)

There are many flavors of software metrics and with them come many reasons or goals for measuring. The old adage that you cannot manage what you cannot measure has become the driving force behind many software measurement programs, as companies seek to measure, and therefore manage, their system development processes.

Measurement Program Definition

What exactly is a "software metrics program"? At the core of a software metrics program is the conscious and continuous cycle of measurement, analysis, corrective action, and re-measurement to effect long-term process and product improvement. In other words, a metrics program is all of the activities associated with taking measurements along the system development process in order to improve the overall process. Although this sounds simple, actually implementing a software measurement program involves much more than initially meets the eye. While space prevents a full exploration of every aspect of measurement program setup, this article provides enough information to make clear that software measurement is not a trivial exercise and that real process improvement comes through the real work of properly establishing measurement.

My first years out of engineering school were spent in engineering and systems development firms where structured development methodologies and time tracking were the norm. So it always comes as a surprise to me to encounter firms that have only recently started measuring and tracking their time, yet purport to have a "measurement program." Tracking a single variable does not constitute a software measurement program. A formal software measurement program includes multiple, complementary metrics that inform and feed decision-making processes.

Where Should You Begin—First Steps to Launch a Software Metrics Program

One of the most powerful and common-sense approaches to software metrics, the "GQM" or Goal-Question-Metric approach (Basili 1984), is one of the underlying principles behind our approach to successful measurement programs. Simply stated, GQM begins by identifying and solidifying measurement Goals. The next step consists of identifying Questions to which measurement will provide answers. These questions are tied to the decisions that management must make in order to achieve the goals. Finally, the supporting Metrics are selected, the component measures are identified, and the measurement program is then ready to implement.
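To make the Goal-Question-Metric chain concrete, the following minimal Python sketch (my own illustration, not drawn from Basili's work; the class names and the example goal, question, and metric are purely hypothetical) shows one way to record the hierarchy so that every metric traces back to a question, and every question back to a goal:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Metric:
        name: str                        # a meaningful ratio, e.g., "defects per delivered function point"
        component_measures: List[str]    # the raw measures that feed the ratio

    @dataclass
    class Question:
        text: str                        # a question tied to a management decision
        metrics: List[Metric] = field(default_factory=list)

    @dataclass
    class Goal:
        statement: str                   # the measurement goal, stated in business terms
        questions: List[Question] = field(default_factory=list)

    # Hypothetical instantiation, for illustration only
    quality_goal = Goal(
        statement="Reduce defects delivered to production by 20% within one year",
        questions=[
            Question(
                text="How many defects currently escape to production, and of what type?",
                metrics=[Metric("defects per delivered function point",
                                ["defect count by origin", "function points delivered"])],
            ),
        ],
    )

Walking this structure from the top down mirrors the GQM order of work: goals are fixed first, questions second, and only then are the metrics and their component measures chosen.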

Exhibit 1. Plan Implement Measure Act (PIMA) Model for Implementing Software Measurement


Blending the Total Quality Management (TQM) model (Plan, Do, Check, Act) with the essential elements of Goal-Question-Metric results in the "PIMA" cyclical model for software measurement implementation: Plan, Implement, Measure, and Act. Similar to the traditional waterfall model in system development, where user requirements are identified first, the PIMA approach ensures that plans and measurement requirements come first. As in the GQM approach, an organization first identifies the measurement goals, then the questions (requirements) for software measurement. This is the PLAN stage, and it must be completed before the metrics and measures that complete the program are designed and implemented. The IMPLEMENT step includes selection and design of the appropriate metrics that fit the plan, together with training, initiation, and collection of specific pilot measures. The MEASURE and ACT steps follow. In MEASURE, the actual data collection, analysis, data audit, and reporting is done with target measures, and actionable management information is compiled and presented. The final step, ACT, identifies and builds appropriate process improvement remedies based on the data of the previous step. The ACT step then cycles back to PLAN to affirm that the measurement program is still on target and collecting the right metrics. At every step of the cycle, communication and marketing are key ingredients that must be built into the implementation work plan. Exhibit 1 illustrates this cycle.

This methodological approach overcomes many of the obstacles common in haphazard measurement program design, such as trying to find goals and uses for collected metrics, and trying to fit together measures that are incompatible.

Stage 1: PLAN—Tactical or Strategic Business Goals?

The old adage that "no one plans to fail, yet many fail to plan" is especially true in software measurement (Dekkers 1999). While few software projects would be financed without at least sketchy requirements, there are many software measurement programs funded without goals and objectives. Measurement programs that are not targeted to meet business goals may survive initially; however, once management realizes that the program does not support strategic or tactical objectives, its funding is cut as soon as there is any need for budget restraint. Successful measurement programs work toward firm objectives, and measurement is an integral part of the tracking and control of specific processes.

This step is not easy for many organizations because often management cannot articulate what they want or need in terms of a measurement program. They do know that they want software development process improvement but cannot definitively say what that means. As a metrics practitioner or project manager, you may be forced to derive measurement program goals by taking a look at the corporate, strategic, or tactical objectives and selecting the most urgent, compelling goals for your program. In my classes and with my clients, I encourage practitioners to find a "burning platform" within their organization, which basically means to find a source of pain in system development to which measurement can provide answers and relief.

At Hewlett-Packard, for example, software measurement targeted both tactical and strategic goals. Project managers were tasked to define the right product, execute the project effectively, and release the product at the right time, and software metrics helped to clarify those details. Today their measurement program has evolved to become a strategic advantage through software process improvement (Grady 1992). Overall, a properly planned and implemented software metrics program allows a company to identify, standardize, improve, and leverage its software development best practices.

In one of my first exposures to software measurement, I consulted for a company that had recently outsourced its systems development, and the agreement included clauses about measurement. Specifically, productivity and quality goals were established with the proviso that they be met on an annual basis. This was a compelling reason to measure and one that sustained the measurement program for the life of the agreement. Whatever your goals for measurement, make sure that they are the same goals that are important to your company.

Once the goals are defined, a set of questions or decision points is identified that will enable management to track progress toward the goals. Questions easily fall into place once clearly defined goals are set. For example, a small utility company set a goal of achieving a 50% improvement in enhancement productivity in one year, and its questions were: What is our current enhancement productivity? What is our enhancement productivity on various projects? Why are some projects more (or less) productive than others? What process changes might result in higher levels of productivity? What is the project productivity as a result of a particular process improvement action? What are the differences in processes between project teams?
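As a sketch of how such questions get answered with data, the short Python fragment below compares hypothetical per-project enhancement productivity (expressed here as function points per hour, one common productivity metric discussed further in the next stage) against an assumed prior-year baseline; every project name and figure is invented for illustration:

    # Hypothetical project data: function points delivered and work-effort hours
    projects = {
        "Billing enhancement": {"fp": 120, "hours": 600},
        "Outage reporting":    {"fp": 80,  "hours": 320},
        "Meter data feed":     {"fp": 150, "hours": 900},
    }

    baseline = 0.18          # FP per hour, assumed prior-year baseline
    target = baseline * 1.5  # the stated goal of a 50% improvement

    for name, data in projects.items():
        productivity = data["fp"] / data["hours"]   # FP per hour for this project
        print(f"{name}: {productivity:.2f} FP/hour "
              f"({productivity / baseline - 1:+.0%} versus baseline)")

    print(f"Target productivity: {target:.2f} FP/hour")

Comparing the per-project figures against the baseline and the target answers the first two questions directly and points to the projects worth investigating for the remaining ones.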

Once your goals and questions for measurement are set, a walkthrough meeting helps to assess whether the goals you have set are appropriate for the program. We have found that these additional walkthroughs, if well planned, are similar to peer reviews in software development: obvious design flaws can be detected at an early stage.

Stage 2: Implement the Plan

The IMPLEMENT step includes selection and design of the appropriate metrics that fit the Plan, together with training, initiation, and collection of specific pilot measures. (In the GQM approach, this was the Metric step.) The choice of metrics depends on your questions from the planning stage, and typically consists of combining supporting measures into a meaningful ratio.

In industry literature the words “metric” and “measure” are used interchangeably. The word metric is usually defined in more formal circles as the ratio of two or more supporting measures, correlated in a meaningful way. An example of a metric for productivity would be Function Points (FP) per hour. Sometimes the word indicator is used in place of metric. The word “measure” means a single collected observation of a variable, such as size, work effort or duration.
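A brief worked example may help to keep the two terms apart (the figures are purely illustrative): a team that delivers 500 function points of enhancements using 2,000 hours of recorded work effort has a productivity of 500 FP / 2,000 hours = 0.25 FP per hour. The function point count and the effort hours are the measures; the ratio of the two is the metric.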

Measurement practitioners often enjoy the selection and design of metrics, and will even rush through the planning stage to get to this more interesting work. However, take note that the time spent planning a proper measurement program does not go to waste: proper planning is a critical success factor for measurement.

There is a myriad of software metrics and supporting measures from which to choose: size, project work effort, technical quality, functional quality, project duration, amount of rework, scope creep percentages, impact of changes, defects (by category and point of origin), cost, mean time to failure, McCabe's complexity metrics, SLOC, FP size, testing coverage, FTE resources, maintenance hours, hardware and software, money, etc. There are many definitive articles and books available that emphasize the advantages of various types of software metrics. With so many metrics and so little time to analyze their relative merits, the wise practitioner leverages the experience of others. There are many ways to gain this insight: by networking with experienced measurement practitioners at conferences, by attending courses and tutorials on software measurement, and by reading industry articles and published books on software measurement.

When you have all of the pieces of the software measurement program in place (goals and questions are set, new data collection procedures have been designed, participants have been part of the process and are trained, the data collection, analysis, and reporting processes have been documented, and data collection is set to begin), the actual measurement begins. It is especially critical in the first data collection cycles that there be process checks to monitor the collected data. These checks will confirm that your training was effective and gauge the levels of participation. When the first set of data points has been collected, we move into the next stage of measurement implementation: Measure.

Stage 3: Measure

In MEASURE, the actual data collection, analysis, data audit, and reporting is done with target measures, and actionable management information is compiled and presented. As soon as valid data (validated through the Implement stage) is collected, the analysis to prepare answers to the stated questions commences. It is important in the planning stage to identify all of the measurement participants, from those who will collect and report the data to those who will analyze the results and create action plans. Additionally, how and when the analysis will take place is decided in the planning stage.

For software measurement practitioners who become passionate about metrics, it is worth noting that not everyone in the organization will immediately share your enthusiasm. Because of this passion and enthusiasm, data is often reported and presented without clearly thinking through the potential consequences. At one client, we had discussed how productivity numbers (size/effort) were a function of MANY project influences, including tools, technology, methodology, type of system, percentage of reuse, etc. However, an overzealous measurement participant announced that the only way to achieve productivity increases was on a personal basis (not true) and that productivity should therefore be part of individual performance appraisals (inappropriate, given the multitude of productivity influences besides people). There was much damage control to be done once that statement was made to the entire department.

Employ the services of a qualified statistician to do your data analysis. It is simply too easy to draw inappropriate correlations and "lie with statistics" given raw data values. Once the data is analyzed, the measurement practitioner can meet with the project teams who supplied the data, or with management representatives, to develop corrective action plans based on the data. This proceeds into Stage 4.

Stage 4: Act on the Measurement Results

This final step, ACT, performs two functions in the PIMA cycle: first, it identifies and builds appropriate process improvement remedies based on the data of the previous step; second, it cycles back to the original planning stage, where the alignment of the overall program to the original goals, corporate goals, etc., is explored.

Measurement itself is a passive process, whereby data is collected and analyzed and management information is gained, but it is the action plans developed as part of the overall program that bring about the process improvement results. If we take certain metric goals (which may be achievement targets set by management), we can develop and implement corrective (or changed) actions to achieve the higher levels. Note that not every decisive action that is taken will lead to positive results; that is part of why we measure, to track and monitor progress, or the lack of it, in response to a set of implemented actions.

Exhibit 2. Utility Company With Over 100 Full-Time Developers


For example, I have a client whose main objective is to improve their overall software product quality by a certain percentage. They do this by answering quality questions with metrics information. In terms of collected metrics, they show, among other things, where each defect of a certain type was detected, when the defect was discovered, plus the impact the flaw would have had, had the defect not been detected. From their measurement program, they have been able to justify walkthroughs and peer reviews based on the recorded higher levels of resultant product quality.
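As a purely illustrative sketch (the field names, categories, and figures below are my assumptions, not the client's actual scheme), defect data of the kind described above could be recorded and summarized roughly as follows in Python:

    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class Defect:
        defect_type: str       # e.g., "requirements", "design", "code"
        phase_detected: str    # life-cycle phase in which it was found
        date_found: str        # when the defect was discovered
        potential_impact: str  # impact had the defect escaped to production

    defects = [
        Defect("design", "peer review", "2001-03-12", "high"),
        Defect("code",   "unit test",   "2001-04-02", "medium"),
        Defect("code",   "production",  "2001-05-20", "high"),
    ]

    # Share of defects caught before production: the kind of evidence used
    # above to justify walkthroughs and peer reviews.
    caught_early = sum(1 for d in defects if d.phase_detected != "production")
    print(f"Pre-release detection rate: {caught_early / len(defects):.0%}")

    # Breakdown of defects by the phase in which they were detected
    print(Counter(d.phase_detected for d in defects))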

This last step cycles back into the planning stage for a quick pass at the sustainability of the metrics program, before additional metrics are collected.

Challenges and Pitfalls in Implementing Measurement

Regardless of how well you plan and implement measurement following the aforementioned steps (Plan, Implement, Measure, and Act), you will encounter resistance and challenges. While the technical design of a measurement program has a dominant role in its ultimate success, the participation (or lack thereof) of people is also a major contributor. I have witnessed well-planned measurement programs fail because the people and cultural issues were not addressed. Measurement, like many other corporate initiatives, involves cultural change, and resistance to it often manifests itself in the form of challenges or myths.

Cultural change affects people (their attitudes, and how they think about and derive value from their work); their jobs (how they do their work); and their workplace (how they interact with others, and how the business places value on their work).

Software measurement transforms an organization from one that manages by feeling to one that manages by fact. Its people are forced to change the way they view their work, their jobs, and even themselves as they adapt to measurement. Although measurement is pervasive in our lives through sports (batting averages), finance (taxable income), and life in general (blood pressure, weight, age), it is predominantly "personal." It is no wonder, then, that when software measurement is introduced, resistance to change is compounded by past experiences with measurement. "Everyone was totally in favor of consistency, as long as it turned out to be the way they were already doing it" (Landsbaum 1992).

It is critical for management to accept that resistance to change is human nature and should be expected with any new initiative such as software measurement. Resistance can manifest itself in many ways, ranging from passivity ("This project is the same as the last one") to outright rebellion ("This project is doomed and I am not participating"). In implementing measurement at many client organizations, I have found that professionals generally display at least one of the following reactions to change:

•  Gauging the consistency of information ("Is there a real plan to implement the change?")

•  Testing management's commitment (“Who at the top supports this initiative—and should we take it seriously?”)

•  Sharing perceptions (rumors and myths) about why the initiative will fail in the real world.

The move to managing an organization by fact involves cultural changes on the part of management as well as developers. For software measurement to succeed, management must view it as a new way of doing business (a business process), not simply as a project. Regardless of the chosen metrics, one of the prerequisites to gathering and reporting accurate data is a supportive environment, conducive to measurement. This means that the reward and punishment system must be realigned to promote and reward the collection of accurate data, no matter how bad the resulting analysis may be. In simple terms, what is rewarded is done; buy-in to measurement will only succeed if people are not punished with the data they report. Based on Hewlett-Packard's experience, "Understand the data that your people take pride in reporting. Don't ever use it against them. Don't ever even hint that you might" (Grady 1992).

Management acceptance of the impact of cultural change takes time, and is only one side of the people issue concerning measurement. Cultural issues frequently manifest themselves as myths, which are prevalent during software measurement introduction. Good, consistent communication is the key to overcoming these myths (Bradley 1996) and succeeding with measurement. (Developer-level myths and management-level myths vary and are the topic of several articles by this author; reference the footnote.) Note that even with solid plans, implementation strategies, clearly researched benefits, and management support, the presence of measurement myths can derail a measurement program. Focus on the people involved with implementing and participating in making measurement work, and be consistent and truthful with your answers to metrics questions. Working together with the measurement participants to design and implement a software measurement program they can live with goes a long way toward easing the fears that express themselves through these different myths.

Exhibit 3. Outsourcer Client With Over 300 Developers


The most common challenges associated with software measurement go beyond funding and technical implementation issues and involve people issues. Typical challenges that arise include: management of expectations, the cost of measurement, tying measurement results to individual contributions, preliminary results that "don't tell anything new (or they are wrong)," fears that measurement will be used to measure individuals, skepticism that measurement is worthwhile, and pure resistance. Every client situation involves a similar set of challenges, plus others specific to the particular organization. What is surprising to our clients, however, is that their own "unique" environment and challenges are solvable using the same techniques we have used at innumerable clients in the past. It is as important to deal with the human aspects of measurement as it is to plan the metrics program in the first place.

Examples of Organizations That Have Gained Through Software Measurement

The examples in Exhibits 2 and 3 illustrate the GQM approach and the results gained by implementing a formal software measurement program.

Other Considerations for Software Measurement

This article has touched on most of the major points you need to consider when setting up a software measurement program. There are also aspects of marketing, communicating, and selling your program, what to report and when, how to pilot and roll out measurement initiatives, and how to redirect a failed measurement attempt, that are important to consider but go beyond the basics introduced in this article.

References

Basili, V., and D.M. Weiss. 1984, November. A Methodology for Collecting Valid Software Engineering Data. IEEE Transactions on Software Engineering, SE-10 (6), 728–738.

Bradley, M., and C. Dekkers. 1996, June. It's the People who Count in Measurement—And other Management Myths. The Voice. International Function Point Users Group, Westerville, OH, 25–26, 31.

Dekkers, C. 1999, April. Secrets of Highly Successful Measurement Programs. Cutter IT Journal.

Grady, R. 1992. Practical Software Metrics for Project Management and Process Improvement. Prentice-Hall, Inc.

Landsbaum, J.B., and R.L. Glass. 1992. Measuring and Motivating Maintenance Programmers. Englewood Cliffs, NJ: Prentice Hall.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

Proceedings of the Project Management Institute Annual Seminars & Symposium
November 1–10, 2001 • Nashville, Tenn., USA
