BEGINNING THE JOURNEY
Product development practices at Apple Computer, Inc., were nebulous a few years ago. Although we had several different development processes, none was consistently used by any product division. Perhaps the less competitive business environment of the early ’80s let us operate comfortably with these inconsistencies. Looking back, the computer industry was in relative infancy compared to today's new product cornucopia. For example, from 1984 to 1987, we were producing and delivering only four computer models: the Apple II and the Macintosh 128K, 512K, and Plus.
However, demand for newer and more technologically advanced products rapidly complicated our operations. From 1988 to 1990, our product line expanded with the Macintosh II, peripherals, and the portable product lines. Then the challenges of the ’90s raised the flag for change. As our focus moved from personal to corporate computing, Apple's product uniqueness was challenged by stiffer competition. Market and cost structure pressures escalated rapidly.
The requirements of sophisticated corporate users increased both product complexity and the need for new products. These combined to stress our operational effectiveness. We could no longer accommodate the inadequacies of our product development processes. And we could no longer afford to have efforts to improve the process fail to achieve full implementation.
As with any major change, there were potential barriers. The challenge for all of us was creating the right degree of rigor and consistency without imposing another layer of administration and paperwork. Essentially, the challenge became how to channel rather than control our innovation and creativity. This is a concern of program managers and senior managers alike. At Apple, we believe the balance has been struck by making these same people responsible for designing and running a development process that preserves our effectiveness and our entrepreneurial spirit.

We accomplished this in four phases. First, to convince our management that a consistent process could create big wins, we benchmarked best-in-class practices in the industry, using off-the-shelf data from Product Development Consulting in Cambridge. For example, we discovered that best-in-class companies hold early and consistent product development phase reviews 90–100 percent of the time. They experience schedule slips averaging 13 percent of plan. Major, basic design changes rarely happened after one-third of the way through a project. Finally, turnover of project staff averaged 5–10 percent.
Second, we used these findings to launch the new product development program and to keep our goals in sight: (1) compress development time, (2) reduce time-to-volume, (3) reduce development cost, and (4) increase product quality and reliability. We convinced management that the necessary first step toward these goals was a defined, consistently used development process. For almost a year, the effort was a low-profile push to develop an agreed-on high-level framework: one set of names for development phases, one document that described in very general terms who did what when, and one simple map (the “one pager”) that described the status of every project. This conceptual agreement was the foundation for the behavioral change that followed.
LOOKING AT THE MAP
The third phase was marked by moving the effort from its original home in manufacturing into the engineering organization, into my group, which supports all the Macintosh product line development organizations. We formed a cross-functional steering committee, first to focus our efforts on very specific areas (one or two phase reviews at a time, for example) and later to disseminate new standards. We used existing analyses, surveys, and interviews with engineering and marketing directors to identify areas that were both factually identified as high-leverage improvements and subjectively identified as personally significant to steering committee members. Now that we had a shared map of the development process, we were able to agree on what was important to change.
The fourth and longest phase was systematically rolling out one initiative at a time, while developing another. (We've called this the “inch wide and mile deep” approach to implementation.) Here, rolling out is not a top-down command-and-control exercise. Far from it. The directors and program managers selected the problem focus. A two- to three-person focus team representing major stakeholders took the first cut at creating the format, content, standard example, and briefing and rollout materials. The steering committee took these to their organizations for feedback. We piloted materials like briefings and phase reviews with real teams and found out what it takes to make something useful rather than burdensome. We reviewed materials with senior management, who after all are the “customers” for whom phase review presentations are created.
Rollout started with directors giving the briefing to their staffs of program managers and marketers. We had created modular materials that let these real managers (not professional teachers) convey the materials in whatever depth and learning style was appropriate to their group. Their methods ranged from the briefing-and-fictitious example style of classroom teaching, to checklist reviews to improve the real phase review presentations, to just handing out examples of standard format. What they wanted to use, we provided.
Does all of this seem like overkill in obtaining buy-in? Two observations. First, we tried to make every interaction add value. No briefing for the sake of briefing. When people spent time with us, they could see what and how they were contributing. This is ownership, not just buy-in. Second, we have metrics to tell us whether it was overkill. We kept score on each of our rollouts by reporting measures of who was using the materials to the steering committee and sometimes to senior management: for example, the fraction of projects that had held that particular kind of phase review. We had targets that rose higher each month. The steering committee took responsibility for the metrics: finding out why there were shortfalls and correcting the causes.
When we had a defined standard and began to measure the baseline, we discovered how much the standard (like every product having a Marketing Requirements Document, or MRD) was actually being used. For senior management as well as engineering and marketing directors, the comparison reinforced an urgent need to change. For example, only 50 percent of our projects had MRDs. Early phase reviews (i.e., concept and development approval) were held on 25–35 percent of the projects. For each of these successive initiatives, it took a few months to get the measures up to benchmark levels. Our metrics told us, then, that the “overkill” of many iterations having the stakeholders design, test, and approve each modest part of the process was sufficient, and no more.
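To make the scorekeeping concrete, here is a minimal sketch of this kind of adoption tracking. It is an illustration only, not Apple's actual tooling: the project records, field names, and monthly target are all hypothetical.

```python
# Minimal sketch of rollout scorekeeping; all project records, field
# names, and the monthly target are hypothetical illustrations.
projects = [
    {"name": "Project A", "has_mrd": True,  "reviews_held": {"concept", "development"}},
    {"name": "Project B", "has_mrd": False, "reviews_held": {"concept"}},
    {"name": "Project C", "has_mrd": True,  "reviews_held": set()},
    {"name": "Project D", "has_mrd": True,  "reviews_held": {"concept", "development"}},
]

def adoption_rate(predicate):
    """Fraction of projects that satisfy a usage predicate."""
    return sum(1 for p in projects if predicate(p)) / len(projects)

# Process metrics: is the new standard actually being used?
mrd_rate = adoption_rate(lambda p: p["has_mrd"])
concept_rate = adoption_rate(lambda p: "concept" in p["reviews_held"])

monthly_target = 0.60  # hypothetical target; it rises each month

for label, rate in [("MRD in place", mrd_rate),
                    ("Concept review held", concept_rate)]:
    status = "on track" if rate >= monthly_target else "shortfall"
    print(f"{label}: {rate:.0%} vs. target {monthly_target:.0%} -> {status}")
```

A shortfall flagged here is where the steering committee steps in: find out why, and correct the causes.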
LESSONS LEARNED
Two supporting initiatives provided real motivation for acceptance and use of the new standards that make up our development process. First, each project manager's use and support of the new methodology became part of their annual performance criteria. This initiative clearly communicated the importance that executive leadership placed on the need for and success of the new product development process. People were rewarded according to the level of compliance as well as ultimate results.
Second, we captured the successful processes, techniques, and practices of our program managers. Many of their “best-in-class” concepts were incorporated into the standardized practices. Because we acknowledged the achievements and competencies that project teams and executives already had, they became more eager and supportive of the new standard: they “owned” what was visibly already theirs.
Other conclusions we've drawn:
- Engage executives only to add value. Get a second opinion on the steering committee's findings and choice of initiative. Use communications among higher-level executives to clear away barriers to getting cross-functional resources to work on the initiatives. Visibly support the initiatives through actions such as linking use to annual performance reviews.
- Engage middle management to add value, in designing and owning the new process. They design the process, they own it. Their process is their job. My very small group is there to bring together the right people (including some outside expertise) to improve the process, and keep score through metrics.
- Test your assumptions on actual projects. Look for clarity of purpose and for real improvements. Standards have to be usable. Focusing too much on “buy-in” hides the need to make the standard process usable in practice.
- Project manage the improvement projects. Perhaps these terms sound familiar: assessment, schedule, work breakdown, action items, forecasting, resource allocation, deliverables, quarterly reviews, replanning. Most of us have less experience with improvement efforts than with product development or other more routine kinds of projects—don't improvement efforts deserve at least the same intensity of management thought and practice? Our experience says that good management pays off again and again in terms of continually delivering what people want and need.
- Use both process metrics (to assure you're implementing change) and result metrics (to assure you're getting improved results). Result metrics are the “bottom line.” Our very first analysis dissected root causes for delays in time-to-market, one “bottom line” for a development organization. Based partially on that analysis, we zeroed in on specific process changes as the means to improve time-to-market. We then formulated metrics on the development process, in this case usage of the new standards, to monitor the means to the end. And finally, after a year or so, we saw the results improve.
Our metrics are selectively borrowed from Total Quality Management: “…Metrics and targets permit the results and means to be monitored over the course of the year and corrective action taken.…the metrics and the targets tell you at year's end (or sooner) whether the failure to achieve the desired outcome was due to malfunctioning of the planned means or to a failure to carry out the planned means…. the metrics allow you to monitor and control processes, even the process of changing other processes themselves” [1] [emphasis added].
We believe these metrics practices, used for years in Japan but infrequently used in this manner in the U.S., are an essential ingredient in our successful implementation. As we discovered at Apple, both types of metrics were successful in identifying problem areas quickly. And since we chose not to have “process police,” the metrics allowed directors to manage their own implementations.
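The distinction in the quotation can be expressed as a simple decision rule: pair each result metric with the process metric that monitors its planned means. The sketch below illustrates the idea with entirely hypothetical quarterly numbers; the 90 percent usage and 13 percent slip thresholds echo the benchmark levels cited earlier, not measured results.

```python
# Pairing a process metric (usage of the new standards: the means)
# with a result metric (schedule slip vs. plan: the end).
# All quarterly numbers are hypothetical illustrations.

quarters = [
    # (quarter, fraction of projects using the standard, average slip)
    ("Q1", 0.35, 0.40),
    ("Q2", 0.60, 0.33),
    ("Q3", 0.85, 0.22),
    ("Q4", 0.95, 0.12),
]

USAGE_TARGET = 0.90  # benchmark: phase reviews held 90-100 percent of the time
SLIP_TARGET = 0.13   # benchmark: schedule slips averaging 13 percent of plan

for quarter, usage, slip in quarters:
    if slip <= SLIP_TARGET:
        diagnosis = "desired result achieved"
    elif usage < USAGE_TARGET:
        diagnosis = "planned means not yet carried out; push the rollout"
    else:
        diagnosis = "means carried out but malfunctioning; rethink the process"
    print(f"{quarter}: usage {usage:.0%}, slip {slip:.0%} -> {diagnosis}")
```

Read together, the two metrics tell a director whether a disappointing result reflects a failure to carry out the planned means or a malfunction of the means themselves, which is exactly what lets directors manage their own implementations without process police.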
HAVE WE ARRIVED YET?
We know we've made incredible inroads with acceptance of our new product development process over the two and a half years we've worked on it. We've met our time-to-volume goals, and product quality and reliability have improved. Our current lower prices indicate that we have achieved our cost reduction objectives as well. In 1992, we introduced 26 new products. We doubled that effort in 1993: using our new product development process, we successfully introduced 55 new products, with a reduced workforce!
What about the future? We continue to roll out “inch wide and mile deep” improvements to our process, and its scope continues to expand. We survey our customers (the development community) to elicit their perceptions of areas for improvement, and we end up working on surprising and challenging parts of the overall process.
At Apple, we are proving the merits of an integrated project management and new product development process. Together, they have enabled us to redefine excellence. Now, after another round of benchmarking “best-in-class” firms, we consider ourselves to be entering their ranks. Carpe diem… thanks to effective project management of improvement and an efficient new product development process, Apple is truly seizing the day.