Abstract
This paper applies the concept of a Capability Maturity Model to Training and Education – the realization that how you design and deliver training (your process) is as important to your success as what you deliver (your product). It is based largely on the classical CMM and CMMI developed by the Software Engineering Institute (SEI), but it doesn’t stop there. It draws inspiration from other maturity models and concepts in three categories:
General – Maslow’s Hierarchy, Bloom’s Taxonomy, Kirkpatrick’s Levels of Evaluation, Phillips ROI Process Model, Six Sigma, DMAIC – general, cross-functional or multi-industry models that may not specifically reference training maturity, but can be applied analogously
Industry – CMM, CMMI, People Capability Maturity Model, Organizational Project Management Maturity Model (OPM3®) – maturity models developed by independent sources or organizations for general application in business, government and industry
Vendor – The Gartner Group, Bersin & Associates, Zeroed-In Technologies, Knowledge Advisors, and the Center for Effective Performance – maturity models developed by vendors for specific use in corporate and commercial training
This paper also advocates the creation of a unified, industry-accepted standard for training maturity. (We nicknamed our emerging prototype “TEMMPO”, for “Training & Education Maturity Model in Projectized Organizations”.) This standard would consist of three components:
The Model: The maturity model and its component levels that define and categorize maturity
The Measures: Criteria and metrics, often referred to as Learning Analytics, that measure training results and determine what results have been achieved
The Tools: Performance measurement dashboards, balanced scorecards, and other tools that enable management to visualize and utilize the results of a maturity initiative.
Introduction
Given the popularity of CMM and maturity in general, it’s not surprising that several credible sources in the training arena have jumped on the maturity bandwagon and issued their own versions, often with particular reference to eLearning. It would be nice to see a unified, formalized, widely-accepted standard carved out from these proprietary vendor models. But such a standard may take years to develop, to generate support, and to build momentum. In the meantime, there are actions you can take now to bring the benefits of the CMM concept to your training program. Along the way, we also hope to start building a community of people who utilize and value CMM in the training framework.
Maturity, like many other PM-friendly ideas, developed first in the realms of IT, engineering and construction. These concepts then... slowly... trickled down to other, less-traditional areas, in effect taking metrics “where no metrics have ever gone before.” One of the joys of creating a training maturity model is taking tools and techniques that have benefited technical areas and applying them to “soft skills” that have heretofore resisted measurement. Consequently, these “softer” areas have not yet achieved the organizational status and recognition that technical domains receive.
This article is the first in a planned series that will “progressively elaborate” the training maturity concept. This paper equates to project initiation – exploring the background, describing the rationale, and seeking a “charter” to move forward and involve more people. The next step will be planning some actual “deliverables” - a draft Training Maturity standard, perhaps, or more tactical checklists and support tools. Eventually, such a standard may actually be developed and implemented.
Of course, at the end of the day, the fundamental purpose of any training program isn’t to adhere to a model, but to accomplish at least one of three goals: to increase revenue, to save money, or to comply with mandatory requirements. A Maturity Model, with appropriate metrics and management-savvy reporting tools, can only make it easier to meet these goals and to validate that they have been met.
What’s a Maturity Model, and Why Should You Care?
You just took a training class and it was great. Why? Because you really, really liked it. And you felt you really learned a lot.
BUT... Did you really learn as much as you think you did? How do you know? Did you like it for reasons that had anything to do with your employer’s objectives? And were you able to apply what you learned to what you do on the job?
You just taught a training class and it went really, really well. Why? Because you looked at the evaluations and the participants really, really liked it.
BUT... Did the participants take away value that they will apply on-the-job? Are their bosses likely to agree? Will these managers eagerly send their staff to the next training program you propose? And will you receive high-level support from senior management to develop more programs?
As division chief, you just approved a large-scale training program that will commit a lot of staff and resources for some time to come. The training department has assured you that it will be worth every penny. And you feel really, really good about it.
BUT... Does the training program truly align with your business goals and requirements? Will the benefits exceed the total costs? Will you recoup your investment and get the results you hope for? And how will you prove that the results came from the training itself, and not from coincidental factors such as improving markets or a successful advertising campaign?
This paper posits a Capability Maturity Model as a framework for creating, managing and measuring an organizational training program. It asserts that the way you create and manage training is as important to your success as the training itself. In other words, it values process as well as product in assessing the output of your efforts. How you do it becomes as important as what you do, and what you produce.
The model also reinforces the essential relationship of training to business results. Learning has been described as an “ecosystem” that touches all other aspects of the business. The sustainability of a training program -- and, to a great extent, a trainer’s career -- hinges on the alignment of the training unit to the broad business goals and strategic objectives of the overall organization.
The maturity concept adds value on several levels: (1) It improves the organization’s ROI through results-oriented learning initiatives. (2) It validates the contribution of the training department and its programs. (3) It makes it easier to secure funding and obtain management support. And, (4) it enhances the career path and skill set of trainers and trainees alike. It’s a win-win-win situation all around.
Industry Models
The Foundation – The Capability Maturity Model
The Capability Maturity Model, which has proven so valuable in IT settings, is now being used in other settings not traditionally associated with “hard” metrics. CMM is defined in SearchCIO.com as “a methodology used to develop and refine an organization’s software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DOD).” (Jarayam, 2003, ¶1) Similar to the ISO 9001 standards that specify an effective quality system for software development and maintenance, CMM “establishes a framework for continuous process improvement.” (Jarayam, 2003, ¶2)
SEI took CMM to a higher level with Capability Maturity Model® Integration (CMMI), defined as “a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization. CMMI helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a point of reference for appraising current processes.” (Carnegie Mellon, 2005, ¶1)
The People Capability Maturity Model (People CMM)
SEI has also released the People Capability Maturity Model (People CMM). Now in Version 2, “People CMM is a framework that helps organizations successfully address their critical people issues. Based on the best current practices in fields such as human resources, knowledge management, and organizational development, the People CMM guides organizations in improving their processes for managing and developing their workforces. The People CMM helps organizations characterize the maturity of their workforce practices, establish a program of continuous workforce development, set priorities for improvement actions, integrate workforce development with process improvement, and establish a culture of excellence.” (Carnegie Mellon, 2005b, ¶3)
The Five Levels of CMM and CMMI
There are five levels in the basic Capability Maturity Model:
- INITIAL: Processes are unpredictable, often reactive, not yet defined and documented, even disorganized. They are hard to replicate on the next project. Success may depend on individual efforts, and occasional “heroics”. Individual contributors may be so talented and productive that the organization does not yet realize that it lacks process and control.
- REPEATABLE/MANAGED: Basic processes have been established, defined, and documented. They can be repeated on subsequent projects. These processes are applied mostly to free-standing projects and are often reactive.
- DEFINED: The organization now uses internal control processes on individual projects and archives them. It pursues standardization and integration across projects. The approach is becoming proactive, but still resides largely within “silos”.
- PREDICTABLE: The organization analyzes, measures and controls processes across departmental units. Success is planned and predicted, rather than merely serendipitous.
- OPTIMIZING: The organization focuses on continuous improvement of its processes. It seeks out responsive innovations to better serve organizational needs.
The Maturity levels are linked to a web of Key Process Areas (KPAs) – related activities that contribute to achieving goals – as well as Key Practices, goals and activities. Taken as a whole, they enable the organization to continuously improve the processes of the activities and the products, or outcomes, of those activities.
CMMI describes the Five Levels a little differently: Performed, Managed, Defined, Quantitatively Managed, and Optimized. For the purposes of our initial discussion of a training model, these differences are not material, although they may provide fuel for discussion and consideration later on.
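To make the progression concrete, here is a minimal, purely illustrative sketch in Python. The level names follow the CMM levels listed above; the gate questions and the simple pass/fail scoring scheme are our own simplifying assumptions, not part of the SEI model or any official appraisal method.

```python
# Illustrative sketch only: level names follow the CMM levels described above;
# the gate checklist and scoring are assumptions, not the SEI appraisal method.

CMM_LEVELS = [
    "Initial", "Repeatable/Managed", "Defined", "Predictable", "Optimizing",
]

# Hypothetical yes/no practice gates, one per level above Initial.
PRACTICE_GATES = [
    "processes documented and repeatable",    # gate to Level 2
    "standards integrated across projects",   # gate to Level 3
    "processes measured across units",        # gate to Level 4
    "continuous improvement loop in place",   # gate to Level 5
]

def assess(answers: list[bool]) -> str:
    """Return the highest level whose gates are all satisfied, in order."""
    level = 0  # index into CMM_LEVELS; 0 == Initial
    for passed in answers:
        if not passed:
            break
        level += 1
    return CMM_LEVELS[level]

print(assess([True, True, False, False]))  # -> "Defined"
```

The point of the sketch is the ordering: an organization cannot claim a higher level while a lower gate remains unsatisfied, which is exactly the staged logic the model imposes.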
Organizational Project Management Maturity Model (OPM3®)
The Organizational Project Management Maturity Model (OPM3®) was developed by the Project Management Institute (PMI®) to apply project management principles at the organizational level. OPM3® was in large part built upon PMI’s original standards, including A Guide to the Project Management Body of Knowledge (PMBOK® Guide) and the Project Management Competency Development Framework.
OPM3® combines three elements: Knowledge (the contents of the standard, including maturity and best practices), Assessment (a method for comparing a real organization to the standard), and Improvement (how to change the organization, build capabilities and implement best practices). It does not contain a specific chapter or sub-set on training, although it specifically references training in two best practices. These best practices, and their official IDs, are:
5200 – Provide Project Management Training. The organization provides project management training appropriate for all levels within the project hierarchy.
5210 – Provide Continuous Training. The organization provides continuous training in the use of tools, methodology and deployment of knowledge. (PMI, 2003, p. 112)
According to Tom Keuten, PMI’s 2008 OPM3 Update Project Manager, training plays an important role in every OPM3 implementation. “In order for an organization to become more mature in its project, program and portfolio processes, training is required for the people responsible for executing those processes and there are specific capabilities in OPM3 that discuss this.” (Private correspondence)
Kerzner’s Project Management Maturity Model
The Project Management Maturity Model published by Dr. Harold Kerzner makes a valuable contribution by reframing the maturity concept in project management terms. Dr. Kerzner’s model has five levels, accompanied by a scored assessment:
Level One: Common Language (basic knowledge of PM and its terminology)
Level Two: Common Processes (defined, developed, repeatable)
Level Three: Singular Methodology (all organizational methodologies combined, synergistically)
Level Four: Benchmarking process improvement (used to maintain competitive advantage)
Level Five: Continuous Improvement
This model was articulated in Dr. Kerzner’s book, Strategic Planning for Project Management Using a Project Management Maturity Model (Harrison, Sweeney, Taylor, & Wood, 2003, ¶1). Although the model has a broad focus, Training & Education comprises one of six traits in Level Three needed to move towards achieving excellence.
Mark Harrison, writing for the Gale Group and the Defense Acquisition University Press, elaborates: “Kerzner described maturity in PM as the development of systems and processes that are repetitive in nature and provide a high probability that each project will be a success... Management must recognize that... excellence in PM will affect the organizational outcome, since it is essential for survival. Organizations that transform to PM rarely give it up – because it works.” (Harrison et al., 2003, ¶8)
General Models
There are some general models that contribute to a concept of training maturity. These include Maslow’s Hierarchy of Needs, Bloom’s Taxonomy, Kirkpatrick’s Four Levels of Training Evaluation, HR Management Information Systems and the People Capability Maturity Model (mentioned earlier as an offshoot of the original CMM).
Maslow’s Hierarchy
Maslow’s Hierarchy of Needs originated as a personality theory to describe the priority in which people seek to satisfy their needs. The hierarchy contains five levels:
Level One: physiological needs
Level Two: safety and security needs
Level Three: belonging needs
Level Four: esteem and recognition needs
Level Five: self-actualization needs
Although the original model was applied to general, even primitive, life situations, Maslow’s Theory adapts well to business. For example: most of us are not being chased by real bears, but mergers, reorganizations, downsizings and generic job stress may still keep us tethered to the “security and safety” level.
Esteem in the workplace equates to “status, fame, glory, recognition, attention, reputation, appreciation, dignity, even dominance.” At a higher stage of development, esteem might be expressed as “self-respect... confidence, competence, achievement, mastery, independence, and freedom.” The highest level, Self-Actualization, engages our desire to fulfill potentials, to exert a positive influence in the world, to “be all that you can be.” (Boeree, 1998, Theory ¶9)
Maslow’s Hierarchy is not a maturity model per se, but it traces a path to personal maturity that parallels professional development and business growth. And, interestingly, it relates individual maturity to that of the surrounding environment.
Bloom’s Taxonomy
While Maslow’s Hierarchy is generically applicable to business in general, Bloom’s Taxonomy offers a hierarchy to specifically categorize learning and education. As described by Don Clark in his online article “Learning Domains or Bloom’s Taxonomy”, Benjamin Bloom and his colleagues identified three domains of educational activities: cognitive (mental skills, or knowledge), affective (feelings or emotions, i.e. attitudes), and psychomotor (manual or physical skills). (2001)
Bloom’s Taxonomy pairs categories and key words to facilitate the development of test questions and learning objectives. While not a normative system, Bloom’s Taxonomy is highly regarded by many trainers as a mature way to think about educational programs, and should be included in the total range of resources behind training maturity.
The Legacy of Kirkpatrick
The training profession has long relied on an almost iconic model developed by Donald Kirkpatrick, the Four Levels of Training Evaluation. These four levels – reaction, learning, behavior, and results – roughly map to four basic questions about the learners’ experience – Did they like it? Did they learn it? Can they do it? Was it worth it?
The fourth level -- Results -- asked whether the learning activity produced the desired change in business operations, but it didn’t go far enough. One critical question remained unanswered: Did the learning deliver the desired results for less cost than the monetary value of those results? Over time, a putative Fifth Level has been added, Return on Investment (ROI), to address this critical concern. However, the elements of time, physical distance, and diverse media make it very hard to measure training ROI.
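As a rough illustration of the Level Five arithmetic, here is a minimal sketch in Python. The calculation is the standard ROI formula (net benefits divided by costs); the figures are hypothetical, and the genuinely hard part, isolating which benefits are attributable to the training itself, is deliberately not modeled.

```python
def training_roi(benefits: float, costs: float) -> float:
    """ROI as a percentage: (benefits - costs) / costs * 100.
    Attributing `benefits` to the training itself is the hard part,
    as noted above, and is not modeled here."""
    return (benefits - costs) / costs * 100.0

# Hypothetical figures: a $1,000,000 program that yields $750,000 in
# measurable benefits produces a negative return.
print(training_roi(benefits=750_000, costs=1_000_000))  # -25.0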
Phillips ROI Methodology
The fifth level of training evaluation, ROI, was advanced by Dr. Jack Phillips through his ROI Institute. Dr. Phillips developed his own model, which consists of six levels of evaluation: (1) reaction, satisfaction and planned action, (2) learning and application, (3) implementation, (4) business impact, (5) ROI, and (6) intangibles. Here the Kirkpatrick and Phillips models are shown side-by-side:
Level One: Reaction / Reaction, satisfaction & planned action
Level Two: Learning / Learning & application
Level Three: Behavior / Implementation
Level Four: Results / Business impact
Level Five: (Not in Kirkpatrick) / ROI
Level Six: (Not in Kirkpatrick) / Intangibles
Dr. Kirkpatrick first published his model in 1956, based on what became known as the “Kirkpatrick Assumption” – the belief that “the immediate objective of any training class can be stated in terms of the desired knowledge and understanding that the program is trying to impart in the students.” (Knowledge Advisors, 2006a, graphic) Kaliym Islam, Director of Customer Training & Information Products for The Depository Trust & Clearing Corporation, suggests that this assumption is no longer completely valid, or sufficient, to describe training’s impact. (2005)
The Six Sigma Alternative
Writing in LTi Newsline, Islam explains “... The Kirkpatrick Assumption essentially mandates what is measured and what is reported to project stakeholders. This mandate takes place regardless of the perspective or expectations of the process partners.” For example, high evaluation scores might not be relevant to a CEO/sponsor who spent $1 million to implement a $750,000 program. An operations manager who missed a deadline because key staffers were away at a training class would have a different perception, as well. (2005, ¶13)
Islam suggests an alternative approach, Six Sigma, as a way to synchronize training results with corporate objectives. Six Sigma has gained wide popularity and success because of its focus on customer service and stakeholder satisfaction. “One immediate benefit of applying Six Sigma is the business credibility to learning programs that the methodology brings. Unlike... Kirkpatrick’s evaluation methodology, that [is] proprietary to the training world, Six Sigma brings a methodology that is universally accepted by businesses across various industries... Because of its customer focus, Six Sigma provides ideal solutions for showing and proving the business impact of training.” (2005, ¶14)
DMAIC
Another well-known model used in the training industry goes by the acronym DMAIC, which stands for Define, Measure, Analyse, Improve and Control. It originated as a problem-solving cycle within the Six Sigma movement. According to Trevor Durnford of Kaizen Training, “this tool helps teams approach problems in a logical, stepwise method, ensuring that the problem is clearly identified at the outset; that root causes are explored and measured before solutions are implemented.” (2005, December 5) DMAIC is sometimes used as an instructional design framework. Among its benefits are the opportunities to capture and share lessons learned among colleagues who are working on similar projects throughout the organization. It’s not a maturity model per se, but it encapsulates a mature, projectized approach to training, as the sketch below illustrates.
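To show how the cycle imposes order on a training problem, here is a minimal sketch in Python. The phase names come from DMAIC itself; the one-line phase goals and the sample problem are illustrative assumptions, our paraphrase rather than any official definition.

```python
# The five DMAIC phases, in order, each paired with an illustrative goal
# statement (the goals are our paraphrase, not an official definition).
DMAIC_PHASES = [
    ("Define",  "State the performance problem and who it affects."),
    ("Measure", "Baseline the current state: scores, completion rates, costs."),
    ("Analyse", "Find root causes before proposing any training solution."),
    ("Improve", "Design and pilot the intervention; compare to the baseline."),
    ("Control", "Monitor results and capture lessons learned for reuse."),
]

def run_dmaic(problem: str) -> None:
    """Walk a training problem through the five phases, in strict order."""
    print(f"Problem: {problem}")
    for phase, goal in DMAIC_PHASES:
        print(f"  {phase}: {goal}")

run_dmaic("New hires take too long to reach full proficiency")
```

The discipline the list enforces is the point: root causes are measured before any solution is designed, and control comes last, so lessons learned survive the project.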
Vendor Models – Gartner, Bersin, CEP
Gartner’s eLearning Maturity Model
The Gartner Group has sponsored a Maturity Model for eLearning that consists of SIX levels: Non-existent, Initial, Repeatable, Defined, Managed, and Optimized. No surprises here! Gartner further identifies six characteristics of successful eLearning programs -- characteristics that might well apply to training across-the-board, and are presumptive indicators of overall maturity. These characteristics are Vision and Strategy, Sponsorship, Roles and Responsibilities, Funding, Business Processes and Policies, and Technology.
The Gartner levels echo the overall path of classical CMM:
Level Zero: Non-existent – use of eLearning is sporadic, intermittent, somewhat casual, and usually focused on IT applications.
Level One: Initial – awareness kicks in and the organization starts to perceive eLearning as a worthwhile endeavour, but it may still be pursued in “silos”.
Level Two: Repeatable – eLearning is now regarded as critical and gets funded and sponsored in specific business “domains”; many of the eLearning deliverables may be reused and shared among these domains.
Level Three: Defined – an enterprise-wide eLearning strategy has been defined and communicated. It is used not just for training skills, but also for developing career competencies.
Level Four: Managed – eLearning is integrated into the organization’s strategic plans and the function is regarded as a *program*, not just as a string of individual projects. Development activities are well-defined and automated, and a continuous feedback loop (replete with metrics) assures continuous improvement.
Level Five: Optimized – training is completely integrated into the organization’s strategic fabric and the approach is pro-active, student-centric, and highly leveraged, with reusable content and LMS-supported tracking and measurement.
Bersin’s Four Stages of eLearning
Josh Bersin of California-based Bersin & Associates presents a four-stage maturity model focused on eLearning, but also broadly applicable to a wide range of training scenarios. The first three stages are Getting Started, Expansion, and Integration and Alignment. (2005)
The first stage applies to organizations starting a new eLearning program. Largely motivated by the opportunity to save time and money, they rely mostly on off-the-shelf catalogue content housed in a fairly basic LMS. During the second phase (expansion), organizations grapple with issues related to wider access, outreach, support, branding, customization, and blended learning.
After three to five years of eLearning experience, the organization will enter the third phase, characterized by integration and alignment. In this scenario, eLearning is no longer a standalone solution, but is woven into the fabric of the organization and its work processes, including performance management. A fourth, somewhat hypothetical phase-of-the-future posits the widespread adoption of eLearning that makes full use of rapid eLearning, simulation, an active LCMS, continuous improvement, and learning analytics to measure and control learning activities.
Knowledge Advisors
Knowledge Advisors has created the Human Capital Contribution Model™ (HCCM) to support its Learning Analytics Tool, “Metrics that Matter”. The tool processes and delivers training metrics in four reporting areas: tactical, aggregate, executive, and ROI. The HCCM consists of five processes that “enable learning organizations to measure and improve business results and bottom-line impact”. These are: (1) business needs analysis, (2) performance analysis, (3) business results analysis, (4) ROI analysis, and (5) profit impact analysis.
“Metrics That Matter” combines HCCM, a “Business Result ROI Model” and the Phillips ROI Process Model to create a “series of valuation models” that allows users to choose measures that match their current level of maturity. HCCM focuses on Level Three of the Kirkpatrick scale, Job Impact. Business Result ROI zooms in on Level Four, Results, and the Phillips ROI Process focuses on Level Five, ROI. (Knowledge Advisors, 2005b)
Center for Effective Performance
The Center for Effective Performance (CEP) offers a five-phase Maturity Model to “provide a strategic framework for becoming a performance-based... organization (that) is structured around defined and measurable process that are aligned with organizational goals.” The five phases are: Discover, Plan, Manage, Optimize, and Sustain. In a recent webinar, CEP vice president Paula Alsher presented an assessment plan and a ten-step program for achieving maturity.
CEP asserts that training is only part of the solution to most performance problems. A cornerstone of the CEP model is the Criterion-Referenced Instruction Methodology. Developed from behavioural science research, it is also closely aligned with job performance, broken down into three areas: Skill, Motivation (Will), and Operational (Hill). When all three areas are engaged, the organization can achieve high performance and efficiency. Under the pragmatic CRI scenario, training is derived directly from the job, moulded to the needs of the learners, based on clear learning objectives, and covers only what is required to meet those objectives. Such training mirrors actual job conditions and provides immediate practice and feedback.

Exhibit I: Quick Comparison of Training Maturity Models and Reference Models
Making it Measurable – Learning Analytics
Former GE CEO Jack Welch (and a number of other people) famously said, “If you can’t measure it, you can’t manage it.” No learning maturity system would be meaningful if it were not pegged to measures, and equipped with a way to translate those metrics into understanding. Maturity, management and metrics are inseparable and indispensable.
Learning analytics is the place where all three come together and the “rubber hits the road”. Learning Analytics is the subset of business intelligence that measures the impact of training and professional development programs on actual business results. Like a Work Breakdown Structure, learning analytics break down, or decompose, learning data through logical analysis to generate meaningful management-focused information.
Zeroed-In Technologies employs a set of Key Learning Indicators (KLIs) to measure and predict goal attainment in training. According to Zeroed-In CEO Chris Moore, “each KLI describes some critical, must-achieve business outcome, such as operational excellence or compliance.” (Moore, 2005, p. 5) The seven KLIs in the Zeroed-In model are listed here, with a brief computational sketch after the list:
- Operational Excellence – maximum Return on Effort (ROE) (as measured in cost per student day, ratio of training costs to full time employees, enrollment and fill rates, facility costs, training revenue, development timelines, etc.)
- Learning Effectiveness – optimal results from instructional design and delivery (as measured in satisfaction surveys, progress indicators, completions, comparative pre- and post-assessments, return on investment (ROI), etc.)
- Compliance – adherence to all internal and external mandated regulations (supported by a robust audit process, etc.)
- Change Readiness – prepared and receptive workforce, able to respond to strategic change (as measured in cross-training and communication characteristics and climate surveys, etc.)
- Time to Competency – minimum amount of time needed to master the requirements of a job (as measured in proficiency levels, usage of “just-in-time” learning, ratio of job time to learning time, duration of orientation program, etc.)
- Workforce Proficiency – level of knowledge and competency in the workforce (as measured through test scores, skills gap analysis, productivity, quality ratings, 360-degree reviews, etc.)
- Point of Engagement – level of employee engagement, motivation, and commitment (as measured by retention, attrition, punctuality, customer surveys, results, etc.)
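As a minimal sketch of how two of these KLIs might be computed, here is some illustrative Python. The record fields, figures, and formulas are our own simplifying assumptions drawn from the measures named in the list above; Zeroed-In’s actual product logic is not public at this level of detail.

```python
# Minimal sketch, using assumed field names: two of the KLIs above
# expressed as simple computations over hypothetical training records.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    student_days: int   # total student-days delivered
    total_cost: float   # fully loaded program cost
    enrolled: int       # seats filled
    capacity: int       # seats offered
    pre_score: float    # average pre-assessment score
    post_score: float   # average post-assessment score

def operational_excellence(r: TrainingRecord) -> dict:
    """Cost per student-day and fill rate (two measures named above)."""
    return {
        "cost_per_student_day": r.total_cost / r.student_days,
        "fill_rate": r.enrolled / r.capacity,
    }

def learning_effectiveness(r: TrainingRecord) -> dict:
    """Pre/post assessment gain as a crude effectiveness indicator."""
    return {"score_gain": r.post_score - r.pre_score}

q3 = TrainingRecord(student_days=400, total_cost=120_000.0,
                    enrolled=95, capacity=100,
                    pre_score=58.0, post_score=81.0)
print(operational_excellence(q3))   # cost_per_student_day: 300.0, fill_rate: 0.95
print(learning_effectiveness(q3))   # score_gain: 23.0
```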
These KLIs can be matched to successive levels of the maturity model. At the higher levels, training becomes part and parcel of overall corporate planning, baked into the company’s balanced scorecard and reflected in its “performance management dashboard”, sometimes rolled up into a broader “organizational dashboard.” (2005)
Training Impact on the Extended Enterprise
In addition to measuring the internal maturity of the training unit, training may be assessed by what it contributes to other departmental units and to the extended organization – development, sales, support, customers, suppliers, subcontractors, distributors, channel partners and others in the “value chain”. Cushing Anderson and Lyn C. Maize, writing in Chief Learning Officer magazine (2005, ¶8), identify eight cycles in the value chain that may interact with training: product development, supply, logistics, sales, implementation, maintenance, customer relationship and support. These eight cycles, in turn, suggest a new range of metrics that may be embedded in a maturity assessment. For example, training of maintenance personnel may be linked to reduced costs, faster repair times, up-sell awareness and reduced breakage. Customer service training may be measured against customer loyalty, brand value, and sales cycle. To the extent that the elements of the value chain cycles are prized in organizational maturity, the impact of training efforts in meeting value chain objectives will become equally important to training maturity.
Making it Visible – Organizational Dashboards
Distilling Wisdom from Data
Zeroed-In Technologies CEO Chris Moore exhorts us to “view learning as an investment, not an expense” and to directly align learning with the needs of the business to show a measurable return on that investment. Further, he decries learning systems that are “data rich but information poor”, that “rarely approach reporting from a measurement perspective” and, consequently, “can’t link or align your activity back to your strategic initiatives, or can’t do so without a lot of external wizardry... Put simply, today’s learning systems don’t store information about business goals, objectives, measures, targets, and projections.” (2005, p. 4)
One tool to distill training data into knowledge and intelligence is the performance management dashboard. A dashboard provides a single, integrated view for monitoring key initiatives. Or, as described on the PMOStep web site, it is a “concise, visual representation of consolidated status information and metrics. It may contain graphs, tables and charts that show the overall status and health of the organization and projects. Many dashboards allow you to click on each graph and chart to see the detailed metrics that make up the total.”
Some of the metrics that can be captured on an organizational dashboard, specifically as they relate to training, include efficiency (numbers served per money expended), effectiveness (completions and scores), compliance (attainment of mandatory goals) and quality (process improvement and measurement). The dashboard can be used to assess, monitor, predict and drive your training organization.
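As a minimal sketch of how the four metric families above might roll up into a single dashboard row, here is some illustrative Python. All field names, units, and figures are hypothetical assumptions, not taken from any particular dashboard product.

```python
# Illustrative sketch: rolling the four training metric families named above
# (efficiency, effectiveness, compliance, quality) into one dashboard row.
def dashboard_row(learners: int, spend: float,
                  completions: int, enrollments: int,
                  mandatory_done: int, mandatory_required: int,
                  avg_quality_score: float) -> dict:
    return {
        "efficiency (learners per $1K)": learners / (spend / 1_000),
        "effectiveness (completion rate)": completions / enrollments,
        "compliance (mandatory coverage)": mandatory_done / mandatory_required,
        "quality (avg score)": avg_quality_score,
    }

print(dashboard_row(learners=250, spend=50_000.0,
                    completions=230, enrollments=250,
                    mandatory_done=240, mandatory_required=250,
                    avg_quality_score=4.2))
# {'efficiency (learners per $1K)': 5.0, 'effectiveness (completion rate)': 0.92,
#  'compliance (mandatory coverage)': 0.96, 'quality (avg score)': 4.2}
```

A real dashboard would, of course, let managers drill from each rolled-up figure into the detailed records behind it, as the PMOStep description above suggests.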
Zeroed-In’s Chris Moore reminds us that “You’re not steering a go-kart; you are driving a race car, a competitive machine with paid sponsors and onlookers. Go-karts have few controls and no dashboard. The pace is slow and the mechanics are simple. A race car is a complicated system with electronics and devices to aid the driver in maneuvering and decision making. The dashboard is the central monitoring center for the driver. You are the driver of your learning organization.” (2005, p. 10)
Summary
Most people find it easy to “connect the dots” between the achievement of business goals and the fulfillment of personal goals – continued employment, recognition, remuneration, advancement, and prestige. This is especially true in the training area, which is often relegated to the shadowy precinct of “cost center” and “support area” and not regarded as “mission critical”. A maturity model brings precise methods, measures and metrics – learning analytics – to your training management repertoire, and displays them on an organizational dashboard that can add visibility, validity and prestige to your efforts.
Using a maturity model – or, at least, adopting a maturity approach – will help prove the value of your training initiatives to your sponsor (and to your employer, in case they are not one and the same). Adding metrics to the mix will help you quantify the business impact of training in terms these executives will salute. It will bring the predictability, repeatability, clarity, elegance, stability and (dare we say) tranquility of project management methods and best practices to your instructional design and delivery process. It will generate a cycle of continuous improvement that extends and amplifies these rewards into the future. And it will provide useful analogies to help you manage your own career – meeting your personal objectives while satisfying those of your organization.
Now that you’ve had a chance to get acquainted with the concepts and tools of the Capability Maturity Model applied to a learning organization... Are you going to drive a go-kart or a race car? Are you going to dazzle your boss with ad hoc heroics, or support your organization with a sober and serious alignment to its long-term goals? Are you going to coast in your area of specialization, or are you going to climb the ladder to higher maturity? Some people will enjoy success with a totally Level One approach, just like some people will roll out of Las Vegas with pockets stuffed with their casino wins. But do you want to bet your career on it?
This paper, with its limited scope and depth, is intended to set the stage for further discussion, debate and development of a unified standard for training management maturity. We have tried to present a broad survey of what’s out there and what can be done. Organizational maturity is not achieved in a straight, seamless line. There are bumps, bruises and glitches along the way. Each organization will progress in a unique trajectory; it may achieve some Level Four qualities before it completely masters Level Two. And the maturity concept itself will stretch and grow unevenly, as it strives towards a maturity of its own.
Metrics and measures are critical to understanding training results. But the maturity concept doesn’t just help us track what has already happened; it helps guide our path going forward. Gartner researcher Kathy Harris, speaking of Gartner’s eLearning framework but speaking for such models in general, says “[the] maturity model provides a framework to evaluate the current state of your eLearning program. However, the greater value of the model is in using it to chart a path toward a more effective eLearning program.” (Kaizen Training) So, maturity can help us plan, predict, measure, evaluate, improve and align throughout the entire training process.
It has been said that a mind, once stretched, never returns to its original shape. I hope that *your* mind has been stretched by the concept of training maturity, and that it will pay increasing dividends as you continue your journey in project management and in learning.