Why use a hammer when you need a wrench?
Results-based monitoring and evaluation of projects
Monitoring and Evaluation (M&E) has been used by nongovernmental organisations (NGOs) to evaluate programmes for decades. For the European Union, the United Nations, the World Bank and other development banks, M&E is embedded in organisational processes. Many have even published M&E programme toolkits to promote understanding and adoption. Country associations, such as the Swiss Evaluation Society (SEVAL), have also advanced the adoption of M&E for policies and government-funded programmes. For programme evaluation, there is usually not one set of standards to pick from but many.
For projects, processes for monitoring progress are far less established. It is therefore little surprise that the quality of those monitoring processes varies widely. By quality, we mean at a minimum timeliness, relevance, reliability, accuracy, usability and credibility. Unless monitoring processes demonstrate these characteristics, they are unlikely to improve performance or enhance accountability.
This paper examines project monitoring in three sectors: government, NGOs, and construction. Construction was chosen because it has a record of using external project monitors successfully. We discuss the differences among monitoring, evaluating, and managing projects, as well as results-based management (RBM) and how it can be applied to monitoring and evaluating projects. Examples of approaches and tools are presented, along with a nine-step process for monitoring projects, what it takes to be successful, and the potential risks to that success.
The need for monitoring
The rise of outsourcing has raised interest in the evaluation and monitoring of projects. How does an entity know how a project is progressing when, through outsourcing, it no longer has in-house expertise in that area? The truth of the matter is: it may not. Government has increasingly relied on outside contractors to design, build, and even conduct oversight, and concerns are now being raised that the pendulum has swung too far. A recent study of the U.S. federal government's Deepwater project found the client agency had a shortage of engineering and government acquisition experts (Karp, 2007, p. A16). Government leaders are now worried that too much control has been given to contractors and not enough expertise kept in-house.
The use of new technology can create a situation similar to outsourcing. One of the best documented types of failure is the implementation of Enterprise Resource Planning (ERP) systems, which replace existing legacy systems with new technology. Rarely is the client organisation well versed in what the end result should look like or how to get there; the client is dependent on the vendor and system integrator. New technologies such as VoIP, wireless, or new engineering techniques can fall into the same category. When no one in-house has sufficient expertise to evaluate progress, reliance rests entirely on the vendor, and this reliance has sometimes proven to be misplaced.
Another reason for monitoring is that more money is involved. Projects are getting bigger, but the results are not necessarily better. In the oil and gas industry, rising oil prices have buffered the consequences, but the facts show projects often do not meet their original baselines. A study by Booz Allen found that over 35% of projects exceeded their baseline by at least 10% in cost, schedule, or both; for megaprojects (costs exceeding $1B), the figure was worse: nearly 40% (Booz Allen, 2006, p. 3).
Government and NGOs are also under increasing pressure to show “value for money”. Constituents and donors are demanding transparency and accountability, and the increase in the number of NGOs has caused competition for donations. The Organisation for Economic Co-operation and Development found that results-based reporting improved NGOs' ability to compete for funds by convincing a sceptical public or legislature that an agency's programmes produce significant results and provide value (Binnendijk, 2000, p. 7). Conversely, if such programmes do not provide value, they are more carefully scrutinised. With the rise of the internet, this scrutiny is easier, and failure is becoming increasingly obvious.
Exhibit 1 shows some major projects/programmes, most of which were, to use kind words, challenged.
Exhibit 1 – Challenged Projects
To improve performance results, more entities are trying project monitoring, including several of the projects above; however, none used RBM. The TNS project used an external monitor who showed in 1997 that the project was £2m short, but it was not clear to whom the issue should be directed. The question was raised with the shell company, TNS plc, which responded that debentures could be used to raise funds; however, TNS plc was not the party really responsible (ESC Committee, 2001). The U.S. state government projects in Florida used an internal group specifically set up in 1997 to monitor information technology projects, and a special oversight committee is being convened to evaluate the projects. For the Aspire project, the internal group had expertise in project management and technology but lacked in-depth subject matter expertise in the product being implemented, PeopleSoft. The U.S. State of Wisconsin project had an external monitor, but the subsequent evaluation found that the monitor's role and responsibilities had never been defined, and the vendor who won the contract assigned individuals who lacked subject matter expertise and did not take a hands-on approach (Virchow Krause, 2004, p. 5). Using RBM addresses the issues that occurred: clear definition of roles and responsibilities, subject matter expertise, and use of a hands-on approach.
The construction industry offers insights into how monitoring can be effective. Imagine you are building a vacation home in Spain, but you live near London. Would you be comfortable assuming the real estate promoter is looking out for your best interests? Even more importantly, would your lender? The answer is, "Probably not." Your lender would likely require you to hire a project monitor, who would evaluate change orders, conduct onsite visits, identify problems proactively, review the schedule periodically, evaluate monthly progress, verify that periodic progress payments are in order, assess the critical path, and perhaps even take photos of progress. In your absence, the monitor would be your advocate, ensuring your (and your lender's) interests were represented.
An example of successful project monitoring is King's Gait, a Regency Homes project in Scotland. The lender, Bank of Ireland, contracted with an architectural firm to monitor the project. The location posed two challenges. First, the land, formerly part of Eastfield Playing Fields, was a brownfield site contaminated with chromium; it had been fenced off and taken out of public use in the early 1990s (SLC, 2005) and was subject to a full decontamination process prior to development. Second, connection to the main sewer was problematic because the sewer was over capacity. The monitor was able to work through these issues in the pre-planning period, which avoided contractual delays and cost. The precontract period was longer because of the two issues, but on site the project went according to programme. Costs did rise from the original budget, mainly due to the sewer attenuation issue, but not dramatically. The construction of 111 flats will be completed in July 2007 at a final cost of £10 million.
Turnkey contracts for facilities, often used in construction, are another example of where a project monitor is often used and provides benefit. A turnkey contract creates a situation where the client is totally dependent on the vendor to deliver a ready-to-use facility. Monitoring provides peace of mind in several ways: it ensures progress payments match actual progress, reduces the risk of nonconformity to specifications, and increases the probability that deviations which could be hidden by project end are brought to light. Onsite visits by monitors may improve work site safety (Sweet, 1999, pp. 125-126). In construction, monitoring engineering and project progress is regarded as a key control (Kimmons & Loweree, 1989, p. 94). Outsourcing doesn't eliminate the need for this control; it changes how the organisation ensures the control is performed and effective. All projects experience issues, so it is important to have adequate monitoring methods to recognise issues, identify appropriate adjustments and incorporate these into the system used to manage the project.
Managing versus monitoring versus evaluating
Usually project managers manage projects. When their role changes, due to outsourcing or a move from the private sector to a government agency or an NGO, they may not be comfortable. The roles and responsibilities for monitoring and evaluating projects are quite different from those for managing the projects themselves. The project manager needs to understand the differences between managing a project, monitoring it, and evaluating it.
Managing the project entails taking responsibility for the processes necessary to achieve the project objectives. It includes building and managing the team, communicating with stakeholders, controlling costs, developing the work breakdown structure and timeline, managing risk, managing quality, meeting deadlines, and solving problems to bring the project in on time and on budget. As defined by A Guide to the Project Management Body of Knowledge (PMBOK® Guide), the term monitor means to "collect project performance data with respect to a plan, produce performance measures, and report and disseminate performance information" (PMI, 2004, p. 364).
In a results-based context, the term monitor has a broader meaning. Performance monitoring is a process of assessment based on participation, feedback, data collection, analysis of actual performance (using indicators) and regular reporting. Monitoring provides the information required to assess whether progress is being made towards achieving results. It provides the opportunity to review the assumptions made early in the project to be sure they still hold true and to decide whether the original strategies are still appropriate. Monitoring aims to provide the sponsor and key stakeholders with regular feedback and early indications of progress (or the lack thereof) in achieving intended results. It tracks actual performance against plan according to pre-determined standards, collecting and analysing data on processes and results and recommending corrective measures. (UNFPA, 2001) By examining impact and outcomes as well as outputs (Exhibit 2), results-based monitoring goes beyond earned value management, a technique sometimes used to perform implementation monitoring.
Exhibit 2 – Results-based monitoring compared to implementation monitoring
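To make the comparison concrete, the implementation-monitoring side can be sketched with the standard earned value management formulas (planned value, earned value, actual cost). The figures below are hypothetical illustrations, not drawn from any project in this paper.

```python
# Minimal sketch of earned value management (EVM), the implementation-
# monitoring technique mentioned above. All figures are hypothetical.

def evm_metrics(pv, ev, ac):
    """Standard EVM measures for planned value (PV), earned value (EV)
    and actual cost (AC), all expressed in the same currency units."""
    return {
        "cost_variance": ev - ac,      # CV > 0 means under budget
        "schedule_variance": ev - pv,  # SV > 0 means ahead of schedule
        "cpi": ev / ac,                # cost performance index (EV/AC)
        "spi": ev / pv,                # schedule performance index (EV/PV)
    }

# Example: £4.0m of work planned to date, £3.5m earned, £3.8m spent.
m = evm_metrics(pv=4.0, ev=3.5, ac=3.8)
```

Note what EVM does and does not reveal: a CPI and SPI below 1.0 flag cost and schedule slippage against the plan, but say nothing about whether the outputs are producing the intended outcomes, which is exactly the gap results-based monitoring addresses.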
Evaluation is a time-bound exercise to assess systematically and objectively the relevance, performance and success of ongoing and completed projects. Evaluation is undertaken to answer specific questions, to guide the sponsor, decision-makers and managers, and to provide information on whether underlying assumptions used in development and designs were valid, what worked and what did not work, and why. It emphasises analysing factors that affected results whether positive or negative, and on identifying lessons learned. (UNFPA, 2001)
From these definitions, one can see that the roles are quite different. A project manager who is placed in a monitor role will use the skills of analysis, observation and influence more than motivation or team-building. Clear agreement on the roles and responsibilities of the project monitor is essential for the role to be successful and to improve project performance. If the project monitor is relegated to attending status meetings and preparing periodic reports, the role will not be effective as defined by this paper.
So what is results-based management, and how can it help a project manager who must now monitor but not manage a project? RBM involves collaboration to set agreed-upon indicators and standards against which results can be measured. Because these decisions are made in advance, there is less surprise when the project monitor raises a concern. In this way, it is similar to setting risk triggers: once a trigger is set and the event occurs, the project manager doesn't hold a meeting to decide whether the contingency plan should really be implemented. That decision has already been made; the contingency plan is implemented and then periodically checked to see if it is working. Setting up a framework in advance provides the same benefit for the project monitor, who is seen as following procedure rather than interfering or second-guessing the project manager. The agreed-upon indicators and how they will be verified are compiled in a matrix called a logical framework (LogFrame).
Three categories of information collected during monitoring have been identified (Forum Solint, 2003, p. 41):
- a) Information on the implementation of planned activities and stakeholder participation, to support the day-to-day management of the projects;
- b) Information on the results attained through project activities and stakeholder response, to assess progress towards results and review work plans for follow-up; and
- c) Information on the achievement of specific objectives and first impact, to review the strategy approach and problem solving.
Thus monitoring can be broken down into three types:
- On-going monitoring: to identify and highlight problems as they emerge,
- Regular monitoring: to analyse problems and deviations and suggest follow-up measures, and
- Monitoring at specific moments of project life: to review the whole strategy and recommend adjustments.
During the project, the monitor can collect data through self-assessment questionnaires, individual and group interviews, surveys, before and after pictures (for a construction project), site visits, reviewing deliverables and substantive project documentation, evaluation of training, user participation, review of issues and risks, and observations. Performance-based monitoring and evaluation combines the traditional approach of monitoring implementation with the assessment of results (Mayne, 1999). Results-based monitoring goes beyond implementation monitoring in that it examines what changes are occurring in the real world.
Approaches to monitoring
Organisations usually select from four basic approaches to performance monitoring. No matter which method is selected, the project manager has overall accountability for the project. The first option is internal monitoring. In this case, performance measurement and monitoring are the responsibility of those most closely involved in project implementation: the organisation's staff. Internal monitoring is essentially a form of continuous performance self-assessment where the project team has the capacity, and is given the responsibility, to undertake performance measurement and reporting.
The second option is to build an internal but independent group to monitor projects. The U.S. State of Florida's Technology Review Workgroup is one example. Organisations have to be committed to building and maintaining this group to use the approach successfully. One advantage of this method is that a body of knowledge can be built and leveraged in the future.
A third option is external monitoring, where a consultant is contracted as an independent project monitor to track and report on performance. The monitor may report to the project sponsor, a steering committee, a government body such as a legislature, the project manager, or the prime lender. The architectural firm that performed monitoring services at King's Gait in Glasgow, Scotland for the Bank of Ireland is an example of this approach.
A fourth option, external support, makes the project manager responsible for the performance measurement function, but provides support to build organisational capacity in this area. This performance advisor also monitors the validity and reliability of the performance information being reported. The Canadian International Development Agency uses this approach on occasion.
Tools for monitoring
A number of toolkits are available for programme evaluation. Unless you are joining an established group that performs project monitoring, you will probably have to create your own toolset library based on your organisation's needs and expectations. Some basic tools are shown below.
Switzerland's society for evaluation, SEVAL, has published standards which can be applied to projects as well as programmes and appear general enough to apply to monitoring. (www.seval.ch/en/standards/index.cfm)
A matrix to plan monitoring activities is useful as it provides a record of what is to be performed and lends transparency to the process (Exhibit 3). Additional activities not initially planned could be added if needed.
Exhibit 3 – Example schedule of monitoring activities
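A schedule like Exhibit 3 can also be kept as a simple structured record, which makes it easy to report what was planned against what was actually performed. The sketch below is an illustration only; the activity names, methods and periods are hypothetical examples, not taken from the exhibit.

```python
# Illustrative sketch of a monitoring-activity schedule, in the spirit
# of Exhibit 3. All activity names and periods here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class MonitoringActivity:
    name: str
    method: str          # e.g. document review, site visit, interviews
    planned_period: str  # e.g. a quarter or month
    performed: bool = False
    findings: list = field(default_factory=list)

schedule = [
    MonitoringActivity("Review progress payments", "document review", "2007-Q2"),
    MonitoringActivity("Verify milestone completion", "site visit", "2007-Q2"),
    MonitoringActivity("Assess stakeholder response", "interviews", "2007-Q3"),
]

# Transparency: anyone can see which planned activities remain outstanding,
# and unplanned activities can simply be appended to the schedule later.
outstanding = [a.name for a in schedule if not a.performed]
```

Keeping the schedule as data rather than ad hoc notes supports the transparency goal above: the record of what was to be performed, and whether it was, is available to all stakeholders.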
A LogFrame (Exhibit 4) is a useful tool. In some cases, it may be simpler to follow similar principles but not use this exact tool, e.g. use a participatory approach, identify clear and verifiable indicators for objectives, define what measures will be taken if the objectives are not met, and ensure monitoring is performed.
Exhibit 4 – LogFrame example (WIOMSA, 2007, p.1)
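The LogFrame's matrix structure can likewise be sketched as data: rows for the objective levels and columns for indicators, means of verification, and assumptions. The entries below are illustrative only, loosely modelled on the King's Gait example discussed earlier; they are not taken from Exhibit 4.

```python
# Minimal sketch of a LogFrame as a data structure. The row levels and
# columns follow the standard LogFrame layout; the entries themselves
# are hypothetical, loosely based on the King's Gait example above.

logframe = {
    "purpose": {
        "summary": "Construct 111 flats fit for occupation",
        "indicators": ["Flats handed over by July 2007",
                       "Final cost within approved budget"],
        "verification": ["Completion certificates", "Cost reports"],
        "assumptions": ["Decontamination completed before works start"],
    },
    "outputs": {
        "summary": "Site decontaminated and connected to main sewer",
        "indicators": ["Contamination tests passed",
                       "Sewer connection approved"],
        "verification": ["Test reports", "Utility sign-off"],
        "assumptions": ["Sewer capacity issue resolved in pre-planning"],
    },
}

def unverifiable(frame):
    """Flag levels that list more indicators than means of verification,
    i.e. objectives that could not all be checked as written."""
    return [level for level, row in frame.items()
            if len(row["indicators"]) > len(row["verification"])]
```

Even if the full LogFrame tool is not used, this kind of check reflects the principles listed above: every indicator should be verifiable, and gaps should surface during planning rather than during monitoring.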
There are a number of resources for learning more about results based monitoring. A United Nations organisation has published a Results-Based Management Orientation Guide (www.unfpa.org/results/docs/rbmguide.doc) and a glossary: (www.unfpa.org/results/docs/rbmglossary.doc)
What does it take for monitoring to be successful
What should project managers know to be successful?
- Make sure the role of project monitor is clearly defined in advance,
- Define clear and verifiable objectives and indicators. A LogFrame can be useful.
- Recognise that subject matter expertise is vital. At least part of monitoring has to be hands-on. One of NASA's One Hundred Rules for Project Managers states: "Rule #28: People who monitor work and don't help get it done never seem to know exactly what is going on (being involved is the key to excellence)." (Madden, 2007) The implication is that if you are just looking at reports and are not otherwise involved, you are not really performing monitoring at the level where a difference can be made. You may be providing oversight, but not performance monitoring.
- Don't be too optimistic - recognise when a project is challenged.
- Don't be too pessimistic – people are willing to work hard when they see the need for it.
- Understand influence - be convincing. Those in power need to recognise that your concerns are justified and suggestions for corrective actions feasible.
- If a site visit is done, clarify guidelines and objectives in advance.
- Ask whether your organisation is ready for bad news as well as good.
- Make the project manager see the benefit. You could say, "One of the alternatives to a project monitor is more frequent audits. Do you really want the auditors checking more frequently?"
- Achieve respect. You don't have to be liked, but respect is critical.
Possible risks to performance monitoring
Three types of risk have been identified when using monitoring (Schwartz & Mayne, 2005, p. 10). First, resources might be wasted because the efforts do not produce a noticeable improvement in quality. Second, the project monitor may be ignored, viewed as lacking credibility, or seen as demanding too high a standard. Third, if the project monitor is seen to be finger-pointing rather than working to improve project quality, their efforts can be resented, and such resentment can affect the project monitor's relationships throughout the project. The best way to counter these risks is to remember that projects are done by people, not machines: a project monitor needs to maintain a collaborative rather than adversarial relationship with the team.
Steps to building a project monitoring system
Exhibit 5 - Steps to Project Monitoring using RBM
Conclusion
Organisations are pushing for improved governance and accountability, and this push affects projects. A project audit is a recognised quality assurance tool, but it can be seen as hitting the project with a hammer when a gentler approach might be appropriate. Monitoring is another method to consider. Some organisations, especially in the construction sector, have found value in monitoring, and results-based monitoring provides a framework for avoiding the confrontation that can result from having an external monitor. For organisations trying to reduce surprises, monitoring projects using a structured approach such as RBM may improve accountability and governance.
Binnendijk, A. (February, 2000) Results Based Management in the Development Co-operation Agencies. Retrieved on 3 February, 2007 from http://www.oecd.org/secure/pdfDocument/0,2834,en_21571361_34047972_31950853_1_1_1_1,00.pdf
Cotterell, W. (21 February 2007) Florida Today[Electronic version]. Retrieved on 21 Feb, 2007, from http://www.floridatoday.com/apps/pbcs.dll/article?AID=/20070221/BREAKINGNEWS/70221042/1086
ESC: Education, Culture & Sport Committee (2001) Report on Inquiry into The National Stadium Volume 1 SF Paper 266. Retrieved on 3 February, 2007 from http://www.scottish.parliament.uk/business/committees/historic/education/reports-01/edr01-05-vol01-02.htm
Forum Solint, (2003) Monitoring and Evaluation Manual. Retrieved 5 February 2007 from http://www.cosv.org/public/54/Manuale%20M&V.zip?PHPSESSID=99bf079fa48f6214c76e8023612094ce
Karp, J. (14 February, 2007) Security Projects Get Review. Wall Street Journal, p. A16
Kimmons, R.L., Loweree, J.H. (1989) Project Management: A Reference for Professionals, New York: Marcel Dekker
Madden, J., Stuart, R. One Hundred Rules for NASA Project Managers, Retrieved on 3 February 2007 from http://pbma.nasa.gov/
Mayne, J., Zapico-Goni, E. (1997) Monitoring Performance in the Public Sector. New Brunswick, NJ: Transaction Books
McKenna, M.G. Wilczynski, H. and VanderSchee, D. (2006) “Capital Project Execution in the Oil and Gas Industry,” Booz Allen Hamilton white paper
OIG (Office of the Inspector General), Department of Homeland Security, February 2007 OIG Report 7-27
Project Management Institute (2004) A guide to the project management body of knowledge (PMBOK® Guide) (2004 ed.). Newtown Square, PA: Project Management Institute.
Reneau, J. State of Wisconsin, Department of Employee Trust Funds, Correspondence, 3 March, 2004
Schwartz, R., Mayne, J. (February, 2005) Assuring the quality of evaluative information: theory and practice. Evaluation and Program Planning, 28(1), 1-14
State of Florida, Project Aspire web site, Retrieved on 22 August 2006, from http://aspire.dfs.state.fl.us
Street, M. 31 Jan 2003 IT Week [Electronic version], Retrieved on 3 Jan, 2007, from http://www.computing.co.uk/itweek/news/2084816/nao-gives-project-warning
SLC (South Lanarkshire Council) 29 July 2005, Retrieved on 22 August 2006 from http://www.southlanarkshire.gov.uk/portal/page/portal/EXTERNAL_WEBSITE_DEVELOPMENT/SLC_ONLINE_HOME/SLC_NEWS/NEWS_STORY?content_id=5190
Sweet, J., Sweet, J. (1999) Sweet on Construction Industry Contracts, 4th Edition Gaithersburg, MD: Aspen Publishers, Inc.
UNFPA (2001) Results-Based Management Orientation Guide, retrieved 3 February 2007 from Guide www.unfpa.org/results/docs/rbmguide.doc
Virchow Krause & Company Final Report: Evaluation of the Benefit Payment System Project 12 July, 2004
WIOMSA (Western Indian Ocean Marine Science Association): Logical Framework Approach, retrieved on 1 February, 2007 from www.wiomsa.org/mpatoolkit/Themesheets/C4_Logical_framework_approach.pdf
© 2007, Joy Gumz
Originally published as a part of 2007 PMI Global Congress Proceedings – EMEA