Measuring Software 4 Dummies

Raffaele Larenza, Web Application Director, Telecom Italia S.p.A.

Leonardo Caronia, Intranet & Internet Director, Telecom Italia S.p.A.

Abstract

We present a new software sizing tool based on the Function Point methodology. The tool is built on a pyramidal abstraction concept that rests on assumptions and constraints typical of Telco web applications. Applying this concept yields a methodology for easily estimating the effort needed to develop a software application, starting from its high-level functional requirements and without considering technical design details. Nevertheless, when a sharper measurement is needed, the tool offers a three-level configuration through which technical design information can be supplied at the appropriate degree of detail. In this paper, we first describe the converged Telco domain and the characteristics we use to simplify the Function Point methodology; we then give an overview of the software measurement area; finally, we present our concept, methodology, and tools.

Introduction

Well, I didn't really like the title, but the bibliography shows that, most of the time, this type of title is successful, so I borrowed it. Maybe the real title should be “Measuring Software in 4 Clicks,” but… let's start from the beginning. It all revolves around the term “evaluate,” a term that recurs many times in our project management activities: evaluate effort, evaluate time, performance, progress, quality (PMI, 2008; Marino & Posati, 2008). In particular, in certain project management processes, when estimating and planning time, resources, and costs, we need to evaluate the “size” of the project deliverables. When dealing with “tangible” deliverables, in most cases there exist methods, procedures, and tools to calculate this size. Consider building a house. Of course, measuring all the deliverables (the house elements) is complex because of the large number of variables involved, but… fortunately… there are methodologies and tools to do it deterministically! Software is “intangible,” so measuring its size is not a deterministic process… yet. There are many “empirical” methodologies that try to define a software measure by observing some of its tangible characteristics: the source lines of code, the binary size, the classes and interfaces, the software functionalities. The Function Point (FP) methodology is one of the best-known techniques for measuring software; because it is based on calculating the effort to develop technical entities (databases, graphical user interfaces, application interfaces, etc.), it requires a deep understanding of the technical solution and, moreover, it is time consuming.

In this paper, we present a new FP-based concept and a tool to easily measure software during the feasibility phase, starting from high-level functional requirements; thus, we save costs and time because there is no need to go deeply into the technical design at this stage. The tool can also be used for the more accurate evaluations usually required in subsequent phases, once the technical solution has been decided.

Domain: Convergent Telco Companies

Convergent Telco Companies

In recent years, Telco companies have undergone major transformations, driven by several forces:

  • Technology innovation moves very fast: mobile and fixed broadband, optical fiber, MAC World.
  • Business convergence: not only mobile versus fixed, but also multimedia content, value-added services, etc.
  • Increasingly skilled users asking for an improved user experience.

And things are still evolving, so that in the coming years the business models will change again and new companies will offer bundled services for both the consumer and business markets.

This is why converged Telco markets appear to be an “apparently chaotic” world. Because of the tough competition, any Telco company that wants to be a market leader must continuously evolve its business and take care of several critical aspects, such as offerings, services, innovation, sales, and multichannel caring (e.g., web caring, SMS caring, self-caring, iPhone caring).

In this scenario, IT plays a fundamental role in supporting business goals and strategies: flexibility, time-to-market, and quality of service are mandatory for achieving and controlling all these aspects, but they are not enough. Today the portals, the “face of the company,” are a virtual door into the company through which customers “play” with a large variety of e-services, such as e-billing, self-caring, services, and technical support; oftentimes these depend on the availability and performance of the Business Support Systems (BSS) data and functions rather than on the web application itself.

The IT Department of Convergent Telco Companies

As we all know, IT is often considered a cost for the company, but the business cannot evolve without the support of ICT: new offerings, new technologies, and improved services can be launched only if the applications and the IT infrastructure are aligned with the business strategies. In order to keep this alignment as tight as possible, several important factors must be managed:

  • Organization
  • Processes
  • Innovation & Technology

The IT Department must modulate all these elements in order to ensure a competitive time-to-market and high-level customer service.

Convergent Telco Portals

Portals and web applications play a crucial role; in fact, today they are probably the most accessed point of contact between the company and its customers. Thus, it is important that the availability of these services be kept very high. Moreover, by analyzing user actions during web navigation and by asking customers for specific information, it is possible to build a customer profile repository that can be used to “take care of the customer” through, for example, advertising campaigns, offer propositions, and “ad hoc” caring services.

Moreover, very often, portals constitute a kind of abstraction layer over legacy systems and services such as CRM, billing, and data warehousing. In fact, especially for a convergent Telco, the portals must give a unified and coherent view of all the fragmented services offered by the numerous heterogeneous applications. Finally, portals must always be kept up to date with respect to the evolution of all legacy systems.

All these considerations lead to the conclusion that the portals of a converged Telco company need a very fast time-to-market and cannot be locked into rigid processes and procedures.

Telco IT Project Management

As described above, Telco companies need to satisfy aggressive requirements in terms of time-to-market and the continuous evolution of marketing ideas into new offerings and new services. This has an important impact on both the organization and the processes of the company, and both will determine the success or failure of launching new ideas in the market. In particular, the processes used to develop IT applications must satisfy two opposing properties:

  • Robust and well-defined: like a black-box machine that takes user requirements as input and delivers the due software as output, within predefined, synchronized timeframes and at an established throughput; only a few variables can be tuned to lightly adapt the process to particular requirements. In fact, the main need is to guide the large variety of stakeholders (marketing, several IT departments, operations, the purchasing department, etc.) and the numerous impacted applications (CRM, billing, portals, data warehousing) through a predefined roadmap (standard plans, or KITs).
  • Flexible: on the other hand, as noted, 20% to 30% of all user requests have special needs and cannot be treated as standard requirements; for example, new and innovative services and one-shot launch services may require a very tight time-to-market, while new infrastructures and big projects impacting many systems may have very complex delivery plans, far from the standard ones.

In order to satisfy these needs, it is possible to implement a three-modulated Project Management System (Exhibit 1):

  • Wave-PM
  • Horizontal-PM
  • Quick-PM

Exhibit 1 – Project Management System

Wave-PM

Wave-PM (W-PM) represents the standard Project Management System normally used by IT engineering departments to satisfy, for example, 70% to 80% of all requests. Ideally, you can think of it as a wave with a predetermined modulation: it brings all its requests to their destination on a predefined timetable, and all requests must respect the defined pre-scheduled activities and milestones. Exhibit 2 shows the main phases of a standard “wave”:


Exhibit 2 – W-PM predefined Activities and Milestones

Our tool can be utilized throughout the entire schedule, but it proves especially useful during the feasibility phase. In fact, the number of requests to be evaluated during this phase, before the “TO-DO List” milestone, is very high compared with the number of requests that will actually be implemented (i.e., the requests belonging to the “TO-DO List”).

The time slot for providing the estimates for all requests is very short (1-2 weeks); moreover, at this stage the functional specs are usually not well-defined. Thus, the tool we have built provides significant support in reducing the time and cost of software evaluation.

Horizontal-PM

The Horizontal-PM (H-PM) is mainly used for long-term and infrastructural projects, whose plans cannot fit into the predefined W-PM standard plans. In this case, each project has its own plan, which can satisfy all requirements in terms of time, costs, resources, and so forth.

Quick-PM

Quick-PM (Q-PM) has been introduced into the Project Management System to satisfy certain kinds of requests with a very tight time-to-market. Such requests are not rare for web applications and portals, in which information, services, or graphical elements may change rapidly and must be implemented within a few weeks, and often even within a few days.

In large companies, it is not easy to implement a formal, well-defined process that can support this degree of reactivity and flexibility: usually the number of people participating in the software life cycle is very high. The main characteristic of Q-PM is flexibility, obtained by removing a series of constraints and putting the engineering department at the center of the development process, so that it can work directly with the other departments involved and take responsibility for activities that are crucial for the project plans.

Measuring Software

“You cannot manage what you cannot measure” (Kaplan, 2006).

But the question is: “What do we need to measure?” An application, a system, or a software interface is composed of many entities that constitute an asset of the company:

  • Documents (technical design, architecture blueprint, user manual, installation manual, use cases, etc.)
  • Source code
  • Executable code
  • Interfaces
  • Databases

All these entities evolve together during the software life cycle; hence, we need to define processes and tools to measure them at all the main phases of the life cycle: feasibility, design, development, testing, and so forth. In fact, as shown by the “cone of uncertainty” introduced by Boehm (1995), during the feasibility phase of a project the estimate typically ranges from 60% to 160% of the final actual value. After the requirements are written, the estimate might still be off by +/- 15% (Exhibit 3).


Exhibit 3 – The cone of uncertainty narrows as the project progresses.

The uncertainty of the estimations may have important impacts on several aspects of the entire project:

  • Economic impacts, for example, cost overruns or lost revenue because of project delays
  • Technical impacts
  • Managerial impacts: “adding manpower to a late software project makes it later,” Brooks' Law (Brooks, 1975).

There is a vast domain of software measures and methodologies that we can employ to try to reduce the estimation error. Many of them are very complicated and often cannot even be applied at the beginning of the project, because they require detailed information. At this point, we want to make a simple consideration: in the early phase of a project, applying the Pareto principle, we deduce that it is not efficient to adopt a detailed and complicated methodology to estimate software. It is enough to aim at 80% of the actual size; the effort needed to calculate the remaining 20% is very high, and simpler methodologies can achieve almost the same result.

This paper is not a dissertation on software measures; rather, in this section we want to present just a brief overview of the main software measures and methodologies. In the following, we provide a non-exhaustive classification of the best-known software measures.

Internal Measurements

These measures can be grouped with respect to the software life cycle:

  • Requirements
    • Number of Requirements
    • Function Points (functional requirements)
  • Specs
    • Number of Entities in the ER Diagram
    • Number of Use Case Points (UML)
  • Design
    • Number of Modules
    • Modules Coupling
    • Flow Complexity
  • Development
    • Lines of Code (LOC)
    • McCabe Number (based on the nodes and arcs of the control flow graph)
    • Halstead Measures
    • Object Oriented Measures
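
Most of these internal measures are mechanical to compute once the corresponding artifact exists. As a small illustration (a sketch in Python, not part of the original tooling), the following fragment computes the McCabe number V(G) = E - N + 2P from a control flow graph given as a list of edges:

    # Sketch: McCabe cyclomatic complexity V(G) = E - N + 2P, computed from a
    # control flow graph represented as a list of (source, target) edges.
    def cyclomatic_complexity(edges, connected_components=1):
        nodes = {n for edge in edges for n in edge}
        return len(edges) - len(nodes) + 2 * connected_components

    # Example: a single if/else yields 4 nodes and 4 edges, so V(G) = 2.
    if_else_cfg = [("entry", "then"), ("entry", "else"),
                   ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(if_else_cfg))  # 2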

External Measurements

  • Gilb's Metrics
  • Boehm quality model
  • McCall quality model
  • ISO/IEC 9126-1

Software Evaluation Methodologies

Based on these measures, many evaluation methodologies exist; in the following sections, we briefly describe the best known:

COCOMO

COCOMO (Constructive Cost Model) is a model that allows software project managers to estimate project cost and duration. It was initially developed (COCOMO 81) by Barry Boehm in the early 1980s. The COCOMO II model is an update of COCOMO 81 that addresses the software development practices of the 1990s and 2000s. The model is by now a well-established software engineering artifact that, from the customer's perspective, has the following features:

  • The model is simple and well tested
  • It provides cost estimates within 20% of actual values about 70% of the time

In general, COCOMO II estimates project cost, derived directly from person-months of effort, by assuming that cost basically depends on the total physical size of all project files, expressed in thousands of source lines of code (KSLOC).
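
As a hedged illustration of this core relationship, the sketch below uses the published basic COCOMO 81 coefficients for an “organic” project class; COCOMO II refines the exponent with scale factors and multiplies by cost drivers:

    # Sketch: basic COCOMO 81 equations for an "organic" project class.
    A, B = 2.4, 1.05   # effort: person-months = A * KSLOC^B
    C, D = 2.5, 0.38   # schedule: calendar months = C * effort^D

    def basic_cocomo(ksloc):
        effort = A * ksloc ** B       # person-months
        months = C * effort ** D      # calendar months
        return effort, months

    effort, months = basic_cocomo(32)
    print(round(effort), round(months))  # ~91 person-months over ~14 months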

Use Case Points

Use case modeling is an accepted and widespread technique for capturing the business processes and requirements of a software application. Because use cases define the functional scope of the application, analyzing their contents provides valuable insight into the effort and size needed to design and implement it. In general, applications with large, complicated use cases take more effort to design and implement than small applications with less complicated use cases. Use Case Points (UCP) is an estimation method that derives an application's size and effort estimate from its use cases. Based on work by Gustav Karner in 1993, UCP analyzes the use case actors, scenarios, and various technical and environmental factors and abstracts them into an equation.
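
A minimal sketch of Karner's equation follows; the actor and use case weights are the standard UCP values, while the example inputs and factor scores are invented for illustration:

    # Sketch: Use Case Points per Karner. Actor/use case weights are the
    # standard UCP values; the example inputs below are hypothetical.
    ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
    USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

    def use_case_points(actors, use_cases, tcf_score, ecf_score):
        uaw = sum(ACTOR_WEIGHTS[a] for a in actors)         # unadjusted actor weight
        uucw = sum(USE_CASE_WEIGHTS[u] for u in use_cases)  # unadjusted use case weight
        tcf = 0.6 + 0.01 * tcf_score   # technical complexity factor
        ecf = 1.4 - 0.03 * ecf_score   # environmental complexity factor
        return (uaw + uucw) * tcf * ecf

    ucp = use_case_points(actors=["average", "complex"],
                          use_cases=["simple", "average", "average"],
                          tcf_score=40, ecf_score=20)
    print(ucp)  # (5 + 25) * 1.0 * 0.8 = 24.0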

Story Points

The Story Point Estimation technique begins by splitting the project into small parts: stories. A number is then assigned to each story to indicate its relative size. Normally, this number is taken from the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.) or from the powers of 2 (1, 2, 4, 8, 16, etc.).

Function Points

A function point is a unit of measurement used to express the amount of business functionality an information system provides to a user. The cost (in dollars or hours) of a single unit is calculated from past projects. Function points are the units of measure used by the IFPUG Functional Size Measurement (FSM) Method, an ISO-recognized software metric used to size an information system based on the functionality perceived by the user of the system, independent of the technology used to implement it. The IFPUG FSM Method (ISO/IEC 20926: Software Engineering – Function Point Counting Practices Manual) is one of five currently recognized ISO standards for functionally sizing software.

Function points were defined in 1979 by Allan Albrecht at IBM in Measuring Application Development Productivity. The functional user requirements of the software are identified, and each one is categorized into one of five types: outputs, inquiries, inputs, internal files, and external interfaces. Once a function is identified and categorized, it is assessed for complexity and assigned a number of function points. Each of these functional user requirements maps to an end-user business function, such as a data entry for an input or a user query for an inquiry. This distinction is important because it tends to make the functions measured in function points map easily onto user-oriented requirements, but it also tends to hide internal functions (e.g., algorithms), which also require resources to implement; however, no ISO-recognized FSM method includes algorithmic complexity in the sizing result. Recently, different approaches have been proposed to deal with this perceived weakness and have been implemented in several commercial software products.
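
To make the counting mechanics concrete, here is a minimal sketch of an unadjusted IFPUG count using the standard low/average/high weights; the example functions are hypothetical, and a real count would also apply a value adjustment factor:

    # Sketch: unadjusted IFPUG function point count with the standard weights.
    WEIGHTS = {
        "EI":  {"low": 3, "avg": 4,  "high": 6},   # external inputs
        "EO":  {"low": 4, "avg": 5,  "high": 7},   # external outputs
        "EQ":  {"low": 3, "avg": 4,  "high": 6},   # external inquiries
        "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
        "EIF": {"low": 5, "avg": 7,  "high": 10},  # external interface files
    }

    def unadjusted_fp(functions):
        """functions: list of (type, complexity) pairs, one per identified function."""
        return sum(WEIGHTS[ftype][cplx] for ftype, cplx in functions)

    # Example: one data-entry input, one query, one internal file.
    print(unadjusted_fp([("EI", "avg"), ("EQ", "low"), ("ILF", "avg")]))  # 17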

The Tool

The Goal

The goal is to define a concept and a methodology, and to build a tool, for evaluating the cost of requests as quickly as possible; the tool can be used at each phase of the software life cycle, but it is best appreciated during the feasibility phase, which requires a rough estimate under very tight conditions:

  • The functionalities are not completely clear and well-defined.
  • The number of requests is very high, as much as 4 or 5 times the number of requests that will actually be developed.
  • Only 6 to 10 days are available to provide the cost estimates for all requests.

Thus, first of all, we define a new concept in order to simplify the evaluation process as much as possible.

The Concept

The concept is based on a series of considerations and assumptions regarding four fundamental elements:

  • The Fast Function Point Methodology
  • The domain we act in: web applications and portals
  • The deep knowledge of the application we work on
  • Telco processes

This yields a three-level abstraction structure that leads to a pyramidal methodology: each level simplifies the level below it, up to the top, the easiest level, which allows software analysts with good knowledge of the impacted system to estimate the effort (in FP) starting from the functionalities to be developed, without considering any of the technical aspects usually required by Function Point methodologies (Exhibit 4).


Exhibit 4 – Level abstraction of the Fast Function Point Methodology

The Fast Function Point Methodology (FFPM) is the foundation on which we build our concept. It is based on the standard Function Point methodology, simplified according to Telco industry requirements and our environment; the simplification reduces the elements to be considered when estimating Function Points to a very small set. In particular, FFPM defines four macro-categories of elements (Exhibit 5):

  • (A) – Entities: data internal to the application and external data referenced by the application.
  • (B) – Inputs: the application inputs (read/write) and the inputs coming from inquiries (read-only).
  • (C) – Outputs: the application outputs.
  • (D) – Processing: the business rules and computations performed by the application.

Each macro-category (Exhibit 5) comprises a set of technical objects (FFP Objects) that must be identified and measured in order to calculate the size of the function in FP:

  • (A) – Entities:
    • A1 – Entities internal to the application.
    • A2 – Entities external to the application.
  • (B) – Inputs:
    • B1 – Input from web pages/forms (read/write).
    • B2 – Input from other applications.
    • B3 – Predefined lists (combo boxes).
    • B4 – Web pages/forms for inquiry and display (read-only).
    • B5 – User-activated computations.
  • (C) – Outputs:
    • C1 – Reports.
    • C2 – Outputs to other applications.
  • (D) – Processing:
    • D1 – Computations internal to the application.
    • D2 – Inferences internal to the application.

Exhibit 5 – FFPM macro-categories

The size of each object is measured in FP: a predefined relationship assigns a defined number of FP to each “range of complexity” (see the example in Exhibit 6 and the sketch that follows it).


Exhibit 6 – Example of a “Range->FP” relationship.
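
In code, such a relationship is simply a lookup table; the ranges and FP values in the sketch below are hypothetical placeholders, not the tool's actual table:

    # Sketch: "Range->FP" lookup. Ranges and FP values are invented examples.
    RANGE_TO_FP = [
        (0, 5, 3),      # 0-5 elementary items   -> 3 FP
        (6, 15, 7),     # 6-15 elementary items  -> 7 FP
        (16, 999, 10),  # 16 or more             -> 10 FP
    ]

    def ffp_object_size(item_count):
        for low, high, fp in RANGE_TO_FP:
            if low <= item_count <= high:
                return fp
        raise ValueError("count out of range")

    print(ffp_object_size(8))  # 7 FP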

The Domain We Act In: Web Applications and Portals

The FFPM exploits the characteristics of Telco IT to reduce the domain space to be considered, simplifying the entire process of measuring software. Nevertheless, especially in the first phases of the software life cycle, the methodology is still complex, and estimating the size of the FFP Objects requires a certain effort. In fact, technical details are required to apply the methodology, so one needs to identify a technical solution and sketch a light technical design. Unfortunately, at the feasibility phase the functional requirements are usually still very rough and, moreover, the time available for providing the estimates is very tight. Thus, the idea is to exploit the characteristics of our software domain (web applications) to reduce the domain space and simplify the FFPM.

Analyzing the applications we develop in the Internet & Intranet Department, we identified the set of logical objects, called “Portal Objects” (PO), that we usually implement to realize our web applications. We then defined a relationship that, for each PO and for three levels of complexity (High, Medium, Low), determines how many FFP Objects are usually involved in the realization of that PO at that complexity. Finally, the size of a given PO is the sum of the FPs of all FFP Objects related to that PO at that complexity (see the example in Exhibit 7 and the sketch that follows it). The POs we defined are the following:

  • Batch computations
  • Statistical computations
  • Report computations
  • Internal data utilized by the application
  • Send data to other applications
  • Gather data from other applications
  • Web pages performing “Inserts”
  • Web pages performing “Updates”
  • Web pages requesting services (search, buy, activate, …)
  • Web pages performing “Displays”
  • External DB inquiries

Exhibit 7 – Example of a “PO->FFPO” relationship for a defined Complexity (High, Medium, Low)
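
The sketch below illustrates this aggregation; the FFP Object sizes and the PO-to-FFPO mapping are hypothetical, for illustration only:

    # Sketch: size of a Portal Object = sum of the FPs of the FFP Objects it
    # involves at a given complexity. All values below are invented.
    FFP_OBJECT_FP = {"A1": 7, "B1": 4, "D1": 6}   # FP per FFP Object

    PO_TO_FFPO = {  # (Portal Object, complexity) -> FFP Objects involved
        ("Web pages performing Inserts", "High"):   ["A1", "B1", "B1", "D1"],
        ("Web pages performing Inserts", "Medium"): ["A1", "B1"],
        ("Web pages performing Inserts", "Low"):    ["B1"],
    }

    def portal_object_size(po, complexity):
        return sum(FFP_OBJECT_FP[obj] for obj in PO_TO_FFPO[(po, complexity)])

    print(portal_object_size("Web pages performing Inserts", "High"))  # 21 FP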

The Deep Knowledge of the Application We Work On

The next level of abstraction is the one that lets us pass from a technical view to a functional view. It reflects the fact that deep knowledge of each application allows us to define a closed set of “Logical Components” (LC), each implemented using a subset of the previously defined Portal Objects. The LCs we defined are the following:

  • Identity Management – manages the user registration process and access to the private areas available after authentication;
  • Self-caring – responsible for the self-caring services accessible to authenticated users; examples are prepaid credit visualization, fidelity card management, profile management, and so forth;
  • Searching – the search engine;
  • E-commerce – the component for buying products and services online and for managing orders, payments, and so forth;
  • Advertising;
  • HDA (Human Digital Assistant) – a virtual customer care operator that answers questions about caring, e-commerce, technical assistance, and so forth;
  • Multichannel Communication Services – contains communication services such as chat, web call-back, and so forth;
  • Knowledge Sharing & Collaboration;
  • BI Analytics & Reporting;
  • Content Management.

As in the previous level of abstraction, we use a relationship between the LCs, the three complexities (H, M, L), and the Portal Objects to compute the size of a given LC at a given complexity (example in Exhibit 8).


Exhibit 8 – Example of a “LC->PO” relationship

Telco Processes

Finally, the last level of abstraction is based on the business processes typical of the Telco industry, and it leads to a very simple method for calculating the size of the software the business asks us to develop. At this last level of the pyramid, the roof, we only have to assign a level of complexity to each of the Functional Areas (FA) we defined:

  • Registration Process/Private Area
  • Self-Caring/Self-Provisioning
  • eCommerce
  • Proactive Engagement/Not Human Caring
  • Technical Assistance/Self-Ticketing
  • Search
  • Loyalty
  • Order Management
  • Advertising

Each FA impacts one or more LCs, depending on its complexity; at this level, we defined five complexity values. Thus, once all the FA values have been set, the tool calculates the total FP by walking down the abstraction pyramid to the FFP layer. Finally, the tool calculates the software development cost using the following formula:

Cost = FFP * “Vendor Productivity Rate”

Of course, we have calculated the “Vendor Productivity Rate” for each vendor from the history of all developed FP and their development costs.
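
Putting the pieces together, the top-level calculation can be sketched as follows. The FP values per complexity and the productivity rate are invented placeholders; in the real tool, the FA-to-FP expansion walks down through LCs, POs, and FFP Objects rather than using the single collapsed lookup shown here:

    # Sketch: Functional Area complexities -> total FP -> development cost.
    FP_PER_COMPLEXITY = {1: 20, 2: 45, 3: 80, 4: 130, 5: 200}  # hypothetical

    def total_fp(functional_areas):
        """functional_areas: {FA name: complexity value 1..5}."""
        return sum(FP_PER_COMPLEXITY[c] for c in functional_areas.values())

    fp = total_fp({"eCommerce": 4, "Search": 2, "Loyalty": 1})
    vendor_productivity_rate = 950.0   # hypothetical cost per FP for one vendor
    print(fp, fp * vendor_productivity_rate)  # 195 FP -> 185,250 (cost units)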

Conclusions

In conclusion, there is no unique, standard methodology for software measurement. The Function Point methodology is one of the most widely used and consolidated, but applying it requires a big effort in terms of time, especially at the feasibility phase; in fact, the methodology is based on identifying the technical entities involved in software development. We have shown that it is possible to reduce this effort appreciably by applying our abstraction concept. The concept is based on the idea that, given the specific area of the software to be developed, we can predefine all the main design and functional entities; this allows us to configure the tool we have built to compute the size of our software simply, starting from the functional requirements and without considering any technical detail of the future software implementation.

References

Boehm, B., Clark, B., Horowitz, E., Westland, C., Madachy, R., & Selby, R. (1995). Cost models for future software lifecycle processes: COCOMO 2.0. Annals of Software Engineering, 1(1), 57–94.

Brooks, F. P., Jr. (1975). The mythical man-month. Reading, MA: Addison-Wesley.

Kaplan, R. (2006). The demise of cost and profit centers. Harvard Business School Working Paper, No. 07-030. Retrieved from http://www.hbs.edu/research/pdf/07-030.pdf

Marino, A., & Posati, M. (2008). Nuove tecniche per progetti estremi: La gestione sostenibile dei progetti complessi [New techniques for extreme projects: The sustainable management of complex projects]. Milan, Italy: Franco Angeli.

Project Management Institute. (2008). A guide to the project management body of knowledge (PMBOK® guide) (4th ed.). Newtown Square, PA: Author.

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2011, Giuseppe Parlati, Raffaele Larenza, Leonardo Caronia
Originally published as a part of 2011 PMI Global Congress Proceedings – Washington DC
