Crowd advisory in projects and programs

unleash the potential of data


DR. MUHAMMAD EHSAN KHAN, PMP, PGMP

Inseyab Consulting and Information Solutions LLC, Abu Dhabi, UAE

This paper extends the concept of crowdsourcing by proposing that organizations develop, or adopt, knowledge platforms that mine data available within the organization, as well as external data sources, in order to improve project decision making and to discover new solutions to problems.

Using such tools, organizations can work with online data sources and project management communities instead of relying solely on internal employees or external consultants, as is the norm today. This approach aligns with the concepts of crowd advisory and crowdsourcing.

The biggest benefit of knowledge crowdsourcing is the ability to receive better quality solutions, since many people offer their best ideas, skills, and support; the central idea is optimum utilization of this collective resource pool.

This paper will help organizations design, or refine, strategies to utilize the available data, and associated data streams, in order to build the most accurate and complete outcomes for decision making.

Keywords: knowledge management, big data, text analytics, predictive coding, taxonomy

INTRODUCTION

Project teams and researchers require information on a daily basis to solve problems and answer questions that impact the tasks at hand. Traditional mechanisms include searching for answers online, reading papers, hiring consultants or experts, and accessing organizational data sources.

A newer concept is knowledge crowd advisory, or knowledge sourcing. Project crowd advisory is the process of obtaining needed services, ideas, or content by soliciting contributions from a large group of (geographically dispersed) project management experts, such as an online community, and from multiple (in some cases digitized) data sources, rather than from internal resources or suppliers.

This paper proposes the idea that organizations should make use of technological advancements and implement consolidated knowledge platforms that can combine multiple data sources and make them accessible through enterprise search tools. Multiple data sources include structured data within organizations and unstructured data sources external to organizational boundaries including data obtained from the crowd.

Such platforms enable project teams to get answers to their queries while accessing a larger knowledge base, resulting in improved decisions and project results.

CROWD-GENERATED DATA

Since 2012, the Internet has gained another 1.1 billion users, bringing the total to 3.2 billion people connected to the World Wide Web. Every minute, there are an estimated 347,222 tweets; 4,310 Amazon unique visitors; 1,736,111 Instagram likes; and 4,166,667 Facebook likes (DOMO, 2015).

Take www.projectmanagement.com as an example. Hundreds of articles are published and dozens of webinars are conducted every week; questions are asked and answered by experts around the world. All this knowledge is available to project managers and their teams, who can use it to answer their own queries.

There are many other project management knowledge sources outside an organization's boundary, which can be considered as crowd generated data. Deploying a platform that connects with these project management data sources and experts will increase the organization's capability to get answers and improve their decision-making process.

KNOWLEDGE ECOSYSTEM

If you look at the example above, you will notice that it is a complete ecosystem of knowledge. A knowledge ecosystem comprises three entities: knowledge seekers, knowledge curators, and knowledge organizers.

KNOWLEDGE SEEKER

Knowledge seekers are people or systems seeking information to answer certain queries or support their decisions. These can be project managers, program managers, researchers, portfolio managers, domain experts, and other knowledge workers working on project teams. These can also be systems that require information to further process a business activity.

KNOWLEDGE CURATORS

Knowledge curators can be divided into two major categories: automated data sources and knowledge experts.

Automated data includes, but is not limited to:

1. External automated sources: websites, communities, Facebook, Twitter, LinkedIn

2. Machine-generated data, such as sensors

3. Internal organizational sources

a. Knowledge-base portal

b. Emails

c. Documents produced in projects

Knowledge experts are domain experts who are:

1. Internal to the organization

2. External to the organization's boundaries (e.g., an experts cloud)

KNOWLEDGE ORGANIZER

The knowledge produced by curators has to be consumed by knowledge seekers. To facilitate the seekers, processes and technologies need to be implemented. It is in this area that technology has advanced immensely; we now have tools that can extract data from sources, organize and categorize it, and make it available for knowledge seekers to consume (a minimal sketch of such a tool follows the list below). The major components of this area are:

a. Processes or mechanisms

b. Tools that can

i. Connect to knowledge curators

ii. Connect to knowledge seekers

iii. Organize and process knowledge from raw format to consumable format

iv. Ensure searchability of data
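
To make this concrete, the following is a minimal sketch, in Python, of a knowledge organizer that connects to curator sources, normalizes their content into a common record format, and builds a simple inverted index for searchability. The class and function names are illustrative assumptions, not part of any specific product.

```python
# A minimal sketch of a knowledge organizer, assuming simple in-memory
# structures; KnowledgeOrganizer, add_source, and search are illustrative
# names, not from any specific tool.
from collections import defaultdict

class KnowledgeOrganizer:
    def __init__(self):
        self.documents = {}            # doc_id -> normalized record
        self.index = defaultdict(set)  # term -> set of doc_ids

    def add_source(self, source_name, fetch):
        """Connect to a knowledge curator: `fetch` returns raw texts."""
        for i, text in enumerate(fetch()):
            doc_id = f"{source_name}:{i}"
            self.documents[doc_id] = {"source": source_name, "text": text}
            for term in text.lower().split():  # organize: tokenize and index
                self.index[term].add(doc_id)

    def search(self, query):
        """Serve knowledge seekers: return documents matching every term."""
        terms = query.lower().split()
        hits = set.intersection(*(self.index[t] for t in terms)) if terms else set()
        return [self.documents[d] for d in hits]

# Usage: one internal and one external (crowd) source feeding one platform.
org = KnowledgeOrganizer()
org.add_source("lessons_learned", lambda: ["Scope creep delayed the bridge project"])
org.add_source("pm_forum", lambda: ["Manage scope creep with a change control board"])
print(org.search("scope creep"))
```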

BUILDING THE ECOSYSTEM: EASY? NO, CHALLENGING!

It is important to design a proper knowledge crowdsourcing strategy, as it comes with its own set of challenges. What we are building is a consolidated ecosystem with multiple sources of data (internal or external to the organization, structured or unstructured, in any format) feeding into one platform that is accessed by multiple individuals. The platform needs the capability to extract and store this large data set from disparate sources, and it should ensure searchability and accessibility of the data through an easy-to-use interface. The implementation of each component of the ecosystem has its own challenges and prerequisites, which are discussed in this section.

THE HUMAN ELEMENT

The availability of too much data may overwhelm the information seeker. One could be searching through thousands of responses, which can be painstaking, or even complicated, if the problem is not clearly understood. Information seekers should:

1. have a relatively clear understanding of questions for which answers are needed.

2. be able to differentiate between authentic and non-authentic data sources.

3. have enough knowledge and experience to derive answers from the available information, and ultimately derive the right inference.

The benefits of knowledge crowdsourcing can be realized only if the data sources and online communities are properly selected and validated, ensuring that the information received is of high quality, complete, and unbiased.

In addition, the data available within such platforms should be classified by end users for proper accessibility. This becomes an unwieldy job for knowledge seekers, who tend to avoid it without considering the consequences. This issue, combined with an undefined taxonomy, creates a situation where the project team knows the content is there but cannot get to it.

THE TECHNOLOGY ELEMENT

Once the data sources are validated, it is important to develop a platform that consolidates the data and provides data searching and analysis capabilities to project managers.

However, typical data stores (DS), enterprise content management (ECM), and enterprise search (ES) tools are not ready to handle data of such volume, velocity, and veracity.

In relation to data, project information within organizations often exists in silos, and every now and then a new data source is added to the information pool. These silos can contain structured as well as unstructured data, so they cannot be connected easily, and hence project teams never get the full picture.

In addition, these platforms do not provide search capability on par with Bing, Yahoo, or Google. This is because of the metadata marking and searching capabilities of those online platforms, which generally do not exist in on-premises platforms. In a typical ECM and ES system, metadata is poor and incomplete: people do not spend time on data classification and tagging, and the tools cannot automate the tagging process. As a result, data becomes consolidated, but unsearchable or inaccessible.

NOW? WHY AND HOW.

Technological advancements have changed the world we live in and have impacted every aspect of our lives. Technology solutions and appliances have changed how data is captured, consolidated, analyzed, and searched.

RENEWED DATA CAPTURING AND STORAGE

Before the advent of big data tools, a large part of the available data was considered inaccessible due to various reasons, such as limited capacity (or capability) of handling a large volume or velocity of data.

However, big data tools and appliances have now enabled project and program stakeholders to consume these data sources, helping them answer questions and seek the latest, most relevant advice on problems they face every day. These tools improve the ability of organizations to gather large numbers of solutions and pieces of information at relatively low cost. The idea of knowledge sharing, information consumption, and learning from one another's experiences is central to this concept.

Another concept that is being actively propagated is the idea of silos bridging. This means that instead of bringing in data from project management systems and project financial applications in a consolidated data warehouse, one can bridge the data by storing relationships and allowing the original repositories to communicate (Weissman, 2015).
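
As a hedged illustration of the bridging idea, the sketch below keeps records in their original repositories (represented here by plain dictionaries) and stores only the relationships between them; all identifiers and records are invented for the example.

```python
# A minimal sketch of silo bridging (Weissman, 2015): rather than copying
# records into one warehouse, only cross-repository relationships are
# stored, and lookups go back to the original systems. The dictionaries
# stand in for a project management system and a financial application.
pm_system = {"PRJ-7": {"name": "Bridge Construction", "status": "On track"}}
finance_app = {"INV-42": {"amount": 125000, "currency": "USD"}}

# The bridge stores relationships only, not the data itself.
bridge = [("PRJ-7", "billed_by", "INV-42")]

def related(project_id):
    """Resolve links at query time from the original repositories."""
    for src, rel, dst in bridge:
        if src == project_id:
            yield rel, finance_app[dst]

for rel, record in related("PRJ-7"):
    print(rel, record)  # billed_by {'amount': 125000, 'currency': 'USD'}
```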

IMPROVED ANALYTICAL AND SEARCHING CAPABILITIES

According to Hart (2015), technology vendors began focusing on concepts such as semantic search, whose aim is to find related content based on related concepts or topics.

E-discovery solutions started applying text analytics engines to the data classification and tagging problem, effectively automating part of the data coding process. The idea is that if some portion of the data is tagged by users, the remainder can be coded by the analytics engine using its algorithms. These engines identify the documents that fall into each category and apply the respective metadata tags to them (Hart, 2015). Search engines can then work on the tagged data and make it available for search and retrieval.
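
The following is a minimal sketch of this partially automated coding (the predictive coding idea from the keywords above), assuming the scikit-learn library; the sample documents, tags, and the choice of a naive Bayes classifier are illustrative, not a description of any vendor's engine.

```python
# A hedged sketch of predictive coding: a handful of user-tagged documents
# train a classifier that proposes metadata tags for the untagged rest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

tagged_docs = [
    "risk register updated with new mitigation actions",
    "invoice approved and payment released to vendor",
    "stakeholder workshop minutes and communication plan",
]
tags = ["risk", "finance", "stakeholder"]  # applied manually by users

untagged_docs = ["vendor payment schedule revised for Q3"]

# Turn text into features, then learn the tag for each category.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(tagged_docs)
classifier = MultinomialNB().fit(features, tags)

# The engine codes the remaining documents automatically.
predicted = classifier.predict(vectorizer.transform(untagged_docs))
print(predicted)  # likely ['finance']
```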

In addition, business intelligence (BI) tools can be integrated with the search tools to provide an intuitive user interface, from which users can search for information, filter and analyze the responses, and take decisions based on the information received.

IMPLEMENTATION PROCESS AND GUIDELINES

Having the technology in place solves one set of problems. There are certain important factors to consider when implementing a consolidated knowledge management and enterprise search application.

BUSINESS GOALS

The first and foremost factor is identifying the business goals of such a system. The business goal can be:

1. Search – finding answers to certain questions or searching a knowledge base for certain information

2. Reporting and dashboard – reviewing the data to take informed decisions

3. Analysis and self-service – understanding the patterns, relations, and dependencies between business entities while analyzing the data

DATA SOURCES

Once the business goals are identified, it is important to identify the data sources. These sources can be within the organization or external data sources, such as online communities, websites, or project management forums. Internal organization sources can include project documents, emails, applications, Excel files, or other data components.

Organizations can identify project domain experts who can be connected to this platform. In addition to utilizing the available data, project teams can consult with these experts when working on a specific problem.

APPLICATION LAYER

Once the data sources are identified, a decision needs to be made to select a tool set that can process the data and support the business objectives. In general, this application layer consists of four components:

1. Data extraction and capturing

2. Data consolidation or bridging component

3. Data categorization, tagging, and searching component

4. Visualization or output layer

The data extraction and capturing component is utilized to capture the data from the sources. The component should have the capability to extract the data from the identified sources (e.g., documents, websites, emails, etc.).

The extracted data should then be consolidated in a data repository or a data bridging component. The purpose of this component is to ensure that the data is modeled or structured in such a way that it can be searched and analyzed. In some cases, the data is brought into the repository, whereas, in other cases, a bridging component is deployed.

Metadata is data about data (e.g., a document related to employee medical records will have metadata tags such as HR, medical, employee record, and date). Searching the metadata is faster than performing a full-content search because the amount of information that needs to be searched is smaller. The meta information can be tagged by business users or by the data tagging and searching component. It is important for the organization to finalize the metadata taxonomy, so that the vocabularies used can be reconciled and synthesized.
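
The sketch below illustrates why metadata search is cheaper: only the small, structured metadata records are scanned, never the full content. The field names and the example record are assumptions made for illustration.

```python
# A minimal sketch contrasting metadata search with full-content search,
# using the HR example from the text; structures and field names are
# illustrative assumptions.
documents = {
    "doc-001": {
        "content": "...",  # full document body, potentially megabytes
        "metadata": {"department": "HR", "topic": "medical",
                     "type": "employee record", "date": "2016-01-15"},
    },
}

def metadata_search(department=None, topic=None):
    """Scan only the small, structured metadata, not the full content."""
    for doc_id, doc in documents.items():
        meta = doc["metadata"]
        if department and meta["department"] != department:
            continue
        if topic and meta["topic"] != topic:
            continue
        yield doc_id, meta

print(list(metadata_search(department="HR", topic="medical")))
```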

The final component of the layer is used to submit queries and obtain results. This layer may include a BI layer and enterprise search; as mentioned above, the two can be integrated to provide an intuitive user interface from which users can search for information, filter and analyze the responses, and take informed decisions.

BUSINESS CASES

HOUSING PROJECT PLANNING

An integrated housing project planning platform can:

1. Consolidate housing demand from a structured data source containing data on applications and applicants.

2. Consolidate housing supply, including geospatial information for the available locations.

3. Enable collaboration with housing experts, in order to improve the project planning process.

The tool can allow planners to view map overlays of the locations requested by applicants in a certain age bracket, correlated with other information, such as proximity to facilities useful for that age group. Using the same tool, planners can take advice from geospatial analysis experts to determine whether a construction site is fit for development.
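
As a hedged sketch of the proximity logic behind such an overlay, the example below filters applicants by age bracket and matches them to facilities within a radius using the haversine great-circle distance; all coordinates, records, and thresholds are invented for illustration.

```python
# A minimal proximity check: applicants in an age bracket are matched to
# facilities within a radius, using the haversine great-circle distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

applicants = [{"id": "A-1", "age": 67, "lat": 24.45, "lon": 54.38}]
facilities = [{"name": "clinic", "lat": 24.46, "lon": 54.39}]

# Overlay logic: seniors within 2 km of a facility useful for their bracket.
for a in (x for x in applicants if 60 <= x["age"] <= 75):
    for f in facilities:
        d = haversine_km(a["lat"], a["lon"], f["lat"], f["lon"])
        if d <= 2.0:
            print(a["id"], "near", f["name"], f"({d:.2f} km)")
```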

PROJECT EXECUTION AND SUCCESS

Consider a construction portfolio that has a project for the construction of a bridge in a populated area, performing within the defined thresholds of time and cost. Analysis of tweets and Facebook feeds, however, could reveal that the affected population is quite displeased with the resulting noise and pollution. Such an analysis allows the project team to devise a strategy to manage stakeholder expectations, increasing the chance of project success.
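
A minimal, lexicon-based sketch of such a social media scan is shown below; a real deployment would use a proper sentiment analysis model, and the tweets and word list here are invented for illustration.

```python
# A toy sentiment scan over crowd-generated data: flag tweets containing
# negative terms and escalate when the negative ratio crosses a threshold.
NEGATIVE = {"noise", "pollution", "dust", "blocked", "unbearable"}

tweets = [
    "Construction noise near the bridge is unbearable at night",
    "Great progress on the new bridge this week",
    "Dust and pollution from the site all over our street",
]

flagged = [t for t in tweets if NEGATIVE & set(t.lower().split())]
ratio = len(flagged) / len(tweets)

# A rising ratio of negative mentions signals a stakeholder issue even
# when time and cost remain within thresholds.
if ratio > 0.5:
    print("Escalate: community sentiment turning negative", flagged)
```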

TOOLS WORTH MENTIONING

There is active research going on in the technology world, where some big names are investing billions of dollars to develop optimized big data, enterprise search, and data analytics solutions.

Google has developed the Google Search Appliance, which combines the search expertise of Google with features that meet today's business requirements, all in one box.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data (IBM, 2015). It:

1. Analyzes unstructured data, by using natural language processing to understand grammar and context

2. Understands complex questions, by evaluating all possible meanings and determining what is being asked

3. Presents answers and solutions, based on supporting evidence and quality of information found

Attivio's Active Intelligence Engine (AIE) combines the power of enterprise search, BI, and big data for strategic applications and solutions that can be deployed on-premises or in the cloud. Attivio unlocks the business value trapped in text-based sources of information by making it easy to analyze dark data—giving users a more complete view, so they can act with certainty. The technology stack drives more effective decision making by bringing structure from unstructured content, creating a holistic, unified view so business users can act with confidence (Attivio, 2015).

Microsoft offers enterprise search both in the cloud and on-premises (i.e., FAST Search integrated into SharePoint Server). In the domain of big data analytics, where both structured and unstructured data are analyzed, Microsoft launched the Analytics Platform System (APS). The company has also initiated Project Oxford, which provides artificial intelligence (AI) based components for vision, speech, and text recognition. Within its cloud business intelligence platform, Power BI, Microsoft has implemented features such as Q&A (questions and answers), which uses natural language analysis to answer user questions.

CONCLUSION

Based on the discussion above, it is evident that for any question, an answer is available in the project data world around us. Previously, accessing this data was close to impossible, simply because the technology to support this venture was not mature. However, with the enhancements in tools, such as the ones discussed in the last section, it is now possible for organizations to embark upon such a journey.

It is imperative for the project teams to understand how multiple sources of data, within and external to the organization, can be utilized for decision making, while employing related tools and technologies. The goal must be to consolidate all of the project-related information that the organization has, whether it's external or internal, so it behaves as if it lives in one big box. This information should be accessible, subject to appropriate controls, in a straightforward manner. It can be done, but only with careful forethought and a sensible plan.

This has only become possible with the development of big data tools, including tools for analytics and enterprise search, which enable active collaboration between knowledge seekers and knowledge curators. It thus makes sense to now unleash the potential of data.

ABOUT THE AUTHOR


Dr. Khan is the Founder/CEO of Inseyab Consulting & Information Solutions LLC, a firm focused on business intelligence, big data, and social media analytics. Dr. Khan earned his doctorate in Strategy, Programme and Project Management (Major de Promotion/Valedictorian) from SKEMA Business School, France, and is a certified Program Management Professional (PgMP)® and Project Management Professional (PMP)®. He is also the recipient of the Project Management Institute's James R. Snyder Award 2012 and IPMA Young Researcher of the Year 2013. He is the author of Program Governance, the first book written on this subject.

CONNECT WITH ME!

http://ae.linkedin.com/in/ehsankhan | @MEKhan3PM

REFERENCES

Attivio (2015). Harness an agile contextualization engine. Retrieved 26 January 2015 from http://www.attivio.com/technology

DOMO (2015). Data never sleeps 3.0. Retrieved 13 December 2015 from https://www.domo.com/blog/2015/08/data-never-sleeps-3-0/

Hart, L. (2015). How analytics engines could—finally—relieve enterprise search pain. Retrieved 22 December 2015 from http://searchcontentmanagement.techtarget.com/tip/How-analytics-engines-could-finally-relieve-enterprise-search-pain

IBM (2015). What is Watson. Retrieved 22 January 2015 from http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html

Weissman, S. (2015). ECM systems: From silo busting to silo bridging. Retrieved 24 December 2015 from http://searchcontentmanagement.techtarget.com/tip/ECM-systems-From-silo-busting-to-silo-bridging

This material has been reproduced with the permission of the copyright owner. Unauthorized reproduction of this material is strictly prohibited. For permission to reproduce this material, please contact PMI or any listed author.

© 2016, Dr. Muhammad Ehsan Khan
Originally published as part of the 2016 PMI® Global Congress Proceedings – Barcelona, Spain
