Trends in research design, measurement, and analysis in the organizational sciences: The last decade

University of Colorado Denver, Denver, CO

CHARLES A. PIERCE, FRANK A. BOSCO, AND IVAN S. MUSLIN

University of Memphis, Memphis, Tennessee

Organizational Research Methods (ORM) was first published in 1998. ORM's mission is to bring relevant methodological developments to a wide range of researchers in organizational and management studies and promote a more effective understanding of current and new methodologies and their application in organizational settings (Organizational Research Methods, 2007). Given that ORM has been in existence for a decade and has established itself as the leading journal devoted to research methodology in the organizational sciences, its extensive body of work warrants the following three questions about articles published in ORM: (1) Which are the topical areas that have been consistently popular over the past decade? (2) Which are the topical areas that have become increasingly popular over the past decade? and (3) Is the relative popularity of various topical areas different from what has been reported in previously published reviews of the research methods used in substantive journals?

It is important to provide answers to each of these three questions given the potential implications for training doctoral students and retooling researchers, future research on methodology, and the advancement of the organizational sciences. More specifically, answering these questions will provide information regarding the topical areas considered to be important and those considered to be up-and-coming (as well as those that are not), as judged by some of the top methodologists in the field (i.e., authors of the articles published as well as the reviewers, including the editorial team, who decided to accept these articles for publication). These answers also have important implications for researchers because they will point to the areas that already are, as well as those that are becoming, part of the mainstream methodological toolkit in the organizational sciences. Identifying these topical areas will help instructors shape the content of methodology courses targeting doctoral students in the organizational sciences. In addition, having this information will allow researchers to take specific steps towards gaining expertise in these areas so as to become more informed consumers, journal reviewers, and even producers of substantive research conducted using these tools.

Previous Reviews of Research Methods Reported in the Organizational Sciences Literature

We focused on four reviews of research methods used in the organizational sciences: Podsakoff and Dalton (1987); Stone-Romero, Weaver, and Glenar (1995); Scandura and Williams (2000); and Austin, Scherbaum, and Mahlman (2002).

Podsakoff and Dalton (1987) reviewed the research methods and analyses used by authors of all articles published in the 1985 volumes of the following five journals: Academy of Management Journal (AMJ), Administrative Science Quarterly (ASQ), Journal of Applied Psychology (JAP), Journal of Management (JOM), and Organizational Behavior and Human Decision Processes (OBHDP). The overall conclusion of the Podsakoff and Dalton (1987) review was that there is intransigence in organizational research methodologies, and this is happening because people do what they know, what they have done, what is efficient and easier, and what is rewarded (i.e., published). Stone-Romero et al. (1995) reviewed all articles published in JAP from 1975 to 1993. The main question posed by Stone-Romero et al. was whether the availability of software packages such as LISREL and EQS to conduct covariance structure analysis (CSA)-based procedures affected methodological practices regarding design and data analysis. Although CSA-based procedures were not nearly as dominant as others such as ANOVA, correlations, and multiple regression, Stone-Romero et al. concluded that, because of the increased use of CSA-based procedures, not knowing these techniques “will leave researchers at a great disadvantage” (1995, p. 155). Scandura and Williams (2000) compared the methodological strategies reported in articles published in 1985–1987 (N = 280 empirical studies) versus 1995–1997 (N = 334 empirical studies) in AMJ, ASQ, and JOM. Scandura and Williams concluded that “the news is not all bad… both the mean and median sample size rose in the 1990s… suggest[ing] that issues of statistical power are receiving more attention… research in the 1990s was becoming broader, with more dependent variables and content domains being included… the types of data analytic approaches also appeared to be moving toward the use of more complex techniques, such as structural equation modeling and event history analysis” (2000, p. 1261). Austin et al. (2002) content-analyzed articles reporting empirical studies that were published in every tenth volume of JAP from 1920 to 2000. Their review included a total of 609 articles. Austin et al. concluded that “the history of I-O research methods contains both positive and negative aspects… Greater attention to innovation will more firmly place I-O as a field on a solid footing for both research and practice… Some problems of misuse could be solved, we believe, by aggressive interventions in dissemination. Potential avenues include pre-convention workshops, computer-mediated discussions at a distance… journals (Organizational Research Methods)…” (2002, p. 22).

What did we learn from the four aforementioned reviews of research methods usage in the organizational sciences that were published in the last two decades? First, the adoption of innovative methodological practices is very slow. Although some of the reviews refer to changes, improvements, and important trends, a close examination of the data shows that changes take place very slowly and usually do not happen in less than two to three decades. Specifically, the 10-year span covered by the Scandura and Williams (2000) review revealed few drastic changes. For example, the rank order of the top 6 research strategies is identical over the 10-year span except for a flip in the order of “field study: secondary” (from 2nd to 3rd) and “formal theory/literature review” (from 3rd to 2nd). The Stone-Romero et al. (1995) review covered 19 years and found virtually no changes regarding design. In terms of analysis, change was very slow in all areas, including CSA-based procedures, which were still far from dominant in the last few years included in the review (i.e., reaching a highest usage level of only 10%). The Austin et al. (2002) review, which spanned 80 years, found more noticeable changes over time regarding analysis. For example, critical ratio and probability error, each used in about 27% of articles in 1920, have not been used by any article since 1970 (i.e., a 50-year time span). On the other hand, the use of multiple regression increased notably from about 8% in 1970 to about 46% in 2000. Similarly, confirmatory factor analysis increased from 0% in 1970 to about 16% in 2000. In short, changes in the use of data-analytic procedures became noticeable only when the review period included at least two to three decades.
In contrast, to preview what we will describe in detail later in the article, changes in the development and improvement of methodological approaches have a shorter time cycle and are noticeable within shorter time frames (i.e., a decade or less).

Second, because change in patterns of methods use is so slow, the modal design, measurement, and analysis characteristics of an article today have not changed much compared to those of an article published about 20 years ago. Twenty years ago, Podsakoff and Dalton (1987) noted that a typical study in the organizational sciences is likely to be a cross-sectional survey or a laboratory study relying on student subjects, conducted at the individual level of analysis, with data analyzed using multiple regression/correlation techniques or ANOVA. That this has continued to be the typical study has been confirmed by results reported by Stone-Romero et al. (1995), Scandura and Williams (2000), and Austin et al. (2002).

Next, we describe the content analysis we conducted based on articles published in the first decade of ORM to answer each of the three questions posed at the beginning of our article.

Method

Overview

We used content analysis, which is defined broadly as “any methodological measurement applied to text (or other symbolic materials) for social science purposes” (Shapiro & Markoff, 1997, p. 14). More specifically, our data collection procedure consisted of content analyzing the design, measurement, and analysis topics addressed in each article published in every issue of ORM, starting with the first issue (January 1998) and ending with the October 2007 issue (excluding book and software reviews). Content analysis is primarily a qualitative methodology, but it also includes a quantitative component, which provides an advantage over other, more purely qualitative methods such as literary interpretation and hermeneutics (Duriau, Reger, & Pfarrer, 2007).

Content-Analysis Taxonomy

Similar to previous reviews of research methods reported in the substantive literature (e.g., Scandura & Williams, 2000; Stone-Romero et al., 1995), the unit of analysis in our study was the topical area and not the article. This choice of unit of analysis is guided by the fact that numerous articles focus on more than one topic, and choosing only one topic per article may lead to underestimates of the relative attention devoted to various areas.

A key component of any content analysis is the taxonomy that is used. Consequently, we undertook a careful process involving several researchers to create our taxonomy. The starting point was a taxonomy for design, measurement, and analysis topics used by ORM during 1998–2004 (i.e., under the editorship of founding editor Larry J. Williams) to classify new submissions and also used by reviewers to denote their areas of expertise. This initial taxonomy was expanded in 2004 by an iterative process conducted via email that involved the work of the incoming editor and four associate editors of ORM at that time: Herman Aguinis, Mark Gavin, Charles E. Lance, Karen Locke, and Robert J. Vandenberg. The resulting revised and expanded taxonomy was used by ORM during 2005–2007 to classify new submissions and reviewers' areas of expertise. After it was developed, and because it was the most comprehensive taxonomy of design, measurement, and analysis topics available, it was also adopted by the Research Methods Division of the Academy of Management (AOM) to classify submissions to the AOM annual meetings (Gordon Cheung, personal communication, 2005). Once the taxonomy was completed, we reviewed Podsakoff and Dalton (1987), Stone-Romero et al. (1995), Scandura and Williams (2000), and Austin et al. (2002) to make sure that each of their topical areas was included in our taxonomy. Finally, once this process was completed, we added new categories as they emerged during the coding process (e.g., Bayesian networks, electronic/web research, mixed methods).

The final version of the taxonomy, which due to space constraints is available from the authors upon request, represents the most comprehensive and exhaustive classification of design, measurement, and analysis topics in the organizational sciences available to date. The broadest categories are quantitative and qualitative. Each of these broad categories includes design, measurement, and analysis subcategories.

In spite of its comprehensiveness, we acknowledge that our taxonomy, like any other used in a content analysis, has limitations. First, there are relationships among design, measurement, and analysis issues. For example, as mentioned by Scandura and Williams (2000), if an article addresses a multi-level design, it is also likely to address multi-level analysis. Second, some methods, particularly newer methods and those that combine quantitative and qualitative approaches, are harder to classify (e.g., content analysis). In some cases we had to place such a method under one category or the other. We do not see this as an important limitation for two reasons. First, as described next, we obtained very high levels of coder agreement initially, and subsequent consensus was reached in all cases. Second, our coding strategy and taxonomy are fully transparent and, therefore, replicable empirically.

Coding Process

Each article was coded by the third and fourth authors, who at the time of coding were management doctoral students and had successfully completed all their graduate-level coursework in research methods and statistics. To train the coders and create a common frame of reference before the actual coding started, the coders used the taxonomy to independently code the first three issues of ORM's Volume 1. The two coders then met with this article's second author to discuss the degree of agreement between their classifications. After this practice session was completed, each coder independently coded all 193 articles published in Volumes 1 through 10.

For each of the 193 articles, we coded the presence or absence of 405 nominal-level variables using dummy coding. Upon completion of the coding, the second author compared the two coders' initial classifications of the 405 variables for each article. For each of these 405 variables, we computed Cohen's kappa (κ) as an index of interrater agreement between our coders. Unlike a percent agreement index, Cohen's kappa corrects for chance agreement (Cohen, 1960). Kappa values greater than .40 indicate acceptable interrater agreement (Fleiss, 1981). Across all 193 articles, 400 of the 405 kappa values were greater than .40 and statistically significant at an alpha level of either .001 or .01, indicating acceptable interrater agreement for each of these nominal-level variables. The following five variables had kappa values slightly below .40, but the disagreements between coders were easily resolved for the main analyses reported herein: feminism, level of analysis of dependent variable - individual, categorical interactions, multidimensional scaling, and action research. Any article that initially did not produce identical classifications for all coded variables was discussed by the two coders and the second author until a consensus was reached. In short, the coding process followed best practices as recommended by Duriau et al. (2007) and Scandura and Williams (2000), including taking corrective steps when full agreement was not reached initially.
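The chance-corrected agreement index described above can be computed in a few lines. The following is a minimal illustrative sketch, not our actual coding software; the coder vectors are hypothetical presence/absence (0/1) codes for a single topical variable:

```python
# Cohen's kappa for one dummy-coded variable:
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from each coder's marginal rates.

def cohens_kappa(coder1, coder2):
    assert len(coder1) == len(coder2)
    n = len(coder1)
    categories = set(coder1) | set(coder2)
    # Observed proportion of articles on which the two coders agree
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: product of the coders' marginal proportions per category
    p_e = sum((coder1.count(c) / n) * (coder2.count(c) / n) for c in categories)
    if p_e == 1.0:  # both coders constant: agreement is trivially perfect
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for one topic across 10 articles (1 = topic present)
c1 = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
c2 = [1, 0, 0, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(c1, c2), 3))  # → 0.583, above the .40 threshold
```

Because kappa discounts the agreement expected from the coders' marginal base rates, two coders who mark a rare topic as absent almost everywhere can show high percent agreement yet a much lower kappa, which is why kappa rather than percent agreement is the appropriate index here.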

Results

Tables 1–7 provide summaries of the results in counts and percentages of the various topical areas of articles published in ORM. At the highest level of analysis, Table 1 shows that, over the 10-year period, about 90% of the topics are quantitative whereas only 10% are qualitative. In terms of the quantitative area, about 49% of the topics refer to analysis, about 37% to measurement, and about 15% to design issues. In contrast, the qualitative topics are mostly about design (about 56%), followed by analysis (about 35%), and measurement (about 9%). Figure 1a displays the trends for the data in Table 1 and suggests that the relative frequency of quantitative and qualitative topics remained fairly constant over the 10-year period, except for 2002, which included an issue devoted almost entirely to interpretive genres of organizational research methods, which explains the unusually large number of qualitative articles during this year.
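Summaries of this kind follow directly from the dummy-coded data by tallying topic counts per year and converting them to percentages. The sketch below uses made-up records, not our data set, and illustrates only the top-level rows, where each percentage is the share of that year's total topic count:

```python
# Tally topic frequencies per year from coded articles and convert
# counts to percentages (hypothetical records for illustration).
from collections import Counter, defaultdict

articles = [
    (1998, ["quantitative", "quantitative - design"]),  # one article, two topics coded
    (1998, ["quantitative"]),
    (1998, ["qualitative"]),
]

counts = defaultdict(Counter)
for year, topics in articles:
    counts[year].update(topics)

year = 1998
total = sum(counts[year].values())  # all topic codes recorded for that year
for topic, n in counts[year].most_common():
    print(f"{year}  {topic}: {n} ({100 * n / total:.2f}%)")
```

Note that in Table 1 itself the subcategory rows (e.g., Quantitative - Design) are expressed as a percentage of their parent category's count for that year rather than of the year's overall total.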

Quantitative Topics

Table 2 shows results regarding the quantitative-design topics. The three most popular topics over the 10-year period are the following:

  1. Survey (32.35%)
  2. Temporal issues (i.e., longitudinal designs) (13.24%)
  3. Electronic/web research (10.29%)

Figure 2 (top panel) shows that the relative frequency of these topics changed over the 10-year period. Specifically, there is a clear upward trend regarding the interest in surveys and also in electronic/web research. The interest in temporal issues, although high over the 10-year period, shows a slight downward trend.

In terms of the quantitative-measurement subcategories, Table 3 shows that the following are the five most popular topics over the 10-year period:

  1. Validity (40.12%)
  2. Reliability (23.26%)
  3. Level of analysis of dependent variable (11.05%)
  4. Scale development (9.88%)
  5. Measurement invariance/equivalence (8.72%)

Figure 2 (center panel) shows that the relative frequency of topics changed over the 10-year period. Specifically, there is a recent renewed interest in validity and a clear upward trend regarding measurement issues about level of analysis of the dependent variable. On the other hand, there is a decrease in interest in scale development over time. In contrast, the relative interest in reliability and measurement invariance/equivalence has remained fairly stable.

Table 4 shows the relative frequency of quantitative-analysis topics over the 10-year period. Overall, the most popular topics over the 10-year period are the following:

  1. Multiple regression/correlation (17.03%)
  2. Structural equation modeling (12.23%)
  3. Multilevel research (10.92%)
  4. Missing data (9.61%)
  5. Factor analysis (6.68%)
  6. Temporal issues (i.e., techniques for analyzing data collected over time) (6.55%)

Figure 2 (bottom panel) shows that the relative frequency of these topics has not changed much over the 10-year period, with two notable exceptions. First, there is a striking increase in the attention devoted to multilevel research. Second, there is a decrease in the attention devoted to structural equation modeling. In contrast, the relative interest in multiple regression/correlation, factor analysis, missing data, and temporal issues has remained fairly stable.

Qualitative Topics

Table 5 shows results pertaining to the qualitative-design subcategories. Results show that the most popular areas are the following:

  1. Interpretive (26.67%)
  2. Policy capturing (16.67%)
  3. Action research (13.33%)

Figure 1b shows that the relative popularity of interpretive and action research has increased over time, whereas the relative attention given to policy capturing has decreased.

Table 6 includes results regarding the qualitative-measurement subcategories. The total count for this category over the 10-year period is only 5; 4 of these 5 topics addressed reliability, whereas 1 addressed surveys.

Finally, Table 7 includes results regarding the qualitative-analysis subcategories. Note that the total number of qualitative-analysis categories is only 19 over the 10-year period. Nevertheless, the most popular subcategories are the following:

  1. Interpretive (26.32%)
  2. Policy capturing (26.32%)
  3. Content analysis (21.05%)

Given that for some of the years the total count of qualitative-analysis subcategories is 1 or even 0, it is not very meaningful to describe changes in trends over time.

Discussion

The goal of this paper was to answer three questions. We address each in the following sections.

Which are the Topical Areas that Have Been Consistently Popular over the Past Decade?

Our results indicate that a summary of topics addressed by the most typical (i.e., modal) article published in the first decade of ORM is as follows. First, this modal article addresses quantitative instead of qualitative topics. In terms of design, the topics of this modal quantitative article are surveys, temporal issues, and electronic/web research. In terms of measurement, the topics are validity, reliability, and level of analysis of dependent variable. In terms of data analysis, the topics are multiple regression/correlation, structural equation modeling, and multilevel research. Data-analysis topics are dominant (49%), followed by measurement (37%) and, lastly, design (only 15%). If we consider the modal qualitative article, in terms of design, the topics are interpretive, policy capturing, and action research. In terms of measurement, the topics are surveys and reliability. Finally, in terms of analysis, the topics are interpretive, policy capturing, and content analysis. In contrast to the quantitative topics, the most popular type of qualitative topic is design (56%), followed by analysis (35%) and measurement (9%).

Which are the Topical Areas that have Become Increasingly Popular over the Past Decade?

First, in terms of quantitative topics, there are upward trends regarding surveys and electronic/web research (design), level of analysis of the dependent variable and validity (measurement), and multilevel research (analysis). In terms of qualitative topics, the attention devoted to interpretive and action research has increased over time (design), but trends in terms of measurement and analysis are difficult to identify given that the overall number of articles is relatively small.

Is the Relative Popularity of Various Topical Areas Different from What Has Been Reported in Previously Published Reviews of Research Methods Used in Substantive Journals?

A comparison of our results with those reported by Podsakoff and Dalton (1987), Stone-Romero et al. (1995), Scandura and Williams (2000), and Austin et al. (2002) indicates an interesting and counterintuitive finding: the answer to this third question is both yes and no. First, it is yes because the most popular topics identified in previous reviews include surveys (design), reliability (measurement), and multiple regression/correlation (analysis), and our results also suggest that these topics are among the most popular. On the other hand, the answer is also no because our results show that there are other topics that are equally, or in some cases even more, popular than the topics identified in previous reviews. These include temporal issues and electronic/web research (design), validity and level of analysis of the dependent variable (measurement), and multilevel research (analysis). Another striking difference is the presence of qualitative topics in our results. Although qualitative topics certainly do not receive as much attention as quantitative topics (i.e., only about 10% compared to 90%), there is little if any mention of qualitative topics in any of the four previously published reviews. As noted above, the most popular qualitative topics are interpretive, policy capturing, and action research (design), surveys and reliability (measurement), and interpretive, policy capturing, and content analysis (analysis).

Implications for Training Doctoral Students and Retooling Researchers

Our results lead to important implications for training doctoral students and retooling researchers in the organizational sciences. First, there are some methodological tools that are investigated by articles published in ORM and also seen as crucial for testing influential theories in the organizational sciences. These include multilevel, temporal (i.e., longitudinal), validity, scale development, and level of analysis of the dependent variable issues. Thus, instructors of research methods courses as well as researchers interested in testing influential theories in the organizational sciences should pay attention to these topics. Specifically, the content of methods courses should include at least one module on each of these topics. Also, researchers should seek opportunities to retool themselves in each of these domains. Second, although not receiving as much coverage in ORM, there are several qualitative methodological topics that are key for testing influential theories in the organizational sciences. These include case studies and naturalistic field observations and interviews, the use of qualitative computer databases, and narrative interpretation and qualitative sequence analysis. Increased attention to these topics by instructors who deliver methods courses and researchers interested in improving their methodology toolkit is likely to lead to more fruitful empirical investigations of important theories.

Implications for Research and the Advancement of the Organizational Sciences

Content analysis is becoming an increasingly popular methodological tool (Duriau et al., 2007). The taxonomy developed as part of this study is the most comprehensive available to date to conduct a content analysis of a research methods body of literature in the organizational sciences. Its comprehensiveness also relies on the inclusion of qualitative issues, which are notably absent in previously published reviews. Thus, one implication for research is that our taxonomy can be used in future reviews of the research methods literature as well as the methods used in articles published in substantive journals. The consistent use of a single taxonomy would allow for more systematic and precise comparisons of trends over time instead of having to make comparisons across articles that use different taxonomies, as is the case for the reviews we summarized in the Introduction.

Second, our review revealed an unbalanced coverage of design, measurement, and analysis topics. In the quantitative arena, most topics address data-analysis (i.e., 49%) and measurement (i.e., 37%), whereas only a minority are about design issues (i.e., 15% of topics). Few researchers would argue that data analysis is three times as important as research design. Also, few researchers would argue that we know all there is to know about research design. Moreover, it is difficult to argue that the organizational sciences will produce important advancements by focusing mainly on data analysis and measurement and paying less attention to design. In short, an implication of our study is that more attention is needed regarding the development of new as well as the improvement of existing research designs. For example, further research on archival, behavior simulation, and mixed approaches (i.e., qualitative-quantitative) is warranted.

Another implication of our study is that, although changes in the usage of methods are extremely slow-paced and can take a minimum of two to three decades, those studying and improving methods seem more willing to embrace change and innovation at a faster pace. Although our review included only 10 years, this time period was sufficient to identify several important trends. For example, there is a surge in the relative attention devoted to surveys and electronic/web research, level of analysis of the dependent variable and validity, multilevel research, and interpretive and action research. Data reported in each of the four reviews published since 1987 show that the clear trends found in one decade of ORM cannot be identified in a review of one decade of articles published in any of the substantive journals. This result suggests that change patterns in articles published by developers of methods (i.e., those publishing in ORM) are faster than those in articles published by users of methods (i.e., authors of articles in substantive journals). In other words, the diffusion of innovation is faster among methods developers than methods users. As Austin et al. (2002) noted, this result points to the important role that ORM can play as a catalyst for change in terms of methods usage by substantive researchers, and there is evidence that ORM is playing this role. For example, the impact factor for ORM has increased consistently and reached its current value of 1.53. Perhaps even more revealing are the results with respect to which journals cite ORM articles most frequently. Consider the following information included in the most recent (i.e., 2007) Journal Citation Report by Thomson Scientific (formerly ISI), which refers to the year 2006. As would be expected, ORM is the journal that cited ORM articles most frequently. However, the journals following ORM (in rank order) are as follows:

  1. Journal of Applied Psychology
  2. Journal of Organizational Behavior
  3. Personnel Psychology
  4. Accounting, Organizations, and Society
  5. Journal of Occupational and Organizational Psychology
  6. Academy of Management Journal
  7. Leadership Quarterly
  8. International Journal of Selection and Assessment
  9. Organizational Behavior and Human Decision Processes
  10. Structural Equation Modeling

In other words, 9 of the 10 journals (i.e., all except for Structural Equation Modeling) that cited ORM articles most frequently in 2006 are substantive and not methodological journals, suggesting that ORM is indeed influencing how substantive researchers do their work. The conclusion would be very different if ORM articles were cited mainly by other methodological journals. Such a finding would suggest that ORM is merely part of a cottage industry that does not bring relevant methodological developments to a wide range of researchers in organizational and management studies and, moreover, does not promote a more effective understanding of current and new methodologies and their application in organizational settings.

A fourth implication of our study for future research is that our results may provide a preview of some of the methodological tools that are not yet very popular, but may become more popular in the future. For example, ethnostatistics, network analysis, and neural networks are some of the new topics emerging in the literature. It will be interesting to see whether these topics become more popular in ORM as well as the substantive literature during the next decade.

Conclusion

Change is usually difficult and requires effort and resources. Researchers in the organizational sciences also find that change is difficult and they have their own methodological comfort zones. This is why there is a “scientific community's persistence in the use of particular methods” (Podsakoff & Dalton, 1987, p. 433). Interestingly, this statement is not only applicable to the methods used, but also to the substantive areas that researchers choose to investigate (Cascio & Aguinis, 2007). A surprising finding of our study is that this statement applies only partially to those who are advancing research methodology and publishing in ORM. Although some of the more traditional methodological approaches identified in previous reviews are popular (e.g., surveys, multiple regression and correlation), there are several innovative approaches that have become at least as popular (e.g., multilevel research, temporal issues) in a very short time period. Moreover, these innovative approaches address the quantitative methodological tools needed to test influential theories in the organizational sciences. For the organizational sciences to move forward, it is important that researchers and doctoral students become knowledgeable about these new tools.

Our review reveals that not all needs of substantive researchers are met by articles published in the first decade of ORM. There is a need to develop more and better tools in the qualitative domain, particularly regarding qualitative research design. Although the improvement of methodological tools in the absence of good theory is not likely to produce important advances, theory cannot advance in the absence of good empirical methods (Van Maanen, Sørensen, & Mitchell, 2007). A combination of methods training that includes some of the newer approaches and further research on qualitative tools is likely to produce important advancements in the organizational sciences during the next decade.

References

Austin, J. T., Scherbaum, C. A., & Mahlman, R. A. (2002). History of research methods in industrial and organizational psychology: Measurement, design, analysis. In S. G. Rogelberg (Ed.), Handbook of research methods in industrial and organizational psychology (pp. 1–33). Oxford, UK: Blackwell.

Cascio, W. F., & Aguinis, H. (2007). Research in industrial and organizational psychology 1963–2007: Changes, choices, and trends. Manuscript submitted for publication.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.

Cordes, C. L., & Dougherty, T. W. (1993). A review and an integration of research on job burnout. Academy of Management Review, 18, 621–656.

Duriau, V. J., Reger, R. K., & Pfarrer, M. D. (2007). A content analysis of the content analysis literature in organization studies: Research themes, data sources, and methodological refinements. Organizational Research Methods, 10, 5–34.

Fleiss, J. (1981). Statistical methods for rates and proportions. New York: Wiley.

Organizational Research Methods. (2007). Organizational Research Methods. Retrieved September 28, 2007, from http://www.sagepub.com/journalsProdDesc.nav?prodId=Journal200894

Podsakoff, P. M., & Dalton, D. R. (1987). Research methodology in organizational studies. Journal of Management, 13, 419–441.

Scandura, T. A., & Williams, E. A. (2000). Research methodology in management: Current practices, trends, and implications for future research. Academy of Management Journal, 43, 1248–1264.

Shapiro, G., & Markoff, G. (1997). A matter of definition. In C. W. Roberts (Ed.), Text analysis for the social sciences (pp. 9–31). Mahwah, NJ: Erlbaum.

Stone-Romero, E. F., Weaver, A. E., & Glenar, J. L. (1995). Trends in research design and data analytic strategies in organizational research. Journal of Management, 21, 141–157.

Van Maanen, J., Sørensen, J. B., & Mitchell, T. R. (2007). Introduction to special topic forum: The interplay between theory and method. Academy of Management Review, 32, 1145–1154.

Table 1. Frequencies and percentages for topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Quantitative 47 (95.92) 57 (93.44) 42 (100.00) 51 (91.07) 20 (45.45) 43 (86.00) 42 (100.00)
Quantitative - Design 8 (17.02) 10 (17.54) 12 (28.57) 11 (21.57) 3 (15.00) 5 (11.63) 2 (4.76)
Quantitative - Measurement 13 (27.66) 21 (36.84) 11 (26.19) 23 (45.10) 5 (25.00) 18 (41.86) 15 (35.71)
Quantitative - Analysis 29 (58.00) 32 (50.79) 19 (45.24) 20 (37.04) 14 (63.64) 20 (46.51) 28 (62.22)
Qualitative 2 (4.08) 4 (6.56) 0 (0.00) 5 (8.93) 24 (54.55) 7 (14.00) 0 (0.00)
Qualitative - Design 1 (50.00) 1 (25.00) 0 (0.00) 3 (60.00) 9 (37.50) 7 (100.00) 0 (0.00)
Qualitative - Measurement 0 (0.00) 1 (25.00) 0 (0.00) 0 (0.00) 3 (12.50) 0 (0.00) 0 (0.00)
Qualitative - Analysis 1 (50.00) 2 (50.00) 0 (0.00) 2 (40.00) 12 (50.00) 0 (0.00) 0 (0.00)
Total for Year 49 (100) 61 (100) 42 (100) 56 (100) 44 (100) 50 (100) 42 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Quantitative 40 (90.91) 45 (91.84) 64 (94.12) 451 (89.31)
Quantitative - Design 2 (5.00) 5 (11.11) 10 (15.63) 68 (15.08)      
Quantitative - Measurement 25 (62.50) 13 (28.89) 28 (43.75) 172 (38.14)
Quantitative - Analysis 13 (32.50) 27 (60.00) 27 (41.54) 229 (48.83)      
Qualitative 4 (9.09) 4 (8.16) 4 (5.88) 54 (10.69)
Qualitative - Design 2 (50.00) 4 (100.00) 3 (75.00) 30 (55.56)      
Qualitative - Measurement 1 (25.00) 0 (0.00) 0 (0.00) 5 (9.26)
Qualitative - Analysis 1 (25.00) 0 (0.00) 1 (25.00) 19 (35.19)      
Total for Year 44 (100) 49 (100) 68 (100) 505 (100)      
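The percentages in Table 1 appear to follow two different bases: top-level topics (Quantitative, Qualitative) are expressed as shares of the yearly article total, whereas subcategories (e.g., Qualitative - Design) are expressed as shares of their parent category's count. A minimal sketch of this arithmetic, using the 1998–2007 column of Table 1 (this is only an illustration of how the reported figures appear to be derived, not the authors' actual procedure):

```python
def pct(count, base):
    """Percentage of `base`, rounded to two decimals as reported in the tables."""
    return round(100.0 * count / base, 2)

# 1998-2007 totals as transcribed from Table 1
total_articles = 505
quantitative = 451
qualitative = 54
qualitative_design = 30

# Top-level topics are shares of all articles in the review period
print(pct(quantitative, total_articles))    # matches the 89.31 reported for Quantitative
print(pct(qualitative, total_articles))     # matches the 10.69 reported for Qualitative

# Subcategories are shares of the parent category, not of all articles
print(pct(qualitative_design, qualitative)) # matches the 55.56 reported for Qualitative - Design
```

This two-base convention explains why subcategory percentages within a year can sum to 100% even though the subcategory counts are far smaller than the yearly article total.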

Table 2. Frequencies and percentages for quantitative design topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Archival 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (20.00) 0 (0.00)
Behavioral simulation 1 (12.50) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Case study 1 (12.50) 1 (10.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Control variables / statistical control 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Cross-cultural research 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (20.00) 1 (50.00)
Electronic / web research 0 (0.00) 0 (0.00) 1 (8.33) 2 (18.18) 0 (0.00) 1 (20.00) 0 (0.00)
External validity / generalizability 0 (0.00) 0 (0.00) 2 (16.67) 1 (9.09) 0 (0.00) 0 (0.00) 0 (0.00)
Internal validity 1 (12.50) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Mixed methods (qualitative and quantitative) 1 (12.50) 2 (20.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (20.00) 0 (0.00)
Multilevel research 0 (0.00) 1 (10.00) 1 (8.33) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Quantitative literature review/meta-analysis 0 (0.00) 0 (0.00) 0 (0.00) 1 (9.09) 0 (0.00) 0 (0.00) 0 (0.00)
Quasi-experimental 1 (12.50) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Research setting 1 (12.50) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Sample size 0 (0.00) 1 (10.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Sampling 0 (0.00) 0 (0.00) 2 (16.67) 1 (9.09) 0 (0.00) 1 (20.00) 0 (0.00)
Survey 1 (12.50) 1 (10.00) 6 (50.00) 6 (54.55) 0 (0.00) 0 (0.00) 1 (50.00)
Temporal issues 1 (12.50) 3 (30.00) 0 (0.00) 0 (0.00) 3 (100.00) 0 (0.00) 0 (0.00)
General/nonspecified 0 (0.00) 1 (10.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Total for Year 8 (100) 10 (100) 12 (100) 11 (100) 3 (100) 5 (100) 2 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Archival 1 (50.00) 0 (0.00) 0 (0.00) 2 (2.94)      
Behavioral simulation 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Case study 0 (0.00) 0 (0.00) 0 (0.00) 2 (2.94)      
Control variables / statistical control 1 (50.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Cross-cultural research 0 (0.00) 0 (0.00) 1 (10.00) 3 (4.41)      
Electronic / web research 0 (0.00) 1 (20.00) 2 (20.00) 7 (10.29)      
External validity / generalizability 0 (0.00) 0 (0.00) 1 (10.00) 4 (5.88)      
Internal validity 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Mixed methods (qualitative and quantitative) 0 (0.00) 0 (0.00) 0 (0.00) 4 (5.88)      
Multilevel research 0 (0.00) 1 (20.00) 0 (0.00) 3 (4.41)      
Quantitative literature review / meta-analysis 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Quasi-experimental 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Research setting 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Sample size 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)      
Sampling 0 (0.00) 0 (0.00) 0 (0.00) 4 (5.88)      
Survey 0 (0.00) 2 (40.00) 5 (50.00) 22 (32.35)      
Temporal issues 0 (0.00) 1 (20.00) 1 (10.00) 9 (13.24)      
General/nonspecified 0 (0.00) 0 (0.00) 0 (0.00) 1 (1.47)
Total for Year 2 (100) 5 (100) 10 (100) 68 (100)      

Table 3. Frequencies and percentages for quantitative measurement topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Archival data 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (5.56) 0 (0.00)
Banding 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Level of analysis of dependent variable 0 (0.00) 0 (0.00) 3 (27.27) 0 (0.00) 0 (0.00) 0 (0.00) 4 (26.67)
Measurement invariance / equivalence 0 (0.00) 0 (0.00) 2 (18.18) 0 (0.00) 1 (20.00) 6 (33.33) 2 (13.33)
Reliability 2 (15.38) 4 (19.05) 0 (0.00) 5 (21.74) 4 (80.00) 5 (27.78) 0 (0.00)
Scale development 2 (15.38) 4 (19.05) 3 (27.27) 4 (17.39) 0 (0.00) 1 (5.56) 0 (0.00)
Test development 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Test theory 0 (0.00) 0 (0.00) 0 (0.00) 2 (8.70) 0 (0.00) 2 (11.11) 1 (6.67)
Validity 9 (69.23) 12 (57.14) 3 (27.27) 12 (52.17) 0 (0.00) 3 (16.67) 8 (53.33)
General/nonspecified 0 (0.00) 1 (4.76) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Total for Year 13 (100) 21 (100) 11 (100) 23 (100) 5 (100) 18 (100) 15 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Archival data 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.58)      
Banding 1 (4.00) 0 (0.00) 0 (0.00) 1 (0.58)      
Level of analysis of dependent variable 3 (12.00) 1 (7.69) 8 (28.57) 19 (11.05)      
Measurement invariance / equivalence 0 (0.00) 3 (23.08) 1 (3.57) 15 (8.72)      
Reliability 10 (40.00) 4 (30.77) 6 (21.43) 40 (23.26)      
Scale development 3 (12.00) 0 (0.00) 0 (0.00) 17 (9.88)      
Test development 1 (4.00) 0 (0.00) 0 (0.00) 1 (0.58)      
Test theory 1 (4.00) 0 (0.00) 2 (7.14) 8 (4.65)      
Validity 6 (24.00) 5 (38.46) 11 (39.29) 69 (40.12)      
General/nonspecified 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.58)      
Total for Year 25 (100) 13 (100) 28 (100) 172 (100)      

Table 4. Frequencies and percentages for quantitative analysis topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
ANOVA 0 (0.00) 1 (3.13) 0 (0.00) 0 (0.00) 1 (7.14) 0 (0.00) 0 (0.00)
Article citation / impact 0 (0.00) 0 (0.00) 1 (5.26) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Bayesian networks 0 (0.00) 0 (0.00) 0 (0.00) 1 (15.00) 0 (0.00) 0 (0.00) 0 (0.00)
Causal mapping 0 (0.00) 0 (0.00) 1 (5.26) 0 (0.00) 0 (0.00) 0 (0.00) 2 (7.14)
Coefficient beta 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Common method variance 2 (6.90) 2 (6.25) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Computational modeling 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Computer simulation 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Confidence intervals 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Correlation 2 (6.90) 0 (0.00) 0 (0.00) 1 (5.00) 0 (0.00) 0 (0.00) 2 (7.14)
Descriptives 0 (0.00) 0 (0.00) 1 (5.26) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Effect size 0 (0.00) 7 (21.88) 1 (5.26) 2 (10.00) 0 (0.00) 0 (0.00) 0 (0.00)
Ethnostatistics 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Factor analysis 1 (3.45) 1 (3.13) 1 (5.26) 0 (0.00) 0 (0.00) 4 (20.00) 1 (3.57)
Generalized estimating equations 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.57)
Categorical dependent variables 1 (3.45) 0 (0.00) 1 (5.26) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Logistic regression 2 (6.90) 0 (0.00) 1 (5.26) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Longitudinal data analysis 2 (6.90) 0 (0.00) 0 (0.00) 0 (0.00) 3 (21.43) 0 (0.00) 0 (0.00)
MANOVA 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (7.14) 0 (0.00) 0 (0.00)
Meta-analysis 0 (0.00) 3 (9.38) 0 (0.00) 0 (0.00) 0 (0.00) 1 (5.00) 0 (0.00)
Missing data 0 (0.00) 4 (12.50) 4 (21.05) 0 (0.00) 0 (0.00) 9 (45.00) 2 (7.14)
Multiple regression-correlation 6 (20.69) 4 (12.50) 3 (15.79) 5 (25.00) 4 (28.57) 1 (5.00) 9 (32.14)
Multidimensional scaling 0 (0.00) 0 (0.00) 1 (5.26) 1 (5.00) 0 (0.00) 0 (0.00) 0 (0.00)
Multilevel research 1 (3.45) 1 (3.13) 1 (5.26) 0 (0.00) 1 (7.14) 2 (10.00) 3 (10.71)
Network analysis 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Neural networks 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 3 (15.00) 1 (3.57)
Nonparametric techniques 0 (0.00) 1 (3.13) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Outliers 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Power analysis 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Probit regression 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Structural equation modeling 3 (10.34) 5 (15.63) 3 (15.79) 8 (40.00) 1 (7.14) 0 (0.00) 3 (10.71)
Temporal issues 3 (10.34) 3 (9.38) 0 (0.00) 0 (0.00) 2 (14.29) 0 (0.00) 3 (10.71)
General/nonspecified 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.57)
Other 1 (3.45) 0 (0.00) 0 (0.00) 0 (0.00) 1 (7.14) 0 (0.00) 0 (0.00)
Total for Year 29 (100.00) 32 (100.00) 19 (100.00) 20 (100.00) 14 (100.00) 20 (100.00) 28 (100.00)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
ANOVA 0 (0.00) 1 (3.70) 0 (0.00) 3 (1.31)      
Article citation / impact 1 (7.69) 0 (0.00) 1 (3.70) 3 (1.31)      
Bayesian networks 0 (0.00) 0 (0.00) 0 (0.00) 3 (1.31)      
Causal mapping 2 (15.38) 1 (3.70) 0 (0.00) 6 (2.62)      
Coefficient beta 0 (0.00) 1 (3.70) 0 (0.00) 1 (0.44)      
Common method variance 0 (0.00) 1 (3.70) 0 (0.00) 5 (2.18)      
Computational modeling 1 (7.69) 0 (0.00) 0 (0.00) 1 (0.44)      
Computer simulation 1 (7.69) 0 (0.00) 1 (3.70) 2 (0.87)      
Confidence intervals 1 (7.69) 0 (0.00) 0 (0.00) 1 (0.44)      
Correlation 1 (7.69) 0 (0.00) 0 (0.00) 6 (2.62)      
Descriptives 1 (7.69) 0 (0.00) 0 (0.00) 2 (0.87)      
Effect size 0 (0.00) 0 (0.00) 0 (0.00) 10 (4.37)      
Ethnostatistics 0 (0.00) 5 (18.52) 0 (0.00) 5 (2.18)      
Factor analysis 0 (0.00) 5 (18.52) 0 (0.00) 13 (5.68)      
Generalized estimating equations 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.44)      
Categorical dependent variables 0 (0.00) 0 (0.00) 1 (3.70) 3 (1.31)      
Logistic regression 0 (0.00) 0 (0.00) 0 (0.00) 3 (1.31)      
Longitudinal data analysis 0 (0.00) 1 (3.70) 1 (3.70) 7 (3.06)      
MANOVA 0 (0.00) 1 (3.70) 0 (0.00) 2 (0.87)      
Meta-analysis 0 (0.00) 0 (0.00) 1 (3.70) 5 (2.18)      
Missing data 0 (0.00) 0 (0.00) 3 (11.11) 22 (9.61)      
Multiple regression-correlation 0 (0.00) 5 (18.52) 2 (7.41) 39 (17.03)      
Multidimensional scaling 1 (7.69) 0 (0.00) 0 (0.00) 3 (1.31)      
Multilevel research 3 (23.08) 1 (3.70) 12 (44.44) 25 (10.92)      
Network analysis 1 (7.69) 0 (0.00) 1 (3.70) 3 (1.31)      
Neural networks 0 (0.00) 0 (0.00) 0 (0.00) 4 (1.75)      
Nonparametric techniques 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.44)      
Outliers 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.44)      
Power analysis 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.44)      
Probit regression 0 (0.00) 0 (0.00) 0 (0.00) 1 (0.44)      
Structural equation modeling 0 (0.00) 5 (18.52) 0 (0.00) 28 (12.23)      
Temporal issues 0 (0.00) 0 (0.00) 4 (14.81) 15 (6.55)      
General/nonspecified 0 (0.00) 0 (0.00) 0 (0.00) 2 (0.87)      
Other 0 (0.00) 0 (0.00) 0 (0.00) 2 (0.87)      
Total for Year 13 (100) 27 (100) 27 (100) 229 (100)      

Table 5. Frequencies and percentages for qualitative design topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Action research 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 3 (42.86) 0 (0.00)
Case studies 1 (100.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (14.29) 0 (0.00)
Document interpretation 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (11.11) 1 (14.29) 0 (0.00)
Ethnography 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (11.11) 0 (0.00) 0 (0.00)
Grounded theory 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Interpretive 0 (0.00) 0 (0.00) 0 (0.00) 1 (33.33) 3 (33.33) 0 (0.00) 0 (0.00)
Interviewing 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (14.29) 0 (0.00)
Knowledge-based view 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Narrative 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Participant observation 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (14.29) 0 (0.00)
Policy capturing 0 (0.00) 0 (0.00) 0 (0.00) 2 (66.67) 3 (33.33) 0 (0.00) 0 (0.00)
Survey 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (11.11) 0 (0.00) 0 (0.00)
General/nonspecified 0 (0.00) 1 (100.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00)
Total for Year 1 (100) 1 (100) 0 (100) 3 (100) 9 (100) 7 (100) 0 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Action research 0 (0.00) 0 (0.00) 1 (33.33) 4 (13.33)      
Clinical research 0 (0.00) 0 (0.00) 0 (0.00) 2 (6.67)      
Document interpretation 0 (0.00) 0 (0.00) 0 (0.00) 2 (6.67)      
Ethnography 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.33)      
Grounded theory 0 (0.00) 1 (25.00) 0 (0.00) 1 (3.33)      
Interpretive 2 (100.00) 1 (25.00) 1 (33.33) 8 (26.67)      
Interviewing 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.33)      
Knowledge-based view 0 (0.00) 1 (25.00) 0 (0.00) 1 (3.33)      
Paper and pencil 0 (0.00) 1 (25.00) 0 (0.00) 1 (3.33)      
Personal experience methods 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.33)      
Policy capturing 0 (0.00) 0 (0.00) 0 (0.00) 5 (16.67)      
Visual methods 0 (0.00) 0 (0.00) 0 (0.00) 1 (3.33)      
General/nonspecified 0 (0.00) 0 (0.00) 1 (33.33) 2 (6.67)      
Total for Year 2 (100) 4 (100) 3 (100) 30 (100)      

Table 6. Frequencies and percentages for qualitative measurement topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Reliability 0 (0.00) 1 (100.00) 0 (0.00) 0 (0.00) 2 (66.67) 0 (0.00) 0 (0.00)
Survey 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (33.33) 0 (0.00) 0 (0.00)
Total for Year 0 (100) 1 (100) 0 (100) 0 (100) 3 (100) 0 (100) 0 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Reliability 1 (100.00) 0 (0.00) 0 (0.00) 4 (80.00)      
Survey 0 (0.00) 0 (0.00) 0 (0.00) 1 (20.00)      
Total for Year 1 (100) 0 (100) 0 (100) 5 (100)      

Table 7. Frequencies and percentages for qualitative analysis topics published in Organizational Research Methods (1998–2007)

Note—Topics included are only those for which there is at least 1 count for the entire 1998–2007 review period.

Topic 1998 N (%) 1999 N (%) 2000 N (%) 2001 N (%) 2002 N (%) 2003 N (%) 2004 N (%)
Concept mapping 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 1 (8.33) 0 (0.00) 0 (0.00)
Conjoint analysis 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 2 (16.67) 0 (0.00) 0 (0.00)
Content analysis 1 (100.00) 1 (50.00) 0 (0.00) 1 (50.00) 0 (0.00) 0 (0.00) 0 (0.00)
Interpretive 0 (0.00) 0 (0.00) 0 (0.00) 1 (50.00) 3 (25.00) 0 (0.00) 0 (0.00)
Policy capturing 0 (0.00) 0 (0.00) 0 (0.00) 0 (0.00) 5 (41.67) 0 (0.00) 0 (0.00)
General/nonspecified 0 (0.00) 1 (50.00) 0 (0.00) 0 (0.00) 1 (8.33) 0 (0.00) 0 (0.00)
Total for Year 1 (100) 2 (100) 0 (100) 2 (100) 12 (100) 0 (100) 0 (100)
Topic 2005 N (%) 2006 N (%) 2007 N (%) 1998–2007 N (%)
Concept mapping 0 (0.00) 0 (0.00) 0 (0.00) 1 (5.26)      
Conjoint analysis 0 (0.00) 0 (0.00) 0 (0.00) 2 (10.52)      
Content analysis 0 (0.00) 0 (0.00) 1 (100.00) 4 (21.05)      
Interpretive 1 (100.00) 0 (0.00) 0 (0.00) 5 (26.32)      
Policy capturing 0 (0.00) 0 (0.00) 0 (0.00) 5 (26.32)      
General/nonspecified 0 (0.00) 0 (0.00) 0 (0.00) 2 (10.52)      
Total for Year 1 (100) 0 (100) 1 (100) 19 (100)      
Figure 1a. Trends in counts (in percentages) for quantitative and qualitative topics over the 10-year review period

Figure 1b. Trends in counts (in percentages) for the most popular qualitative-design subcategories over the 10-year review period

Figure 2. Trends in counts (in percentages) for the most popular quantitative-design (top panel), quantitative-measurement (center panel), and quantitative-analysis (bottom panel) subcategories over the 10-year review period


1 This research was conducted, in part, while Herman Aguinis was on sabbatical leave from the University of Colorado Denver and holding visiting appointments at the University of Salamanca (Spain) and University of Puerto Rico.


© 2008 Project Management Institute
