International Journal of Qualitative Methods 6 (1) March 2007

Questions Arising about Emergence, Data Collection,
and Its Interaction with Analysis in a Grounded Theory Study


Catherine D. Bruce


Catherine Bruce, PhD, Assistant Professor, Trent University, School of Education and Professional Learning, Canada

Abstract: There has been a strong call for increased clarity and transparency of method in qualitative research. Although qualitative data analysis has been detailed, data management has not been made as transparent in the literature. How do data collection and analysis interact in practical terms? What constitutes sufficient data? And can research be both planful and emergent? In this paper, the author highlights several methodological strategies for addressing data management challenges in a grounded theory study of preservice mathematics teachers.

Keywords: grounded theory, data management, methodology, efficacy

 

Citation

Bruce, C. D. (2007). Questions arising about emergence, data collection, and its interaction with analysis in a grounded theory study. International Journal of Qualitative Methods, 6(1), Article 4. Retrieved [date] from http://www.ualberta.ca/~iiqm/backissues/6_1/bruce.htm


The call for transparency and clarity of qualitative research methods (Anfara, Brown, & Mangione, 2002; Demerath, 2006) has become more focused and urgent as educational researchers attempt to address complex issues of “improvement” and the wider education community makes demands for assurances of credibility (that the study has internal reliability), transferability (that rich descriptions of data collection and analysis methods allow for potential application in other contexts), and dependability (that a clear audit trail is articulated for inspection by the reader). Clarity and transparency of methods require detailed documentation and illustration for these demands to be met. Demerath has described this response as the elucidative response for reducing marginalization of qualitative research.

It is well understood that there is a need for a wide range of research methods, including both quantitative and qualitative studies, to capture trends and effects as well as provide descriptions that explain how and why outcomes are achieved and phenomena are experienced. Yet, there continue to be broad assumptions about how qualitative and quantitative data are gathered and analyzed. These assumptions must be reexamined regularly. Furthermore, authors of reports must ensure that methods are not oversimplified or underdescribed. In this article, I examine particular complexities of data collection, management, and related analysis based on a grounded theory study in mathematics education where resources were limited. Methodological issues arising in the study are magnified and scrutinized to illustrate specific strategies for data management.

Questioning assumptions about qualitative and quantitative methods

A typical description of a quantitative study suggests that the method used is deductive: The conclusions follow necessarily from the premises. Researchers are expected first to develop a hypothesis, use prior theory, and anticipate conclusions; then to collect data appropriate to the anticipated conclusions; and, finally, to analyze the data numerically. This involves using a preplanned, fixed structure (Creswell, 2005). On closer examination, it becomes apparent that quantitative studies often involve careful examination of data (e.g., to determine how key variables are distributed and correlated) before researchers launch into a main analysis. This frequently includes the reworking of research questions and subsequent literature review. Essentially, the frame of the study can change in response to the data.

A typical description of a qualitative study suggests that the method used is inductive: reasoning from the specific to a whole and focusing on the particulars rather than the general. Qualitative researchers are expected to gather rich descriptive data and ground conclusions and understandings in the data mined, not prior theories. It is the particulars that tell the story. This involves using an emerging, flexible structure (Creswell, 2005). On closer examination, however, it becomes apparent that qualitative studies often involve overt planning (e.g., by creating start codes prior to analysis) before the researcher launches into a main analysis. There are significant regularities in data collection and analysis procedures. Qualitative studies also have theoretical expectations that guide the collection and analysis stages, particularly in the light of ethical review requirements that must be met before the onset of any study.

With these complexities in mind, it is possible to imagine how qualitative and quantitative educational research methods are closer than traditionally described. It seems as though the fundamental reasoning of the methods is similar. Perhaps educational researchers weave between induction and deduction in an iterative process. These deliberations are not new to most educational researchers; however, descriptions of methods do not tend to acknowledge or illustrate these complexities.

In a qualitative study of preservice teacher efficacy in mathematics, issues of methodological credibility, transferability, and dependability all surfaced. The study examined how preservice teacher efficacy changed as teacher candidates were engaged in practicum settings at schools and in a reform-based mathematics methods course. Because there are very few qualitative studies of preservice teacher efficacy, and because the study was to be grounded in teacher candidate experiences, constructivist grounded theory (Charmaz, 1994, 2003) was used as the research method.

Grounded theory incorporates a number of quantitative-like procedures, such as data saturation requirements; prescriptive coding systems of open, axial, and selective coding; and code counts. Because grounded theory shares some characteristics with quantitative methods (Creswell, 2005; Greckhamer & Koro-Ljungberg, 2005) but is clearly positioned in the qualitative tradition, it offers an interesting lens through which distinctions between quantitative and qualitative data collection and analysis methods appear less absolute.

An example study in mathematics teacher efficacy

I undertook a study of preservice teacher mathematics efficacy to examine when, how, and why efficacy increased or decreased during a 1-year bachelor of education program that included a 36-hour mathematics methods course and 61 days of practice teaching in schools. Research in the area of teacher efficacy has produced a solid body of literature over the past 25 years that focuses on how teachers judge their capability to bring about student learning (Bandura, 1986, 1997; Gibson & Dembo, 1984; Goddard, Hoy, & Woolfolk Hoy, 2004; Ross, 1998; Tschannen-Moran & Woolfolk Hoy, 2001; Tschannen-Moran, Woolfolk Hoy, & Hoy, 1998). The teacher assesses his or her ability to perform a given task based on analysis of what is required to accomplish the task, reflection on past similar situations, and an assessment of the resources available (Tschannen-Moran, Woolfolk Hoy, & Hoy, 1998). Teachers with high self-efficacy are more likely to experiment with effective but challenging instructional strategies, such as performance-based assessment (Vitali, 1993) and student-directed, activity-based methods (Riggs & Enochs, 1990). Teacher efficacy is important, because it is a reliable predictor of student achievement (Mascall, 2003; Muijs & Reynolds, 2001) and of math reform implementation (evidence reviewed in Ross, 1998).

Teaching efficacy is particularly important in elementary mathematics, because many of the candidates do not have strong mathematics backgrounds. Most teacher candidates have experienced traditional programs as students and have not observed, or participated in, programs based on reform mathematics principles and practices (Bruce, 2005). The Conference Board of the Mathematical Sciences (1975) found that “Teachers are essentially teaching the same way they were taught in school” (p. 77). More than 20 years later, “the average classroom showed little change” (Hiebert, 1999, p. 11). The same method of teaching persists today despite pressure to change (D’Ambrosio, Boone, & Harkness, 2004; Ross, McDougall, & Hogaboam-Gray, 2002).

Research design and data collection

In the sample study, I examined the teaching efficacy of 50 elementary preservice teachers, but the main goal of the study was to understand qualitatively the experiences of preservice teachers and examine what factors influenced their efficacy ratings and why. For a single researcher, the initial sample size was too large for gathering rich qualitative data. Therefore, the participant sample was narrowed from 50 participants to a stratified random sample of 10, and finally to 2 participants selected as extreme cases for case study.
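To make this funneling step concrete, the following sketch shows one way a stratified random narrowing from 50 participants to 10 could be scripted. It is purely illustrative: the participant records, the stratification attribute, and the proportional-allocation rule are assumptions made for the example, not procedures reported in the study.

```python
import random
from collections import defaultdict

# Hypothetical participant records; "stratum" stands in for whatever attribute
# (e.g., prior math background) a stratified random sample might be built on.
participants = [
    {"id": f"P{i:02d}", "stratum": random.choice(["weak", "moderate", "strong"])}
    for i in range(1, 51)
]

def stratified_sample(records, key, n_total, seed=1):
    """Draw a roughly proportional random sample from each stratum."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for record in records:
        by_stratum[record[key]].append(record)
    sample = []
    for members in by_stratum.values():
        share = max(1, round(n_total * len(members) / len(records)))
        sample.extend(rng.sample(members, min(share, len(members))))
    return sample[:n_total]

# Funnel: 50 participants -> stratified random sample of 10.
focal_ten = stratified_sample(participants, "stratum", n_total=10)
print(sorted(p["id"] for p in focal_ten))
```

Stratifying before sampling is what preserves the integrity of the narrowed sample, because each background group remains represented in rough proportion to the original 50.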

Four purposes drove the preservice teacher efficacy study. They were (a) to investigate preservice teachers’ math stories of experience as sources of tension, “failure,” and “success”; (b) to identify methods that contribute to preservice development of efficacy related to mathematics reform-based teaching methods; (c) to inform the teaching and learning strategies and theoretical understandings of preservice elementary mathematics methods courses; and (d) to gather information on how efficacy beliefs of preservice teachers changed through experience.

There were two phases in the study: a pilot phase and a full study phase. I used the pilot phase (Year 1) to understand the scope of the situation. This phase supplied rich data demonstrating the value of continuing with a full study (Year 2). During the pilot phase, previous studies and existing theoretical frameworks were carefully reviewed. The pilot phase thus acted as an informing agent where existing research, the preliminary data collection, and analysis interacted to inform the full study.

Several forms of data collection were used, including interviews, written log entries by participants, inventories, observations, and a teacher efficacy survey. To make the data collection methods transparent to the reader, I have detailed the methods and their timing in Tables 1 and 2, matched to the four research purposes. The tables address two criteria posited by Lincoln and Guba (1985): transferability and dependability. That is, the collection methods are described in sufficient detail that the reader can inspect the process, and are displayed with enough clarity that a researcher could (a) consider a similar implementation sequence in another research context or (b) use the tables as an example of dependable reporting practices.

Grounded theory dilemmas

Grounded theory analysis procedures have been well documented in the methodology literature (Charmaz, 1990; Creswell, 1998, 2005; Harry, Sturges, & Klingner, 2005), and highlight the validity (and, some would argue, the objectivist underpinnings) of this research method. For a full discussion on the terrain, evolution, and developments of grounded theory, see Greckhamer and Koro-Ljungberg (2005) and Mills, Bonner, and Francis (2006). However, the challenges of data collection and management in grounded theory studies have not been fully identified and addressed. In this study, three principal data dilemmas emerged for me as a researcher; they are discussed in detail in this paper.

Collection Method

Participant Involvement

Specific Research Purpose and Additional Comments

Math inventory

All participants individually completed a questionnaire with eight questions on the first day of class

• Purpose: (a) To investigate preservice teachers’ math stories of experience as sources of tension, “failure,” and “success”
• Used for baseline data regarding prior experiences and general level of confidence and interest coming into the mathematics methods course

Teacher efficacy scale

All participants individually completed the Teacher’s Sense of Efficacy Scale on four occasions

• Purpose: (d) To gather information on how efficacy beliefs of preservice teachers changed through experience
• Efficacy scale was used to plot confidence changes over time; the scale was also used to support final log entry writing by participants and as a point of discussion during one-to-one interviews
• Data from interviews gave insights into participant interpretation

Math logs

All participants individually completed math log entries on a biweekly basis during the course

• Purpose: (a) To investigate preservice teachers’ math stories of experience as sources of tension, “failure,” and “success”; (c) To inform the teaching and learning strategies and theoretical understandings of preservice elementary mathematics methods courses
• Participants were given specific math log prompts to write about; the final log prompt is most helpful as it relates to self-efficacy

Class observations

All participants were observed in class while working in small groups; researcher recorded field notes

• Purpose: (b) To identify methods that contribute to preservice development of efficacy related to mathematics reform-based teaching methods; (c) To inform the teaching and learning strategies and theoretical understandings of preservice elementary mathematics methods courses
• Participants were involved in regular class activity; researcher made observations of students in a field notes journal both during and after each observation occasion

Focus group interviews

Participants met in three groups: Group A, Group B, and Group C
(18 participants in Year 1; 12 participants in Year 2)
Participants drew images of themselves as math teachers at the onset of the meetings

• Purpose: (a) To investigate preservice teachers’ math stories of experience as sources of tension, “failure,” and “success”; (b) To identify methods that contribute to preservice development of efficacy related to mathematics reform-based teaching methods
• For 2004: Analysis completed in spring 2004; paper written
• For 2005: All focus group interview transcripts completed following the interviews

One-to-one interviews

Selected participants met one-to-one with the researcher for a 75-minute interview at the end of the program
(5 interviews in Year 1; 10 interviews in Year 2, 2 of which were extended in length and detail)

• Purpose: (a) To investigate preservice teachers’ math stories of experience as sources of tension, “failure,” and “success”; (b) To identify methods that contribute to preservice development of efficacy related to mathematics reform-based teaching methods; (c) To inform the teaching and learning strategies and theoretical understandings of preservice elementary mathematics methods courses
• Five interviews conducted in Year 1; 10 interviews conducted in Year 2
• Detailed information but not quite as rich as focus group interviews

Table 1. Data collection methods and purposes

Timing of Collection

Collection Method

Nature of Collection Episode

September
   First class in September, Year 1
   First class in September, Year 2

Math inventory

Single collection episode

September, October, February, March
   Early September, Year 2
   Mid-October, Year 2
   Mid-February, Year 2

Teacher efficacy scale

Punctuated collection episodes across 4 months

September through December
   Mid September – Mid December, Year 2

Class observations

Ongoing collection; 8 occasions of 2 hours each

September through March
   Mid September – End of March
   Years 1 and 2

Math logs

Ongoing collection; biweekly

November
   Early November, Year 1
   Early November, Year 2

Focus group interviews

Single collection episode for each of the 3 focus groups

May
   May, Year 1
   May, Year 2

One-to-one interviews

Single collection episode for each participant

Table 2. Timing of data collection

How can I begin to collect data in an inductive manner? If there is no theoretical framework or “hunch” to begin with, how would I know what to collect? There are criticisms that grounded theory does not incorporate reference to, and use of, existing theoretical frameworks. However, according to Berg (2001), Charmaz (2003), and Strauss (1987), grounded theory is not an entirely inductive process. Interplay between experience, induction, and deduction is required. This presents a fundamental methodological dilemma. Is it possible for research to be both planful and emergent?

What is the nature of the interaction between data collection and analysis? The iterative process of collection and analysis is alluded to in most grounded theory studies because the analysis of specific data should inform the following round of data collection, but how is this actually done? Furthermore, how does the researcher describe this phenomenon in reports?

How much data collection is required in a grounded theory study? In a grounded theory study, the volume of data can quickly become overwhelming (see Harry et al., 2005, for example) because of the inductive nature of the study. The importance of saturation of codes is what sometimes drives overcollection of data, because, although the study is qualitative in nature, the researcher wants to be confident that the themes identified are exhaustive and/or valid. This leads to the danger of gathering wide, but shallow, pools of data. The objective, however, is to gather rich data that do not deplete or overconsume valuable research resources.

These three methodological dilemmas or tensions are deeply interconnected. They require further elaboration and discussion both related to the sample study and in the qualitative research literature overall.

Can grounded theory studies be both planful and emergent?

A key characteristic of grounded theory is the use of an emergent design. This is defined by Creswell (2005) as a process whereby “the researcher collects data, analyzes it immediately rather than waiting until all data are collected, and then bases the decision about what data to collect next on this analysis” (p. 405). The issue of practicality surfaces quickly. If the researcher is to collect data and then analyze them before continuing with further collection, the overall research period must be extensive. The analysis period takes up time that might otherwise be used for additional data collection. In the sample study, the conditions were optimal, with distinct “on” and “off” data collection and analysis periods: Periods of methods course activity were followed by periods of classroom placement activity, allowing for pauses in data collection and time for data analysis. Whether time periods are restricted or vast, research studies cannot theoretically or practically be launched without hunches or a framework. The framework might not be explicit, but the researcher begins, as a minimum, with objectives based on prior experiences. This reality clearly challenges any pure induction claims. Issues of emergent design were confronted at most stages of data collection and analysis in the preservice teacher efficacy study: As a “responsible” researcher, I had theoretical understandings and data collection methods clearly detailed from the outset of the study, which was in direct competition with my goal of responding to the analysis of each data collection episode by refining the subsequent collection methods and/or episodes.

The educational researcher cannot begin to collect data for publication purposes without first undergoing ethical review. The ethical review process for the preservice teacher efficacy study was stringent, with more than 30 pages of documentation and five ethical research committee deliberation meetings. Letters to participants, interview outlines, surveys, and scales were all under close scrutiny because of my strong participatory role. As is typical of many ethical reviews, a summary of the related literature was required. Timelines and dissemination plans were also demanded. Given all of the requirements of ethical reviews for education research studies, the ability to use a pure emergent design is considerably compromised. Qualitative studies, however, value an emergent unfolding of the research. Can the researcher plan for emergence?

In allowing for emergence, the most effective strategy employed in this efficacy study was the use of an extended timeline allowing for two phases: a pilot year and a full-year study. The pilot phase of data collection functioned as both a method informant and as a findings informant. From the data collected in the pilot phase, both the methods for data collection and the focus of collection were refined. This strategy supports a central principle of constructivist grounded theory; that is, there is a full acknowledgement that I, as researcher, was interacting with participants, the data, and the literature as the study was co-constructed.

The simple event of one-to-one interviews is used here to illustrate how data collection and analysis informed the subsequent collection activity in an emergent design. Interviews with preservice teachers were held at the end of the program. The questions, although drafted, were not finalized until observation and focus group data were collected and analyzed. Originally, Question 3 was “What elements of your classroom placement(s) were helpful in building confidence in your math teaching?” This question was revised on analysis of data in the pilot phase, which indicated mixed experiences on classroom placements, leading to increased efficacy for some participants but decreased efficacy for others. The original question was expanded to two questions for the final interview, allowing for a range of responses: (a) “What features of the program over this past year have helped you build confidence teaching math?” and (b) “What features of the program have hindered your confidence teaching mathematics?”

A broader example of data collection emergence relates to timing of data collection. Originally, the individual interviews were to be conducted at the end of the methods course (February). Once I clearly understood the impact of classroom placements, based on analysis of focus group interviews, however, the timing of interviews was moved to April, when participants would have completed an additional placement. This proved to be fruitful for gathering a more encompassing story of preservice teacher experiences.

A final example of planning for emergence illustrates how theoretical frameworks and related research instruments influenced the refinement of collection strategies. In the pilot year, a simple survey was administered three times. The strategy yielded high-quality results, but the instrument was flawed, in that it did not account for many areas of efficacy information. In the full study, the original survey was substituted with one developed and tested by other researchers. It was not until the literature review was extended to examine teacher efficacy studies based on quantitative data that the value of the Teacher’s Sense of Efficacy Scale was considered. The 12-item instrument (Tschannen-Moran & Woolfolk Hoy, 2001) was chosen because of its demonstrated reliability and validity. The qualitative examination of the Efficacy Scale led to further understanding of exactly how items on the scale were being interpreted by participants and what factors influenced depression or elevation of scores.
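The fragment below sketches how scale responses of this kind could be tracked across the four administrations. The response data, participant label, timing annotations, and 9-point response format are illustrative assumptions for the sketch rather than data reported in the study.

```python
from statistics import mean

# Hypothetical responses for one participant: four administrations of a
# 12-item efficacy scale, each item rated on an assumed 9-point format.
responses = {
    "P07": [
        [4, 5, 3, 4, 4, 5, 3, 4, 4, 3, 5, 4],  # early in the program
        [5, 5, 4, 5, 5, 5, 4, 5, 5, 4, 6, 5],  # mid course
        [4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4, 4],  # hypothetical dip after a placement
        [6, 6, 5, 6, 6, 6, 5, 6, 6, 5, 7, 6],  # end of program
    ],
}

for pid, administrations in responses.items():
    scores = [round(mean(items), 2) for items in administrations]
    print(pid, "mean efficacy by administration:", scores,
          "| net change:", round(scores[-1] - scores[0], 2))
```

Profiles of this kind are what the plotted confidence changes in the study were built from; the qualitative data then explained why individual profiles rose or dipped.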

How do data collection and analysis interact?

Some data analysis procedures have been well documented in the methodology literature (Charmaz, 1990; Creswell, 1998, 2005; Denzin & Lincoln, 2003; Harry et al., 2005; Seale, 1999), particularly in terms of coding processes, to illustrate the validity of grounded theory as a research method and the credibility of findings. Charmaz (2003) clearly described the theoretical understanding of data collection and analysis interaction.

Essentially, grounded theory methods consist of systematic inductive guidelines for collecting and analyzing data to build middle-range theoretical frameworks that explain the collected data. Throughout the research process, grounded theorists develop analytic interpretations of their data to focus further data collection, which they use in turn to inform and refine their developing theoretical analyses. (p. 250)

Charmaz also described common analytic steps of grounded theory studies.

Less transparent are the details of how data collection and analysis interact in a practical sense. In this efficacy study, for example, the data collection and early analysis structure was distinct from the final detailed analysis structure. Realities of working as a single researcher (with one research assistant) have an enormous impact on the ability to manage large quantities of data with integrity. Thus, the collection and early analysis stage consisted of a funneling strategy that began with 50 participants, then decreased to 10, and finally reduced again to 2 participants. The purpose of the funneling strategy was to gain increasing depth of understanding about the details of participant experience. The funneling strategy drove data collection processes as well as the early analysis stages. This was distinct from the final detailed data analysis strategy, in which I began with the same 50 participants, then narrowed to investigate the 2 extreme case samples, and finally expanded the analysis to 10 participants in an hourglass strategy (see Figure 1). Using the hourglass strategy, I reexamined data from all 50 participants to increase my confidence that a full accounting of codes was complete. I was concerned that codes generated in the initial analysis might not have been comprehensive because of additional themes generated as analysis continued. I then focused in on 2 extreme cases for a full analysis based on all codes generated. Expanding the analysis back out to 10 representative cases then confirmed that the 2 cases selected were the outlying cases, and that participants followed similar trajectories during their preservice year. This increased my confidence in the theorized learning trajectory as a valid overall description of experiences for all participants.

Figure 1. Model for enhancing integrity of participant sample during data collection and analysis

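As a purely illustrative companion to Figure 1, the sketch below shows how two extreme cases might be identified from efficacy-change scores among the 10 focal participants and then checked against the rest of that representative group. The scores and participant labels are invented, and in the study itself the selection of extreme cases rested on qualitative judgement of participant experience, not on a single numeric score.

```python
# Hypothetical efficacy-change scores (final minus initial mean scale score)
# for the 10 participants retained from the funnel stage.
efficacy_change = {
    "P03": 1.8, "P07": 1.1, "P12": 0.9, "P15": 0.7, "P21": 0.6,
    "P28": 0.5, "P33": 0.3, "P36": 0.1, "P41": -0.4, "P47": -1.2,
}

# Narrow: take the largest decline and the largest gain as the extreme cases.
ranked = sorted(efficacy_change, key=efficacy_change.get)
low_extreme, high_extreme = ranked[0], ranked[-1]

# Expand back out: confirm the chosen cases really do bound the distribution
# of the representative group before theorizing from them.
middle = ranked[1:-1]
assert all(
    efficacy_change[low_extreme] < efficacy_change[p] < efficacy_change[high_extreme]
    for p in middle
), "Selected cases are not the outliers"

print("Extreme cases:", low_extreme, "and", high_extreme)
print("Representative cases for the expanded analysis:", middle)
```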

Why is this distinction between the funnel and the hourglass important? Beyond the increased transparency of declaring the methods employed, the distinction between the two strategies is critical, particularly in terms of credibility of findings. The funnel strategy is typical in qualitative and mixed methods studies for narrowing the number of participants and managing related data while maintaining integrity of the sample. However, the hourglass strategy for analysis is not commonly described in the literature. As a researcher with few resources, I found that use of the hourglass strategy was manageable and effective.

Examining outlying cases is an important strategy for helping to ensure that norms, themes, or typical patterns of activity do not monopolize analysis to the point of ignoring atypical patterns or contradictory findings. Use of the hourglass strategy has the potential to increase researcher and reader confidence that model development and theory building are substantive. In this study, it was important to expand the analysis back out to 10 participants for three reasons: (a) to look for contradictions and noncongruence of findings; (b) to determine whether the two cases were, indeed, the extremes; and (c) to determine whether there were same, similar, or different patterns across a larger sample of participants leading to researcher confidence in the development of a working model.

Research reports that explicitly describe or depict collection and analysis strategies can be more convincing, yet written reports are expected to adhere to very specific guidelines, such as short text length while simultaneously describing complex ideas. Charmaz (1994) and Larson (1997) explicitly excluded the use of diagrams and figures in an attempt to avoid overgeneralizations in grounded theory studies. As a departure from this position, I found that annotated diagrams of data collection and analysis processes offered a helpful visual depiction of some of the complexities of the data collection and analysis interaction processes.

The interaction between collection and analysis is obviously not fully accounted for in Figure 1. Figures 2 and 3 further illustrate the zigzag data collection and analysis approach used in Year 1 (pilot) and Year 2 of the study (see also Creswell, 2005). It is evident from these figures that each stage of data collection was followed by a period of analysis that subsequently informed the following data collection stage.

The purpose of including the diagrams is to explicitly illustrate the practical interaction between data collection and analysis, yet even detailed diagrams do not capture the entire process. Member checks, which occurred after each major stage of analysis, and participant coding of transcripts (interrater checks on open and active coding) are two examples of methods not accounted for in the diagrams. (One advantage of a smaller study is the ability to follow up on member checks that actively seek contradictions rather than simply moving the researcher agenda along.) The arrows between columns in Figures 2 and 3 illustrate a flow back and forth between collection and analysis. In reality, analysis was also occurring during the data collection events, as I made intuitive leaps. With the shortcomings of diagrams in mind, prose statements or annotations are essential for conveying the detail and complexity of data collection, analysis, and triangulation, and for further increasing transparency. The danger of oversimplifying the method, even through detailed descriptions and diagrams, is an ongoing tension that I faced during the study.
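The report does not specify how the interrater checks on coding were quantified, if at all; as one hedged possibility, a simple segment-level agreement calculation such as the following could support that step. The code labels and segment assignments below are hypothetical.

```python
# Hypothetical codes assigned to the same five transcript segments by the
# researcher and by a participant during an interrater check.
researcher = ["peer support", "placement tension", "mastery", "placement tension", "mastery"]
participant = ["peer support", "placement tension", "self-doubt", "placement tension", "mastery"]

matches = sum(r == p for r, p in zip(researcher, participant))
print(f"Segment-level agreement: {matches / len(researcher):.0%}")

# Disagreements flag segments to revisit together, in the spirit of member
# checks that actively seek contradictions rather than confirmation.
disputed = [i for i, (r, p) in enumerate(zip(researcher, participant)) if r != p]
print("Segments to discuss:", disputed)
```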

What constitutes sufficient data?

The answer to this question is elusive. On investigation of existing grounded theory studies and this study, it has become clear to me that there is no magic number of events or types or periods of data collection. Harry et al. (2005) have suggested that they could have completed their large-scale study more successfully with approximately half the amount of data gathered from participants (272 open-ended interviews, 84 informal conversations, 627 classroom observations, 42 child study meetings, and related documents). The preservice teacher efficacy study was much smaller and fine-grained in comparison, and it quickly became apparent that gathering qualitative data with 50 participants would be impossible given the limited resources I had. Furthermore, the purpose of the study was to examine carefully the particulars of participant experience and the details of how their efficacy developed while learning to teach mathematics. The funneling strategy combined with purposeful sampling (stratified random sample) helped to narrow participant involvement to a manageable amount while maintaining integrity of the sample.

The pilot phase also yielded important information about group size for focus group interviews. In the pilot phase, focus groups of six preservice teachers were used; each participant competed for time to express his or her views, and there was a lack of sustained storytelling. Therefore, in the full study, the focus groups had a membership of between 3 and 5 participants. Although this is a much smaller group size than the 6 to 8 recommended by Krueger (1998), it proved to be more effective for purposeful interaction combined with opportunities for each participant to speak at length about his or her experiences.

The number of data “events” in a grounded theory study depends on a matrix of factors, including scope and scale of the study, level of detail required, resources available, topic, and research questions. Is there a minimum number of data events required? Typically, researchers ensure credibility through triangulation of data (drawing on multiple sources of information, individuals, or processes) and/or member checking (asking participants to check the accuracy of accounts) (Creswell, 2005). Ultimately, the number of data events is less important than the trustworthiness of the reporting.

Figure 2. Diagram of zigzag approach for Year 1 (pilot year)


Figure 3. Diagram of zigzag approach for Year 2


In Figures 2 and 3, I illustrate the sequence of “events” in terms of how data collection and analysis were conducted over 2 years. Also made transparent in the figures are the distinctions and refinements made from the pilot phase to the full study phase. For example, the number of focus group interview participants was reduced in the full study, as group size was too large in the pilot phase, compromising the quality of the stories of experience shared by participants. Member checks and independent coding were completed by the researcher.

The essence of the data quantity issue lies, perhaps, in the saturation of data. Once an exhaustive listing of codes, categories, or themes has been achieved, and no other codes are identifiable, the researcher can be assured to the greatest extent possible that sufficient data have been collected and that new data will not provide additional information or interpretation. The problem for the researcher is that it is difficult to know whether the codes have been saturated (and sufficient data have been gathered) until the data have been analyzed and the study is complete. Many researchers are led to one of three solutions: (a) the researcher must collect more data than necessary to ensure saturation of codes, (b) he or she must return to the study site and collect more data after the analysis phase is “complete,” or (c) he or she must conduct preliminary coding of data whenever possible during the data collection process. None of these solutions is ideal in terms of use of research resources or method. In ideal circumstances, the underlying structure of moving back and forth between data collection and analysis is accounted for and planned. In the example study, the ideal data collection conditions were available. The rhythm of the bachelor of education program itself, combined with the opportunity to conduct a pilot phase and actual study phase, was optimal for collecting data, analyzing them, and then refining the following round of data collection. These conditions are likely more the exception than the norm.
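A rough, purely illustrative way to operationalize this judgement is to track how many new codes each successive collection and analysis episode contributes. The episode contents and the stopping rule in the sketch below are assumptions made for the example, not a prescription from the grounded theory literature.

```python
# Hypothetical sets of codes identified in successive collection/analysis episodes.
episodes = [
    {"math anxiety", "peer support", "placement tension"},
    {"peer support", "mentor modelling", "placement tension"},
    {"mentor modelling", "student success", "math anxiety"},
    {"peer support", "student success"},
    {"placement tension", "mentor modelling"},
]

seen, new_per_episode = set(), []
for codes in episodes:
    new_codes = codes - seen
    new_per_episode.append(len(new_codes))
    seen |= new_codes

# One possible operational rule: treat the code list as saturated once two
# consecutive episodes contribute no new codes.
saturated = len(new_per_episode) >= 2 and new_per_episode[-2:] == [0, 0]
print("New codes per episode:", new_per_episode)   # [3, 1, 1, 0, 0] here
print("Saturated:", saturated)
```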

The tension between the value of the specifics of a fine-grained qualitative study and studies with wider data collection is at the heart of grounded theory dilemmas. How does the researcher make claims about new theories that are convincing to the research community while investigating the particulars of experience? It is this balance between richness (and detail in the particulars of the findings) and application to broader models and theory building that continues to plague grounded theory research studies. In the efficacy study, the hourglass approach to data collection and analysis, as well as implementing the study over two phases, facilitated a balance that eased some of these tensions.

Discussion

Grounded theory studies are “grounded” in the data collected to develop or refine models of understanding through an inductive process. There is an assumption that the researcher approaches the study in a state of neutrality and has the important role of describing the situation in a non-evaluative way, so that participant voice is valued over that of the researcher. Over the years, in educational research, however, there has been a tendency to be evaluative in efforts to identify “most effective practices” in educational settings. The researcher is already familiar with the theoretical frameworks and literature of the field, and carries a personal professional definition of what constitutes most and least effective practices. To position grounded theory, or perhaps any method in the educational research context, as entirely inductive is an overgeneralization.

In reality, there is a give-and-take relationship between the researcher and the situation being investigated. The researcher becomes interested in a situation. She or he begins to read more about similar situations. Furthermore, the researcher would likely want to ensure that the study is worthwhile in terms of value to the research and educational communities, particularly as it relates to funding and enabling a sense of agency from the findings. Before the research begins, the researcher must prepare proposals, undergo ethical reviews, and invite participants. In other words, researchers must be strategic. The theorizing has already begun. The value of prior experiences must be acknowledged and explicitly referenced. This underlines and makes possible the level of complexity of understanding essential to educational research. Induction and prior theory examination must be seen not as oppositional but as complementary. A researcher can be engaged in a situation both theoretically and practically and still examine the data using multiple lenses and interpretations. Indeed, researchers theorize on an ongoing basis.

Ethical review procedures cause additional complications to emergent design studies. The details of data collection methods must be disclosed for ethical review. Problems can arise when the researcher recognizes that procedural changes are required. Using a qualitative emergent design such as constructivist grounded theory is more time consuming and labor intensive, and requires specific conditions for success. Indeed, researchers might be discouraged from using this method in some cases.

Those grounded theory studies that occur in phases allow for a higher level of emergence not afforded in shorter studies. The researcher tests out data collection methods for two purposes: first, to see whether the data yield is rich; and second, to begin the analysis of early data to inform future phases of study. Theorizing then finds a place within the literature and within the data collection phases.

In response to the call for transparency of methods, qualitative reports must make every effort to describe the details of collection and analysis so that the reader believes the study to be credible, transferable, and dependable, whether claims are made about generalizability or not. “Such efforts at transparency will make our work more accessible to others, and their subsequent judgments will ultimately be of benefit to us” (Demerath, 2006, p. 104). One strategy for enhancing transparency, in light of limited space requirements of reports, is the use of annotated diagrams providing the reader with graphically explicit images of the data collection and analysis procedures. No form of description captures the process entirely, and by simply acknowledging this shortcoming, the reader might have a more realistic sense of the complexities of the processes.

Optimal conditions for a constructivist grounded theory study

There appear to be specific conditions under which a constructivist grounded theory study is possible and appropriate. These include obvious but essential conditions described by methods experts to date (conditions 1, 2, 3, and 6, below), and conditions that have been less clearly articulated in the literature but require further consideration (particularly conditions 4, 5, and 7).

1. The purpose of the study is to examine and explain an educational process qualitatively, usually in situations where the research literature is limited and theories require generation or refinement (Glaser & Strauss, 1967).

2. The timelines of the study are organized to ensure that periods of data collection can be followed directly by analysis prior to the following data collection period. This ensures that appropriate types and amounts of data are collected effectively and lead to saturation without depleting resources unnecessarily (Creswell, 2005).

3. Extensive pilot work can be extremely important to an effective study. According to Wholey (1986), it takes the researcher up to one third of the full research timeframe to evaluate best approaches for collection and analysis methods. The time available for the study should be sufficient to ensure that this “acclimatization” can occur.

4. To increase credibility, a portion of data is “reserved” and used to confirm interpretations of detailed findings. Models such as the hourglass strategy, developed in the example study, are effective in facilitating this process.

5. To increase dependability and transferability, an audit trail of data collection and analysis methods is explicitly made available for reader inspection. Tables and figures combined with rich descriptions can support this process. The table of data collection methods and purposes, as well as the zigzag data collection and analysis figures provided in the sample study, are examples of how dependability and transferability may be increased.

6. The researcher understands and ensures that the study is thoroughly grounded in the data while being informed by existing theoretical frameworks and research literature, and therefore does not make claims of using an entirely inductive process (Charmaz, 2003).

7. The researcher is prepared to engage in a predominantly emergent process where the themes emerge from the data leading to a middle-range theory or working model, while acknowledging and managing the extensive preplanning required for successful data collection and analysis that meets ethics demands and uses limited resources effectively.

Paradigms of emergence, induction and deduction, and qualitative and quantitative traditions are distinguished from one another in educational research to clarify purposes, methods, and perspectives. In this article, I have questioned the extreme positioning of reported studies within a given paradigm, which leads to oversimplification and underrepresentation of data collection, management, and analysis methods in educational research. Transparency and clarity of methods used, as well as an acknowledgement of tensions faced in relation to method, are essential ingredients to increasing our understanding of complexities and ensuring integrity of conclusions.

This article is a response to the call for greater communication and transparency of qualitative research methods, particularly in illustrating ways of managing data collection and the subsequent relationship to analysis in grounded theory studies. Tensions of attempting to use a predominantly emergent design were sufficiently reconciled during the sample study using phases of implementation and acknowledging the realistic constraints that are placed on emergence in educational research. In this article, I have offered some practical suggestions for conducting small-scale grounded theory studies that are transparent, credible, transferable, and dependable.

References

Anfara, V., Brown, K., & Mangione, T. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(7), 28-36.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W. H. Freeman.

Berg, B. L. (2001). Qualitative research methods for the social sciences. Needham Heights, MA: Pearson.

Bruce, C. (2005). Teacher candidate efficacy in mathematics: Factors that facilitate increased efficacy. In G. A. Lloyd, S. Wilson, J. L. M. Wilkins, & S. L. Behm (Eds.), Proceedings of the Twenty-Seventh Psychology of Mathematics Association-North America [CD-ROM]. Eugene, OR: All Academic.

Charmaz, K. (1990). Discovering chronic illness: Using grounded theory. Social Science & Medicine, 30, 1161-1172.

Charmaz, K. (1994). Identity dilemmas of chronically ill men. The Sociological Quarterly, 35, 269-288.

Charmaz, K. (2003). Grounded theory: Objectivist and constructivist methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Strategies of qualitative inquiry (pp. 249-291). Thousand Oaks, CA: Sage.

Conference Board of the Mathematical Sciences. (1975). Overview and analysis of school mathematics, Grades K-12. Washington DC: National Advisory Committee on Mathematical Education.

Creswell, J. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Creswell, J. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Merrill Prentice Hall Pearson Education.

D’Ambrosio, B., Boone, W., & Harkness, S. (2004). Planning district-wide professional development: Insights gained from teachers and students regarding mathematics teaching in a large urban district. School Science & Mathematics, 104(1), 5-15.

Demerath, P. (2006). The science of context: Modes of response for qualitative researchers in education. International Journal of Qualitative Studies in Education, 19(1), 97-113.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2003). Landscape series of the handbook of qualitative research. Thousand Oaks, CA: Sage.

Gibson, S., & Dembo, M. (1984). Teacher efficacy: A construct validation. Journal of Educational Psychology, 76(4), 569-582.

Glaser, B., & Strauss, A. (1967). The discovery of grounded theory. Chicago: Aldine.

Goddard, R., Hoy, W., & Woolfolk Hoy, A. (2004). Collective efficacy beliefs: Theoretical developments, empirical evidence, and future directions. Educational Researcher, 33(3), 3-13.

Greckhamer, T., & Koro-Ljungberg, M. (2005). The erosion of a method: Examples from grounded theory. International Journal of Qualitative Studies in Education, 18(6), 729-750.

Harry, B., Sturges, K., & Klingner, J. (2005). Mapping the process: An exemplar of process and challenge in grounded theory analysis. Educational Researcher, 34(2), 3-13.

Hiebert, J. (1999). Relationships between research and the NCTM standards. Journal for Research in Mathematics Education, 30(1), 3-19.

Krueger, R. (1998). Analyzing and reporting focus group results. Thousand Oaks, CA: Sage.

Larson, B. W. (1997). Social studies teachers’ conceptions of discussion: A grounded theory study. Theory and Research in Social Education, 25, 114-146.

Lester, F. K. (1996). Criteria to evaluate research. Journal of Research in Mathematics Education, 27(2), 130-132.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Mascall, B. (2003). Leaders helping teachers helping students: The role of transformational leaders in building teacher efficacy to improve student achievement. Unpublished doctoral dissertation, University of Toronto, Toronto, Canada.

Mills, J., Bonner, A., & Francis, K. (2006). The development of constructivist grounded theory. International Journal of Qualitative Methods, 5(1), Article 3. Retrieved June 19, 2006, from http://www.ualberta.ca/~iiqm/backissues/5_1/html/mills.htm

Muijs, D., & Reynolds, D. (2001, April). Being or doing: The role of teacher behaviors and beliefs in school and teacher effectiveness in mathematics, a SEM analysis. Paper presented at the Annual Meeting of the American Educational Research Association, Seattle.

Riggs, I., & Enochs, L. (1990). Toward the development of an elementary teacher’s science teaching efficacy belief instrument. Science Education, 74(6), 625-638.

Ross, J. A. (1998). The antecedents and consequences of teacher efficacy. In J. Brophy (Ed.), Research on teaching (Vol. 7, pp. 49-74). Greenwich, CT: JAI.

Ross, J., McDougall, D., & Hogaboam-Gray, A. (2002). Research on reform in mathematics education, 1993-2000. Alberta Journal of Educational Research, 48(2), 122-138.

Seale, C. (1999). The quality of qualitative research. Thousand Oaks, CA: Sage.

Strauss, A. (1987). Qualitative analysis for social scientists. New York: Cambridge University Press.

Tschannen-Moran, M., & Woolfolk Hoy, A. (2001). Teacher efficacy: Capturing an elusive construct. Teaching and Teacher Education, 17, 783-805.

Tschannen-Moran, M., Woolfolk Hoy, A., & Hoy, W. (1998). Teacher efficacy: Its meaning and measure. Review of Educational Research, 68(2), 202-248.

Vitali, G. (1993). Factors influencing teachers’ assessment and instructional practices in an assessment driven educational reform. Unpublished doctoral dissertation, University of Kentucky, Lexington.

Wholey, J. (1986). Evaluability assessment: Developing program theory. In L. Bickman (Ed.), New directions for program evaluation (pp. 77-92). San Francisco: Jossey-Bass.
