New educational methods inevitably set off debates between traditionalists and reformers. “Inquiry science” instruction provides a classic case in point. Over the past two decades, proponents of inquiry science have lauded it as an engaging, interactive teaching approach that lets students practice and apply the scientific method. Critics, meanwhile, have lambasted it as an absence of instruction, in which students “construct” their own knowledge while teachers care little about actual scientific facts or correct answers.
Unfortunately, this debate is hampered by a lack of common definitions. “When different people say ‘inquiry,’ they can mean very different things. In the absence of a shared understanding about the term, it’s been difficult to build knowledge about this pedagogical approach,” says EDC’s Daphne Minner.
Minner, Abigail Levy, and Jeanne Century of EDC’s Center for Science Education (CSE) are completing a synthesis of nearly 20 years of research into inquiry science. Their study, “Has Inquiry Made a Difference?” reveals that there is no systematic definition of “inquiry science” even among researchers and curriculum developers. And without a common definition, there is, of course, no way to assess the effectiveness of the method.
“There have been a lot of theoretical and conceptual conversations about what inquiry science involves,” says Minner. “We’ve taken it down to a more practical, pragmatic level: What does it look like in the classroom?”
“Our purpose for conducting the synthesis was to inform the debate about what inquiry science actually was and whether or not it had merit, because practitioners, policymakers, and researchers alike are making decisions often without a research base,” says Levy. The CSE team specifically aimed to meet this need across a wide audience of professionals.
The process of “inquiry” is modeled on the scientist’s method of discovery. It views science as a constructed set of theories and ideas based on the physical world, rather than as a collection of irrefutable, disconnected facts. It focuses on asking questions, considering alternative explanations, and weighing evidence. It includes high expectations for students to acquire factual knowledge, but it expects more from them than the mere storage and retrieval of information.
From Foundations: The Challenge and Promise of K–8 Science Education Reform (1997), written by EDC’s Center for Science Education and published by the National Science Foundation.
In the first phase of the study, CSE staff put out calls and combed the field looking for any research that might fall within the range of “inquiry science” methods. The staff used a list of 123 terms to identify studies that focused on students within the K–12 population and were completed between 1984 and 2002. Ultimately, they identified more than 900 research reports. While wading through those reports, they discovered a host of problems—from the meaning of “inquiry” to the meaning of “rigorous research.”
“If you apply the word ‘research’ to a piece of work, there should be standards about what that means, regardless of what methodology or design you’re using,” says Minner. “That was the driving force behind our quest for rigor.”
The research team established a framework to assess three main criteria for rigor: (1) descriptive clarity (do we know enough about how this study was conducted?), (2) data quality (the legitimacy of data sources, including tests, focus groups, and conversations), and (3) analytic integrity (are the findings from the data believable, traceable, and trustworthy?).
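The three-part framework can be thought of as a screening rubric: a study is retained for synthesis only if it passes all three checks. The sketch below is purely illustrative, assuming a simplified yes/no judgment per criterion; the names and structure are hypothetical, not the CSE team’s actual codebook.

```python
from dataclasses import dataclass

@dataclass
class StudyReport:
    """A hypothetical record of one research report under review."""
    title: str
    describes_treatment: bool   # descriptive clarity: do we know how the study was run?
    data_sources_legit: bool    # data quality: are the data sources legitimate?
    findings_traceable: bool    # analytic integrity: are findings traceable to the data?

def meets_rigor_criteria(study: StudyReport) -> bool:
    # A report is retained only if it satisfies all three criteria.
    return (study.describes_treatment
            and study.data_sources_legit
            and study.findings_traceable)

reports = [
    StudyReport("Inquiry unit in grade 5 physical science", True, True, True),
    StudyReport("Hands-on curriculum, treatment undescribed", False, True, True),
]
retained = [r for r in reports if meets_rigor_criteria(r)]
print(len(retained))  # 1
```

In practice the team’s codebooks involved far more nuanced, graded judgments than a boolean per criterion, but the all-or-nothing gate captures why so many reports were ultimately discounted.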
“One reason this analysis has taken so long is that with each phase of coding and each criterion for rigor, we had to develop a codebook to define and standardize our terms and methods,” says Century. The codebooks document the team’s process and provide other researchers with guidelines for assessing all kinds of studies.
Using the codebooks, the team winnowed the pool of more than 900 reports down to 456; the other 459 were discounted for simple but important flaws. “Almost half of our reports were eliminated because they didn’t describe the treatment,” says Century. “They said nothing about what actually happened in the classroom setting. How can you accumulate a body of knowledge about the impact of instruction if you don’t have a description of the instruction being studied?” Other reports could not state clearly that outcomes were directly related to the instruction.
Because the project includes both qualitative and quantitative research, another challenge was to develop coding schemes that could accommodate each. Assessing the rigor of qualitative research (such as that involving case studies, classroom observations, and interview data) proved difficult because there were few road maps to follow. “There are a number of set standards for doing experimental research,” says Minner, “but these standards don’t apply to qualitative research or to other types of designs, such as quasi-experiments.”
The CSE team believes that the painstaking work they’ve done to design and apply the codebooks to hundreds of studies will result in important information for school decision-makers.
“We believe we’re developing tools and definitions that will help administrators and curriculum developers as well as researchers,” says Century. “Decision-makers need to understand not only the student outcomes associated with particular curricula, but also the nature of the instruction that fostered those outcomes. Our study is guided by that need.”
The study is in its final year. The CSE team will publish in the coming months findings on the effect that inquiry instruction has on students’ understanding of physical, earth/space, and life science concepts, and their ability to apply those concepts to solve scientific problems.
Originally published on June 1, 2006