A summary of Kenneth Howe’s article criticizing positivism and the new orthodoxy in educational science (evidence-based education).
(Howe, K. R. (2009). Epistemology, Methodology, and Education Sciences: Positivist Dogma, Rhetoric, and the Education Science Question. Educational Researcher, 38(6), 428–440.)
Keywords: Philosophy; politics; research methodology
“Although explicitly articulated versions (of positivism) were cast off quite some time ago in philosophy, positivism continues to thrive in tacit form on the broader scene . . . now resurgent in the new scientific orthodoxy.” (p.428)
(A positivist stance on science) has sought to “construct a priestly ethos – by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value . . . and should therefore enjoy a commensurate authority” (Howe quoting Lessl, from Science and Rhetoric).
Howe traces the outline of this tacit form of positivism through the National Research Council’s 2002 report titled Scientific Research in Education and relates this report to three dogmas of positivism:
- The quantitative–qualitative dichotomy – A reductionist dogma that had the consequence of limiting the range of what could be considered valid in research studies.
- The fact–value distinction – An attempt to portray science as a value-free process, with the effect of obscuring the underlying values actually in operation.
- The division between the sciences and the humanities – Another positivist distinction designed to limit discussion to a narrow view of science.
Howe’s article does a good job of summarizing the general critiques of positivist methodology: (1) its overall claims could not stand up to philosophical scrutiny, (2) it failed to recognize many of its own limitations, including its failure to apply adequate standards to itself, and (3) it harbored a political agenda that sought to stifle or block many important directions that inquiry might otherwise have taken.
The crux of the political matter: While the goal of positivism may have been to establish an objective, verifiable method of conducting social science modeled on the physical sciences, the primary result was a political attempt to limit the scope of meaningful scientific statements to those verifiable in a narrow positivist sense. Howe is among the cohort who believe that the evidence-based movement is being used by some as a context for a tacit return to a form of positivism.
The crux of the scientific matter: Howe’s primary interest appears to be political – the politics of how research is received and funded – but there is also an effectiveness issue. Positivism’s primary scientific problems lie in its tendency to ignore or downplay the limitations of positivist methods (overstating the meaning of positivist research) and in the way it oversimplifies, and fails to problematize, the rather complex relationship between research and practice.
Messick’s Six-Part Validity Framework as a Response
There are four responses to Howe in this journal issue. To my mind, none of them addresses the primary issue at play: bringing some sense of unity to varying ideas and enabling communication among people using different scientific methodological frameworks. There are suggestions to allow for multiple methods, but these amount to a juxtaposition of methods rather than a framework that guides and supports communication and understanding among scientists using differing methods. This is why I propose Messick’s validity framework as a response to just this type of concern. Although Messick spoke specifically of test validity, nothing precludes applying the framework to practice validity and to the development of post-positivist evidence supporting the validity of practices. What is the evidence-based movement really concerned with, if not the validity of the practices being pursued by practitioners? The concern is not primarily the validity of individual research studies, but the validity of practices and the development of evidence to support the validity of specific practices. Messick’s is also a mature framework that considers the full range of inquiry when developing evidence.
Messick’s six areas for developing validity correspond to six different types of validity evidence. Here is an initial set of ideas about how each might relate to evidence-based practice:
- Content – Defines the scope of the practice domain, with evidence (including rationales and philosophical debates) for the relevance of a particular practice, for how the practice represents ideas within the general domain, and for its technical quality as compared to other examples of the practice.
- Substantive – Evidence that the design of the actual processes involved is consistent with design knowledge from relevant domains (e.g., psychology, sociology, engineering).
- Structural – The consistency between the processes involved and the theories that underlie and support the rationales for the structure of the actual process.
- External – Empirical data to support criterion evidence (randomized controlled trials (RCTs) would be one example). For many practices this may include both convergent and discriminant evidence. (My thinking is still developing here, but I suspect that empirical evidence from the research base would function more like criterion evidence, while direct empirical evidence from the actual practice being validated would, in most situations, fall under consequential evidence. See below.)
- Generalization – Evidence that the practice is relevant and effective across different populations, contexts, and time periods.
- Consequential – Evidence that the practice is actually achieving the purpose it was originally intended to achieve.
I consider this list an early formulation; more development is needed. Critiques are most welcome.
Messick’s original formulation for test validity is available here.