Howe’s Critique of a Positivist Evidence-Based Movement with a Potentially Valid Way Forward

A summary of Kenneth Howe’s article criticizing positivism and the new orthodoxy in educational science (evidence-based education).

(Howe, K. R. (2009). Epistemology, Methodology, and Education Sciences: Positivist Dogma, Rhetoric, and the Education Science Question. Educational Researcher, 38(6), 428–440.)

Keywords: Philosophy; politics; research methodology

“Although explicitly articulated versions (of positivism) were cast off quite some time ago in philosophy, positivism continues to thrive in tacit form on the broader scene . . . now resurgent in the new scientific orthodoxy.” (p. 428)

(A positivist stance on science) has sought to “construct a priestly ethos – by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value . . . and should therefore enjoy a commensurate authority” (Howe quoting Lessl, from Science and Rhetoric).

Howe traces the outline of this tacit form of positivism through the National Research Council’s 2002 report titled Scientific Research in Education and relates this report to three dogmas of positivism:

  1. The quantitative–qualitative dichotomy – A reductionist dogma that had the consequence of limiting the acceptable range of what could be considered valid in research studies.
  2. The fact–value distinction – An attempt to portray science as a value-free process, with the effect of obscuring the underlying values in operation.
  3. The division between the sciences and the humanities – Another positivist distinction designed to limit discussion to a narrow view of science.

Howe’s article does a good job of summarizing the general critiques of positivist methodology: (1) its overall claims could not stand up to philosophical scrutiny; (2) it tended not to recognize many of its own limitations, including a failure to apply adequate standards to itself; and (3) it harbored a political agenda that sought to stifle and block many important directions that inquiry might otherwise have taken.

The crux of the political matter: While the goal of positivism may have been to establish an objective, verifiable method of conducting social science modeled on the physical sciences, the primary result was an attempt to politically limit the scope of what could be considered meaningful scientific statements to only those statements that were verifiable in a narrow positivist sense. Howe is among the cohort who believe that the evidence-based movement is being used by some as a context to advance a tacit return to a form of positivism.

The crux of the scientific matter: Howe’s primary interest appears to be political, that is, the politics of how research is received and funded, but there is also an effectiveness issue. Positivism’s primary scientific problems lie in its tendency to ignore or downplay many of the limitations of positivist methods (overstating the meaning of positivist research) and in the way it oversimplifies, and fails to problematize, the rather complex relationship between research and practice.

Messick’s Six-Part Validity Framework as a Response

There are four responses to Howe in this journal issue. To me, none of the responses address the primary issue at play: bringing some sense of unity to varying ideas and enabling communication among people using different scientific methodological frameworks. There are suggestions to allow for multiple methods, but they amount to a juxtaposition of methods rather than a framework that guides and supports communication and understanding among scientists using differing methods. This is why I support Messick’s validity framework as a response to just this type of concern. Although Messick spoke specifically of test validity, nothing precludes this framework from being applied to practice validity and to the development of post-positivist evidence to support the validity of practices. What is the evidence-based movement really concerned with, if not the validity of the practices being pursued by practitioners? The concern is not primarily the validity of individual research studies, but the validity of practices and the development of evidence to support the validity of specific practices. It is also a mature framework that considers the full range of inquiry when developing evidence.

Messick’s six areas for developing validity correspond to six different types of validity evidence. Here is an initial set of ideas about how each might relate to evidence-based practice:

  • Content – Defines the scope of the practice domain, with evidence (including rationales and philosophical debates) for the relevance of a particular practice, how the practice represents ideas within the general domain, and its technical quality compared with other examples of the practice.
  • Substantive – Evidence that the design of the actual processes involved is consistent with design knowledge from relevant domains (e.g., psychology, sociology, engineering).
  • Structural – Evidence of consistency between the processes involved and the theories that underlie and support rationales for the structure of the actual process.
  • External – Empirical data to support criterion evidence (randomized controlled trials (RCTs) would be one example). For many practices this may include both convergent and discriminant evidence. (My thinking is still developing here, but empirical evidence from the research base would function more like criterion evidence, while direct empirical evidence from the actual practice being validated would, in most situations, fall under consequential evidence. See below.)
  • Generalization – Evidence that the practice is relevant and effective across different populations, contexts, and time periods.
  • Consequential – Evidence that the practice actually achieves the purpose it was originally intended to achieve.

I consider this list to be an early formulation, with more development needed. Critiques are most welcome.

Messick’s original formulation for test validity is available here.

More on the Research-Practice Gap and Evidence-Based Practice

How Do People Approach Evidence-Based Practice?

Tracy at the Evidence Soup blog has a recent post that got me thinking that the processes supporting Evidence-Based Practice (EBP) must be centered on actual clinical practices (not some abstract formulation of practice) and that these processes should include both research and clinical expertise. Tracy reviews an article in the July issue of Clinical Child Psychology and Psychiatry (How do you apply the evidence? Are you an improver, an adapter, or a rejecter? by Nick Midgley). I hope to review the article myself soon, but my library resources do not yet include the July issue, so my take at this time depends on Tracy’s description.

Here is my first take on the article:

Rejecters seem to be rejecting a positivist version of EBP when they discuss normative, prepackaged practices. This is defensible; there is no reason to follow in the positivists’ footsteps.

Improvers seem to be focusing on a top-down “push” approach. First, while research in this vein is important, technology and networks are moving toward a pull approach: giving answers to practitioners when they need them. Second, in addition to a top-down approach, there is also a need for a deep bottom-up understanding of practice: understanding practice needs and devising dissemination models that can meet those needs. The focus on understanding transfer problems may have the question backwards.

Adapters – I like this approach for the most part, with two caveats. First, it looks like it falls into the qualitative/quantitative divide that I dislike. I believe that you choose the methodology to fit the research question: qualitative research is needed to gain a deep understanding of practices or to unearth value issues, but I’ve seen too many qualitative studies that tried to answer quantitative research questions (e.g., which intervention is better). Second, coming from a validity perspective, I believe that all kinds of data can be integrated to arrive at an inferential judgement on practice validity. Especially in medicine, I think we often have correlational research data without much theory- and practice-based understanding. We need to understand practices from multiple perspectives that come together like the pieces of a puzzle to make a coherent picture.

Another Way to Approach the Research-Practice Gap from a Post-Positivist Perspective

One of Samuel Messick’s validity innovations was to connect construct validity with utility, values, and consequences in a progressive matrix. His original matrix can be found on page 27 of his 1995 American Psychologist article, available here. What I have done is adapt this matrix to what it might look like for Evidence-Based Practice. (The graphic is at the end of this post.) (I believe Messick’s use of the term Test Use is analogous to Clinical Experience, which I have termed Clinical Evidence and Judgement. Tests exist as artifacts, and I believe that practice, although more concrete, can also be analyzed as an artifact in much the same way that Messick analyzes tests.)

Messick uses a matrix, which I have used as well, but it can also be viewed as a stepwise process:

  • Step 1. Inferences from research data and syntheses form the evidentiary basis for Practice Validity (PV).
  • Step 2. PV + Clinical Evidence and Judgement form the evidentiary basis for the Relevance and Utility (RU) of the practice.
  • Step 3. PV + inferences from research form the consequential basis that informs the clinician of the Value Implications (VI) of a practice.
  • Step 4. PV + RU + VI + Social Consequences form the consequential basis for clinical evidence regarding practice use.

The bottom line is that clinical evidence for using a practice is the sum of practice validity, judgements of relevance and utility, the value implications drawn from research inferences, and evidence of the personal and social consequences of the practice.
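To make the progression easier to see, here is a rough sketch of how the adapted matrix might be laid out, reconstructed from the four steps above on the pattern of Messick’s 1995 matrix. The row and column labels are my own paraphrase (with Practice substituted for Test); the graphic at the end of this post remains the authoritative version.

| | Practice Interpretation | Practice Use |
| --- | --- | --- |
| Evidentiary basis | Practice Validity (PV) | PV + Relevance and Utility (RU) |
| Consequential basis | PV + Value Implications (VI) | PV + RU + VI + Social Consequences |

Read left to right and top to bottom, each cell adds a layer of evidence to the one before it, which is what makes the matrix progressive rather than a set of independent categories.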

Discussion always welcome!

[Graphic: Messick’s progressive matrix adapted for Evidence-Based Practice]