Evidence-based Practice Defined

Evidence-based practice (whether in management, education, medicine, or another field) describes a process designed to facilitate an integrative and evaluative judgment, based on empirical evidence and theoretical rationale, as to the appropriateness and adequacy of actions or proposed actions that follow from the processes or protocols of a practice or program.
Anyone familiar with Samuel Messick's validity theory will recognize that this definition is drawn from his language.  That is deliberate: I believe the evidence-based movement should focus on practice in a way that is best understood as practice validity (or process validity, or program validity, depending on your perspective and task).
Evidence-based practice represents a multifaceted judgment. It is not a cookbook or a cherry-picking approach; it is a judgment that integrates all relevant information to evaluate actions for their appropriateness to specific contexts and goals.  These actions are based on empirically supported theoretical rationales that are also backed by locally collected data.
Evidence-based practice is scientific inquiry. This approach does not just rely on the results of scientific inquiry; it is itself a form of scientific inquiry, but instead of being directed toward some aspect of theory (an a-contextual methodology), it is directed toward a specific practice (a contextualized methodology).  Though much of the literature equates an evidence-based approach with a decision-making process, the application of evidence is often focused on processes (or standing practices, or programs) and on whether those processes are adequate and appropriate for their intended goals.
Evidence-based practice should be associated with valid clinical or practice judgment.  A common criticism of the cookbook approach to EBP is that it impinges on clinicians' ability to make clinical judgments, but that is only true if one does not recognize that EBP is itself a judgment.  It is a form of clinical judgment and should occupy an important space within clinical practice.  This is an important point because research information seldom fits the contexts of practice exactly.  Applying this kind of information is not direct; it requires making an informed judgment.  This is also an area that needs additional study: I don't believe we yet fully understand all aspects (especially the cognitive aspects) of clinical judgment.

Practice-based Evidence

I reviewed two papers that take the perspective that if you want more evidence-based practice, you'll need more practice-based evidence.  The core idea: when evaluating the validity of a specific research study, internal validity is primary, but when you are arguing from a specific study toward evidence-based practice, external validity becomes most important.

First, Green and Glasgow take up this validity framework, pointing out that most research concentrates on internal validity and largely ignores external validity.  It's true that you can't have external validity without internal validity, but this does not make external validity any less important.  They work from Cronbach, Gleser, Nanda, & Rajaratnam's (1972) generalizability theory and relate four facets of generalizability to translation frameworks.  The four facets are identified as "different facets across which (evidence-based) program effects could be evaluated. They termed these facets units (e.g., individual patients, moderator variables, subpopulations), treatments (variations in treatment delivery or modality), occasions (e.g., patterns of maintenance or relapse over time in response to treatments), and settings (e.g., medical clinics, worksites, schools in which programs are evaluated)".
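
To make the generalizability-theory machinery behind these facets a little more concrete, here is a schematic sketch (my own gloss, not a formula taken from Green and Glasgow's paper): observed variance in a program effect is partitioned into components for the programs themselves, for each facet, for their interactions, and for error, and a generalizability coefficient expresses how much of that variance survives when one intends to generalize across the facets.

\sigma^2(X) = \sigma^2_{\text{program}} + \sigma^2_{\text{units}} + \sigma^2_{\text{treatments}} + \sigma^2_{\text{occasions}} + \sigma^2_{\text{settings}} + \sigma^2_{\text{interactions}} + \sigma^2_{\text{error}}

E\rho^2 = \frac{\sigma^2_{\tau}}{\sigma^2_{\tau} + \sigma^2_{\delta}}

Here \sigma^2_{\tau} is the universe-score (true program effect) variance and \sigma^2_{\delta} is the relative error variance contributed by whichever facets one intends to generalize across.  A program effect that generalizes well is one whose E\rho^2 stays high even when units, treatments, occasions, and settings are all allowed to vary.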

Westfall, Mold, & Fagnan (2007) point out some of the specific problems in generalizing to valid practice:

The magnitude and nature of the work required to translate findings from human medical research into valid and effective clinical practice, as depicted in the current NIH research pipeline diagrams have been underestimated. . . . (problems) include the limited external validity of randomized controlled trials, the diverse nature of ambulatory primary care practice, the difference between efficacy and effectiveness, the paucity of successful collaborative efforts between academic researchers and community physicians and patients, and the failure of the academic research enterprise to address needs identified by the community (p. 403).  Practice-based research and practice-based research networks (PBRNs) may help because they can (1) identify the problems that arise in daily practice that create the gap between recommended care and actual care; (2) demonstrate whether treatments with proven efficacy are truly effective and sustainable when provided in the real-world setting of ambulatory care; and (3) provide the "laboratory" for testing system improvements in primary care to maximize the number of patients who benefit from medical discovery.

They recommend adding another step to the NIH’s roadmap to evidence-based practice shown in this graphic:

[Figure: A recommended addition to the NIH translation roadmap]

References

Green, L. W., & Glasgow, R. E. (2006). Evaluating the Relevance, Generalization, and Applicability of Research: Issues in External Validation and Translation Methodology. Evaluation & the Health Professions, 29(1), 126-153.

Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-Based Research: "Blue Highways" on the NIH Roadmap. JAMA, 297(4), 403-405.

Howe’s Critique of a Positivist Evidence-based Movement with a Potentially Valid Way Forward

A summary of Kenneth Howe’s article criticizing positivism and the new orthodoxy in educational science (evidence-based education).

(Howe, K. R. (2009). Epistemology, Methodology, and Education Sciences: Positivist Dogma, Rhetoric, and the Education Science Question. Educational Researcher, 38(6), 428-440.)

Keywords: Philosophy; politics; research methodology

“Although explicitly articulated versions (of positivism) were cast off quite some time ago in philosophy, positivism continues to thrive in tacit form on the broader scene . . . now resurgent in the new scientific orthodoxy.” (p.428)

(A positivist stance on science) has sought to “construct a priestly ethos – by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value . . . and should therefore enjoy a commensurate authority” (Howe quoting Lessl, from Science and Rhetoric).

Howe traces the outline of this tacit form of positivism through the National Research Council’s 2002 report titled Scientific Research in Education and relates this report to three dogmas of positivism:

  1. The quantitative-qualitative dichotomy: a reductionist dogma that had the consequence of limiting the acceptable range of what could be considered valid in research studies.
  2. The fact-value distinction: an attempt to portray science as a value-free process, with the effect of obscuring the underlying values in operation.
  3. The division between the sciences and the humanities: another positivist distinction, designed to limit discussion to a narrow view of science.

Howe's article does a good job of summarizing the general critiques of positivist methodology: (1) its overall claims could not stand up to philosophical scrutiny, (2) it tended not to recognize many of its own limitations, including a failure to apply adequate standards to itself, and (3) it harbored a political agenda that sought to stifle and block many important directions that inquiry otherwise might have taken.

The crux of the political matter: while the goal of positivism may have been to establish an objective, verifiable method of conducting social science modeled on the physical sciences, the primary result was an attempt to politically limit the scope of what could count as meaningful scientific statements to only those statements that were verifiable in a narrow positivist sense. Howe is among the cohort who believe that the evidence-based movement is being used by some as a context for advancing a tacit return to a form of positivism.

The crux of the scientific matter: Howe's primary interest appears to be political (the politics of how research is received and funded), but there is also an effectiveness issue.  Positivism's main scientific problems lie in its tendency to ignore or downplay many of the limitations of positivist methods (overstating the meaning of positivist research) and in the way it oversimplifies, and fails to problematize, the rather complex relationship between research and practice.

Messick's Six-Part Validity Framework as a Response

There are four responses to Howe in this journal issue. To me, none of them addresses the primary issue at play: bringing some sense of unity to varying ideas and supporting communication among people who use different scientific methodological frameworks.  There are suggestions to allow for multiple methods, but these amount to a juxtaposition of methods rather than a framework that guides and supports communication and understanding among scientists using differing methods.  This is why I support Messick's validity framework as a response to just this type of concern.  Although Messick spoke specifically of test validity, nothing precludes the framework from being applied to practice validity and to the development of post-positivist evidence to support the validity of practices.  What is the evidence-based movement really concerned with, if not the validity of the practices being pursued by practitioners?  This is not primarily about the validity of individual research studies; it is about the validity of practices and about developing evidence to support the validity of specific practices.  Messick's is also a mature framework that considers the full range of inquiry when developing evidence.

Messick's six areas for developing validity correspond to six different types of validity evidence. Here is an initial set of ideas about how they might relate to evidence-based practice:

  • Content – Evidence defining the scope of the practice domain, including rationales and philosophical debates about the relevance of a particular practice, how the practice represents ideas within the general domain, and its technical quality compared with other examples of the practice.
  • Substantive – Evidence that the design of the actual processes involved is consistent with design knowledge from relevant domains (e.g., psychology, sociology, engineering).
  • Structural – The consistency between the processes involved and the theories that underlie and support rationales for the structure of the actual process.
  • External – Empirical data to support criterion evidence (randomized controlled trials (RCTs) would be one example).  For many practices this may include both convergent and discriminant evidence.  (My thinking is still developing here, but I think empirical evidence from the research base would function more like criterion evidence, while direct empirical evidence from the actual practice being validated would, in most situations, fall under consequential evidence.  See below.)
  • Generalization – Evidence that the practice is relevant and effective across different populations, contexts, and time periods.
  • Consequential – Evidence that the practice is actually achieving the purpose it was originally intended to achieve.

I consider this list to be an early formulation; more development is needed.  Critiques are most welcome.
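
To make the list a bit more concrete, here is a minimal organizational sketch (my own illustration, not anything from Messick; the practice name, evidence items, and class names are hypothetical) of how evidence for a single practice might be tracked facet by facet:

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class EvidenceFacet(Enum):
    """The six facets of validity evidence, adapted from Messick."""
    CONTENT = "content"
    SUBSTANTIVE = "substantive"
    STRUCTURAL = "structural"
    EXTERNAL = "external"
    GENERALIZATION = "generalization"
    CONSEQUENTIAL = "consequential"


@dataclass
class PracticeValidityArgument:
    """Collects evidence for the validity of a single practice, facet by facet."""
    practice: str
    evidence: Dict[EvidenceFacet, List[str]] = field(
        default_factory=lambda: {facet: [] for facet in EvidenceFacet}
    )

    def add(self, facet: EvidenceFacet, item: str) -> None:
        """Record one piece of evidence (a study, a rationale, local data, etc.)."""
        self.evidence[facet].append(item)

    def missing_facets(self) -> List[EvidenceFacet]:
        """Facets with no evidence yet: the gaps in the validity argument."""
        return [facet for facet, items in self.evidence.items() if not items]


# Hypothetical usage: a practice with external and consequential evidence only.
argument = PracticeValidityArgument(practice="brief motivational interviewing")
argument.add(EvidenceFacet.EXTERNAL, "RCT showing efficacy vs. usual care")
argument.add(EvidenceFacet.CONSEQUENTIAL, "12-month local outcome audit")
print([facet.value for facet in argument.missing_facets()])

The point of the structure is simply that a practice-validity argument is judged by the coverage and quality of evidence across all six facets, not by any single study.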

Messick’s original formulation for test validity is available here.

More on the Research Practice Gap and Evidence-Based Practice

How Do People Approach Evidence-Based Practice?

Tracy at the Evidence Soup blog has a recent post that got me thinking that the processes supporting Evidence-based Practice (EBP) must be centered on actual clinical practices (not some abstract formulation of practice) and that these processes should include both research and clinical expertise.  Tracy reviews an article in the July issue of Clinical Child Psychology and Psychiatry (How do you apply the evidence? Are you an improver, an adapter, or a rejecter? by Nick Midgley).  I hope to review the article myself soon, but my library does not yet have the July issue, so my take at this time depends on Tracy's description.

Here is my first take on the article:

Rejecters seem to be rejecting a positivist version of EBP when they talk about normative, prepackaged practices.  This is defensible; there is no reason to follow in the positivists' footsteps.

Improvers seem to be focusing on a top-down "push" approach.  First, while research in this vein is important, technology and networks are moving toward a pull approach: giving answers to practitioners when they need them.  Second, in addition to a top-down approach there is also a need for a deep bottom-up understanding of practice: understanding practice needs and working out how dissemination models can meet those needs.  Framing this as a transfer problem may have the question backwards.

Adapters – I like this approach for the most part, with two caveats.  First, it looks like it falls into the qualitative/quantitative divide that I dislike.  I believe you choose the methodology to fit the research question: qualitative research is needed to gain a deep understanding of practices or to unearth value issues, but I've seen too many qualitative studies that tried to answer quantitative-type research questions (e.g., which intervention is better).  Second, coming from a validity perspective, I believe that all kinds of data can be integrated to arrive at an inferential judgment on practice validity.  Especially in medicine, I think we often have correlational research data without much theory or practice-based understanding.  We need to understand practices from multiple perspectives that come together, like the pieces of a puzzle, to make a coherent picture.

Another Way to Approach the Research Practice Gap from a Post-Positivist Perspective

One of Samuel Messick's validity innovations was to connect construct validity with utility, values, and consequences in a progressive matrix.  His original matrix can be found on page 27 of his 1995 American Psychologist article, available here.  What I have done is adapt this matrix to what it might look like for Evidence-Based Practice. (The graphic is at the end of this post.)  (I believe Messick's use of the term Test Use is analogous to clinical experience, which I have termed Clinical Evidence and Judgment.  Tests exist as artifacts, and I believe that practice, although more concrete, can also be analyzed as an artifact in much the same way that Messick analyzes tests.)

Messick uses a matrix, which I have used as well, but it could also be viewed as a stepwise process:

  • Step 1: Inferences from research data and syntheses form the evidentiary basis for Practice Validity (PV).
  • Step 2: PV + Clinical Evidence and Judgment form the evidentiary basis for the Relevance and Utility (RU) of the practice.
  • Step 3: PV + inferences from research form the consequential basis that informs the clinician of the Value Implications (VI) of a practice.
  • Step 4: PV + RU + VI + Social Consequences form the consequential basis for clinical evidence regarding practice use.

The bottom line: clinical evidence for using a practice is the sum of practice validity, judgments of relevance and utility, the value implications drawn from research inferences, and evidence for the personal and social consequences of the practice.
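
Read as a data flow, the four steps compose cumulatively: each later judgment takes the earlier components as inputs.  Here is a minimal sketch of that dependency structure (my own schematic, with hypothetical type and function names, not Messick's notation):

from dataclasses import dataclass
from typing import List


@dataclass
class EvidenceBundle:
    """A named collection of evidence statements feeding one cell of the matrix."""
    label: str
    items: List[str]


def practice_validity(research: EvidenceBundle) -> EvidenceBundle:
    """Step 1: research data and syntheses form the basis for Practice Validity (PV)."""
    return EvidenceBundle("PV", research.items)


def relevance_utility(pv: EvidenceBundle, clinical: EvidenceBundle) -> EvidenceBundle:
    """Step 2: PV plus clinical evidence and judgment yields Relevance and Utility (RU)."""
    return EvidenceBundle("RU", pv.items + clinical.items)


def value_implications(pv: EvidenceBundle, research: EvidenceBundle) -> EvidenceBundle:
    """Step 3: PV plus research inferences informs the Value Implications (VI) of a practice."""
    return EvidenceBundle("VI", pv.items + research.items)


def clinical_use(pv: EvidenceBundle, ru: EvidenceBundle, vi: EvidenceBundle,
                 social: EvidenceBundle) -> EvidenceBundle:
    """Step 4: PV + RU + VI + social consequences ground the decision to use a practice."""
    return EvidenceBundle("Use", pv.items + ru.items + vi.items + social.items)


# Hypothetical usage showing how the pieces accumulate.
research = EvidenceBundle("research", ["systematic review", "RCT synthesis"])
clinical = EvidenceBundle("clinical", ["local outcome audit", "practitioner judgment"])
social = EvidenceBundle("social", ["equity and access considerations"])

pv = practice_validity(research)
decision = clinical_use(pv, relevance_utility(pv, clinical),
                        value_implications(pv, research), social)
print(decision.label, decision.items)

The only point of the sketch is the cumulative dependency: nothing downstream is evaluated without the practice-validity evidence feeding it.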

Discussion always welcome!

[Figure: Messick's progressive matrix adapted for Evidence-Based Practice]