More on the Research Practice Gap and Evidence-Based Practice

How Do People Approach Evidence-Based Practice?

Tracy at the Evidence Soup blog has a recent post that got me thinking that the processes supporting Evidence-Based Practice (EBP) must be centered on actual clinical practices (not some abstract formulation of practice) and that these processes should include both research and clinical expertise. Tracy reviews an article in the July issue of Clinical Child Psychology and Psychiatry (How do you apply the evidence? Are you an improver, an adapter, or a rejecter? by Nick Midgley). I hope to review the article myself soon, but my library does not yet have the July issue, so my take at this time depends on Tracy's description.

Here is my first take on the article:

Rejecters seem to be rejecting a positivist version of EBP when they discuss normative, prepackaged practices. That is defensible; there is no reason to follow in the positivists' footsteps.

Improvers seem to be focusing on a top-down "push" approach. First, while research in this vein is important, technology and networks are moving toward a pull approach: giving practitioners answers when they need them. Second, in addition to a top-down approach, there is also a need for a deep bottom-up understanding of practice: understanding practice needs and devising dissemination models that can meet those needs. Framing the issue as a "transfer problem" may have the question backwards.

Adapters – I like this approach for the most part, with two caveats. First, it looks like it falls into the qualitative/quantitative divide that I dislike. I believe you choose the methodology to fit the research question. Qualitative research is needed to gain a deep understanding of practices or to unearth value issues, but I have seen too many qualitative studies that tried to answer quantitative-type research questions (e.g., which intervention is better). Second, coming from a validity perspective, I believe that all kinds of data can be integrated to arrive at an inferential judgement on practice validity. Especially in medicine, I think we often have correlational research data without much theory or practice-based understanding. We need to understand practices from multiple perspectives that come together, like the pieces of a puzzle, to form a coherent picture.

Another Way to Approach the Research Practice Gap from a Post-Positivist Perspective

One of Samuel Messick's validity innovations was to connect construct validity with utility, values, and consequences in a progressive matrix. His original matrix can be found on page 27 of his 1995 American Psychologist article, available here. What I have done is adapt this matrix to what it might look like for Evidence-Based Practice. (The graphic is at the end of this post.) (I believe Messick's use of the term Test Use is analogous to Clinical Experience, which I have termed Clinical Evidence and Judgement. Tests exist as artifacts, and I believe that practice, although more concrete, can also be analyzed as an artifact in much the same way Messick analyzes tests.)

Messick presents this as a matrix, which is how I have used it as well, but it can also be viewed as a stepwise process:

  • Step 1. Inferences from research data and syntheses form the evidentiary basis for Practice Validity (PV).
  • Step 2. PV + Clinical Evidence and Judgement forms the evidentiary basis for the Relevance and Utility (RU) of the practice.
  • Step 3. PV + inferences from research form the consequential basis that informs the clinician of the Value Implications (VI) of a practice.
  • Step 4. PV + RU + VI + Social Consequences forms the consequential basis for clinical evidence regarding practice use.

The bottom line is that clinical evidence for using a practice is the total of practice validity, judgements of relevance and utility, the value implications drawn from research inferences, and evidence about the personal and social consequences of the practice.
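To make the additive, progressive structure of these steps concrete, here is a minimal Python sketch of how the evidence bases accumulate from one step to the next. The EvidenceBase class and its plus() helper are hypothetical conveniences I introduce purely for illustration; they do not come from Messick's or Midgley's work.

```python
# A minimal sketch (my own illustration, not from Messick or Midgley) of how
# the evidence bases accumulate across the four steps. EvidenceBase and plus()
# are hypothetical conveniences, not an established API.

from dataclasses import dataclass, field
from typing import List


@dataclass
class EvidenceBase:
    """A named bundle of evidence sources feeding a judgement."""
    label: str
    sources: List[str] = field(default_factory=list)

    def plus(self, other: "EvidenceBase", label: str) -> "EvidenceBase":
        # The matrix is progressive: each later cell inherits everything
        # that justified the earlier cells (duplicates are dropped).
        merged = list(dict.fromkeys(self.sources + other.sources))
        return EvidenceBase(label, merged)


# Step 1: research data and syntheses -> Practice Validity (PV)
pv = EvidenceBase("Practice Validity (PV)",
                  ["inferences from research data", "research syntheses"])

# Step 2: PV + clinical evidence and judgement -> Relevance and Utility (RU)
ru = pv.plus(EvidenceBase("Clinical", ["clinical evidence and judgement"]),
             "Relevance and Utility (RU)")

# Step 3: PV + research inferences -> Value Implications (VI)
vi = pv.plus(EvidenceBase("Research", ["value-laden research inferences"]),
             "Value Implications (VI)")

# Step 4: PV + RU + VI + social consequences -> the case for using the practice
case_for_use = ru.plus(vi, "PV + RU + VI").plus(
    EvidenceBase("Consequences", ["personal and social consequences"]),
    "Clinical evidence for practice use")

print(case_for_use.label)
for source in case_for_use.sources:
    print(" -", source)
```

Running the sketch prints the final accumulated bundle, which mirrors the bottom line above: every earlier evidentiary step is carried forward into the eventual judgement about practice use.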

Discussion always welcome!

[Figure: Messick's progressive matrix adapted for Evidence-Based Practice (8-7-post table.001)]