Future EBMgmt Research Ideas

  1. I need to think more about an evidence evaluation framework and how Rousseau’s model might be enhanced by Messick’s model of validity, as discussed in my last post.
  2. Just as Messick argued that validity is about test use, not tests in themselves, evidence-based practice is about how evidence is used in practice, not about the evidence itself.  This needs to be spelled out.
  3. The practice-research gap – Research validity generally becomes greater the more context can be controlled and parsed out of studies, yet in evidence-based practice evidence must be related to context to be valid.  The more confident you are in research results, the less confident you can be that they will relate to the constellation of factors found in real contexts.  I don’t see how to get beyond this without applied research that puts research syntheses to the test.
  4. Practice is most often cross- or interdisciplinary.  This compounds the last point, but it also means that each practice draws on many potential disciplines.  Accumulating the vast amounts of evidence by hand will be next to impossible in any practical manner.  We need a technological answer, some sort of Web 3.0 or metadata solution, along with a technological way to compile the data (a rough sketch of what such tagging might look like follows this list).
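
To make that last point a little more concrete, here is a rough sketch of how a metadata approach might work: each research synthesis would carry tags for the disciplines and practice contexts it speaks to, and a practitioner could filter the pool against their own situation.  This is purely illustrative; the class, field names, and sample entries below are assumptions of mine, not anything proposed in the literature.

```python
# Hypothetical sketch: tagging research syntheses with context metadata
# so a practitioner can pull only the evidence relevant to their setting.
from dataclasses import dataclass, field


@dataclass
class EvidenceSummary:
    """One research synthesis, tagged with the contexts it speaks to."""
    title: str
    disciplines: set[str] = field(default_factory=set)  # e.g. {"psychology"}
    contexts: set[str] = field(default_factory=set)     # e.g. {"small-team"}
    effect_size: float = 0.0                             # summary effect reported


def relevant_evidence(library, practice_contexts, practice_disciplines):
    """Return syntheses whose metadata overlaps the practitioner's situation."""
    return [
        e for e in library
        if e.contexts & practice_contexts and e.disciplines & practice_disciplines
    ]


# Usage: a practitioner in a small nonprofit drawing on two disciplines.
library = [
    EvidenceSummary("Goal setting meta-analysis", {"psychology"},
                    {"small-team", "corporate"}, 0.45),
    EvidenceSummary("Incentive pay review", {"economics"}, {"corporate"}, 0.20),
]
print(relevant_evidence(library, {"small-team", "nonprofit"},
                        {"psychology", "management"}))
```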

Considering the Validity of Evidence

In my last post I looked at an evidence-based framework that included evidence evaluation.  Denise Rousseau from Carnegie Mellon has extended the ability to evaluate evidence with a new model (full paper here) that includes these evaluation categories: Construct Validity, Internal Validity, Effect Size, Generalizability, Intervention Compliance, and Contextualization.  These categories correspond closely to the six categories of validity proposed by Messick (previously discussed here).

Rousseau Categories          Messick Categories
Construct Validity           Structural
Internal Validity            External (not a perfect match, but logically similar)
Effect Size                  External
Generalizability             Generalizability
Intervention Compliance      Substantive
Contextualization            Content

This is not an exact match category by category, but the way evidence is categorized for evaluation is very similar in approach and purpose.  What Rousseau leaves out is consequential validity, and she does not address content and substantive validity in full.

A Framework for Integrating Evidence and Practice

Why do we need to consider evidence-based methodologies for our practices?  Because, as Jeffrey Pfeffer recently stated, belief often trumps evidence, and bias and false beliefs abound.  But implementing these methods is often not linear, rational, or easy.  Joanne Rycroft-Malone et al. (working in the medical field) have developed a model suited to this level of complexity.

The model is divided between evidence concerns (subdivided into research, clinical, and patient [or customer] concerns) and contextual concerns (subdivided into context, culture, leadership, and evaluation concerns).  See Figure 1.

Some of the lessons learned include:

    • Getting evidence into practice is not . . . a linear and logical process.
    • (This) framework attempts to represent the complexity of the processes involved in implementation . . ..
    • The nature of the evidence, the quality of the context, and the type of facilitation all impact simultaneously on whether implementation is successful.
    • Implementation is more likely to be successful when:
        • Evidence (research, clinical experience, and patient experience) is well conceived, designed, and executed and there is consensus about it.
        • The context in which the evidence is being implemented is characterised by clarity of roles, decentralised decision making, transformational leadership, and a reliance on multiple sources of information on performance.
        • Facilitation mechanisms appropriate to the needs of the situation have been instigated.

The intended purpose of this framework is to give practitioners a tool to plan, implement, and track their own strategies for change.  The article also notes that research methods must match the research question being asked; randomized controlled trial (RCT) methods are not always the best way to frame research.