Practice-based Evidence

I reviewed two papers that take this perspective; if you want more evidence-based practice, you'll need more practice-based evidence.  The core idea is this: when evaluating the validity of a specific research study, internal validity is primary, but when you're arguing from a specific study to evidence-based practice, external validity becomes most important.

First, Green and Glasgow (2006) take up this validity framework, pointing out that most research concentrates on internal validity and pretty much ignores external validity.  It's true that you can't have external validity without internal validity, but this does not make external validity any less important.  They work from Cronbach, Gleser, Nanda, and Rajaratnam's (1972) generalizability theory and relate four facets of generalizability to translation frameworks.  The four facets are identified as "different facets across which (evidence-based) program effects could be evaluated. They termed these facets units (e.g., individual patients, moderator variables, subpopulations), treatments (variations in treatment delivery or modality), occasions (e.g., patterns of maintenance or relapse over time in response to treatments), and settings (e.g., medical clinics, worksites, schools in which programs are evaluated)".
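As a rough illustration (my own sketch, not drawn from Green and Glasgow's paper), generalizability theory partitions the variance of an observed effect across facets. For a hypothetical design crossing units (u) with settings (s) and occasions (o), the decomposition looks like:

\sigma^2(X_{uso}) = \sigma^2_u + \sigma^2_s + \sigma^2_o + \sigma^2_{us} + \sigma^2_{uo} + \sigma^2_{so} + \sigma^2_{uso,e}

Each component estimates how much an observed effect varies across that facet; sizeable setting or occasion components are a signal that a result established under one set of conditions may not generalize, which is exactly the external validity worry.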

Westfall, Mold, & Fagnan (2007) point out some of the specific problems in generalizing to valid practice:

The magnitude and nature of the work required to translate findings from human medical research into valid and effective clinical practice, as depicted in the current NIH research pipeline diagrams have been underestimated. . . . (problems) include the limited external validity of randomized controlled trials, the diverse nature of ambulatory primary care practice, the difference between efficacy and effectiveness, the paucity of successful collaborative efforts between academic researchers and community physicians and patients, and the failure of the academic research enterprise to address needs identified by the community (p. 403).  Practice-based research and practice-based research networks (PBRNs) may help because they can (1) identify the problems that arise in daily practice that create the gap between recommended care and actual care; (2) demonstrate whether treatments with proven efficacy are truly effective and sustainable when provided in the real-world setting of ambulatory care; and (3) provide the "laboratory" for testing system improvements in primary care to maximize the number of patients who benefit from medical discovery.

They recommend adding another step to the NIH’s roadmap to evidence-based practice shown in this graphic:

[Figure: A recommended addition to the NIH translation roadmap]

References

Green, L.W., & Glasgow, R.E. (2006). Evaluating the Relevance, Generalization, and Applicability of Research: Issues in External Validation and Translation Methodology. Evaluation & the Health Professions, 29(1), 126-153.

Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-Based Research—"Blue Highways" on the NIH Roadmap. JAMA, 297(4), 403-405.

Analyzing the Research-Practice Gap Via Inductive and Deductive Logics

The following is an attempt to further refine my views on evidence-based practices.

If evidence-based practice is going to gain traction as a movement, a better way is needed, not necessarily to translate research into practice, but rather to gather evidence in support of practice and practice development.  I will suggest that the evidence-based movement is interested in making inferences about the validity of practices given an overview of relevant evidence.  In general, determining the validity of practices means gathering evidence through inductive processes.  This contrasts with much research, which is conducted according to the requirements of deductive analyses.  Constructing evidence-based practices may depend on research acquired through deductive processes, but culling evidence for the validity of a practice requires inductive processes.

It has been recognized that there are differences between practice and research in how they categorize information and approach problems.  Hodgkinson and Rousseau (2009) note that: “the research–practice gap is due to more than language and style differences between science and practice. The categories scientists and practitioners use to describe the things they focus on are very different”.  Cohen (2007) also considers differences in the way problems are approached.  “There is a fundamental difference between how academics approach the analysis of a problem and how practitioners focus on a problem” (p. 1017).

Consider that researchers often construct information in terms of independent and dependent variables whose relationships are investigated through rigorous forms of deductive logic.  There are very good reasons for researchers to do this.  Deduction does not equal truth from a philosophical perspective, but as Goodman (1978) puts it: "Among the most explicit and clearcut standards of rightness we have anywhere are those for validity of a deductive argument".  The problem is that applying deductive logic to practices is difficult at best and often impossible, given the complexity and scope of most practices.  Even when research is conducted through inductive methodologies, the rigorous treatment of variables and categorization schemes does not often approach the scope and complexity of processes we see in practice.  Consider the example of practice provided by Cohen regarding the evidence supporting the use of testing in hiring and talent management practices.

Human resource managers must "select individuals who will fit the organization as well as have the technical capability to do the job. A person who is smart and who gets 'results' may be a person who discriminates, bullies, or causes turnover in the organization. . . . Intelligence and personality are only partially predictive of success in candidates for management positions. Factors such as accurate job descriptions, effective organizational structure, sound compensation philosophies and reward structures must all be considered in both attracting and selecting employees."

Human resource managers cannot restrict their practices to the rigorous analysis of a few simple variables.  Often they must gather a wide scope of information and analyze it in terms of a goodness of fit between the individual and a position.

Practitioners, then, must use methods that are appropriate for practices that often involve a constellation of variables and numerous related theories.  The methodological requirements and standards of deductive experimental research often cannot be adapted to such complex circumstances.  There are simply too many variables that are often too loosely defined.  When looking at a specific practice, we are most interested not in determining whether the practice is right or wrong, but whether it is valid.  Validity is an inference, a conclusion based on evidence and reasoning.  I am speaking in generalities.  It may be possible to devise an experiment that provides a deductive inference about a practice, but in general, most conclusions about validity are made by inductively compiling information that supports a goodness of fit between the practice and the evidence.

Subsuming and combining multiple arguments, research findings, and theoretical frameworks is a job for inductive processes, even though inductive logic may be seen as a philosophical step removed from the truth.  In addition, language imposes yet another layer of complexity.  Scientists have rather standardized ways of talking about research methodology, theories, and variables, but practitioners must deal with the much more pluralistic ways in which practice is framed and the language used to frame it.  Again, the nature of the inductive process needed to guide practice is one whose truth value is determined not by its correctness, but by its overall goodness of fit (Goodman, 1978).

In my next post I’ll look at what type of information is useful in inferences about the validity of practice.

References

Cohen, D.J. (2007). The Very Separate Worlds of Academic and Practitioner Publications in Human Resource Management: Reasons for the Divide and Concrete Solutions for Bridging the Gap. Academy of Management Journal, 50(5), 1013-1019.

Goodman, N. (1978). Ways of Worldmaking. Indianapolis, IN: Hackett Publishing Company.

Hodgkinson, G.P., & Rousseau, D.M. (2009). Bridging the Rigour-Relevance Gap in Management Research: It's Already Happening! Journal of Management Studies, 46(3), 534-546.

Evidence-based Practice: Is It Actionable Information or Judgement Support?

A recent article can be read as another interesting take on the relationship between science and practice.  The reference is:

Stake, R.E. (2009). The incredible lightness of evidence: Problems of synthesis in educational evaluation. Studies in Educational Evaluation, 35, 3-6.

In the author's words:

Educational measurement and evaluation is . . . a problematic field. Standardized testing is magnificent in its conceptual structure, and it is "out of control" in educational practice. . . . there is widespread advocacy for evidence-based decision-making . . . (and) One quickly understands that many of its advocates are speaking of evidence in the form of objective, science-driven, decontextualized, action determining knowledge, more than as material for user deliberation.

Evidence is an important concept also in establishing a rationale or potential for action. Here there is no single criterion but multiple criteria . . . policy should be based on many factors, on evidence of many kinds. . . . Evidence is an attribute of information, but it is also an attribute of persuasion. It contributes to understanding and conviction. . . . Evidence should be subordinate to judgment, crafted to user conviction. (emphasis added)