Why Interpretation is the Cornerstone of Evidence-Based, Data-Driven Practice

This post responds to a comment by Richard Puyt; it is my attempt to explain my ideas on interpretation and evidence more completely.

First, a first-order belief of mine: data-driven practices, supported and validated by research evidence, are the best way to establish or improve business practices.  Common sense is not a good way to run your business, because it is often wrong.  However, you also need a good theory or mental framework to make sense of your data, and you need a broad evaluation framework to understand and explain how your research relates to your practice.  Without good frameworks, your level of analysis falls back to common sense, no matter how much data you have available.  It simply becomes a case of garbage in, garbage out.

This is the point Stanley Fish makes in the NY Times Opinionator blog when he writes:

. . . there is no such thing as “common observation” or simply reporting the facts. To be sure, there is observation and observation can indeed serve to support or challenge hypotheses. But the act of observing can itself only take place within hypotheses (about the way the world is) . . . because it is within (the hypothesis) that observation and reasoning occur.  (I blogged about this before here)

Your observations, be they data, measures, or research results, need to be interpreted, and that interpretation can only occur within an interpretive framework such as a good theory or hypothesis.  Furthermore, the quality of your analysis will depend as much on the quality of your interpretive framework as it does on the quality of your data.

Examples

Performance Measurement:  (I previously blogged about this here.)  Any performance measure implies a theoretical rationale that links performance with the measure.  This theoretical relationship can be tested, validated, and improved over time.  The point is not just that you are using a data-driven performance system, but that you also have a well-supported way of interpreting the data.
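To make the idea concrete, here is a minimal sketch of what testing that linkage might look like, assuming a hypothetical case where first-contact resolution rate is the performance measure and customer retention is the outcome it is supposed to track (all names and numbers are invented for illustration):

```python
# Illustrative sketch: checking whether a performance measure actually tracks
# the outcome it is supposed to stand in for. All data and names are hypothetical.

def pearson_r(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical monthly data: the measure (first-contact resolution rate)
# and the outcome it is assumed to predict (customer retention rate).
resolution_rate = [0.71, 0.74, 0.69, 0.80, 0.78, 0.83, 0.77, 0.85]
retention_rate  = [0.88, 0.90, 0.86, 0.93, 0.91, 0.95, 0.90, 0.96]

r = pearson_r(resolution_rate, retention_rate)
print(f"measure-outcome correlation: {r:.2f}")
# A weak or shrinking correlation over successive periods would be a signal
# that the theoretical rationale behind the measure needs to be revisited.
```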

Research Evidence: When conducting a quantitative study, care is taken in choosing a sample and in controlling for a wide range of potential confounding variables.  The resulting effects may show a causal relationship that can be trusted.  However, you cannot then assume that these results apply directly to your business, where all the confounding variables are back in play and where the sample and context may be different.  The study may be a very important piece of evidence, but it should only be a piece in a larger body of evidence.  This evidence can be the basis for a theory (what Fish calls a hypothesis) and used as the basis for a practice that is data-driven (what Fish calls observation), but this practice needs to be tested and validated on its own merits, not accepted simply because it relates to a research study.
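As one way to picture what testing a practice on its own merits could look like, here is a minimal sketch of a local check, assuming a hypothetical pilot in which some teams adopt the research-derived practice and others do not; it uses a simple permutation test on invented data:

```python
import random

# Hypothetical outcome data (e.g., monthly sales per rep) from a local pilot:
# teams that adopted the research-derived practice vs. teams that did not.
adopted     = [102, 98, 110, 107, 113, 99, 105, 111]
not_adopted = [ 97, 95, 101, 104,  96, 100, 98,  99]

def mean(xs):
    return sum(xs) / len(xs)

observed_diff = mean(adopted) - mean(not_adopted)

# Permutation test: how often would a difference this large arise if the
# practice made no difference and the group labels were arbitrary?
random.seed(42)
pooled = adopted + not_adopted
n_adopted = len(adopted)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_adopted]) - mean(pooled[n_adopted:])
    if diff >= observed_diff:
        count += 1

print(f"observed difference: {observed_diff:.1f}")
print(f"one-sided permutation p-value: {count / trials:.3f}")
# A small p-value here is local evidence that the practice works in *your*
# context, which is the kind of validation argued for above.
```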

This is the basis of evidence-based, data-driven practice: good data, derived from good measures, combined with a good understanding of how the measures relate to your practice, an understanding that is tested over time.  This is not too hard to do, and it should be a foundation of business education.

Communicating Evidence Across Different Cognitive Frameworks: Overcoming Incommensurability Between Research and Practice

My recent posts have highlighted the differences between scientific research and other types of practice as they relate to the design of evidence-based practice.  Previously I discussed how the larger scope of practice changes the epistemological needs of practice knowledge.*  In this post I take up Nicolay Worren’s paper, which suggests that the cognitive frameworks managers use to guide and control business practice are also different from those used to disseminate scientific research.  (Worren, N., Moore, K. & Elliott, R. (2002). When Theories Become Tools: Towards a Framework for Pragmatic Validity. Human Relations, 55(10), 1227-1250.)

Nicolay notes that science is typically conducted and disseminated in propositional frameworks (often steeped in dense scientific vocabulary), while managers depend more on narrative and visual cognitive and communication frameworks constructed in everyday language.  This can result in two problems:

1. People do not really understand the generalizable meaning of research because it is buried within obscure propositional frameworks.  Good communication must be able to span different cognitive and situational frameworks.  Consider the following quote from R.L. Ackoff**:

Until we communicate to our potential users in a language they can understand, they and we will not understand what we are talking about. If Einstein could do it with relativity theory, we should be able to do it with systems thinking (Einstein and Infeld, 1951). It is easy to hide the ambiguity and vagueness in our own thinking behind jargon, but almost impossible to do so when speaking or writing in ordinary language.
We have developed a vocabulary that equips our students with the ability to speak with authority about subjects they do not understand. Little wonder they do not become effective spokespersons to potential users.
Ackoff, R.L. (2006). Why Few Organizations Adopt Systems Thinking, http://ackoffcenter.blogs.com/ackoff_center_weblog/files/Why_few_aopt_ST.pdf

2. Managers must deal with practices that have a wide scope and a high level of complexity.  Just as this creates different epistemological requirements for knowledge, it also entails different cognitive requirements for understanding and communicating.  While propositional frameworks are good for maintaining precision in deductive arguments, they lack the speed of communication and the ability to convey complexity, change over time, or emotion that can be found in narrative and visual frameworks.

Different cognitive frameworks can make the languages of research and practice not only difficult to translate but almost incommensurable.  Again, I do not think that research and practice are incommensurable, but you will need to engage in inductive processes to bring them together appropriately.

Notes

* Primarily this refers to the inability to design practices scientifically, that is, with the amount of variable control necessary to ensure the same level of internal validity we see in research.  Without this level of control, generalizing research to uncontrolled situations is questionable.  This does not mean that research is not relevant.  It means that decisions depend on inductive processes (as opposed to the deductive processes common in research) and that these processes are aligned with a goodness-of-fit model of verification (as opposed to deductive truth).

** I’m also indebted to Nicolay for pointing me to Ackoff and this source.  See his blog post:

http://www.nicolayworren.com/2009/11/russell-ackoff-1919-2009.html

Practice-based Evidence

I reviewed two papers that take the perspective that if you want more evidence-based practice, you will need more practice-based evidence.  The core idea: when evaluating the validity of a specific research study, internal validity is primary, but when you are arguing from a specific study to evidence-based practice, external validity becomes most important.

First, Green and Glasgow adopt this validity perspective, pointing out that most research concentrates on internal validity and largely ignores external validity.  It is true that you cannot have external validity without internal validity, but this does not make external validity any less important.  They work from Cronbach, Gleser, Nanda, & Rajaratnam’s (1972) generalizability theory and relate four facets of generalizability to translation frameworks.  The four facets are identified as “different facets across which (evidence-based) program effects could be evaluated. They termed these facets units (e.g., individual patients, moderator variables, subpopulations), treatments (variations in treatment delivery or modality), occasions (e.g., patterns of maintenance or relapse over time in response to treatments), and settings (e.g., medical clinics, worksites, schools in which programs are evaluated)”.
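To illustrate, here is a rough sketch of how effect estimates might be organized and examined across those four facets; the facet labels follow Green and Glasgow (after Cronbach et al.), but the data, effect sizes, and helper code are invented for illustration:

```python
from collections import defaultdict

# Hypothetical effect estimates for the same program, each tagged with the
# four generalizability facets: units, treatments, occasions, settings.
estimates = [
    {"units": "adults 18-40", "treatments": "group sessions", "occasions": "3 months",  "settings": "clinic",   "effect": 0.42},
    {"units": "adults 18-40", "treatments": "group sessions", "occasions": "12 months", "settings": "clinic",   "effect": 0.31},
    {"units": "adults 60+",   "treatments": "group sessions", "occasions": "3 months",  "settings": "clinic",   "effect": 0.18},
    {"units": "adults 18-40", "treatments": "phone coaching", "occasions": "3 months",  "settings": "worksite", "effect": 0.25},
    {"units": "adults 18-40", "treatments": "group sessions", "occasions": "3 months",  "settings": "school",   "effect": 0.39},
]

def spread_by(facet):
    """Group effects by one facet and report the range of group means, a crude
    signal of how much the effect depends on that facet (how far it generalizes)."""
    groups = defaultdict(list)
    for e in estimates:
        groups[e[facet]].append(e["effect"])
    means = {level: sum(v) / len(v) for level, v in groups.items()}
    return max(means.values()) - min(means.values()), means

for facet in ("units", "treatments", "occasions", "settings"):
    rng, means = spread_by(facet)
    print(f"{facet:10s} spread = {rng:.2f}  {means}")
# A large spread on a facet (here, units) suggests the effect does not travel
# well across that facet, which is exactly the external-validity question.
```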

Westfall, Mold, & Fagnan (2007) point out some of the specific problems in generalizing to valid practice:

The magnitude and nature of the work required to translate findings from human medical research into valid and effective clinical practice, as depicted in the current NIH research pipeline diagrams, have been underestimated.  . . . (problems) include the limited external validity of randomized controlled trials, the diverse nature of ambulatory primary care practice, the difference between efficacy and effectiveness, the paucity of successful collaborative efforts between academic researchers and community physicians and patients, and the failure of the academic research enterprise to address needs identified by the community (p. 403).   Practice-based research and practice-based research networks (PBRNs) may help because they can (1) identify the problems that arise in daily practice that create the gap between recommended care and actual care; (2) demonstrate whether treatments with proven efficacy are truly effective and sustainable when provided in the real-world setting of ambulatory care; and (3) provide the “laboratory” for testing system improvements in primary care to maximize the number of patients who benefit from medical discovery

They recommend adding another step to the NIH’s roadmap to evidence-based practice shown in this graphic:

[Figure: A recommended addition to the NIH translation roadmap]

References

Green, L.W. & Glasgow, R.E. (2006). Evaluating the Relevance, Generalization, and Applicability of Research: Issues in External Validation and Translation Methodology. Evaluation & the Health Professions, 29(1), 126-153.

Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-Based Research—“Blue Highways” on the NIH Roadmap. JAMA, January 24/31, 2007, 297(4).