Practice-based Evidence

I reviewed two papers that take this perspective: if you want more evidence-based practice, you'll need more practice-based evidence. The core idea is that when evaluating the validity of a specific research study, internal validity is primary, but when you are arguing from a specific study toward evidence-based practice, external validity becomes most important.

First, Green and Glasgow (2006) take up this validity framework, pointing out that most research concentrates on internal validity and largely ignores external validity. It's true that you can't have external validity without internal validity, but this does not make external validity any less important. They work from Cronbach, Gleser, Nanda, and Rajaratnam's (1972) generalizability theory and relate four facets of generalizability to translation frameworks. The four facets are identified as "different facets across which (evidence-based) program effects could be evaluated. They termed these facets units (e.g., individual patients, moderator variables, subpopulations), treatments (variations in treatment delivery or modality), occasions (e.g., patterns of maintenance or relapse over time in response to treatments), and settings (e.g., medical clinics, worksites, schools in which programs are evaluated)".
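To make the facet idea concrete, here is a minimal sketch of a one-facet generalizability (G) study in Python: patients (units) crossed with settings, with variance components estimated from the usual mean-square equations. The design, scores, and numbers are invented for illustration, not taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores: 20 patients (units) crossed with 4 settings,
# one observation per cell. Rows = patients, columns = settings.
n_p, n_s = 20, 4
patient_fx = rng.normal(0, 3, size=(n_p, 1))   # true patient differences
setting_fx = rng.normal(0, 1, size=(1, n_s))   # systematic setting differences
scores = 50 + patient_fx + setting_fx + rng.normal(0, 2, size=(n_p, n_s))

grand = scores.mean()
p_means = scores.mean(axis=1, keepdims=True)
s_means = scores.mean(axis=0, keepdims=True)

# Mean squares for a two-way crossed design without replication.
ms_p = n_s * ((p_means - grand) ** 2).sum() / (n_p - 1)
ms_s = n_p * ((s_means - grand) ** 2).sum() / (n_s - 1)
ms_e = ((scores - p_means - s_means + grand) ** 2).sum() / ((n_p - 1) * (n_s - 1))

# Variance components from the expected mean squares.
var_e = ms_e
var_p = max((ms_p - ms_e) / n_s, 0.0)  # units (patients)
var_s = max((ms_s - ms_e) / n_p, 0.0)  # settings facet

# Generalizability coefficient: how well scores generalize across settings.
g_coef = var_p / (var_p + var_e / n_s)
print(f"patients: {var_p:.2f}, settings: {var_s:.2f}, residual: {var_e:.2f}")
print(f"G coefficient across {n_s} settings: {g_coef:.2f}")
```

A small settings component relative to the patient component would suggest effects that travel across settings; Green and Glasgow's complaint is that trial reports rarely give readers the data needed to make that judgment.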

Westfall, Mold, & Fagnan (2007) point out some of the specific problems in generalizing to valid practice:

The magnitude and nature of the work required to translate findings from human medical research into valid and effective clinical practice, as depicted in the current NIH research pipeline diagrams, have been underestimated. . . . (problems) include the limited external validity of randomized controlled trials, the diverse nature of ambulatory primary care practice, the difference between efficacy and effectiveness, the paucity of successful collaborative efforts between academic researchers and community physicians and patients, and the failure of the academic research enterprise to address needs identified by the community (p. 403). Practice-based research and practice-based research networks (PBRNs) may help because they can (1) identify the problems that arise in daily practice that create the gap between recommended care and actual care; (2) demonstrate whether treatments with proven efficacy are truly effective and sustainable when provided in the real-world setting of ambulatory care; and (3) provide the "laboratory" for testing system improvements in primary care to maximize the number of patients who benefit from medical discovery.

They recommend adding another step to the NIH’s roadmap to evidence-based practice shown in this graphic:

[Figure: A recommended addition to the NIH translation roadmap]

References

Green, L. W., & Glasgow, R. E. (2006). Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation & the Health Professions, 29(1), 126-153.

Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-based research: "Blue Highways" on the NIH Roadmap. JAMA, 297(4).

Writing to Tame the Chaos

I recently found great writing and revising prompts and suggestions in the Tomorrow's Professor Blog article by Gina Hiatt, Ph.D.: "851. Reducing Over-Complexity in Your Scholarly Writing."

The first one struck me as an illustration of distributed cognition: how we use external aids to add structure and extend our thinking.

Write to find out what you think. Your thoughts will be somewhat muddled until you get them in writing. Don’t go around and around in circles internally until you know what to write. Write before you know what you’re going to say.

Learn to tolerate some degree of confusion, and yes, complexity in your early writing. I’ve noticed that many academics get panicky when their first draft is a mess. It’s supposed to be a mess! Have faith in the revision process.

I really do need to get something down "on the page" before I understand the implications of what I'm thinking. Writing compensates for the limitations of short-term and working memory, but it's more than that too! It's also the back-and-forth, give-and-take revising process: I'm revising my thoughts and ideas while I get the first words down. This is one of the main reasons I blog, to work out ideas on the page and over time.

Good thinking, writing, and communicating should go hand in hand. I also think there are no principled breaks in the chiastic relationships between thought, writing, and communicating. There is a common assumption that academic writing is for scholars and not for the rest of us. In one sense, academics are writing for other academics, and their writing serves that instrumental purpose, but good writing should also serve a broader purpose. It should be able to communicate, and to inspire good thinking, for non-academics.

Why are non-academics not exposed to good thinking, and why are academics not writing in ways that better influence practitioners?

  • I think there are low expectations for non-academics. To anyone familiar with the literature on teacher expectations, this should send up red flags! It is easy to downplay potential via expectations.
  • I think academics get too caught up in a need for complexity that serves only disciplinary vocabulary and categorization schemes, not the underlying thinking. Look at the current economy/finance mess and those complex derivatives. It's looking more and more like fairly simple fraud that people perpetrated on themselves and on the rest of us by using complex vocabulary and mathematical formulas to cover up what was basically simple. Thought can be complex, but there is a lot of complex writing that shouldn't be.

I shouldn't be ranting, not with MY dissertation; I'm just working to try to get better.

Collective Intelligence and Distributed Decision Making

There has been lots of information recently on the topic of collective intelligence and distributed decision making (Web 2.0, Decision 2.0, Project Management 2.0, etc.).

George Siemens's blog looks at the report Clickstream Data Yields High-Resolution Maps of Science. He notes: "The data is currently provided as images. It would be useful to navigate the resulting 'map of science' in an interactive application." When this comes to pass, it would really represent a powerful Web 2.0 app and a data tool for ideas.
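The maps in that report are built from journal-to-journal clickstream transitions. As a toy illustration of the kind of data an interactive version would sit on, here is a minimal sketch that loads hypothetical transition counts into a directed graph and ranks journals by PageRank; the journal names, the counts, and the choice of PageRank are my invented stand-ins, not the report's actual method.

```python
import networkx as nx

# Hypothetical clickstream transitions: (from_journal, to_journal, click_count).
transitions = [
    ("Nature", "Science", 120),
    ("Science", "Cell", 80),
    ("Cell", "Nature", 60),
    ("JAMA", "NEJM", 150),
    ("NEJM", "JAMA", 140),
    ("Science", "JAMA", 20),
]

G = nx.DiGraph()
G.add_weighted_edges_from(transitions)

# Rank journals by their centrality in the clickstream network.
rank = nx.pagerank(G, weight="weight")
for journal, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {score:.3f}")
```

An interactive front end would essentially let you pan, zoom, and query a graph like this at the scale of the full dataset.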

The Many Worlds Blog discusses Eric Bonabeau's Sloan Management Review article on Decision Making 2.0 (1-9-2009). They note two concluding suggestions by Bonabeau:

“First, collective intelligence tends to be most effective in correcting individual biases in the overall task area of (idea) generation” and

‘Second, because most applications lack a strong feedback loop between generation and evaluation, “companies should consider deploying such feedback loops with greater frequency because the iterative process taps more fully into the power of a collective.”’

This could really be realized if there were two more developments along the lines of the clickstream data:

  1. If disciplinary research agendas became more self-organized by being more connected to the collective 2.0 world; research is exactly the kind of feedback loop between generation and evaluation that Bonabeau is calling for (a toy sketch follows this list), and
  2. If we had a better aggregator for research results. There is too much research and knowledge being generated to put it all to use, at least in a way that taps into collective intelligence. This would make the leap from idea generation 2.0 to evaluation 2.0.
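Here is that toy sketch of a generation-evaluation feedback loop: agents with individual biases generate proposals, an evaluation step filters them against a noisy external signal, and the surviving average is fed back as the anchor for the next round. Everything here (the agents, the signal, the numbers) is invented purely to illustrate Bonabeau's point that iteration taps the collective more fully.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0  # the quantity the collective is trying to estimate

# Each agent carries an individual bias, the kind Bonabeau says
# collective intelligence is good at correcting.
agents = [{"bias": random.gauss(0, 20)} for _ in range(50)]

def generate(agent, anchor):
    # Generation: a biased personal view, pulled partway toward the
    # collective's previous answer once one exists.
    own_view = TRUE_VALUE + agent["bias"] + random.gauss(0, 5)
    return own_view if anchor is None else 0.5 * own_view + 0.5 * anchor

def evaluate(proposals):
    # Evaluation: score proposals against a noisy external signal
    # and keep the better half.
    scored = sorted(proposals, key=lambda p: abs(p - TRUE_VALUE) + random.gauss(0, 10))
    return scored[: len(scored) // 2]

anchor = None
for round_num in range(5):
    proposals = [generate(a, anchor) for a in agents]
    anchor = statistics.mean(evaluate(proposals))  # feed evaluation back into generation
    print(f"round {round_num}: collective estimate = {anchor:.1f}")
```

Run it and the estimate should settle closer to the target over the rounds than an unfiltered first-round average; that convergence is the payoff of closing the loop.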

Finally, Andrew Filev in the Project Management 2.0 blog (referencing Seth Godin) says that collective intelligence still needs leadership (as in a leader of the project tribe). It seems like this re-introduces bias into the system, but maybe some bias can be productive for getting things done. I'm not sure.

The Research-Practice Gap: Why Is Evidence-based Practice So Hard to Achieve?

There have been some recent articles in the social science literature (nursing, education, management, HR, etc.) about evidence-based practice (EBP) and the research-practice gap that exists in many fields. Why is EBP so difficult to achieve, and why do so many solution articles leave me so underwhelmed? I will offer a reason for the difficulties that I have not yet heard made in a convincing manner.

Problem: Using research across different practices is basically the same problem as the transfer of learning or knowledge across contexts.

Reason for the problem: it takes work. Knowledge is closely tied to the context of its production. There may be theories and prior research that are applicable to a specific practice, but it takes work to contextualize that knowledge, see its applicability to specific contexts, and change the resulting practice. What is that work?

  • Establishing a broad practitioner knowledge-base in order to know that the applicable theories and knowledge exist.
  • Knowing how the existing problem or practice can be reframed or re-understood in the light of this new knowledge. It's not just using knowledge in a new context; it is re-producing that knowledge, or sometimes producing knowledge that is unique to that context.
  • Making changes and dealing with side problems common in change management.
  • Developing a feedback methodology for evaluating and adjusting practice changes.

Solution: we need practitioners with better skills and better tools:

  • A larger knowledge base and a better network (or community of practice) that allows practitioners to tap into the cognition distributed across practitioner networks. In some ways practitioners, because they need to be generalists, need a larger knowledge base than do researchers, who can restrict themselves to specialty areas.
  • Skills in problem framing: re-contextualizing knowledge, generating and testing hypotheses, and setting up experimental and other feedback methodologies.
  • Skills in communication and change management.  Understanding what to do is one thing, understanding how to get it done is another thing entirely.

Better tools. Many articles speak as if broad consensus on what practitioners should do already exists. That does not match the paradigmatically divided world of science that I know. I think there is hard work yet to be done in writing practice standards and guidelines for best practices in most areas. Standards are important, however, because they will form the basis for practitioners to create measurement tools that show how well their practices conform, building a deep understanding of their practice. A measurement tool will also provide a compliance pathway for changing practice.
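As a sketch of the simplest such measurement tool, here is a hypothetical compliance tally: audit records are scored against named practice standards, and standards falling below a target rate are flagged as candidates for the change pathway. The record format, the standard names, and the 80% target are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical audit records: each visit is checked against a named practice standard.
audit_records = [
    {"standard": "blood-pressure-recorded", "met": True},
    {"standard": "blood-pressure-recorded", "met": False},
    {"standard": "medication-reconciled", "met": True},
    {"standard": "medication-reconciled", "met": True},
    {"standard": "follow-up-scheduled", "met": False},
]

def compliance_by_standard(records):
    """Return the fraction of audited visits meeting each standard."""
    totals, met = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["standard"]] += 1
        met[record["standard"]] += record["met"]
    return {s: met[s] / totals[s] for s in totals}

TARGET = 0.80  # invented compliance target
for standard, rate in sorted(compliance_by_standard(audit_records).items()):
    flag = "  <-- below target, candidate for practice change" if rate < TARGET else ""
    print(f"{standard}: {rate:.0%}{flag}")
```

The point is not the code but the loop it enables: a standard, a measure of conformance, and a visible gap that tells the practitioner where to direct the change work.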