Ideas for Developing Expert Practitioners of Evidence-based Management

I've previously discussed ways to implement Evidence-based Management here.  Today I ask a related question: how do we prepare practitioners to become experts at using evidence?  The work of Carl Wieman points in a relevant direction, suggesting that knowledge of evidence is not sufficient to make us expert users of evidence.

Wieman begins with evidence that scientific coursework was not preparing students to be experts in scientific problem solving, that is, not until they were able to gain experience as assistants in his physics lab.  Introductory physics courses did not seem to be working as expected.

On average, students have more novice like beliefs after they have completed an introductory physics course than they had when they started; this was found for nearly every introductory course measured. More recently, my group started looking at beliefs about chemistry. If anything, the effect of taking an introductory college chemistry course is even worse than for taking physics.

Wieman describes novices as people who see only isolated pieces of information, learned by memorization and understood as disconnected from the world.

To the novice, scientific problem-solving is just matching the pattern of the problem to certain memorized recipes.

On the other hand, experts see coherent structures or frameworks of evidence-based concepts.  The way experts solve problems involves strategies that are systematic, concept-based, and applicable to new and different contexts.  Wieman points out that experts have substantial knowledge, but it only becomes important when it is used within expert conceptual structures.  From a teaching and assessment point of view, assessing only what experts know will leave you ignorant of the ways that experts use knowledge.  You must understand the frameworks within which knowledge is used.

Everything that constitutes “understanding” science and “thinking scientifically” resides in the long-term memory, which is developed via the construction and assembly of component proteins. So a person who does not go through this extended mental construction process simply cannot achieve mastery of a subject.

Now I generally follow constructivist ideas, but I don't believe that we should focus on a naive constructivist pedagogy.  The issue is not knowing, even if you find a way to construct your knowledge.  It is all about doing and the way that knowledge enables you to do things.  I believe Wieman is advocating for teaching methods that promote this type of knowledge use.  If you use constructivist pedagogy but remain focused only on a body of knowledge, your results will not substantially improve.  What Wieman points out about learning reinforces the notion that our brains are wired for action, in ways that link learning and motor control.  We are not made to know only, but to know in the process of doing.

A second point: this also illustrates a case demonstrated by Engel (2010) that is relevant here.  Engel noted that "developmental precursors don't always resemble the skill to which they are leading".  (I've discussed this here.) Students who are learning in Wieman's physics lab are:

focused intently on solving real physics problems, and I (Wieman) regularly probe how they’re thinking and give them guidance to make it more expert-like.  After a few years in that environment they turn into experts, not because there is something magic in the air in the research lab but because they are engaged in exactly the cognitive processes that are required for developing expert competence.

A diverse body of knowledge is a necessary but insufficient condition.  Even though knowledge is necessary, accumulating a body of knowledge is not a developmental precursor of expert performance.

That leaves the question: what does expert practice look like in management?  What do successful managers do, how do we get students to think deeply about what to do with management problems, and in what cognitive processes should they be involved?  Overall, I am still an advocate for bridging the academic world and the world of practice for students through some type of supervised practicum.

One Description of Science and the Basis for an Argumentative Approach to Validity Issues

I came across an interesting metaphor for science (and structural ways of understanding in general) in the Partially Examined Podcast Episode #8.   Here is my take on the metaphor.

Imagine the world as a white canvas with black spots on it.  Over that, lay a mesh made of squares and describe what shows through the mesh.  We are describing the world, but as it shows through the mesh.  Change the mesh in size or in shape and we have a new description of the world.
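To make the metaphor a bit more concrete, here is a minimal sketch (my own toy construction, not from the podcast) in which the same "world" yields different descriptions depending only on the mesh laid over it:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "world": a white canvas with black spots (True marks a spot).
world = rng.random((12, 12)) < 0.2

def describe(world, mesh):
    """Describe the world as seen through a square mesh of the given cell size:
    each cell reports only whether any spot shows through it."""
    h, w = world.shape
    return np.array([
        [world[i:i + mesh, j:j + mesh].any() for j in range(0, w, mesh)]
        for i in range(0, h, mesh)
    ])

# Two meshes, two different descriptions of the same underlying world.
print(describe(world, mesh=4).astype(int))  # a coarse 3x3 description
print(describe(world, mesh=2).astype(int))  # a finer 6x6 description
```

Neither printout is the world itself; each is a description produced by a particular mesh.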

Now, these descriptions are useful and allow us to do things, but they are not truth; they are description.  They may be highly accurate in their descriptions of an actual world, but they are still descriptions.  This is how science functions and how science progresses and changes.  It is also why I advocate an argumentative approach to validity in the use of scientific structures like assessment or the use of evidence.  Old forms of validity (dependent on criterion validity) and much of the current discussion of evidence-based approaches are about accuracy in certain forms of description.  But we must also allow for discussions of the mesh (to return to the metaphor).  As in construct validity, any discussion of how the world is must also include a discussion of how the mesh interacts with the world to create the description.

In addition to methods like randomized controlled trials (RCTs), there is also a need for research into how we understand and rethink the assumptions that often go unexamined in research.  RCTs are very good at helping us do things with very accurate descriptions (like describing linear causal processes).  We also need research that uses other meshes, allowing us to understand in new ways and facilitating our ability to do new and different things; that is, to make progress.

Frames for Using Evidence: Actions, Processes and Beliefs

As a follow-up to my last post, there are three frames of reference that are important to my thinking about being evidence-based.

  1. The unit of analysis is action, not thinking.  Evidence-based programs are often focused on decision-making, but action is a better focal point.  Why is this?  First, focusing on actions helps to make a direct connection from evidence to consequences and outcomes.  Second, our actions and thinking are closely related; action gets at both thinking and acting.  Neuroscience has recently begun to confirm what psychology (Vygotsky) and philosophy (Wittgenstein) have believed for a while: that cognition is closely tied to muscle control and acting, and that there is a neurological link between doing and thinking.
  2. Evidence-based information is best directed toward practices, processes, or programs. Much of the evidence-based literature is directed toward decision-making, and while this is important, many aspects of practice are made up of decisions that are organized by repeatable processes, programs, or protocols.  The intense effort that is sometimes needed in order to be evidence-based may be better justified by the wider effect seen in focusing on the programs and processes that support everyday decision-making.
  3. The basis for most thoughtful actions is theory or belief. These may range from extensively developed nomothetic theoretical networks to well-founded beliefs, but the relevance of evidence-based information lies in its effect upon these beliefs and theories, which in turn guide decisions and program actions.  There is no such thing as facts without theory or belief.  The role of evidence is to support (or fail to support) the beliefs that underlie actions.

Communicating Evidence Over Different Cognitive Frameworks: Overcoming Incommensurability in Communication Frameworks Between Research and Practice

My recent posts have highlighted the differences between scientific research and other types of practice as they relate to the design of evidence-based practice.  Previously I discussed how the larger scope of practice changes the epistemological needs of practice knowledge.*  In this post I take up Nicolay Worren's paper, which suggests that the cognitive frameworks managers use to guide and control business practice are also different from those used to disseminate scientific research.  (Worren, N., Moore, K., & Elliott, R. (2002). When Theories Become Tools: Towards a Framework for Pragmatic Validity. Human Relations, 55(10), 1227-1250.)

Nicolay notes that science is typically conducted and disseminated in propositional frameworks (often steeped in dense scientific vocabulary), whereas managers depend more on narrative and visual cognitive and communication frameworks constructed in everyday language.  This can result in two problems:

1. People do not really understand the generalizable meaning of research because it is buried within obscure propositional frameworks.  Good communication must be constructed with the ability to span different cognitive and situational frameworks.  Consider the following quote from R.L. Ackoff **:

Until we communicate to our potential users in a language they can understand, they and we will not understand what we are talking about. If Einstein could do it with relativity theory, we should be able to do it with systems thinking (Einstein and Infeld, 1951). It is easy to hide the ambiguity and vagueness in our own thinking behind jargon, but almost impossible to do so when speaking or writing in ordinary language.
We have developed a vocabulary that equips our students with the ability to speak with authority about subjects they do not understand. Little wonder they do not become effective spokespersons to potential users.
Ackoff, R.L. (2006). Why Few Organizations Adopt Systems Thinking, http://ackoffcenter.blogs.com/ackoff_center_weblog/files/Why_few_aopt_ST.pdf

2. Managers must deal with practices that have a wide scope and a high level of complexity.  Just as this creates different epistemological requirements for knowledge, it also entails different cognitive requirements for understanding and communicating.  While propositional frameworks are good for maintaining precision in deductive arguments, they do not have the speed of communication or the ability to convey complexity, change over time, or emotion that can be found in narrative and visual frameworks.

Different cognitive frameworks can make the languages of research and practice not only difficult to translate but almost incommensurable.  Again, I don't think that research and practice are incommensurable, but you will need to engage in inductive processes to bring them together appropriately.

Notes

* Primarily this refers to the inability to design practices scientifically, that is, with the amount of variable control necessary to ensure the same level of internal validity we see in research.  Without this level of control, generalizing research to uncontrolled situations becomes questionable.  This does not mean that research is not relevant.  It means that decisions are dependent on inductive processes (as opposed to the deductive processes common in research) and that these processes are aligned with a goodness-of-fit model of verification (as opposed to deductive truth).

** I’m also indebted to Nicolay for pointing me to Ackoff and this source.  See his blog post:

http://www.nicolayworren.com/2009/11/russell-ackoff-1919-2009.html

Practice-based Evidence

I reviewed two papers that take the perspective that if you want more evidence-based practice, you'll need more practice-based evidence.  The core idea is: when evaluating the validity of a specific research study, internal validity is primary, but when you are arguing from a specific study to evidence-based practice, external validity becomes most important.

First, Green and Glasgow take up this validity framework, pointing out that most research concentrates on internal validity and pretty much ignores external validity.  It's true that you can't have external validity without internal validity, but this does not make external validity any less important.  They work from Cronbach, Gleser, Nanda, and Rajaratnam's (1972) generalizability theory and relate four facets of generalizability to translation frameworks.  The four facets are identified as "different facets across which (evidence-based) program effects could be evaluated. They termed these facets units (e.g., individual patients, moderator variables, subpopulations), treatments (variations in treatment delivery or modality), occasions (e.g., patterns of maintenance or relapse over time in response to treatments), and settings (e.g., medical clinics, worksites, schools in which programs are evaluated)".
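As a rough illustration of what it might mean to examine program effects across those four facets, here is a small sketch of my own (with made-up numbers, not the authors' data or method):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical effect estimates for one program, indexed by the four facets:
# (units, treatment, occasion, setting) -> observed effect size.
effects = {
    ("adults", "group delivery", "3 months", "clinic"): 0.42,
    ("adults", "group delivery", "12 months", "clinic"): 0.31,
    ("adults", "online delivery", "3 months", "worksite"): 0.18,
    ("adolescents", "group delivery", "3 months", "school"): 0.05,
}

FACETS = ("units", "treatments", "occasions", "settings")

def effect_by_facet(effects, facet_index):
    """Average the observed effects within each level of a single facet."""
    grouped = defaultdict(list)
    for conditions, effect in effects.items():
        grouped[conditions[facet_index]].append(effect)
    return {level: round(mean(values), 2) for level, values in grouped.items()}

for index, facet in enumerate(FACETS):
    print(facet, effect_by_facet(effects, index))
```

Large differences across the levels of a facet (settings, in this toy example) flag exactly the external-validity questions Green and Glasgow are raising.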

Westfall, Mold, & Fagnan (2007) point out some of the specific problems in generalizing to valid practice:

The magnitude and nature of the work required to translate findings from human medical research into valid and effective clinical practice, as depicted in the current NIH research pipeline diagrams have been underestimated.  . . . (problems) include the limited external validity of randomized controlled trials, the diverse nature of ambulatory primary care practice, the difference between efficacy and effectiveness, the paucity of successful collaborative efforts between academic researchers and community physicians and patients, and the failure of the academic research enterprise to address needs identified by the community (p. 403).   Practice-based research and practice-based research networks (PBRNs) may help because they can (1) identify the problems that arise in daily practice that create the gap between recommended care and actual care; (2) demonstrate whether treatments with proven efficacy are truly effective and sustainable when provided in the real-world setting of ambulatory care; and (3) provide the "laboratory" for testing system improvements in primary care to maximize the number of patients who benefit from medical discovery.

They recommend adding another step to the NIH’s roadmap to evidence-based practice shown in this graphic:

[Figure: A recommended addition to the NIH translation roadmap]

References

Green, L.W., & Glasgow, R.E. (2006). Evaluating the Relevance, Generalization, and Applicability of Research: Issues in External Validation and Translation Methodology. Evaluation & the Health Professions, 29(1), 126-153.

Westfall, J.M., Mold, J., & Fagnan, L. (2007). Practice-Based Research—"Blue Highways" on the NIH Roadmap. JAMA, 297(4).

Evidence-based Practice: Is it Actionable Information or Judgement Support

A recent article can be read as another interesting take on the relationship between science and practice.  The reference is:

Stake, R.E. (2009). The incredible lightness of evidence: Problems of synthesis in educational evaluation. Studies in Educational Evaluation, 35, 3-6.

In the author's words:

Educational measurement and evaluation is . . .  a problematic field. Standardized testing is magnificent in its conceptual structure, and it is "out of control" in educational practice.  . . . there is widespread advocacy for evidence-based decision-making . . . (and) One quickly understands that many of its advocates are speaking of evidence in the form of objective, science-driven, decontextualized, action-determining knowledge, more than as material for user deliberation.

Evidence is an important concept also in establishing a rationale or potential for action. Here there is no single criterion but multiple criteria . . .  policy should be based on many factors, on evidence of many kinds.  . . . Evidence is an attribute of information, but it is also an attribute of persuasion. It contributes to understanding and conviction.  . . . Evidence should be subordinate to judgment, crafted to user conviction. (emphasis added)

Naturalistic Decision-making or Algorithmic Practice: Which is Appropriate and When

An interesting article in APA's American Psychologist:

Kahneman, D., & Klein, G. (2009). Conditions for Intuitive Expertise: A Failure to Disagree. American Psychologist, 64(6), 515-524.

The question: what works best, the intuition of expert decision-makers (Naturalistic Decision Making) or a statistical prediction algorithm (the Heuristics and Biases approach)?

The answer, of course: it depends on the context.

Intuition (which is presented as a form of pattern recognition) works well when the context includes clear and consistent patterns and the expert has ample opportunities to practice recognizing them.

Where simple and valid clues exist, humans will find them given sufficient experience and enough rapid feedback. (p. 523)

This expert pattern-recognition type of decision-making is especially relevant when time is a factor, as in nursing or firefighting.  In situations where there are contra-indications, an algorithmic approach would be warranted, but the authors note there may be potential for pushback from practitioners.

An important point here is that an evidence-based approach is portrayed not as a simplistic application of science, but rather as the development of a specific practice-oriented algorithm, a scientific extension of the practice.

Contra-indications for a naturalistic decision-making process would include:

  • weak or difficult-to-detect patterns (e.g., high ceiling effects),
  • a lack of feedback,
  • feedback that arrives over long time periods, or situations involving wicked problems where the feedback is misleading.

Contra-indications for an algorithmic approach include:

  • a lack of adequate knowledge about relevant variables,
  • the lack of a reliable criterion,
  • the lack of a body of similar cases,
  • a cost-benefit ratio that does not justify algorithm development,
  • a high likelihood of changing conditions that would render the algorithm obsolete.

The authors also note that algorithmic approaches should be closely monitored for changing conditions.
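To pull the two lists together, here is a minimal sketch (my own paraphrase of the conditions above, not a procedure from the paper) of how the contra-indications might be turned into a rough screening check:

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """A rough characterisation of a decision environment; the field names
    are my own shorthand for the conditions discussed above."""
    valid_patterns: bool         # stable, detectable cue-outcome relationships
    rapid_honest_feedback: bool  # timely feedback that is not misleading
    reliable_criterion: bool     # an agreed outcome measure exists
    similar_cases: bool          # a body of comparable cases to learn from
    stable_conditions: bool      # low likelihood the environment will shift

def suggested_approach(ctx: DecisionContext) -> str:
    """A toy heuristic: intuition needs valid patterns plus feedback;
    algorithms need a criterion, comparable cases, and stability."""
    intuition_ok = ctx.valid_patterns and ctx.rapid_honest_feedback
    algorithm_ok = ctx.reliable_criterion and ctx.similar_cases and ctx.stable_conditions
    if intuition_ok and algorithm_ok:
        return "either; consider algorithm-supported intuition"
    if intuition_ok:
        return "expert intuition"
    if algorithm_ok:
        return "algorithmic support"
    return "neither is well supported; proceed cautiously and monitor"

# Example: a firefighting-like setting with valid patterns and fast feedback
# but no reliable criterion data or stable case base for an algorithm.
print(suggested_approach(DecisionContext(True, True, False, False, False)))
```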

My take: Kahneman and Klein set up their discussion as a debate between themselves and treat the different approaches primarily as an either/or choice.  I value their clarifications, but I would like to think about the many other situations where algorithms would be appropriate for supplementing, not replacing, naturalistic decision-making.  For instance, they use nursing diagnosis as an example of a reliable intuition space.  In some situations intuition is appropriate, but diagnosis is a complex task that can involve a large amount of data combined in different ways.  I'll have to look at the literature to see if there is a counter-example for naturalistic decision-making.  I'm not saying that naturalistic decision-making is inappropriate in many situations, only that they seem to be short-changing algorithmic approaches.  There are also indications that these two authors are not sharing a philosophical framework.  My bet is that the positivist side is overstating naturalistic bias (which means failing to see their own) and the naturalistic side is ignoring sources of bias when it suits them (throwing out the scientific baby with the bath water).  Again, this points to a need for a framework that can bring people with different perspectives into true communication and exchange.

Howe’s Critique of a Positivist Evidence-based Movement with a Potentially Valid Way Forward

A summary of Kenneth Howe’s article criticizing positivism and the new orthodoxy in educational science (evidence-based education).

(Howe, K.R. (2009). Epistemology, Methodology, and Education Sciences: Positivist Dogma, Rhetoric, and the Education Science Question. Educational Researcher, 38(6), 428-440.)

Keywords: Philosophy; politics; research methodology

“Although explicitly articulated versions (of positivism) were cast off quite some time ago in philosophy, positivism continues to thrive in tacit form on the broader scene . . . now resurgent in the new scientific orthodoxy.” (p.428)

(A positivist stance on science) has sought to “construct a priestly ethos – by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value . . . and should therefore enjoy a commensurate authority” (Howe quoting Lessl, from Science and Rhetoric).

Howe traces the outline of this tacit form of positivism through the National Research Council’s 2002 report titled Scientific Research in Education and relates this report to three dogmas of positivism:

  1. The quantitative-qualitative dichotomy: a reductionist dogma that had the consequence of limiting the acceptable range of what could be considered valid in research studies.
  2. The fact-value distinction: an attempt to portray science as a value-free process, with the effect of obscuring the underlying values in operation.
  3. The division between the sciences and the humanities: another positivist distinction designed to limit discussion to a narrow view of science.

Howe's article does a good job of summarizing the general critiques of positivist methodology: (1) its overall claims could not stand up to philosophical scrutiny, (2) it tended not to recognize many of its own limitations, including a failure to apply adequate standards to itself, and (3) it was inhabited by a political agenda that sought to stifle and block many important directions that inquiry might otherwise have taken.

The crux of the political matter: while the goal of positivism may have been to positively establish an objective, verifiable method of conducting social science modeled on the physical sciences, the primary result was an attempt to politically limit the scope of what could be considered meaningful scientific statements to only those statements that were verifiable in a narrow positivist sense. Howe is among the cohort who believe that the evidence-based movement is being used by some as a context to advance a tacit return to a form of positivism.

The crux of the scientific matter: Howe's primary interest appears to be political, the politics of how research is received and funded, but there is also an effectiveness issue.  Positivism's primary scientific problems lie in its tendency to ignore or downplay many of the limitations of positivist methods (overstating the meaning of positivist research) and in the way it oversimplifies and fails to problematize the rather complex relationship between research and practice.

Messick’s Six Part Validity Framework as a Response

There are four responses to Howe in this journal issue. To me, none of the responses address the primary issue at play: bringing some sense of unity to varying ideas and enabling communication among people who use different scientific methodological frameworks.  There are suggestions to allow for multiple methods, but they amount more to a juxtaposition of methods than to a framework that serves to guide and support communication and understanding among scientists using differing methods.  This is why I support Messick's validity framework as a response to just this type of concern.  Although Messick spoke specifically of test validity, there is nothing that would preclude this framework from being applicable to practice validity and to the development of post-positivist evidence to support the validity of practices.  What is the evidence-based movement really concerned with, if not the validity of the practices being pursued by practitioners?  This is not primarily about the validity of individual research studies; it is about the validity of practices and about developing evidence to support the validity of specific practices.  It is also a mature framework that considers the full range of inquiry when developing evidence.

Messick's six areas for developing validity are six different types of validity evidence.  Here is an initial set of ideas about how they might relate to evidence-based practice:

  • Content – Content defines the scope of the practice domain, with evidence (including rationales and philosophical debates) for the relevance of a particular practice, how the practice represents ideas within the general domain, and its technical quality as compared to other examples of the practice.
  • Substantive – Evidence that the design of the actual processes involved is consistent with design knowledge from relevant domains (e.g., psychology, sociology, engineering).
  • Structural – The consistency between the processes involved and the theories that underlie and support rationales for the structure of the actual process.
  • External – Empirical data to support criterion evidence (randomized controlled trials (RCTs) would be one example).  For many practices this may include both convergent and discriminant evidence.  (My thinking is still in development here, but I think that empirical evidence from the research base would function more like criterion evidence.  Direct empirical evidence from the actual practice being validated would, in most situations, be considered under consequential evidence.  See below.)
  • Generalization – Evidence that the practice is relevant and effective across different populations, contexts, and time periods.
  • Consequential – Evidence that the practice is actually achieving the purpose it was originally intended to achieve.
I consider this list to be an early formulation, with more development needed.  Critiques are most welcome.
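To make the list slightly more concrete, here is a minimal sketch (my own provisional construction, not Messick's) of how the six evidence types might be tracked for a given practice:

```python
from dataclasses import dataclass, field

# The six evidence types are Messick's; using them as a checklist for a
# practice, as below, is my own provisional reading.
EVIDENCE_TYPES = (
    "content", "substantive", "structural",
    "external", "generalization", "consequential",
)

@dataclass
class PracticeValidityRecord:
    practice: str
    evidence: dict = field(
        default_factory=lambda: {t: [] for t in EVIDENCE_TYPES}
    )

    def add(self, evidence_type: str, note: str) -> None:
        """File a piece of evidence under one of the six types."""
        self.evidence[evidence_type].append(note)

    def gaps(self) -> list:
        """Evidence types for which nothing has yet been gathered."""
        return [t for t, notes in self.evidence.items() if not notes]

# Hypothetical example of use.
record = PracticeValidityRecord("structured onboarding program")
record.add("external", "RCT evidence from comparable organizations")
record.add("consequential", "local retention data after one year")
print(record.gaps())  # the validity evidence still to be developed
```

The point of the gaps() check is simply that validating a practice means accumulating several kinds of evidence, not relying on a single study.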

Messick’s original formulation for test validity is available here.

Combining Evidence and Craft for Successful Practice: No False Dichotomies

Evidence-based (in all its various permutations) is a construct that needs to be carefully worked out.  If evidence-based practice were self-evident, we would have achieved it through the success and extension of operationalism*, but that was not to be the case.  Participating in a practice requires evidence, craft, and experience combined in a way that is fraught with complexity; but improving practices of all kinds depends on meeting this complexity.

I recently came across two ideas from Wampold, Goodheart & Levant (Am Psychol. 2007 Sep;62(6):616-8) that go a long way to clarifying this evidence-based construct.  Their first clarification is a definition of evidence and the second is to counter the false dichotomy of evidence vs. experience.

Evidence and Inference

Evidence is not data, nor is it truth.  Evidence can be thought of as inferences that flow from data.  . . . Data becomes evidence when they are considered with regard to the phenomena being studied, the model used to generate the data, previous knowledge, theory, the methodologies employed, and the human actors.

This is not a simple positivist conception of evidence; it reflects a complex, multimodal aggregation.  In addition, I would add that the primary concern of practitioners is really the validity of the practices they are conducting.  The validity of practice is supported by evidence, but it is dependent on the use of practice in context.  We do not validate practice descriptions or practice methodologies, but rather the use of practice in its local contexts, understood by reference to phenomena, models, knowledge, theories, et cetera.  I'll have to look back at validity theory to see if I can get a clearer description of this idea.

Evidence and Experience

A second insight expressed by Wampold, Goodheart, and Levant is the integrative nature of evidence and experience as they relate to practice, in which any opposition between experience and evidence is considered a false dichotomy.  The ability to use evidence is a component of practice expertise, including the ability to collect and draw inferences from local data through the lens of theory and empirical evidence, and the ability to adjust practices in response to new evidence.  It is experience and evidence, and evidence as a part of experience.

Evidence and Craft

I find it somewhat serendipitous that I have been drawn into conversations involving both design management and evidence-based management, because I believe that the success of each depends on the other.  The positivist agenda of running the world by science is not tenable.  The world is too complex, and there are too many relevant or even distant variables, for a positivist program to be sustainable.  Science cannot do it all, but neither can we be successful without science, evidence, and data.  We need a bit of craft and a bit of evidence to engage in practice.  That may often include craft in the way that evidence is used, and it may entail craft that is beyond evidence.  It just should not draw false dichotomies between evidence and craft.

The goal of operationalism “was to eliminate the subjective mentalistic concepts . . .  and to replace them with a more operationally meaningful account of human behavior” (Green 2001, 49). “(T)he initial, quite radical operationalist ideas eventually came to serve as little more than a “reassurance fetish” (Koch 1992, 275) for mainstream methodological practice.” Wikipedia (Incomplete references noted in this article, but it seems trustworthy as I’m familiar with Koch’s work.)