Critical Thinking, Scientific Reasoning, and the Incorporation of Evidence into Everyday Practice: A Conceptual Symbiosis

It seems to me that there is a natural affinity between evidence-based practice, scientific reasoning and critical thinking.  I think Kuhn (quoted in Dawson, 2000) captures the essence of this symbiosis:

I have undertaken here to show that these two abilities–the ability to recognize the possible falsehood of a theory and the identification of evidence capable of disconfirming it–are the foundational abilities that lie at the heart of both informal and scientific reasoning. These abilities lie at the heart of critical thinking, which similarly can be regarded, at the most global level, as the ability to justify what one claims to be true (Kuhn, 1993).

Some background considerations and directions for future thought and research:

  1. I’m taking the perspective that what cognitive control we have over our decisions and actions is mediated by our beliefs, theories, schemas and prior knowledge.  Without this mediation, everyday actions would represent an unbearable cognitive load.
  2. Although there are good strategies for enabling critical thinking, at its core critical thinking is the ability and disposition to seek disconfirming evidence and use it to change our minds (beliefs, schemas, theories, etc.).
  3. Although we often equate scientific thinking with the scientific method (hypothesis testing), at the core of its reasoning is also the disposition to seek and make use of disconfirming evidence.
  4. Evidence-based organizations must actively support critical thinking through their culture and in the organization of their internal processes and practices.
  5. Practice validity (seeking evidence for the validity of organizational practices) is the ability to justify the efficacy of our actions, just as Kuhn considers critical thinking to be a way to justify our claims to truth.

A shout-out to Harold Jarche, whose post Critical thinking in the organization led me down this primrose path.

References

Dawson, R. (2000). Critical Thinking, Scientific Thinking, and Everyday Thinking: Metacognition about Cognition. Academic Exchange Quarterly. Accessed 4-8-10 at http://www.thefreelibrary.com/Critical+Thinking,+Scientific+Thinking,+and+Everyday+Thinking:…-a067872702

Kuhn, D. (1993). Connecting scientific and informal reasoning. Merrill-Palmer Quarterly, 39(1), 74-103.

Ideas for Developing Expert Practitioners of Evidence-based Management

I’ve previously discussed ways to implement Evidence-based Management here.  Today I ask a related question: how do we prepare practitioners to become experts at using evidence?  The work of Carl Wieman points in a relevant direction, suggesting that knowledge of evidence is not sufficient to make us expert users of evidence.

Wieman begins with evidence that scientific coursework was not preparing students to be expert scientific problem solvers; students only became experts after gaining experience as assistants in his physics lab.  Introductory physics courses did not seem to be working as expected.

On average, students have more novice like beliefs after they have completed an introductory physics course than they had when they started; this was found for nearly every introductory course measured. More recently, my group started looking at beliefs about chemistry. If anything, the effect of taking an introductory college chemistry course is even worse than for taking physics.

Wieman describes novices as people who see only isolated pieces of information, learned by memorization and understood as disconnected from the world.

To the novice, scientific problem-solving is just matching the pattern of the problem to certain memorized recipes.

On the other hand, experts see coherent structures or frameworks of evidence-based concepts.  The way experts solve problems involves strategies that are systematic, concept-based, and applicable to new and different contexts.  Wieman points out that experts have substantial knowledge, but it only becomes important when it is used within expert conceptual structures.  From a teaching and assessment point of view, assessing only what experts know will leave you ignorant of the ways that experts use knowledge.  You must understand the frameworks within which knowledge is used.

Everything that constitutes “understanding” science and “thinking scientifically” resides in the long-term memory, which is developed via the construction and assembly of component proteins. So a person who does not go through this extended mental construction process simply cannot achieve mastery of a subject.

Now I generally follow constructivist ideas, but I don’t believe that we should focus on a naive constructivist pedagogy.  The issue is not knowing, even if you find a way to construct your knowledge.  It is all about doing and the way that knowledge enables you to do things.  I believe Wieman is advocating for teaching methods that promote this type of knowledge use.  If you use constructivist pedagogy but remain focused only on a body of knowledge, your results will not substantially improve.  What Wieman points out about learning reinforces the notion that our brains are wired for action, in ways that link learning and motor control.  We are not made only to know, but to know in the process of doing.

A second point: this also illustrates a case demonstrated by Engel (2010) that is relevant here.  Engel noted that “developmental precursors don’t always resemble the skill to which they are leading”.  (I’ve discussed this here.)  Students who are learning in Wieman’s physics lab are:

focused intently on solving real physics problems, and I (Wieman) regularly probe how they’re thinking and give them guidance to make it more expert-like.  After a few years in that environment they turn into experts, not because there is something magic in the air in the research lab but because they are engaged in exactly the cognitive processes that are required for developing expert competence.

A diverse body of knowledge is a necessary but insufficient condition for expertise.  Even though knowledge is necessary, merely accumulating a body of knowledge is not a developmental precursor of expert performance.

That leaves the question: what does expert practice look like in management?  What do successful managers do?  How do we get students to think deeply about what to do with management problems, and in what cognitive processes should they be involved?  Overall, I am still an advocate for bridging the academic world and the world of practice for students through some type of supervised practicum.

Two Different Ways of Implementing Evidence-based Practice and their Different Requirements for Evidence

It seems intuitive to me that there are two ways of applying evidence in Evidence-based Management.

  1. One I’ll call evidence-based decision-making (EBDM): bringing evidence into decision processes.
  2. The other I’ll refer to as evidence-supported interventions (ESI): specific practices that are empirically supported.

I suspect that EBDM will be a tougher nut to crack in practice.  This is because decision-making is often context-dependent, involves ill-structured problems, and can be cognitively complex.  (See March, 1991, for one take on this complexity.)  Decision processes require a higher level of interpretation regarding the evidence and can easily fall prey to logical errors.  Most thinking on decision-making has stressed that research should begin by analyzing how people make decisions in real time, not as some sort of abstract logical process.  As Daniel Kahneman (2003) puts it:

psychological theories of intuitive thinking cannot match the elegance and precision of formal normative models of belief and choice, but this is just another way of saying that rational models are psychologically unrealistic (p. 1449).

Nonetheless, evidence should inform decision processes, and I believe that evidence-supported protocols, as one example, can prepare the decision space for better decision-making outcomes.  However, this type of process also begins to bring me closer to the second way of applying evidence: through evidence-supported interventions.

Mullen, Bledsoe, & Bellamy (2008) define Evidence-supported Interventions (ESI) as

specific interventions (e.g., assessment instruments, treatment and prevention protocols, etc.) determined to have a reasonable degree of empirical support.

(Other names might include evidence-based practices, empirically supported treatments, or empirically informed interventions.)  In implementation settings, ESIs function as standardized practices: practices where all or a portion of the operational, tactical, logistical, administrative or training aspects of a practice conform to a specific and unified set of criteria.  In other words, the context of implementation allows the practice to be replicated exactly as it was defined and constructed in the supporting research.  In being evidence-based, it is important that critical issues flow both ways.  If the context does not allow replication, or presents confounding variables and complexity not addressed in the research, that will necessarily reduce the level of support that can be claimed for any research-supported practice.

There are many differences between EBDM and ESIs.  I would like to focus here on the different role that theory plays in each.  No data or practices are completely theory-free; all are theory- and value-laden to some extent.  Every datum, hypothesis, or piece of knowledge depends on assumptions and implications that are based in some way on theory.  But not all depend in the same way or to the same extent.  I will borrow from Otto & Ziegler (2008) to explain how some of these differences can be ascribed to either causal description or causal explanation.  First, I agree with Otto and Ziegler, who say that

Probably, it is fair to say that the quest for causal explanation is theory driven, whereas causal description is not necessarily grounded in theory.

ESIs, because they focus on replication, do not need to be as concerned with the fact that they rely on causal description.  EBDM, however, uses evidence in a more theoretical way than in the replication of a standard practice.  Because it deals with complex and context-dependent reasoning, it needs evidence that is valid in a causally explanatory manner.

Two observations: From a strict positivist perspective, this creates a problem for EBDM because of the difficulty of achieving the necessary level of causal explanation.  Positivism fares better with an ESI approach because it can depend on causal description.  An EBDM approach must instead adopt an argumentative role in validating evidence.  This is the approach that validity theory has taken.  Validity theory began within a positivist framework centered on a criterion approach to validity.  As it became more and more apparent that constructs (theoretical concerns) were the central concern, it adopted a unified construct validity perspective that required an argumentative approach.  This is an approach where validity is never an either-or proposition, but rather a concern for the level of validity achieved.  While this is not necessarily the clearest way, it is pragmatic and practical and can be implemented across a wide variety of practice settings.

Two Conclusions:

  1. EBDM is concerned with supporting naturalistic decision processes with evidence that is empirically and theoretically supported and can be easily included in those decision processes.
  2. ESIs are concerned with practices, protocols and processes that can function in a standardized manner through the replication of empirically supported research interventions.

References

Kahneman, D. (2003). Maps of Bounded Rationality: Psychology for Behavioral Economics. The American Economic Review, 93(5), 1449-1475.

March, J. G. (1991). How Decisions Happen in Organizations. Human-Computer Interaction, 6, 95-117. Accessed 02-15-2010 at http://choo.fis.utoronto.ca/fis/courses/lis2176/Readings/march.pdf

Mullen, E. J., Bledsoe, S. E., & Bellamy, J. L. (2008). Implementing Evidence-Based Social Work Practice. Research on Social Work Practice, 18(4), 325-338.

Otto, H., & Ziegler, H. (2008). The Notion of Causal Impact in Evidence-Based Social Work: An Introduction to the Special Issue on What Works. Research on Social Work Practice, 18(4), 273-277.

A Marketing Plan for Promoting Evidence-based Management

What is needed for evidence-based management to become a relevant business concept?  I believe it will move in that direction when managers can clearly understand the incentives for using it, when they can easily understand how to translate and integrate it into their current responsibilities, and when they understand how to change the relevant behaviors.  This issue can be seen as a marketing problem and, as Nancy R. Lee (2009) says, “words alone don’t often change behaviors.  We need products, and incentives, and convenient distribution channels as well”.  (Lee’s topic is social marketing to alleviate poverty and associated problems, but the principles apply equally to a poverty of business acumen, a topic that should include EBMgt.)  Lee’s recommended approach in this podcast is a standard marketing approach broken down into a ten-step plan:

A Ten-Step Marketing Plan

  1. Provide a clear rationale and statement of purpose.
  2. Conduct a situational analysis covering organizational strengths and weaknesses and environmental opportunities and threats.
  3. Segment the heterogeneous market; then choose, prioritize and strategize for the needs of specific target audiences.
  4. Identify the behavior(s) to be changed, emphasizing simple and doable tasks.
  5. Listen to the voice of the customer for perceived barriers and reasons why they do not perform the behavior now.
  6. Form a positioning statement describing how you wish the audience to view the behavior and its benefits.
  7. Develop a strategic mix of marketing tools that includes the right product, price/incentive, place and promotion.
  8. Develop a plan to evaluate outcomes.
  9. Budget for implementation.
  10. Plan how the campaign will roll out.

This approach would most likely fall under a consultative model of business services.  Besides marketing to specific segments of the management-services market, it would also require the development of quality products that can support the needed behaviors and understandings.  These products are likely to be mostly educational and conceptual in nature and would include concepts to help scaffold needed changes to behaviors and business processes.  Also needed would be additional appropriate distribution channels that build on recognition and prior knowledge of the concepts.  These could be business schools or professional organizations and publications.

I find this approach interesting because:
  • It acknowledges the difficulty in changing behavior and understandings,
  • It acknowledges that the goals of managers and researchers are different, and
  • It acknowledges that academic and scientific research would benefit from a well-formed translation strategy.
It would be nice to know if anyone sees any problems with an approach such as this, or knows of any other similar approaches.

Reference

Nancy R. Lee on How Social Networking Can Create Change for the Poor (2009), podcast accessible on iTunes or at http://www.whartonsp.com/podcasts/episode.aspx?e=04d8fe16-c7e4-45be-a441-7d33a83384e8

Frames for Using Evidence: Actions, Processes and Beliefs

As a follow-up to my last post, there are three frames of reference that are important to my thinking about being evidence-based.

  1. The unit of analysis is action, not thinking.  Evidence-based programs are often focused on decision-making, but action is a better focal point.  Why is this?  First, focusing on actions helps to make a direct connection from evidence to consequences and outcomes.  Second, our actions and thinking are closely related, and action gets at both thinking and acting.  Neuroscience has recently begun to confirm what psychology (Vygotsky) and philosophy (Wittgenstein) have held for a while: that cognition is closely tied to muscle control and acting, and that there is a neurological link between doing and thinking.
  2. Evidence-based information is best directed toward practices, processes or programs.  Much of the evidence-based literature is directed toward decision-making, and while this is important, many aspects of practice are made up of decisions that are organized by repeatable processes, programs or protocols.  The intense effort that is sometimes needed in order to be evidence-based may be better justified by the wider effect seen in focusing on the programs and processes that support everyday decision-making.
  3. The basis for most thoughtful actions is theory or belief.  These may range from extensively developed nomothetic theoretical networks to well-founded beliefs, but the relevance of evidence-based information lies in its effect upon the beliefs and theories that in turn guide decisions and program actions.  There is no such thing as facts without theory or belief.  The role of evidence is to support (or fail to support) the beliefs that underlie actions.

4 Types of Evidence-based Practitioner Information Needs

This is a thought in development, not a finished product.  I can currently think of 4 different types of evidence-based information that would be of interest to practitioners: the structure of practice, the scope of practice, the applicability of evidence (the level of confidence that the evidence applies to your specific context), and the measured consequences of practice (intended or unintended).
1. Form: How should my practice be structured according to best-practice models and other forms of evidence?  What do we know about how the practice or protocol should be structured?  Is there evidence for a correspondence between the theoretically prescribed structure and the actual practice I’m reviewing?
2. Scope: What different aspects should be included in my practice?  What different types of actions are important for goal achievement?  Does my local process include all aspects demonstrated to be important in a successful practice?
3. Applicability: Do the models generalize well to my specific situation?  Just because research was valid for college sophomores does not necessarily mean I should be confident that the evidence generalizes to my situation.
4. Consequential: Are my local measures consistent with, and do they confirm, what the evidence predicts should happen?  This includes intended and unintended consequences.  In addition to external research information, local measures should also be an important source for generating evidence.  (A minimal sketch of this kind of local check appears below.)
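
Here is a minimal, hypothetical sketch of the consequential check in item 4: comparing a locally measured effect against the range of effects reported in the supporting research.  All names and numbers below are invented for illustration and are not drawn from any particular study.

```python
# Hypothetical sketch: does a local outcome measure fall within the effect
# range predicted by the supporting research? All numbers are invented.

def consistent_with_evidence(local_effect, predicted_low, predicted_high):
    """True if the locally measured effect falls inside the range the
    research evidence predicts (e.g., a reported confidence interval)."""
    return predicted_low <= local_effect <= predicted_high

# Example: suppose research reports that a practice reduces voluntary
# turnover by 2 to 6 percentage points; our local before/after measure
# shows a reduction of 1.5 points.
local_effect = 9.0 - 7.5  # turnover rate before minus turnover rate after
if consistent_with_evidence(local_effect, 2.0, 6.0):
    print("Local measure is consistent with the evidence-based prediction.")
else:
    print("Local measure falls outside the predicted range; revisit "
          "implementation fidelity, the local context, or the evidence itself.")
```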

Concept Clarification of Evidence-based Management

My current series of posts is centered on clarifying the meaning of being evidence-based, and a recent article (Briner, Denyer & Rousseau, 2009) falls right in line with this task.  The article focuses on 4 key points in clarifying EBMgt.

1. EBMgt (Evidence-based Management) is something done by practitioners, not scholars.

One caveat here: the implication that practitioners do not need to be scholars.  The type of scholarship and scholarly activity may be different, but evidence-based practice is based on scientific inquiry and requires a certain level of knowledge and thought.  People often talk of this being a knowledge age, which, if true, will mean that more and more people need a better understanding of various forms of scholarship.  Understanding science is often a foundation of educational programs designed to prepare evidence-based practitioners.  The scientific tasks of practitioners will be different from other types of scholarship.  It is scholarship focused on what’s relevant to practice, and it’s true that practitioners often find current scholarship irrelevant, but there is a type of scholarship that will drive the evidence-based movement.

2. EBMgt is a family of practices, not a single rigid formulaic method.

Determining the validity of one’s practice focuses on the total context of practice.  Both its method and the type of evidence required are multifaceted.

3. Scholars, educators, and consultants can all play a part in building the essential supports for the practice of EBMgt. To effectively target critical knowledge and related resources to practitioners, an EBMgt infrastructure is required; its development depends on the distinctive knowledge and skills found in each of these communities.

Well said!  I hope that we also see related innovative thinking in these communities.

4. Systematic reviews (SRs) are a cornerstone of EBMgt practice and its infrastructure, and they need to possess certain features if they are to be informative and useful.

I believe the infrastructure should focus on systematic reviews that go beyond asking “what works” in a simplistic fashion.  It should focus on the total needs of practitioners who are developing their practice by means of scientific inquiry.  Major et al. (2009), in the December issue of American Psychologist, is a good example of a thorough review process.  Their article reviews the empirical research on the links between abortion and women’s mental health, a highly contested and politicized issue.  They first look at how relevant concepts and research questions have been framed by various studies.  They consider various problems with the data before analyzing the results organized by different parameters.  Because of their comprehensive approach, their conclusions not only provide a good empirical summation, but will also contribute to practitioners’ understanding of the relevant issues from a number of different perspectives and how they might relate to different practices.

My next post will focus on what types of knowledge (and hence what type of infrastructure) might be needed by the scientific inquiry of practitioners.

References

Major, B., Appelbaum, M., Beckman, L., Dutton, M. A., Russo, N. F., & West, C. (2009). Abortion and Mental Health: Evaluating the Evidence. American Psychologist, 64(9), 863-890.

Briner, R. B., Denyer, D., & Rousseau, D. M. (2009). Evidence-Based Management: Concept Cleanup Time? Academy of Management Perspectives, 23(4), 19-32.

Evidence-based Practice Defined

Evidence-based practice (whether in management, education, medicine or another field) describes a process designed to facilitate an integrative and evaluative judgment, based on empirical evidence and theoretical rationale, as to the appropriateness and adequacy of actions or proposed actions that are based on the processes or protocols of a practice or program.

Anyone familiar with the validity theory of Samuel Messick will recognize that this definition is drawn from his language.  That is because of my belief that the evidence-based movement should focus on practice in a way that can best be understood as practice validity (or process validity, or program validity, depending on your perspective and task).

Evidence-based practice represents a multifaceted judgment.  This is not a cookbook or a cherry-picking approach.  It is a multifaceted judgment integrating all relevant information to evaluate actions as to their appropriateness for specific contexts and goals.  These actions are based on empirically supported theoretical rationales that are also backed up by locally collected data.

Evidence-based practice is scientific inquiry.  This approach does not just rely on the results of scientific inquiry; it represents a form of scientific inquiry, but instead of being directed toward some aspect of theory (an a-contextual methodology), it is directed toward a specific practice (a contextualized methodology).  Though much of the literature equates an evidence-based approach with a decision-making process, the application of evidence is often focused on processes (or standing practices, or programs) and whether these processes are adequate and appropriate toward intended goals.

Evidence-based practice should be associated with valid clinical or practice judgment.  A common criticism of the cookbook approach to EBP is that it impinges upon the ability of clinicians to make clinical judgments, but that is only true if one does not recognize that EBP is itself a judgment.  It is a form of clinical judgment and should occupy an important space within clinical practice.  This is an important point because research information seldom fits the contexts of practice exactly.  Applying this type of information is not direct, but requires making an informed judgment.  This is also an area that needs additional study.  I don’t believe that we yet fully understand all aspects (especially the cognitive aspects) of clinical judgment.

Why Interpretation is the Cornerstone of Evidence-based Data Driven Practice

This post responds to a comment by Richard Puyt, where I thought I would try to explain my ideas on interpretation and evidence more completely.

First, a first-order belief of mine: data-driven practices that are supported and shown to be valid by research evidence are the best way to establish or improve business practices.  Common sense is not a good way to run your business, because it is often wrong.  However, you also need a good theory or mental framework to make sense of your data, and you need a broad evaluation framework to understand and explain how research relates to your practice.  Without good frameworks, your level of analysis falls back to common sense, no matter how much data you have available.  It simply becomes a case of garbage in = garbage out.

This is the point Stanley Fish makes in the NY Times Opinionator blog when he says:

. . . there is no such thing as “common observation” or simply reporting the facts. To be sure, there is observation and observation can indeed serve to support or challenge hypotheses. But the act of observing can itself only take place within hypotheses (about the way the world is) . . . because it is within (the hypothesis) that observation and reasoning occur.  (I blogged about this before here)

Your observations, be they data, measures or research results, need to be interpreted, and that can only occur within an interpretive framework such as a good theory or hypothesis.  Furthermore, the quality of your analysis will depend as much on the quality of your interpretive framework as it does on the quality of your data.

Examples

Performance Measurement:  (I previously blogged about this here.)  Any performance measure implies a theoretical rationale that links performance with the measure.  This theoretical relationship can be tested, validated and improved over time.  It is not just that you are using a data-driven performance system, but that you also have a well-supported way of interpreting the data.  (A minimal sketch of this kind of test follows below.)
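
Here is a minimal, hypothetical sketch of testing the rationale behind a performance measure: checking whether the measure actually tracks the outcome it is supposed to indicate.  The measure, the outcome, and all data below are invented for illustration.

```python
# Hypothetical sketch: test the theoretical link between a performance
# measure (weekly customer contacts) and the outcome it is supposed to
# indicate (quarterly sales). All data are invented.

from statistics import correlation  # available in Python 3.10+

contacts_per_week = [12, 18, 9, 22, 15, 27, 11, 20]
quarterly_sales = [34, 41, 28, 55, 39, 60, 30, 47]  # in thousands

r = correlation(contacts_per_week, quarterly_sales)
print(f"Correlation between measure and outcome: {r:.2f}")

# A weak or unstable link suggests the interpretive rationale behind the
# measure, not just the data collection, needs to be revised.
if abs(r) < 0.3:
    print("Weak link: the measure may not mean what the rationale assumes.")
```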

Research Evidence: When conducting a quantitative study, care is taken in choosing a sample and in controlling for a wide range of potential confounding variables.  The resulting effects may show a causal relationship that can be trusted.  However, you cannot then assume that these results can be applied directly to your business, where all the confounding variables are back in play and where the sample and context may be different.  The study may be a very important piece of evidence, but it should be only one piece in a larger body of evidence.  It can be the basis for a theory (what Fish calls a hypothesis) and for a practice that is data driven (what Fish calls observation), but that practice needs to be tested and validated on its own merit, not simply because it relates to a research study.

This is the basis of evidence-based, data-driven practice: good data, derived from good measures, with a good understanding of how the measures relate to your practice, an understanding that is tested over time.  This is not too hard to do and should be a foundation of business education.

Evidence and Interpretation: Two Sides of the Same Coin

Robert Aronowitz wrote an interesting historical analysis of the background of the current mammogram debate titled Addicted to Mammograms.  His analysis provides another layer of meaning to the debate, which is my point: evidence must be interpreted.  Aronowitz suggests that the people at the Preventive Services Task Force, who made recommendations based on their interpretation of the evidence, didn’t understand how it would be re-interpreted in the media and the health industry, especially in the context of the current health insurance reform debate.  But evidence and interpretation are two sides of the same coin.  One side may be stamped with permanent marker and the other with erasable marker, but you can’t have a one-sided coin.  . . . Well, . . . maybe physicists can have a one-sided coin, but not the rest of us mortals.