One Description of Science and the Basis for an Argumentative Approach to Validity Issues

I came across an interesting metaphor for science (and structured ways of understanding in general) in The Partially Examined Life podcast, Episode #8. Here is my take on the metaphor.

Imagine the world as a white canvas with black spots on it.  Over that, lay a mesh made of squares and describe what shows through the mesh.  We are describing the world, but as it shows through the mesh.  Change the mesh in size or in shape and we have a new description of the world.
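
To make the metaphor a little more concrete, here is a toy sketch (the "world," the spot density, and the mesh sizes are all hypothetical): the same canvas of spots yields a different description each time the mesh changes.

```python
# Toy illustration of the mesh metaphor: one world, many descriptions.
import random

random.seed(1)
# The world: a 12x12 canvas where some cells are black spots
world = [[random.random() < 0.3 for _ in range(12)] for _ in range(12)]

def describe(world, mesh_size):
    """Report, for each mesh square, whether any spot shows through."""
    n = len(world)
    return [
        any(world[r][c]
            for r in range(top, min(top + mesh_size, n))
            for c in range(left, min(left + mesh_size, n)))
        for top in range(0, n, mesh_size)
        for left in range(0, n, mesh_size)
    ]

# Coarser meshes give shorter, blurrier descriptions of the same world
for size in (2, 3, 6):
    desc = describe(world, size)
    print(f"mesh {size}x{size}: {sum(desc)}/{len(desc)} squares contain spots")
```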

Now, these descriptions are useful and allow us to do things, but they are not truth; they are description. They may be highly accurate descriptions of an actual world, but they are still descriptions. This is how science functions, and it is how science progresses and changes. It is also why I advocate an argumentative approach to validity in the use of scientific structures like assessment or the use of evidence. Older forms of validity (dependent on criterion validity) and much of the current discussion of evidence-based approaches are about accuracy in certain forms of description. But we must also allow for discussions of the mesh (to return to the metaphor). As in construct validity, any discussion of how the world is must also include a discussion of how the mesh interacts with the world to create the description.

In addition to methods like randomized controlled trials (RCTs), there is also a need for research into how we understand and rethink the assumptions that often go unexamined in research. RCTs are very good at helping us do things with very accurate descriptions (like describing linear causal processes). We also need research that uses other meshes, research that allows us to understand in new ways and facilitates our ability to do new and different things; to make progress.

Two Different Ways of Implementing Evidence-based Practice and their Different Requirements for Evidence

Intuitively, it seems to me that there are two ways of applying evidence in Evidence-based Management.

  1. One I’ll call evidence-based decision-making (EBDM): bringing evidence into decision processes.
  2. The other I’ll refer to as evidence-supported interventions (ESIs): specific practices that are empirically supported.

I suspect that EBDM will be a tougher nut to crack in practice. This is because decision-making is often context dependent, involves ill-structured problems, and can be cognitively complex (see March, 1991, for one take on this complexity). Decision processes require a higher level of interpretation of the evidence and can easily fall prey to logical errors. Most thinking on decision-making has stressed that research should begin by analyzing how people make decisions in real time, not as some sort of abstract logical process. As Daniel Kahneman (2003) puts it:

psychological theories of intuitive thinking cannot match the elegance and precision of formal normative models of belief and choice, but this is just another way of saying that rational models are psychologically unrealistic (p. 1449).

Nonetheless, evidence should inform decision processes, and I believe that evidence-supported protocols, as one example, can prepare the decision space for better decision-making outcomes. However, this type of process also begins to bring me closer to the second way of applying evidence: evidence-supported interventions.

Mullen, Bledsoe, & Bellamy (2008) define Evidence-supported Interventions (ESI) as

specific interventions (e.g., assessment instruments, treatment and prevention protocols, etc.) determined to have a reasonable degree of empirical support.

(Other names might include evidence-based practices, empirically supported treatments, or empirically informed interventions.) In implementation settings, ESIs function as standardized practices: practices where all or a portion of the operational, tactical, logistical, administrative, or training aspects conform to a specific and unified set of criteria. In other words, the context of implementation allows the practice to be replicated exactly as it was defined and constructed in the supporting research. For practice to be evidence-based, it is important that critical scrutiny flow both ways. If the context does not allow replication, or presents confounding variables and complexity not addressed in the research, that necessarily reduces the level of support that can be claimed for any research-supported practice.

There are many differences between EBDM and ESIs. I would like to focus here on the different role that theory plays in each. No data or practices are completely theory free; all are theory and value laden to some extent. Every datum, hypothesis, or piece of knowledge depends on assumptions and implications that are based in some way on theory. But they do not all depend on theory in the same way or to the same extent. I will borrow from Otto & Ziegler (2008) to explain how some of these differences can be ascribed to either causal description or causal explanation. First, I agree with Otto and Ziegler, who say that

Probably, it is fair to say that the quest for causal explanation is theory driven, whereas causal description is not necessarily grounded in theory.

ESIs, because they focus on replication, do not need to be as concerned with the fact that they rely on causal descriptions. EBDM, however, uses evidence in a more theoretical way than in the replication of a standard practice. Because it deals with complex and context-dependent reasoning, it needs evidence that is valid in a causally explanatory manner.

Two observations – From a strict positivist perspective, this creates a problem for EBDM because of the difficulty of achieving the necessary level of causal explanation. Positivism fares better with an ESI approach because it can depend on causal description. An EBDM approach must instead adopt an argumentative role in validating evidence. This is the approach that validity theory has taken. Validity theory began within a positivist framework centered on a criterion approach to validity. As it became more and more apparent that constructs (theoretical concerns) were the central issue, it adopted a unified construct validity perspective that required an argumentative approach. This is an approach where validity is never an either/or proposition, but rather a concern for the degree of validity achieved. While this is not necessarily the clearest approach, it is pragmatic and practical and can be implemented across a wide variety of practice settings.

Two Conclusions:

  1. EBDM is concerned with supporting naturalistic decision processes with evidence that is empirically and theoretically supported and can be easily included in that decision process.
  2. ESIs are concerned with practices, protocols, and processes that can function in a standardized manner through the replication of empirically supported research interventions.

References

Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. The American Economic Review, 93(5), 1449-1475.

March, J. G. (1991). How decisions happen in organizations. Human-Computer Interaction, 6, 95-117. Accessed 02-15-2010 at http://choo.fis.utoronto.ca/fis/courses/lis2176/Readings/march.pdf

Mullen, E. J., Bledsoe, S. E., & Bellamy, J. L. (2008). Implementing evidence-based social work practice. Research on Social Work Practice, 18(4), 325-338.

Otto, H., & Ziegler, H. (2008). The notion of causal impact in evidence-based social work: An introduction to the special issue on what works. Research on Social Work Practice, 18(4), 273-277.

How Does a University Create Value for Its Students: Does Current Practice Do This?

Recent blogging about university tenure processes and journal peer review is a reminder of how contestable knowledge production can be, especially when knowledge is not used as a tool for acting but instead becomes the valued commodity itself. It seems that the processes are corrupting the intent. As a social scientist, I think academics in general are looking at the wrong things and seeing the world in the wrong way.

The only current educational alternative seems to be the University of Phoenix model, which seems less than desirable. I have been an advocate of increased standardization for many educational tasks (as opposed to each teacher reinventing the wheel for standardizable tasks; this, according to Joshua Kim, seems to be what U of P is doing). But this type of standardized learning also occupies the lowest rung on the educational value chain.

The problem is that the traditional university is not doing much better. I believe the biggest problem is in the infrastructure (and this includes tenure, peer review, culture, and many other support structures). Current infrastructures were created for a different world. Our current world needs something new.

Higher education should be rethought along the lines of how universities do, or could, create value in the lives of their students and other stakeholders. I would love to be involved in a staged approach:

  • Stage 1. Standardized processes where students acquire background concepts and skill sets.
  • Stage 2. Internships: real jobs, created in collaborative efforts between universities and employers, structured to exist in conjunction with seminar-like courses focusing on higher-level cognitive, intellectual, and experiential development.
  • Stage 3. Ongoing lifelong alumni networks and post-graduate courses designed to add value to working professionals, maintain previous skill sets, and stay current on new developments. The final output is not an individual student, but a network. An employer would not only hire an individual, but an entire network of resources. Now that would be of real and lasting value. (For more on hiring a network and social capital, see Josh Letourneau’s post on the Fistful of Talent blog.)

Mathematics in the Real World: Are Your Uses of Numbers Valid?

It is my premise that most people do not really understand how to use mathematics strategically in a concrete world. They don’t think much about what the numbers mean, and meaning is everything if you want to know what the numbers are doing. At its heart, math is an abstraction: an idea that is not connected to real-world circumstances. (See Steven Strogatz’s NY Times article for a detailed look at math and its misuse in education.)

The trick to understanding and using math in the real world can often be traced to how we devise the measurements that define the meaning of the numbers that are then treated mathematically. Let’s look at some problems relating to the use of numbers and how their meaning is misunderstood.

Problem #1 Educational Testing – Measurement should always be designed to serve a goal; goals should never be designed to fit a measurement protocol. This is why proficiency testing will never help education, and it is the core idea behind a recent New York Times editorial by Susan Engel. Current public school measures do not reflect the capabilities we need to develop in students. It’s not bad that people teach to the test; what’s bad is that the test itself is not worth teaching to.

Our current educational approach — and the testing that is driving it — is completely at odds with what scientists understand about how children develop . . . and has led to a curriculum that is strangling children and teachers alike.

(Curriculum should reflect) a basic precept of modern developmental science: developmental precursors don’t always resemble the skill to which they are leading. For example, saying the alphabet does not particularly help children learn to read. But having extended and complex conversations during toddlerhood does. (What is needed is) to develop ways of thinking and behaving that will lead to valuable knowledge and skills later on.

The problem we see in current testing regimes is that we’re choosing to test for things like alphabet recall for two reasons.

  1. We base measures on common-sense linear thinking, like the idea that you must recognize letters before recognizing words, before using words to build statements. But in fact (as Ms. Engel’s article points out) the psychological process of building complex conversations is the developmental need for students, and it is largely unrelated to how thought is considered in schools and how curriculum is developed. Developmental needs should be studied for scientific validity, not left to common sense.
  2. The measurement protocols behind proficiency testing are not very good at measuring things like the ability to participate in complex conversations; that ability simply doesn’t translate well to a multiple-choice question. We could develop rubrics to do it, but it would be hard to prove that the rubrics were being interpreted consistently (see the sketch below). So instead we test abilities that fit the testing protocol, even if they are rather irrelevant (read: invalid) to the capabilities we really desire to foster.
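
Checking whether raters interpret a rubric consistently is at least a tractable statistical problem. Here is a minimal sketch using Cohen’s kappa, a standard measure of inter-rater agreement; the rubric scale and the scores below are hypothetical.

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement minus chance agreement, scaled to [-1, 1]."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Two teachers scoring the same 10 student conversations on a 1-4 rubric
teacher_1 = [3, 2, 4, 1, 3, 3, 2, 4, 2, 3]
teacher_2 = [3, 3, 4, 1, 2, 3, 2, 4, 1, 3]
print(f"kappa = {cohens_kappa(teacher_1, teacher_2):.2f}")  # ~0.58: moderate agreement
```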

Problem #2 Business Analytics – Things like analytics and scientific evidence are used in ways that relate mostly to processes and activities that can be standardized. These are ways of doing things where there is clearly a best way that can be scientifically validated and repeated. The problem occurs when we try to achieve this level of certainty in everything, even where there is little that science can say about the matter. Math is not about certainty; it’s about numbers.

The problem, says (Roger) Martin, author of a new book, The Design of Business: Why Design Thinking is the Next Competitive Advantage, is that corporations have pushed analytical thinking so far that it’s unproductive. “No idea in the world has been proved in advance with inductive or deductive reasoning,” he says.

The answer? Bring in the folks whose job it is to imagine the future, and who are experts in intuitive thinking. That’s where design thinking comes in, he says.

The problem with things like six sigma and business analytics is that you need to understand what they are doing mathematically and not just follow a process. If you’re just applying a technique, and you don’t understand what it’s doing, you’ll try to do things that make no sense. It’s not usually a problem with the mathematical procedures; it’s a problem with what the numbers mean: how the numbers are derived and what’s being done as a result of the calculations. There is nothing worse than following a procedure without understanding what that procedure is doing or accomplishing. Martin’s basic thought, that innovation and proof are incompatible, is false. The real problem is a lack of understanding of how mathematics and proof can be used in concrete situations.
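
As one illustration of "understanding what the numbers mean," consider the process capability index Cpk, one of six sigma’s standard numbers. The sketch below (the spec limits and measurements are hypothetical) shows that the formula will happily return a value for a drifting process, even though the number no longer means what the procedure assumes: Cpk presumes a stable process.

```python
# Cpk: distance from the process mean to the nearest spec limit, in 3-sigma units.
import statistics

def cpk(data, lsl, usl):
    """Process capability index; only meaningful for a stable process."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

stable   = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
drifting = [9.0, 9.3, 9.6, 9.9, 10.2, 10.5, 10.8, 11.1]  # mean is moving over time

print(cpk(stable, lsl=9.0, usl=11.0))    # ~2.5: a meaningful, capable process
print(cpk(drifting, lsl=9.0, usl=11.0))  # ~0.46: computable, but the "sigma"
                                         # mixes drift with noise, so it misleads
```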

Problem #3 Performance Management – The use of the bell curve in annual reviews.

A recent McKinsey article (Why You’re Doing Performance Reviews All Wrong, by Kirsten Korosec) generated a lot of negative comments from people forced to make their reviews correspond to a bell curve. In statistics we know that a large enough random sample of many naturally varying quantities will resemble a bell curve: large in the middle and tapering off at either end. But performance management is about fighting the bell curve; it’s about improving performance and moving the curve. If you have to fit your reviews to a bell curve, you’re making performance look random. That’s exactly what you do not want to do. Once again we see a management practice that uses mathematics without understanding what it is doing.
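
A minimal sketch (with hypothetical scores and quotas) of why forced ranking hides exactly what performance management is supposed to show: if everyone genuinely improves, the forced-curve ratings come out identical to last year’s.

```python
# Forcing ratings onto a fixed bell-shaped quota erases real changes
# in the underlying performance distribution.

def forced_curve_ratings(scores, quotas=(0.10, 0.20, 0.40, 0.20, 0.10)):
    """Rank employees by score, then hand out ratings 1-5 by fixed quota."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    ratings = [0] * len(scores)
    cut = 0
    for rating, q in enumerate(quotas, start=1):
        take = round(q * len(scores))
        for i in ranked[cut:cut + take]:
            ratings[i] = rating
        cut += take
    return ratings

last_year = [60, 62, 65, 68, 70, 72, 75, 78, 80, 85]
this_year = [s + 10 for s in last_year]  # everyone genuinely improved

# The rating distribution is identical both years; the improvement vanishes.
print(sorted(forced_curve_ratings(last_year)))
print(sorted(forced_curve_ratings(this_year)))
```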

What’s needed? The valid use of mathematics, not the random use

The basic problem is that mathematics is abstract, but human activity is concrete.  If we want to bridge these two worlds (and, as Strogatz explains it, they really seem like parallel universes) we must build a bridge of understanding that is called validity.  Validity is really the scientific study of how the concrete is made abstract and how the abstract is made concrete.  It’s an explicit theory of how the scope of activities can be represented by numbers, laid out so that it can be argued and understood.  You can do amazing things with mathematics in the real world, but only if you understand what you are doing, if you understand how the abstract and the concrete are related.  You must understand how numbers can represent and are related to the world of human activity.