Ideas for Developing Expert Practitioners of Evidence-based Management

I’ve previously discussed ways to implement Evidence-based Management here.  Today I ask a related question: how do we prepare practitioners to become experts at using evidence?  The work of Carl Wieman points in a relevant direction, suggesting that knowledge of evidence is not sufficient to make us expert users of evidence.

Wieman begins with evidence that scientific coursework was not preparing students to be experts in scientific problem solving, at least not until they gained experience as assistants in his physics lab.  Introductory physics courses did not seem to be working as expected.

On average, students have more novice like beliefs after they have completed an introductory physics course than they had when they started; this was found for nearly every introductory course measured. More recently, my group started looking at beliefs about chemistry. If anything, the effect of taking an introductory college chemistry course is even worse than for taking physics.

Wieman describes novices as people who can see only isolated pieces of information, learned by memorization and understood as disconnected from the world.

To the novice, scientific problem-solving is just matching the pattern of the problem to certain memorized recipes.

On the other hand, experts see coherent structures or frameworks of evidence-based concepts.  The way experts solve problems involves strategies that are systematic, concept-based, and applicable to new and different contexts.  Wieman points out that experts have substantial knowledge, but it becomes important only when it is used within expert conceptual structures.  From a teaching and assessment point of view, assessing only what experts know will leave you ignorant of the ways experts use knowledge.  You must understand the frameworks within which knowledge is used.

Everything that constitutes “understanding” science and “thinking scientifically” resides in the long-term memory, which is developed via the construction and assembly of component proteins. So a person who does not go through this extended mental construction process simply cannot achieve mastery of a subject.

Now I generally follow constructivist ideas, but I don’t believe we should focus on a naive constructivist pedagogy.  The issue is not knowing, even if you find a way to construct your knowledge; it is about doing, and the way that knowledge enables you to do things.  I believe Wieman is advocating teaching methods that promote this type of knowledge use.  If you use constructivist pedagogy but remain focused only on a body of knowledge, your results will not substantially improve.  What Wieman points out about learning reinforces the notion that our brains are wired for action, in ways that link learning and motor control.  We are not made to know only, but to know in the process of doing.

A second point: this also illustrates a case demonstrated by Engel (2010) that is relevant here.  Engel noted that “developmental precursors don’t always resemble the skill to which they are leading”.  (I’ve discussed this here.)  Students who are learning in Wieman’s physics lab are:

focused intently on solving real physics problems, and I (Wieman) regularly probe how they’re thinking and give them guidance to make it more expert-like.  After a few years in that environment they turn into experts, not because there is something magic in the air in the research lab but because they are engaged in exactly the cognitive processes that are required for developing expert competence.

A diverse body of knowledge is a necessary but insufficient condition.  Even though knowledge is necessary, accumulating a body of knowledge is not by itself a developmental precursor of expert performance.

That leaves the question: what does expert practice look like in management?  What do successful managers do, how do we get students to think deeply about what to do with management problems, and in what cognitive processes should they be involved?  Overall, I remain an advocate for bridging academia and the world of practice through some type of supervised practicum.

Mathematics in the Real World: Are Your Uses of Numbers Valid?

It is my premise that most people do not really understand how to use mathematics strategically in a concrete world.  They don’t think much about what the numbers mean, and meaning is everything if you want to know what the numbers are doing.  At its heart, math is an abstraction: an idea not connected to real-world circumstances.  (See Steven Strogatz’s NY Times article for a detailed look at math and its misuse in education pedagogy.)

The trick to understanding and using math in the real world can often be traced to how we devise the measurements that define the meaning of the numbers that are then treated mathematically.  Let’s look at some problems relating to the use of numbers and how their meaning is misunderstood.

Problem #1: Educational Testing – Measurement should always be designed to serve a goal; goals should never be designed to fit a measurement protocol.  This is why proficiency testing will never help education, and it is the core idea behind a recent New York Times editorial by Susan Engel.  Current public school measures do not reflect the capabilities we need to develop in students.  It’s not bad that people teach to the test; what’s bad is that the test itself is not worth teaching to.

Our current educational approach — and the testing that is driving it — is completely at odds with what scientists understand about how children develop . . . and has led to a curriculum that is strangling children and teachers alike.

(Curriculum should reflect) a basic precept of modern developmental science: developmental precursors don’t always resemble the skill to which they are leading. For example, saying the alphabet does not particularly help children learn to read. But having extended and complex conversations during toddlerhood does. (What is needed is) to develop ways of thinking and behaving that will lead to valuable knowledge and skills later on.

The problem we see in current testing regimes is that we’re choosing to test for things like alphabet recall for two reasons.

  1. We base measures on common-sense linear thinking, like the idea that you must recognize letters before recognizing words, and words before using them to build statements.  But in fact (as Ms. Engel’s article points out) the psychological process of building complex conversations is the developmental need of students, and that is rather unrelated to how thought is treated in schools and how curriculum is developed.  Developmental needs should be studied for scientific validity, not left to common sense.
  2. The current measurement protocols behind proficiency testing are not very good at measuring things like the ability to participate in complex conversations; that ability simply doesn’t translate well to a multiple-choice question.  We could develop rubrics to do it, but it would be hard to prove that the rubrics were being interpreted consistently.  So instead we test abilities that fit the testing protocol, even if they are rather irrelevant (read: invalid) to the capabilities we really want to foster.
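
Rubric consistency is an inter-rater reliability question, and there are standard statistics for it.  As a minimal sketch (the ratings below are invented, and the teachers and rubric are hypothetical), Cohen’s kappa measures agreement between two raters after subtracting the agreement you would expect by chance:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical teachers scoring the same ten conversation samples
# on a three-level rubric.
teacher_1 = [2, 1, 3, 2, 2, 1, 3, 3, 2, 1]
teacher_2 = [2, 1, 3, 1, 2, 1, 3, 2, 2, 1]
print(round(cohens_kappa(teacher_1, teacher_2), 2))  # -> 0.7
```

Raw agreement here is 80%, but kappa is about 0.70 once chance agreement is subtracted; whether that counts as “interpreted consistently” is exactly the argument a rubric-based protocol would have to make.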

Problem #2: Business Analytics – Things like analytics and scientific evidence are used mostly for processes and activities that can be standardized: ways of doing things where there is clearly a best way that can be scientifically validated and repeated.  The problem occurs when we try to achieve this level of certainty in everything, even where science has little to say about the matter.  Math is not about certainty; it’s about numbers.

The problem, says (Roger) Martin, author of a new book, The Design of Business: Why Design Thinking is the Next Competitive Advantage, is that corporations have pushed analytical thinking so far that it’s unproductive. “No idea in the world has been proved in advance with inductive or deductive reasoning,” he says.

The answer? Bring in the folks whose job it is to imagine the future, and who are experts in intuitive thinking. That’s where design thinking comes in, he says.

The problem with things like Six Sigma and business analytics is that you need to understand what the method is doing mathematically, not just follow a process.  If you’re just applying it without understanding what it’s doing, you’ll try to do things that make no sense.  It is usually not a problem with the mathematical procedures; it’s a problem with what the numbers mean, how they are derived, and what is done as a result of the calculations.  There is nothing worse than following a procedure without understanding what it is doing or accomplishing.  Martin’s basic thought that innovation and proof are incompatible is false.  The real problem is a lack of understanding of how mathematics and proof can be used in concrete situations.

Problem #3: Use of the bell curve in annual reviews and performance management.

A recent McKinsey article (Why You’re Doing Performance Reviews All Wrong, by Kirsten Korosec) generated a lot of negative comments from people forced to make their reviews correspond to a bell curve.  In statistics, when a measured quality is the sum of many small independent influences, a large enough random sample of it will tend to resemble a bell curve: large in the middle and tapering off at either end.  But performance management is about fighting the bell curve; it’s about improving performance and moving the curve.  If you have to fit your reviews to a bell curve, you’re making performance look random.  That’s exactly what you do not want to do.  Once again we see a management practice that uses mathematics without understanding what it is doing.
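
To make the point concrete, here is a minimal sketch (invented ratings) of what a forced distribution actually does.  The curve comes from the quota list, not from the data, so any information about the team’s real level or spread is thrown away:

```python
# Invented ratings for a strong team whose true performance clusters high.
true_scores = [78, 82, 85, 86, 88, 90, 91, 93, 95, 97]

# A typical forced distribution: bottom 10% -> "1", next 20% -> "2",
# middle 40% -> "3", next 20% -> "4", top 10% -> "5".
quotas = [(0.10, 1), (0.20, 2), (0.40, 3), (0.20, 4), (0.10, 5)]

def force_curve(scores, quotas):
    """Assign quota-based labels to scores, lowest scores first."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    labels = [0] * len(scores)
    start = 0
    for fraction, label in quotas:
        count = round(fraction * len(scores))
        for i in ranked[start:start + count]:
            labels[i] = label
        start += count
    return labels

print(force_curve(true_scores, quotas))
# -> [1, 2, 2, 3, 3, 3, 3, 4, 4, 5]: the 78 earns a "1" even though
# every rating here would be strong on a weaker team.
```

Whatever the team actually did, someone must be labeled a bottom performer; that is the sense in which forced ranking makes performance look random.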

What’s needed?  The valid use of mathematics, not the random use.

The basic problem is that mathematics is abstract, but human activity is concrete.  If we want to bridge these two worlds (and, as Strogatz explains, they really do seem like parallel universes), we must build a bridge of understanding called validity.  Validity is the scientific study of how the concrete is made abstract and how the abstract is made concrete.  It’s an explicit theory of how a scope of activities can be represented by numbers, laid out so that it can be argued about and understood.  You can do amazing things with mathematics in the real world, but only if you understand what you are doing: how the abstract and the concrete are related, and how numbers can represent the world of human activity.

Concept clarification of Evidence-Based Management

My current series of posts is centered on clarifying the meaning of being evidence-based, and a recent article (Briner, Denyer & Rousseau, 2009) falls right in line with this task.  The article focuses on four key points in clarifying EBMgt.

1. EBMgt (Evidence-based Management) is something done by practitioners, not scholars.

One caveat here: the implication that practitioners do not need to be scholars.  The type of scholarship and scholarly activity may differ, but evidence-based practice is based on scientific inquiry and requires a certain level of knowledge and thought.  People often speak of this as a knowledge age, which, if true, means that more and more people need a better understanding of various forms of scholarship.  Understanding science is often a foundation of educational programs designed to prepare evidence-based practitioners.  The scientific tasks of practitioners will differ from other types of scholarship; it is scholarship focused on what’s relevant to practice.  It’s true that practitioners often find current scholarship irrelevant, but there is a type of scholarship that will drive the evidence-based movement.

2. EBMgt is a family of practices, not a single rigid formulaic method.

Determining the validity of one’s practice focuses on the total context of practice.  Both its methods and the types of evidence required are multifaceted.

3. Scholars, educators, and consultants can all play a part in building the essential supports for the practice of EBMgt. To effectively target critical knowledge and related resources to practitioners, an EBMgt infrastructure is required; its development depends on the distinctive knowledge and skills found in each of these communities.

Well said!  I also hope to see related innovative thinking in these communities.

4. Systematic reviews (SRs) are a cornerstone of EBMgt practice and its infrastructure, and they need to possess certain features if they are to be informative and useful.

I believe the infrastructure should focus on systematic reviews that go beyond “what works” in a simplistic fashion, toward the total needs of practitioners who are developing their practice by means of scientific inquiry.  Major et al. (2009), in the December issue of American Psychologist, is a good example of a thorough review process.  Their article reviews the empirical research on the links between abortion and women’s mental health, a highly contested and politicized issue.  They first look at how relevant concepts and research questions have been framed by various studies.  They then consider various problems with the data before analyzing the results, organized by different parameters.  Because of this comprehensive approach, their conclusions not only provide a good empirical summation but will also contribute to practitioners’ understanding of the relevant issues from a number of perspectives, and of how those issues might relate to different practices.

My next post will focus on what types of knowledge (and hence what type of infrastructure) might be needed for the scientific inquiry of practitioners.


Major, B., Appelbaum, M., Beckman, L., Dutton, M.A., Russo, N.F., & West, C. (2009). Abortion and mental health: Evaluating the evidence. American Psychologist, 64(9), 863-890.

Briner, R.B., Denyer, D., & Rousseau, D.M. (2009). Evidence-based management: Concept cleanup time? Academy of Management Perspectives, 23(4), 19-32.

Why Interpretation is the Cornerstone of Evidence-based Data Driven Practice

This post responds to a comment by Richard Puyt; I thought I would try to explain my ideas on interpretation and evidence more completely.

First, a first-order belief of mine: data-driven practices, supported and shown valid by research evidence, are the best way to establish or improve business practices.  Common sense is not a good way to run your business, because it is often wrong.  However, you also need a good theory or mental framework to make sense of your data, and you need a broad evaluation framework to understand and explain how your research relates to your practice.  Without good frameworks, your level of analysis falls back to common sense, no matter how much data you have available.  It can simply become a case of garbage in = garbage out.

This is the point of Stanley Fish in the NY Times Opinionator Blog when he says:

. . . there is no such thing as “common observation” or simply reporting the facts. To be sure, there is observation and observation can indeed serve to support or challenge hypotheses. But the act of observing can itself only take place within hypotheses (about the way the world is) . . . because it is within (the hypothesis) that observation and reasoning occur.  (I blogged about this before here)

Your observations, be they data, measures, or research results, need to be interpreted, and that can only occur within an interpretive framework such as a good theory or hypothesis.  Furthermore, the quality of your analysis will depend as much on the quality of your interpretive framework as on the quality of your data.


Performance Measurement:  (I previously blogged about this here.)  Any performance measure implies a theoretical rationale that links performance with the measure.  This theoretical relationship can be tested, validated, and improved over time.  It is not just that you are using a data-driven performance system, but that you also have a well-supported way of interpreting the data.

Research Evidence: When conducting a quantitative study, care is taken in choosing a sample and in controlling for a wide range of potential confounding variables.  The resulting effects may show a causal relationship that can be trusted.  However, you cannot then assume that these results apply directly to your business, where all the confounding variables are back in play and where the sample and context may differ.  A study may be a very important piece of evidence, but it should only be a piece in a larger body of evidence.  This evidence can be the basis for a theory (what Fish calls a hypothesis) and for a practice that is data-driven (what Fish calls observation), but the practice needs to be tested and validated on its own merits, not simply because it relates to a research study.
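
The danger of confounders that are “back in play” can be made concrete with a toy Simpson’s-paradox sketch (all counts below are invented).  Success rates for two practices reverse once a lurking variable, project difficulty, is taken into account:

```python
# Invented counts: (successes, trials) for two practices, split by a
# confounder -- project difficulty -- that the aggregate numbers hide.
data = {
    "A": {"easy": (90, 100), "hard": (210, 300)},
    "B": {"easy": (255, 300), "hard": (65, 100)},
}

def success_rate(successes, trials):
    return successes / trials

for practice, groups in data.items():
    total_s = sum(s for s, _ in groups.values())
    total_n = sum(n for _, n in groups.values())
    by_group = {g: success_rate(*counts) for g, counts in groups.items()}
    print(practice, "overall:", success_rate(total_s, total_n), by_group)

# A wins inside every group (0.90 > 0.85 easy, 0.70 > 0.65 hard), yet B looks
# better overall (0.80 > 0.75), because B happened to draw the easier projects.
```

A controlled study is designed to rule this sort of thing out; your business data is not, which is why the study’s result is one piece of evidence rather than a direct prescription.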

This is the basis of evidence-based, data-driven practice: good data, derived from good measures, with a good understanding of how the measures relate to your practice, an understanding that is tested over time.  This is not too hard to do, and it should be a foundation of business education.

Evidence-Based Management as a Research/Practice Gap Problem

This is a response I made to a post on the Evidence Soup Blog about the potential demise of EBMgmt.
I’ve been thinking about the health of the movement in response to (Tracy’s) post, and I’m still surprised by the lack of EBMgmt discussion and how the movement does not seem to be gaining much traction. I re-read the Rousseau-Learmonth and the Van de Ven & Johnson-McKelvey exchanges for potential reasons why (both are in the Academy of Management Review, vol. 31, #4, 2006). Here’s my take after reading them:
(1) Cognitive, translation, and synthesis problems: Just like the example Rousseau gave in her Presidential Address, there are too many different concerns and issues floating about. We need the field to be more organized so people can get a better cognitive handle on what’s important. Also, I’m not sure peer review is the best strategy. When I did my dissertation, doing something exciting took a back seat to doing something bounded and do-able. I can’t imagine someone who’s publishing for tenure doing anything more than incremental work, and that does not translate well for cognitive reasons. We need a synthesis strategy.
Possible response – An EBMgmt wiki; see my 7-31 post on scientific publishing.
(2) Belief problems – Henry Mintzberg believes that managers are trained by experience and that MBA programs should be shut down (3-26-09 Harvard Business IdeaCast). He says that universities are good for that scientific-management stuff, but implies that science is only a small part (management is mostly tacit stuff). All the previously mentioned discussions noted that managers and consultants do not read the scientific literature. Part of the problem is communication (see #1), but part is current management paradigms that include little science.
Possible response – Far be it from me to suggest how to deal with paradigm change.
(3) Philosophical problems – If EBMgmt is to succeed, it must be presented as a post-positivist formulation. Taken at face value, it seems positivist; and positivism has been so thoroughly critiqued that I can see many people dismissing it out of hand. Part of my project is trying to be post-positivist without throwing out the baby with the bath water. Rousseau tries to mollify Learmonth’s concern in this area; she sees some of the issues, but I don’t see understanding. A positivist outlook will only lead you in circles.
Possible response – Much like your previous post, you need “both-and” thinking, not “either-or” thinking. EBMgmt must be an art and a science. This is how I understand the validity issue I’ve mentioned to you before: I use Messick’s validity as a model for post-positivist science. It’s also important because measurement is at the heart of science.
I would love your thoughts.

The Big Shift: Moving to a social-cultural-constructivist Educational Framework for Organizational Learning

While reading Jay Cross’s comments on John Hagel’s definition of the Big Shift, it occurred to me that this is really a redefining of knowledge management within a framework that would be acceptable to a social-cultural constructivist.  Here is a list of Hagel’s definition categories and my thoughts about them.
From knowledge stocks to knowledge flows: I interpret this as a shift from attempting to objectify knowledge to recognizing that knowledge is bounded by people and contexts, and that it becomes useful when actualized in real-time processes.  You don’t need a database of content written for different contexts and different times.  Instead you need access to conversations with people who share a degree of understanding (cognitive context).
From knowledge transfer to knowledge creation:  Constructivism is often considered synonymous with discovery learning, which I don’t think is correct, but learning is a building process.  Except for modeling (think: mirror neurons), transfer isn’t a valid metaphor for learning; better metaphors are creating, building, or growing.  These are nearly literal metaphors if you think of learning as the neurology of synaptic development.  Knowledge creation is often achieved by synthesizing new connections between pieces of previous knowledge, and learning is represented neurologically by making new connections between existing neurons.
From explicit knowledge to tacit knowledge:  I really don’t like the term tacit knowledge; I’ve never seen a good definition.  Sometimes it means explicit knowledge that hasn’t yet been well expressed; sometimes it refers to contextual elements.  I’ve always believed that knowing exists only for doing things, the idea that the deed preceded the word.  Sometimes talk of explicit knowledge is just an attempt to ascribe more capability to abstract knowledge than it can handle.  Let’s just accept that knowing is for doing; it’s one of the main reasons for getting learning out of the classroom and into the world.  Hagel doesn’t seem to recognize this yet, which is why I don’t get much value from his paragraph on tacit knowledge.
From transactions to relationships:  Trust is indeed becoming more and more important.  I also relate the idea of trust to Umair Haque’s idea of profiting by creating thick value, doing things that make people’s lives better.  I believe the transition from transactions to relationships, and from thin value to thick, has a lot more to do with financial and accounting frameworks than appears on the surface.  The financial setup has to fit the situation correctly, especially if finance is driving your activity.
From zero-sum to positive-sum mindsets:  This has a lot to do with boundary crossing, open source, and the aforementioned transactions-to-relationships paragraph.  A major goal of every organization should be identifying its zero-sum process pockets and thinking about moving them to positive-sum frameworks.  Often the key is not in the processes themselves, but in the frameworks and cultural understandings that support those processes.
From push programs to pull platforms:  People tend to think of social media here, but that’s just a technology platform.  What is needed first is a cultural platform that makes employees partners, and then a relationship platform that blurs organizational boundaries so there is a network to pull from.  While technology can facilitate much, people are the foundation and institutions are important facilitators.
From stable environments to dynamic environments:  This is not a choice; environments are becoming more dynamic.  The trick is to develop resilience: the ability to identify when change is needed and to adapt in a timely fashion, without letting change become disruptive from a cognitive or workflow standpoint.  Sense, learn, respond; it needs to happen all the time and at all levels.  Organizations can cope if individuals are always learning and striving to improve (something I believe is part of human nature, provided organizations do not build structures that stifle it) and if organizations take steps to keep their policy structures flexible.  Refer to the previous paragraph on transactions and relationships: it is important that employees trust their organization and that the organization trust its employees.  It’s about creating thick value through and through.
Again, this is all pretty much consistent with a social-cultural constructivist psychological and educational framework.  Previous ideas about knowledge management could be thought of as a management corollary of positivist psychology: a rationalist view that just doesn’t square with the way things seem to work in real life.

Network ROI

An interesting IBM article I was thankfully pointed to by the Evidence Soup Blog:
Wu et al. (2009), Value of Social Networks: A Large-Scale Analysis on Network Structure Impact to Financial Revenue of Information Technology Consultants.
Care is needed in interpreting this research, as it is correlational and cannot establish causation, but I would emphasize three findings:

  1. I believe there is evidence that diversity in project teams (perhaps coupled with good communication skills) improves performance.  Wu provides evidence that this could apply to communication networks as well.
  2. Having access to powerful individuals (in the hierarchy) improves the performance of project teams.
  3. The social networks of the entire project team seem to be more important than the networks of individuals.

This would imply that companies should encourage the development of strong, diverse project-team networks and should support the involvement of upper-level management in project-team networks.

Channeling Pandora: Ideas from a 2007 Interview with Management Professor Bill Starbuck

Reading through documents from the Stanford Evidence-based Management Blog site, I came across an interesting article: (Un)Learning and (Mis)Education Through the Eyes of Bill Starbuck: An Interview with Pandora’s Playmate (Michael Barnett (2007), Academy of Management Learning and Education, 6(1), 114-127).

Starbuck seems to be concerned with two things: (1) methodological problems in research and (2) un-learning, or fossilized behavior, in organizations.
On methodology:  You can’t just apply standard statistical protocols in research and expect to get good answers.  You must painstakingly build a reasonable methodology, fitting your methods to contexts and tasks, much as you would fit together the pieces of a puzzle.  A personal example: I consulted practically every textbook I had when developing methods for my dissertation, but the most common were introductory statistics texts.  I kept asking myself: what am I doing, what are the core statistical concepts I need, and how can I shape my methods to fit my tasks to those concepts?  Almost all advanced statistical techniques are extrapolations of the concepts found in introductory statistics, and you can’t really understand how to use these advanced procedures until you understand their core and how they fit your methodological circumstances.  As Starbuck points out, statistical significance is the beginning of results reporting, not the end; you must go on to build a case for substantive importance.  He notes that mistakes are common in reporting effect sizes.  I believe this often happens because people simply apply a statistical protocol instead of understanding what their statistics are doing.
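
The significance-versus-importance distinction can be sketched numerically (illustrative numbers only; a simplified two-sample z-test with equal group sizes and a known standard deviation).  With a huge sample, a trivially small difference clears the significance bar while the effect size stays negligible:

```python
import math
from statistics import NormalDist

def two_sample_z(mean1, mean2, sd, n):
    """Two-sample z-test with equal known sd and equal group sizes."""
    se = sd * math.sqrt(2 / n)              # standard error of the difference
    z = (mean1 - mean2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    d = (mean1 - mean2) / sd                # Cohen's d (effect size)
    return p, d

# A trivial true difference (half a point on sd = 10) with a huge sample:
p, d = two_sample_z(100.5, 100.0, sd=10, n=10_000)
print(f"p = {p:.1e}, Cohen's d = {d:.2f}")
# p clears any conventional significance threshold, yet d = 0.05 is negligible.
```

This is exactly why significance is only the beginning of results reporting: the p-value says the difference is probably real, while the effect size says whether it matters.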

A favorite issue of mine (which Starbuck implies but does not address directly) is the lack of a theoretical framework.  Without theory, you are flying empirically blind.  Think of four blind men empirically describing an elephant by holding its trunk, a leg, the body, and the tail.  Vision (or collaboration) would have allowed the men to “see” how their individual observations fit together as a whole.  You must begin with the empirical, but reality is always larger than your empirical study, and you need the “vision” of a theoretical framework to understand the larger picture and how things fit together.  Theory is thus an important part of your overall methodological tack.

On (un)learning: Starbuck discusses the need to unlearn, to change organizational processes in response to a changing environment.  The problem is that past successes cloud your vision, obscuring the fact that what worked before is no longer working.  It is not that people can’t think of what to do to be successful; it’s that they already know what to do, and that belief keeps them from seeing that there is a problem at all, or from seeing the correct problem.  Starbuck talks about problem solving in courses he taught: people often found that the problem they needed to solve was not the problem they initially had in mind.  Their biggest need was to change beliefs and perspectives.

The psychologist Vygotsky spoke of something very similar as fossilized behavior.  When someone is presented with a novel problem, they must work out the solution process externally.  Later the process is internalized to some extent and becomes somewhat automated, requiring much less cognitive load.  With more time it can become fossilized: behavior that is no longer tied to a process or reason, but continues as a sort of tradition or habit.  This applies at the organizational level as well as the individual psychological level.  I would like to investigate the concept of organizational resilience as a possible response to fossilized organizational behavior, as well as a way of responding to extreme events; it would emphasize the ability to change in response to environmental demands.  Starbuck thinks that constant change is too disruptive to organizations, but I believe there may be a combination of processes, capabilities, and diversity that enables organizations to sense and respond, not necessarily constantly, but reasonably, on an ongoing basis as well as when the inevitable black swan happens.

Beware the Statistical “Dragon King”

A power law describes a relationship between two quantities, typically between event frequency and event size, where the larger the event, the lower the frequency.  (Think of a long-tail distribution.)  Recently, rare, large, improbable events have been referred to as black swans (Taleb, 2007).  Predicting black swans is difficult because there are too many variables and unknowns, but their effect sizes make them too problematic to ignore.

The Physics arXiv Blog recently discussed a proposition by Didier Sornette at the Swiss Federal Institute of Technology.  Sornette says that outsized outliers are more common than they should be under power-law distributions because they are subject to feedback loops; he terms these outliers dragon kings.  In real-life examples (at least it seems to me), these feedback loops are often social, a case of jumping on the bandwagon.  This is another reason that extreme events are much more common than power laws alone would suggest.

This is very relevant for risk-management calculations.  If you are preparing for potential risks, beware not only of black swans (rare events with large effects that are hard to predict because you don’t know the future of many variables), but also of dragon kings (feedback loops that increase the effect size of somewhat rare events, making them more common than a power-law distribution would predict).  This provides a rationale for developing resilient organizations, able to change quickly in response to environmental events, instead of relying on cost-probability decision matrices.
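
To see why power-law tails already demand different intuitions than bell curves, before any dragon-king correction, compare tail probabilities.  A sketch with an assumed Pareto tail exponent of 2 (the exponent and the normal scale are illustrative choices, not fits to any data):

```python
from statistics import NormalDist

# Tail probability of an event at least k times the minimum size under a
# Pareto power law, P(X > x) = x**(-alpha) with x_min = 1, versus a normal
# distribution given the same "typical" scale (mean 1, sd 1).
alpha = 2.0          # assumed tail exponent, for illustration only
normal = NormalDist(mu=1.0, sigma=1.0)

for k in (3, 5, 8):
    pareto_tail = k ** -alpha
    normal_tail = 1 - normal.cdf(k)
    print(f"size >= {k}: power law {pareto_tail:.1e}, normal {normal_tail:.1e}")

# The power-law tail stays orders of magnitude fatter, and Sornette's point is
# that feedback loops push dragon kings even above that already-fat tail.
```

A risk model built on bell-curve intuitions will call an eight-sigma event effectively impossible; under a power law it is merely uncommon, and under dragon-king feedback it is more common still.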

We Need Innovation in Creating Thick Value

A Live Science article, Obama: Key to Future Is Innovation by Robert Roy Britt, discusses Obama’s call for innovation through education and controlling health-care costs.  The problem is that these concepts are a little too vague and unfocused.  What we need most is innovation in value creation similar to that called for by Umair Haque, who writes in the Edge Economy Blog about The Value Every Business Needs to Create Now.  He advocates thick value.  Thin value is “built on hidden costs, surcharges, and monopoly power”, while thick value is “awesome stuff that makes people meaningfully better off”.  As a country we either need people to shift to thick value, or we need to start picking winners and losers, with the losers coming from the ranks of the thin group.  If we reduce the cost of health care, the savings have to come out of someone’s pocket unless that someone starts to create thick value.