A Marketing Plan for Promoting Evidence-based Management

What is needed for evidence-based management to become a relevant business concept?  I believe it will move in that direction when managers clearly understand the incentives for using it, when they can easily see how to translate and integrate it into their current responsibilities, and when they understand how to change the relevant behaviors.  This issue can be seen as a marketing problem and, as Nancy R. Lee (2009) says, “words alone don’t often change behaviors.  We need products, and incentives, and convenient distribution channels as well”.  (Lee’s topic is social marketing to alleviate poverty and its associated problems, but the principles apply equally to a poverty of business acumen, a topic that should include EBMgmt.)  Lee’s recommended approach in this podcast is a standard marketing approach broken down into a ten-step plan:
A Ten-Step Marketing Plan
  1. Provide a clear rationale and statement of purpose
  2. Conduct a situational analysis with organizational strengths and weaknesses and environmental opportunities and threats
  3. Segment the heterogeneous market; then choose, prioritize and strategize for the needs of specific target audiences.
  4. Identify the behavior(s) to be changed, emphasizing simple and doable tasks.
  5. Listen to the voice of the customer for perceived barriers and reasons why they do not perform the behavior now.
  6. Form a positioning statement describing how you wish the audience to view the behavior and its benefits.
  7. Develop a strategic mix of marketing tools that includes the right product, price/incentive, place, and promotion.
  8. Develop a plan to evaluate outcomes.
  9. Budget for implementation
  10. Plan how the campaign will roll out.
This approach would most likely fall under a consultative model of business services.  Besides marketing to specific segments of the management-services market, it would also require the development of quality products that can support the needed behaviors and understandings.  These products are likely to be mostly educational and conceptual in nature and would include concepts to help scaffold needed changes to behaviors and business processes.  Also needed would be additional appropriate distribution channels to build on the recognition and pre-knowledge of these concepts.  These could be business schools or professional organizations and publications.
I find this approach interesting because:
  • It acknowledges the difficulty in changing behavior and understandings,
  • It acknowledges that the goals of managers and researchers are different, and
  • It acknowledges that academic and scientific research would benefit from a well-formed translation strategy.
It would be nice to know if anyone sees any problems with an approach such as this, or knows of any other similar approaches.
Nancy R. Lee on How Social Networking Can Create Change for the Poor, podcast accessible on iTunes or at http://www.whartonsp.com/podcasts/episode.aspx?e=04d8fe16-c7e4-45be-a441-7d33a83384e8

Evidence-Based Management as a Research/Practice Gap Problem

This is a response I made to a post on the Evidence Soup blog about the potential demise of EBMgmt.
I’ve been thinking about the health of the movement in response to (Tracy’s) post, and I’m still surprised by the lack of EBMgmt discussion and how the movement does not seem to be gaining much traction. I re-read the Rousseau–Learmonth and the Van de Ven and Johnson–McKelvey exchanges for potential reasons why (both are in Academy of Management Review, vol. 31, no. 4, 2006). Here’s my take after reading them:
(1) Cognitive, Translation, and Synthesis Problems: First, just like the example Rousseau gave in her Presidential Address, there are too many different concerns and issues floating about. We need the field to be more organized so people can get a better cognitive handle on what’s important. Also, I’m not sure peer review is the best strategy. When I did my dissertation, doing something exciting took a back seat to doing something bounded and doable. I can’t imagine someone who’s publishing for tenure doing anything more than incremental work, and incremental work does not translate well for cognitive reasons. We need a synthesis strategy.
Possible response – An EBMgmt wiki. See my 7-31 post on scientific publishing at howardjohnson.edublogs.org
(2) Belief Problems – Henry Mintzberg believes that managers are trained by experience and that MBA programs should be shut down (3-26-09 Harvard Business IdeaCast). He says that universities are good for that scientific-management stuff, but implies that science is only a small part of management (management being mostly tacit). All the previously mentioned discussions noted that managers and consultants do not read the scientific literature. Part of the problem is communication (see #1), but part is current management paradigms that include little science.
Possible response – Far be it from me to suggest how to deal with paradigm change.
(3) Philosophical Problems – If EBMgmt is to succeed, it must be presented as a post-positivist formulation. Taken at face value, it seems positivist, and positivism has been so thoroughly critiqued that I can see where many people would dismiss it out of hand. Part of my aim is to be post-positivist without throwing out the baby with the bath water. Rousseau tries to mollify Learmonth’s concern, which touches on this area; she sees some of the issues, but I don’t see understanding. A positivist outlook will only lead you in circles.
Possible response – It’s much like your previous post: you need “both/and” thinking, not “either/or” thinking. EBMgmt must be both an art and a science. This is how I understand the validity issue that I’ve mentioned to you before. I use Messick’s validity as a model for post-positivist science. It’s also important because measurement is the heart of science.
I would love your thoughts.

Future EBMgmt Research Ideas

  1. I will need to think more about an evidence-evaluation framework and how Rousseau’s model might be enhanced by Messick’s model of validity, as discussed in my last post.
  2. Just as Messick said that validity is about test use, not tests in themselves, so evidence-based practice is about how the evidence is used in practice, not about the evidence itself.  This needs to be spelled out.
  3. The practice–research gap – Research validity generally becomes greater the more context can be controlled and parsed out of studies.  In evidence-based practice, evidence must be related to context to be valid.  The more confident you are of research results, the less confident you can be that they will relate to the constellation of factors seen in real contexts.  I don’t know how you can get beyond this without some applied research that puts research syntheses to the test.
  4. Practice is most often cross- or interdisciplinary.  This impacts the last point, but it also means that each practice relates to many potential disciplines.  Accumulating the vast amounts of relevant data will be next to impossible in a practical manner.  We need a technological solution through some sort of Web 3.0 or metadata approach, as well as a technological way to compile data.
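To make the metadata idea in point 4 concrete, here is a minimal sketch of the kind of evidence record such a system might store, with context tags that let a practitioner filter cross-disciplinary findings down to their own setting.  The `EvidenceRecord` class, its field names, and the sample records are purely illustrative assumptions of mine, not an existing standard or dataset.

```python
# Hypothetical sketch: evidence records tagged with context metadata,
# so accumulated findings can be filtered to a practitioner's setting.
from dataclasses import dataclass, field

@dataclass
class EvidenceRecord:
    claim: str                  # the finding being asserted
    discipline: str             # home discipline of the study
    context_tags: list = field(default_factory=list)  # settings where it was tested
    effect_size: float = 0.0    # reported magnitude of effect (illustrative)

def relevant_to(records, context_tag):
    """Filter accumulated evidence down to one practice context."""
    return [r for r in records if context_tag in r.context_tags]

# Invented sample records, for illustration only
records = [
    EvidenceRecord("goal setting raises output", "psychology",
                   ["manufacturing", "sales"], 0.42),
    EvidenceRecord("stack ranking lowers morale", "management",
                   ["software"], -0.30),
]
print([r.claim for r in relevant_to(records, "software")])
```

The point of the sketch is only that context must be first-class metadata, not an afterthought, if evidence is to be compiled at scale and still relate to practice.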

Considering the Validity of Evidence

In my last post I looked at an evidence-based framework that included evidence evaluation.  Denise Rousseau from Carnegie Mellon has extended the ability to evaluate evidence with a new model (full paper here) that includes these evaluation categories: Construct Validity, Internal Validity, Effect Size, Generalizability, Intervention Compliance, and Contextualization.  These categories correspond closely to the six categories of validity proposed by Messick (previously discussed here).

Rousseau Categories           Messick Categories
Construct Validity            Structural
Internal Validity             External (not a perfect match, but logically similar)
Effect Size                   External
Generalizability              Generalizability
Intervention Compliance       Substantive
Contextualization             Content

This is not an exact match category by category, but the way in which evidence is categorized for evaluation is very similar in approach and in purpose.  What Rousseau leaves out is consequential validity, and she does not address content and substantive validity in full.
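The correspondence in the table above can be expressed as a simple lookup, which also makes the gap visible mechanically: subtracting the Messick categories that Rousseau covers from Messick’s full set leaves exactly the consequential category.  This is just a restatement of the table as data, under my own assumption that Messick’s six categories are Content, Substantive, Structural, Generalizability, External, and Consequential.

```python
# The Rousseau-to-Messick mapping from the table above, as a lookup
rousseau_to_messick = {
    "Construct Validity": "Structural",
    "Internal Validity": "External",   # not a perfect match, but logically similar
    "Effect Size": "External",
    "Generalizability": "Generalizability",
    "Intervention Compliance": "Substantive",
    "Contextualization": "Content",
}

messick_categories = {"Content", "Substantive", "Structural",
                      "Generalizability", "External", "Consequential"}

# Which Messick categories have no Rousseau counterpart?
uncovered = messick_categories - set(rousseau_to_messick.values())
print(uncovered)  # consequential validity has no Rousseau counterpart
```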

A Framework for Integrating Evidence and Practice

Why do we need to consider evidence-based methodologies for our practices?  Because, as Jeffrey Pfeffer recently stated, belief often trumps evidence, and bias and false beliefs abound.  But implementing these methods is often not linear, rational, or easy.  Joanne Rycroft-Malone et al. (working in the medical field) have developed a model suited to this level of complexity.

The model is divided between evidence concerns (subdivided into research, clinical, and patient [or customer] concerns) and contextual concerns (subdivided into context, culture, leadership, and evaluation concerns).  See Figure 1.

Some of the lessons learned include:

    • Getting evidence into practice is not . . . a linear and logical process.
    • (This) framework attempts to represent the complexity of the processes involved in implementation . . ..
    • The nature of the evidence, the quality of the context, and the type of facilitation all impact simultaneously on whether implementation is successful.
    • Implementation is more likely to be successful when:
        • Evidence (research, clinical experience, and patient experience) is well conceived, designed, and executed, and there is consensus about it.
        • The context in which the evidence is being implemented is characterised by clarity of roles, decentralised decision making, transformational leadership, and a reliance on multiple sources of information on performance.
        • Facilitation mechanisms appropriate to the needs of the situation have been instigated.

The intended purpose of this framework is to provide practitioners with a tool to plan, implement, and track their own strategies for change.  The article also notes that research methods must match the research question being considered; randomised controlled trial (RCT) methods are not always the best way to frame research.
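The lesson that evidence, context, and facilitation “all impact simultaneously” can be sketched as a toy scoring function.  To be clear, the original framework is qualitative; the 1-to-3 ratings, the threshold, and the rule that any single weak element puts implementation at risk are my own illustrative assumptions, not the authors’ method.

```python
# Toy sketch: successful implementation as a joint function of
# evidence (E), context (C), and facilitation (F), each rated 1-3.
def implementation_outlook(evidence, context, facilitation, threshold=2.0):
    """All three elements matter at once; one weak element can
    undermine the rest (an assumption of this sketch)."""
    if min(evidence, context, facilitation) == 1:
        return "at risk"
    mean = (evidence + context + facilitation) / 3
    return "favourable" if mean >= threshold else "uncertain"

# Strong evidence and context cannot rescue weak facilitation
print(implementation_outlook(evidence=3, context=3, facilitation=1))  # at risk
print(implementation_outlook(evidence=3, context=2, facilitation=2))  # favourable
```

The design choice worth noticing is the `min` check: averaging alone would let strong evidence mask a hostile context or absent facilitation, which is exactly what the framework warns against.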

A Networks Model for Evidence-based Management and Knowledge Transfer

A couple of interesting reads this morning (Bandura 2006 and Guest 2007) are relevant to the topics of learning, performance support, knowledge transfer, and evidence-based management (EBM).  The bottom line:

(From Bandura) Knowledge transfer in many situations can be seen as a form of learning that proceeds through ongoing modeling with feedback and increasing approximation, not through an explanation of abstract information.

(From Guest) Practitioners do not generally change their practices as a result of abstract knowledge, but from the example of others in their organization or field (e.g., bankers looking to other bankers, or retailers looking to other retailers).

Furthermore, Guest laments the current state of EBM.  Changing it requires attention to the communication process (communicator, message, medium, and receiver) and the building of bridges (both traditional and non-traditional) between research and practice.  Guest is pessimistic about the readiness of the management field to address EBM.  I would disagree, and suggest the following based on Guest’s communication-process analogy:

  • Communicator – The concept of EBM is not an outcome; it is the bridge that can close the gap between researchers and practitioners.  However, the communicator must stand on this bridge, not on either shore.
  • Message – Standing on the EBM bridge, the most important aspect of research is validity: a view of validity that begins with the whole of the concept (not the narrow view of traditional research validity).  Research is not valid until the consequences of its use in practice can be demonstrated.  See a previous post on validity here, although I may need to do additional work on the validity concept.
  • Medium – In light of Bandura, the real medium of concern is the people in the practitioner’s network.
  • Receiver – We need to build up the scope and diversity of practitioners’ networks and the ability of these networks to act as learning models for evidence-based practices.