Collective Intelligence and Distributed Decision Making

There has been a lot of information recently on the topic of collective intelligence and distributed decision making (Web 2.0, Decision 2.0, Project Management 2.0, etc.).

George Siemens' blog looks at the report Clickstream Data Yields High-Resolution Maps of Science. He notes: "The data is currently provided as images. It would be useful to navigate the resulting 'map of science' in an interactive application." When this comes to pass, it would represent a powerful Web 2.0 app and a data tool for ideas.

The Many Worlds blog discusses Eric Bonabeau's Sloan Management Review article on Decision Making 2.0 (1-9-2009). They note two concluding suggestions by Bonabeau:

“First, collective intelligence tends to be most effective in correcting individual biases in the overall task area of (idea) generation” and

‘Second, because most applications lack a strong feedback loop between generation and evaluation, “companies should consider deploying such feedback loops with greater frequency because the iterative process taps more fully into the power of a collective.”’

This could really be realized if two more developments followed the clickstream data work:

  1. Disciplinary research agendas became more self-organized by being more connected to the collective 2.0 world, because research is exactly the kind of feedback loop Bonabeau is calling for, and
  2. We had a better aggregator for research results. There is too much research and knowledge being generated to put it all to use, at least in a way that taps into collective intelligence. This would make the leap from idea generation 2.0 to evaluation 2.0. (A toy sketch of such a feedback loop follows this list.)
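To make Bonabeau's iterative generation-evaluation loop concrete, here is a toy sketch in Python. Everything in it (the target value, the scoring rule, the function names) is invented for illustration and is not drawn from the Sloan Management Review article.

```python
import random

# Toy sketch of a generation-evaluation feedback loop in the spirit of
# Bonabeau's suggestion. The target value and scoring rule are invented
# purely for illustration.

def generate_ideas(seeds, n=5):
    """The 'collective' proposes random variations on the best prior ideas."""
    return [seed + random.gauss(0, 1) for seed in seeds for _ in range(n)]

def evaluate(ideas, target=10.0):
    """Rank ideas by how close they come to the (toy) target."""
    return sorted(ideas, key=lambda idea: abs(idea - target))

def feedback_loop(rounds=10):
    seeds = [0.0]
    for _ in range(rounds):
        ideas = generate_ideas(seeds)
        seeds = evaluate(ideas)[:3]  # evaluation feeds the next generation
    return seeds[0]

print(feedback_loop())  # drifts toward the target as the loop iterates
```

The point of the sketch is the wiring, not the numbers: evaluation results are fed back as the seeds for the next round of generation, which is exactly the loop most collective-intelligence applications are said to lack.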

Finally, Andrew Filev in the Project Management 2.0 blog (referencing Seth Godin) says that collective intelligence still needs leadership (as in a leader of the project tribe). This seems like a re-introduction of bias into the system, but maybe some bias can be productive for getting things done. I'm not sure.

There Are Many Valuable Forms of Measurement in Social Science-Related Fields

A recent HBR article touts the benefits of ethnography at Intel (Ethnographic Research: A Key to Strategy. By: Anderson, Ken, Harvard Business Review, 00178012, Mar2009, Vol. 87, Issue 3). There are many types of measures in the social sciences. Each has its own strengths and weaknesses, and each has a place in your measurement repertoire. But, as this article points out, if you limit your view of measurement and science (or of data collection and how you are able to deal with different kinds of data), you will ultimately lose out.

Some people may not like this kind of viewpoint. It tends to broaden one's field of vision, and many people like to stay narrow and focused. There is a time and a place for narrow and focused, but there is also a time and a place for broad. It reminds me of something Martin Buber wrote (paraphrasing):

Only a fool gives someone three choices. The wise man gives only two choices: one that is obviously good and one that is obviously evil.

I do hope I am correct in reading this sarcastically. If this is indeed the knowledge age, we need lots of people who can deal with three or more choices on a regular basis.

A Measure of Process Standards Can Become a Key to Unleashing Creativity

When you're a hammer, everything looks like a nail. I have to be careful, or everything looks like a measurement opportunity to me. Nonetheless, I can't deny that there seem to be opportunities to implement better measures supporting evidence-based practice (as I suggested in my last post). I think the process would go something like this:

  • Identify and scope out the domains of interest that are important to you.
  • Conduct systematic reviews to establish a description of the processes that represent best practices within each domain.
  • Develop a descriptive questionnaire to allow an organization to compare its current practice with best practices.
  • Initiate a change project based on a capability maturity model of process change.

The best-practice questionnaire becomes the focal point. It is the measure of your organization's current performance, and it provides a prescription for where you're headed. It's easy to understand. Also necessary are outcome measures that provide feedback on the validity of the standards for your organization.
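As a minimal sketch of how such a questionnaire could serve as both measure and prescription, consider the toy scoring scheme below. The domains, responses, and maturity bands are all invented for illustration; they are not taken from any published standard or maturity model.

```python
# Toy scoring scheme for a best-practice questionnaire. The domains,
# responses, and maturity bands below are invented for illustration,
# not taken from any published standard or maturity model.

questionnaire = {
    "planning":    [5, 4, 3],  # item responses on a 1-5 conformance scale
    "measurement": [2, 3, 2],
    "review":      [4, 4, 5],
}

def domain_score(responses):
    """Average conformance with best practice for one domain."""
    return sum(responses) / len(responses)

def maturity_level(score):
    """Map a 1-5 conformance score onto a rough maturity band."""
    if score >= 4.5:
        return "optimizing"
    if score >= 3.5:
        return "managed"
    if score >= 2.5:
        return "defined"
    return "initial"

for domain, responses in questionnaire.items():
    score = domain_score(responses)
    print(f"{domain}: {score:.1f} -> {maturity_level(score)}")
```

The per-domain score is the measure of current performance; the band above it is the prescription, since it names the next level the change project should aim for.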

Two caveats:

1. Complete consensus may not be possible, but at least consensus within a prescribed paradigm should be expected. What the instrument would have the potential to do is focus research within a paradigm and provide a research platform for many organizations to conduct their own improvement projects in the management discipline, similar to what Six Sigma has done for manufacturing.

2. This leads to one final caveat. This approach is not the be-all and end-all of management decision-making. What it does is provide a framework to organize and scaffold your thinking around evidence-based practice. Science can only provide you with standards, with a description of what has been proven to work in the abstract. Not everything can be proven by science; not everything can be summarized in a standard process. What standards do is tell you: these things work, stop re-inventing the wheel. Put these things into place and then put your development focus on the contextual, the relational, the imaginative, and other areas where empirical science is less helpful. Knowing where to put your creativity, that's the real benefit of standards.

The Research-Practice Gap: Why Is Evidence-Based Practice So Hard to Achieve?

There have been some recent articles in the social science literature (nursing, education, management, HR, etc.) about evidence-based practice (EBP), or the research-practice gap, that exists in many fields. Why is EBP so difficult to achieve, and why do so many solution articles leave me so underwhelmed? I will offer a reason for the difficulty that I have not yet heard made in a convincing manner.

Problem: Using research across different practices is basically the same problem as the transfer of learning or knowledge across contexts.

Reason for the problem: it takes work. Knowledge is closely tied to its contexts of production. There may be theories and prior research that are applicable to a specific practice, but it takes work to contextualize that knowledge, see its applicability to specific contexts, and change the resulting practice. What is that work?

  • Establishing a broad practitioner knowledge-base in order to know that the applicable theories and knowledge exist.
  • Knowing how the existing problem or practice can be reframed or re-understood in the light of this new knowledge. It's not just using knowledge in a new context; it is re-producing that knowledge, or sometimes producing knowledge that is unique to that context.
  • Making changes and dealing with side problems common in change management.
  • Developing a feedback methodology for evaluating and adjusting practice changes.

Solution: we need practitioners with better skills and better tools:

  • A larger knowledge base and a better network (or community of practice) that allows practitioners to tap into the cognition distributed across practitioner networks. In some ways practitioners, because they need to be generalists, need a larger knowledge base than researchers, who can restrict themselves to specialty areas.
  • Skills in problem framing: re-contextualizing knowledge, generating and testing hypotheses, and setting up experimental and other feedback methodologies.
  • Skills in communication and change management. Understanding what to do is one thing; understanding how to get it done is another thing entirely.

Better tools. Many articles speak as if broad consensus on what practitioners should do already exists. That does not sound like the paradigmatically defined world of science that I know. I think there is hard work yet to be done in writing practice standards and guidelines for best practices in most areas. They are important, however, because standards will form the basis for practitioners to create measurement tools to gauge how well their practices conform, creating a deep understanding of their practice. A measurement tool will also provide a compliance pathway for changing practice.

The Difficulty in Measuring

Measurement is, or should be, a concept at the center of most people's practice. It comes in many forms: "No Child Left Behind" in education, evidence-based practice in medicine and psychology, Six Sigma in manufacturing, the balanced scorecard in management, and performance improvement in human resources. All of these programs have measurement at their process core, and their results begin with the quality of the measures and the ability to target measures to illuminate the intended purpose. But much of the effort made to implement these programs focuses more on the methodology surrounding the measures than on the measures themselves.

Achieving quality measures and quality data is not that easy. Understanding this begins with the idea of the measurement construct.

"In philosophy of science, a 'construct' is an ideal object (i.e., one whose existence depends on a subject's mind), as opposed to 'real objects' (i.e., those whose existence is not dependent on a subject's mind)."

 “Measurement is the process of assigning a number to an attribute (or phenomenon) according to a rule or set of rules” (Wikipedia.com).

Measurement assigns numbers to constructs, attributes that are ideal objects. Measures do not create real objects. Some of these objects are more problematic to define (like personality) than others (like temperature), but they can all be described as constructs, or ideal objects.
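As a minimal illustration of that definition (assigning a number to an attribute according to a rule), here is a toy scoring rule for an invented Likert-scale instrument. The construct and the items are hypothetical; any real instrument would need validation.

```python
# Illustration of "assigning a number to an attribute according to a rule".
# The instrument and its items are hypothetical examples invented here;
# a real instrument would need to be validated.

AGREEMENT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
             "agree": 4, "strongly agree": 5}

def measure(responses, reverse_scored=()):
    """Rule: average the coded responses; reverse-scored items flip the scale."""
    total = 0
    for i, answer in enumerate(responses):
        code = AGREEMENT[answer]
        if i in reverse_scored:
            code = 6 - code  # flip a 1-5 scale
        total += code
    return total / len(responses)

# The number is assigned to the construct by the rule; it is not "found"
# in reality the way a real object would be.
print(measure(["agree", "neutral", "strongly agree"], reverse_scored={1}))
```

Nothing in the rule guarantees the number reflects the construct well; that is a question about the quality of the rule itself.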

This leads to the importance of the concept of validity in measurement, which will be the next topic.