We Need Appropriate Measurement for Appropriate Management

Confusion on Managing by Measuring

I believe there is some confusion today about the place of measurement in business.  In a recent article (Productivity in a Networked Era) in Chief Learning Officer (also available at Jay’s blog), Jay Cross and Jon Husband addressed the need to change traditional ways of viewing return on investment in a networked learning environment where ideas and other intangible assets have become more important than physical assets.  Regarding the need to rethink the basis for investing in new forms of capital, this article is spot on.  However, I am concerned by some aspects of the discussion involving measurement, such as:

. . . it (measurement) doesn’t apply to making judgment calls, strategic choices or disruptive innovations.  . . . Intuition, judgment and gut feelings guide these more important decisions.

I disagree!  The very idea of the need for science is that our “intuition, judgment and gut feelings” are just as likely to mislead us and to encourage us to choose the wrong path.  Today, psychology is giving us more insight into how our non-conscious mind can lead us astray when we trust our “gut”.

Re-think how we measure, but do not abandon measurement

Again, I believe that Jon and Jay’s basic idea is valid: most companies’ measurement methodologies are keeping them from making appropriate investments, but part of the problem is managers’ limited understanding of measurement and of the methods needed to develop and use data.  With the increasing complexity of networked environments (see Jon & Jay’s example of Cisco Systems’ large-scale adoption of social computing) and with the development of new forms of data visualization, using data has never been more important than it is today.  Jay and Jon’s conclusion, “We should rethink and expand our methods for making judgments”, is also correct, but I would not suggest doing this by abandoning measurement.  The ability to use and understand data begins with an understanding of how the data was collected, that is, with understanding the measurement processes involved.

A final thought: fast-changing, intangible-centric and knowledge-intensive environments require knowledgeable and capable individuals.  In the past, this kind of thinking about measurement was reserved for academics, not business types, and it is true that typical academic communication styles are sometimes not appropriate, even for academics.  Simplification is a worthy communication goal, but the ideas we need cannot be dumbed down.  Complex issues, like understanding the meaning of data and the measurement processes by which it is obtained, must be important capabilities throughout organizations if they wish to take advantage of opportunities in our current business environment.  I believe that measurement is crucial to management, but inappropriate measurement will lead to inappropriate management.

Ideas and Paradigms Are Important Enablers of Creativity

A timeout from research to respond to a relevant blog post.

George Siemens posted about studies from the PsyBlog relevant to my current research on creativity.  The PsyBlog post ended with the recommendation to go it alone if creativity is important to you.  I think this is counterproductive.  Many important outcomes require group work, and diversity in groups can be an important source of creativity when it brings together different perspectives.  Certainly one important factor encouraging creativity in groups is a set of shared ideas and a paradigm base that supports creativity.  The basis of my thoughts is in the following comment made in response to George’s blog post.

Good discussion George and Ken;
It reminds me of Vygotsky’s lower and higher mental functions. What Ken describes sounds like a group manifestation of lower mental functions (a level of thinking shared with animals). Valuing something like diversity may only occur if a higher mental function regulates this primal instinct for conformity.
I would not call this type of creativity destruction a problem with norms, but a problem of lower levels of responding. Something like diversity may require a higher level of function, with ideas, paradigms and the sort mediating the thinking, whether in a group or an individual. Valuing diversity could be a norm too! Creativity may need to start with an individual, but for a group to participate, you may need relevant shared ideas to be present in the group. As a metaphor, think of shared ideas and artifacts as the neurotransmitters of the group.
In the study referenced by the PsyBlog, we don’t know what kinds of paradigms underlie the groups’ thinking or the study itself. Science in my view is a blend of the theoretical and the empirical. This study sounds like it overemphasized the empirical without a good theoretical understanding of creativity, sort of like a holdover from behavioral experimental psychology, which thought of the individual as a black box where you only measure the inputs and outputs. Measure group creativity, but keep the shared ideas and paradigm base as a variable in the equation.

Synergy, hermeneutics and simplicity

Synergy, hermeneutics and simplicity are at the heart of my thinking, my advocacy and my ideas for learning and supporting performance.  I think it will be difficult to be understood without making what this means more explicit.

Synergy – On 4-2-09 I posted an idea (How to Think:) drawn from the blog of Ed Boyden from MIT’s Media Lab who wrote: “Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read . . . ”    Creativity is essential to success and nothing supports creativity more than the synergy that comes from synthesizing ideas in new ways.  It is also at the heart of learning.  New knowledge must be integrated with existing knowledge to make sense and this often requires synthesis.

Meaning Making (Hermeneutics) – Our brains and sensory systems can process an enormous amount of information, but it’s all chaos (psychologist William James’s “blooming, buzzing confusion”) until we make meaning out of it.  Meaning is not a given; it is a human and (for the most part) a learned achievement.  Like synthesis, creating meaning (hermeneutics) is also a basic skill needed for successful practice.  When I advocate for measurement, it is as a tool for making meaning.

Simplicity – In my 3-19 post, Writing to Tame the Chaos, I advocated for simplicity in academic writing that communicates beyond one’s disciplinary silo.  With the help of Cunha & Rego (2008), I would like to extend simplicity as a general approach to practice in my next post.  For now I will just comment that synthesis and meaning making are supported by simplicity.  Science can be very complex: think of statistical path analysis, double-blind randomized controlled trials, and item response theory in test construction.  But all these ideas grow out of a relatively simple idea of science: create a model to account for observations, develop a hypothesis, and collect evidence to test the hypothesis.  You may decide that a path analysis is appropriate to your context, but attempt to return at every step to the simplest, most parsimonious understanding.
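The simple model–hypothesis–evidence loop can be sketched in a few lines of code.  This is a minimal illustration, not a recipe: the data are invented (two hypothetical groups of scores), and the evidence step uses a basic permutation test, which needs nothing beyond the standard library.

```python
import random
import statistics

random.seed(42)  # make the sketch reproducible

# Hypothetical observations: scores under two instructional approaches.
group_a = [random.gauss(75, 10) for _ in range(30)]
group_b = [random.gauss(70, 10) for _ in range(30)]

# Model: scores differ only by a shift in mean.
# Hypothesis: group A's mean exceeds group B's.
observed = statistics.mean(group_a) - statistics.mean(group_b)

# Evidence: a permutation test asks how often a difference at least
# this large would arise if the group labels were actually arbitrary.
pooled = group_a + group_b
trials, count = 5000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:30]) - statistics.mean(pooled[30:])
    if diff >= observed:
        count += 1
p_value = count / trials
print(f"observed difference: {observed:.2f}, p ≈ {p_value:.3f}")
```

A small p-value is evidence against the “labels are arbitrary” account; a large one says the data cannot distinguish the hypotheses.  The parsimony lesson is that this loop, not any particular technique, is the core of science.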

Follow-up on Ramo: Potential Principles of an Agile Learning / Research Method

Following up on my last post about The Age of the Unthinkable, what might be the response of educators to Ramo’s critique?  Given the similarities of his suggestions to the Agile management method, I will begin by looking at the principles of the Agile Manifesto and how that document could be adapted to learning, research and organizational learning.

My Personal Learning Manifesto: Adapted from the Manifesto for Agile Software Development

I will uncover better ways of learning by doing it and by helping others to do it.

Agile learning values the following:

  • Individuals and interactions over courses, processes and tools
  • Functioning project teams over documents, LMSs or other knowledge platforms
  • Learner collaboration over expert mindsets
  • Responding to changing requirements over following a plan

Echoing the original Agile team, I state that while there is value in the items on the right, preference is given to the items on the left.

Personal Agile Learning Principles: Adapted from the Twelve Principles of Agile Software

  1. The highest priority is to satisfy the customer (learner) through early and continuous delivery of valuable knowledge and insight.
  2. I will welcome changing requirements (even late in development) with Agile processes that harness change for the customer’s (learner’s) competitive advantage.
  3. Deliver working solutions and knowledge frequently, with a preference for the shorter timescale.
  4. Business people, project team members and learning leaders must work together daily throughout the project.
  5. Build learning projects around motivated individuals. Give them the environment and support they need, and trust them to find the right solution.
  6. The most efficient and effective educational methods involve face-to-face interaction.
  7. Successful project milestones are the primary measure of progress.
  8. Agile learning promotes sustainable development.
  9. The sponsors, leaders, and users should be able to maintain a constant pace indefinitely.
  10. Continuous attention to technical research excellence and good knowledge design enhances agility.
  11. Simplicity is essential, whether in ideas or in design.
  12. The best learning architectures, requirements, and designs emerge from self-organizing teams not from ADDIE implementation.
  13. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its learning behavior accordingly.
  14. Encourage synthesis, creativity and the continuous integration of new and prior understanding.
  15. A commitment to open-source methods.

My thought processes are in an early phase on this subject.  It may be more meaningful to talk of agile research methods than of agile learning.  To some extent, organizational learning may be more like research than traditional pedagogy.  However, it does seem like a promising area for research and further reflection.

There is No Observation (Including Measurement) without Theory: The Stanley Fish View

As I have said before:

(B)ecause of a unified view of construct validity, theory (and hermeneutics) touches all aspects of validity and measurement.

One thing I meant by this is that you can’t do a good job of measuring practice or performance if you don’t understand how measures and practices are theoretically and empirically related, or as I said in my last post:

Any measure implies a theoretical rationale that links performance and measures, and it can be tested, validated and improved over time.

(Although the topic is faith not measurement) Stanley Fish supports the same type of idea and writes in his NY Times column:

. . . there is no such thing as “common observation” or simply reporting the facts. To be sure, there is observation and observation can indeed serve to support or challenge hypotheses. But the act of observing can itself only take place within hypotheses (about the way the world is) . . . because it is within (the hypothesis) that observation and reasoning occur.

I would use the word theory instead of hypothesis, which I reserve as a word for research questions in an experimental context, but otherwise the meaning is pretty much the same.

Fish goes on to describe an aspect of theory that explains why people do not like the challenges presented by theory and deep theoretical understanding.

While those hypotheses are powerfully shaping of what can be seen, they themselves cannot be seen as long as we are operating within them; and if they do become visible and available for noticing, it will be because other hypotheses have slipped into their place and are now shaping perception, as it were, behind the curtain.

I’m not saying it is easy; developing measures with deep understanding is difficult, but I believe the effort is well worth it when the results are better, more relevant measures and better performance.

Improving Measures to Improve Performance

A post from the zapaterismo blog addresses the need for trust and the need for change to improve talent management; however, the focus of the post ends up centered on measurement.  “Zap” says:

The sad reality in most organizations is:

1. Performance management processes don’t produce highly reliable data. They simply aren’t often helpful in reliably and objectively differentiating employee performance. The process that was once an “ass-covering exercise” has not been sufficiently adapted to the reality that most organizations (and the technology they leverage) are now relying heavily on performance data for making important talent decisions.

2. Other talent measures/processes, such as employee “potential” and promotion “readiness” ranking are most often based on gut, at best, and politics, at worst.

Any measure implies a theoretical rationale that links performance and measures, and it can be tested, validated and improved over time.  What “Zap” is having problems with are likely measures that come from common sense, and maybe something that only made real sense a long time ago.  Now, common sense is not a bad place to start from, but it should be followed up with research and thought to build a theoretical structure that turns common sense into theoretical understanding, which in turn facilitates the ability to validate your measures and to build in improvements over time.
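One simple form of validation evidence is a criterion correlation: does the measure actually track the outcome its theoretical rationale says it should predict?  A minimal sketch, with entirely hypothetical data (the ratings and the “documented quarterly output” criterion are invented for illustration):

```python
import math

# Hypothetical data: manager performance ratings (1-5) for ten
# employees, and an objective criterion such as documented output.
ratings = [3, 4, 2, 5, 3, 4, 1, 5, 2, 4]
output  = [12, 15, 9, 20, 11, 16, 7, 19, 10, 14]

def pearson_r(x, y):
    """Pearson correlation: strength of the linear relationship
    between a measure and the criterion it is supposed to predict."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(ratings, output)
print(f"criterion validity r = {r:.2f}")
```

A low correlation here would be exactly the kind of evidence “Zap” is missing: a signal that the ratings are gut and politics rather than measurement, and a starting point for revising the measure over time.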

Now, you might say: “Hold on there, we’re business people; we can’t be blooming scientists too!”  Yes, I believe you can.  Science, validity, measurement and similar concepts can become very complex in many circumstances, but at heart science is a simple concept and can be applied in many ways that are not always of the same complexity.  People who say otherwise are usually looking at specific complex examples and expecting everything to be like the example.  Instead, look at the core concept and what it suggests.  Some performance improvement may need new, as-yet-unthought-of tools, but I believe that much can be accomplished by looking for tools just lying around unused at present.

A Caveat to the Use of Theory

I must add a caveat to my last post.  I use theory in a pragmatic, instrumentalist way, not in an absolute way.  Theory does have limits, an “everything in moderation” idea.  Alex Kozulin explained the potential problems when theory becomes over-extrapolated in his 1992 introduction to Vygotsky’s Thought and Language:*

Tracing the evolution of psychoanalysis, reflexology, Gestaltism, and personalism, (Vygotsky) revealed a uniform pattern to their development, an aggressive expansion in a desperate attempt to attain methodological hegemony.  The first stage in the development of each of these systems is an empirical discovery that proves to be important for the revision of the existing views concerning some specific behavioral or mental phenomena.  In the second stage . . .the initial discovery acquires a conceptual form, which expands so as to come to bear on related problems of psychology.  Even at this stage the ties between the conceptual form and the underlying empirical discovery are eroded.  The third stage is marked by the transformation of the conceptual form into an abstract explanatory principle applicable to any problem within the given discipline.  The discipline is captured by this expanding explanatory principle. . . .At the fourth stage the explanatory principle disengages itself from the subject matter of psychology and becomes a general methodology . . . at which point, Vygotsky observed – it usually collapses under the weight of its enormous explanatory claims.

In other words: theoretical contexts are important, and abstraction and extrapolation have their limits.

*Kozulin, A. (1992). Vygotsky in context. In A. Kozulin (Ed.), Thought and Language. Cambridge, MA: MIT Press.

Scanning Horizons: The Need for Theory in Practice

I believe that practice requires theory at a greater level than has generally been recognized in the past.  One point of view to substantiate this claim is test validity as I have previously discussed here, here, and here.  Theory is the starting point when discussing substantive and structural aspects of validity.  Also, because theory and measurement constructs are closely related, and because of a unified view of construct validity, theory (and hermeneutics) touches all aspects of validity and measurement.  Consider a practical example from my previous work in disability support services.

I was lucky enough to be working for an organization as it was initiating supported employment services for the first time.  I believe the impetus for supported employment was rooted in discrimination.  Many people who were quite capable of holding down a job were forced to work in sheltered workshop settings, and they proved to be very successful when given half an opportunity.  However, there was little analysis of what was going on at a deeper theoretical level.  When supported employment began serving clientele with more challenging support needs, the rates of growth and success decreased.

I now believe that supported employment is about participation in economic activity and about providing a level of accommodation that allows full participation by all individuals.  The measures needed are measures of participation, of the accommodations that are needed, and of the accommodations delivered.  Instead, most descriptions of supported employment are just that, descriptions of what supported employment looks like, not how it functions.  Consider how the Department of Labor defines supported employment:

Supported employment facilitates competitive work in integrated work settings for individuals with the most severe disabilities (i.e. psychiatric, mental retardation, learning disabilities, traumatic brain injury) for whom competitive employment has not traditionally occurred, and who, because of the nature and severity of their disability, need ongoing support services in order to perform their job. Supported employment provides assistance such as job coaches, transportation, assistive technology, specialized job training, and individually tailored supervision.  DOL

This provides a description of what supported employment and supported workers look like, but it does not describe how services function to enable individuals to participate in the economy.

In order to develop a functional understanding of practice, theory is necessary. Once you define things in functional relationships, it’s possible to develop relevant measures and to use experimental methods to get to the bottom of functional relationships and improve the processes involved.

Scanning Horizons: Data Driven Practice

Summary: Without standards, data becomes more important in guiding practice.  Construct measurement is also important to generate data that is relevant to practice and of high quality.

I proposed that management education and practice should become much more experimental and data-driven in nature — and I can tell you that it is amazing to realize how little business know and understand how to create and run experiments or even how to look at their own data!  We should teach the students, as well as executives, how to conduct experiments, how to examine data, and how to use these tools to make better decisions.  Dan Ariely (2009) in Technology Review

A second horizon exists where measurement is needed but no standards exist.  Without standards, experimental methodology is another reasonable path.  The important tasks are to design measurement and to develop a clear logic leading from experimental results to improved practice.  Six Sigma is an example of this kind of approach.  What can make it perplexing is the difficulty of developing measures when practice is rooted in social variables.  This calls for building measures based on complex educational, social or psychological constructs on which to base experiments.  Some companies that follow a balanced scorecard approach could be improved by better measurement of relevant constructs.
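When a construct measure is built from several items, a first quality check before experimenting on it is internal consistency, commonly summarized by Cronbach’s alpha.  A minimal sketch with invented survey data (the construct name and all item scores are hypothetical):

```python
import statistics

# Hypothetical data: five respondents answering four Likert items
# intended to tap a single construct (say, "team psychological safety").
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    divided by the variance of the total scores)."""
    k = len(rows[0])
    items = list(zip(*rows))                        # columns = items
    item_vars = [statistics.variance(col) for col in items]
    totals = [sum(row) for row in rows]
    return (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")
```

A high alpha says the items move together and can plausibly be summed into one construct score; a low alpha is a warning that any experiment built on the composite is resting on a shaky measure.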

A Framework for Action with Reflection: Measurement with Validity

My recent reading reinforces the need for validation thinking.

First, in a recent post, David Jones considers a quote about the problems stemming from theory without action (idealism) and action without “philosophical reflection” (mindlessness).

Second, a reassertion of a limited (but still robust) neo-positivism by Philip Tetlock in Expert Political Judgment.

My response: let’s begin by refining Jones’ quote to read as follows:

Theory without the measurement of empirical correlates to justify action will lead to actions that are based only on biased judgment; and action without broad reflection (even if that action is supported by logical empirical correlates) just as surely leads to action that is based on unrecognized biased judgments.

This is also the basic argument that stands between logical positivism and radical social constructivism (at least in their straw-man forms).  I believe it argues for a dialectic type of response that is implied by Jones’ quote.  One of the problems in the positivism / constructivism argument is that both sides spin complex arguments that fail at parsimony; that is, they become needlessly complex in the attempt to justify their radical stances.  This is where validity thinking can serve as Occam’s Razor.

First, consider a unified idea of construct validity: measurements (at least in the real world) almost always measure constructs, not real objects.  (E.g., even if we measure real objects [like pencils], we must define pencils [as not pens or markers] in a way that indicates that what is being measured is indeed a construct.  This is not quite idealism.  Although it is possible to distinguish between constructs [like IQ] and real objects [like pencils], we cannot operationalize the measurement of real objects without referring to a construct in some way.  So operationalism, the logical positivist’s banning of constructs by fiat, will not hold.)

Next: the purpose of measurement is to overcome the biases of Jones’ idealism.  We don’t normally fall into idealism because we rely on scientific methodology (based in measurement) to counter bias.  But we can’t totally escape idealism with measurement, because constructs provide a place for ideas, and idea bias, to creep back in.  Hence the dialectic: we measure with reflection.

In Messick’s validation framework, reflection looks like this:

  1. Content validity: does the way we are measuring make sense both logically and through the experience of ourselves and others.
  2. Substantive: is there a theory (empirically supported) that gives meaning to the measures taken.
  3. Structural: do the measures faithfully reproduce the tasks or processes theorized to exist in the contexts or natural settings to which you want to extrapolate.
  4. Generalizability:  Is there evidence that what you are measuring applies to other places and other times.  (Even if you’re not measuring to generalize your findings, evidence against generalizability should cause you to reflect on why the place and time matter)
  5. External: this is mostly criterion evidence (convergent or discriminant).
  6. Consequential: show me evidence that measuring is helping.  (E.g., are “No Child Left Behind” measures helping students become better prepared for life after school, or are they at least helping to further the right-wing agenda to undermine the power of the NEA?  Oops . . . sorry . . . cynicism slipping in there.)

Therefore, a framework for action with reflection is measurement with validity.