A response to Stephen Downes

 

Stephen posted recently on meaning in language, approaching it in a way that I don’t generally use when conceptualizing education practice. He divides word use into units like tokens and types, similar to a computational method. He goes on to criticize constructivism, saying:

This is also why constructivism is so hard to criticize. There are many different ways to make meaning. If you show that one way of making meaning is inadequate, then the constructivist always has another one to show you. After all, the theory (mostly) isn’t about some specific way of making meaning. It’s about the idea that ‘to learn’ is ‘to make meaning’, and these can be made in different ways

I generally think in terms of a practice-based or pragmatic unit of analysis: Bakhtin’s concept of genres (recognizable ways of speaking) or Wittgenstein’s language games. Take the drunken artisans from Dostoevsky’s “Diary of an Author”: six characters repeat a single curse word six times, but each repetition carries a different meaning, conveyed by the inflection and position of the speaker as well as the genre the speaker is referencing. Same word, but six different meanings. Meaning does not come from the words or from reference, but from re-cognizable practice. Maybe a pragmatic nominalism. Here’s something from an old blogpost of mine that I was thinking about earlier today; it offers an example of how practice could constitute meaning in assessment:

To see the future (think prediction), students and teachers should focus on their horizons. Horizons here refer to a point in developmental time that can’t be seen clearly today, but that I can reasonably expect to achieve in the future. Because many aspects of this developmental journey are both precarious and dependent on future actions, this joint vision can’t be wishful thinking, but must be clearly framed in terms of privileges and obligations. When it is treated this way, assessment is not a picture of student achievement, but a method for making both student and teacher visible to each other in a way that is rational, meaningful and conducted in an ontologically responsible manner; that is, in a way that is true to who we want to become.

This references John Shotter’s “Cultural Politics of Everyday Life”.
The point I’m making is that meaning begins with assessment items and scores, but it does not become meaningfully useful until it allows student and teacher to “see” each other in their mutual journey toward an agreed-upon horizon or end point and the privileges and obligations that make up the path. This is where the general concept of assessment fails, because of the limits we place on the “genre” of assessment.

Another example is Vygotsky’s conception of a baby’s grasp for a rattle. The mother interprets the grasp as a desire and slowly guides the baby into what the mother considers an understandable practice. I agree that there are too many conceptions of constructivism, and I like to ground it in practice, which I feel is more secure, though it still suffers in many ways from George Lakoff’s limitations of cognition and speech as metaphoric.

Measurement Literacy: Without Meaning, Measures Indeed Can Get Out of Hand

We say that someone is literate if they can read for meaning or if they can calculate with numbers.  There is also a need for measurement literacy: being able to say what numbers mean when they are obtained from measures.  Although it is a bit wonkish, it’s still important to remember that measures measure constructs, not the thing being measured itself.  Constructs are concepts that are thought (theoretically) to be a property of things, but they are not the things.  To understand the meaning of a number obtained from measurement, it is necessary to understand the construct.  Harold Jarche recently posted two quotes that expressed negative opinions on measurement processes.  I believe these critiques are ill-founded for two reasons:

  1. Poorly designed measures should not be used to condemn measurement practice, and
  2. Eliminating measures often leads to politics, gut instincts and other poorly founded bases for decision-making.

In this post I would like to go deeper into this subject and show how the problem can be explained as a problem of measurement literacy.

First, Jarche quotes from Charles Green at The Trusted Advisor:

If you can measure it, you can manage it; if you can’t measure it, you can’t manage it; if you can’t manage it, it’s because you can’t measure it; and if you managed it, it’s because you measured it.

Every one of those statements is wrong. But business eats it up. And it’s easy to see why. …  The ubiquity of measurement inexorably leads people to mistake the measures themselves for the things they were intended to measure (Emphasis added).

The second quote is from Dave Snowden:

We face the challenge of meeting increasing legitimate demands for social services with decreasing real time resources. That brings with it questions of rationing, control and measurement which, however well intentioned, conspire to make the problem worse rather than better. For me this all comes back to one fundamental error, namely we are treating all the processes of government as if they were tasks for engineers rather than a complex problem of co-evolution at multiple levels (individuals, the community, the environment etc.).

I posted this response on Harold Jarche’s blog:

I agree that there are many instances of problems resulting from measures that are based on little more than common sense or tradition, but it is not helpful to base decisions on gut instincts or politics. I believe the need is to increase people’s understanding of good measurement practices and how to develop a deeper understanding of what their measurements really mean. Everyone should know whether their measures are valid. In turn, that means being able to say what your measures mean, how they are relevant to practice, and how they are helping to improve practice. It’s not just for bigwigs either. Front-line employees need to understand how to use measurement to guide practice.

Going further, Charles Green also said this in his post:

There’s nothing inherently wrong with measuring. Or transactions. Or markets. They’re fine things.  But undiluted and without moderating influences, they become not just a bad deal; they can be a prime cause of ruining the whole deal.

Green is not entirely clear here, in that he doesn’t explain what a moderating influence is.  For measurement, I believe this moderating influence could be meaning, or construct-supported meaning.  Measurement can easily get out of hand because numbers can do two things.  Through constructs they can have meaning, like words, but they can also be calculated.  Being able to calculate with numbers is not the same as being able to say what they mean.  Though people often conflate the two, they are not the same.  Calculation can result in the potential for meaning, as when we calculate a Pearson correlation.  But understanding that meaning requires a deeper understanding of how the measures were obtained, what theoretical construct the measures are based on, as well as the consequential and other bases for the validity of the measures.
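As a minimal sketch of that distinction (the variable names and numbers below are invented for illustration, not drawn from any real study): the correlation is trivial to compute, but nothing in the arithmetic tells us what construct either column of numbers stands for, or whether the measures behind them are any good.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient for two equal-length lists of numbers."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: the calculation runs the same whether these numbers
# represent a well-defined construct or something meaningless.
assessment_scores = [62, 70, 75, 81, 90]
hours_of_practice = [1, 2, 3, 5, 8]

print(pearson_r(assessment_scores, hours_of_practice))  # about 0.98 -- but 0.98 of what?
```

The number that comes out is only the potential for meaning; saying what it means still requires the construct-level work described above.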

Many people have a good grasp of statistics and how to calculate, but they have less knowledge about measures, validity, the design of measures and measurement meaning.  Mistaking measures for the things they represent is a problem of meaning.  Having measures confound complex evolutionary problems is rooted in misunderstanding measurement meaning.  I believe many people would like to give up measurement, but that would not ultimately result in better consequences.  What is needed is better education directed toward literacy in measurement meaning.