Tony Karrer has re-posed a question originally asked by Peter Drucker: how do you increase the productivity of knowledge (concept) workers? Since more jobs depend on knowledge today, we might just as well ask: how do we support productivity in the 21st century? I suggest that significant organizational learning is the best measure of productivity in a knowledge-intensive environment. The following four things could be measured as evidence of significant learning:
- An increase in organizational capabilities.
- An increase in innovation.
- The development of a maturity framework with evidence- and standards-based practices for improving the quality of repeatable or routine processes.
- An increase in soft skills relating to non-routine and networking processes, which could be measured by the extent and the strength of an organization’s internal and external networks.
I believe that organizational learning is facilitated by individuals, but I would not consider it synonymous with individual learning. I'm not sure exactly how to measure individual learning, which might vary across different contexts. I do believe that employers can focus mostly on the measurement of organizational learning, and perhaps on individual contributions to organizational learning.
There is certainly enough here to keep me thinking for some time. Thanks for the question Tony!
A Follow-up on my last post.
Are your measures valid across a range of concerns? Improving validity will lead to improved actions, better frameworks for acting, and ultimately improved performance. For example:
The turn of the century saw an increase in the expectations tied to measurement through such phenomena as "No Child Left Behind" and SAT test prep classes. This has begun to change as colleges put less emphasis on SAT scores, and I believe we'll soon see similar changes in high-stakes graduation tests. Two observations:
- While high expectations pose difficult challenges for assessment, most of the problems that led to reduced use of assessment lie in the expectations placed on specific tests, not in the capabilities of assessment in general. It's a hermeneutic problem: the meaning of the test scores was much narrower than the expectations for assessment; a mismatch between the meaning that was required and the meaning that the test could supply. From a narrow psychometric perspective involving external validity, these tests were valid, but from other perspectives (structural or consequential validity – see the previous post) they are found wanting.
- People will still act, those actions will still require assessment, and those assessments will still be made. They will just be more casual, less observable, and even less valid than those made by high-stakes tests.
Most actionable situations require a range of assessments that are valid across a range of validity concepts. Just because some are less empirical or more qualitative does not mean they should not be considered in an appropriate mix.
My PhD was in educational psychology, and most of my classes occurred in the mid to late 90s. The paradigm wars were winding down, but there was still a noticeable split between hermeneutic social constructionists* and the psychometricians. My nature is to want to synthesize, which often leads one to walk in two worlds. To what would I be drawn? A hermeneutic account of psychometrics, of course.
I was investigating dissertation topics around disability. The split here was conveyed as one between old psychometric ways of conceiving of disability and new socially constructed accounts. An advisor made a casual comment that my concerns seemed to be about validity, and it was insightful. Yes! The problem was that existing measures were validated by psychometric models that did not account for the hermeneutics of identity construction or for the consequences of the resulting identities.
I started my investigation by reading Samuel Messick's chapter on validity in Educational Measurement (3rd ed.). What I read was Messick's attempt to address hermeneutic aspects of measurement from a psychometric perspective. What is important in measuring is the meaning you derive from the data and the associated implications for action. First, there are only two ways to think about invalidity:
- Construct under-representation: the construct you are interested in is larger than what your assessment is able to measure.
- Construct irrelevance: you are measuring things that are irrelevant to the information you need to take action, leading to either false positives or false negatives.
Messick would later write about six categories of validity concerns. I take these categories to be a framework for how to think about or find meaning in measurement. They are six different ways of looking for under-representation or irrelevance:
- Content – Is there evidence that the scope of the content is appropriate and representative of the construct?
- Substantive – Is there a theory for the processes and tasks being performed, and is there empirical support for that theory?
- Structural – Is there evidence that the assessment faithfully reproduces the tasks or processes in the contexts or natural settings to which you are trying to extrapolate?
- Generalization – Has the assessment been shown to apply to many different groups, across contexts, and over time? While a lack of generalization may not reduce validity in specific situations, it would indicate that you should look much closer at the situation you're in.
- External – Is there convergent or divergent criterion evidence?
- Consequential – Is there evidence that your actions are improved by the assessment and that it is fair and free of bias?
* Note – I have no interest in most philosophical discussions of the beliefs of social constructionists or realists. For me, SC is mostly about the ways that things and people are thoroughly effected and affected by the pervasiveness of language and its accompanying hermeneutics. Not only is there no denial of reality, the current trend is to highlight the embodied nature of our living even as it is totally inhabited by hermeneutics. I fall back on pragmatics, not because it is defensible, but because it is a way to go on. Most other discussions are about drawing boundaries that are just too fluid to nail down in a convincing manner.
From the Tomorrow's Professor Blog, article 933 by Pat Hutchings: Different Way to Think About Professional Development.
This complements my 3-20 post on changing behavior.
For the past several years the Carnegie Foundation has been working with a group of California community colleges . . . for different ways to think about and conduct professional development.
- First, opportunities for teachers to grow and develop must be sustained over time.
- A second principle is the importance of collaboration.
- The third defining feature is a focus on evidence about student learning. . . . information is at the heart of powerful feedback loops. But an important lesson . . . is the power of viewing classroom data through the lens of larger institutional trends and patterns.