Synergy, hermeneutics and simplicity

Synergy, hermeneutics and simplicity are at the heart of my thinking, my advocacy and my ideas for learning and supporting performance.  I think it will be difficult for readers to understand my work without making what these terms mean more explicit.

Synergy – On 4-2-09 I posted an idea (How to Think:) drawn from the blog of Ed Boyden of MIT’s Media Lab, who wrote: “Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read . . . ”  Creativity is essential to success, and nothing supports creativity more than the synergy that comes from synthesizing ideas in new ways.  Synthesis is also at the heart of learning: new knowledge must be integrated with existing knowledge to make sense, and this often requires synthesis.

Meaning Making (Hermeneutics) – Our brains and sensory systems can process an enormous amount of information, but it’s all chaos (psychologist William James’ “blooming, buzzing confusion”) until we make meaning out of it.  Meaning is not a given; it is a human and (for the most part) learned achievement.  Like synthesis, creating meaning (hermeneutics) is a basic skill needed for successful practice.  When I advocate for measurement, it is as a tool for making meaning.

Simplicity – In my 3-19 post, Writing to Tame the Chaos, I advocated for simplicity in academic writing that communicates beyond one’s disciplinary silo.  With the help of Cunha & Rego (2008), I would like to extend simplicity into a general approach to practice in my next post.  For now I will just note that synthesis and meaning making are both supported by simplicity.  Science can be very complex: think of statistical path analysis, double-blind randomized controlled trials, or item response theory in test construction.  But all these ideas grow out of a relatively simple idea of science: create a model to account for observations, develop a hypothesis, and collect evidence to test the hypothesis.  You may decide that a path analysis is appropriate to your context, but attempt to return at every step to the simplest, most parsimonious understanding.
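The simple core loop of science described above (model, hypothesis, evidence, test) can even be sketched in a few lines of code.  This is a toy illustration only: the fair-coin model and the tolerance threshold are my own invented example, not drawn from any study.

```python
import random

# Toy illustration of the simple idea of science described above:
# model -> hypothesis -> evidence -> test.

random.seed(1)

# Model: a fair coin, so heads should occur with probability 0.5.
# Hypothesis: in 1000 flips, the observed rate will stay near 0.5.
flips = [random.random() < 0.5 for _ in range(1000)]
observed_rate = sum(flips) / len(flips)

# Evidence test: a crude tolerance check on the hypothesized rate
# (a real analysis would use a proper statistical test).
hypothesis_survives = abs(observed_rate - 0.5) < 0.05
print(observed_rate, hypothesis_survives)
```

Everything beyond this loop, from path analysis to item response theory, is elaboration of the same parsimonious core.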

Measurement Literacy: Without Meaning, Measures Indeed Can Get Out of Hand

We say that someone is literate if they can read for meaning or calculate with numbers.  There is also a need for measurement literacy: the ability to say what the numbers obtained from a measure actually mean.  Although it is a bit wonkish, it is important to remember that measures measure constructs, not the things being measured themselves.  Constructs are concepts that are thought (theoretically) to be properties of things, but they are not the things.  To understand the meaning of a number obtained from measurement, it is necessary to understand the construct.  Harold Jarche recently posted two quotes that expressed negative opinions of measurement processes.  I believe these critiques are ill-founded for two reasons:

  1. Poorly designed measures should not be used to condemn measurement practice, and
  2. Eliminating measures often leads to politics, gut instincts and other poorly founded bases for decision-making.

In this post I would like to go deeper into this subject and show how the problem can be explained as one of measurement literacy.

First, Jarche quotes from Charles Green at The Trusted Advisor:

If you can measure it, you can manage it; if you can’t measure it, you can’t manage it; if you can’t manage it, it’s because you can’t measure it; and if you managed it, it’s because you measured it.

Every one of those statements is wrong. But business eats it up. And it’s easy to see why. …  The ubiquity of measurement inexorably leads people to mistake the measures themselves for the things they were intended to measure (Emphasis added).

The second quote is from Dave Snowden:

We face the challenge of meeting increasing legitimate demands for social services with decreasing real time resources. That brings with it questions of rationing, control and measurement which, however well intentioned, conspire to make the problem worse rather than better. For me this all comes back to one fundamental error, namely we are treating all the processes of government as if they were tasks for engineers rather than a complex problem of co-evolution at multiple levels (individuals, the community, the environment etc.).

I posted this response on Harold Jarche’s blog:

I agree that there are many instances of problems resulting from measures that are based on little more than common sense or tradition, but it is not helpful to base decisions on gut instincts or politics. I believe the need is to increase people’s understanding of good measurement practices and how to develop a deeper understanding of what their measurements really mean. Everyone should know if their measures are valid. In turn, that means being able to say what your measures mean, how they are relevant to practice, and how they are helping to improve practice. It’s not just for bigwigs, either. Front-line employees need to understand how to use measurement to guide practice.

Going further, Charles Green also said this in his post:

There’s nothing inherently wrong with measuring. Or transactions. Or markets. They’re fine things.  But undiluted and without moderating influences, they become not just a bad deal; they can be a prime cause of ruining the whole deal.

Green is not clear here, in that he never explains what a moderating influence would be.  For measurement, I believe this moderating influence is meaning, or construct-supported meaning.  Measurement can easily get out of hand because numbers can do two things: through constructs they can have meaning, like words, but they can also be calculated.  Being able to calculate with numbers is not the same as being able to say what they mean, though people often conflate the two.  Calculation can create the potential for meaning, as when we calculate a Pearson correlation.  But understanding meaning requires a deeper understanding of how the measures were obtained, the theoretical construct behind them, and the consequential and other bases for their validity.
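The gap between calculating and meaning can be made concrete with the Pearson correlation just mentioned.  The sketch below uses invented numbers for a hypothetical training-hours versus error-rate measure; the arithmetic runs fine either way, but nothing in it tells you whether the measures behind the numbers are valid.

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson's product-moment correlation from deviation scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: hours of training vs. an error-rate "measure".
hours = [2, 4, 6, 8, 10, 12]
errors = [9, 8, 6, 5, 4, 2]

r = pearson_r(hours, errors)
print(round(r, 3))  # a strong negative correlation

# The calculation yields a number; its meaning still depends on how
# "error rate" was measured, the construct behind it, and the theory
# linking training to performance, none of which the math supplies.
```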

Many people have a good grasp of statistics and how to calculate, but less knowledge about measures, validity, measure design and measurement meaning.  Mistaking measures for the things they represent is a problem of meaning.  Having measures confound complex evolutionary problems is rooted in misunderstanding measurement meaning.  I believe many people would like to give up measurement, but that would not ultimately lead to better consequences.  What is needed is better education directed toward literacy in measurement meaning.

Agile or CMMI: the Differential is Knowledge

How you frame a topic (question, problem, etc.) is critical to how you come to understand it.  It’s at the heart of why perspective is so important.  A different theory, metaphor or emotional tack can make a vast difference in meaning.

I have been reading about agile methods and how, as unplanned methods, they are contrasted with planned methods, especially CMMI (Capability Maturity Model Integration).  Although planning is the most salient difference in features, knowledge is the key difference for analysis: what you know or are capable of knowing, as opposed to what you don’t know and are incapable of knowing. CMMI is a framework for managing what you know; it can include standardized processes, evidence-based practices, or processes and practices that can be standardized.  Many processes and practices are repeatable and can be standardized and managed through a CMMI framework.  But we also know that science (the basis for standardization) makes knowledge claims in narrow and very specific ways.  This means there are many aspects of practice that are unique, context-bound, not repeatable and not standardizable.  These aspects of practice are best approached through agile methodology.  The decision to use agile or CMMI methodology should be based on what you are able to know.
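The decision rule at the end of that paragraph could be sketched as a tiny helper.  To be clear, the function name, the two-question test and the categories below are my own illustration, not part of CMMI or any agile framework.

```python
def choose_methodology(repeatable: bool, standardizable: bool) -> str:
    """Pick an approach based on what you are able to know about the work."""
    if repeatable and standardizable:
        # Known, repeatable work: manage it through a CMMI-style framework.
        return "CMMI"
    if repeatable or standardizable:
        # Partly knowable work: mix standardized and agile practices.
        return "mixed"
    # Unique, context-bound, unrepeatable work: stay agile.
    return "agile"

print(choose_methodology(True, True))    # CMMI
print(choose_methodology(False, False))  # agile
```

Real projects are rarely a single yes-or-no call, which is why, as argued below, most will land in the mixed middle.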

I believe the time is approaching that will require more agility, but this will not preclude an expansion of CMMI methods.  Agile methodology needs to incorporate CMMI methodology according to what we are able to know, while maintaining agility for what we are unable to know.  Some projects may call primarily for agile methods, some for CMMI methods, and some for mixed methods.

Follow-up on Ramo: Potential Principles of an Agile Learning / Research Method

Following up on my last post about The Age of the Unthinkable, what might be the response of educators to Ramo’s critique?  Given the similarities of his suggestions to the Agile Management Method, I will begin by looking at the principles of the Agile Manifesto and how that document could be adapted to learning, research and organizational learning.

My Personal Learning Manifesto: Adapted from the Manifesto for Agile Software Development

I will uncover better ways of learning by doing it and by helping others to do it.

Agile learning values the following:

  • Individuals and interactions over courses, processes and tools
  • Functioning project teams over documents, LMSs or other knowledge platforms
  • Learner collaboration over expert mind sets
  • Responding to changing requirements over following a plan

Echoing the original Agile Team, I state that while there is value in the items on the right, preference is given to the items on the left.

Personal Agile Learning Principles: Adapted and Extended from the Twelve Principles of Agile Software

  1. The highest priority is to satisfy the customer (learner) through early and continuous delivery of valuable knowledge and insight.
  2. I will welcome changing requirements (even late in development) with Agile processes that harness change for the customer’s (learner’s) competitive advantage.
  3. Deliver working solutions and knowledge frequently, with a preference for the shorter timescale.
  4. Business people, project team members and learning leaders must work together daily throughout the project.
  5. Build learning projects around motivated individuals. Give them the environment and support they need, and trust them to find the right solution.
  6. The most efficient and effective educational methods involve face-to-face interaction.
  7. Successful project milestones are the primary measure of progress.
  8. Agile learning promotes sustainable development.
  9. The sponsors, leaders, and users should be able to maintain a constant pace indefinitely.
  10. Continuous attention to technical research excellence and good knowledge design enhances agility.
  11. Simplicity is essential, whether in ideas or in design.
  12. The best learning architectures, requirements, and designs emerge from self-organizing teams not from ADDIE implementation.
  13. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its learning behavior accordingly.
  14. Encourage synthesis, creativity and the continuous integration of new and prior understanding.
  15. A commitment to open source methods.

My thinking on this subject is at an early phase.  It may be more meaningful to talk of agile research methods than agile learning.  To some extent organizational learning may be more like research than traditional pedagogy.  However, it does seem like a promising area for research and further reflection.

Book Review: JC Ramo, The Age of the Unthinkable

This is my take on JC Ramo’s new book, The Age of the Unthinkable.

Communication technologies, globalization and the thick interconnections of both people and institutions have increased the systemic complexity of the world, reduced stability and bombarded us with near-constant change.  Traditional bureaucratic and hierarchical management structures do not have sufficient flexibility to respond to what is a crisis of predictability.  All complex systems (and that is the world we now face) contain internal dynamics that resist prediction when viewed from a single external perspective.  The old ways explain little, their managing hierarchies are often corrupted by “power, position and prestige”, and they increasingly lead us only to failure.

To respond successfully to this increasingly complex world, Ramo suggests a revolutionary (although evolutionary may be a better term) approach that looks to me very much like current Agile management methods.  Use small cross-functional teams that can evolve creatively in response to changing contexts and requirements.  Build dynamic systems that anticipate change and remain resilient and responsive in the face of it.  Analyze the world with imagination, as a holistic interconnected network.  In short, see, prepare and build in ways that can help make the unthinkable thinkable.

There is No Observation (Including Measurement) without Theory: The Stanley Fish View

As I have said before:

(B)ecause of a unified view of construct validity, theory (and hermeneutics) touches all aspects of validity and measurement.

One thing I meant by this is that you can’t do a good job of measuring practice or performance if you don’t understand how measures and practices are theoretically and empirically related, or as I said in my last post:

Any measure implies a theoretical rationale that links performance and measures, and it can be tested, validated and improved over time.

(Although his topic is faith, not measurement) Stanley Fish supports the same type of idea and writes in his NY Times column:

. . . there is no such thing as “common observation” or simply reporting the facts. To be sure, there is observation and observation can indeed serve to support or challenge hypotheses. But the act of observing can itself only take place within hypotheses (about the way the world is) . . . because it is within (the hypothesis) that observation and reasoning occur.

I would use the word theory instead of hypothesis, a word I reserve for research questions in an experimental context, but otherwise the meaning is much the same.

Fish goes on to describe an aspect of theory that explains why people do not like the challenges presented by theory and deep theoretical understanding.

While those hypotheses are powerfully shaping of what can be seen, they themselves cannot be seen as long as we are operating within them; and if they do become visible and available for noticing, it will be because other hypotheses have slipped into their place and are now shaping perception, as it were, behind the curtain.

I’m not saying it is easy; developing measures with deep understanding is difficult.  But I believe the effort is well worth it when the result is better, more relevant measures and better performance.

Improving Measures to Improve Performance

A post from the zapaterismo blog addresses the need for trust and for change to improve talent management; however, the focus of the post ends up centered on measurement. “Zap” says:

The sad reality in most organizations is:

1. Performance management processes don’t produce highly reliable data. They simply aren’t often helpful in reliably and objectively differentiating employee performance. The process that was once an “ass-covering exercise” has not been sufficiently adapted to the reality that most organizations (and the technology they leverage) are now relying heavily on performance data for making important talent decisions.

2. Other talent measures/processes, such as employee “potential” and promotion “readiness” ranking are most often based on gut, at best, and politics, at worst.

Any measure implies a theoretical rationale that links performance and measures, and it can be tested, validated and improved over time.  What “Zap” is having problems with are likely measures that come from common sense, or that perhaps only made real sense a long time ago.  Now, common sense is not a bad place to start, but it must be followed up by research and thought that builds a theoretical structure, turning common sense into theoretical understanding, which in turn makes it possible to validate your measures and build in improvements over time.
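One small, concrete way to start testing measures like the ones “Zap” complains about is to check whether they are even reliable, for instance by comparing two raters scoring the same employees.  The ratings below are invented, and simple percent agreement is only a crude first step (a chance-corrected statistic such as Cohen’s kappa would be better), but even this much turns a gut feeling about “unreliable data” into evidence.

```python
# Invented performance ratings from two raters on the same six employees.
rater_a = ["meets", "exceeds", "meets", "below", "exceeds", "meets"]
rater_b = ["meets", "meets", "meets", "below", "exceeds", "below"]

# Count how often the raters assign the same category.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)

# Low agreement is a warning that the measure, or the construct
# behind it, needs rethinking before it drives talent decisions.
print(f"{percent_agreement:.0%} agreement")
```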

Now, you might say: “Hold on there, we business people can’t be blooming scientists too!”  Yes, I believe you can.  Science, validity, measurement and similar concepts can become very complex in many circumstances, but at heart science is a simple concept and can be applied in ways that are not always of the same complexity.  People who say otherwise are usually looking at specific complex examples and expecting everything to be like those examples.  Instead, look at the core concept and what it suggests.  Some performance improvement may need new, as-yet-unthought-of tools, but I believe much can be accomplished by looking for tools just lying around unused at present.

A horizon summary

I think it boils down to two areas where I am knowledgeable, have an interest, where there is opportunity to apply that interest, and where the technology exists to allow reasonable implementation: (1) measurement and (2) lifelong / just-in-time learning.

Measurement – It has one purpose: to generate data.  But when combined with appropriate theory, data can provide invaluable support to decision-making, communication and experimental practice-improvement efforts.  People act either because they believe their actions will lead to desired outcomes or because they are following a tradition.  All action should be backed up and validated by data, but it often is not: because managers cannot articulate a theory linking action to outcome, because they don’t understand how to design relevant and efficient measures, because they don’t understand the knowledge that measures can generate, or because they don’t understand how important data is to communication and reporting.  Education is needed to help managers articulate operational theories, design appropriate measures and integrate them into everyday activities.  I recommend integrating measures into existing reporting and communication structures for efficiency, and aligning these structures with processes and with organizational strategy.

Learning – Existing learning structures are not sufficient to facilitate individual lifelong learning, organizational learning, or learning that is directly related to contextual needs.  As I have written before, we need social innovations in how organizations learn.  Also needed are pedagogy, technology and institutional structures designed for diverse, interdisciplinary, on-demand knowledge networks.  Social networks are growing (LinkedIn is one example).  What is missing from the learning opportunities in these networks are the mission structures and technical capabilities that would let dedicated learning organizations participate in ad-hoc networks, with the pedagogy and monetization strategies to make these networks efficient and effective learning nodes in the day-to-day activities of individuals and project teams.