One Description of Science and the Basis for an Argumentative Approach to Validity Issues

I came across an interesting metaphor for science (and structural ways of understanding in general) in the Partially Examined Podcast Episode #8.   Here is my take on the metaphor.

Imagine the world as a white canvas with black spots on it.  Over that, lay a mesh made of squares and describe what shows through the mesh.  We are describing the world, but as it shows through the mesh.  Change the mesh in size or in shape and we have a new description of the world.
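
To make the metaphor a bit more concrete, here is a toy sketch (my own illustration, not from the podcast). The spots, the mesh cells, and the numbers are all invented; the point is only that coarser and finer meshes yield different, equally usable descriptions of the same world.

```python
# A toy illustration of the mesh metaphor (an invented sketch, not from the podcast).
# The "world" is a fixed set of black spots; a "mesh" is a grid of square cells.
# Describing the world through a mesh means reporting which cells contain spots.
# Changing the cell size changes the description, even though the world does not change.

spots = [(0.12, 0.80), (0.15, 0.78), (0.55, 0.42), (0.91, 0.10)]  # the world

def describe(spots, cell_size):
    """Return the set of mesh cells (row, col) that contain at least one spot."""
    return {(int(y // cell_size), int(x // cell_size)) for x, y in spots}

for cell_size in (0.5, 0.25, 0.1):
    cells = describe(spots, cell_size)
    print(f"mesh with cell size {cell_size}: {len(cells)} occupied cells -> {sorted(cells)}")

# Each printout is a usable description of the same world, but none of them is the
# world itself: a coarse mesh merges the two nearby spots into one cell, a fine mesh
# separates them.
```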

Now, these descriptions are useful and allow us to do things, but they are not truth; they are description.  They may be highly accurate descriptions of an actual world, but they are still descriptions.  This is how science functions and how science progresses and changes.  It is also why I advocate an argumentative approach to validity in the use of scientific structures like assessment or the use of evidence.  Old forms of validity (dependent on criterion validity) and much of the current discussion of evidence-based approaches are about accuracy in certain forms of description.  But we must also allow for discussions of the mesh (to return to the metaphor).  As in construct validity, any discussion of how the world is must also include a discussion of how the mesh interacts with the world to create the description.

In addition to methods like randomized controlled trials (RCTs), there is also a need for research into how we understand and rethink the assumptions that often go unexamined in research.  RCTs are very good at helping us do things with very accurate descriptions (like describing linear causal processes).  We also need research that uses other meshes, research that allows us to understand in new ways and facilitates our ability to do new and different things; to make progress.

More on Representations and Doing

My last post was not advocating that we abstain from all representations in education and assessment; that is simply not possible.  What I am advocating is a careful evaluation of the representations we choose and how we choose to use them.

For example: it is common to engage in classroom training, based on representations of work actions, with follow-up assessments based on recall of those representations.  But we are not really interested in representation recall; we are interested in performance.  Many recall items have limited validity for actual performance, or at least this correspondence has not been evaluated.  It is easy to assess whether these representations can be recalled, but why should we care if simple recall has a limited impact on performance?  The actual actions needed in the work setting frequently have limited correspondence to the classroom representations, and as a result those representations have limited utility.

A better way is to provide multiple ways to support work actions; these can include classroom activities, but also informal social media tools, communities of performance, searchable web-based tutorials, and onsite performance support tools like coaching and collaborative teams.  These tools will likely make use of representations, but those representations will be more closely related to the actual actions required.  With learning assessments and with educational programs, it is always better to choose representations that are more closely related to, and valid for, actual performance.

Another example is Liz Coleman’s (President of Bennington College) TED Talk regarding the reinvention of liberal arts education.  She proposes a more active program of education.  Instead of learning about whatever subject is in focus, it is about learning to actively do.  Rhetoric, design, mediation and other ways of doing education gain prominence because of their ability to support active intellectual work; to act on challenges, not to recall words (representations) that are useful only for talking about challenges.  Her great takeaway line: “There is no such thing as a viable democracy made up of experts, zealots, politicians and spectators”.  What is needed are participants.

There Is No Learning without Doing

Learning is doing as doing is learning.  The primary biological purpose of neurological activity is found in how it is associated with motor activity in ways that allow an organism to couple with its environment through movement (Maturana & Varela, 1992).  The problem with many educational activities is in how they overemphasize representation and underemphasize the activity that is the true target of learning.  Most classroom activities focus on manipulating symbols and lead to the recall of those symbolic representations.  Problems occur in the gap frequently seen between recalling representations and the ability to act on those representations.  What is needed instead is a holistic approach that creates a new portfolio of actions to replace an old portfolio (Zeleny, 2008).  The outcome of learning activities, the test of learning if you will, should be the performance of new activities, not the recall of representations.  Zeleny gives an example of this type of thinking through his assessment approach to strategy.  Strategy is not found in plans and statements.  It is found in the actions a company takes.  If you want to know what a company’s strategy is, look at the actions it takes and the structure of those actions.  Do not look at what it says.  Talking does not convey strategy; acting does.

Workplace learning is moving beyond the classroom to a holistic approach to performance improvement.  Emphasizing learning transfer is not going to significantly help performance unless it recognizes that learning is about doing, not about grasping representations of doing.  Real learning is not learning about doing; it is learning to do.  Classroom training may occupy a small part of any program, but most training and performance support must focus on the location where actions are performed and on what is actually needed to support doing.

References

Maturana, H.R. & Varela, F.J. (1992). The Tree of Knowledge: The Biological Roots of Human Understanding. Boston, MA: Shambhala Publications.

Zeleny, M. (2008). Strategy and strategic action in the global era: Overcoming the knowing-doing gap. International Journal of Technology Management, 43, 64-75.

Frames for Using Evidence: Actions, Processes and Beliefs

As a follow-up to my last post, there are three frames of reference that are important to my thinking about being evidence-based.

  1. The unit of analysis is action, not thinking.  Evidence-based programs are often focused on decision-making, but action is a better focal point.  Why?  First, focusing on actions helps to make a direct connection from evidence to consequences and outcomes.  Second, our actions and thinking are closely related; action gets at both thinking and acting.  Neuroscience has recently begun to confirm what psychology (Vygotsky) and philosophy (Wittgenstein) have believed for a while: that cognition is closely tied to muscle control and acting, and that there is a neurological link between doing and thinking.
  2. Evidence-based information is best directed toward practices, processes or programs. Much of the evidence-based literature is directed toward decision-making, and while this is important, many aspects of practice are made up of decisions that are organized by repeatable processes, programs or protocols.  The intense effort that is sometimes needed in order to be evidence-based may be better justified by the wider effect seen in focusing on the programs and processes that support everyday decision-making.
  3. The basis for most thoughtful actions is theory or belief. These may range from extensively developed nomothetic theoretical networks to well-founded beliefs, but the relevance of evidence-based information lies in its effect upon these beliefs and theories, which in turn guide decisions and program actions.  There is no such thing as facts without theory or belief.  The role of evidence is to support (or fail to support) the beliefs that underlie actions.

Howe’s Critique of a Positivist Evidence-based Movement with a Potentially Valid Way Forward

A summary of Kenneth Howe’s article criticizing positivism and the new orthodoxy in educational science (evidence-based education).

(Howe, K.R. (2009). Epistemology, Methodology, and Education Sciences: Positivist Dogma, Rhetoric, and the Education Science Question. Educational Researcher, 38(6), 428-440.)

Keywords: Philosophy; politics; research methodology

“Although explicitly articulated versions (of positivism) were cast off quite some time ago in philosophy, positivism continues to thrive in tacit form on the broader scene . . . now resurgent in the new scientific orthodoxy.” (p.428)

(A positivist stance on science) has sought to “construct a priestly ethos – by suggesting that it is the singular mediator of knowledge, or at least of whatever knowledge has real value . . . and should therefore enjoy a commensurate authority” (Howe quoting Lessl, from Science and Rhetoric).

Howe traces the outline of this tacit form of positivism through the National Research Council’s 2002 report titled Scientific Research in Education and relates this report to three dogmas of positivism:

  1. The quantitative–qualitative dichotomy – A reductionist dogma that had the consequence of limiting the acceptable range of what could be considered valid in research studies.
  2. The fact–value distinction – An attempt to portray science as a value-free process, with the effect of obscuring the underlying values in operation.
  3. The division between the sciences and the humanities – Another positivist distinction, designed to limit any discussion to a narrow view of science.

Howe’s article does a good job of summarizing the general critiques of positivist methodology: (1) its overall claims could not stand up to philosophical scrutiny, (2) it tended not to recognize many of its own limitations, including failing to apply adequate standards to itself, and (3) it harbored a political agenda that sought to stifle and block many important directions that inquiry otherwise might have taken.

The crux of the political matter: While the goal of positivism may have been to establish an objective, verifiable method of conducting social science modeled on the physical sciences, the primary result was an attempt to politically limit the scope of what could be considered meaningful scientific statements to only those statements that were verifiable in a narrow positivist sense. Howe is among the cohort who believe that the evidence-based movement is being used by some as a context to advance a tacit return to a form of positivism.

The crux of the scientific matter: Howe’s primary interest appears to be political, the politics of how research is received and funded, but there is also an effectiveness issue.  Positivism’s primary scientific problems are its tendency to ignore or downplay many of the limitations of positivist methods (overstating the meaning of positivist research) and the way it oversimplifies and fails to problematize the rather complex relationship between research and practice.

Messick’s Six Part Validity Framework as a Response

There are four responses to Howe in this journal issue. To me, none of the responses address the primary issue at play: bringing some sense of unity to varying ideas and supporting communication among people using different scientific methodological frameworks.  There are suggestions to allow for multiple methods, but they amount to a juxtaposition of methods rather than a framework that serves to guide and support communication and understanding among scientists using differing methods.  This is why I support Messick’s validity framework as a response to just this type of concern.  Although Messick spoke specifically of test validity, there is nothing that would preclude this framework from being applicable to practice validity and to the development of post-positivist evidence to support the validity of practices.  What is the evidence-based movement really concerned with, if not the validity of the practices being pursued by practitioners?  This is not primarily about the validity of individual research studies; it is about the validity of practices and developing evidence to support the validity of specific practices.  It is also a mature framework that considers the full range of inquiry when developing evidence.

Messick’s six areas for developing validity are six different types of validity evidence.  Here is an initial set of ideas about how they might relate to evidence-based practice (a small organizational sketch follows the list):

  • Content – Content defines the scope of the practice domain, with evidence (including rationales and philosophical debates) for the relevance of a particular practice, how the practice represents ideas within the general domain, and its technical quality as compared to other examples of the practice.
  • Substantive – Evidence that the design of the actual processes involved is consistent with design knowledge from relevant domains (e.g., psychology, sociology, engineering).
  • Structural – The consistency between the processes involved and the theories that underlie and support rationales for the structure of the actual process.
  • External – Empirical data to support criterion evidence (randomized controlled trials (RCTs) would be one example).  For many practices this may include both convergent and discriminant evidence.  (My thinking is still in development here, but I think empirical evidence from the research base would function more like criterion evidence.  Direct empirical evidence from the actual practice being validated would, in most situations, be considered under consequential evidence.  See below.)
  • Generalization – Evidence that the practice is relevant and effective across different populations, contexts and time periods.
  • Consequential – Evidence that the practice is actually achieving the purpose it was originally intended to achieve.
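
To make the framework slightly more operational, here is a minimal sketch of how an evidence portfolio for a single practice might be organized as a checklist over the six facets.  The structure is my own rough interpretation, not Messick’s, and the practice and evidence entries are hypothetical placeholders.

```python
# A rough organizational sketch (my own interpretation, not Messick's) of how
# evidence for a practice might be filed under the six facets. The evidence
# entries below are hypothetical placeholders.

FACETS = ("content", "substantive", "structural", "external",
          "generalization", "consequential")

practice_evidence = {
    "content":        ["rationale linking the practice to the domain"],
    "substantive":    ["design review against relevant psychological findings"],
    "structural":     [],  # no evidence gathered yet
    "external":       ["published RCT with a comparable population"],
    "generalization": [],
    "consequential":  ["local outcome data from the first year of use"],
}

def validity_gaps(evidence, facets=FACETS):
    """Return the facets for which no evidence has been collected yet."""
    return [f for f in facets if not evidence.get(f)]

print("Facets still lacking evidence:", validity_gaps(practice_evidence))
# -> ['structural', 'generalization']
```

Even a simple checklist like this makes the argumentative point: validity is a judgement built from an accumulating, multi-faceted body of evidence, not a single criterion score.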

I consider this list to be an early formulation with more development needed.  Critiques are most welcome.

Messick’s original formulation for test validity is available here.

Evidence-Based Management as a Research/Practice Gap Problem

This is a response I made to a post on the Evidence Soup blog about the potential demise of EBMgmt.
I’ve been thinking about the health of the movement in response to (Tracy’s) post and I’m still surprised by the lack of EBMgmt discussion and how the movement does not seem to be gaining much traction. I revisited the Rousseau–Learmonth and the Van de Ven and Johnson–McKelvey discussions for potential reasons why (both are in Academy of Management Review, vol. 31, no. 4, 2006). Here’s my take after reading them:
(1) Cognitive, Translation and Synthesis Problems: First, just like the example Rousseau gave in her Presidential Address, there are too many different concerns and issues floating about. We need the field to be more organized so people can get a better cognitive handle on what’s important. Also, I’m not sure peer review is the best strategy. When I did my dissertation, doing something exciting took a back seat to doing something bounded and do-able. I can’t imagine someone who’s publishing for tenure doing anything more than incremental work, which does not translate well for the cognitive and translation reasons above. We need a synthesis strategy.
Possible response – An EBMgmt wiki. See my 7-31 post on scientific publishing at howardjohnson.edublogs.org.
(2) Belief problems – Henry Mintzberg believes that managers are trained by experience and that MBA programs should be shut down (3-26-09 Harvard Business IdeaCast). He says that universities are good for that scientific management stuff, but implies that science is only a small part (management is mostly tacit stuff). All the previously mentioned discussions noted that managers and consultants do not read the scientific literature. Part of the problem is communication (see #1), but part is current management paradigms that include little science.
Possible response – Far be it from me to suggest how to deal with paradigm change.
(3) Philosophical Problems – If EBMgmt is to succeed, it must be presented as a post-positivist formulation. Taken at face value, it seems positivist, and positivism has been so thoroughly critiqued that I could see where many people would dismiss it out of hand. Part of my own project is trying to be post-positivist without throwing out the baby with the bath water. Rousseau tries to mollify Learmonth’s concerns that touch on this area; she sees some of the issues, but I don’t see understanding. A positivist outlook will only lead you in circles.
Possible response – It’s much like your previous post: you need “both/and” thinking, not “either/or” thinking. EBMgmt must be an art and a science. This is how I understand the validity issue that I’ve mentioned to you before. I use Messick’s validity framework as a model for post-positivist science. It’s also important because measurement is at the heart of science.
I would love your thoughts.

The Integration of Design and World: More on Design Thinking

This post responds to Anne Burdick’s invitation concerning the presentation Design without Designers (found in the comments of my last post).  I will address the question: why would educational theory build on design concepts, or how do I see the relation between education and design? I will look at three areas:

  • Erasing the distinction between art and science
  • Artifactual cultural aspects of psychology
  • The trans-disciplinary nature of ideas

Erasing the distinction between art and science

I see general changes in the practice of science along the following lines:

  • The critique of positivism (for promising more than methodology could ever deliver)
  • The critique of postmodernism (for fetishizing certainty; i.e., if positivism fails, then scientific judgement cannot be trusted at all), and
  • More acceptance for addressing real world problems (where problems tend to be interdisciplinary and often involve mixed methods).

The result is that many of the walls and silos of science have been reduced, including the distinction between art and science.  For example, I often refer to judgements based on validity.  Although validity uses rational and empirical tools, building a body of evidence and arriving at a combined judgement is more like telling a story.

Artifactual cultural aspects of psychology

The work of (or derived from) Vygotsky is popular in psychology and education.  It has also proved consistent with, and complementary to, the recent findings of the “brain sciences”.  While there are genetic and hardwired aspects of psychology, the structure of our minds can be said to reflect, to a great extent, the structure of the social and artifactual world that we live in.  The design of the world is more than just a decorative environment for an autonomous mind; it has an impact on who we are, both in our development and in how we interact with it in our ordinary lives.

Our delineation of the subject matter of psychology has to take account of discourses, significations, subjectivities, and positionings, for it is in these that psychological phenomena actually exist. (Harré and Gillett, 1994, The Discursive Mind and the Virtual Faculty)

The trans-disciplinary nature of ideas

Ideas never seem to respect the traditional academic disciplinary structure the way that methods and theories did during most of the 20th century.  In the mid-90s a graduate school mentor pointed out that you could read many books of the time and have no clue about the discipline of the author without reading the back cover.  Psychologists, educators, literary critics, philosophers, sociologists and yes, designers: they all often seem to be speaking the same language about the same kinds of things.

In Conclusion

  • The distinction between art and science is dissolving.  Method is important, but it does not rule.  Achieving a scientific breakthrough is analogous to creating a work of art (even though it still uses rational and empirical tools).
  • The design of our world is not just decoration; it reflects who we are, and who we are reflects the design of the world.
  • Tools (artifacts, concepts, theories, etc.) are needed to act on the world.  Where these tools come from is less important than our ability to make use of them.

So in all the above ways, design and design thinking are everywhere.  Do I think designers should be more present in my own thinking, as both a technical adjunct and as a foundation of my thought and of the academic curriculum?  Yes, I do!  What do you think from a designer’s perspective?  How do the thinking of designers and current design curricula fit into the above ideas?

Cartesian Problems in Communicating about Designing and Design Thinking

An interesting article – Thinking About Design Thinking – by Fred Collopy, blogging for Fast Company.  Fred considers that, “As (Design Thinking) is a way of talking about what designers can contribute to areas beyond the domains in which they have traditionally worked, about how they can improve the tasks of structuring interactions, organizations, strategies and societies, it is a weak term”, because it makes a “distinction between thinking and acting.”

As Fred points out, Design Thinking is beset by the Cartesian mind–body problem, which is frequently rejected today.  One form of rejection is found in the idea that “thought” has its genesis in “action”: first you learn to walk, and then you learn to think about where you want to go.  A similar idea (attributed to Bakhtin) is that Cartesian thinking unnecessarily divides being from becoming, where the abstractions of disembodied thought never fully capture either the actions of our lives or the moral aspects of those actions.

This is especially important for education, which often has it exactly backwards: trying to teach you how to think so that you can then go out into the world to act.  Education would be so much more valuable if there were no dichotomous walls (classroom/world, schooling/working, or even the idea that education is a four-year quest for certification instead of an ongoing quest for knowledge).

Review of Jacobs’ Management Rewired: Chapters 1 & 2

I am working my way through Charles Jacobs’ Management Rewired.  I have some reservations about his findings, but he does a good job of framing psychology’s and education’s relationships with business specifically and with practice in general. The next few posts give a chapter-by-chapter listing of first impressions.

Chapter 1 focuses on emotion over logic in decision-making.  I think this chapter is potentially confusing.  First, his card game example does not prove emotion is better than logic.  What it suggests to me is that the emotional parts of the subjects’ minds picked up on the underlying logic implied in the game before the reasoning portion could state it.  This is more in line with Malcolm Gladwell’s line of thought in his book Blink: the emotional mind was able to understand the logic of the game before the logical mind could express that logic.

What does this mean?  Well, we should pay attention to our gut.  However, science exists in large part because our gut responses are so often wrong.  Jacobs does do a good job of expressing the holistic way that the mind works and of suggesting that practice should reflect the function of the mind.  For example, we may have a sound logic behind a practice, but that practice will be much more effective if we are emotionally behind it.

In chapter 2, Jacobs talks about the primacy of perception.  We can’t experience the world directly; we experience it through our mind’s perceptions, and the world we experience may not be remotely similar to other people’s perceptions. Therefore, ideas, theories, paradigms, metaphors and the like play a big role in our perception.  This idea also reflects what I believe about measurement and assessment.  You can’t understand what you’re measuring if you don’t base your measurements on theories.  The theoretical world and the empirical world are in a dialectical relationship.  You might think of them as two sides of the same coin.  I’m grateful for how Jacobs gives voice to this idea.

Jacobs also echoes a theme in chapter 2 that I attribute first to Vygotsky: the relationship between lower mental functions and higher mental functions:

Lower Mental Functions (LMF): (are) inherited, unmediated, involuntary, . . . Higher Mental Functions (HMF): are socially acquired, mediated by meanings, voluntarily controlled and exists in a broad system of functions rather than as an individual unit (from the Lev Vygotsky Wiki).

LMFs (memory, perception, emotion, etc.) can be controlled to a certain degree when they are mediated through HMFs, which correspond to Jacobs’ stories, paradigms, metaphors or theories.  The one caveat relative to Vygotsky is that the fMRI studies Jacobs references give us a much clearer sense of what these mental functions are, how they function, and how they are related to each other.  Vygotsky thought of LMFs as being isolated from each other, something that current knowledge and (I believe) Jacobs would refute.

Side-bar: I am becoming somewhat uncomfortable with the way that Jacobs uses neuroscience.  These fMRI studies are important and enlightening, but as an educational psychologist, I see a much broader field of knowledge (like the above reference to Vygotsky).  Neuroscience seems to be used as a rhetorical device, the way science often is.  For example, newspaper articles will often read “studies say” when they want to place the authority of science behind their conclusion, even when that conclusion is not scientific.  Neuroscience is a relatively new field that most people know little about, and referencing it can give one a certain authority that psychology would not supply.  Yet there is little in neuroscience that is really useful without taking it back to a general understanding of psychology and (in some cases) education.  I don’t begrudge Jacobs; you have to find a way to sell your ideas.  But I do step back and look closely at the way he uses neuroscience in framing his (rhetorical) arguments.

Synergy, hermeneutics and simplicity

Synergy, hermeneutics and simplicity are at the heart of my thinking, my advocacy and my ideas for learning and supporting performance.  I think it will be difficult for me to be understood without making what these mean more explicit.

Synergy – On 4-2-09 I posted an idea (How to Think:) drawn from the blog of Ed Boyden of MIT’s Media Lab, who wrote: “Synthesize new ideas constantly. Never read passively. Annotate, model, think, and synthesize while you read . . . ”  Creativity is essential to success, and nothing supports creativity more than the synergy that comes from synthesizing ideas in new ways.  It is also at the heart of learning.  New knowledge must be integrated with existing knowledge to make sense, and this often requires synthesis.

Meaning Making (Hermeneutics) – Our brains and sensory systems can process an enormous amount of information, but it is all chaos (psychologist William James’s “blooming, buzzing confusion”) until we make meaning out of it.  Meaning is not a given; it is a human and (for the most part) a learned achievement.  Like synthesis, creating meaning (hermeneutics) is a basic skill needed for successful practice.  When I advocate for measurement, it is as a tool for making meaning.

Simplicity – In my 3-19 post, Writing to Tame the Chaos, I advocated for simplicity in academic writing that communicates beyond one’s disciplinary silo.  With the help of Cunha & Rego (2008), I would like to extend simplicity as a general approach to practice in my next post.  For now I will just comment that synthesis and meaning making are supported by simplicity.  Science can be very complex: think of statistical path analysis, double-blind randomized controlled trials, and item response theory in test construction.  But all these ideas grow out of a relatively simple idea of science: create a model to account for observations, develop a hypothesis, and collect evidence to test the hypothesis.  You may decide that a path analysis is appropriate to your context, but attempt to return at every step to the simplest, most parsimonious understanding.
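
As a closing illustration of that simple cycle, here is a minimal sketch: the hypothesis is that an intervention raises scores, and the evidence is a plain permutation test.  The groups, the scores, and the “intervention” are all invented for illustration.

```python
# A minimal sketch of the simple cycle described above: model, hypothesis, evidence.
# The data and the "intervention" are invented for illustration only.
import random
from statistics import mean

control      = [71, 68, 75, 70, 72, 69, 74, 73]   # hypothetical scores
intervention = [76, 74, 79, 75, 78, 77, 80, 73]

# Hypothesis: the intervention raises scores. Evidence: the observed difference
# in means, checked against a permutation test (if the group labels were
# arbitrary, how often would a difference this large appear?).
observed = mean(intervention) - mean(control)

pooled = control + intervention
random.seed(0)
count = 0
trials = 10_000
for _ in range(trials):
    shuffled = random.sample(pooled, len(pooled))
    diff = mean(shuffled[len(control):]) - mean(shuffled[:len(control)])
    if diff >= observed:
        count += 1

print(f"observed difference: {observed:.2f}, permutation p ~ {count / trials:.3f}")
```

More elaborate machinery may sometimes be warranted, but the underlying logic stays this simple: a model of what should happen, a hypothesis derived from it, and evidence collected to test that hypothesis.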