The Place of Tech in Ed Tech

This is a follow-up to, or another view of, my last post. George Siemens posted a goodbye to his involvement in Ed Tech because:

(E)ducational technology is not becoming more human; it is making the human a technology. Instead of improving teaching and learning, today’s technology re-writes teaching and learning to function according to a very narrow spectrum of single, de-contextualized skills. . . . (Ed Tech programs) require the human, the learner, to become a technology, to become a component within their well-architected software system. Sit and click. Sit and click. So much of learning involves decision making, developing meta-cognitive skills, exploring, finding passion, taking peripheral paths. Automation treats the person as an object to which things are done. There is no reason to think, no reason to go through the valuable confusion process of learning, no need to be a human. Simply consume. Simply consume. Click and be knowledgeable.

Two points. First, this is partially the result of technology without an ontology and an appropriate teleology. There is no question that Ed Tech is more efficient at whatever it is doing, but without specifying an ontology, it’s really not possible to know what it is doing. This was an underlying problem with behaviorism: behaviors were being changed, but without a framework that would clue you in to the “what,” the “why,” and the “to what end.” This is why so much Ed Tech is no more than a more complex Skinnerian teaching machine.

Second, tech can be used as a more efficient substitute for a human in simple transactional interactions (think ATMs, self-checkout lines or checking your flight status), but not in systems that are highly variable (try getting software or customer support help from an automated system; it’s usually a disaster). Simple decontextualized skill acquisition is an important part of education, but only a small part. Current Ed Tech is good for memorizing math facts, increasing reading levels or memorizing basic decontextualized domain facts, but the hope for education is for much more. Ed Tech is striving to do more, but here are three aspects where I believe Ed Tech is nowhere near being a substitute for a teacher:

  1. Fostering creativity. This is advanced language use (including math) to evaluate and synthesize knowledge and to reach new combinations, new uses and new ideas.
  2. Engaging in social practices. Most of what we do is not to just use knowledge, but to engage with practices that we share with other people, or as Wittgenstein put it; to engage in language games. These are things that even deep AI cannot come close to imitating.
  3. Developing meaningful networks and connections with other people. This may be the most important ability in the future, and the only way it can be learned is in direct engagement with other people.

I believe that technology can help in these areas, not as a substitute for teachers, but by fostering new affordances for teachers. This is an intense pedagogical research project and, from what I’ve seen so far, will require new tech. As an example, consider the text editor. Conceived as a replacement for handwriting or the typewriter, it enables new affordances like email, blog posts, spelling and grammar checking, and language translation. All these things extend human capabilities but cannot substitute for them. Ed Tech will require teachers to become more capable and knowledgeable in advanced pedagogy, and it will make teachers more efficient, but only if it creates new affordances for teachers. It must recognize and constitute a new pedagogical framework that centers on the teacher and the teacher-student dyad.

Instructionism, Constructionism and Connectivism: Epistemologies and Their Implied Pedagogies

Ryan2.0’s blog recently hosted a discussion on different pedagogies based on Instructionist, Constructionist and Connectivist  theories of learning.  I tend to see these differences on an epistemological / psychological / psychometrics level.  (I’m an educational psychologist, not a philosopher.)  I think this line of thinking is helpful for exploring some of my recent thoughts.

First, a note: I resist labels on learning theories.  A consensus may be developing, but there are so many sub-positions that if you look at 100 constructivist positions, you’ll find 100 different takes (as evidenced by many of the comments on Ryan’s post).  I just find labels unsatisfying as points of reference for communication in learning theories at this time; they convey too little meaning to me.  Tell me what you don’t like about a learning theory; I probably don’t like it either.

What’s the Point

Ryan’s main point is that all of these pedagogical positions are evident in current education practices and we should think in terms of “and,” not “or.”  This fits with my own view that paradigm shifts should proceed by subsuming, or at least accounting for, the successful parts of the previous paradigm, while enabling teachers and scientists to move beyond problematic aspects of older theories.  To really understand these different theories, it will be good to see how pedagogy changes as we move from one to the next.  My post here looks at each of these theories in terms of epistemology / psychology / psychometrics, and then discusses a place where their implied pedagogies are relevant to practice today.

Direct Instruction

I’m not familiar with instructivism per se, but it seems similar to direct instruction, a pedagogy associated with positivism / behaviorism.  Direct instruction often uses empirically based task analyses that are easy to measure and easy to employ.  Applied Behavior Analysis is a specialized operant behavioral pedagogy that is a prime supporter of direct instruction.  Many, if not most, classrooms use direct instruction in some form today.  It seems like common sense, and many teachers may not be aware of the underlying epistemology.

One prominent area where advanced use of direct instruction is growing is computer-based adaptive learning like the Knewton platform. Students follow scripted instruction sequences. A student’s specific path within the script is determined by assessments that follow Item Response Theory (IRT) protocols.  The assessment estimates a student’s command of a latent trait and provides the next instruction appropriate for the assessed level of that trait.  The best feature of adaptive learning systems is their efficiency in moving students through a large body of curriculum or in making leaps in skill levels, like the improvement of reading levels.  Because this kind of learning is also easy to measure, it’s possible to use advanced psychometric computer analyses.
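To make the mechanics concrete, here is a minimal sketch of the kind of adaptive loop such a system runs, using a one-parameter (Rasch) IRT model. This is an illustration of the general IRT idea, not Knewton’s actual algorithm; the function names and the simple gradient update are my own assumptions.

```python
import math

def p_correct(theta, b):
    # Rasch (1PL) model: probability that a student with ability
    # estimate `theta` answers an item of difficulty `b` correctly.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_ability(theta, b, correct, lr=0.5):
    # Nudge the latent-trait estimate toward the observed response:
    # one gradient step on the log-likelihood of that response.
    observed = 1.0 if correct else 0.0
    return theta + lr * (observed - p_correct(theta, b))

def next_item(theta, item_difficulties):
    # Under the 1PL model, the most informative item is the one whose
    # difficulty is closest to the current ability estimate.
    return min(item_difficulties, key=lambda b: abs(b - theta))
```

Each sit-and-click cycle is then: pick `next_item`, record the response, call `update_ability`, repeat. Real platforms add content constraints, exposure control, and multi-parameter models, but the loop has this same shape.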

Critiques of direct instruction can be similar to critiques of behaviorism in general.  Even though test developers are becoming more sophisticated in measuring complex constructs (e.g., Common Core), the learning that results from direct instruction can still be seen as lacking in conceptual depth and in the ability to transfer to other knowledge domains.  It also doesn’t directly address many important higher-level cognitive skills.


Enter constructivism.  I think of constructivism as beginning with Piaget’s learning through schema development.  Piaget’s individual constructive approach is expanded by social theorists and ends up with embodied theorists, or in ideas similar to Wittgenstein’s: that knowledge and meaning are closely linked with how they are used.  Wittgenstein’s early work was similar to the work of the logical positivists.  He eventually found that meaning in everyday activities is inherently circular, and the only way to break out is not through precision but to look for meaning in what people are doing and how they are using knowledge.  In some ways it’s like a return to behaviorism, but with a position that is more in line with hermeneutics than empiricism.

I recently saw a presentation of an instructional program (MakerState) based on the Maker / Hacker Space movement that functions much like a constructivist approach to education.

MakerState kids learn by doing, by creating, designing, experimenting, building…making. Our makers respond when challenged to think outside the box, to think creatively and critically, to collaborate with their peers, to problem solve, to innovate and even invent solutions to challenges they see around them.

This program can be founded on the same curriculum as that used in direct instruction when developing maker challenge activities, and it can use this curriculum to scaffold maker activities with STEAM principles.  But the outcomes are open-ended, and outcome complexities are well beyond what is possible through direct instruction.  Learning by doing is more than just an aside.  Making knowledge concrete is actualizing it: taking it from the abstract to make it meaningful, valuable and productive.  But is this the end of educational objectives; does success in life not require even more?


Enter Connectivism.  I associate connectivism with the work of George Siemens and Stephen Downes.  I take this post from George as a good summary of Connectivism:

The big idea is that learning and knowledge are networked, not sequential and hierarchical.  . . . In the short term, hierarchical and structured models may still succeed. In the long term, and I’m thinking in terms of a decade or so, learning systems must be modelled on the attributes of networked information, reflect end user control, take advantage of connective/collective social activity, treat technical systems as co-sensemaking agents to human cognition, make use of data in automated and guided decision making, and serve the creative and innovation needs of a society (actually, human race) facing big problems.

I believe this take on Connectivism is modeled on computer and social media networks.  My own take is to include a more biological approach as another major node in connectivism: M.M. Bakhtin, a Russian literary critic known as a dialogic philosopher.  I want to draw this connection because dialogism is a reasonable way to make sense of the everyday collective co-sensemaking activity of an organism interacting with its environment.  I see this as understanding the underlying way networks function when biological organisms (i.e., humans) are involved.

One of Bakhtin’s main ideas is heteroglossia:

(A)ll languages (and knowledges) represent a distinct point of view on the world, characterized by its own meaning and values. In this view, language is “shot through with intentions and accents,” and thus there are no neutral words. Even the most unremarkable statement possesses a taste, whether of a profession, a party, a generation, a place or a time.  . . . Bakhtin goes on to discuss the interconnectedness of conversation. Even a simple dialogue, in his view, is full of quotations and references, often to a general “everyone says” or “I heard that..” Opinion and information are transmitted by way of reference to an indefinite, general source. By way of these references, humans selectively assimilate the discourse of others and make it their own.

Just as water is the medium that allows fish to swim, language is the medium that facilitates networks.  Rather than focus on words as the base unit, Bakhtin focuses on the utterance as his main unit of analysis.  This is from the main Wikipedia Bakhtin article:

Utterances are not indifferent to one another, and are not self-sufficient; they are aware of and mutually reflect one another… Every utterance must be regarded as primarily a response to preceding utterances of the given sphere (we understand the word ‘response’ here in the broadest sense). Each utterance refutes, affirms, supplements, and relies upon the others, presupposes them to be known, and somehow takes them into account…

I see this as a detailed account of the Wittgensteinian use argument I made earlier.  From a psych perspective, I take it this way: the inner psychological world reflects and models the interactions we have with the world.  Because learning is facilitated by social interaction with other people in dialogue, our mind is structured in a dialogical fashion.  This is to see knowledge as existing not only through network nodes, but through nodes that reflect dialogue and interconnected utterances. (This is similar to structuralism, but goes well beyond it in its implications.) Even when we are learning through self-study, we structure that study in a dialogical fashion.  When we engage in soliloquy, we posit a general other to whom we address our words.  Transferring knowledge is not just cutting and pasting it to another node in the network; we must also adjust to new intentions, new references, and often to the tastes of a new profession or discipline.  I don’t know what the neurological correlates of dialogic activity are, but at the conscious level of cognition (and at some unconscious levels), I see the mind as structured by its interaction with this complex social / speech world.

I don’t yet have a good example of a pedagogy that reflects this dialogic connective theory.  It would certainly be activity-based and structured more like an open-ended apprenticeship with some sort of performance.  I’m thinking that some relevant learning objectives would include: higher-order cognition in unstructured situations (e.g., knowledge transfer, problem identification and solving, creative thinking, situated strategic thinking), intrapersonal dispositions (e.g., motivation, persistence, resilience, and metacognition like self-directed learning) and interpersonal skill sets (e.g., collaboration, effective situated communication, relationship development).

I think a key to achieving a higher level of connective pedagogy is valid assessment in an area where assessment has proven difficult.  Assessment in this context must also be ontologically responsible to the student.  The purpose of ontologically responsible assessment is not to rank, rate, or judge either students or teachers; that is a task for other assessments.  Instead, ontologically responsible assessment is a way of making ourselves visible, both to ourselves and to others, in a joint student-teacher activity that conveys the student’s history and future horizons.  (Horizon = a future that I can see only vaguely, but that contains a reasonable route to achievement, given both the student’s and teacher’s joint commitment to each other and to the path.)  Education as a doable, visible, committed and ontologically responsible joint activity of student and teacher.

I’m never satisfied with an ending, but this seems like a good jumping-off point for another post and another time.  I feel the need for input before going further in this direction.


#CCK11 – The Bias in Frames Is an Integral Part of Design, Innovation and Education

Serendipity draws me back into the frames discussion, this time through Jon Kolko’s Magic of Design series on the Fast Company Design Blog.  This post also relates to an assertion that the arts are integral to the 21st-century economy.  Most people’s everyday work lives operate in something close to a scientific orientation, but we still need access to a more biased and creative orientation.  Integrating the arts into our social workspaces gives us inspiration to add design thinking to those workspaces, and the process is explained through Jon Kolko’s Magic of Design.

I’ve previously discussed Jon’s first two posts, on a process for innovation and on providing work space to explore deviant ideas.  His last post in this series is about the importance of bringing frames, perspectives and biases to the design process.  The statement “the true test of a first-rate mind is the ability to hold two contradictory ideas at the same time” is attributed to F. Scott Fitzgerald.  To participate in design processes, the trick is to bring both diversity and this type of intelligence to your processes.  In this case, we cannot ignore science as a way of driving our actions, but we also need creative innovation, and in some ways science and innovation are at opposing ends of a spectrum.  Sometimes we need to embrace our biases.  As Jon explains it:

For as a designer stands in front of a whiteboard in a war-room, surrounded by anecdotes, quotes, pictures, sketches, and working models — and searching for a new, innovative, and persuasive idea — she is relying on her ability to connect something in her own life with something in the data she’s gathered. She is purposefully applying a frame of bias to objective, empirical data, in order to produce something new.

This is called sensemaking.  . . . “the interplay of action and interpretation rather than the influence of evaluation on choice.”  . . . all of this (design activity) is useless if the people doing the synthesis aren’t very interesting. Synthesis requires a team of varied and highly eclectic designers who are empowered to embrace their biased perspectives. . . . Groundbreaking design doesn’t come through statistical regression testing, metrics, and causality. It comes from the richness of a biased perspective on the world.

Here is the primary issue: how do we hold multiple perspectives as important and shift between them on an everyday basis?  There is no place where this is more important than in education.  What kind of environment can help us to function better in this way?

#CCK11 Education: Stretching the Mind by Adopting New Frames

A follow-up on the frames discussion, prompted by reading Jamshed Bharucha’s Education as Stretching the Mind.  Jamshed places the idea of re-framing as a central goal of education, which he states like this:

Learn new frameworks, and be guided by them.  But never get so comfortable as to believe that your frameworks are the final word, . . .

He defines frameworks broadly:

a range of conceptual or belief systems — either explicitly articulated or implicitly followed. These include narratives, paradigms, theories, models, schemas, frames, scripts, stereotypes, and categories; they include philosophies of life, ideologies, moral systems, ethical codes, worldviews, and political, religious or cultural affiliations. These are all systems that organize human cognition and behavior by parsing, integrating, simplifying or packaging knowledge or belief. . . .

But there is a problem.  Frames are necessary to reduce cognitive chaos and complexity to a manageable level, but the mind also has an overwhelming bias to maintain these frames, even in the face of disconfirming evidence, and sometimes frames even create perceptions that are just plain wrong.

The brain maps information onto a small set of organizing structures, which serve as cognitive lenses, skewing how we process or seek new information. These structures drive a range of phenomena, including the perception of coherent patterns (sometimes where none exists), the perception of causality (sometimes where none exists), and the perception of people in stereotyped ways.

But the plasticity of the brain can allow us to change our minds, albeit within limits and with much effort, critical tools, reasoning, and the support of ethical and committed people called educators.  This brings neuro-linguistic programming to mind: I’ve always thought that therapy should be grounded in education, but maybe education should be grounded in therapy.  I believe strongly in positive psychology, but maybe we can also benefit from curing some of our diseased conceptions.

How Do You Innovate

Jon Kolko wrote How Do You Transform Good Research Into Great Innovations? on the Fast Company Design blog.  I would summarize his view as design synthesis, which involves:

  1. Visualize your data
  2. Search for Patterns
  3. Develop and experiment with different models (his definition of a model: a visual representation of an idea, an artifact)

A good process.  This point is important:

Because these are thinking tools, tools for synthesis, there’s only one wrong way to do this: not doing it at all. Looking at the data and talking about the data doesn’t count. If it isn’t modeled, written, drawn, and otherwise solidified in an artifact, it never happened. (Emphasis added)

More on Mediation

I’ve previously discussed cognitive mediation here, but today I want to consider the foundation or the roots (etiology) of this concept in my thinking.

  1. Marx considered labor a form of mediation that explains how humans interact with their environment (this was guided by Hegel’s version of dialectical theory, usually stated as thesis-antithesis-synthesis).  Marx did not delve much into the specifics of how mediation worked, except as he used the idea to focus on the way that labor became subservient to capital, thereby alienating laborers. (See note 1.)
  2. Vygotsky extended the psychological aspects of this view of mediation by analyzing how language and concepts act like cognitive tools that enable humans to give meaning to perception (he spoke of translating lower psychological functions, like perception, into higher psychological functions, like meaning). (See note 2.)  Mediation then enabled humans to interact with and modify their environment (or to perform labor).  Vygotsky also noted that mediators are not usually developed by individuals out of thin air, but already exist in the surrounding culture, and people acquire these abilities by imitation, instruction or similar means.
  3. Gal’perin noted that not all cognitive mediation is equal.  Tools can be improved to make labor more effective or efficient.
  4. Hence my idea that it is good to be aware of the mediation you’re using.  If your goal is to improve the performance of people’s labor, understand what mediators are guiding performance.  Consider developing better mediation and passing it on by learning new ways of mediating, by changing mediators in work processes, or by both methods.  This approach may improve performance far better than increasing individual effort (like bootstrapping). The bottom line: if we are to fully enter into a “knowledge age”, we must understand how knowledge mediates to improve our practices and labors.
  1. Note – My own personal opinion is that Marxist analysis is frequently very enlightening.  But, considering the general failure of communism and central planning, Marxism generally fails to offer any viable alternative ways to organize human activity.
  2. Note – Vygotsky began as an enthusiastic Marxist, hoping that Marxism would lead to the end of Jewish persecution.  He died young from TB, but lived to see his ideas attacked because he committed the Stalinist sin of referencing Western ideas, like those of William James or Jean Piaget.  His ideas, although explicitly Marxist in their original intent, have generally been taken up by sociocultural educational psychology (cognitive psychology that sees culture as the place where cognition originates and enculturation as important to cognitive development).  He is generally ignored by Marxist theorists today; I believe it is because he focused on the mediational side and not on the alienation side of the Marxist equation.

Ockham’s Razor: the Psychological Need for this Important Philosophical Concept

I think that Ockham’s Razor (“entities must not be multiplied beyond necessity”) deserves wider discussion and application.  The issue for me is not that simpler is better; it’s that complexity in any intellectual artifact will often do more to obscure meaning than to enlighten us. This came to my attention while listening to the Goldman Sachs testimony and the general feeling that financial engineering was too complex for the US Congress to understand.  I believe this complexity in financial systems was multiplied beyond necessity, and the most obvious reason is that the complexity helped to obscure what was going on between traders.

Science is also not without fault, not only with complex theoretical statements, but also with the expansion of vocabulary.  Sometimes theoretical or lexical complexity is necessary in order to communicate nuances.  But then the complexity often takes on a life of its own.  This not only restricts the ability to communicate, but also taxes cognition.  Reducing complexity can help us to think better.

An example from my early life in graduate school: I was once reading Henry Giroux late at night, finished a paragraph, and realized that I had understood nothing from that paragraph.  Two more readings of the paragraph did not bring any more enlightenment.  Of course, it was because I was not familiar with the vocabulary or with the arguments he was presenting.  Now when I find a new Giroux book, I’ll scan through the pages to see if there have been any changes or developments to his basic argument.

It’s not only experience that causes this to happen; it’s also that I now understand Giroux’s arguments at a much simpler level.  When we cannot simplify our cognition, we are forced to understand things in a much more rote fashion.  It happens in methodology too!  The more complex the methodology, the more likely that people will use set methodological formulas or use others’ work in unquestioned ways.  When methodology can be simplified, our ability to cognitively manipulate ideas is increased.

So, for me, Ockham’s Razor does not mean that simple theories are the best, but that simple understandings allow us to do our best thinking.

Critical Thinking, Scientific Reasoning, and the Incorporation of Evidence into Everyday Practice: A Conceptual Symbiosis

It seems to me that there is a natural affinity between evidence-based practice, scientific reasoning and critical thinking.  I think Kuhn (quoted in Dawson, 2000) captures the essence of this symbiosis:

I have undertaken here to show that these two abilities–the ability to recognize the possible falsehood of a theory and the identification of evidence capable of disconfirming it–are the foundational abilities that lie at the heart of both informal and scientific reasoning. These abilities lie at the heart of critical thinking, which similarly can be regarded, at the most global level, as the ability to justify what one claims to be true (Kuhn, 1993).

Some background considerations and directions for future thoughts and research.

  1. I’m taking the perspective that what cognitive control we have over our decisions and actions is mediated by our beliefs, theories, schemas and prior knowledge.  Without this mediation, everyday actions would represent an unbearable cognitive load.
  2. Although there are good strategies for enabling critical thinking, at its core critical thinking is the ability and disposition to seek disconfirming evidence and use it to change our minds (beliefs, schemas, theories, etc.).
  3. Although we often equate scientific thinking with the scientific method (hypothesis testing), the core of its reasoning is also the disposition to seek and make use of disconfirming evidence.
  4. Evidence-based organizations must actively support critical thinking through their culture and in the organization of their internal processes and practices.
  5. Practice validity (seeking evidence for the validity of organizational practices) is the ability to justify the efficacy of our actions, just as Kuhn considers critical thinking to be a way to justify our claims to truth.

A shout-out to Harold Jarche, whose post Critical thinking in the organization led me down this primrose path.


Dawson, R. (2000). Critical thinking, scientific thinking, and everyday thinking: Metacognition about cognition. Academic Exchange Quarterly. Accessed 4-8-10.

Kuhn, D. (1993). Connecting scientific and informal reasoning. Merrill-Palmer Quarterly, 39(1), 74-103.

A New Path for Organizational Learning? Developing Discipline Specific Higher Order Thinking Skills for Evidence-based Practice

I am thinking of two ways of addressing evidence-based practice.  These are two ways in which one might devise consultative approaches for moving organizations toward evidence-based practice.  The one I have been discussing lately is to evaluate processes, practices and practice protocols in terms of the evidence for their validity.  A second way is an educational approach: to develop individual and team abilities in the higher-order thinking skills that are necessary to collect and use evidence in daily decision-making.  This is the approach taken by Middendorf and Pace (2004).  As Middendorf and Pace point out, the types of higher-order skills that are needed in many situations are often tied to specific disciplinary ways of thinking rather than to generic formulas of higher-order thinking skills.  Their way of modeling the analysis skills needed to interpret and apply evidence is called decoding the disciplines, which can be conceived as seven steps to uncover and solve problematic or unsuccessful thinking:

  1. Identify bottlenecks: places where evidence is not being used or where analysis is breaking down.
  2. Identify how experts respond to these types of situations.
  3. Identify how expert thinking can be modeled.
  4. Devise feedback methods to scaffold expert thinking.
  5. Devise ways to motivate learners to progress toward becoming expert thinkers.
  6. Devise assessments to monitor progress.
  7. Plan for sharing learning and making this approach a part of the organizational culture.

The latest issue of The Chronicle of Higher Education (11-18-09) reports on the attempt to develop this approach at Indiana University in Bloomington.  David Pace’s history courses at IU attempt to develop two skills that he feels are core to the discipline of history: “assembling evidence and interpreting it”.

“Students come into our classrooms believing that history is about stories full of names and dates,” says Arlene J. Díaz, an associate professor of history at Indiana who is one of four directors of the department’s History Learning Project, as the redesign effort is known. But in courses, “they discover that history is actually about interpretation, evidence, and argument.”

The Chronicle reports that the history curriculum at IU is now organized around specific analytic skills and the different course levels by which they should be mastered.

Volume 98 of the journal New Directions for Teaching and Learning was devoted entirely to this topic.  It includes examples of the decoding methodology as it is applied to history, marketing, statistics, genetics, molecular biology, astronomy, the humanities, physiology, and a specific chapter devoted to supporting the assessment step.

I have a kind of initial excitement about this approach.  I’ve known that learning and education are important to all kinds of organizations today, and I’ve always been enamored of the meme that businesses must become more like universities.  Decoding the Disciplines is a potential methodology that could cross over between these two very different universes and also provide a model for organizational learning.


Middendorf, J. & Pace, D. (2004). Decoding the Disciplines: A Model for Helping Students Learn Disciplinary Ways of Thinking, New Directions for Teaching and Learning, 98, 1-12.


Glenn, D (2009). A Teaching Experiment Shows Students How to Grasp Big Concepts, The Chronicle of Higher Education, Nov 18, 2009.

Design, Hermeneutics, Wittgenstein and Our Ethical Commitment to the World

I. A New Understanding of Design

The Harvard Business IdeaCast #160 is an interview with Roberto Verganti, the author of Design Driven Innovation: Changing the Rules of Competition by Radically Innovating What Things Mean.  Verganti’s ideas about design point to the etymology of design, derived from the Latin designare: to designate, or, as Verganti presents it, to ascribe meaning.  For Verganti, designing seems to be an act of hermeneutics: finding interpretations that change the meaning of things or services.  One of the examples Verganti gives is Sony, which invented the idea of the personal, portable music player with the Walkman, later digitalized as the CD-playing Discman.  Later, it was Apple that put music on portable hard disks and subsequently changed the meaning of a personal portable music device, leading to the success of the iPod.  The marketing success of these products was not due to any kind of technical advantage; many companies, including Sony, could have put music on portable hard disks.  Success came in how Apple was able to change the meaning and the place of music in people’s lives.  In other words, it was a hermeneutic act.  Understanding design as interpretation helps to clarify how design thinking is relevant to human activity across many functions.  First, a short diversion that will hopefully lead to a deeper understanding of hermeneutics.

II. Hermeneutics in Contemporary Philosophy

I’m not a professional philosopher, but hermeneutics was a buzzword from my graduate days in the 90s, and here is my take on it.  Whether you trace the philosophical line of thought from Schleiermacher to Gadamer or from Nietzsche to Derrida, hermeneutics and meaning have played a central role in contemporary philosophy.  For me, hermeneutics culminates in the later ideas of Wittgenstein because of the scope of his thought, which includes important spaces for science and ethics.

Humans are meaning-making organisms that are realized as they participate within situated forms of life.  Just like the story of Adam and Eve naming the animals, we actively experience the world around us and ascribe meaning to the world and to those experiences; there seems nothing else we can do.  Words and experiences are better understood if they are not thought of as abstract representations of the mind, but rather as the means of hermeneutic action in the context of a lived life.  Deed before word.  This draws from the idea that we do not directly experience the world: whether we are perceiving objects or our experiences, they are understood through meaning-creating mediators like scientific methodology.  Changing the meaning changes our basic understanding of what the object or experience will be.

According to Janik (2002), Wittgenstein took many fundamental ideas from Heinrich Hertz, who believed that “rhetorical adequacy is as important as architecture” (see p. 8) when talking about scientific models.  Hertz gave three criteria for scientific models:

  • They must be logically permissible (i.e., internally consistent, empirically correct).
  • They must be communicatively appropriate or effective.
  • They must be useful in a given situation.

You can see in Hertz the origin of the thought that even scientific meaning is derived from action; again, the idea of deed before word, or action that leads to meaning.  This is also the space where ethics enters the conversation.  A focus on action leads to discussions of usefulness and of the need to evaluate the consequences of action.

Contemporary hermeneutics is not trivial.  It is a profound view of the world and of what humans are thought to be.  It was well expressed in a quoted passage from Slavoj Žižek’s Parallax View, recently discussed and posted by Jeff Meyerhoff in Philosophy Autobiography.

. . . from Kierkegaard and Nietzsche to the late work of Wittgenstein, the most radical authentic core of being human is perceived as a concrete practico-ethical engagement and/or choice which precedes (and grounds) every “theory,” every theoretical account of itself, and is, in this radical sense of the term, contingent . . . (Quoting Fichte) “What philosophy one chooses depends on what kind of man one is.” . . . in the last resort there is not theory, just a fundamental practice-ethical decision about what kind of life one wants to commit oneself to.

III. Hermeneutics as Design

As a final task, I will look at hermeneutics as design through the different ideas that I’ve been discussing recently:

  • Validity – As I’ve said before, Messick’s idea of validity can be thought of as a hermeneutics of measurement, and that in turn serves for me as another path by which hermeneutics enters into an account of science.  Science (through measurement and theory) is not a raw empirical experience of the world, but a hermeneutic application of experience.  It should not be divorced from art as it so often is; both remain contingent on one’s commitments and practico-ethical decisions.  The Messick account of validity is consistent with Wittgenstein when he emphasizes that validity is found in the interpretation of test use, not in the abstract qualities of a test, and when he says that validity should include an evaluation of the consequences of assessment.  An obvious statement of ethical import.
  • In response to an earlier query from Ann Burdick, who wonders why non-designers are so active in design conversations: design, as a hermeneutic act in Verganti’s sense, is a form of life that is understood at some level by all humans.  Artists, directors, writers and the like acquire specialized skills and expertise at interpretation within the mediums of their specialization, but at some level we are all practico-ethical designers and artists of our lives and of those around us that we touch.  Specialization gives them the ability to speak for the broader society, but all people have a need to act designerly.
  • In terms of Fred Collopy’s Management by Design, this is an acknowledgement of the central role of managers as practico-ethical interpreter heroes, leading society through the chaotic world of business.  Designing managers must master the range of hermeneutic tools that allow them and the organizations they lead to re-interpret and to change the meanings of their historical circumstances, en route to envisioning and acting on a new and changed future.
  • In terms of the evidence-based movement, evidence and science are profound tools of interpretation, just as they were for Wittgenstein.  You might say they are the sword and shield of our business hero.  But our manager heroes are also like King Arthur, in that their true strength lies not in their weapons, but in their commitment to those around them, theirs to him, and in the practical and ethical choices they all make, or fail to make.

Evidence-based approaches, science, art, design, theories, words: these are the tools of our choices, and they are not trivial tools.  They are the best and most productive tools we have to create our lived world, but they do not release us from the need to make the ethical choice.  And as Bob Dylan said, “You’re gonna have to serve somebody.”