A Practice Perspective on the Quants and the Humanists

Lee Drutman responded to Timothy Egan’s New York Times article about creativity and Big Data.

First, TE says that companies like Amazon that are based on quantitative methods are not creative because they have “marginalized messiness”. LD responds that “(d)ata analysis and everything that goes into it can be highly creative”, meaning (I guess) that Quants can get down in the mess too. Both are good points, but they miss another aspect that unites the arts / humanities and the sciences, and this is the heart of my argument. Both are creating practices that affect our lives in important ways. The point is that we all create. It’s not whether we are or are not creative. It’s a question of what we are creating. From John Shotter’s Cultural Politics of Everyday Life:

But now, many take seriously Foucault’s (1972: 49) claim that our task consists of not – of no longer – treating discourses as groups of signs . . . but as practices that systematically form the objects of which they speak.

In other words, it’s not whether the Quants are creative, but whether their analyses treat me as an object to be controlled or as a human being whose being the analysis respects. That’s called ontologically responsible assessment. Again, from Shotter:

I want to argue not for a radical change in our practices, but for a self-conscious noticing of their actual nature.

We should offer people clear and understandable analysis that lets them make new connections, but that also respects and is responsible to their rights as persons. Yes, as Lee claims, the sciences and the humanities can work together. But beyond that, they are both human-based social practices. If we see them as practices à la Foucault, there is much more in common than is different. They are both not only creative, they are creating.

Instructionism, Constructionism and Connectivism: Epistemologies and Their Implied Pedagogies

Ryan2.0’s blog recently hosted a discussion on different pedagogies based on Instructionist, Constructionist and Connectivist theories of learning. I tend to see these differences on an epistemological / psychological / psychometric level. (I’m an educational psychologist, not a philosopher.) I think this line of thinking is helpful for exploring some of my recent thoughts.

First, a note: I resist labels on learning theories. A consensus may be developing, but there are so many sub-positions that if you look at 100 constructivist positions, you’ll find 100 different takes (as evidenced by many of the comments on Ryan’s post). I just find labels unsatisfying as points of reference for communication about learning theories at this time; they convey too little meaning to me. Tell me what you don’t like about a learning theory; I probably don’t like it either.

What’s the Point?

Ryan’s main point is that all of these pedagogical positions are evident in current education practices and that we should think in terms of “and”, not “or”. This fits with my own view that paradigm shifts should proceed by subsuming, or at least accounting for, the successful parts of the previous paradigm, while enabling teachers and scientists to move beyond problematic aspects of older theories. To really understand these different theories, it helps to see how pedagogy changes as we move from one to the next. My post here looks at each of these theories in terms of epistemology / psychology / psychometrics, and then discusses a place where the implied pedagogy is relevant to practice today.

Direct Instruction

I’m not familiar with instructivism per se, but it seems similar to direct instruction, a pedagogy associated with positivism / behaviorism. Direct instruction often uses empirically based task analyses that are easy to measure and easy to employ. Applied Behavior Analysis is a specialized operant behavioral pedagogy that is a prime supporter of direct instruction. Many, if not most, classrooms use direct instruction in some form today. It seems like common sense, and many teachers may not be aware of the underlying epistemology.

One prominent area where advanced use of direct instruction is growing is computer-based adaptive learning, such as the Knewton platform. Students follow scripted instruction sequences. A student’s specific path within the script is determined by assessments that follow Item Response Theory (IRT) protocols. The assessment estimates the student’s command of a latent trait and provides the next instruction appropriate to the assessed level of that trait. The best feature of adaptive learning systems is their efficiency in moving students through a large body of curriculum or in making leaps in skill levels, like improvements in reading level. Because the learning is also easy to measure, it is possible to use advanced psychometric computer analyses.
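To make that mechanism a little more concrete, here is a minimal sketch (in Python) of the kind of loop such a system might run: estimate the latent trait from the responses so far, then pick the unanswered item that is most informative at that estimate and serve the instruction keyed to it. This is only an illustration of the general IRT protocol under a simple Rasch (1PL) model, not Knewton’s actual algorithm; the item names, difficulties and responses are invented for the example.

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability that a student with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses):
    """Crude grid-search maximum-likelihood estimate of ability.
    `responses` is a list of (item_difficulty, answered_correctly) pairs."""
    grid = [x / 10.0 for x in range(-40, 41)]  # ability values from -4.0 to 4.0
    def log_likelihood(theta):
        total = 0.0
        for b, correct in responses:
            p = p_correct(theta, b)
            total += math.log(p if correct else 1.0 - p)
        return total
    return max(grid, key=log_likelihood)

def next_item(theta, item_bank, answered):
    """Pick the unanswered item with maximum Fisher information at theta.
    Under the Rasch model, item information is p * (1 - p)."""
    candidates = [(name, b) for name, b in item_bank.items() if name not in answered]
    def information(entry):
        p = p_correct(theta, entry[1])
        return p * (1.0 - p)
    return max(candidates, key=information)[0]

# Hypothetical item bank: item id -> difficulty (in logits).
item_bank = {"frac_add_1": -1.0, "frac_add_2": 0.0, "frac_mult_1": 0.8, "ratio_1": 1.5}

# The student answered the first item correctly and missed the second.
responses = [(-1.0, True), (0.0, False)]
theta_hat = estimate_theta(responses)
print(theta_hat)                                                      # about -0.5
print(next_item(theta_hat, item_bank, {"frac_add_1", "frac_add_2"}))  # frac_mult_1
```

Real platforms presumably use much richer models (more item parameters, multiple skills, change over time), but the estimate-then-select cycle is the core of what “adaptive” means here.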

Critiques of direct instruction can be similar to critiques of behaviorism in general. Even though test developers are becoming more sophisticated in measuring complex constructs (e.g., the Common Core), the learning that results from direct instruction can still be seen as lacking in conceptual depth and in the ability to transfer to other knowledge domains. It also doesn’t directly address many important higher-level cognitive skills.

Constructivism

Enter constructivism. I think of constructivism as beginning with Piaget’s learning through schema development. Piaget’s individual constructive approach is expanded by social theorists and ends up with embodied theorists, or in ideas similar to Wittgenstein’s: that knowledge and meaning are closely linked with how they are used. Wittgenstein’s early work was similar to the work of the logical positivists. He eventually found that meaning in everyday activities is inherently circular, and the only way to break out is not through precision, but to look for meaning in what people are doing and how they are using knowledge. In some ways it’s like a return to behaviorism, but with a position that is more in line with hermeneutics than empiricism.

I recently saw a presentation of an instructional program (MakerState) based on the Maker / Hacker Space movement that functions much like a constructivist approach to education.

MakerState kids learn by doing, by creating, designing, experimenting, building…making. Our makers respond when challenged to think outside the box, to think creatively and critically, to collaborate with their peers, to problem solve, to innovate and even invent solutions to challenges they see around them.

This program can be founded on the same curriculum used in direct instruction when developing maker challenge activities, and it can use this curriculum to scaffold maker activities with STEAM principles. But the outcomes are open-ended, and the complexity of those outcomes is well beyond what is possible through direct instruction. Learning by doing is more than just an aside. Making knowledge concrete is actualizing it: taking it from the abstract to make it meaningful, valuable and productive. But is this the end of educational objectives? Does success in life not require even more?

Connectivism

Enter Connectivism. I associate connectivism with the work of George Siemens and Stephen Downes. I take this post from George as a good summary of Connectivism:

The big idea is that learning and knowledge are networked, not sequential and hierarchical.  . . . In the short term, hierarchical and structured models may still succeed. In the long term, and I’m thinking in terms of a decade or so, learning systems must be modelled on the attributes of networked information, reflect end user control, take advantage of connective/collective social activity, treat technical systems as co-sensemaking agents to human cognition, make use of data in automated and guided decision making, and serve the creative and innovation needs of a society (actually, human race) facing big problems.

I believe this take on Connectivism is modeled on computer and social media networks.  My own take is to include a more biological approach as another major node in connectivism: M.M. Bakhtin, a Russian literary critic known as a dialogic philosopher.  I want to draw this connection because dialogism is a reasonable way to make sense of everyday collective co-sensemaking activity by an organism interacting with its environment.  I see this as understanding the underlying way networks function when biological organisms (i.e., humans) are involved.

One of Bakhtin’s main ideas is heteroglossia:

(A)ll languages (and knowledges) represent a distinct point of view on the world, characterized by its own meaning and values. In this view, language is “shot through with intentions and accents,” and thus there are no neutral words. Even the most unremarkable statement possesses a taste, whether of a profession, a party, a generation, a place or a time.  . . . Bakhtin goes on to discuss the interconnectedness of conversation. Even a simple dialogue, in his view, is full of quotations and references, often to a general “everyone says” or “I heard that..” Opinion and information are transmitted by way of reference to an indefinite, general source. By way of these references, humans selectively assimilate the discourse of others and make it their own.

Just as water is the medium that allows fish to swim, language is the medium that facilitates networks. Rather than focus on words as the base unit, Bakhtin focuses on the utterance as his main unit of analysis. This is from the main Wikipedia article on Bakhtin:

Utterances are not indifferent to one another, and are not self-sufficient; they are aware of and mutually reflect one another… Every utterance must be regarded as primarily a response to preceding utterances of the given sphere (we understand the word ‘response’ here in the broadest sense). Each utterance refutes, affirms, supplements, and relies upon the others, presupposes them to be known, and somehow takes them into account…

I see this as a detailed account of the Wittgensteinian use argument I made earlier. I take it from a psych perspective: the inner psychological world reflects and models the interaction we have with the world. Because learning is facilitated by social interaction with other people in dialogue, our mind is structured in a dialogical fashion. This is to see knowledge as existing not only through network nodes, but through nodes that reflect dialogue and interconnected utterances. (This is similar to structuralism, but goes well beyond it in its implications.) Even when we are learning through self-study, we structure that study in a dialogical fashion. When we engage in soliloquy, we posit a general other to whom we address our words. Transferring knowledge is not just cutting and pasting it to another node in the network. We must also adjust to new intentions, new references, and often to the tastes of a new profession or discipline. I don’t know what the neurological correlates of dialogic activity are, but at the conscious level of cognition (and at some unconscious levels), I see the mind as structured by its interaction with this complex social / speech world.

I don’t yet have a good example of pedagogy that reflects this dialogic connective theory. It would certainly be activity based, structured more like an open-ended apprenticeship, and involve some sort of performance. I’m thinking that some relevant learning objectives would include: higher-order cognition in unstructured situations (e.g., knowledge transfer, problem identification and solving, creative thinking, situated strategic thinking), intrapersonal dispositions (e.g., motivation, persistence, resilience, and metacognition like self-directed learning) and interpersonal skill sets (e.g., collaboration, effective situated communication, relationship development).

I think a key to achieving a higher level of connective pedagogy is valid assessment in an area where assessment has proven difficult. Assessment in this context must also be ontologically responsible to the student. The purpose of ontologically responsible assessment is not to rank, rate, or judge either students or teachers. That is a task for other assessments. Instead, ontologically responsible assessment is a way of making ourselves visible, both to ourselves and to others, in a joint student-teacher activity that conveys the student’s history and future horizons. (Horizon = a future that I can see only vaguely, but that contains a reasonable route to achievement, given both the student’s and the teacher’s joint commitment to each other and to the path.) Education as a doable, visible, committed and ontologically responsible joint activity by student and teacher.

I’m never satisfied with an ending, but this seems like a good jumping-off point for another post and another time. I feel the need for input before going further in this direction.

 

Seeing Students Develop: From Objective Data to Subjective Achievement

Even though the personalization / individualization of instruction is being driven by objective data in learning platforms, this data can also be used to facilitate a deeper self-understanding, commitment, and understanding between the student and the teacher.

To see the future, students and teachers should focus on their horizons. Horizons here refer to a point in developmental time that can’t be seen clearly today, but that I can reasonably expect to achieve in the future. Because many aspects of this developmental journey are both precarious and dependent on future actions, this joint vision can’t be wishful thinking, but must be clearly framed in terms of privileges and obligations. When it is treated this way, assessment is not a picture of student achievement, but a method for making both student and teacher visible to each other in a way that is rational, meaningful and conducted in an ontologically responsible manner; that is, in a way that is true to who we want to become (Shotter, 1993).

This model of support begins with valid assessments that are clear and explicit about their meaning, the underlying values they imply and their actual or expected consequences. The learning process can then be understood from a narrative perspective as well as mathematically. By referencing empirically supported path models, personalization can include choice, preparing the way for stronger commitment, clearer learning directions, and possibly experiments involving those directions.

This idea is not to suggest that assessment must become less objective, but to recognize that an education process must contribute to the development of a subject. Educating a student is not like designing a computer chip. It is about helping an individual actualize their unique capabilities while finding themselves and their place in society. The goal of education is intellectual development. Approaches that are tethered to a mechanistic model of education will fail in this goal, and cannot even be justified on the grounds of the efficiency they claim. Assessment may start with objective visions, but its uses must directly translate to the subjective tasks that are central to both teacher and student.

4 Reasons Adaptive Learning Could Replace High Stakes Standardized Testing (It’s in the Validity)

I attended a recent NYC edtech meetup at Knewton. While looking at the promotional materials on their platform, it occurred to me that this system has a stronger basis in validity than high-stakes standardized testing (HSST). I know it’s a (big) data-driven approach, likely similar to what I was familiar with at Sylvain, except that digitalization allows you to address many more dimensions in the data, to cross-reference different domain skills and to better represent intellectual development over time. This post is about the validity of big data adaptive learning systems as compared to HSST.

  1. The easiest distinction to make is the contrast between the “snapshot in time” nature of HSST and the developmental histories of adaptive learning. Development is the way students and teachers understand school-based learning, especially when it’s not linear but proceeds in fits and starts. Nor does a snapshot relate well to the purposes of assessment. In adaptive learning, error is not a judgement but an occasion for more learning.
  2. This point may seem esoteric, but I think it is important. HSST must represent an ambitious construct interpretation; that is, a single HSST question must represent the same learning that is represented in hundreds if not thousands of questions in an adaptive learning system. And while the assessments in the adaptive system are part of the learning process, HSST constructs often stand outside of any pedagogical process. (See #1 below.)
  3. There are negative consequences associated with HSST. Because of the lag time between testing and reporting, HSSTs have less instructional relevance. Assessments in adaptive learning provide immediate feedback and are instrumental to the learning process. There are also many unintended consequences, like instructional time wasted on test prep or the dissociation of error from an opportunity to learn.
  4. Assessments are consequential for students. In adaptive learning, assessments determine the instructional pathway the student will pursue. If done well, the student will perceive the assessment to have been appropriate and helpful. In many HSSTs (e.g., the SAT), the assessment may be perceived as a threat and associated with a lack of opportunity. (See #2 below.)

It seems to me that as adaptive learning becomes more common and its validity becomes recognized, HSST will no longer be needed.

#1.  “If the IUA does not claim much (e.g., that students with high scores on the test can generally perform the kinds of tasks included in the test), it does not require much empirical support beyond data supporting the generalizability of the scores. A more-ambitious interpretation (e.g., one involving inferences about some theoretical construct) would require more evidence (e.g., evidence evaluating the theory and the consistency of the test scores with the theory) to support the additional claims being made”.  Kane (2013) p.3

#2. “The SAT is a mind-numbing, stress-inducing ritual of torture. The College Board can change the test all it likes, but no single exam, given on a single day, should determine anyone’s fate. The fact that we have been using this test to perform exactly this function for generations now is a national scandal”. NYTimes

 

A New Form for Validity

Thinking about new projects. Here are the general contours of a new way of looking at validity.

  1. There have been criticisms of Samuel Messick’s unified view of construct validity and of Kane’s argument-based approach. I have yet to accept any logical argument made against either framework, yet I am sympathetic when it is said that these frameworks are not practical administratively.
  2. Consider an argument made by the philosopher Karl Popper. Popper makes a distinction between justification and criticism on the way to his famous idea of falsificationism. Just as one cannot claim that one’s theory is true through experimentation (you can only be sure of your results if they are false), so too it is precarious to justify one’s beliefs, but easy to demonstrate that they are false. Justification can be seen as a next-to-impossible task, but a criticism is much easier to establish as true. If we respond to criticism with a desire to improve and adjust our beliefs, then our beliefs will approach a closer version of what you might call truth. So the best way to justify assessment validity is by being open to criticism; always seeking to improve through critical reasoning.
  3. This does not nullify Messick’s framework (Messick, 1995), but it shifts it from justification to a framework for critique and critical thinking. Messick’s framework moves away from a hopelessly difficult attempt at justification and becomes a critical framework for knowledge transparency. Recent developments in philosophy have demonstrated the contingent nature of knowledge and how its shape is determined by the form of its production. Messick’s transparent critical framework for the production of assessment knowledge is the best way to see these underlying contingencies.
  4. Kane’s framing of validity as an argument is more suited to a critical approach than to a justificationist approach. The very nature of argument sets up a two-sided dialogue. Every argument presupposes a dialogic counter-argument. If you enter into an argument, you must be willing to entertain and engage with critical positions. Kane’s framework is more suited to responding to criticism than to depending on justification.

Scaffolding Start-ups: A New Role for Education

How does education change in the future? The biggest trend that must be dealt with is change itself, on a massive scale. Knowledge is both increasing and decaying on an exponential trajectory. What you knew 5 years ago may not be relevant today, and many capabilities you need today were unheard of 5 years ago. Careers also change, requiring retraining. The web is open and accessible and puts content at our fingertips, but it does little to help us structure that information in ways that build our capabilities. Capabilities require more than just knowledge; they require active practice-based learning. Capabilities also require different types of resources, many of which may still be very scarce or expensive. We need new educational institutions that provide new types of resources that build capabilities: the ability to do.

Here is an example of a new model of an incubator institution: Brewery Inc, a brewery incubator based out of Houston recently profiled in FastCo. Brewery Inc provides shared access to professional brewing equipment, a shared workspace, business workshops, a tap room with an established customer base and a regulatory framework, all to support aspiring nano-brewery entrepreneurs in developing their product before venturing out on their own. All of this helps to mitigate start-up costs and risks through shared knowledge and resources.

“Brewers pay $1,500 for the year to use one of Brewery Inc.’s fermenters. In exchange, “we’re actually taking all the licensing under our name, and taking all the responsibility for those brewers,” Borrego says. “When they’re ready to open their own business, the beer is perfect, the market is there, brand is established, and they’re fully ready to focus on the business aspect.”

As an educational pedagogy, what this brewery is doing is scaffolding the start-up process. Entrepreneur nano-brewers are able to learn by doing, and what they are able to do is extended by Brewery Inc.’s knowledge and resources. When they leave the nest (so to speak) they are ready to fly on their own. Sure, some could have survived on their own, but Brewery Inc.’s process accelerates and deepens their learning and their knowledge of how to make a brewery happen.

What types of resources are needed to scaffold start-ups in other businesses?

Psychology and Management: Dealing with Dysfunction and Cross Purposes

Stowe Boyd’s recent post reminds me that management is a social, psychological and, indeed, a human endeavor. Part of this endeavor concerns dealing with the unavoidable dysfunction that will arise.

The Problem:

Far too often organization members operate at cross-purposes. This is particularly true with organizational structures enabling division of labor: members are grouped into divisions, functions, and departments, and then further split into groups and teams, in order to create specialized functioning on behalf of the larger system . . . members too often come to identify with the parts rather than the whole to which they belong. . . . with predictable misalignments in purpose, activities and relationships. (Kahn, 2012, p.225)

 

a particular person or leader may be carrying all kinds of unconscious anxieties, aggressions, and energies of those being led; bloody mergers, acquisitions, downsizing or combative relations with competitors or the world at large may veil all kinds of individual and group fears and inadequacies; a corporate group’s understanding of its external environment may be dominated by the unconscious projections of a few key managers; a strong corporate subculture may be mobilizing neglected aspects of a corporate “shadow” that are worthy of attention and of being brought to light. (Ross)

A Solution:

In understanding these hidden dimensions of everyday reality, managers and change agents can open the way to modes of practice that respect and cope with organizational challenges in a new way. . . . They can begin to untangle sources of scapegoating, victimization, and blame and find ways of addressing the deeper anxieties to which they are giving form. They can approach the “resistance” and “defensive routines” that tend to sabotage and block change with a new sensitivity, and find constructive ways of dealing with them. (Ross, Ibid)

References

William A. Kahn, The Functions of Dysfunction: Implications for Organization Diagnosis and Change, Consulting Psychology Journal: Practice and Research, 2012, Vol. 64, No. 3, 225–241.

Gordon Ross’s blog: http://gordonr.tumblr.com/post/42860195841/structures-rules-behaviours-believes-and-the

 

 

Validity: the Overlooked Issue in Big Data

Validity is an important but often overlooked issue whenever measurement and data analysis are involved, and this includes Big Data applications. As in Steve Lohr’s NY Times article on the potential pitfalls of Big Data (Do the models make sense? Are decision makers trained and using data appropriately?) or Nassim Taleb’s article, Beware the Big Errors of Big Data, validity concerns are paramount, but the nature of validity is not addressed.

Validity is an overall evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of interpretations and actions on the basis of test scores or other modes of assessment (Messick, S., 1995, Validity of Psychological Assessments, p. 741).

That is to say, when we look at data analytics, are the results justifiable? Just having data doesn’t make it right. Big Wrong Data can be a dangerous thing.

As big data becomes a larger part of our everyday life, validity must also become a critical component of analysis, especially if big data is to find success beyond the current fashion. As Samuel Messick (ibid) said:

. . . validity, reliability, comparability, and fairness are not just measurement principles, they are social values that have meaning and force outside of measurement whenever evaluative judgments and decisions are made. (Messick, Ibid).

This importance is not reflected in the scant treatment that validity often receives in data and measurement training or in most discussions of big data. The modern view of validity (after Samuel Messick) is about more than judging the rightness of one’s measures; it is also about the transparency of the assumptions and connections behind the measurement program and processes. I’ll propose the following (non-exhaustive) list as a place to begin when judging the use of data and measurement:

  • Content Validity – Data and measurement are used to answer questions, and the first step in quality measurement is getting the question right and clear. Measurement will not help if we’re asking the wrong questions or making the wrong inferences from ambiguous questions. When questions are clear you can proceed to link questions to appropriate construct measures.
  • Structural Fidelity – Additional information should show how assessment tasks and data models relate to underlying behavioral processes and the contexts to which they can be said to apply. Understand the processes that underlie the measures.
  • Criterion Validity – This examines convergent and discriminant empirical evidence in correlations with other pertinent and well-understood criterion measures. Do your results make sense in light of previous measures? (A toy sketch of this comparison follows the list.)
  • Consequential Validity – Of particular importance are the observed consequences of the decisions that are being made. As Lohr’s article points out, our data-based operations do not just portray the world, but play an active role in shaping the empirical world around us. It’s important to compare results with intentions.
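As a deliberately toy illustration of the criterion validity bullet above, here is a short Python sketch that compares a new measure’s convergent correlation (with an established measure of the same construct) against its discriminant correlation (with a measure of a different construct). The measure names and scores are invented for the example; in practice the same comparison would be run on real score vectors, and the judgment would also weigh sample size and measurement error.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented scores for the same ten students on three measures.
new_reading_measure = [12, 15, 9, 20, 14, 18, 11, 16, 13, 19]
established_reading_test = [54, 61, 40, 78, 58, 70, 47, 66, 55, 75]  # same construct (convergent)
math_fluency_probe = [33, 70, 45, 52, 61, 38, 49, 66, 41, 57]        # different construct (discriminant)

convergent = pearson(new_reading_measure, established_reading_test)
discriminant = pearson(new_reading_measure, math_fluency_probe)
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
# If the new measure behaves as its interpretation claims, the convergent
# correlation should be clearly higher than the discriminant one.
```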

Good decisions are based on data and evidence, but inevitably will rely on many implicit assumptions. Validity is about making these assumptions explicit and justifiable in the decision making process.

“The principles of validity apply not just to interpretive and action inferences derived from test scores as ordinarily conceived, but also to inferences based on any means of observing or documenting consistent behaviors or attributes. . . . Hence, the principles of validity apply to all assessments . . .”(Messick, ibid, p.741).

Reference – Messick, S. (1995). Validity of Psychological Assessments: Validation of Inferences From Persons’ Responses and Performances as Scientific Inquiry Into Score Meaning, American Psychologist, 50, 741-749.

A Dialogical Understanding of User-Centered Design

The Anomalogue Blog inspired me to think when it said: “this is what brand strategy wants to become: a philosophy of an organization which enables it to function according to a particular intellectual and artistic taste”.
I believe a good brand strategy is user-centric in that it engages the user as an involved participant in the brand. Everyone today wants to channel the mojo of Apple, maybe even Apple itself now that Steve’s gone. I think the core of Apple was that it was out to change the world through technology, but unlike IBM, we were invited as participants in that change through using Apple’s technology. IBM wants to change the world by what its experts do to us, but Apple changes the world with our participation.

Yes, as Anomalogue says, the backstory of strategy design can be understood in terms of a philosophically deep pragmatism, and I understand that pragmatism as it is expressed by Wittgenstein and Bakhtin.

From the Wikipedia article on Wittgenstein:

. . . philosophical problems arise when language is forced from its proper home into a metaphysical environment, where all the familiar and necessary landmarks and contextual clues are removed. He describes this metaphysical environment as like being on frictionless ice: where . . . all philosophical problems can be solved without the muddying effects of everyday contexts; but where, precisely because of the lack of friction, language can in fact do no work at all. Wittgenstein argues that philosophers must leave the frictionless ice and return to the “rough ground” of ordinary language in use.

OK, as Ludwig anticipated, most of the world doesn’t get Wittgenstein. I believe one key is to understand the nature of this rough ground. Here’s where I look to Bakhtin.

“We must renounce our monological habits so that we might come to feel at home in the new (dialogic) artistic sphere which Dostoevsky discovered, so that we might orient ourselves in that incomparably more complex artistic model of the world which he created” (Bakhtin, 1984, p.272).

I think design operates in this artistic sphere of dialogue. This sphere is user-centric, but in a way that is dynamic, relational and chiasmic (multiply intertwined). From John Shotter (http://pubpages.unh.edu/~jds/Essex.htm):

“All real and integral understanding is actively responsive… And the speaker himself is oriented precisely toward such an actively responsive understanding. He does not expect passive understanding that, so to speak, only duplicates his or her own idea in someone else’s mind. Rather, he expects response, agreement, sympathy, objection, execution, and so forth… “

In this view, successful design does not try to capture the user, but invites the user to become chiasmically intertwined with the organization and with other users in what could be called dialogic design. Apple’s invitation to participate as technology changes the world is one example of dialogic design in strategy. Where are other examples of this type of design?

In science I look to Messick’s understanding of assessment validity (judgments of the truthfulness of empirical observations). He looks not only within the traditional boundaries of science through construct validity (judgments of consistency with theory, domain and prior empirical observations); he also considers categories outside of traditional science in judgments of the utility of assessment tools, the value implications of those tools and the social consequences that are secondary to the tools’ use.

In journalism I look at the move from reporting the facts to the new role of journalists as community builders, where people do not want only to be told the truth, but want to become active participants in building the world as well. One example is http://www.americasdemocrats.org. There they are using journalistic tools for the purpose of creating a politically active community where people’s voices can be expressed as they participate in political action.

Can Apple keep its mojo? Not by building pretty things; that’s Tiffany’s brand. Apple’s only brand is changing the world through technology and bringing us along to drive the change. Can Apple continue to change the world through us?

How the 20th Century Rocked Our Foundations

From Danny Quah (Professor of Economics and Kuwait Professor at the London School of Economics and Political Science):

At (the Penang Free School) I’d excelled in mathematics and science, but that is now only a small part of what I need to do to be a productive contributing member of the community. What matters more instead? A good sense of what is artistically compelling and linguistically convincing. A political awareness of what ought to matter to people in international society. Articulateness in writing and speaking, and an ability to debate effectively. Physical acuity and a feeling of confidence and security in my own skin.

The Limitations of Mechanistic Thinking

Though we may not be consciously aware of the models implied by our thinking, much of the language that forms our conceptual toolbox is still founded in 19th-century Enlightenment concepts of mechanistic materialism. This is true even though science in some cases no longer supports some of these outdated common ways of speaking and thinking. This reductionistic, rational foundation for understanding is quite different from Bakhtin’s dialogic approach that opened my last post. Any first step in moving beyond this limited way of thinking and speaking must begin with better language rooted in a new dialogical paradigm. Of course the materialistic, mechanistic models are still very important, but our language should help us see where these models are appropriate and where they are perpetuating limitations that are holding us back. As Danny Quah implies above, just when science is taking on an even bigger societal role, the role of the scientist is being transformed.

W. Barnett Pearce, in Thinking about Systems and Thinking Systemically, pointed out that Einstein, Gödel, Wittgenstein and Whitehead all began to expose the anomalies of mechanistic materialism early in the 20th century:

  • Einstein’s thought helped open the world of quantum mechanics, demonstrating the limitations of Newtonian mechanics.
  • Wittgenstein, in his Tractatus Logico-Philosophicus, wrote one of the clearest statements of logic and language. He was always against dogmatism and the presuppositions of philosophy, and he understood that much of the real meaning we need is not to be found in logical premises, but in how people act in everyday life. Ludwig Wittgenstein
  • Gödel also found that a sufficiently powerful logical system cannot be both consistent and complete. Kurt Gödel
  • Whitehead first sought a complete logical foundation for mathematics, only later, inspired by quantum science, to come to view his prior project in logic as wrongheaded. He moved on to develop process philosophy: to understand the world’s foundation as dynamic and the world as an ongoing process. Science gives very accurate snapshots of that process, but in many situations we need to understand the process itself.

The assumption of scientific materialism is effective in many contexts, says Whitehead, only because it directs our attention to a certain class of problems that lend themselves to analysis within this framework. However, scientific materialism is less successful when addressing issues of teleology and when trying to develop a comprehensive, integrated picture of the universe as a whole. Alfred North Whitehead

Pearce reached the conclusion that:

. . . If the task is not so much to see how well our knowledge fits the Enlightenment criteria as to figure out what are the appropriate criteria for our knowledge, then we can move on with confidence . . . we should be less concerned about the hypotheses and propositions that we can assert than our abilities to enter into a wide variety of systems (or aggregates, or not-so-well-formed systems) and act effectively. The emphasis might well be on what we can do rather than on what we know – that is, on our ability to think systemically in the contexts in which we find ourselves.

Can Design Thinking Be a New Way of Productively Talking?

In many ways I believe that good design thinking begins with an acknowledgement of these systemic anomalies, but I also think it needs to develop better foundations and conceptual tools so we can truly move beyond them. The shallowness of our common rational language often stands in bleak contrast to the depth of our experiential understanding, and nowhere does this stand out more than in design thinking. Hence we get phrases like “playfulness” and “out of the box”, all of which exist over there inside “innovation laboratories”. The challenge, both in working in a design way and in communicating this type of work to others, can be found in the inadequacies of these common ways of talking, thinking and acting, all founded in the language of the prominent techno-rational paradigms of the 19th century. When we speak of “playing” at work, we are straining against a common understanding of work that is not helpful for new ways of conducting our daily economic activities. In that understanding, work is linear and object-oriented within a specified process. Yet the vast majority of today’s jobs can only be accomplished successfully if we act dynamically, relationally and cooperatively, with an eye that seeks innovation.

These needs are all dialogical needs. Not only are most organizations struggling to understand and support this new way of working, our very language steers us in the wrong direction, often without our conscious awareness of the contradictions that are created. If we need to think out of the box, what is the purpose of the box? Where do we find the boundaries of that box, and can we feel really comfortable when we or our coworkers leave it? How can we be playful at work when work and play have opposite meanings? Do we need playing boxes and working boxes? Can we really go between them, and how do we derive our thinking boxes in the first place? Are we not straining against the limitations of language conventions that are foreign to design thinking?

Boundary Crossing as a Foundation of Design Thinking

Innovation often depends on serendipity as we dialogue with an entire world of people and ideas. A large part of an organizational innovation lab might be termed a serendipity lab, but of course serendipity cannot be reliably found in the lab. Even the idea of a laboratory is in some ways a reflection of a 19th-century rational scientific mindset. Design thinking requires working in environments and cultures that are multidisciplinary, multi-purposed and dialogic, that is, dynamic, relational and engaged, especially with others who might be alien to our own way of thinking. It requires a dialogic exposure that goes well beyond the diversity of any one organization, that crosses all organizational boundaries. To a great extent it is living on the edge and dialoguing with others living on their own edges. How to do this will be the topic of my next post.