#cck11 – Where Should We Find Knowledge Production: Disciplinary Peer Review or Networks of Practice?

I recently reviewed an article by Susu Nousala (2010) that I thought was related to cck11: "Improving the peer review process: an examination of commonalities between scholarly societies and knowledge networks" (electronic copy available at http://ssrn.com/abstract=1762876).

Discussing peer review systems in general, the author asks:

(W)hat other ways may be used or incorporated to better serve emergent interdisciplinary and/or hybrid disciplines, creating a more holistic community based process to improve overall review outcomes. . . . Over time the empirical approach to peer review has led critics to describe the whole system as generally allowing conventional work to succeed whilst discouraging innovative thinking.

I would add some things that I feel are not specifically addressed in the article:

One, for scholars interested in research on practice, the peer review process tends to be organized by disciplinary interests, for people who intend to advance within a disciplinary hierarchy.  Research on practice tends to be a primary location for interdisciplinary and hybrid research, while peer review often subjects research to disciplinary requirements that are not about furthering practice.  My basic point is that furthering practice is not the same as furthering disciplinary knowledge.  This is a potential place where innovation in interdisciplinary research is hampered. Disciplinary requirements can do as much to hurt knowledge development as they can to encourage it.  As pointed out by Wenger (1998) and Nousala, organizations are made up of many overlapping communities of practice.  There is a case to be made that these communities, not disciplinary organizations, are the natural place for knowledge development and the negotiation of meaning among peers.

Second, the peer review process is about individuals reviewing other individuals' efforts in finished form.  Since knowledge can be seen as existing and developing within communities of practice, it is the community as a network where ideas combine and meaning is negotiated.  Peer reviewing individual papers at best obscures this knowledge creation process.  At worst, it discourages the dialogue that is necessary to negotiate and synthesize new forms of knowledge and practice, thereby needlessly restricting knowledge development.  The author states:

In more recent times “peer review has become a powerful social system” [9] which has multiple layers of knowledge networks linking and supporting its members within these communities of practice.

In my mind, peer review processes do not acknowledge most knowledge networks, and they are restrictive in that the first requirement of knowledge is acknowledgement.  Acknowledgement should be a community or network process, not the purview of a few individuals.

Conclusion

The author concludes:

Without the foundation of sustainable practice and processes, the build up of the internal knowledge networks will not occur. Instead, there will only be information systems and management, which do not function in the same way and can not take the place of tacit knowledge networks.

In my view, at least a portion of the peer review process should occur within network processes.  Supported by the development of Open Educational Resources and students' personal learning environments, learning is becoming network-centric.  It is time to acknowledge that knowledge production is also network-centric, and time to move scholarly activity into open network processes.  This would acknowledge the importance of networks in knowledge development, enable the integration of digital networks to improve productivity in knowledge production, enable practitioner communities to participate in scholarly processes, and align scholarly processes with current theories of learning.

Action Analytics: Formative Assessment with an Evidentiary Base

I found Linda Baer's interview with George Siemens on Action Analytics (AA) interesting because of what I see as a natural synthesis with evidence-based practice.  I see the relationship in the way Linda seems to define AA in three steps:

  1. Identify from the research base what is needed to improve student performance
  2. Develop appropriate measures to generate needed data
  3. Use the data and data technology to guide teacher action while students can still be helped.

I find it similar to the idea of formative assessment, but with the addition of an evidentiary component.  Formative assessment is about developing feedback during the learning process, in contrast to summative assessment, which occurs after learning. Summative assessment has only a post-hoc pedagogical purpose, while formative assessment is an integral part of everyday pedagogy.  The major difference between formative assessment and AA is that Linda specifies a place for evidence in the process.

I believe that AA can be relevant beyond the field of education, both as a general methodology for practice and as a way to combine evidence-based practice with the growing field of analytics.  Analytics are most productive when they are integrated into feedback loops in a formative way, and they will be even better when research-based evidence informs the design of those feedback loops and the development of the measures that generate feedback data.  I expect integration to be tricky; it will likely require a robust systems approach.
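To make the three steps concrete, here is a minimal sketch of how such a feedback loop might be wired together. Everything in it (the scores, the thresholds, the flagging rule) is a hypothetical illustration of the pattern, not Linda's actual model:

```python
# A minimal AA-style feedback loop sketch. All data, names, and
# thresholds are invented for illustration.

# Step 2: measures generate data (weekly formative quiz scores per student).
quiz_scores = {
    "student_a": [82, 78, 75],
    "student_b": [65, 58, 52],
    "student_c": [90, 88, 91],
}

# Step 1: thresholds assumed to come from the research base.
RISK_THRESHOLD = 70     # assumed research-derived cut score
DECLINE_THRESHOLD = -5  # assumed meaningful downward trend

def needs_intervention(scores):
    """Step 3: flag a student while there is still time to act."""
    latest = scores[-1]
    trend = scores[-1] - scores[0]
    return latest < RISK_THRESHOLD or trend < DECLINE_THRESHOLD

at_risk = [s for s, scores in quiz_scores.items() if needs_intervention(scores)]
print(at_risk)  # -> ['student_a', 'student_b']
```

The point of the sketch is only the shape of the loop: research informs the thresholds, measures generate the data, and the data drives action before the term is over rather than after.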

Successful Practice Requires Science and Aesthetics: Trusting in Data and Beauty

In Praise of Data and Science

MIT's Technology Review posted the article Trusting Data, Not Intuition.  The primary idea, which comes from Ronny Kohavi of Microsoft (formerly of Amazon), is to use controlled experiments to test ideas.  The article can be summarized as follows:

(W)hen ideas people thought would succeed are evaluated through controlled experiments, less than 50 percent actually work out. . . . use data to evaluate an idea rather than relying on . . . intuition.  . . .  but most businesses aren’t using these principles.  . . .What’s important, Kohavi says, is to test ideas quickly, allowing resources to go to the projects that are the most helpful.  . . . “The experimentation platform is responsible for telling you your baby is really ugly,” Kohavi jokes. While that can be a difficult truth to confront, he adds, the benefit to business—and also to employees responsible for coming up with and implementing ideas—is enormous.

This article further supports my thesis that evidence-based practice, analytics, measurement, and practical experimental methodology are closely related, mutually supportive, and form a natural synthesis.
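To illustrate the kind of controlled-experiment evaluation Kohavi describes, here is a minimal sketch of a two-proportion z-test comparing a control variant against a treatment. The counts are invented, and the choice of test is my assumption rather than anything specified in the article:

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Compare conversion rates from a control (A) and a treatment (B)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 48/1000 conversions for A, 63/1000 for B.
lift, z, p = two_proportion_z_test(48, 1000, 63, 1000)
print(f"lift={lift:.3f}, z={z:.2f}, p={p:.3f}")  # not yet convincing evidence
```

Fittingly for Kohavi's "less than 50 percent actually work out", this hypothetical lift does not reach conventional significance: the data, not the intuition, gets the last word.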

In Praise of Aesthetics

I do believe that, while trusting science is an important idea, that trust should be tempered because science is a tool for decision-making and acting, not a general method for living.  A successful life of practice is a balance between the empirical and the aesthetic.  You could say that aesthetics, looking at life emotionally and holistically, is the real foundation of our experience and how we live life.  Within that frame, it is helpful to step back reflexively and consider the use of empirical tools to benefit our experience, but without denying our aesthetic roots.  Wittgenstein wrote on this (from the Stanford Encyclopedia of Philosophy article on Wittgenstein's Aesthetics):

“The existence of the experimental method makes us think we have the means of solving the problems which trouble us; though problem and method pass one another by” (Wittgenstein 1958, II, iv, 232).

For Wittgenstein complexity, and not reduction to unitary essence, is the route to conceptual clarification. Reduction to a simplified model, by contrast, yields only the illusion of clarification in the form of conceptual incarceration (“a picture held us captive”).

What I want is access to the tools of science and the wisdom to know when to use them reflexively.  What I'm against is:

the naturalizing of aesthetics—(which) falsifies the genuine complexities of aesthetic psychology through a methodologically enforced reduction to one narrow and unitary conception of aesthetic engagement.

Avoiding Naive Operationalism: More on Lee Cronbach and Improving Analytics

Introduction

Consider again Cronbach and Meehl's (1955) quote from my last post:

We do believe that it is imperative that psychologists make a place for (construct validity) in their methodological thinking, so that its rationale, its scientific legitimacy, and its dangers may become explicit and familiar. This would be preferable to the widespread current tendency to engage in what actually amounts to construct validation research and use of constructs in practical testing, while talking an “operational” methodology which, if adopted, would force research into a mold it does not fit.  (Emphasis added)

What was widespread in 1955 has not substantially changed today.  Construct measures are routinely developed without regard to their construct or consequential validity, to the detriment of our practices.  I will name this state naive operationalism: measuring constructs with what amounts to an operational methodology.  I will also show why it is a problem.

Operational Methodology: Its Origins as a Philosophical Concept

What do Cronbach and Meehl mean by an operational methodology?  Early in my psychological studies I heard intelligence defined as "that which is measured by an intelligence test".  That is an example of operationalism (or operationism). Originally conceived by the physicist Percy Bridgman, operationalism holds that the meaning of a term is wholly defined by its method of measurement.  It became popular as a way to replace metaphysical terms (e.g., desire or anger) with a radically empirical definition.  It was briefly adopted by the logical positivist school of philosophy because of its similarity to the verification theory of meaning, and it remained popular for a longer period in psychology and the social sciences.  Neither use stood up to scrutiny, as noted in Mark Bickhard's paper:

Positivism failed, and it lies behind many of the reasons that operationalism is so pernicious: the radical empiricism of operationalism makes it difficult to understand how science does, in fact, involve theoretical and metaphysical assumptions, and must involve them, and thereby makes it difficult to think about and to critique those assumptions.

Not only does the creation of any measurement contain many underlying assumptions, the meaning of any measurement is also a by-product of the uses to which the measurement is put.  The heart of validity theory in the work of Cronbach (and also Samuel Messick) lies in analyzing measurement assumptions and measurement uses through the concepts of construct and consequential validity.  Modern validity theory stands opposed to operationalism.

Operational Definition as a Pragmatic Psychometric Concept

Specifying an operational definition of a measure is operationalism backwards.  Our measurements operationalize how we are defining a term, not in the abstract, but in actual practice.  When we implement a measurement in practice, that measurement effectively becomes the construct definition in any process that involves it.  If the process contains multiple measures, it is only a partial definition; if it is the sole measure, it also becomes the sole construct definition.  Any measure serves as an operational definition of the measured construct in practice, but we do not believe (as in operationalism) that the measure subsumes the full meaning of the construct.  Our operational definition is no more than a partial definition, and that is why consequential and construct validity are needed in our methodological thinking.  Validity research tells us when our operational definitions are problematic and may indicate how to improve our measures.  Validity research studies the difference between our operational definitions and the construct being measured.
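As a minimal sketch of one narrow slice of such validity research (all scores below are invented for illustration), we can check how an operational measure correlates with an alternative measure of the same construct (convergent evidence) and with a valued outcome (consequential evidence):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: one construct measured two ways, plus an outcome.
operational_measure = [55, 60, 72, 68, 80, 75]
alternative_measure = [50, 63, 70, 71, 78, 72]    # convergent evidence
valued_outcome = [2.1, 2.8, 3.0, 3.2, 3.6, 3.1]   # consequential evidence

print(pearson_r(operational_measure, alternative_measure))
print(pearson_r(operational_measure, valued_outcome))
```

A couple of correlations are of course nowhere near a full validation program; the point is only that the gap between an operational definition and its construct can be investigated empirically rather than assumed away.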

Naive Operationalism

For most of us, operationalization outside the larger issue of a research question and conceptual framework is just not very interesting.

I could not disagree more! Excluding validity from our methodological thinking means that our operationalized processes will result in what I call naive operationalism.  If we devise and implement measures in practice without regard for their validity, we will fail to understand their underlying assumptions and will be unable to address any validity problems.  In effect, this is just like philosophical operationalism and sets us up for the same problems. Let's consider a concrete example to see how it can become a problem.

An Example of Naive Operationalism

Richard Nantel and Andy Porter both suggest that we do away with Performance Measurement, which is considered "a Complete Waste of Time".  These are the reasons given for scrapping performance measurement:
  1. Short-term or semiannual performance reviews prevent big-picture thinking, long-term risk taking, and innovation. We want employees to fail early and often.
  2. Performance systems encourage less frequent feedback and interfere with real-time learning.
  3. Compensation and reward systems are based on faulty incentive premises and undermine intrinsic motivation.
  4. There's no evidence that performance rating systems improve performance.
Consider each reason in turn:
  1. This critique is advocating for a different set of constructs.  True, the constructs they imply may not be common in most performance measurement systems, but there is no reason to stay with standard constructs if they are not a good fit.
  2. There is no reason why formative assessments like action analytics and other more appropriate feedback structures could not be part of a performance improvement system.
  3. This is another instance where it appears that the wrong constructs, based on out-of-date motivational theories, are being measured.  They are the wrong constructs and therefore the wrong measures.
  4. The consequences of any measurement system are the most important question to ask.  Anyone who doesn't ask this question should not be managing measurement processes.

Conclusion

What is the bottom line?  Nothing Richard or Andy point out makes the concept of performance measurement wrong.  The measurement systems they describe are guilty of naive operationalism: they treat a specific measure of performance as if it were the sole operational definition needed, and this is true even if they are unaware of what they are doing.  No!  We should assess the validity of any measurement system and adjust it according to an integrated view of validity within an appropriate theoretical and propositional network, as advocated by Cronbach and Meehl.  Measurement systems of any kind should be based on construct and consequential validity, not on an operational methodology, whether philosophical or naive.

#LAK11 – Validity is the Only Guardian Angel of Measurement (Geekish)

David Jones has posted about the general lament over high-stakes testing and asks: "what's the alternative?"  You could rephrase this to ask, does measurement help us or hurt us?  Not only has he piqued my interest to think more along these lines, but I think the question is also relevant to data analytics, LAK11, and anyplace where measurement is used.  So . . . dive in I will!

David cites the association of the testing movement with globalization and managerialization, but I also believe that analytics, appropriately applied, can benefit education in pragmatic, everyday ways.  He also quotes Goodhart's Law, named for the British economist who spoke on the corruptibility of policy measurement.  I like Donald Campbell's similar law even better for this situation because Campbell was a psychometrician and speaks in testing language.  He states:

The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.

I believe that the problem being discussed is, at its heart, a validity problem.  High-stakes testing is not appropriately bearing the high burden assigned to it.  It does not meet the criteria of consequential validity; it does not produce better long-term educational outcomes.  Cronbach and Meehl (1955) explained why validity is important for situations like this one.

We do believe that it is imperative that psychologists make a place for (construct validity) in their methodological thinking, so that its rationale, its scientific legitimacy, and its dangers may become explicit and familiar. This would be preferable to the widespread current tendency to engage in what actually amounts to construct validation research and use of constructs in practical testing, while talking an “operational” methodology which, if adopted, would force research into a mold it does not fit.

This is what I believe they are saying (also taking into account Cronbach's and Messick's later developments of the concept of validity).  We measure constructs, not things in themselves.  A construct is defined by the network of associations or propositions within which it occurs (see Cronbach and Meehl, Recapitulation section, #1).  Validity is the rational and empirical investigation of the association between the operational definitions of our constructs (that is, our tests) and this network of associations.  Without this investigation, what we are measuring is operationally defined by our tests, but what that is remains undefined in any meaningful way.  We can't teach constructs or measure them unless we thoroughly understand them, at least at a theoretical level.  Operationalism has been rejected, whether it was founded in positivist philosophy or in common-sense ignorance.

Most people I've heard advocating for standardized, standards-based, high-stakes testing do so based on a principle of school accountability, not because such testing has been unequivocally demonstrated to improve schools.  It's a logical argument, but it seems to lack empirical support.  Teaching to the test is regarded as inappropriate, but if the test is the sole standard of accountability, then it is an operational definition of what we are to be teaching.  In that case, anything other than teaching to the test seems illogical.

So let’s dig deeper.  I think there are nested psychometric problems within this testing movement.  Campbell’s Law may overtake any attempt to use measurement in policy in a large sense, but I am going to start with how a measurement regime might be designed better.

1. What measurement problems exist with current tests?

Teaching to the test as it is commonly practiced is not good because it is doubtful that the tests are really measuring the right things.  These high-stakes standardized tests measure many unintended things (technically referred to in validity theory as construct-irrelevant variance). In many ways, our measures are based (operationalized) more on tradition and common sense than on empirically sound psychometrics.  This is what Cronbach warned of: tests that don't match their constructs.  To improve tests we need to go beyond common sense and clarify the constructs we want our students to exhibit.  Why don't we do this now?  Most likely it is too difficult for policymakers to get their heads around, but there is a possible second reason: it would reduce the validity of tests as their validity is measured by positivist methodology.  Validity is an overall judgement, but positivists don't like fuzzy things like judgements.  Tests may need to be reduced in validity in some areas in order to gain validity overall.  Many people guiding testing procedures likely hold a narrow view of validity, as opposed to the broader view espoused by Messick or Cronbach.  This leads to other issues.

2. Standards Do Not Address Many Important Educational Outcomes

The curriculum, as it is reflected in standards, is not always focused on the most important knowledge and skills.  I think it reflects three things: a kitchen-sink approach (including the requests of every constituency), a focus on standards that are easily measured by multiple-choice or similar types of questions, and expert opinion.  The abilities to creatively argue points of view, to write with persuasion and conviction, to read, interpret, discuss and develop subtle points of meaning among peers, and to track the progression and maturation of these skills over time are important things that are not well measured by current high-stakes tests.  A kitchen-sink approach does not allow teachers to focus on depth.  Assessments like portfolios contain more information and a broader validity base, but are seen as less reliable (i.e., it's possible to cheat or include personal bias).  Expert opinion is a type of content validity and is considered the weakest form of validity evidence.  With the development of high-stakes testing, we are in a better position to measure the validity of curriculum standards and to adjust standards accordingly, but I see no one doing this.  Maybe there is some research on the ability of high school students to function as college freshmen, but this outcome is inconsequential in a long view of one's life.  Tests should be held accountable for consequential validity and should empirically show that they result in improved lives, not just in parroting facts or helping teachers of college freshmen.  It is not just teachers who should be held accountable; it is also test and standards developers.

3. Post-positivist Psychometrics

To be sure, there are trade-offs in any form of measurement.  Sometimes improving validity in one area weakens validity in another.  Validity never reaches 100% in any situation.  However, because tests are mandated by law, I believe current validity questions favor views of what will be held valid in a court of law.  Law tends to be conservative, and conservative psychometrics are based in philosophical positivism.  I suspect that many people making policy decisions have a poor understanding of what I consider to be sound psychometrics: psychometrics consistent with post-positivist philosophy.  Let me be clear, positivist psychometrics are not wrong, just incomplete and limited.  This was the insight of Wittgenstein.  Positivism looks at a small slice of life while ignoring the rest of the pie.  Wittgenstein said that if we want to understand language, we should look at how people are using language.  Similarly, Samuel Messick said that if you want to understand a test, follow the outcomes.  How are people using the test, and what are the results of what they are doing?  This is the most important test of validity.

To sum up

There are many possible things that could be done in answer to David's question.  I have focused on how you might improve testing processes. Do not focus on tradition and traditional technique, but on standards and testing practices that create authentic value (what Umair Haque would call thick value) for students who will live out their lives in the 21st century, a century shaping up to be quite different from the last.  Testing could be part of the equation, but let's hold teachers and schools accountable for the value they create as measured in improved lives, not in some questionably valid test score.

#cck11 – Adding to a New Model of Education: John Seely Brown’s New Book

What is learning?  What does it mean to understand, and what does it mean to be an educated person?  You can give a definition, but your answer will be incomplete without going beyond a simple definition to include a specific model of learning.  Adding to a new model seems to be what Brown and Thomas are doing in A New Culture of Learning, as presented in John Hagel's blog post.  (I'm still waiting for a copy of the book; possibly more to follow?)  John says:

We all have the uncomfortable feeling that the education we received is serving us less and less well. The reassuring notion that the concentrated dose of education in our younger years would serve us well for the rest of our lives appears increasingly suspect.  . . . What if there was a different model?  . . . (A) fundamentally different approach to acquiring knowledge.

The meaning of the differentiation this book proposes came to me when reading a critique of social media learning by ryan2point0.  If you think about technological changes in education while guided by old models, they will look much different than when they are seen through the prism of Brown and Thomas' model.  We need a new model of learning that embraces tension, imagination and play.  John Hagel adds these four claims that he draws from the book:

1. Tacit knowledge is becoming more important when compared to explicit knowledge.

(T)acit knowledge cannot be taught – it can only be learned, but only if the environment is designed to do that. In a stable world, focusing on explicit knowledge perhaps made more sense, but in a more rapidly changing world, tacit knowledge becomes increasingly central to our ability to thrive.

2. Questions are more important than answers.  (Hagel, quoting from the book)

(L)earning is transformed from a discrete, limited process – ask a question, find an answer – to a continuous one. Every answer serves as a starting point, not an end point. It invites us to ask more and better questions.

3. Learning is a social process

Collectives provide the context for learning and the learning process involves a complex interplay between the personal and the collective.

4. Brown and Thomas’ new model of learning is derived from imagination and play.

Imagination is about seeing possibilities and generating the questions that frame the learning process. Play is about the engagement and experimentation that drives the learning process.  Both of these become even more powerful when they move beyond the individual and drive collectives that can learn from each other.

This book seems to be devising a way for educators to think about learning processes when guided by the book The Power of Pull.  It is much different from traditional educational processes and the organization of most educational institutions.  I think there has always been a pedagogical distinction between passing on received stable knowledge and generating new knowledge where we don't necessarily know the right answer.  But most education is about learning what the teacher already knows.  In traditional education, it is only after you have reached the pinnacle of learning that you are deemed ready to venture out to find new stuff.  What we are seeing more and more is that this is a false distinction.  There may be some knowledge that we want to pass on in a stable form, but there is also room at all levels of education to explore new knowledge.  This is not just a constructionist pedagogical trick; there really is room for new understandings at all levels.  Discovery learning is really about finding new knowledge, not about finding knowledge and then testing the student to see if he really found the correct knowledge.  The world of knowledge is very big indeed!

This looks like a good book.  I’m sure there will be more thoughts to follow.

#cck11 – Equipotency: A Potentially Important Concept for Connectivism?

I would like to contrast some recent interesting posts (prompted by the CCK11 MOOC) with the peer-to-peer concept of equipotency, which I will define as: an open and equal capability to participate in diverse social network activity.  The theoretical / memetic foundation of equipotency is the emergence of open peer-to-peer culture, which I think can also be related to the idea of knowledge flows as defined by Hagel, Brown and Davison (HB&D) in The Power of Pull.

Stephen Downes notes that we need a precise vocabulary to analyze and talk about social networks.

Rather than use prejudicial and imprecise vocabulary, . . . we can respond to it meaningfully, with clarity and precision.  . . . the point is that we can use network terminology to explain much more clearly complex phenomena such as instruction, communities and interaction.

I believe we need much more than vocabulary; we also need a framework, a theoretical account, to help us distinguish between information and noise and to point out how things are changing over time.  This is the relevance of connectivism as a theory. What is the connectivist framework?  I like Jennie McKensie's summary in a Connectivism LinkedIn group conversation, where she says:

Understanding according to George Siemens is, “Depth, Diversity, Frequency, Integration and the strength of your Ties”.

But Paul McKensie's reply was also interesting:

Knowledge is distributed with a decreasing half-life – why do we insist on cementing the same blocks of content together.

Traditional education, focusing on content and a specified curriculum, is, I think, an example of HB&D's push learning.  It can only really be successful where knowledge is stable, changing only slowly.  When we face situations where knowledge behaves more like Paul's decreasing half-life metaphor, we need an openness to change that focuses more on things like equipotency.  Equipotency may become important to a connectivist framework.  Concepts such as tie strength may not focus on the most salient aspects of learning relationships.

Sui Fai John Mak further expands on this discussion through discourse analysis and asks whether discourse and power relationships are important to the social web.  Quoting Rita Kop, he says:

(T)he notion of  ‘supernode’ predictably emerges when some contributors are recognized by a number of others as having particular relevance to, or knowledge of a problem. There seems to be a natural tendency within the ‘perfectly’ democratic network to organize itself, over time, in a hierarchical system composed of leaders and followers.

In her dissertation Rita also said:

As research has shown, the open WWW has a hierarchical structure and is not the power free environment that some would like us to believe (Barabasi, 2003; Mejias, 2009) (pp. 267-268)

HB&D's point is that it is no longer possible to identify what will be important in order to push it out to the network.  Digital networks can be seen as a flow of knowledge, and the point is to be open and able to draw on this flow in productive ways.  As I commented on John's blog: many people are still searching for expertise in their network participation, teachers or knowledgeable others (in Rita's terminology) who can push the knowledge they need to where they are at the time of need, but participation in peer-to-peer culture recognizes that value can arise from any node and cannot be predicted in advance.  Supernodes, if they are truly valuable, may represent people who are not experts or knowledgeable others in content knowledge, but who are most able to recognize value in the knowledge flowing around them.
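Rita's supernode observation is easy to see in even a toy network analysis. The sketch below counts node degree over invented interaction edges; a real analysis would use richer measures than raw degree, so treat this only as an illustration of how hubs surface:

```python
from collections import Counter

# Hypothetical interaction edges (who responded to whom) in a learning network.
edges = [("ana", "ben"), ("ana", "cai"), ("ben", "cai"),
         ("dee", "ana"), ("eli", "ana"), ("eli", "ben")]

degree = Counter()
for src, dst in edges:
    degree[src] += 1
    degree[dst] += 1

# Nodes with degree well above the mean are candidate "supernodes".
mean_degree = sum(degree.values()) / len(degree)
supernodes = [node for node, d in degree.items() if d > 1.5 * mean_degree]
print(supernodes)  # -> ['ana']
```

Even in this tiny invented network a hierarchy of sorts appears, which is Rita's point; the equipotency argument is that the next valuable contribution may still come from any node, not just the hub.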

Open peer-to-peer culture is a way to understand the creation of value and participation in web-based social networks.  Peer-to-peer culture, according to Wikipedia, is described and defined as:

  • Relationally and structurally dynamic,
  • based on the assumed equipotency of its participants,
  • organized through the free cooperation of equals

Task-wise, it can be thought of as:

  • the performance of a common task (peer production),
  • for the creation of a common good (peer property),
  • and with forms of decision-making and autonomy that are widely distributed throughout the network (peer governance).

Peer-to-peer culture may describe a newly evolving type of community that is relevant to learning, especially where knowledge is in development.  It is likely important for collaboration and Collaborative Inquiry, and it may warrant a prominent place in the connectivist framework.  As I see it, equipotency is an important key to peer network organization.  Strong and weak ties, expertise, authority, and other forms of discourse-based power can exist within and can influence network activities, but as with Hagel, Brown and Davison's emphasis on serendipity, value creation cannot be easily predicted and does not always emanate from expertise or strong network ties. Networks must be open to the unexpected contribution of any node.  This is the basis of equipotency and peer-to-peer value creation networks.

#cck11 – Connectivism is a Retroactive Theory to Previous Learning Theories

Mike Dillon asks:

“(H)ow connectivism fits into the scheme of how we learn and how we educate” and states “there is obviously the debate about whether or not it can stand as an independent learning theory”.

I believe that most successful new theories are, as Mike says, retroactive, in that they arise to address what previous theories could not while also explaining the same phenomena those theories addressed. The problem with most learning theories is that the discipline is so conservative: people hang on to their perspectives and move on very slowly.  It is what Thomas Kuhn described when he noted that many paradigms change not because people change their minds, but because they retire.  A second reason things appear complicated is that the field does not move in a strictly linear fashion.  We still haven't seen the end of people reinterpreting John Dewey.

I find that connectivism most closely resembles the Vygotskian social-cultural school. Vygotsky addressed the inadequacies of behaviorism directly in his day (1930s Russia), and his introduction to Americans in the 1970s also served to address the limitations of early cognitivism and provided a more detailed functional view of aspects of social constructivism.  Vygotsky was a contemporary of John Dewey, and his thinking was similar in many ways. What I think Vygotsky did not address very well was the creation of new knowledge, and he also relied too much on mental representations in his thinking.  (Much of this criticism is also applicable to Dewey.)  I think much of connectivism was contained within Vygotsky's and Dewey's work, just under-developed or unacknowledged by these thinkers. This focus on new knowledge and on a non-representational view of cognition is where connectivism excels.  I usually think of connectivism mostly as a retroactive extension and update of Vygotsky, yet one sufficiently extensive that it warrants a place in its own right.