It’s Time for Change in Scientific Communication and Dissemination

Publishing Science on the Web is a post by John Wilbanks of the Common Knowledge blog, who participates in the discussion concerning the future of scientific dissemination in the digital age:

. . . science is already a wiki if you look at it a certain way. It’s just a really, really inefficient one – the incremental edits are made in papers instead of wikispace, . . .  And the papers are written in a highly specialized form of text that demonstrates the expertise of the writer in the relevant domain, but can form a language barrier to scientists outside the domain.  . . .  How can we get to enough technical standards so that this kind of science can be harvested, aggregated, and mashed up by people and machines into a higher level of discipline traversal? . . .  But the language barrier among scientists is preserved – indeed, made worse – by the lack of knowledge interoperability at the machine level. It’s the Tower of Babel made digital.

Wilbanks raises two really important issues in scientific communication and dissemination that are critical for technological progress and for evidence-based practice.  The first is the organization of scientific findings, which remain scattered across various journals instead of being consolidated in collaborative instruments like wikis.  Time is the real information problem today, and some form of wiki is the answer.  The second issue is knowledge interoperability.  Precise language is important in scientific communication, but I still get the feeling that current writing styles and vocabularies in many disciplines, when you look at their function, have more to do with politics than with communication.
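To make the interoperability point concrete, here is a minimal sketch of what a machine-readable scientific claim might look like.  The schema, field names, and the DOI are all hypothetical illustrations rather than any existing standard; the point is only that structured, shared vocabulary is what lets machines harvest and aggregate findings across disciplines:

```python
import json

# A hypothetical, minimal machine-readable record of a single scientific
# finding. None of these field names come from a real standard; they only
# illustrate what "knowledge interoperability" could mean in practice:
# shared vocabulary terms that let software from different disciplines
# harvest and aggregate claims without parsing domain-specific prose.
finding = {
    "claim": "Intervention X improves outcome Y in population Z",
    "discipline": "education",
    "evidence_type": "randomized_controlled_trial",
    "effect_size": {"measure": "cohens_d", "value": 0.42},
    "context": {"setting": "urban secondary schools", "n": 312},
    "source": "doi:10.1000/example",  # hypothetical DOI
}

# Because the record is structured data rather than specialized prose, a
# machine in another discipline can filter or aggregate it directly.
print(json.dumps(finding, indent=2))
```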

Thanks to George Siemens for the pointer.

Making Inferences about the Use of Artifacts in Practices

This post thinks through something I brought up in my last post: applying the concept of validity to practice.


I remember hearing in school that validity asked the question: “Does the test measure what it’s intended to measure?”  The problem with this approach is that it leads you in circles, both practically and epistemologically.  Messick changed this to a question that was, quite literally, more consequential: is there evidence that the use of the test brings about, or contributes to, the results you intended?


If you view a test or assessment as an artifact embedded in a practice, you can apply the same logic to any artifact that plays an active role in that practice.  In artifact creation, as in Holmstrom’s article, the logic of validity can serve as a guide.  Although there would still be an artistic element, it would not be random or unsupported.


There is an evidentiary aspect to this way of considering artifacts.  Validity is all about evidence: theoretical evidence, process evidence, empirical evidence, consequential evidence, generalizability evidence.  In fact, validity theory can be a way of accounting for evidence from all types of methodology.  Randomized controlled trials are still the best way of judging validity for certain types of research questions, but any method can contribute evidence.

Summary: Validity is a well-developed body of thought that can be applied to making inferential judgements about the evidence supporting the use of assessments or other active artifacts in the context of a specific practice.

Problem-Based Artifact Creation Process

This post extends the thinking in my last post. The concept of validity is as applicable to practices as it is to tests. When I develop an assessment and protocol, I think about validity from the get-go, not just after the fact. Similarly, when designing a process or practice, it is good to add a validity perspective. That is what I’ve done with Holmstrom’s problem-based artifact creation process: adding where content, theory, and data should be considered. Here is a prototype process map:

[Design Sci Map]

Bridging Science and Practice via a Science of the Artificial

Interesting article:

Bridging Practice and Theory: A Design Science Approach, by Jan Holmstrom, Mikko Ketokivi, & Ari-Pekka Hameri (Decision Sciences, 40(1), 65–87).

It’s really less about bridging theory and practice than it is about developing a science of the artificial à la Herb Simon.  My take on the topic: focus less on discovering the world (predicting) and more on creating the world by making artifacts the central focus, or unit of analysis, of research.  Make problem framing that leads to artifact fabrication a part of research.  You can think of this as a science of practice.  Think less about theories of how the mind works and more about artifacts that improve practice.  Theories are still important; for instance, you can’t judge practice validity without theory.  But think of artifacts as a bridge between theory and practice.  Hmm, maybe it is about bridging after all.

But it’s also about engineering.  Consider the prevalence of artifacts (broadly speaking) in our world and the way that artifacts can shape cognition and behavior; other disciplines are beginning to share concerns usually associated with engineering.  Education likely has more in common with engineering than it does with traditional psychology.

A sidebar here about validity.  Research, including a Science of the Artificial, is about answering questions.  The methodology you choose must match the research question.  In this conception of research you will likely ask a series of different questions, some of which will only become apparent as you go through the process.  You still have to change your methodology as your questions require.  This makes bias more possible; indeed, bias and objectivity play slightly different roles in this artificial science when you’re purposely creating rather than discovering.

Combining Evidence and Craft for Successful Practice: No False Dichotomies

Evidence-based (in all its various permutations) is a construct that needs to be carefully worked out.  If evidence-based practice were self-evident, we would have achieved it through the success and extension of operationalism*, but that wasn’t to be the case.  Participating in a practice requires evidence, craft, and experience combined in a way that is fraught with complexity, but improving practices of all kinds depends on meeting this complexity.

I recently came across two ideas from Wampold, Goodheart & Levant (Am Psychol. 2007 Sep;62(6):616-8) that go a long way toward clarifying this evidence-based construct.  Their first clarification is a definition of evidence; the second counters the false dichotomy of evidence vs. experience.

Evidence and Inference

Evidence is not data, nor is it truth.  Evidence can be thought of as inferences that flow from data.  . . . Data becomes evidence when they are considered with regard to the phenomena being studied, the model used to generate the data, previous knowledge, theory, the methodologies employed, and the human actors.

This is not a simple positivist conception of evidence; it reflects a complex multimodal aggregation.  In addition, I would add that the primary concern of practitioners is really the validity of the practices they are conducting.  The validity of a practice is supported by evidence, but it depends on the use of the practice in context.  We do not validate practice descriptions or practice methodologies, but rather the use of a practice in its local contexts, understood by reference to phenomena, models, knowledge, theories, et cetera.  I’ll have to look back at validity theory to see if I can get a clearer description of this idea.

Evidence and Experience

A second insight expressed by Wampold, Goodheart & Levant is the integrative nature of evidence and experience as they relate to practice: any opposition between experience and evidence is considered a false dichotomy.  The ability to use evidence is a component of practice expertise, including the ability to collect and draw inferences from local data through the lens of theory and empirical evidence, and the ability to adjust practices in response to new evidence.  It’s experience and evidence, and evidence as a part of experience.

Evidence and Craft

I find it somewhat serendipitous that I have been drawn into conversations involving both design management and evidence-based management, because I believe that the success of each depends on the other.  The positivist agenda of running the world by science is not tenable.  The world is too complex, and there are too many relevant or even distant variables, for a positivist program to be sustainable.  Science cannot do it all, but neither can we be successful without science, evidence, and data.  We need a bit of craft and a bit of evidence to engage in practice.  That may often include craft in the way evidence is used, and it may entail craft that is beyond evidence.  It just should not draw false dichotomies between evidence and craft.

* The goal of operationalism “was to eliminate the subjective mentalistic concepts . . . and to replace them with a more operationally meaningful account of human behavior” (Green 2001, 49).  “[T]he initial, quite radical operationalist ideas eventually came to serve as little more than a ‘reassurance fetish’ (Koch 1992, 275) for mainstream methodological practice” (Wikipedia; the references in that article are incomplete, but it seems trustworthy, as I’m familiar with Koch’s work).

Unemployment: The Human Equation

The Human Equation, by NY Times columnist Bob Herbert, argues that unemployment and underemployment are the real economic problem.  I think he’s onto something.  Economists are not likely to jump yet, since unemployment is usually a lagging indicator, but remember the jobless recovery after 2001?  Could it be even worse this time?  The distant past is not always a good indicator of the future when fundamentals are shifting.

Consider the service economy.  If agriculture employs 3% of the workforce and manufacturing is heading toward 20%, we’ll need about 77% of the population in some kind of service occupation.  That seems a bit much to sustain.  During the industrial revolution, agriculture did not shrink as fast as manufacturing has in the last 40 years.

The answer is not just throwing money at the problem.  We need to rethink the fundamentals of our capabilities and our imagination of what’s possible.  Society needs change (think of the changes brought by the industrial revolution) and we need it to happen pretty fast.  How about a change-management cabinet post?  Could we see the rise of totalitarianism again this century?  If people become desperate enough . . . ?

The Integration of Design and World: More on Design Thinking.

This post responds to Anne Burdick’s invitation concerning the presentation Design without Designers (found in the comments of my last post).  I will address the question: why would educational theory build on design concepts, and how do I see the relation between education and design?  I will look at three areas:

  • Erasing the distinction between art and science
  • Artifactual cultural aspects of psychology
  • The trans-disciplinary nature of ideas

Erasing the distinction between art and science

I see general changes in the practice of science along the following lines:

  • The critique of positivism (for promising more than methodology could ever deliver)
  • The critique of postmodernism (for fetishizing certainty; i.e., if positivism fails, then scientific judgement cannot be trusted at all), and
  • Greater acceptance of addressing real-world problems (where problems tend to be interdisciplinary and often involve mixed methods).

The result is that many of the walls and silos of science have been reduced, including the distinction between art and science.  For example, I often refer to judgements based on validity.  Although validity uses rational and empirical tools, building a body of evidence and achieving a combined judgement is more like telling a story.

Artifactual cultural aspects of psychology

The work of (or derived from) Vygotsky is popular in psychology and education.  It has also proved consistent with, and complementary to, the recent findings of the “brain sciences”.  While there are genetic and hardwired aspects of psychology, the structure of our minds can be said to reflect, to a great extent, the structure of the social and artifactual world we live in.  The design of the world is more than just a decorative environment for an autonomous mind; it has an impact on who we are, both in development and in how we interact with it in our ordinary lives.

Our delineation of the subject matter of psychology has to take account of discourses, significations, subjectivities, and positionings, for it is in these that psychological phenomena actually exist. (Harré and Gillet, 1994, The Discursive Mind and the Virtual Faculty)

The trans-disciplinary nature of ideas

Ideas never seem to respect the traditional academic disciplinary structure the way that methods and theories did during most of the 20th century.  In the mid-90s a graduate school mentor pointed out that you could read many books at that time and have no clue about the discipline of the author without reading the back cover.  Psychologists, educators, literary critics, philosophers, sociologists, and yes, designers: they all often seem to be speaking the same language about the same kinds of things.

In Conclusion

  • The distinction between art and science is dissolving.  Method is important, but it does not rule.  Achieving a scientific breakthrough is analogous to creating a work of art (even though it still uses rational and empirical tools).
  • The design of our world is not just decoration; it reflects who we are, and who we are reflects the design of the world.
  • Tools (artifacts, concepts, theories, etc.) are needed to act on the world.  Where these tools come from is less important than our ability to make use of them.

So, in the above ways, design and design thinking are everywhere.  Should designers be more present in my own thinking, as both a technical adjunct and a foundation of my thought and of the academic curriculum?  Yes, they should!  What do you think from a designer’s perspective?  How do the thinking of designers and current design curricula fit into the above ideas?

Cartesian Problems in Communicating about Designing and Design Thinking

Interesting article – Thinking About Design Thinking – by Fred Collopy, blogging for Fast Company.  Fred considers that, “As (Design Thinking) is a way of talking about what designers can contribute to areas beyond the domains in which they have traditionally worked, about how they can improve the tasks of structuring interactions, organizations, strategies and societies, it is a weak term,” because it makes a “distinction between thinking and acting.”

As Fred points out, Design Thinking is beset by the Cartesian mind–body problem, which is frequently rejected today.  One form of rejection is found in the idea that “thought” has its genesis in “action”: you learn to walk, and then you learn to think about where you want to go.  A similar idea (attributed to Bakhtin) is that Cartesian thinking unnecessarily divides being from becoming, where the abstractions of disembodied thought never fully capture either the actions of our lives or the moral aspects of those actions.

This is especially important for education, which often has it exactly backwards: trying to teach you how to think in order to go out into the world and act.  Education would be so much more valuable if there were no dichotomous walls (i.e., classroom/world, schooling/working, or even the idea that education is a four-year quest for certification instead of an ongoing quest for knowledge).

Future EBMgmt Research Ideas

  1. I will need to think more about an evidence evaluation framework and how Rousseau’s model might be enhanced by Messick’s model of validity, as discussed in my last post.
  2. Just as Messick said that validity is about test use, not tests in themselves, so evidence-based practice is about how evidence is used in practice, not about the evidence itself.  This needs to be spelled out.
  3. The practice–research gap – Research validity generally becomes greater the more context can be controlled and parsed out of studies.  In evidence-based practice, however, evidence must be related to context to be valid.  The more confident you are of research results, the less confident you can be that they will relate to the constellation of factors seen in real contexts.  I don’t know how you can get beyond this without some applied research that puts research syntheses to the test.
  4. Practice is most often cross- or interdisciplinary.  This impacts the last point, but it also means that each practice relates to many potential disciplines.  Accumulating the vast amounts of relevant data will be next to impossible in any practical manner.  We need a technological solution through some sort of Web 3.0 or metadata approach, as well as a technological way to compile data (see the sketch below).
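As a thought experiment on point 4, here is a small sketch of how shared metadata could make cross-disciplinary evidence compilable.  All of the record fields and tags are hypothetical inventions for illustration; the aggregation step is the point:

```python
from collections import defaultdict

# Hypothetical evidence records from different disciplines, each tagged
# with the practices it bears on. The fields and tags are illustrative
# only; the point is that shared metadata lets one query compile evidence
# across disciplinary silos.
records = [
    {"source": "psych study A", "discipline": "psychology",
     "practices": ["feedback", "assessment"]},
    {"source": "mgmt study B", "discipline": "management",
     "practices": ["feedback"]},
    {"source": "edu study C", "discipline": "education",
     "practices": ["assessment"]},
]

# Compile an index from practice to supporting evidence, regardless of
# the discipline the evidence came from.
by_practice = defaultdict(list)
for record in records:
    for practice in record["practices"]:
        by_practice[practice].append(record["source"])

print(by_practice["feedback"])  # ['psych study A', 'mgmt study B']
```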

Considering the Validity of Evidence

In my last post I looked at an evidence-based framework that included evidence evaluation.  Denise Rousseau from Carnegie Mellon has extended the ability to evaluate evidence with a new model (Full paper here) that includes these evaluation categories: Construct Validity, Internal Validity, Effect Size, Generalizability, Intervention Compliance, and Contextualization.  These categories correspond closely to the six categories of validity proposed by Messick (previously discussed here).

Rousseau Categories → Messick Categories
Construct Validity → Structural
Internal Validity → External (not a perfect match, but logically similar)
Effect Size → External
Generalizability → Generalizability
Intervention Compliance → Substantive
Contextualization → Content

This is not an exact match category by category, but the way in which evaluative evidence is categorized is very similar in approach and purpose.  What Rousseau leaves out is consequential validity, and she does not address content and substantive validity in full.
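To make that comparison concrete, here is a minimal sketch of how Rousseau’s six evaluation categories could be operationalized as a simple evidence-scoring checklist.  The 0–2 scoring scale and the helper method are my own hypothetical devices for illustration; nothing like this appears in her paper:

```python
from dataclasses import dataclass, fields

# A hypothetical checklist built from Rousseau's six evaluation
# categories. Scoring a study on each dimension (here, 0-2) is an
# illustrative device, not part of her model.
@dataclass
class EvidenceEvaluation:
    construct_validity: int
    internal_validity: int
    effect_size: int
    generalizability: int
    intervention_compliance: int
    contextualization: int

    def weakest_categories(self):
        """Return the categories most in need of further evidence."""
        scores = [(f.name, getattr(self, f.name)) for f in fields(self)]
        lowest = min(score for _, score in scores)
        return [name for name, score in scores if score == lowest]

study = EvidenceEvaluation(2, 2, 1, 0, 1, 0)
print(study.weakest_categories())  # ['generalizability', 'contextualization']
```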