Understanding Paul Meehl’s Metascience: Pragmatism for Empirical Social Science

Some recent involvement in LinkedIn conversations has led me to delve more into Paul Meehl’s work in the philosophy of science, or what he referred to as scientific metatheory.  As the book A Paul Meehl Reader notes, Paul’s essays were painstakingly written, and most readers do not so much read his work as mine it for insights over many years; so I suspect this will be a long-term project.

Here is the first nugget: progress in the soft sciences is difficult and painstaking, and much of the existing research may be flawed and found wanting.  Here are some reasons:

  1. Theory testing often involves derived auxiliary theories which, if not highly supported themselves, will add unknown noise to the data.  Often these auxiliary theories are not even spelled out or understood.
  2. Experimenter error, experimenter bias, or editorial bias is present more often than is generally acknowledged or even known or considered.
  3. Inadequate statistical power.  In general, much more power is needed.  Meehl thinks we should often seek statistical power around .9 in order to overcome unknown noise (error) in the data.
  4. Failure to seriously account for the crud factor (the possible effect of ambient correlational noise in the data).
  5. Unconsidered validity concerns.  The foundation of science is measurement, but the validity of measurement tools is often not taken seriously.  Experiments often measure things in new ways, even when they use well-studied instruments, and this requires its own validity analysis.
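Meehl’s power point (reason 3) can be made concrete with a quick back-of-the-envelope calculation.  The sketch below uses the standard normal approximation for a two-sample comparison; the effect size d = 0.3 is my own illustrative assumption (a "small" effect of the kind common in soft psychology), not a figure from Meehl:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate per-group sample size for a two-sample comparison,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Small effect (d = 0.3) at the conventional .8 power vs. Meehl's .9:
print(n_per_group(0.3, power=0.8))  # 175 per group
print(n_per_group(0.3, power=0.9))  # 234 per group
```

The jump from roughly 175 to 234 subjects per group shows why Meehl’s .9 recommendation is demanding in practice: for small effects, the extra insurance against crud and weak auxiliaries carries a real cost in sample size.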

What this means is that more methodological care is needed such as:

  1. Seeking predicted point values, which are stronger in terms of falsification and lend more verisimilitude than the often weak corroboration that comes from non-null significance testing.
  2. More power (e.g., .9) in hypothesis testing to protect against weak auxiliaries, unknown bias, and general crud.
  3. Understanding the difference between statistical significance and evidentiary support.  Observations are evaluated in terms of statistical hypotheses; these are a statistician’s concern with the probability of the observations.  But theories are evaluated by the accumulation of logical facts, and these are judged not in terms of probabilities but in terms of verisimilitude.
  4. Science should seek a more complete conceptual understanding of the phenomena under study.

I believe this last point is similar to Wittgenstein’s concern that in science, problem and method often pass one another by without interacting.  I think this concern is also tied to verisimilitude in theory.  Verisimilitude may be considered a fuzzy interpretive concept, but the problems uncovered by the Reproducibility Project show that the hard sciences are not as interpretation-free as is often supposed.  I’m also coming to the conclusion that it is in Meehl (and the like-minded Messick) that traditional empirical science and pragmatism can be brought together.  The idea is that a social constructivist approach must account for both the successes and the failures of empirical science if it is to move forward productively.  Meehl and Messick were not pragmatists, but I am saying that in dealing with the problems they saw in empirical science, a critical pragmatic approach can be envisioned.  Meehl, along with Wittgenstein, Popper, and maybe Lakatos, is among the best critics within the empirical sciences, and building from their critiques seems like an interesting place to explore.


Ockham’s Razor: the Psychological Need for this Important Philosophical Concept

I think that Ockham’s Razor deserves wider discussion and application (“entities must not be multiplied beyond necessity”).  The issue for me is not that simpler is better; it’s that complexity in intellectual artifacts often does more to obscure meaning than to enlighten us.  This came to my attention while listening to the Goldman Sachs testimony and the general feeling that financial engineering was too complex for the US Congress to understand.  I believe this complexity in financial systems was multiplied beyond necessity, and the most obvious reason is that it helped to obscure what was going on between traders.

Science is also not without fault, not only in its complex theoretical statements but also in its expansion of vocabulary.  Sometimes theoretical or lexical complexity is necessary in order to communicate nuances.  But then the complexity often takes on a life of its own.  This not only restricts the ability to communicate, but also taxes cognition.  Reducing complexity can help us to think better.

An example from my early life in graduate school: I was once reading Henry Giroux late at night, finished a paragraph, and realized that I had understood nothing from it.  Two more readings of that paragraph did not bring any more enlightenment.  Of course, it was because I was not familiar with the vocabulary or with the arguments he was presenting.  Now when I find a new Giroux book, I’ll scan through the pages to see if there have been any changes or developments in his basic argument.

It’s not only experience that causes this; it’s also that I now understand Giroux’s arguments at a much simpler level.  When we cannot simplify our cognition, we are forced to understand things in a much more rote fashion.  It happens in methodology too!  The more complex the methodology, the more likely people are to use set methodological formulas or to use others’ work in unquestioned ways.  When it can be simplified, our ability to cognitively manipulate ideas increases.

So, for me, Ockham’s Razor does not mean that simple theories are best, but that simple understandings allow us to do our best thinking.