About howardjohnson

I recently graduated from Temple University with a doctorate in educational psychology. I'm from Northeast Ohio, US. I'm not currently working, but I would like to find an interesting project. My hobbies include music of any style. I'm proficient on percussion instruments, but I still love to play guitar, bass, hammered dulcimer, or keyboard on occasion.

Let’s Bring a Level of Artistry to Building Forms of Digital Life

Matthias Melcher’s post on the digital humanities has me thinking about extending the ideas from my post here, which referenced Lee Drutman’s ideas on the creativity of quants.

Let’s start with this John Shotter quote about Foucault’s Archaeology of Knowledge, a work about how the world got to be the way it is now.

But now, many take seriously Foucault’s (1972: 49) claim that our task consists of not – of no longer – treating discourses as groups of signs . . . but as practices that systematically form the objects of which they speak.

In many discussions the humanities and the sciences are structurally defined by how they differ from each other.  But step back and distance yourself, so as to see the practice and form of life that normally escapes notice.  People engaged with educational discourses are shaping educational practices (forms of life) and students (their object), much as a painter shapes the forms on his or her canvas.  This is not a critique of these practices, but a way of bringing to our attention the artistry that is possible in creating all forms of life: not just painting and literature, but no less educational practice, data science, or social science.  Also, as participants jointly engaging in these forms of life, let us bring artistry to the objects of which they speak: us.

Here’s my main point: data science is about to transform education, and it can take many different forms.  Will we take notice and expend the effort to add a level of artistry to what we create, or will we blindly stumble through?  Can data be an architectural tool through which we create a more beautiful world?

Unpacking Ontologically Responsible Assessment

In this post I want to unpack the term Ontologically Responsible Assessment mentioned in this post.

Why Develop an Ontology:

An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. . . . There is no one correct way to model a domain—there are always viable alternatives. The best solution almost always depends on the application that you have in mind.  Source

When people say that students need 21st Century skills, what they really mean is that they want to change their ontological commitments as to what students are and what they will become.  When we move from a mechanistic factory model of education to a dialogic networked model, we are really changing our ontological commitments from components in a machine to actors in a network.  Ontologies try to clarify questions about the nature of being and becoming a student in the context of educational practice.  I would add (to the typical information systems objectivist account) that an ontology in educational practice also involves recognizing that students are constituted by networked relationships and the various domain discourses within which they interact.  The main difference in this ontology is that (in contradistinction to most information systems ontologies) its organization is not hierarchical and behavioral, but contexted, networked and dialogic.  This doesn’t mean there is no place for hierarchical behavioral objectives, just that they no longer form the core of our educational goals.
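To make the contrast concrete, here is a minimal sketch (my own illustration, with invented node and relationship names, not any standard ontology format) of the two kinds of ontological commitment side by side:

```python
# Hierarchical / behavioral commitment: a student is a position in a
# fixed tree of curricular objectives.
hierarchical = {
    "Math": {
        "Algebra": ["solve linear equations", "factor polynomials"],
        "Geometry": ["prove triangle congruence"],
    }
}

# Networked / dialogic commitment: a student is constituted by
# relationships and discourses.  Nodes are people and discourses;
# edges carry the kind of dialogue that connects them.
networked = {
    "nodes": ["student_a", "teacher_b", "peer_c", "algebra_discourse"],
    "edges": [
        ("student_a", "teacher_b", "feedback dialogue"),
        ("student_a", "peer_c", "collaborative problem solving"),
        ("student_a", "algebra_discourse", "participates in"),
    ],
}

def relations_of(graph, node):
    """All dialogic relationships a node participates in."""
    return [(a, b, kind) for (a, b, kind) in graph["edges"] if node in (a, b)]

print(relations_of(networked, "student_a"))
```

In the first structure a student can only be located; in the second, the student is defined by the set of relations returned, which is the ontological shift I have in mind.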

Why Responsibility:

Depending on whether one believes that reality is objectively given or subjectively / collectively constituted, the understanding of responsibility will differ. This, in turn, has a serious impact on how individuals and collectives can or should use IS (Information Systems).  . . . Reality is thus not given and open to objective discovery but outcome of the intentional activity of perception and interpersonal communication. This means that the dimensions of responsibility must be discovered through communication. (Stahl, 2007; available from ResearchGate, or Ontologies, Integrated Series in Information Systems, Volume 14, 2007, pp. 143–169)

Education can’t be conceived through objective behavioral description; rather, it is conceived in the context of conversational realities.  Students are not cogs in a machine but people, and the conversational realities where we meet them involve commitments, requirements, privileges, and various other high-level latent traits that defy easy objectification.  To be responsible is to jointly actualize an educational program.

What Do I Mean by Assessment:

What is the purpose of educational assessment?  Wikipedia speaks about documenting knowledge, skills, attitudes, and beliefs.  Merriam-Webster talks about making judgments.  Edutopia talks about assessment as a mechanism for instruction.  I want to focus on another aspect that may seem technical, but that I believe gets to the heart of the matter.

What we measure are, for the most part, latent constructs.  As Michael Kane (2013) frames it:

Test scores are of interest because they are used to support claims that go beyond (often far beyond) the observed performances. We generally do not employ test scores simply to report how a test taker performed on certain tasks on a certain occasion and under certain conditions. Rather, the scores are used to support claims that a test taker has, for example, some level of achievement in some domain, some standing on a trait, or some probability of succeeding in an educational program or other activity. These claims are not generally self-evident and merit evaluation.  Validating the Interpretations and Uses of Test Scores

More than anything else, assessment, at its core, is the process of estimating a latent trait and making it visible.  It is the first step in the analytic process of drawing connections; we can’t connect the dots until they are visible to us.  There are two people for whom this is of primary importance: the teacher and the student.  If we observe the educational practices that involve testing, these are often the last two stakeholders given consideration, but they should be the first.
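Since I am framing assessment as estimating a latent trait, a small sketch may help make the idea tangible.  This is a hypothetical illustration using the Rasch model, the simplest item response model; the item difficulties and the student’s responses are made up for the example:

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a correct response given
    latent ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood estimate of the latent trait via Newton-Raphson.
    responses: list of 0/1 scores; difficulties: one b per item.
    (A mixed pattern of right and wrong answers is assumed; all-correct
    or all-wrong patterns have no finite estimate.)"""
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))  # score function
        hess = -sum(p * (1 - p) for p in ps)              # negative information
        theta -= grad / hess
    return theta

# Three easier items answered correctly, two harder ones missed:
theta_hat = estimate_theta([1, 1, 1, 0, 0], [-1.0, -0.5, 0.0, 1.0, 1.5])
print(round(theta_hat, 2))  # ≈ 0.68
```

The number printed is the trait made visible: a single estimate, on the same scale as the item difficulties, that supports the kind of claims Kane describes, which go well beyond the five observed answers.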

Conclusion

This threefold understanding of educational assessment includes developing an ontology whose assessment practices recognize a full account of the being and becoming of students.  It does not restrict our view to what is easily measured but essentially meaningless in the bigger picture or final analysis.  Secondly, it is responsible in that assessment is linked to an expectation for engagement that goes beyond behavioral description to recognize the full complexity of student engagement as a dialogic and networked individual.  And finally, it does not use data in a mechanistic fashion, but uses construct measurement to make joint responsibilities and ontologies visible to teachers and students in everyday educational practice.


A Practice Perspective on the Quants and the Humanists

Lee Drutman responded to Timothy Egan’s New York Times Article about creativity and Big Data.

First, Egan says that companies like Amazon that are based on quantitative methods are not creative because they “marginalized messiness”.  Drutman responds that “(d)ata analysis and everything that goes into it can be highly creative”, meaning (I guess) that quants can get down in the mess too.  Both are good points, but they miss another aspect that unites the arts / humanities and the sciences, and this is the heart of my argument.  Both are creating practices that affect our lives in important ways.  The point is that we all create.  It’s not whether we are or are not creative; it’s a question of what we are creating.  From John Shotter’s Cultural Politics of Everyday Life:

But now, many take seriously Foucault’s (1972: 49) claim that our task consists of not – of no longer – treating discourses as groups of signs . . . but as practices that systematically form the objects of which they speak.

In other words, it’s not whether the quants are creative, but whether their analyses treat me as an object to be controlled or as a human being whose being the analysis respects.  That’s what I call ontologically responsible assessment.  Again, from Shotter:

I want to argue not for a radical change in our practices, but for a self-conscious noticing of their actual nature.

We should offer people clear and understandable analysis through which they can make new connections, analysis that also respects and is responsible to their rights as persons.  Yes, as Lee claims, the sciences and the humanities can work together.  But beyond that, they are both human-based social practices.  If we see them as practices à la Foucault, there is much more in common than different.  They are both not only creative, but creating.

Instructionism, Constructionism and Connectivism: Epistemologies and Their Implied Pedagogies

Ryan2.0’s blog recently hosted a discussion on different pedagogies based on instructionist, constructionist and connectivist theories of learning.  I tend to see these differences on an epistemological / psychological / psychometric level.  (I’m an educational psychologist, not a philosopher.)  I think this line of thinking is helpful for exploring some of my recent thoughts.

First, a note: I resist labels on learning theories.  A consensus may be developing, but there are so many sub-positions that if you look at 100 constructivist positions, you’ll find 100 different takes (as evidenced by many of the comments on Ryan’s post).  I just find labels unsatisfying as points of reference for communication about learning theories at this time; they convey too little meaning to me.  Tell me what you don’t like about a learning theory; I probably don’t like it either.

What’s the Point

Ryan’s main point is that all of these pedagogical positions are evident in current educational practice and that we should think in terms of “and”, not “or”.  This fits with my own view that paradigm shifts should proceed by subsuming, or at least accounting for, the successful parts of the previous paradigm, while enabling teachers and scientists to move beyond problematic aspects of older theories.  To really understand these different theories, it helps to see how pedagogy changes as we move from one to the next.  My post here looks at each of these theories in terms of epistemology / psychology / psychometrics, and then discusses a place where its implied pedagogy is relevant to practice today.

Direct Instruction

I’m not familiar with instructivism per se, but it seems similar to direct instruction, a pedagogy associated with positivism / behaviorism.  Direct instruction often uses empirically based task analyses that are easy to measure and easy to employ.  Applied Behavior Analysis is a specialized operant behavioral pedagogy that is a prime supporter of direct instruction.  Many, if not most, classrooms use direct instruction in some form today.  It seems like common sense, and many teachers may not be aware of the underlying epistemology.

One prominent area where advanced use of direct instruction is growing is computer-based adaptive learning like the Knewton platform.  Students follow scripted instruction sequences, and a student’s specific path within the script is determined by assessments that follow Item Response Theory (IRT) protocols.  The assessment estimates a student’s command of a latent trait and provides the next instruction appropriate for the assessed level of that trait.  The best feature of adaptive learning systems is their efficiency in moving students through a large body of curriculum or in making leaps in skill levels, like the improvement of reading levels.  Because direct instruction is also easy to measure, it’s possible to use advanced psychometric computer analyses.
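As a rough sketch of what one adaptive step might look like (a generic IRT illustration only, not Knewton’s actual algorithm; the item bank and ability estimate are invented), the platform can pick the next item by maximizing information at the current trait estimate:

```python
import math

def p_correct(theta, b):
    """Rasch (1-parameter logistic) item response function."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of an item at ability theta.
    It peaks when the item difficulty b matches theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item that is most informative
    at the current ability estimate theta_hat."""
    candidates = [(i, b) for i, b in enumerate(item_bank) if i not in administered]
    return max(candidates, key=lambda ib: item_information(theta_hat, ib[1]))[0]

bank = [-2.0, -1.0, 0.0, 0.5, 1.0, 2.0]  # hypothetical item difficulties
print(next_item(0.4, bank, administered={0}))  # prints 3: the b = 0.5 item
```

The point of the sketch is the feedback loop: each response updates the trait estimate, and the estimate in turn selects the item (and instruction) best matched to the student’s current level.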

Critiques of direct instruction are similar to critiques of behaviorism in general.  Even though test developers are becoming more sophisticated in measuring complex constructs (e.g., Common Core), the learning that results from direct instruction can still be seen as lacking conceptual depth and the ability to transfer to other knowledge domains.  It also doesn’t directly address many important higher-level cognitive skills.

Constructivism

Enter constructivism.  I think of constructivism as beginning with Piaget’s learning through schema development.  Piaget’s individual constructive approach was expanded by social theorists, ending up with embodied theorists or in ideas similar to Wittgenstein’s: that knowledge and meaning are closely linked with how they are used.  Wittgenstein’s early work was similar to that of the logical positivists.  He eventually found that meaning in everyday activities is inherently circular, and the only way to break out is not through precision but by looking for meaning in what people are doing and how they are using knowledge.  In some ways it’s like a return to behaviorism, but from a position more in line with hermeneutics than empiricism.

I recently saw a presentation of an instructional program (MakerState) based on the Maker / Hacker Space movement that functions much like a constructivist approach to education.

MakerState kids learn by doing, by creating, designing, experimenting, building…making. Our makers respond when challenged to think outside the box, to think creatively and critically, to collaborate with their peers, to problem solve, to innovate and even invent solutions to challenges they see around them.

This program can be founded on the same curriculum used in direct instruction when developing maker challenge activities, and it can use that curriculum to scaffold maker activities with STEAM principles.  But the outcomes are open-ended, and the outcome complexities are well beyond what is possible through direct instruction.  Learning by doing is more than just an aside.  Making knowledge concrete is actualizing it: taking it from the abstract to make it meaningful, valuable and productive.  But is this the end of educational objectives; does success in life not require even more?

Connectivism

Enter connectivism.  I associate connectivism with the work of George Siemens and Stephen Downes.  I take this post from George as a good summary of connectivism:

The big idea is that learning and knowledge are networked, not sequential and hierarchical.  . . . In the short term, hierarchical and structured models may still succeed. In the long term, and I’m thinking in terms of a decade or so, learning systems must be modelled on the attributes of networked information, reflect end user control, take advantage of connective/collective social activity, treat technical systems as co-sensemaking agents to human cognition, make use of data in automated and guided decision making, and serve the creative and innovation needs of a society (actually, human race) facing big problems.

I believe this take on connectivism is modeled on computer and social media networks.  My own take is to include a more biological approach as another major node in connectivism: M.M. Bakhtin, a Russian literary critic known as a dialogic philosopher.  I draw this connection because dialogism is a reasonable way to make sense of the everyday collective co-sensemaking activity of an organism interacting with its environment.  I see this as understanding how networks function when biological organisms (i.e., humans) are involved.

One of Bakhtin’s main ideas is heteroglossia:

(A)ll languages (and knowledges) represent a distinct point of view on the world, characterized by its own meaning and values. In this view, language is “shot through with intentions and accents,” and thus there are no neutral words. Even the most unremarkable statement possesses a taste, whether of a profession, a party, a generation, a place or a time.  . . . Bakhtin goes on to discuss the interconnectedness of conversation. Even a simple dialogue, in his view, is full of quotations and references, often to a general “everyone says” or “I heard that..” Opinion and information are transmitted by way of reference to an indefinite, general source. By way of these references, humans selectively assimilate the discourse of others and make it their own.

Just as water is the medium that allows fish to swim, language is the medium that facilitates networks.  Rather than focus on words as the base unit, Bakhtin focuses on the utterance as his main unit of analysis.  This is from the main Wikipedia Bakhtin article:

Utterances are not indifferent to one another, and are not self-sufficient; they are aware of and mutually reflect one another… Every utterance must be regarded as primarily a response to preceding utterances of the given sphere (we understand the word ‘response’ here in the broadest sense). Each utterance refutes, affirms, supplements, and relies upon the others, presupposes them to be known, and somehow takes them into account…

I see this as a detailed account of the Wittgensteinian use argument I referenced earlier.  From a psych perspective: the inner psychological world reflects and models the interaction we have with the world.  Because learning is facilitated by social interaction with other people in dialogue, our mind is structured in a dialogical fashion.  This is to see knowledge as existing not only through network nodes, but through nodes that reflect dialogue and interconnected utterances.  (This is similar to structuralism, but goes well beyond it in its implications.)  Even when we are learning through self-study, we structure that study in a dialogical fashion.  When we engage in soliloquy, we posit a general other to whom we address our words.  Transferring knowledge is not just cutting and pasting it to another node in the network; we must also adjust to new intentions, new references, and often to the tastes of a new profession or discipline.  I don’t know what the neurological correlates of dialogic activity are, but at the conscious level (and at some unconscious levels) I see the mind as structured by its interaction with this complex social / speech world.

I don’t yet have a good example of a pedagogy that reflects this dialogic connective theory.  It would certainly be activity-based and structured like an open-ended apprenticeship combined with some sort of performance.  I’m thinking that relevant learning objectives would include: higher-order cognition in unstructured situations (e.g., knowledge transfer, problem identification and solving, creative thinking, situated strategic thinking), intrapersonal dispositions (e.g., motivation, persistence, resilience, and metacognition like self-directed learning) and interpersonal skill sets (e.g., collaboration, effective situated communication, relationship development).

I think a key to achieving a higher level of connective pedagogy is valid assessment in an area where assessment has proven difficult.  Assessment in this context must also be ontologically responsible to the student.  The purpose of ontologically responsible assessment is not to rank, rate, or judge either students or teachers; that is a task for other assessments.  Instead, ontologically responsible assessment is a way of making ourselves visible, both to ourselves and to others, in a joint student–teacher activity that conveys the student’s history and future horizons.  (Horizon = a future that I can see only vaguely, but that contains a reasonable route to achievement, given both the student’s and teacher’s joint commitment to each other and to the path.)  Education becomes a doable, visible, committed and ontologically responsible joint activity of student and teacher.

I’m never satisfied with an ending, but this seems like a good jumping-off point for another post and another time.  I feel the need for input before going further in this direction.


Seeing Students Develop: From Objective Data to Subjective Achievement

Even though the personalization / individualization of instruction is being driven by objective data in learning platforms, this data can also be used to facilitate a deeper self-understanding and a stronger commitment and understanding between the student and the teacher.

To see the future, students and teachers should focus on their horizons.  Horizons here refer to a point in developmental time that can’t be seen clearly today, but that one can reasonably expect to reach in the future.  Because many aspects of this developmental journey are both precarious and dependent on future actions, this joint vision can’t be wishful thinking; it must be clearly framed in terms of privileges and obligations.  When it is treated this way, assessment is not a picture of student achievement but a method for making both student and teacher visible to each other in a way that is rational, meaningful and conducted in an ontologically responsible manner; that is, in a way that is true to who we want to become (Shotter, 1993).

This model of support begins with valid assessments that are clear and explicit about their meaning, the underlying values implied, and the actual or expected consequences.  The learning process can then be understood from a narrative perspective as well as mathematically.  By referencing empirically supported path models, personalization can include choice, preparing the way for stronger commitment, for clarification of learning directions, and possibly for experiments involving those directions.

The idea is not to suggest that assessment must become less objective, but to recognize that an educational process must contribute to the development of a subject.  Educating a student is not like designing a computer chip.  It is about helping an individual actualize their unique capabilities while finding themselves and their place in society.  The goal of education is intellectual development.  Approaches tethered to a mechanistic model of education will fail in this goal, and they are not even appropriate on the efficiency grounds by which they are often justified.  Assessment may start with objective visions, but its uses must translate directly to the subjective tasks that are central to both teacher and student.

4 Reasons Adaptive Learning Could Replace High Stakes Standardized Testing (It’s in the Validity)

I attended a recent NYC edtech meetup at Knewton.  While looking at the promotional materials on their platform, it occurred to me that this system has a stronger basis in validity than high stakes standardized testing (HSST).  I know it’s a (big) data-driven approach, likely similar to what I was familiar with at Sylvain, except that digitalization allows you to address many more dimensions in the data, to cross-reference different domain skills, and to better represent intellectual development over time.  This post is about the validity of big data adaptive learning systems as compared to HSST.

  1. The easiest distinction is the contrast between the “snapshot in time” nature of HSST and the developmental histories of adaptive learning.  Development is the way students and teachers understand school-based learning, especially when it’s not linear but proceeds in fits and starts.  Nor does a snapshot relate to the purposes of assessment.  In adaptive learning, error is not a judgment but an occasion for more learning.
  2. This point may seem esoteric, but I think it’s important.  HSST must represent an ambitious construct interpretation; that is, a single HSST question must represent the same learning that is represented in hundreds if not thousands of questions in an adaptive learning system.  And while the assessments in the adaptive system are part of the learning process, HSST constructs often stand outside of any pedagogical process.  (See #1 below.)
  3. There are negative consequences associated with HSST.  Because of the lag time between testing and reporting, HSSTs have less instructional relevance.  Assessments in adaptive learning provide immediate feedback and are instrumental to the learning process.  There are also many unintended consequences, like instructional time wasted on test prep or the disassociation of error from an opportunity to learn.
  4. Assessments are consequential for students.  In adaptive learning, assessments determine the instructional pathway the student will pursue.  If done well, the student will perceive this assessment to have been appropriate and helpful.  Many HSSTs (e.g., the SAT) may be perceived as a threat and associated with a lack of opportunity.  (See #2 below.)

It seems to me that as adaptive learning becomes more common and its validity becomes recognized, HSST will no longer be needed.

#1.  “If the IUA does not claim much (e.g., that students with high scores on the test can generally perform the kinds of tasks included in the test), it does not require much empirical support beyond data supporting the generalizability of the scores. A more-ambitious interpretation (e.g., one involving inferences about some theoretical construct) would require more evidence (e.g., evidence evaluating the theory and the consistency of the test scores with the theory) to support the additional claims being made”.  Kane (2013) p.3

#2. “The SAT is a mind-numbing, stress-inducing ritual of torture. The College Board can change the test all it likes, but no single exam, given on a single day, should determine anyone’s fate. The fact that we have been using this test to perform exactly this function for generations now is a national scandal”. NYTimes


A New Form for Validity

Thinking about new projects.  Here are the general contours of a new way of looking at validity.

  1. There have been criticisms of Samuel Messick’s unified view of construct validity and of Kane’s argument-based approach.  I have yet to accept any logical argument made against either framework, yet I am sympathetic when it is said that these frameworks are not practical administratively.
  2. Consider an argument made by the philosopher Karl Popper.  Popper makes a distinction between justification and criticism on the way to his famous idea of falsificationism.  Just as one cannot claim that one’s theory is true through experimentation (you can only be sure of your results when they are false), so too it is precarious to justify one’s beliefs but easy to demonstrate when they’re false.  Justification can be seen as a next-to-impossible task, but a criticism is far more likely to be demonstrably true.  If we respond to criticism with a desire to improve and adjust our beliefs, then our beliefs will approach a closer version of what you might call truth.  So the best way to support assessment validity is by being open to criticism, always seeking to improve through critical reasoning.
  3. This does not nullify Messick’s framework (Messick, 1995), but it shifts it from a framework for justification to a framework for critique and critical thinking.  Messick’s framework moves away from a hopelessly difficult attempt at justification and becomes a critical framework for knowledge transparency.  Recent developments in philosophy have demonstrated the contingent nature of knowledge and how its shape is determined by the form of its production.  Messick’s transparent critical framework for the production of assessment knowledge is the best way to see the underlying contingencies.
  4. Kane’s framing of validity as an argument is more suited to a critical approach than to a justificationist approach.  The very nature of argument sets up a two-sided dialogue; every argument presupposes a dialogic counter-argument.  If you enter into an argument, you must be willing to entertain and engage with critical positions.  Kane’s framework is more suited to responding to criticism than to depending on justification.

Scaffolding Start-ups: A New Role for Education

How does education change in the future?  The biggest trend to be dealt with is change itself, on a massive scale.  Knowledge is both increasing and decaying on an exponential trajectory.  What you knew 5 years ago may not be relevant today, and many capabilities you need today were unheard of 5 years ago.  Careers also change, requiring retraining.  The web is open and accessible and puts content at our fingertips, but it does little to help us structure that information in ways that build our capabilities.  Capabilities require more than just knowledge; they require active practice-based learning.  Capabilities also require different types of resources, many of which may still be very scarce or expensive.  We need new educational institutions that provide new types of resources that build capabilities: the ability to do.

Here is an example of a new model of an incubator institution: Brewery Inc, a brewery incubator based out of Houston recently profiled in FastCo.  Brewery Inc provides shared access to professional brewing equipment, a shared workspace, business workshops, a tap room with an established customer base, and a regulatory framework, all to support aspiring nano-brewery entrepreneurs in developing their product before venturing out on their own.  All of this helps mitigate start-up costs and risks through shared knowledge and resources.

Brewers pay $1,500 for the year to use one of Brewery Inc.’s fermenters. In exchange, “we’re actually taking all the licensing under our name, and taking all the responsibility for those brewers,” Borrego says. “When they’re ready to open their own business, the beer is perfect, the market is there, brand is established, and they’re fully ready to focus on the business aspect.”

As an educational pedagogy, what this brewery is doing is scaffolding the start-up process.  Entrepreneur nano-brewers are able to learn by doing, and what they are able to do is extended by Brewery Inc.’s knowledge and resources.  When they leave the nest (so to speak) they are ready to fly on their own.  Sure, some could have survived on their own, but Brewery Inc.’s processes accelerate and deepen their learning and their knowledge of how to make a brewery happen.

What types of resources are needed to scaffold start-ups in other businesses?

Psychology and Management: Dealing with Dysfunction and Cross Purposes

Stowe Boyd’s recent post reminds me that management is a social, psychological and, indeed, a human endeavor.  Part of this endeavor concerns dealing with the unavoidable dysfunction that will arise.

The Problem:

Far too often organization members operate at cross-purposes. This is particularly true with organizational structures enabling division of labor: members are grouped into divisions, functions, and departments, and then further split into groups and teams, in order to create specialized functioning on behalf of the larger system . . . members too often come to identify with the parts rather than the whole to which they belong. . . . with predictable misalignments in purpose, activities and relationships. (Kahn, 2012, p.225)

 

a particular person or leader may be carrying all kinds of unconscious anxieties, aggressions, and energies of those being led; bloody mergers, acquisitions, downsizing or combative relations with competitors or the world at large may veil all kinds of individual and group fears and inadequacies; a corporate group’s understanding of its external environment may be dominated by the unconscious projections of a few key managers; a strong corporate subculture may be mobilizing neglected aspects of a corporate “shadow’ that are worthy of attention and of being brought to light. (Ross)

A Solution:

In understanding these hidden dimension of everyday reality, managers and change agents can open the way to modes of practice that respect and cope with organizational challenges in a new way. . . . They can begin to untangle sources of scapegoating, victimization, and blame and find ways of addressing the deeper anxieties to which they are giving form. They can approach the “resistance” and “defensive routines” that tend to sabotage and block change with a new sensitivity, and find constructive ways of dealing with them. (Ross, Ibid)

References

Kahn, W. A. (2012). The Functions of Dysfunction: Implications for Organization Diagnosis and Change. Consulting Psychology Journal: Practice and Research, 64(3), 225–241.

Gordon Ross’s blog: http://gordonr.tumblr.com/post/42860195841/structures-rules-behaviours-believes-and-the


Validity: the Overlooked Issue in Big Data

Validity is an important but often overlooked issue whenever measurement and data analysis are involved, and this includes Big Data applications.  Steve Lohr’s NY Times article on the potential pitfalls of Big Data raises validity concerns (Do the models make sense?  Are decision makers trained and using data appropriately?), as does Nassim Taleb’s article, Beware the Big Errors of Big Data; in both, validity concerns are paramount, but the nature of validity itself is not addressed.

Validity is an overall evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of interpretations and actions on the basis of test scores or other modes of assessment (Messick, 1995, p. 741).

That is to say, when we look at data analytics, are the results justifiable?  Just having data doesn’t make it right. Big Wrong Data can be a dangerous thing.

As big data becomes a larger part of our everyday life, validity must also become a critical component of analysis, especially if big data is to find success beyond the current fashion. As Samuel Messick (ibid) said:

. . . validity, reliability, comparability, and fairness are not just measurement principles, they are social values that have meaning and force outside of measurement whenever evaluative judgments and decisions are made. (Messick, Ibid).

This importance is not reflected in the scant treatment that validity often receives in data and measurement training or in most discussions of big data. The modern view of validity (after Samuel Messick) is about more than judging the rightness of one’s measures; it is also about the transparency of the assumptions and connections behind the measurement program and processes. I’ll propose the following (non-exhaustive) list as a place to begin when judging the use of data and measurement:

  • Content Validity – Data and measurement are used to answer questions, and the first step in quality measurement is getting the question right and clear. Measurement will not help if we’re asking the wrong questions or are making the wrong inferences from ambiguous questions. When questions are clear, you can begin linking them to appropriate construct measures.
  • Structural Fidelity – Additional information should show how assessment tasks and data models relate to underlying behavioral processes and the contexts to which they can be said to apply.  Understand the processes that underlie the measures.
  • Criterion Validity – This examines convergent and discriminant empirical evidence in correlations with other pertinent and well-understood criterion measures. Do your results make sense in light of previous measures?
  • Consequential Validity – Of particular importance are the observed consequences of the decisions that are being made. As Lohr’s article points out, our data-based operations do not just portray the world, but play an active role in shaping the empirical world around us. It’s important to compare results with intentions.
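Criterion validity, in particular, lends itself to a concrete check: a new measure should correlate strongly with established measures of the same construct (convergent evidence) and weakly with measures of unrelated constructs (discriminant evidence). Here is a minimal sketch in Python; the score lists are entirely hypothetical and stand in for whatever measures a real validation study would use.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for eight students:
# a new analytics-based measure, an established measure of the
# same construct, and a measure of an unrelated construct.
new_measure = [55, 62, 70, 48, 81, 66, 59, 74]
established = [52, 60, 73, 45, 79, 68, 61, 70]
unrelated   = [12, 33, 18, 40, 22, 15, 38, 25]

convergent_r = pearson(new_measure, established)    # should be high
discriminant_r = pearson(new_measure, unrelated)    # should be near zero

print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```

A high convergent correlation alongside a low discriminant one is evidence (not proof) that the new measure taps the intended construct; the reverse pattern is a warning that the data may be measuring something other than what we think.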

Good decisions are based on data and evidence, but inevitably will rely on many implicit assumptions. Validity is about making these assumptions explicit and justifiable in the decision making process.

“The principles of validity apply not just to interpretive and action inferences derived from test scores as ordinarily conceived, but also to inferences based on any means of observing or documenting consistent behaviors or attributes. . . . Hence, the principles of validity apply to all assessments . . .”(Messick, ibid, p.741).

Reference – Messick, S. (1995). Validity of Psychological Assessments: Validation of Inferences From Persons’ Responses and Performances as Scientific Inquiry Into Score Meaning. American Psychologist, 50, 741–749.