Action Analytics: Formative Assessment with an Evidentiary Base

I found Linda Baer’s interview with George Siemens on Action Analytics (AA) interesting because of what I see as a natural synthesis with evidence-based practice. The relationship shows in the way Linda seems to define AA in three steps (sketched in code below):

  1. Identify from the research base what is needed to improve student performance
  2. Develop appropriate measures to generate needed data
  3. Use the data and data technology to guide teacher action while students can still be helped.
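
To make the loop concrete, here is a minimal sketch in Python of the three steps as a formative feedback cycle. Everything in it (the engagement measure, the 0.6 threshold, the field names) is an illustrative assumption of mine, not anything from Linda’s interview.

    # A toy sketch of the three AA steps as a formative feedback loop.
    # The measure, threshold, and field names are illustrative assumptions.

    RESEARCH_BASED_THRESHOLD = 0.6  # step 1: a cutoff justified by prior research

    def engagement_score(logins_per_week, submitted, due):
        """Step 2: a measure designed to generate the needed data."""
        completion = submitted / max(due, 1)
        activity = min(logins_per_week / 5, 1.0)  # cap credit at 5 logins/week
        return 0.5 * completion + 0.5 * activity

    def flag_for_intervention(students):
        """Step 3: use the data to guide teacher action while students
        can still be helped, rather than grading after the fact."""
        return [s["name"] for s in students
                if engagement_score(s["logins"], s["submitted"], s["due"])
                < RESEARCH_BASED_THRESHOLD]

    print(flag_for_intervention([
        {"name": "A", "logins": 1, "submitted": 2, "due": 6},   # flagged
        {"name": "B", "logins": 5, "submitted": 6, "due": 6},   # on track
    ]))  # -> ['A']

The point is the shape of the loop: the threshold comes from research, the measure generates the data, and the action happens mid-course.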

I find it similar to the idea of formative assessment, but with the addition of an evidentiary component. Formative assessment is about developing feedback during the learning process, in contrast to summative assessment, which occurs after learning. Summative assessment has only a post-hoc pedagogical purpose, while formative assessment is an integral part of everyday pedagogy. The major difference between formative assessment and AA is that Linda specifies a place for evidence in the process.

I believe that AA can be relevant beyond the field of education, both as a general methodology for practice and as a way to combine evidence-based practice with the growing field of analytics. Analytics are most productive when they are integrated into feedback loops in a formative way, and better still when research-based evidence informs the design of those loops and the development of the measures that generate the feedback data. I expect this integration to be tricky; it will likely require a robust systems approach.

A Measure of Process Standards Can Become a Key to Unleashing Creativity

When you’re a hammer, everything looks like a nail. I have to be careful or everything looks like a measurement opportunity to me. Nonetheless, I can’t deny that there seem to be opportunities to implement better measures supporting evidence-based practice (as I suggested in my last post). I think the process would go something like this:

  • Identify and scope out the domains of interest that are important to you.
  • Conduct systematic reviews to establish a description of the processes that represent best practices within each domain.
  • Develop a descriptive questionnaire that allows an organization to compare its current practice with best practices.
  • Initiate a change project based on a capability maturity model of process change.

The best-practice questionnaire becomes the focal point. It measures your organization’s current performance and provides a prescription for where you’re headed, and it’s easy to understand. Also necessary are outcome measures that provide feedback on how valid the standards are for your organization.
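
As a sketch of how such a questionnaire could double as a measure, here is a small Python example that scores self-ratings against best-practice items and reports the gap per domain. The domains, items, and the 1-to-5 scale are hypothetical stand-ins, not actual standards.

    # Hypothetical best-practice items grouped by domain; the 1-5 self-rating
    # scale and the items themselves are illustrative assumptions.
    best_practice_items = {
        "feedback":  ["Outcomes are measured each cycle",
                      "Results are reviewed with staff"],
        "standards": ["Processes are documented",
                      "Documents reflect current research"],
    }

    responses = {  # 1 = never true of us, 5 = always true of us
        "Outcomes are measured each cycle": 4,
        "Results are reviewed with staff": 2,
        "Processes are documented": 5,
        "Documents reflect current research": 3,
    }

    def conformance_by_domain(items, scores):
        """Average each domain's ratings against the top rating, yielding
        both a current-performance measure and the gap still to close."""
        report = {}
        for domain, questions in items.items():
            avg = sum(scores[q] for q in questions) / len(questions)
            report[domain] = (avg / 5, 1 - avg / 5)  # (conformance, gap)
        return report

    for domain, (score, gap) in conformance_by_domain(
            best_practice_items, responses).items():
        print(f"{domain}: {score:.0%} conforming, {gap:.0%} gap")

A maturity-model change project would then work on the domains with the largest gaps first, re-administering the questionnaire each cycle.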

Two caveats:

1. Complete consensus may not be possible, but at least consensus within a prescribed paradigm should be expected. What the instrument would have the potential to do is focus research within a paradigm and provide a research platform for many organizations to conduct their own improvement projects in the management discipline, similar to what Six Sigma has done for manufacturing.

2. Which leads to a final caveat: this is not the be-all and end-all of management decision-making. What this approach does is provide a framework to organize and scaffold your thinking around evidence-based practice. Science can only provide you with standards: a description of what has been shown to work in the abstract. Not everything can be proven by science; not everything can be summarized in a standard process. What standards do is tell you: these things work, stop reinventing the wheel. Put them into place, and then focus your development on the contextual, the relational, the imaginative, and other areas where empirical science is less helpful. Knowing where to put your creativity: that’s the real benefit of standards.

The Research-Practice Gap: Why Is Evidence-Based Practice So Hard to Achieve?

There have been some recent articles in the social science literature (nursing, education, management, HR, etc.) about evidence-based practice (EBP) and the research-practice gap that exists in many fields. Why is EBP so difficult to achieve, and why do so many proposed solutions leave me underwhelmed? I will offer a reason for the difficulty that I have not yet heard made convincingly.

Problem: Using research across different practices is basically the same problem as the transfer of learning or knowledge across contexts.

Reason for the problem: it takes work. Knowledge is closely tied to the context of its production. There may be theories and prior research that are applicable to a specific practice, but it takes work to contextualize that knowledge, see its applicability to specific contexts, and change the resulting practice. What is that work?

  • Establishing a broad practitioner knowledge-base in order to know that the applicable theories and knowledge exist.
  • Knowing how the existing problem or practice can be reframed or re-understood in the light of this new knowledge. It’s not just using knowledge in a new context; it is re-producing that knowledge, or sometimes producing knowledge that is unique to that context.
  • Making changes and dealing with the side problems common in change management.
  • Developing a feedback methodology for evaluating and adjusting practice changes.

Solution: we need practitioners with better skills and better tools:

  • A larger knowledge-base and a better network (or community of practice) that allows practitioners to tap into the cognition distributed across practitioner networks. In some ways practitioners, because they need to be generalists, need a larger knowledge-base than researchers, who can restrict themselves to specialty areas.
  • Skills in problem framing: re-contextualizing knowledge, generating and testing hypotheses, and setting up experimental and other feedback methodologies.
  • Skills in communication and change management. Understanding what to do is one thing; understanding how to get it done is another thing entirely.

Better tools. Many articles speak as if broad consensus on what practitioners should do already exists. That does not sound like the paradigmatically defined world of science that I know. I think there is hard work yet to be done in writing practice standards and best-practice guidelines in most areas. Standards are important, however, because they form the basis for practitioners to create measurement tools that show how well their practices conform, creating a deep understanding of their practice along the way. A measurement tool also provides a compliance pathway for changing practice.

Designing and Supporting Participation Cultures (or the Management of Any Social System)

I reread an article from Gerhard Fischer this morning and wanted to get the gist of it into my management toolbox.


Gerhard Fischer wrote the following ideas about the design of software systems, but they readily apply to any social system or system of management. Quoted and adapted from Fischer, G. (2009). Rethinking Software Design in Participation Cultures, http://l3d.cs.colorado.edu/~gerhard/papers/ASE-journal.pdf

  • Embrace Users as Co-Designers
  • Provide a Common Platform to support sharing and the insights of others
  • Enable Legitimate Peripheral Participation
  • Share Control
  • Promote Mutual Learning and Support
  • Foster a Social Reward and Recognition Structure

Also, a couple of additional great insights from Dr. Fischer on systems:

First, strike a balance in system design between automating and informating. I see this acting in two ways. Sometimes you want to collect information, and at other times supply it. Sometimes you want to structure systems so that particular actions will happen, and sometimes you want to supply information that allows people to self-structure their actions.
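
As a toy illustration of that balance (my own sketch around a hypothetical late-submission scenario, not an example from Fischer), an automated design decides for the person, while an informated design supplies information and leaves the decision to them:

    from datetime import datetime, timezone

    # Hypothetical deadline scenario illustrating the automate/informate balance.
    DEADLINE = datetime(2025, 1, 15, tzinfo=timezone.utc)

    def automate(submitted_at):
        """Automate: the system structures the action; late work is rejected."""
        return submitted_at <= DEADLINE

    def informate(submitted_at):
        """Informate: the system supplies information so the person can
        self-structure their action."""
        delta = submitted_at - DEADLINE
        if delta.total_seconds() <= 0:
            return "Submitted on time."
        return (f"Submission is {delta.days} day(s) late. Options include "
                "accepting a penalty or contacting the instructor.")

    late = datetime(2025, 1, 17, tzinfo=timezone.utc)
    print(automate(late))    # False: the rule fires, no human judgment
    print(informate(late))   # information only; the person decides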

Second, all systems (or social infrastructures) evolve; intervene through the SER model (seeding, evolutionary growth, reseeding) within a meta-design framework.

Meta-design [Fischer & Giaccardi, 2006] is a design methodology . . . that allow “owners of problems” to act as designers. A fundamental objective of meta-design is to create socio-technical environments [Mumford, 1987] that empower users to engage actively in the continuous development of systems rather than being restricted to the use of existing systems. Meta-design aims at defining and creating not only technical infrastructures for the software system but also social infrastructures in which users can participate actively as co-designers to shape and reshape the socio-technical systems through collaboration. (pp. 5-6)

Even though Fischer is speaking of software design, this is good design guidance for all socio-technical systems, and it is also relevant to business management or any technical field based on the social sciences. After all, what is management today other than the design and support of a participatory culture?