LAK11: Metrics

Almost two years ago I had a good discussion on LMSs and RoI. My observations were:

  1. Organizations use LMS metrics to measure employees’ learning and development and to derive RoI from training initiatives. Obviously, tracking and automated, flexible reporting of any sort is valuable to any organization in any function – provided it is accurate to start with. And obviously organizations spend a lot of time and effort validating data from an LMS, which in turn provides a source of constant improvement, just as with systems serving other functions. These systems provide base data upon which further analyses can be conducted.
  2. At the very atomic level, tracking data is captured for an individual course. This tracking data is used as the input for other data capture around compliance, development plans and certifications. The fundamental question asked is “did employees learn?” or, in predictive terms, “can employees perform?” – whether it is to demonstrate compliance with legal requirements, to track whether an individual is progressing as per the development plan, or to certify them for skills. That is, at the atomic level, the data captured for the course is directly tied to asking “did employees learn?” or “can they perform?”.
  3. This atomic tracking data for an LMS is time spent, attendance, scores, and satisfaction ratings (cursory or detailed, plus additional parameters such as the ones you suggested). Performance management systems could include mechanisms to track or correlate from other perspectives as part of appraisal processes, perhaps thereby adding to the accuracy of analytics.
  4. This data is tracked by means of assessment instruments such as summative assessments that use items of multiple types – multiple choice, Likert-scale etc. These instruments, and their potential utility, must be distinguished from how they are typically used and how effective that use is. So it would be wrong to infer “no multiple choice questions, assessments, pre-tests, or Likert-scale surveys EVER”. Rather, their typical use and effectiveness in determining whether “an employee has learnt” or “an employee can perform” is important and a key aspect of determining RoI.
  5. These instruments are very powerful if they (and their constituent items) meet the basic requirements of educational testing – reliability (whether the assessment consistently achieves the same result) and validity (whether it really measures what it is intended to measure); the reliability sketch after this list illustrates one such measure.
  6. Creating and establishing such instruments for every course requires special expertise and time (not just mapping to an established taxonomy). The LMS has nothing to do with this process. This is evident in high-stakes assessments like the SAT or GRE, which have long, statistically backed development processes.
  7. For routine courses, perhaps not many organizations or their development vendors would either know how, or spend the time and effort, to create statistically valid tests. One would expect, though, that at least certification testing would follow a much more rigorous test creation process because of the stakes involved.
  8. Also, some instruments may be better than others for testing certain types of knowledge or ability. For example, multiple-choice questions don’t necessarily lend themselves to much more than recall of facts and routine procedures. There is a choice involved that, at a larger scale, impacts the metrics the LMS collects.
  9. Let us look at time spent. Typically, the LMS records the start time and the end/suspend time of each session and adds the elapsed time to a running total, giving us a sense of the overall duration the learner has spent on the course (see the time-spent sketch after this list). What can we derive from this measure? Some learners learn faster, some slower. Some may be distracted by a phone call; others may simply not have enough time to go through it all in one attempt and therefore take longer to complete. What can we glean from this? Similarly with attendance: what can we say about it, especially in larger or virtual classes where it is easy not to be noticed even though you are nominally “there”? I am interpreting both of these in the sense of “did employees learn?” or “can they perform?”
  10. Again, “tracking development against a learning plan prepares people to advance” is traditionally accepted as perhaps the best way to proceed. However, there are newer perspectives – connective, networked learning, communities of practice and informal learning – that merit some thought and attention, at least in terms of the impact they could have on how we learn and on how we have traditionally managed these challenges.
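
To make the reliability point in item 5 a little more concrete, here is a minimal sketch of one standard reliability statistic, Cronbach’s alpha, computed over hypothetical item scores from a single assessment. The data, the function and the shape of the score matrix are all illustrative assumptions, not output from any particular LMS.

```python
# A minimal sketch of one reliability statistic, Cronbach's alpha.
# Rows are learners, columns are items; the scores below are made up.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: 2-D array of shape (learners, items)."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of learners' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for five learners on a four-item quiz (1 = correct, 0 = incorrect)
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```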
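
And here is an equally minimal sketch of how the “time spent” measure in item 9 is typically derived: the sum of the elapsed time of each recorded session for a learner on a course. The session records and field names are hypothetical rather than any LMS’s actual schema; the point is simply that the number, by itself, says little about whether learning happened.

```python
# A minimal sketch of the "time spent" metric: accumulate elapsed time per session.
# The session records below are invented for illustration.
from datetime import datetime, timedelta

sessions = [
    {"start": datetime(2011, 1, 10, 9, 0),  "end": datetime(2011, 1, 10, 9, 40)},
    {"start": datetime(2011, 1, 11, 14, 5), "end": datetime(2011, 1, 11, 14, 20)},
    {"start": datetime(2011, 1, 12, 8, 30), "end": datetime(2011, 1, 12, 9, 15)},
]

total = sum((s["end"] - s["start"] for s in sessions), timedelta())
print(f"Total time on course: {total}")  # 1:40:00, but was any of it learning?
```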

Based on next-generation thinking on Learning Analytics, we are seeing a lot of movement around metrics for lifestreaming and for merging the digital and physical worlds. There have been various attempts to survey MOOC participants for information about their interactions and profiles (see, for example, Antonio Fini on CCK08 and Jenny, John and Roy’s work on CCK08 [full project report]). I believe there are also some covering how MOOCs should be designed (John Mak in PLENK2010). These should focus on measuring the course itself.

Here we get into an interesting new domain. It is one of the subjects for LAK11 – knowledge analytics.

George Siemens defines knowledge analytics (Educause presentation) as:

Linked data, semantic web, knowledge webs: how knowledge connects, how it flows, how it changes

But what does that imply for our metrics discussion? Stephen Downes, talking about Network Semantics and Connective Learning, defines three major elements of a network – entities, connections and signals (messages interpreted by receivers). The degree of connected-ness in a network is, according to him, a function of the density of the network, the speed of communication, flow/bandwidth and the plasticity of connections. Given these, context, salience, emergence and memory become essential elements of network semantics.
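
As a rough illustration (my own operationalisation, not Stephen’s), entities and connections can be represented as a graph, and some of these connected-ness properties read off from standard graph measures. The nodes, edges and chosen measures below are assumptions for the sake of the example.

```python
# A rough sketch: entities as nodes, connections as edges, and two crude
# connectedness measures: density and average path length (a proxy for how
# quickly a signal can travel across the network). Data is invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "ben"), ("ana", "cara"), ("ben", "cara"),
    ("cara", "dev"), ("dev", "course_blog"),
])

density = nx.density(G)                        # share of possible connections that exist
avg_path = nx.average_shortest_path_length(G)  # rough proxy for speed of signal flow

print(f"density = {density:.2f}, average path length = {avg_path:.2f}")
```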

Connective semantics is therefore derived from what might be called connectivist ‘pragmatics’, that is, the actual use of networks in practice. In our particular circumstance we would examine how networks are used to support learning. The methodology employed is to look at multiple examples and to determine what patterns may be discerned. These patterns cannot be directly communicated. But instances of these patterns may be communicated, thus allowing readers to (more or less) ‘get the idea’.

What are the metrics that can support these semantics, then? 

Tying these back to George’s vision of Intelligent Data and a phase of knowledge analytics where we try to estimate the distance between current and desired skill levels, I think course-level design metrics should at least cover the following categories:

  1. Metrics based on the four elements Stephen defined for differentiating a learning network from any other network – autonomy, open-ness, diversity and interactivity/connected-ness, this time from the course-design perspective. (I had earlier thought about metrics based on these from the Connectivist, learning-analytics perspective.)
  2. The robustness of the network for learning needs – this could include availability and adequacy of connections (people and resources), permeability, recommender-system efficiency, speed of information flow, information-processing capacity (bandwidth), etc.
  3. Metrics for the evolution of a learning network – this could include defining the state of the network as such, the state of an individual learner with respect to the network, the density of connections, etc. (see the sketch after this list). I think it will be useful to draw on some common models of group evolution or open collaboration.
  4. Metrics generated from social collaborative learning instruments – I don’t say tools, but instruments that play a role similar to the one multiple-choice questions play in traditional courses (what I call Native Collaboration techniques, with their genesis in Critical Literacies)
  5. Level of personalization (environmental adaptation) – measures that record how well the environment personalized itself to specific learner requirements
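
As a sketch of categories 2 and 3, one could compute simple network-state measures week by week from interaction logs. The log format, the choice of measures, and the idea that overall density plus an individual learner’s degree centrality capture the “state” of the network and of a learner within it are all my assumptions, not an established methodology.

```python
# A sketch of network-evolution metrics from (week, source, target) interaction
# records such as replies, comments or links. All data and measures are illustrative.
import networkx as nx

interactions = [
    (1, "ana", "ben"), (1, "ben", "cara"),
    (2, "ana", "ben"), (2, "cara", "dev"), (2, "dev", "ana"),
    (3, "ana", "ben"), (3, "ben", "dev"), (3, "cara", "dev"), (3, "eve", "ana"),
]

for week in sorted({w for w, _, _ in interactions}):
    # Cumulative network up to and including this week
    G = nx.Graph([(s, t) for w, s, t in interactions if w <= week])
    state = {
        "density": round(nx.density(G), 2),                    # state of the network as a whole
        "ana_degree": round(nx.degree_centrality(G)["ana"], 2) # one learner's position in it
    }
    print(f"week {week}: {state}")
```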

There will doubtless be more categories. Maybe there is also some work already underway to remodel Kirkpatrick’s four levels of evaluation for the social media (SoMe) context, which should throw up some more categories and metrics.
