I have mentioned Project Tin Can on this blog before. To give you a snapshot of the top ideas in their forum (please contribute as much as you can), here are some of the user-submitted ideas on how learning experiences should be tracked:
- Distributed content – include content across organizational boundaries
- Transparency of SCORM runtime data
- Collaboration between multiple learners and teams
- Handle sequencing dynamically
- Move SCORM outside the browser
- Tracking learning not hosted in an LMS
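To make the last idea concrete, here is a toy sketch of what tracking a learning experience outside an LMS might look like: the experience is captured as a simple actor/verb/object statement that any tool could emit. The field names and the `make_statement` helper are my own assumptions for illustration, not part of any published specification.

```python
import json

def make_statement(actor, verb, obj):
    """Build a minimal activity statement as a plain dict."""
    return {"actor": actor, "verb": verb, "object": obj}

# A learner commenting on a blog post, recorded without any LMS involved.
statement = make_statement(
    actor="mailto:learner@example.com",
    verb="commented",
    obj="http://example.com/blog/scorm-next-gen",
)

print(json.dumps(statement))
```

The point is that the record is self-describing and independent of where the content is hosted, so a photo gallery, a forum, or a blog could all report experiences in the same shape.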
I think these ideas about the learning experience will resonate with the LAK11 and PLENK communities, and they are important enough to be considered seriously.
In this post, I want to cover what LETSI (Learning, Education, Training Systems Interoperability) is doing.
LETSI has many working groups. One is defining the next-generation Runtime Web Services (RTWS) layer for runtime communication between the LMS and content using state-of-the-art web technology. The Orchestration working group is working on expanding the sequencing specification in SCORM.
While “sequencing” implies the ordering of activities over time, we anticipate other ways in which things need to be combined: components need to be combined to make adaptable activities; different “players” need to be combined to engage in a collaborative or competitive activity; and learning delivery services need to interoperate with external data services. We see sequencing as just one type of orchestration.
The Content As A Service (CAAS) working group is exciting. Among other things, they are talking about separating learning activities from the LMS and making them available from separate content providers. This could be a photo gallery shared on Facebook, a question shared on Quora, or this blog post – any resource or activity that can simply be launched, engage the learner, and report metrics back.
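The launch–engage–report flow above can be sketched as a minimal interface. Everything here – the class name, the session shape, the metric names – is an assumption of mine to illustrate the idea, not anything the working group has specified.

```python
class ExternalActivity:
    """A piece of content hosted anywhere, outside the LMS."""

    def __init__(self, url):
        self.url = url
        self.metrics = {}

    def launch(self, learner_id):
        """Hand the learner off to the externally hosted content."""
        self.session = {"learner": learner_id, "url": self.url}
        return self.session

    def report(self, name, value):
        """The content calls back with a metric once the learner has engaged."""
        self.metrics[name] = value
        return self.metrics

# A Facebook photo gallery acting as a learning activity.
gallery = ExternalActivity("http://example.com/gallery")
gallery.launch("learner-1")
gallery.report("time_on_task_seconds", 120)
```

The LMS (or whatever sits in its place) never hosts the content; it only initiates the launch and receives the metrics.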
Then there is the Learning Activity Description (LAD) working group. This, in my opinion, is a very important group because it seeks to define a framework for describing what a learning activity is, how it gets triggered, and what it can result in. If defined in the network / social media (SoMe) context, this could cover practically every activity (or generation of learning artifacts) that a learner would engage in.
So, for example, a blog post today structurally provides two pieces of information: the content and context of the post, and the collaboration with the network that accessed or commented on it. Suppose a content provider offered a blog service in which every post could be rated by viewers, and which exposed the rating data back to the provider. If a competence metric stated that the rating must be above a particular level for the learner to be deemed competent in the field the content refers to, then the blog post could be termed an instance of a learning activity. At the other end of the spectrum, it could be a complex simulation activity with complex outcomes and data.
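The blog-rating example reduces to a very small metric. The threshold and the rule (average rating must clear a bar) are invented here purely to show the shape of such a metric, not taken from any specification.

```python
def is_competent(ratings, threshold=4.0):
    """Deem the learner competent when the average viewer rating
    of their post meets the (assumed) competence bar."""
    if not ratings:
        return False  # no evidence yet, no competence claim
    return sum(ratings) / len(ratings) >= threshold

# Three viewers rated the post 5, 4 and 4: average 4.33, above the bar.
print(is_competent([5, 4, 4]))  # -> True
```

Trivial as it is, this is exactly the kind of rule LAD would need a shared way of describing, so that different platforms could evaluate it consistently.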
With tweaks to existing platforms, ranging from simple to complex, that relate them to educational outcomes, our sense-making artifacts could provide a third source of input to the analytics process posed by George (apart from profile and intelligent data): data that provides direct, competency-related performance information arising out of learning activities. As the group suggests in the Activity ontology definition (Slide 24):
A traditional instructional object classifies performance data against expected patterns (outcomes); its outcomes are standard completion, failure, error type, and so on. An intelligent instructional object performs assessment of competencies, and its outcomes are competence profiles.
For knowledge analytics, this could be seen as a way to think about measuring the gap between actual and required competency levels based on a competency framework.
But a crucial requirement for LAD to work is a shared vocabulary, or ways to specify one. Another crucial requirement is shared data areas (shared memory). This is the work of the final working group, the one on Namesets and Common Memory.
I like what LETSI and Tin Can are doing from multiple perspectives. Firstly, as a long-time SCORM sufferer, this openness and ability to move with the times is great! Secondly, there is an appreciation that the process of learning and teaching cannot be constrained to a set of data models and runtime APIs. Thirdly, there is now a way that non-LMS and non-Learning-Object methodologies can potentially be made to work – in a distributed, open, cross-platform and connective manner. From the LAK11 perspective, both from knowledge and learning analytics standpoints, these are significant developments.