Trust Stephen to come out with another super presentation. The presentation, titled The Representative Student, explores two challenges related to modeling: the role of simulations or models both in delivering learning and in learning about learning, and the relationship between adaptive courseware and social learning environments.
This comes close on the heels of the interesting debate/discussion between Stephen (Atoms and Reasons), Heli (Criteria-CritLit and How to assess Learning), Alan Cooper (Assessing Learning) and John (Enculturation). It also comes close on the heels of an interesting post by NetworkSingularity (John?) (Marsh), who writes that “In today’s fast moving media ecologies intention trumps perfection every time”.
Add to that the great presentation by Alec Couros (Networked Learning 101) and the great discussions I have been having with Al and Luisa, the team behind OPUS 2, around the PENTHA ID model and complex AI-based adaptive learning environments.
I have to mention George Siemens’ Changing Roles article, some of my own work around NBT and Connectivism Impacts, and Jay Cross’s Learnscapes, particularly because of Stephen’s comments on the role of the observer in networked learning environments.
Stephen makes the point that the Critical Literacies (cognition, change, pragmatics, syntax, context and semantics) are all aspects of thought, experience and communication. They are not only the various dimensions of these models but also the key skills involved in working with them. In the Atoms post, he makes the point that assessment cannot be based on atomic learning units – there are none. In fact, given his post on Connective Knowledge, it is logical that some theory of assessment should follow from the basic attributes or types of knowledge he listed there. And, in Having Reasons, he remarks that what interests him at this point is “How connectivism moves beyond being a ‘mere’ forming of associations, and allows for a having, and articulation, of reasons”.
But what really got me excited is the possibility that all these ideas could merge if we started looking at simulations on a wider scale: connective simulations that abstract from the richness and complexity of our learning processes in a meaningful way, allowing us not only to gain better insight into learning, but also to guide our efforts to architect and enable observation-based assessments.
The challenge, in my opinion, is also to prove that the new forms of assessment are scalable and accurate. That is, that a large number of people can be observed (or can demonstrate) “being” or “doing” in a manner that is reliable, accurate and consistent. The accuracy problem is important because simulations can only go so far in abstracting from a complex real world.
If we had such a method, and it was proven superior to traditional methods, we would have buy-in. After all, the problem confronting us at this moment is that we still end up observing and assessing people’s performance afresh whenever they start a job, despite qualifications and proof from reliable assessments.
As an example, let us take a virtual-world, simulation-based approach to learning and assessment. In a virtual world combined with a simulation, it is easy to observe a participant learning to be or learning to do. Take the specific case of a hardware training lab in a virtual world: the student can be joined in the room not only by peers but also by an instructor or expert who observes the steps taken to resolve a technical problem.
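To make the observation idea concrete, here is a minimal sketch (in Python; every name here is invented for illustration, not part of any actual virtual-world platform) of how such a lab session might record the steps a student takes, so that a peer, instructor, or automated observer can review the path taken rather than only the end result:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch: logging a student's actions in a virtual
# hardware lab so an observer can assess the process, not just
# the outcome.

@dataclass
class Step:
    timestamp: str
    action: str   # e.g. "reseat_memory_module"
    outcome: str  # e.g. "fault_cleared", "no_change"

@dataclass
class LabSession:
    student: str
    problem: str
    steps: List[Step] = field(default_factory=list)

    def record(self, action: str, outcome: str) -> None:
        # Timestamp each step so the sequence can be replayed later.
        now = datetime.now(timezone.utc).isoformat()
        self.steps.append(Step(now, action, outcome))

    def transcript(self) -> List[str]:
        # What the observing peer, instructor, or AI system would review.
        return [f"{s.action} -> {s.outcome}" for s in self.steps]

session = LabSession(student="alice", problem="server will not boot")
session.record("check_power_supply", "no_change")
session.record("reseat_memory_module", "fault_cleared")
print(session.transcript())
```

The point of the sketch is simply that the unit of assessment becomes the recorded sequence of actions, which any observer (human or AI) can inspect.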
Or it could be a simulated virtual world in which the learner attempts to construct a mind map, or even a new piece of architecture or design, from physical and virtual social media resources, using the richness of a real world: color, shape (maybe smell at some point), or the things Stephen points out (linguistic structures, mathematical representations, videos, paintings, songs, gestures, behaviors and more), while being observed by an AI-based connectionist system or by a human observer.
And then let us imagine a system that allowed multiple assessing sources to bring inputs, based on observation, on the things Stephen mentions, such as dissonance, the participation gap and resolution (Heli quotes Stephen on these in her Criteria post). Such a system could take subjective input as well as bring together statistics from behavior recording or some similar online tracking mechanism, driven by the intelligence/social-collaboration or connectivist metrics I proposed. With that, we could perhaps have a more complete connectivist methodology.
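As a rough illustration of what such a multi-source system might look like, here is a minimal sketch (in Python; the criteria names come from the discussion above, but the 0-to-1 scale, the equal weighting, and the function names are all my own assumptions for illustration) that blends subjective observer ratings on dissonance, participation and resolution with automatically tracked behavior statistics:

```python
from statistics import mean

# Hypothetical sketch: combine subjective human-observer ratings with
# scores from an automated behavior-tracking mechanism. Equal weighting
# between the two source types is a placeholder, not a proposal.

def combine_assessments(observations, tracked):
    """observations: list of {criterion: score in [0, 1]} dicts, one per observer.
    tracked: {criterion: score in [0, 1]} from an online tracking mechanism.
    Returns a per-criterion blend of human and tracked scores."""
    criteria = ("dissonance", "participation", "resolution")
    profile = {}
    for c in criteria:
        human = mean(o[c] for o in observations if c in o)
        # Fall back to the human score when no tracked data exists.
        profile[c] = (human + tracked.get(c, human)) / 2
    return profile

observers = [
    {"dissonance": 0.6, "participation": 0.8, "resolution": 0.7},
    {"dissonance": 0.4, "participation": 0.9, "resolution": 0.6},
]
tracker = {"participation": 0.7, "resolution": 0.9}

print(combine_assessments(observers, tracker))
```

Even this toy version shows the shape of the idea: several assessing sources feed one profile, and subjective and tracked inputs can coexist in the same structure.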