Learner Experience Variables

Jul 16 2019

This post discusses an idea I’m working on as part of further implications arising from the PhD: the potential to develop ‘learner experience variables’, data values that could be used to help machines see more complex kinds of learning experience present in digital interactions and learner generated content within emerging technology-mediated smart learning contexts. In turn this could help to provide smarter knowledge and interaction delivery to each learner, stage by stage, throughout any learning activity in an emerging-technologies-supported context.

Following the development of the table for ‘understanding experience complexity’ (see below), and then thinking about the learner generated content that needs to be analysed in some way, I came upon the idea that I could grade it using this table of variations, with added Bloom’s scores. I’d already developed a table of ‘pedagogical’ equivalences – in fact I’ve done this several times – between the Bloom’s Revised and SOLO taxonomies and various factors relevant to the study, but I needed to develop the idea further. This post contributes to that thinking.


Background thinking

Early in the project I matched equivalences of Bloom’s Revised and SOLO to what I was calling ‘pedagogical aspects of interest’ (PAI): knowledge construction, role and identity, digital and information literacy, and overall engagement. In the absence of any data, or even an adequate understanding of the project, I placed simple descriptive assumptions about levels of experience of these PAI in a table and matched them to Bloom’s and SOLO levels. Though this was only useful to illustrate very early concepts of ‘pedagogical experience variation’ for smart learning journeys, the idea stuck and I ran with it, because even though it was of little use in itself it seemed to represent something important.

Much more recently (early 2019) I made a complex table that set these same PAI against multiple other related equivalences:

  • Beetham’s (2012) notional networked learning activity types, as this simple framework had provided a solid starting point for the whole project.
  • The most popular pedagogical topics of relevant discourse that my brief overview meta-analysis of the literature had thrown up – participatory, community, interactive, identity and collaborative (which luckily were very relevant to ‘connectivist’-style learning).
  • Some ‘Learning 4.0’ factors from a book chapter by Peter Henning (2018) that offers extremely relevant thinking, though (in my view) it loses itself in overly complex technical scheming. (As an aside, that chapter is in a knowledge management book, not a technical or educational one. Another sign of the convergence and approaching singularity of all disciplines, perhaps!)
  • The Digital Competence Framework factors, as I had by now realised my work sits very close to that framework and supports the DigComp 2.1 aims.

However, in the end this table didn’t tell me much, beyond the fact that many things concurred with, confirmed and supported each other in this post-connectivist style of learning.

Breakthroughs in experience variation categories

In the past three months I have made several breakthroughs which are key to the entire project being a success. First came an understanding of the experience variation categories for the journey as a whole, which had been staring me in the face from the data, together with the confidence to justify the decision to use them; then the inclusivity within these categories, and the levels of depth of experience within them; and finally the table of understanding experience complexity, which provides an overall framework of clarity for all levels and all categories in this system of analysis and thinking. This shows clearly the possible relationships: the horizontal, vertical and diagonal inclusivity and progression that is possible, or evidently happening, in the experiencing of a smart learning journey. (Of note is that this understanding and the related breakthroughs happened because I wrote papers about the stages of findings, which I believe helped to focus thinking in tighter ways.)

The four categories and levels with their simplest descriptions are shown below:

[Image: Categories of experience variation for the smart learning journey as a whole]

Now that I have much firmer ground from which to go forward, it has become clear I no longer need the ‘notional’ PAI, as these are replaced by the aspects present in the categories of experience variation themselves, derived from the work itself: tasks; discussion and collaboration; being there; and knowledge and place as value. The earlier table of pedagogical interpretive equivalences is still relevant; I will just remove the PAI that I myself conceptualised early on in the study.

Grading learner generated content with an experience complexity rubric

Thinking about how to grade the LGC, I made an equivalency table spanning surface to deep learning across the four categories of experience variation for the smart learning journey. The table summarises these categories and levels of experience complexity alongside the relative Bloom’s Revised ‘score’. Also included is a summary description of the surface to deep learning concept (from Marton & Säljö, 1976), and I make use of descriptor terms taken from Hounsell’s (2005) work on conceptions of essay writing: arrangement, viewpoint and argument.

This then clearly shows how Bloom’s Revised and/or SOLO can be matched to equivalent levels of experience complexity for the four categories, and I think it works quite well. This becomes the rubric of variables by which I ‘assess’ the LGC that has been uploaded to Edmodo class areas in my study. I think this is fairly tidy and makes sense.

[Image: Experience complexity surface to deep learning, with Bloom’s & SOLO]

[Image: Experience Complexity Rubric]
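To make the rubric idea a little more concrete, here is a minimal sketch of how it might be represented as a simple data structure. The four category names and the surface-to-deep idea come from the work above, but the level labels and the Bloom’s Revised scores per level shown here are placeholder assumptions, not the actual values in the rubric tables.

```python
# Hypothetical sketch only: category names follow the study, but level labels
# and Bloom's Revised scores per level are illustrative placeholders.

CATEGORIES = [
    "tasks",
    "discussion_collaboration",
    "being_there",
    "knowledge_and_place_as_value",
]

# Assumed four levels of experience complexity, graded from surface to deep
# learning, each paired with an illustrative Bloom's Revised 'score' (1-6).
LEVELS = {
    1: {"depth": "surface",        "blooms_score": 1},  # e.g. remember
    2: {"depth": "surface-to-mid", "blooms_score": 3},  # e.g. apply
    3: {"depth": "mid-to-deep",    "blooms_score": 4},  # e.g. analyse
    4: {"depth": "deep",           "blooms_score": 6},  # e.g. create
}

def rubric_entry(category: str, level: int) -> dict:
    """Look up the rubric cell for one category at one complexity level."""
    if category not in CATEGORIES or level not in LEVELS:
        raise ValueError("unknown category or level")
    return {"category": category, "level": level, **LEVELS[level]}

# Example: grading the 'being there' aspect of one piece of content at level 3
print(rubric_entry("being_there", 3))
```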

Experience Variables Data

So how can this work? What is needed is a clear rubric, and representative experience complexity variables to assign to each piece of content or textual comment. The two images below show how we might move from a human grading of learner generated content using this rubric to an appreciation of what it could achieve at machine-learned interpretive scale.

[Image: Experience Variables Data table]

[Image: Experience Variables Data for locations]
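As a loose illustration of what ‘experience variables data’ for one piece of learner generated content might look like, the sketch below stores a human grading against the four categories and converts it to Bloom’s Revised scores. All field names, the example location and the level-to-score mapping are assumptions for illustration, not the actual schema or study data.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LGCGrade:
    """A human grading of one piece of learner generated content (LGC).

    Field names and values are hypothetical; they simply show how experience
    complexity levels could travel with a content record.
    """
    content_id: str
    location: str                      # journey location / point of interest
    content_type: str                  # e.g. "image", "comment", "video"
    levels: Dict[str, int] = field(default_factory=dict)  # category -> level 1-4

    def blooms_scores(self, level_to_score: Dict[int, int]) -> Dict[str, int]:
        """Map each graded level to its equivalent Bloom's Revised score."""
        return {cat: level_to_score[lvl] for cat, lvl in self.levels.items()}

# Illustrative level-to-score mapping and example record (placeholder values)
level_to_score = {1: 1, 2: 3, 3: 4, 4: 6}
item = LGCGrade(
    content_id="edmodo-0001",
    location="example-poi-07",
    content_type="image",
    levels={"tasks": 2, "discussion_collaboration": 1,
            "being_there": 3, "knowledge_and_place_as_value": 2},
)
print(item.blooms_scores(level_to_score))
```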

The next images show that image content can already be analysed to a reasonable level. If additional learner experience aspects were added to any data algorithm for LGC (in this case images), these could be machine learned from the human grading of this type of content, so that we might develop learner experience variation data across citywide micro learning activities at huge scale over time.

[Image: Locations and webpages]

[Image: People at locations, engaging]

[Image: Facebook image recognition]
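Purely as a sketch of the machine learning step, assuming image tags have already been extracted by an off-the-shelf recognition service (as in the screenshots above), a simple supervised model could be trained on human-graded examples to suggest a complexity level for one category of new, ungraded content. The tags, gradings and model choice here are all hypothetical, not study data.

```python
# Sketch only: tags and gradings are made-up placeholders, not study data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Each training example: recognised tags for one uploaded image, plus the
# human-assigned complexity level (1-4) for one category, e.g. 'being there'.
image_tags = [
    "outdoor building people standing",
    "text screenshot webpage",
    "monument sky statue",
    "people smiling selfie indoor",
]
human_levels = [3, 1, 2, 4]

model = make_pipeline(CountVectorizer(), RandomForestClassifier(n_estimators=100))
model.fit(image_tags, human_levels)

# With enough human-graded examples, the model could propose experience
# complexity levels for new content across citywide activities at scale.
print(model.predict(["statue plaza tourists outdoor"]))
```

In practice this would need far more graded examples and a model per category, but it indicates where the human rubric grading and a machine-learned interpretation could meet.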

This post provides a summary of the thinking that has led up to this idea, and a subsequent post will cover how this data might be used to provide smarter learning pathways with varying types of knowledge content to learners.

These ideas are being incorporated into a paper, provisionally entitled “Learner experience complexity as data variables for smarter learning”, to be submitted to the AI & Society special issue ‘Ways of Machine Seeing’. That paper will report in further detail on the methodology behind these categories of experience variation, and on the potential for smarter knowledge delivery and more flexible provision of user journey interface progressions.


References
  • Anderson, L.W., & Krathwohl, D.R. (Eds.) (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Addison Wesley Longman.
  • Beetham, H. (2012). Designing for Active Learning in Technology-Rich Contexts. In Beetham, H., & Sharpe, R. (Eds.), Rethinking Pedagogy for a Digital Age: Designing for 21st Century Learning (2nd ed.) (pp. 31-48). New York and London: Routledge.
  • Biggs, J.B., and Collis, K.F. (1982). Evaluating the Quality of Learning: The SOLO Taxonomy (1st ed.). New York: Academic Press.
  • Henning, P.A. (2018). Learning 4.0. In North, K., Maier, R., & Haas, O. (Eds.), Knowledge Management in Digital Change, New Findings and Practical Cases. Springer International Publishing AG, Switzerland. pp. 277–290.
  • Hounsell, D. (2005) ‘Contrasting conceptions of essay-writing’. In: Marton, F., Hounsell, D. and Entwistle, N., (eds.) The Experience of Learning: Implications for teaching and studying in higher education. 3rd (Internet) edition. Edinburgh: University of Edinburgh, Centre for Teaching, Learning and Assessment. pp. 106-125.
  • Marton, F., and Säljö, R. (1976), “On qualitative differences in learning: I – Outcome and Process”, British Journal of Educational Psychology, Vol. 46, pp. 4-11.

 

