I wrote a paper a while back which in part discussed Google Lens. Google Lens will soon become standard Android camera technology, permitting smart retrieval of knowledge content associated with whatever we view through our smartphone camera. This is a huge innovation, currently being conceptualised only for commercial purposes. But placing that functionality in a generic, open-access knowledge terrain would let us call up content at will for learning. Briefly, the challenge of smarter learning through augmented reality is how we search, find and retrieve (sort, save) information relating to objects in the real world.
Imagine for a moment a plugin for your institution's VLE platform of choice, where you, as a tutor or institutional administrator, could set faceted search controls by faculty, school, topic, level, even cohort and time-limited access. These could cover your own institution's bespoke content, supplemented by other quality web pages offered to your learners, helping them get going in their geo-spatially located projects.
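To make the idea concrete, here is a minimal sketch of what such faceted filtering might look like behind a VLE plugin. Everything here is hypothetical: the field names, the sample content items, and the `faceted_search` function are illustrative assumptions, not any real VLE API.

```python
from datetime import date, datetime

# Hypothetical content items an institution might curate; all field
# names and values are illustrative assumptions.
CONTENT = [
    {"title": "Campus heritage walk", "faculty": "Arts", "topic": "history",
     "level": 4, "cohort": "2024", "available_until": "2025-06-30"},
    {"title": "River ecology survey", "faculty": "Science", "topic": "ecology",
     "level": 5, "cohort": "2024", "available_until": "2024-12-31"},
]

def faceted_search(items, *, faculty=None, topic=None, level=None,
                   cohort=None, on_date=None):
    """Return items matching every facet a tutor has set.

    A facet left as None places no restriction; on_date enforces the
    time-limited access window via each item's available_until field.
    """
    results = []
    for item in items:
        if faculty and item["faculty"] != faculty:
            continue
        if topic and item["topic"] != topic:
            continue
        if level is not None and item["level"] != level:
            continue
        if cohort and item["cohort"] != cohort:
            continue
        if on_date and datetime.fromisoformat(item["available_until"]).date() < on_date:
            continue
        results.append(item)
    return results

# Example: an Arts tutor narrows the commons to their own faculty.
arts_items = faceted_search(CONTENT, faculty="Arts")
```

The same pattern extends naturally to geo-spatial facets (say, filtering by distance from a learner's current location), which is where the Lens-style camera retrieval described above would plug in.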
The work I'm involved in for the doctorate, while using simpler technology, very much concerns this kind of information dissemination and retrieval, as part of investigating early use-case user experience in these scenarios. It's good to know that even though my own project is limited in its technological application, it is completely relevant to these cutting-edge innovations in knowledge retrieval.
- My paper: ‘A smarter knowledge commons for smart learning’ https://link.springer.com/article/10.1186/s40561-018-0056-z
- Google Lens: A billion recognisable things https://www.theverge.com/2018/12/19/18149120/google-lens-ai-camera-recognize-detect-1-billion-items
- Google Lens App: https://mashable.com/2018/06/04/google-lens-standalone-app/