Absent critical voices in AI and Higher Education

Apr 29 2024

The absence of critical voices in debates about AI and Higher Education has become a deafening silence. This think-piece reflects on why that might be, and points to trusted academic sources that go against the grain and describe the situation as it really is.

I have become increasingly puzzled by the lack of critical voices in the stampede to adopt AI generative response engines for learning and teaching practice in higher education. Why aren't there high-profile critiques of these tools? Or of the companies that own them? Where are the articles about the huge financial cost and the bigger, more perilous environmental cost? I watch as various people I know build a positive career trajectory because they have embraced the golden chalice of AI solutionism, yet what is the substance behind this unquestioning embrace? It is as if there is a mass silent collusion of acceptance of these technologies, with no debate, no real knowledge for informed decision-making and no sense of responsibility for what education is or should be in an age of unparalleled automation capabilities. Honestly, I suspect that many academics secretly know they have very little understanding of AI, but stay positive and do not admit their concerns, in case they are found out to be incapable of making any informed judgement. I feel it is my duty to be a dissenter, to voice concerns, but I am outnumbered, treated a bit like a fanatic, a nutty professor on the fringe of serious debate, excluded and probably made fun of. But maybe that kind of mockery is the resort of those who feel threatened.

It is unnerving that this should be so. Sure, the general mantra of ethics and bias considerations for the use of such tools is trotted out regularly, but with little further detail about what these issues mean in reality. What kinds of ethics? What kinds of bias? Then there is the idea of 'AI' (a vague catch-all term at best) saving, or being the future of, education, an all too often repeated mantra in countless papers, blog posts and Substack articles, but with no real reasoning for why this should be so, how it could work, who wants it, or what shape it would take. In other words, no detail of what needs saving or remaking in the shape of AI. Again, I am left with the impression that a strange miasma of obedient uptake is upon us, like a terrible nightmare from Fahrenheit 451. I become frustrated, angry even.

Enforced tech adoption

Somewhat like every other stage of learning technologies adoption, anyone who speaks against them is cast as the lone voice of the luddite, the angry outsider, the contrarian. Learning Analytics (LA) was a particular case in point (imho): the data traces of people logging in, navigating through and clicking on content in learning management systems. These statistics do tell us useful things about the usability, user experience and general administration of an LMS, but nothing (or at least not much) about actual learning. At worst, students who use the LMS heavily may not seek out or use resources beyond the LMS content, thereby losing a degree of self-agency. Yet billions of dollars have been poured into LA: key performance indicators of effective learning strategies are partially built on these statistics, technology procurement budgets are developed around them, and university strategic plans declare new policies and strategies based in part on LA. And these statistics are themselves collected against a devised set of a priori principles that are the key performance indicators, a self-perpetuating feeding frenzy of data for data's sake. It is a murky world in which no one wants to upset the precarious applecart of HE financial foundations - commercial business practice masquerading as obfuscated HE QA - in case the billion dollar (or GBP/EUR) gravy train of edtech finance is disrupted.

Student and course administration is the big elephant in the room, as admin processes in HE are largely self-inflicted: a set of micro-aggressions designed to go through the motions of QA and to provide measurable evidence of L&T effectiveness, or of student engagement, retention and conversion. No one really believes this stuff; it is a performative compliance mechanism.

(Godwin-Jones, 2017, for LMS/LA information)

The knowledge web is collapsing

My own real worry, which I have been talking about for at least a year to no one in particular but anyone who might listen (my HCII 2024 invited paper, another blog post here), is the steady poisoning of the knowledge web by AI generative output. Over the past year this has become a much more widely known issue that is taken a lot more seriously, with Google itself recently taking preventative action to stop 'low quality' gen-AI content putrefying its search results. The massive influx of fake video onto TikTok is also a sign of what is already happening at scale.

Yet the HE AI fanbase continues to advocate that higher education should be churning out gen-AI content with total impunity. Where does this content end up? In information repositories, both closed and open. It putrefies everything it touches. It fills academic journal papers (declared or not), staff reports and written output, student work, image databases, blog post search results, news articles, guides to practice, no doubt open educational resources... Just think: soon the TurnItIn database will be full of gen-AI content, and your plagiarism reports will be partially measured against content created by response engine algorithms. Why hasn't anyone thought about that? Though, to be fair, maybe they have, and are keeping quiet while they develop apps to sell you that deal with the poison of the apps they sold you before and that you are currently using. This is, after all, common practice in the technology innovation cycle.

Gen-AI content consumes as it goes, like nano-bot poison, collapsing everything in its wake (think of that scene in Three Body Problem where the ship falls in on itself). This is not an exaggeration: it is destroying search, and the ability to differentiate authentic human-created content from bot mush will become an essential aspect of future information delivery. AI generative content tools are no joke and should not be taken so lightly, so casually. The idea that we can simply teach students to view these tools 'critically' is naive in the extreme; it is like an ever-expanding multiple pile-up. The problem here is that the problem itself is inestimable, the classic wicked and tangled situation: the more you use AI to 'mend' it, the more issues you will cause.
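
For anyone who wants to see the feedback loop rather than just read about it, the 'model collapse' effect analysed by Shumailov et al. (2023) and experimented with by Loukides (2023) can be illustrated with a toy simulation (a minimal sketch of the general idea, not their actual method): repeatedly fit a simple model to data produced only by the previous generation's model, and watch the variety drain away.

```python
import numpy as np

# Toy illustration of the "model collapse" feedback loop: each new "model"
# (here just a Gaussian fitted by mean/std) is trained only on samples
# produced by the previous model, never on fresh human data.
rng = np.random.default_rng(42)

corpus = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "human" content

for generation in range(1, 301):
    mu, sigma = corpus.mean(), corpus.std()    # "train" on the current corpus
    corpus = rng.normal(mu, sigma, size=50)    # next corpus is model output only
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  std={sigma:.4f}")

# The fitted std tends to drift towards zero over the generations: the rare,
# distinctive "tail" content disappears first, leaving an ever blander average.
```

The point of the sketch is only that recursion on generated data narrows what the system can produce; the real web-scale version is messier, but the direction of travel is the same.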

Sources: Bæk (Google SEO), 2024; Hormillada, 2024; Koenig, 2023; Loukides, 2023; Mouton, 2023; Knight, 2023.

Self-preservation

Everyone is thinking only about themselves, not the big picture. How will it affect me? Does it make my job easier? How can I save time? Does it produce work I need not produce myself? The institutional obsession with admin tasks that academic staff are now expected to do in addition to their teaching AND their research grows year on year, so any help a staff member can get, they will take. It is a bribery of self-preservation: a huge workload demanded with menaces, with the threat of career stagnation or worse, compulsory redundancy dressed up as 'natural wastage', if you do not comply. You must therefore make use of any borderline-unethical tool available to save your sanity and keep up with your job obligations. All around, degree programmes are being removed and staff are no longer needed. This academic desolation is rampant, and growing. A staff member who wants to keep their job and their head above water embraces AI to help. It's a no-brainer.

Take the recent opinion piece published in Inside Higher Ed. The list of what AI 'can do' to help reduce HE costs was disturbing, certainly to my eyes. But to the eyes of extremely time-poor lecturers in HE (or at other levels of education), the list is very appetising. The sentence introducing the list was "AI can relieve faculty from some tedious tasks and do it at scale":

  • Course Design
  • Content and Pedagogy
  • Grading
  • Assessment and Accreditation
  • Research
  • Grant Writing
  • Job Search
  • Student Support
  • Administration

(Bowen & Watson, 2024)

I find it incredible that course design, content and pedagogy, assessment and student support are even on this list, and are regarded as tedious tasks. Selecting a pedagogical approach and designing a course with a bot is just inexplicably bad practice parading as the future. AI is not a genie that can magic out of thin air the thoughts a lecturer is having, or the skill and understanding they have developed over many years. OK, assessment can be gruelling, and sometimes not intellectually rewarding, but in many subject areas not being assessed by a human is extremely problematic. Grading (like assessment design) needs human eyes. There is discussion about AI chatbot rubric generation; I am astonished at this. The whole point of a module rubric is sensitive planning and the development of assessment criteria for the specific learning at hand. To add student support to this list in any way is ethically extremely questionable as part of HE professional practice. This is not what students sign up for. This is not what academic staff expect as professional conduct. Though some of the other items can perhaps be partly automated, I still feel there are major issues here: generating research literature reviews with AI is deeply flawed, allowing no room for human serendipity, quirks, personal interpretation or direction. Grant writing will become a horrifically standardised, bloodless process (though it already is to an extent, which is why it leans towards automation!); job searches imply we are all one homogeneous blob looking for particular job titles, qualification gateways, location specifics, institution types and so on, and in any case we already have multiple ways to generate possible job searches via fairly capable recommender systems.

It must be said that much of this relentless pressure to adopt 'AI' in L&T comes in the form of preparing students for the future. If we look at texts from experts who have been writing in this field for several years, we can see a fairly damning estimation of gen-AI's impact on their areas of practice. Neil Leach predicts that many architects and CAD designers will lose their jobs, with averagely skilled architects out of work pretty quickly, though he also thinks that exceptional talent will benefit. There might be a similar prognosis in other fields, such as film-making, CGI, digital marketing content, search engine optimisation, gaming (both coding and design), tutorial and other skills-based book writing, and so on.

For Leach, conversations limited to popular AI tools such as Midjourney and ChatGPT detract from a broader reckoning that the architecture profession must have in the face of ever-more-capable AI models and platforms. This AI-induced reckoning includes, though is not limited to, the supply and demand of architectural labor, liabilities, and insurance, and the future of all pillars of the architectural community, be it practice, academia, or licensure. (Walsh, 2023)

What made this happen?

I think TurnItIn was the foot in the door. The way many academics discuss its use as a standardised part of learning, teaching and the assessment process is the thin end of the wedge that now makes gen-AI look like the next logical step. It has set the scene for the adoption of recommender system results driven by algorithmic design, and, as they see it, gen-AI is simply the most modern version of the same thing, no different to a similarity score, for example. However, I really don't believe that any of these automated processes have very much to do with learning, and I think that, deep down, a lot of other academics would agree. I can't help but think that training students to jump through machine-driven compliance hoops to reach acceptable levels of quality or 'knowledge and skill' measurement is a meaningless way of skilling young people. What they really learn is that a statistical score is what they must achieve, not other, more valuable human traits and qualities. Do students come to university for this? I suspect a percentage do; they just want the qualification and don't care how they get it. But I would argue that a huge aspect of university is networking, friendship-making, debating with other people, learning how to live in a professional atmosphere, to respect knowledge, and to understand the complexity of the world around us. This may sound grandiose, but it really is about that, and it is that which makes more rounded citizens, more versatile and better prepared to meet the changing world of work. I think most academics would agree with this kind of interpretation of HE, no matter which subject area they are in.

Future positives, at what price?

But AI is with us now and cannot be undone. There is huge potential in what AI technologies can do: contributing to information flow management, dataset analysis for medical science, traffic or flight control, power consumption management and distribution, communication networks, the design of efficient buildings and urban infrastructure, and so on. Only a fool would believe that AI cannot be a game changer in many fields, often for the greater good. But at what price do these great leaps forward come?

An MIT Technology Review article from December 2022 (Heaven, 2022) takes a broad, future-shock view of what is happening, the possible advantages, and an acknowledgement of the inherent problems. The article is worth a look because it describes the progression of the technical processes behind AI text and image generation, and while it quotes several professionals who see an unlimited glittering future ahead, it also includes stark warnings about what is to come.

Working with a moving target of AI technological capability is in itself probably impossible. But when any field that lends itself to being sped up and made a hundred times cheaper and more efficient by gen-AI subsequently means most people in it won't have a job, then what is the point beyond financial profit? This is a serious question, and it is probably the crux of the whole challenge. Do we unravel the core functional principles of modern society, or do we create limits on what is permissible in order to maintain the fabric of what makes society work as a functional machine? Perhaps training a lot more people to do non-digital work is the answer. I think we expect gen-AI to be a bit like the impact of CAD or CGI on digital professions, but it won't be. It will be a thousand times more powerful, like a massive nuclear bomb going off in multiple work areas concurrently. Hardly anyone will survive. I don't really think this is much of an exaggeration, if you're thinking in terms of the next decade.

Do we live in a world that is now defined by Big Tech companies' venture capital options, stockholder profit and the vast amounts of money made from selling data? Are we the computational citizens that Ben Williamson described in 2015? Do any of us have a choice about whether or not to turn everything off, or at least the majority of it? Can we ever change this en masse hurtling towards complete powerlessness, our lives owned and defined in every facet by Big Tech?

Physicist Max Tegmark begins an early section of his 2017 book 'Life 3.0: Being Human in the Age of Artificial Intelligence' with "Welcome to the Most Important Conversation of Our Time", and I feel spoken to. We cannot look away; we should not make easy assumptions; we need to be vigilant, and informed. We must not take sides, we need full critical awareness. If we as academics don't take this position, who will?


[Image: a generated image of urban AI structures, created with the StarryAI Android image app]


Sources

  • Bæk, D. H. (2024, April 24). Google is not against AI generated content and text any longer. SEO AI. https://seo.ai/blog/google-is-not-against-ai-content
  • Bowen, J. A., & Watson, C. E. (2024, April). Is AI Finally a Way to Reduce Higher Ed Costs? Inside Higher Ed. https://www.insidehighered.com/opinion/views/2024/04/23/ai-finally-way-reduce-higher-ed-costs-opinion
  • Godwin-Jones, R. (2017). Scaling up and zooming in: Big data and personalization in language learning. Language Learning & Technology, 21(1), 4–15. http://llt.msu.edu/issues/february2017/emerging.pdf
  • Heaven, W. D. (2022, December 16). Generative AI is changing everything. But what’s left when the hype is gone? MIT Technology Review. https://www.technologyreview.com/2022/12/16/1065005/generative-ai-revolution-art/
  • Hormillada, H. (2024, April 1). Math lessons from deepfakes of Drake, other celebrities on TikTok raise concerns about misinformation. CBC News. https://www.cbc.ca/news/canada/ai-deepfake-tiktok-1.7157571
  • Knight, W. (2023, October 5). Chatbot Hallucinations Are Poisoning Web Search. Wired. https://www.wired.com/story/fast-forward-chatbot-hallucinations-are-poisoning-web-search/
  • Koenig, A. (2023, December 19). Washington Post: The rise of AI fake news. UC social media expert Jeffrey Blevins makes the media rounds on the topic of fake news. University of Cincinnati News. https://www.uc.edu/news/articles/2023/12/social-media-expert-jeffrey-blevins-discusses-misinformation-with-multiple-media-outlets.html
  • Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf, Penguin Random House. https://lccn.loc.gov/2017006248
  • Walsh, N. P. (2023, June 19). 'AI Is Both Incredible and Terrifying': A Conversation with Neil Leach. Archinect. https://archinect.com/features/article/150352996/ai-is-both-incredible-and-terrifying-a-conversation-with-neil-leach
  • Williamson, B. (2015). Educating the Smart City: Schooling Smart Citizens through Computational Urbanism. Big Data & Society, 2(2), 1–13. https://doi.org/10.1177/2053951715617783

Further reading

  • Giovannella, C., Cianfriglia, L., & Giannelli, A. (2024). AIs @ School: the perception of the actors of the learning processes (DOI: 10.13140/RG.2.2.17580.07044). Preprint: https://www.researchgate.net/publication/380074080_AIs_School_the_perception_of_the_actors_of_the_learning_processes
  • Leach, N. (2023). Architecture in the Age of Artificial Intelligence: An Introduction to AI for Architects. Bloomsbury. https://www.bloomsbury.com/uk/architecture-in-the-age-of-artificial-intelligence-9781350165519/
  • Loukides, M. (2023, October 24). Model Collapse: An Experiment, What happens when AI is trained on its own output? O’Reilly. https://www.oreilly.com/radar/model-collapse-an-experiment/
  • Matei, S. A. (2023, November 28). An academic ChatGPT needs a better schooling. Times Higher Education. https://www.timeshighereducation.com/blog/academic-chatgpt-needs-better-schooling
  • Mayberry, K. (2023, Jun 7). Will AI Degrade Online Communities? Tech Policy Press. https://techpolicy.press/will-ai-degrade-online-communities/
  • Mouton, C. A. (2023, July 20). ChatGPT Is Creating New Risks for National Security. Rand Corporation. https://www.rand.org/pubs/commentary/2023/07/chatgpt-is-creating-new-risks-for-national-security.html
  • Risse, M. (2021). The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds. Moral Philosophy and Politics 8 (2), 351–378. https://doi.org/10.1515/mopp-2020-0039
  • Sankaran, V. (2024, January 9). OpenAI says it is ‘impossible’ to train AI without using copyrighted works for free. The Independent. https://www.independent.co.uk/tech/openai-chatgpt-copyrighted-work-use-b2475386.html
  • Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv:2305.17493
  • Smith, M. (2023, June 23). The Internet Isn’t Completely Weird Yet; AI Can Fix That. “Model collapse” looms when AI trains on the output of other models. IEEE Spectrum. https://spectrum.ieee.org/ai-collapse
  • Villalobos, P., Sevilla, J., Heim, L., Besiroglu, T., Hobbhahn, M., & Ho, A. (2022). Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning. arXiv. https://doi.org/10.48550/arXiv.2211.04325
  • Waite, T. (2024, January 3). Here are all the artists Midjourney allegedly uses to train its AI. Dazed Digital. https://www.dazeddigital.com/art-photography/article/61677/1/midjourney-ai-16000-artists-andy-warhol-frida-kahlo-yayoi-kusama-picasso-disney

