Danish university policies on generative AI

Problems, assumptions and sustainability blind spots

DOI:

https://doi.org/10.7146/mk.v40i76.143595

Keywords:

Generative AI, higher education, policy, sustainability, Denmark, ChatGPT

Abstract

The sudden and meteoric rise of generative Artificial Intelligence (genAI) has raised fundamental concerns for universities. Using Bacchi's methodology of 'problematisation', we analyse which concerns Danish universities have addressed through their policies and guidelines. We identify three key problematisations: assessment integrity, legality of data and veracity. While each of these problematisations involves specific limitations, together they also strongly emphasise symbolic and epistemological issues and consequently largely ignore the materiality of genAI, for example, in terms of labour and energy use. Drawing on critical AI studies, this article argues that universities should also consider the huge planetary costs that (gen)AI poses, as well as the full range of AI's exploitative business models and practices. Universities should integrate these considerations into both their decision-making on (not) using certain technologies and their policies and guidelines for research and teaching, just as sustainability is already a criterion in their travel and investment policies today.

Author Biographies

Olivier Driessens, University of Copenhagen

Olivier Driessens is Assistant Professor at the Center for Tracking and Society in the Department of Communication, University of Copenhagen.

Magda Pischetola, University of Copenhagen

Magda Pischetola is Assistant Professor at the Section for Education in the Department of Communication, University of Copenhagen.

References

Adeshola, I., & Adepoju, A. P. (2023). The opportunities and challenges of ChatGPT in education. Interactive Learning Environments, 1–14. https://doi.org/10.1080/10494820.2023.2253858

Albayati, H. (2024). Investigating undergraduate students' perceptions and awareness of using ChatGPT as a regular assistance tool: A user acceptance perspective study. Computers and Education: Artificial Intelligence, 100203. https://doi.org/10.1016/j.caeai.2024.100203

Bacchi, C. L. (1999). Women, Policy and Politics: The Construction of Policy Problems. SAGE. https://doi.org/10.4135/9781446217887

Bacchi, C. L. (2009). Analysing policy: What’s the problem represented to be? Pearson.

Bacchi, C. L. (2019). Introducing the ‘What’s the Problem Represented to be?’ approach. In M. Tröndle & C. Steigerwald (Eds.), Anthologie Kulturpolitik: Einführende Beiträge zu Geschichte, Funktionen und Diskursen der Kulturpolitikforschung (pp. 427–430). Transcript Verlag. https://doi.org/10.1515/9783839437322-031

Baldassarre, M. T., Caivano, D., Fernandez Nieto, B., Gigante, D., & Ragone, A. (2023). The social impact of generative AI: An analysis on ChatGPT. In Proceedings of the 2023 ACM Conference on Information Technology for Social Good, 363–373. https://doi.org/10.1145/3582515.3609555

Baltsen Bøgeholt, L. (2023, January 21). Den overraskede verden med sin intelligens: Nu forbyder flere af landets universiteter chatrobot til eksaminer. DR. https://www.dr.dk/nyheder/indland/den-overraskede-verden-med-sin-intelligens-nu-forbyder-flere-af-landets

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Bin-Nashwan, S. A., Sadallah, M., & Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370. https://doi.org/10.1016/j.techsoc.2023.102370

Blum-Ross, A., & Livingstone, S. (2018). The trouble with “screen time” rules. In G. Mascheroni, C. L. Ponte, & A. Jorge (Eds.), Digital Parenting: The challenges for families in the digital age (pp. 179–187). Nordicom, University of Gothenburg. http://norden.diva-portal.org/smash/record.jsf?pid=diva2%3A1265024&dswid=-814

Brevini, B. (2020). Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Big Data & Society, 7(2), 2053951720935141. https://doi.org/10.1177/2053951720935141

Brevini, B. (2021). Creating the technological saviour: Discourses on AI in Europe and the legitimation of super capitalism. In P. Verdegem (Ed.), AI for Everyone? Critical perspectives (pp. 145–159). University of Westminster Press. https://doi.org/10.16997/book55.i.

Brodie, P. (2023). Data infrastructure studies on an unequal planet. Big Data & Society, 10(1), 20539517231182402. https://doi.org/10.1177/20539517231182402

Callanan, E., Mbakwe, A., Papadimitriou, A., Pei, Y., Sibue, M., Zhu, X., Ma, Z., Liu, X., & Shah, S. (2023). Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams (arXiv:2310.08678). arXiv. https://doi.org/10.48550/arXiv.2310.08678

Chan, C. K. Y. (2023). A comprehensive AI policy education framework for university teaching and learning. International journal of educational technology in higher education, 20(1), 38. https://doi.org/10.1186/s41239-023-00408-3

Chandra, S., Shirish, A., & Srivastava, S. C. (2022). To be or not to be… human? Theorizing the role of human-like competencies in conversational artificial intelligence agents. Journal of Management Information Systems, 39(4), 969–1005. https://doi.org/10.1080/07421222.2022.2127441

Chaudhry, I. S., Sarwary, S. A. M., El Refae, G. A., & Chabchoub, H. (2023). Time to revisit existing student’s performance evaluation approach in Higher Education sector in a new era of ChatGPT—A case study. Cogent Education, 10(1), 2210461. https://doi.org/10.1080/2331186X.2023.2210461

Chavanayarn, S. (2023). Navigating ethical complexities through epistemological analysis of ChatGPT. Bulletin of Science, Technology & Society, 02704676231216355. https://doi.org/10.1177/02704676231216355

Chen, L., Zaharia, M., & Zou, J. (2023). How is ChatGPT’s behavior changing over time? (arXiv:2307.09009). arXiv. http://arxiv.org/abs/2307.09009

Cheng, M. W. T., & Yim, I. H. Y. (2024). Examining the use of ChatGPT in public universities in Hong Kong: A case study of restricted access areas. Discover Education, 3(1). https://doi.org/10.1007/s44217-023-00081-8

Choi, J., Hickman, K., Monahan, A., & Schwarcz, D. (2022). ChatGPT goes to law school. Journal of Legal Education, 71(3), 387–400. https://doi.org/10.2139/ssrn.4335905

Coeckelbergh, M. (2022). The political philosophy of AI. Polity Press.

Coeckelbergh, M., & Gunkel, D. J. (2023). ChatGPT: Deconstructing the debate and moving it forward. AI & SOCIETY, 1–11. https://doi.org/10.1007/s00146-023-01710-4

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 1–12. https://doi.org/10.35542/osf.io/mrz8h

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392

Crawford, K. (2024). Generative AI’s environmental costs are soaring—And mostly secret. Nature, 626(8000), 693–693. https://doi.org/10.1038/d41586-024-00478-x

Dansk Erhverv. (2023). Danskernes brug af ChatGPT eller andre sprogmodeller. Dansk Erhverv. https://www.danskerhverv.dk/politik-og-analyser/analyser/2023/juni/danskernes-brug-af-chatgpt-eller-andre-sprogmodeller/

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Flores-Vivar, J. M., & García-Peñalvo, F. J. (2023). Reflections on the ethics, potential, and challenges of artificial intelligence in the framework of quality education (SDG4). Comunicar, 31(74), 35–44. https://doi.org/10.3916/C74-2023-03

Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., & Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814

Goodlad, L. M. E. (2023). Humanities in the Loop. Critical AI, 1(1–2). https://doi.org/10.1215/2834703X-10734016

Gorichanaz, T. (2023). Accused: How students respond to allegations of using ChatGPT on assessments. Learning: Research and Practice, 9(2), 183–196. https://doi.org/10.1080/23735082.2023.2254787

Hagendorff, T., Fabi, S., & Kosinski, M. (2023). Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nature Computational Science, 3(10), 833–838. https://doi.org/10.1038/s43588-023-00527-x

Hogan, M. (2018). Big data ecologies. Ephemera: Theory & Politics in Organization, 18(3), 631–657. https://ephemerajournal.org/index.php/contribution/big-data-ecologies

Huh, S. (2023). Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: A descriptive study. Journal of Educational Evaluation for Health Professions, 20(1). https://doi.org/10.3352/jeehp.2023.20.1

İpek, Z. H., Gözüm, A. I. C., Papadakis, S., & Kallogiannakis, M. (2023). Educational applications of the ChatGPT AI system: A systematic review research. Educational Process: International Journal, 12(3), 26–55. https://doi.org/10.22521/edupij.2023.123.2

Jo, H. (2023). Decoding the ChatGPT mystery: A comprehensive exploration of factors driving AI language model adoption. Information Development. https://doi.org/10.1177/02666669231202764

Kaltheuner, F. (2021). AI snake oil, pseudoscience and hype: An interview with Arvind Narayanan. In F. Kaltheuner (Ed.), Fake AI (pp. 19–38). Meatspace Press.

Kiryakova, G., & Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Education Sciences, 13(10), 1056. https://doi.org/10.3390/educsci13101056

Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., Leon, L. D., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less ‘thirsty’: Uncovering and addressing the secret water footprint of AI models (arXiv:2304.03271). arXiv. https://doi.org/10.48550/arXiv.2304.03271

Linderoth, C., Hultén, M., & Stenliden, L. (2024). Competing visions of Artificial Intelligence in education—A heuristic analysis on sociotechnical imaginaries and problematizations in policy guidelines. Policy Futures in Education, 14782103241228900. https://doi.org/10.1177/14782103241228900

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410. https://doi.org/10.3390/educsci13040410

Lund, B. D., Wang, T., Mannuru, N. R., et al. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750

Lund, B. D., & Naheem, K. T. (2024). Can ChatGPT be an author? A study of Artificial Intelligence authorship policies in top academic journals. Learned Publishing, 37(1), 13–21. https://doi.org/10.1002/leap.1582

Luo, J. (Jess). (2024). A critical review of GenAI policies in higher education assessment: A call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 1–14. https://doi.org/10.1080/02602938.2024.2309963

Memarian, B., & Doleck, T. (2023). ChatGPT in education: Methods, potentials and limitations. Computers in Human Behavior: Artificial Humans, 100022. https://doi.org/10.1016/j.chbah.2023.100022

Mills, A., Bali, M., & Eaton, L. (2023). How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process. Journal of Applied Learning and Teaching, 6(1), 16–30. https://doi.org/10.37074/jalt.2023.6.1.34

Mok, A. (2023, January 19). CEO of ChatGPT maker responds to schools’ plagiarism concerns: ‘We adapted to calculators and changed what we tested in math class’. Business Insider. https://www.businessinsider.com/openai-chatgpt-ceo-sam-altman-responds-school-plagiarism-concerns-bans-2023-1

Morozov, E. (2013). To save everything, click here: Technology, solutionism and the urge to fix problems that don’t exist. Allen Lane.

Munn, L., Magee, L., & Arora, V. (2023). Truth machines: Synthesizing veracity in AI language models. AI & SOCIETY. https://doi.org/10.1007/s00146-023-01756-4

Newfield, C. (2023). How to make “AI” intelligent; or, the question of epistemic equality. Critical AI, 1(1–2). https://doi.org/10.1215/2834703X-10734076

Ofcom. (2023). Online nation—2023 report (p. 106). Ofcom. https://www.ofcom.org.uk/__data/assets/pdf_file/0029/272288/online-nation-2023-report.pdf

Oravec, J. A. (2023). Artificial Intelligence implications for academic cheating: Expanding the dimensions of responsible human-AI collaboration with ChatGPT. Journal of Interactive Learning Research, 34(2), 213–237.

Pawelec, M. (2022). Deepfakes and democracy (theory): How synthetic audio-visual media for disinformation and hate speech threaten core democratic functions. Digital Society, 1(2), 19. https://doi.org/10.1007/s44206-022-00010-6

Perrigo, B. (2023, January 18). Exclusive: The $2 per hour workers who made ChatGPT safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/

Petersen, L. B. (2024, February 8). AU lemper reglerne for brug af kunstig intelligens til specialer og bacheloropgaver. Omnibus. https://omnibus.au.dk/arkiv/vis/artikel/au-lemper-reglerne-for-brug-af-kunstig-intelligens-til-specialer-og-bacheloropgaver

Pischetola, M. (2021). Re-imagining digital technology in education through critical and neo-materialist insights. Digital Education Review, 40(2), 154–171. https://doi.org/10.1344/der.2021.40.154-171

Rahm, L., & Rahm-Skågeby, J. (2023). Imaginaries and problematisations: A heuristic lens in the age of artificial intelligence in education. British Journal of Educational Technology, 54(5), 1147–1159. https://doi.org/10.1111/bjet.13319

Rawas, S. (2023). ChatGPT: Empowering lifelong learning in the digital age of higher education. Education and Information Technologies, 1–14. https://doi.org/10.1007/s10639-023-12114-8

Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television & New Media, 20(4), 350–365. https://doi.org/10.1177/1527476419831640

Roemer, G., Li, A., Mahmood, U., Dauer, L., & Bellamy, M. (2024). Artificial intelligence model GPT4 narrowly fails simulated radiological protection exam. Journal of Radiological Protection. https://doi.org/10.1088/1361-6498/ad1fdf

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), Article 1. https://doi.org/10.37074/jalt.2023.6.1.9

Sample, I. (2023, July 10). Programs to detect AI discriminate against non-native English speakers, shows study. The Guardian. https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study

Selwyn, N. (2024). On the limits of Artificial Intelligence (AI) in education. Nordisk Tidsskrift for Pedagogikk Og Kritikk, 10. https://doi.org/10.23865/ntpk.v10.6062

Shew, A. (2020). Ableism, technoableism, and future AI. IEEE Technology and Society Magazine, 39(1), 40–85. https://doi.org/10.1109/MTS.2020.2967492

Størup, J. O., & Lieberoth, A. (2023). What’s the problem with “screen time”? A content analysis of dominant voices and worries in three years of national print media. Convergence, 29(1), 201–224. https://doi.org/10.1177/13548565211065299

Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1), 1–10. https://doi.org/10.37074/jalt.2023.6.1.17

Tacheva, J., & Ramasubramanian, S. (2023). AI Empire: Unraveling the interlocking systems of oppression in generative AI’s global order. Big Data & Society, 10(2), 20539517231219241. https://doi.org/10.1177/20539517231219241

Taffel, S. (2015). Archaeologies of electronic waste. Journal of Contemporary Archaeology, 2(1), 78–85. https://doi.org/10.1558/jca.v2i1.27119

Taffel, S. (2023). Data and oil: Metaphor, materiality and metabolic rifts. New Media & Society, 25(5), 980–998. https://doi.org/10.1177/14614448211017887

ThankGod Chinonso, E. (2023). The impact of ChatGPT on privacy and data protection laws. SSRN. https://doi.org/10.2139/ssrn.4574016

TV 2. (2023, June 30). Hver tiende studerende har snydt til eksamen med ChatGPT [Every tenth student has cheated at exams with ChatGPT]. nyheder.tv2.dk. https://nyheder.tv2.dk/tech/2023-06-30-hver-tiende-studerende-har-snydt-til-eksamen-med-chatgpt

Uzun, L. (2023). ChatGPT and academic integrity concerns: Detecting artificial intelligence generated content. Language Education and Technology, 3(1), 45–54.

Van Dis, E., Bollen, J., Zuidema, W., Rooij, R., & Bockting, C. (2023). ChatGPT: Five priorities for research. Nature, 614, 224–226. https://doi.org/10.1038/d41586-023-00288-7

Varanasi, L. (2023, November 5). GPT-4 can ace the bar, but it only has a decent chance of passing the CFA exams. Here’s a list of difficult exams the ChatGPT and GPT-4 have passed. Business Insider. https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1

Ventayen, R. (2023). OpenAI ChatGPT generated results: Similarity index of artificial intelligence-based contents. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4332664

Verdegem, P. (2021). Introduction: Why we need critical perspectives on AI. In AI for everyone? Critical perspectives on AI (pp. 1–18). University of Westminster Press. https://doi.org/10.16997/book55

Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., & Tate, T. (2023). The affordances and contradictions of AI-generated text for second language writers. Journal of Second Language Writing, 62. https://doi.org/10.2139/ssrn.4404380

Williamson, B. (2017). Big Data in education: The digital future of learning, policy and practice. SAGE. https://doi.org/10.4135/9781529714920

Winner, L. (1978). Autonomous technology: Technics-out-of-control as a theme in political thought. MIT Press.

Winograd, A. (2023). Loose-lipped large language models spill your secrets: The privacy implications of large language models. Harvard Journal of Law & Technology, 36(2), 616–656. https://jolt.law.harvard.edu/assets/articlePDFs/v36/Winograd-Loose-Lipped-LLMs.pdf

Xiao, P., Chen, Y., & Bao, W. (2023). Waiting, banning, and embracing: An empirical analysis of adapting policies for Generative AI in Higher Education (arXiv:2305.18617). arXiv. https://doi.org/10.48550/arXiv.2305.18617

Yu, H. (2023). Reflection on whether Chat GPT should be banned by academia from the perspective of education and teaching. Frontiers in Psychology, 14. https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1181712

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Public Affairs.

Published

2024-08-30

How to Cite

Driessens, O., & Pischetola, M. (2024). Danish university policies on generative AI: Problems, assumptions and sustainability blind spots. MedieKultur: Journal of Media and Communication Research, 40(76), 31–52. https://doi.org/10.7146/mk.v40i76.143595

Section

Articles: Theme section