Water and fire

Should universities embrace AI?

11 min
30-08-2023
Text Katrien Verreyken
Image Sebastian Steveniers

IN SHORT

  • According to David Martens, universities need to fully embrace AI, even though AI could make up to a quarter of our current jobs disappear.
  • Some of the creative process and intellectual effort will be lost if students write their papers and dissertations with AI, Anthony Longo fears.
  • The risks of AI: a lack of transparency in the creation process, unclear ownership, the risk of bias, and the concentration of power in a few big US tech players.
  • Storing data must be done fairly and securely, and we need to know how AI models are trained and who exactly has access to them.
  • Ethical reflection is essential. This can be done, for example, through a ‘Responsible AI’ course, and through good training courses for teaching staff.
  • The new research centre ‘Antwerp Center for Responsible AI’ (ACRAI) aims to pool university-wide expertise so that ethical decisions on AI are not only left to companies.

Can ChatGPT be used to write a dissertation? And more generally: should we go full steam ahead with recent developments in artificial intelligence (AI), or is some restraint needed? Datamining & Responsible AI specialist David Martens, founder of the new ‘Antwerp Center for Responsible AI’ and, among other things, lecturer of the Data Science Ethics course, thinks our university should fully embrace AI. Philosopher Anthony Longo, who is writing his PhD thesis on digital technology and launched the platform devirtuelemens.be, fears that the critical reflex is being lost in the rush to implement the technology. He thinks we should hit the brakes for now.

 

Fully embracing AI

 

It has been raining AI stories in the press lately: doomsayers and progress thinkers alike are speaking out. Historian and futurologist Yuval Noah Harari fears that AI will destroy our civilisation, Ann Nowé, head of the VUB’s AI lab, called ChatGPT an ethically irresponsible experiment, and Geoffrey Hinton, one of the fathers of AI technology, quit Google and warned of the dangers of the technology. At the same time, there are plenty of signs that the air is escaping from the stock market bubble around artificial intelligence. But who is right? And what role can or should universities play in the whole AI story?

 

Datamining and Responsible AI specialist David Martens (Faculty of Business and Economics) is very adamant about this: ‘As a university, we need to fully embrace AI. AI could make up to a quarter of our current jobs disappear; OpenAI even claims that eighty per cent of all jobs will change under the influence of generative AI. Part of our role as a university is to ensure that the economy and society are made better by this technology, and it is our duty to teach current students how to use AI responsibly.’


As a university, we must ensure that the economy and society are made better by AI, and it is our duty to teach current students how to use AI responsibly.

David Martens

Ownership and transparency gone

 

‘I’m a bit more worried’, says PhD student Anthony Longo (Department of Philosophy). ‘AI presents the university with a number of issues that we cannot ignore. For example, don’t students lose opportunities to train their intelligence because of AI? With any new technology we implement, we lose something and gain something. There will be new skills and forms of creativity, but I think some of the creative process and intellectual effort will also be lost if students start writing their papers and dissertations with AI. AI has been developed in part to remove some of the “friction” from performing tasks, but I think it is precisely in that friction, in the struggle and failure, that the value of the learning process lies. When using AI, to what extent are you still educating students academically, and not merely teaching them how to write the right prompts?’

 

Longo also fears that transparency in the creation process will be lost with AI. ‘Take ChatGPT for example: I no longer know exactly where my output is coming from, while for a researcher in philosophy that is very important. What are the assumptions? How did these arise and how is a conclusion reached? I think that through AI some form of ownership in that creation process is also lost.’

 

European AI legislation and risks

 

‘I don’t agree with everything’, Martens responds. ‘I think it is the responsibility of the teaching staff to adapt their forms of assessment in such a way that students continue to reflect critically on their own work even when using AI output. I let my students use ChatGPT for their papers, but I also have them write a paragraph about what they liked and disliked about it, and of course I point out the dangers and risks. Our university has since created policies on AI. For now, I am also using Harvard’s policy because it is clearer to students. It is true that ownership is one of the risks. Fortunately, European AI legislation expressly states that platforms like OpenAI must declare where their “copyrighted data” comes from. Another risk is the possibility of creating “biases”: if an algorithm is trained on non-representative information, inequality between population groups will increase. And a very big risk is the so-called “concentration of power”: at the moment, a handful of big US tech players like OpenAI, Microsoft, Meta and Google are leading the discussions on this topic and we have no control over it ourselves. This makes it all the more important that Europe sets out guidelines to follow.’

AI and exploitation

 

‘Somehow, I cannot rid myself of the impression that AI depends on some kind of exploitation’, Longo believes. ‘I see three forms: the exploitation of society, because individuals are used as test subjects without knowing it; the exploitation of labour, because AI models depend on “content moderation” and training that often take place in poor working conditions in emerging countries; and the exploitation of nature, because the development of AI requires a huge amount of raw materials and there is little transparency about this. As a university, we need to consider that too. Just as we critically assess the suppliers we work with, why not examine whether AI fits into the university’s sustainability or diversity strategy?’

 

‘I think these are fine and legitimate academic questions, but they should not get in the way of the rollout of AI’, Martens says. ‘The exploitation of the individual is mainly a matter of transparency; as far as the exploitation of labour is concerned, there are really more urgent issues to address; and as far as the exploitation of nature is concerned, it is really not that bad. In terms of CO2 emissions, training GPT-3 is equivalent to a car driving around the world nine times (356,000 km). Are these adequate reasons to ignore this technology for the time being? It seems to me they are not.’


Does AI fit into the university’s sustainability or diversity strategy?

Anthony Longo

Opportunities and dangers

 

‘I mostly see the opportunities of AI’, Martens says. ‘With AI, we simply do a better job at a lower cost: writing that difficult email, finalising an abstract, fine-tuning a paper, finding a good metaphor, and so on. I also teach a workshop for PhD students on the use of AI at our faculty’s doctoral day, and one of the things I suggest there is that they let ChatGPT play the “reviewer” who critiques their thesis and identifies possible gaps or errors. In my view, AI stimulates creativity more than it curtails it. The technology generates new ideas, makes quick summaries of a particular area of research, and explains things clearly in a way you can also use in lectures. With visual AI applications like Midjourney, you can generate stunning diagrams for your presentations. What is more, in my view AI also promotes equality between people: previously, only researchers with money could pay for such services, but now everyone has access to them.’

 

But Martens is not blind to the dangers; he even wrote a book about them. ‘For AI, I apply the FAT principles: fairness, accountability and transparency. Data should be stored fairly, and sensitive information does not belong in it. We need to know what safeguards the system has, how the models are trained and who exactly has access to the data and models.’

 

Humanising AI

 

‘Another big danger I see is in anthropomorphising AI: treating the technology as human or ascribing human characteristics to it’, Longo warns. ‘This happens, for example, in the AI implementation in Snapchat for young people, or in AI applications used for mental health care. Research shows that people are very vulnerable to emotional-affective technology. Having AI use very human, emotional language is potentially harmful. And it also feeds the narrative that we are moving towards a form of AI that will take over from humans.’

‘Again, those are interesting discussions, but they distract from the opportunities AI offers’, Martens believes. ‘However, I do agree with you that it is baffling that AI experiments are conducted on children. That is unacceptable and we must take a clear stand against it.’

 

AI versus book printing

 

Can we compare what is happening now with AI to a past evolution or revolution? Google’s breakthrough, for example? ‘I think even a comparison to the invention of printing or the telephone is possible’, Longo says. ‘When the telephone entered the living room, new codes of conduct in communication suddenly had to be developed. That also caused a shockwave. In the case of AI, we need to learn to deal with digital models that have no self-awareness. We have seen in the past that implementation always followed such a shock, but the difference this time lies in the speed at which it all happens. Whereas the printing press was given a few centuries to become established everywhere, here it seems almost a matter of days. It is all evolving so incredibly fast that there is barely time to research what literacy we need for AI, and what skills or social etiquette. In my view, this speed leads to a rather uncritical appropriation, which in turn creates an urgent demand for regulation. So maybe it is not a bad thing to slow down a bit and take our time?’

Course ‘Responsible AI’

 

‘I believe that fully embracing AI and critically reflecting on it can go hand in hand’, Martens says. ‘But that ethical reflection is indeed essential. I’m thinking of a course called “Responsible AI”, but above all of good training courses and programmes for our lecturers on how to use AI responsibly in their courses. The Flemish government has invested millions in AI research in recent years. More such initiatives are definitely needed. By the way, I hope this will also become a topic in the upcoming rector elections (in the spring of 2024 UAntwerp will elect a new rector, ed.). How are we going to teach people to use AI, and what will the role of universities be in that?’

 

‘I do hope that the role of the university is not merely to follow the status quo of the labour market, but that our knowledge institution is also allowed to actively steer’, Longo counters.

 

‘Hence our new research centre, “Antwerp Center for Responsible AI” (ACRAI)’, Martens replies. ‘Through ACRAI, we want to bring together research and expertise on AI from different faculties in order to provide direction to society and the economy.’
