AI forcing Estonian universities to rethink study organization

Universities are facing an unprecedented situation in which students can rely on digital tools to solve even the most difficult assignments. Educational institutions are looking into ways of altering the organization of studies, which demands more creativity and flexibility from academic staff.
Kristjan-Julius Laak, a junior researcher in artificial intelligence at the University of Tartu's Institute of Computer Science, said that large language model-based chatbots like ChatGPT are now accessible to everyone. "According to a study we conducted, 81 percent of students have used chatbots, and 67 percent have done so in their coursework. Of these, three-quarters use artificial intelligence quite frequently," he told ERR.
Laak is part of the university's "Using Text Robots in Coursework" project group. Since, figuratively speaking, the genie of artificial intelligence can't be put back into the bottle, the group is exploring ways to integrate coursework and chatbots without students losing valuable learning experiences. "We're also supporting instructors, organizing experience seminars and writing articles," he cited as examples.
The group has also compiled a corresponding usage guide that shares tips for the responsible use of artificial intelligence. Instead of letting the chatbot do all the work, the guide teaches students to use artificial intelligence as a helper or a learning assistant. Similar guides have been prepared by other Estonian universities, including Tallinn University and Tallinn University of Technology (TalTech).
Andres Karjus, a cultural data analysis researcher at Tallinn University and a member of the artificial intelligence advisory council, said that text robots can be very useful learning aids. However, students sometimes delegate entire tasks to them, whether it be essays, reports or self-reflection.
"At Tallinn University, we are already working on the second version of the guideline. Its core message is that it's important to clearly discuss in which subjects and under what circumstances it makes sense to use artificial intelligence, and when it's necessary to say, 'Here, you should do it yourself, otherwise you might miss out on learning something important,'" said Karjus.
AI sparks mistrust
According to Karjus, there is currently no reliable way to prove whether or not a person has used a chatbot to compose a text. This can create complicated situations in which it is difficult for either party to prove their case: an instructor can hardly substantiate an accusation that a student used artificial intelligence, and the student can hardly defend themselves against one.
"If an instructor has been observing a student's progress and suddenly there is some kind of mysterious leap, it might raise the question of whether they did their work themselves. This external help doesn't necessarily have to be a machine; it could also be a friend or a parent. It's just that with a machine, its use cannot be easily detected," the researcher acknowledged.
Developers of plagiarism detection software are already selling special artificial intelligence detectors, but Karjus believes that one should definitely not rely on them. "In plagiarism detection, special software compares whether a given text is similar to any other texts, but with robot-generated text, there's simply nothing to compare it against. You can look for certain words that some language models produce a bit more often, but even that is a fairly futile basis for prediction," he pointed out.
Karjus said there can be unfortunate situations where a student has put in the effort, but the instructor accuses them of using artificial intelligence based on a false positive result or merely a suspicion that arises from intuition. "This is again a very difficult situation for the student, who then has to prove that they haven't used any external help – that they are a human, not a machine."
"At Tallinn University, we had a discussion day for instructors about artificial intelligence, and there was a panel discussion that included a student representative. They conducted a survey among their fellow students about their concerns regarding artificial intelligence. One respondent wrote about having to ensure their work wasn't too good to avoid accusations of using ChatGPT. Such situations should be avoided in education," Karjus noted.
Tallinn University of Technology has considered amending regulations to govern the use of artificial intelligence. However, according to Prorector Betra Leesment, it was ultimately concluded that the field is already regulated by the code of academic ethics. "More important than sanctioning is to raise students' awareness of academic ethics," emphasized Leesment.
Kristjan-Julius Laak affirmed that there is no software capable of definitively distinguishing whether a text was created by artificial intelligence or a human. "The main problem with them is that they give false positive responses, meaning they indicate work was done with artificial intelligence when it actually wasn't. Therefore, our group has taken the stance that the university should not use such programs," Laak stated.
For this reason, the University of Tartu working group believes that the nature of assignments given to students should change. "If the instructor has given assignments that artificial intelligence can easily solve, then making any kind of distinction is essentially impossible. This puts both students and instructors in a difficult situation," Laak admitted.
"These conflicts are very hard to resolve because it's impossible to prove anything. Hence our recommendation that assignments be designed in such a way that they cannot be solved with artificial intelligence, or that there be some other assessment criterion that artificial intelligence cannot influence," the scientist added.
Leesment made the same point. "In assessing knowledge, we must rely more on evaluation methods that can check the level of students' learning outcomes and ensure that their submitted work is their own," said the prorector.
Study organization must change to keep up with AI development
According to Kristjan-Julius Laak, it is currently clear that universities must not remain inactive. If no changes occur in teaching methods, students will start doing their homework with the help of artificial intelligence. He believes that one cannot be fully prepared for such significant changes, but it is necessary to start somewhere.
"I encounter instructors who say that such practices are already happening in their courses. Quite a few of them have already made the first changes. Some have abandoned written homework and moved more teaching into seminar rooms," Laak provided as an example.
The University of Tartu working group advises instructors to try solving the homework they assign with artificial intelligence and experiment with how to make these assignments more resistant to AI. It also recommends reviewing teaching methods and assessment criteria. "It would be worth considering, for example, assessing active learning activities in seminars, instead of assessing homework," the researcher said.
According to the working group, the nature of homework must therefore change, and Laak notes that this has already happened in many places. "For example, the flipped classroom method can be used, where the purpose of homework is to prepare students for work in the classroom. Preparation and the student's contribution in the seminar are assessed. They need to do homework to later do something that is assessed. The goal is to make learning more learner-centered," Laak summarized.
--
Editor: Marcus Turovski