Oliver Laas: Estonia's AI Leap requires AI ethics

If AI bias is not addressed in the learning process, it could amplify discriminatory stereotypes that students are already exposed to through social media and other channels, argues Oliver Laas in his Vikerraadio daily commentary.
The education program AI Leap 2025 (TI-Hüpe 2025), developed in collaboration between the public and private sectors, will launch this fall with the new academic year. High school students and teachers will be given access to artificial intelligence (AI) learning applications. According to President Alar Karis, one of the project's initiators, the goal is to teach people how to use AI "in the smartest way possible."
The initiative has already been commented on from various perspectives. Gert Jervan has pointed out the poor Estonian language proficiency of existing AI applications. Liina Kersna and Kristjan-Julius Laak argue that it is important to teach students self-regulated learning from an early stage. Karl Pütsepp has written about the challenges of educational innovation being bogged down in form rather than content. However, the public discussion so far has lacked a perspective addressing the role of ethics and values.
President Karis' words suggest that "smart" AI usage primarily refers to instrumental intelligence, as the expected outcome is to enhance "the international competitiveness of Estonians and the Estonian economy." This type of intelligence focuses on selecting the most effective means to achieve set goals.
Given AI's societal impact (e.g., changes in the labor market) and environmental consequences (e.g., its contribution to climate change), I argue that at least equally — if not more — important is practical wisdom, a concept Aristotle referred to as phronesis. This form of wisdom involves the moral evaluation of both goals (e.g., writing an essay) and the means to achieve them (e.g., using a chatbot for this purpose), leading to informed decision-making. Therefore, teacher training within the AI Leap program, as well as the later instruction of students, should include discussions on AI ethics.
If AI ethics is to be integrated into AI Leap, which topics should be covered? One proposal suggests addressing four key themes: privacy, surveillance, autonomy, and algorithmic bias and discrimination.
Privacy is crucial because companies that provide AI learning applications and services collect user data to train their models and personalize their services. When the primary users are minors who are compelled to use these applications for educational purposes, privacy concerns become even more pressing.
Data collection for both model training and personalized learning requires extensive monitoring of students. Even if students or their parents have given consent, it is often not informed consent, as privacy policies describing data collection and processing are either not read or not fully understood. If the data gathered through surveillance is used for automated predictions of academic performance or other forms of student profiling, the ethical concerns surrounding AI in education become even more pronounced.
Algorithmic predictions of academic performance or other personal metrics (such as recidivism rates or creditworthiness) can reduce an individual's autonomy, as people are subtly pushed toward choices that align with the preferences of the algorithm and its creators. Large language models can also affect a student's autonomy by influencing their written and spoken self-expression, nudging them toward the dominant linguistic norms of their cultural environment.
This happens because the training data for large language models often underrepresents the dialects of smaller social groups, meaning that these dialects are rarely reflected in the texts generated by AI.
Contrary to popular belief, machine learning algorithms and models are not objective. They reproduce biases present in the training data as well as assumptions taken for granted by their creators. A well-known example is the gender bias in image generators: pictures of nurses tend to depict women, while images of doctors predominantly show men. If AI bias is not addressed in the learning process, it could reinforce discriminatory stereotypes that students are already exposed to through social media and other channels.
These and many other topics should be discussed with both teachers and students. By integrating AI ethics into AI Leap, the program could produce not just instrumentally skilled but also practically wise citizens — individuals capable of critically evaluating both their goals and the tools available to achieve them in various contexts.
--
Editor: Marcus Turovski