Study: Now is the time to address algorithmic discrimination, while the deployment of AI systems in Finland is still at a modest level

Government Communications Department | Government analysis, assessment and research activities | Ministry of Justice
Publication date 23.8.2022 8.35
Press release 491/2022

The use of AI systems is not yet widespread in Finland, which means it is still possible to address algorithmic discrimination from the very beginning of the AI development cycle. This is according to a new study, which developed a tool for assessing the discriminatory impacts of artificial intelligence and promoting equality in the use of AI applications.

Artificial intelligence has developed at an unprecedented rate over the past five years. At the same time, there have been growing concerns about algorithmic discrimination. A particularly noteworthy concern is that machine learning systems can maintain and exacerbate existing discrimination and inequality through automated decision-making.

This can be due to unrepresentative training data or poorly selected predictor variables, among other factors. In the United States, for example, companies have had to discontinue recruitment algorithms that were found to discriminate against female applicants on the basis of historical data: because women had rarely been selected for certain positions in the past, the algorithms did not recommend them for those positions either.
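The mechanism described above can be illustrated with a minimal sketch. The data below is entirely hypothetical: it imagines historical hiring records in which women were rarely selected, and shows how a naive model that scores candidates by the past hire rate of "similar" applicants simply reproduces that pattern.

```python
# Hypothetical historical hiring records: (gender, hired).
# Women were rarely selected in the past, reflecting prior discrimination.
history = (
    [("M", True)] * 80 + [("M", False)] * 20 +
    [("F", True)] * 5 + [("F", False)] * 45
)

def hire_rate(records, gender):
    """Share of past candidates of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive scoring model: rate each new candidate by how often
# similar past candidates were hired. The historical bias passes
# straight through to the scores, regardless of qualifications.
def score(gender):
    return hire_rate(history, gender)

print(score("M"))  # 0.8
print(score("F"))  # 0.1
```

Even though gender is irrelevant to a candidate's suitability, the model penalises female applicants because the training data encodes past discrimination, which is the same failure mode reported for the discontinued recruitment systems.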

Technical tools alone are not enough to solve algorithmic discrimination

The Avoiding AI Biases project mapped the risks to fundamental rights and non-discrimination posed by machine learning-based AI systems that are either currently in use or planned for use in Finland. The mapping revealed that algorithmic discrimination has received a reasonable amount of attention, at least in the public sector, even though the deployment of AI systems is still in its infancy. However, there are as yet no clear models or tools for cooperation between authorities to combat algorithmic discrimination, even though such cooperation would play an important role in identifying and preventing it.

The study also examined the risks of discrimination associated with AI systems, the reasons behind their discriminatory impacts and the methods developed to prevent them from the perspective of the Finnish Non-Discrimination Act. The analysis found that discriminatory biases most often arise in the value chains of AI systems as the combined effect of different socio-technical factors, and they cannot be resolved using technical methods alone without case-by-case consideration. Without transparent supervision and standards, there is a risk that the power to make decisions about algorithmic discrimination through technical solutions will become concentrated in the hands of private AI developers.

Assessment framework aims to promote equality 

Based on the mapping carried out for the study, the researchers developed an assessment framework for non-discriminatory AI applications. The framework helps to identify and manage risks of discrimination, especially in public sector AI systems, and to promote equality in the use of AI.

The assessment framework is based on a lifecycle model, which emphasises that the discriminatory impacts of AI systems can arise at any stage of their lifecycle, from design and development to deployment. The framework aims to support the implementation of the Non-Discrimination Act and the obligation to promote equality in AI applications.

Recommendations for regulating algorithms

As part of the study, the researchers issued a number of policy recommendations that support the use of the assessment framework and its application to public governance. The recommendations are divided into three groups based on their objectives: raising public awareness of algorithmic discrimination; increasing cooperation between stakeholders in the responsible development of AI; and promoting equality in the use of AI through proactive regulation and tools.

The Avoiding AI Biases project was carried out by Demos Helsinki, the University of Turku and the University of Tampere. The study is part of the implementation of the 2021 Government plan for analysis, assessment and research.

Inquiries: Atte Ojanen, Research Coordinator, Demos Helsinki, tel. +358 50 9177 994, atte.ojanen(at)demoshelsinki.fi

The Government’s joint analysis, assessment and research activities (VN TEAS) produce data used to support decision-making, everyday operations and knowledge-based management. They are guided by the Government’s annual plan for analysis, assessment and research. The content of the reports published in the publication series of the Government’s analysis, assessment and research activities is the responsibility of the producers of the data in question and does not necessarily represent the view of the Government. For more information, visit https://tietokayttoon.fi/en.