Commissioner on supervision and ethics in the use of artificial intelligence

“Artificial intelligence is here and will be applied more and more widely. It is up to all of us to accept that fact and find a way to deal with it properly. We are already lagging behind, and there is no time for long discussions while the world around us changes rapidly,” said the Commissioner for the Protection of Equality, Brankica Janković, at the panel “Supervision and ethics in the use of artificial intelligence,” held during the Privacy Week organized by Partners Serbia.

The Commissioner stated that the ever-wider use of artificial intelligence is driving dramatic changes in all areas of life, bringing potential advantages but also risks of discrimination and human rights violations. For that reason, appropriate principles for data processing, algorithm design, and oversight must be established.

Social, health, and economic services, as well as the labor market, will increasingly rely on automation in both the private and public sectors, and rights violations caused by algorithms are difficult to detect yet can affect enormous numbers of people. The Commissioner therefore points out that members of different social groups must be included in the creation of such programs, so that their specific perspectives contribute to the inclusiveness of the process, the “cleaning” of data, and the elimination of prejudice from algorithms and code. “It is necessary to raise awareness in the technical community about these risks and dangers, but also to build its capacity in the basic concepts of recognizing and responding to discrimination,” added Janković.

The Commissioner also cited the State of Artificial Intelligence Bias Report by DataRobot, one of the leading artificial intelligence companies, produced in collaboration with the World Economic Forum and global academic leaders. According to the report, many executives at the world’s largest companies share deep concern about the risk of bias in artificial intelligence systems (54%) and expect national legal frameworks to prevent such bias (81%). Although many companies have developed internal mechanisms to eliminate prejudice and discrimination, it must be borne in mind that these mechanisms have their limits, since they cannot recognize context and other more abstract concepts, the Commissioner pointed out.

“Bearing all of this in mind, creating fair algorithms and using data responsibly will not be a simple process, but I believe that control mechanisms will crystallize and become uniform over time, and that potential discriminatory outcomes will be routinely reviewed by relevant institutions such as the Commissioner,” concluded Commissioner Janković.
