Data ethicist Juraj Podroužek: “I think it’s correct to have some red lines that set out areas where AI should not venture.”
Technologies are not value-neutral; they are imbued with the values of their creators. How can the ethical values that society deems important be reflected in technologies? And why should companies use their own moral compasses rather than just rely on current legislation? With AI ethics researcher Juraj Podroužek, we talked about the ethical dimension of using AI and biometrics, appropriate regulation, and the ways in which companies can stay trustworthy.
From an ethical point of view, biometrics is currently perceived as controversial – specifically because it touches these values. How do you, as an ethicist, see the role of biometrics?
Biometrics is actually a perfect example of a technology where good intentions can collide with unintended social impacts. On the one hand, it can help us maintain safety at airports or border crossings. At the same time, it can undermine principles of privacy or dignity. My role as a philosopher and ethicist is to help developers become aware of such value conflicts in their technology. The ultimate goal is trustworthy and ethical technology that reaches its intended goals while also upholding the ethical principles that society deems important.
This all sounds good, but how do you transfer those values into the actual production process?
Well, you can stand and preach about how, for example, Clearview is using technology in a way that infringes on people’s privacy. But that doesn’t get you very far. What business taught me is that it’s better to be proactive, identify potential social risks ex ante, and try – through proper design – to avoid them before they happen. The current situation in AI regulation provides a good opportunity to start talking about what we actually care about in our work.
Ethics goes beyond legislation. In normal circumstances, there are rules and regulations that you have to abide by, but there is little regulation regarding AI. How do you introduce ethical rules into tech when there is no outside pressure to do so?
There is a proposal by the European Commission on how to regulate the whole AI industry in Europe, and this proposal of the so-called Artificial Intelligence Act is being widely discussed. But this will still take some years to pass, so we’re currently in a so-called policy vacuum, where many aspects of AI don’t have standardised rules. For example, the concept of responsibility is not firmly established. And this is the perfect time for ethics to come in and fill this vacuum. While the laws are being prepared, as companies we can be proactive and set some trends for any future AI regulation. As a philosopher, I believe there are universal human values that we care about as a society, such as doing no harm or non-maleficence.
In recent years we’ve seen a very strong backlash against some AI systems, and facial recognition specifically. Why is it so vocally opposed? You already mentioned Clearview, but there was also ID.me, which was quickly pulled from the IRS systems in the USA even though its algorithm is among the most accurate in the world, and other examples.
I think that biometrics, and facial recognition especially, are unique in the way they can invade our physical private space. Some people feel helpless and believe that biometrics can make decisions outside of their control. For example, the Israeli surveillance researcher Avi Marciano talks about “mute individuals”, whose bodies “talk” for them. By entering an area that’s under surveillance, you let your body speak for you, which can even lead to depersonalisation and a loss of human autonomy and dignity.
“I think that biometrics, and facial recognition especially, are unique in the way they can invade our physical private space.”
The other aspect is the value of personal space itself. If someone or something enters our space without our permission, we feel physically uncomfortable. We see it for example in stores and shopping centres, or in elevators. When you get too close to another person, this person will probably leave because of the uncomfortable closeness. And biometrics does exactly that, on a large scale, because it gets uncomfortably close. Then there are of course the examples of misuse by non-democratic governments or companies, and the chilling effects that can have on a whole society.
Here in Eastern Europe, we also have the experience of being constantly spied on by the state, which makes us wary of any technology that makes spying on us even easier. This leads to Big Brother feelings and an instinctive fear of facial recognition in public spaces.
So what we’re talking about is basically an instinct, not a rational reaction. But this shows in legislation as well, as some legislators want to flat-out ban facial recognition for any use, including the beneficial ones. So how do we reach the standards that you mentioned at the beginning, to make people happy?
The proposal of the AI Act I mentioned earlier also talks about prohibited practices. These concern AI systems that clearly violate shared European values. For example, the proposal says that social scoring systems, or systems that exploit human vulnerabilities, are outright banned. One of the prohibited items is also “real-time” biometrics deployed in publicly accessible spaces for the purpose of law enforcement. This mainly addresses the mass surveillance we just talked about, which is often used without consent.
In general, I think it’s correct to have some red lines that set out areas where AI should not venture – like mass surveillance or manipulation techniques. These technologies and systems that use them are incompatible with trustworthy AI by their very nature – you can’t do mass public surveillance and uphold people’s right to privacy at the same time, for example. But for most areas in AI, even in biometrics, I think there are very few practices that are fully unethical in such a way. So for these systems I prefer a risk-based approach to regulation, where you can assess the possible risks and address them before entering the market, which means during the design and development phases.
“In general, I think it’s correct to have some red lines that set out areas where AI should not venture – like mass surveillance or manipulation techniques.”
At the Kempelen Institute of Intelligent Technologies (KInIT), where I lead the team focused on ethics and human values in technology, we conduct research aimed at these proactive, risk-based methods that support the ethical design of AI systems. For example, an airport is a semi-open public space where you expect some level of security screening to be going on. In such a place, biometrics should be permissible – but the potential ethical and societal risks should be properly addressed and prevented. This has to be done ex ante, of course.
ALTAI – the Assessment List for Trustworthy Artificial Intelligence – is a tool that helps businesses and organisations self-assess the trustworthiness of their AI systems under development. The concept of Trustworthy AI is based on seven key requirements:
1. Human Agency and Oversight;
2. Technical Robustness and Safety;
3. Privacy and Data Governance;
4. Transparency;
5. Diversity, Non-discrimination and Fairness;
6. Societal and Environmental Well-being; and
7. Accountability.
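As an illustration only – this is not part of the official ALTAI tool, and the boolean scoring is a hypothetical simplification of its questionnaire – a minimal self-assessment checklist over the seven ALTAI requirements might be sketched in Python like this:

```python
# Illustrative sketch of a self-assessment checklist inspired by the seven
# ALTAI requirements. The pass/fail scoring is hypothetical; the real ALTAI
# questionnaire asks many detailed questions per requirement.

ALTAI_REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
]

def assess(answers: dict) -> list:
    """Return the requirements that still need attention.

    `answers` maps a requirement name to True if the team judges it
    adequately addressed; anything missing or False counts as a gap.
    """
    return [req for req in ALTAI_REQUIREMENTS if not answers.get(req, False)]

# Example: a team that has covered safety and privacy but little else.
answers = {
    "Technical Robustness and Safety": True,
    "Privacy and Data Governance": True,
}
gaps = assess(answers)
print(f"{len(gaps)} of {len(ALTAI_REQUIREMENTS)} requirements still open")
# prints "5 of 7 requirements still open"
```

As the interview stresses, clicking through such a checklist is only a formality; the value comes from the expert-guided reflection behind each answer.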
Companies such as airports do not usually develop their AI solutions themselves. Why should they care about ethics in their AI or biometric solution? After all, they are not obliged by law to do so, so they can simply go for the lowest price. And how can they actually distinguish between the ethical and unethical solutions?
Again, the main focus should be the universal moral principles and values. It may seem that these change across different cultures of the world. But in AI ethics, there are already sets of values that experts in the field agree upon, and which form the requirements for so-called trustworthy AI – such as human control, safety, privacy, transparency, fairness, responsibility, and social and ecological sustainability. The experts of the European High-Level Expert Group on AI have created the Assessment List for Trustworthy AI (ALTAI), which can help you address these values and principles. At the Kempelen Institute we use tools such as ALTAI to specify what you should actually look for when assessing the social impacts of your technological solution.
So you can just click through a form and be done with it?
Not really. Some of the questions in these assessments really need expert guidance – especially when you are not familiar with the concepts they use. An expert on AI ethics can help you think about your processes, and about the situations where your technology is used, from a broader perspective, and can be a good guide for tackling these questions in a sensitive way. Otherwise, you can just click through it and consider it done; a formality. But once you start thinking about it deeply, it takes time both to answer honestly and to implement the answers.
Where do you see the main ethical challenges today, and how can they be addressed?
When we assessed one of the facial recognition systems, we identified over 30 different moral and societal problems. Biometrics has the highest impact on protection of privacy and autonomy – there’s a question of how to provide users with alternatives when they object to biometric identification in semi-public spaces, for example.
Transparency is another big issue: you should know that you’re entering a space where AI or biometrics is used as soon as you enter. But transparency also means explainability in this context: you should be able to understand how the AI system actually reached its conclusion. With deep learning, it’s not always so easy to see.
The third big issue is fairness and accuracy. You need to know that your system is not systematically biased against some groups of people, or be able to provide countermeasures.
Until you have addressed all these issues, it will be very hard for you as a company to earn people’s trust and to lower their fear of your technology.
I am convinced that you cannot just receive instant trust from people. It has to be earned – through rational explanation of what your technology does, and what it does not do. That’s why it’s also important to think about ethics. Because you can show people who come into contact with your technology that you actually thought about the risks, that you actually care about their issues, and that they can therefore trust you and your systems.
AUTHOR: Ján Záborský
PHOTOS: Dominika Behúlová