
AI: Empowering women or reinforcing bias?



With the rapid rise of technology, and of artificial intelligence in particular, new challenges are emerging. One critical issue is that AI systems are showing alarming signs of bias, reinforcing gender inequality in many areas. The Cyber Security Coalition spoke about these challenges with Catherine Van de Heyning, Professor of European Fundamental Rights at the University of Antwerp, Public Prosecutor and member of the Expert Advisory Committee of the UN Human Rights Council. She was one of the keynote speakers at the latest Women4Cyber event, hosted at the Belgian Federal Public Service Economy in Brussels.

Let’s start with a big question: do you think AI is good or bad in general, and more specifically for gender equality? 

Dr. Van de Heyning: “I truly believe that AI has many benefits and offers huge possibilities to improve our lives, not just in general or in a professional context, but in terms of empowerment and equality as well. There are plenty of examples where AI has already generated more equality. One of my favourite projects is UNDP’s eMonitor+, which uses AI to promote information integrity and monitor potential conflict, including violence against women and online violence. Another example is the kind of AI-driven tool Glassdoor uses to check whether vacancies really offer equal pay.”

But is AI still doing more harm than good for gender equality today?

Dr. Van de Heyning: “Unfortunately, AI is also contributing to a decline in gender equality, and there are many examples of more violent behaviour against women and girls as a result. There are a few triggers to consider. The first one is bias. We’re seeing it throughout the development of AI, from deepfakes to algorithms and voice cloning. AI tools are trained on data, but when that data, whether from research or from society at large, is biased, the output is biased as well. Moreover, even though there are more women working in the tech industry today, only about one in five people working in AI are women, and mostly not in developer roles. This does not mean that male developers actually want this bias, but the reality is that they are not always aware of it.”

Can you give some examples of this bias? 

Dr. Van de Heyning: “Look at AI being used in healthcare. We already know that women’s health has been understudied in science, and that some criteria specific to women have not been included in research and data. That causes bias at the source. Fortunately, there are some incredibly good initiatives, such as the Female Heart Hospital of the University Hospital in Antwerp, which focuses specifically on female heart problems. So it’s not just about the applications; it’s also about ensuring that there is sufficient research and sufficient data out there about women.

Another example is the well-known AI image creators. We all love them because you can be so creative with them, making cool new images for presentations, for example. And yet most of these image creators are clearly biased. Ask for an image of a judge, a CEO or a construction worker, and the majority of these tools will show you a man. Ask for one of a cleaner or a teacher, and you’ll get a woman. But it’s also about the way women are portrayed, which is often very stereotypical or even downright sexist.

All these types of bias reinforce a certain perception of women. We all want to believe AI should be objective. And yet some of these tools have been trained and retrained for years; they can be recalibrated and recoded to also draw on other data, but apparently that has not been done properly so far. From the start of the design, that seems to have been disregarded.”

Besides bias, what else is alarming you?

Dr. Van de Heyning: “One of the most problematic things is definitely disinformation. AI is feeding into that evolution, and in a gender-related way. We see examples of political opponents being sidelined, for instance through deepfake interviews that never took place. There are also several AI bots now pushing out specific information on gender. And of course one of the most worrying trends is deep nuding, where AI is used to portray women as if they were naked. In research from two years ago, we already found that more than half of 15- to 25-year-olds knew about deep nudes, and that 8 per cent had already made them. Thankfully, disseminating these pictures has been criminalized under new EU regulation, and in Belgium it is also illegal to make them.”

Are there any solutions out there to combat this? 

Dr. Van de Heyning: “There are technical solutions, including ones that use AI themselves. A tool like Alecto AI, for example, scans the internet for non-consensual intimate images in order to take them down. At the same time, we already see fake accounts asking for nude images, claiming that your image has been or will be disseminated and that they will help you prevent it. But indeed, we need to go a lot further and work towards a gender-based AI agenda.”

What should such an agenda look like? 

Dr. Van de Heyning: “First of all, we need to work on responsibility and awareness. The recent AI summit in Paris showed that a lot of nations are coming together and want change. That can only be achieved with real responsibility, and this responsibility of the tech industry should cover the whole technology life cycle: not just when something goes wrong, but also preventing it from going wrong in the first place. I think the only way to achieve that is through liability and regulation.

Next, we need to implement gender neutrality by design. At the start of the design process, companies need to think about gender-neutral solutions that avoid harm and are not biased. Generating awareness of that is key, for example through bias training for employees, and for developers in particular. We do need more female developers on teams, but we also need men to be conscious of their bias and its potential impact.

Finally, AI regulation is essential as well. In some parts of the world, they want to dismantle regulations because they are seen as bad for business. But we are Europeans. We have old houses, and we renovate them. We do fantastic things with cities that have been there for centuries. And that’s how we need to think about technology: within structures that are good for society, and with certain regulation, you can be creative and safe at the same time.”

How do you look at the future in these challenging and uncertain times? 

Dr. Van de Heyning: “I am convinced that we need a new agenda. I believe there is real potential for a new era for Europe, because we will have the opportunity to create and market new products that are not harmful and that people can trust. Of course, funding from the EU, governments and companies is necessary to fuel this innovation, but now is the time to fill the gap for people looking for alternatives. At the same time, we can also have an agenda where we foster, mentor and include a more unbiased workforce to create the AI of the future. Even if there are many challenges ahead, today can be a restart for Europe, a restart for Belgium, to think differently about technology.”

 

Women4Cyber Belgium 06-03-25
About the author
Jo De Brabandere

Jo De Brabandere is an experienced marketing & communications expert and strategist.