Privacy Focus Group – Practical AI Use Cases

In this third AI meeting of the Privacy Focus Group, Joeri Ruyssinck, CEO of ML2Grow, demystified the concept of ‘Artificial Intelligence’ and showed how to reap its benefits without incurring security risks or privacy breaches. He provided a useful overview of the cybersecurity risks associated with AI and explained how best to address them. Maarten Cannaerts and Peter De Coorde shared their practical experience with introducing a governance framework for the responsible use of AI at KBC Group, including the lessons learned on change management and business involvement.

Practical AI Use Cases in good trust

It is easy to drown in the sea of dire warnings about the dangers of AI, in particular to our privacy. The Coalition's Privacy Focus Group, however, helps with examples of AI done right!

In this third session on Artificial Intelligence (AI), focus group chair Jan Léonard, DPO at Orange, pointed out that beyond the first steps, the impact of AI on privacy requires attention to further aspects and domains, such as compliance. Hence the need for real-life examples of privacy-respecting AI solutions.

De-mystifying AI

In his presentation ‘Artificial Intelligence demystified and its relation with cybersecurity and privacy’, Joeri Ruyssinck, CEO of ML2Grow, kicked off with a definition of AI. It is ‘a new way to solve problems’ by ‘making the system itself intelligent, instead of creating the intelligence yourself.’ Or, “instead of programming a computer, you teach a computer to learn something and it does what you want,” as Eric Schmidt, former CEO of Alphabet, put it.

Joeri Ruyssinck illustrated the differences from the traditional way of solving problems (e.g. regarding the use of internal and external data) and pointed out several dangers (e.g. regarding the training and use of AI models). He also presented a classification of ‘privacy-preserving AI’, distinguishing privacy-preserving model training, inference and release.
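To make the ‘release’ category concrete: one common technique is to add calibrated noise to published statistics so that no individual's presence can be inferred from them. The sketch below is a hypothetical illustration of that idea, not part of the presentation and not ML2Grow's implementation; it assumes a simple Laplace mechanism over an aggregate count.

```python
import numpy as np

def dp_release_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with noise calibrated for epsilon-differential privacy.

    Adding or removing one person changes the count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon masks any single
    individual's contribution in the published figure.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example (invented numbers): publish an aggregate visitor count
# rather than the exact figure.
print(f"Published count: {dp_release_count(true_count=1342, epsilon=0.5):.0f}")
```

Privacy-preserving training and inference follow the same spirit at earlier stages of the pipeline, for instance by limiting what the model can memorise about individuals or what input data it ever sees in the clear.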

The proof of the pudding was a clear and convincing example of turning the most ‘privacy-invasive’ device of all, the camera, into an inherently trusted cornerstone of a crowd-monitoring tool (in response to the ruckus caused by plans to use cameras for this purpose on the Belgian coast). By combining a ‘custom all-in-one edge device’ with an inventive data capture and processing approach, the required service was implemented while maintaining privacy. This matters because, as the discussion pointed out, the public interest often demands solutions with a potentially severe impact on privacy.
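The article does not detail ML2Grow's actual pipeline, but the underlying idea can be sketched as follows: analyse each frame on the device itself, keep only an anonymous headcount, and discard the image immediately, so that no footage ever leaves the camera. The names and interfaces below are purely illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CrowdReading:
    location_id: str
    timestamp: float
    person_count: int  # aggregate only: no images, no identities

def process_frame_on_device(frame, detector: Callable[[object], List[object]]) -> int:
    """Run a person detector locally and keep only the number of detections.

    The raw frame is a local variable that goes out of scope right after this
    call; nothing pixel-level is stored or transmitted.
    """
    return len(detector(frame))

def publish_reading(location_id: str, timestamp: float, count: int,
                    transmit: Callable[[CrowdReading], None]) -> None:
    """Send only the aggregate headcount upstream."""
    transmit(CrowdReading(location_id, timestamp, count))
```

The design choice is the point: because only counts exist outside the device, the system cannot leak faces or identities even if the back end is compromised.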

Trustworthy AI

Privacy-preserving AI cannot be a one-off, or left to good luck. ‘Trusted AI’ requires a way ‘to instill trust in our AI solutions’. That was the topic of the presentation ‘KBC’s governance framework for responsible use of AI’ by Peter De Coorde and Maarten Cannaerts, both of KBC. This framework has been in the works for the past few years and is vital in ‘convincing stakeholders that our AI modeling, deployment and usage is trustworthy’. It is also a core long-term element of the company’s strategy of ‘responsible behavior and business ethics’, wholeheartedly supported by top management.

The presentation highlighted three aspects of this process (trusted AI, in depth, in practice) and stressed the importance of involving the whole business side. One must realize that all AI systems are human creations, mimicking people, with a consequent need to beware of (and address) biases. Machines can be trusted to do a person’s job, but, just as for humans, controls are needed to make sure they behave properly. Therefore, throughout the development cycle of AI projects at KBC, there are several ‘approval’ checks by a variety of experts.

In depth, Trusted AI is considered from five perspectives: data protection and privacy; diversity, fairness and non-discrimination; accountability and professional responsibility; safety and security; and transparency, explainability and human control. All of this is checked by consistently asking ‘what if’ questions. Interestingly, in this way the KBC approach takes into account many of the concerns and demands of the proposed European ‘AI Act’.

In practice, it took some time to integrate these checks into the relevant processes and tools, including a ‘trust as a selling point’ course and a technical fairness framework (a recent addition, leading to ‘interesting’ business discussions). Several projects have benefited from these practices, e.g. processes for job applications and customer intake. The main point is that AI in good trust is possible, but it requires a solid, long-term and well-structured approach. This session of the Privacy Focus Group offered some crucial insights and welcome examples.
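KBC's technical fairness framework was not described in detail in the session. As a hypothetical example of the kind of automated ‘what if’ check such a framework might run on, say, a job-application screening model, one could compare positive-outcome rates across groups; the function name and data below are invented for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between two groups (0 = equal rates).

    Answers one 'what if' question: what if shortlisting rates differ
    between applicants in group A and group B?
    """
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return abs(rate_a - rate_b)

# Example: screening decisions (1 = shortlisted) for applicants in two groups.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```

A gap well above zero would not automatically block a model, but it is exactly the kind of finding that triggers the ‘interesting’ business discussions mentioned above.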

About the author
Guy Kindermans

Information technology journalist
Guy Kindermans is a freelance journalist, specialized in information technology, privacy and business continuity. From 1985 to 2014 he was senior staff writer at Data News (Roularta Media Group).