Privacy Focus Group: AI and Data Protection

The second webinar of the Privacy Focus Group on the subject of ‘Artificial Intelligence’ (AI) tackles a major challenge: how to reconcile the use of AI with the demands of the GDPR, particularly regarding data protection? That is an absolute must, both for obtaining good results and for avoiding expensive penalties…

AI and Data Protection

As with the first webinar, the Privacy Focus Group drew on the expertise of KU Leuven’s Centre for IT and IP Law (Knowledge Centre Data & Society). This ‘AI and the GDPR’ webinar was expertly presented by Brahim Bénichou, Koen Vranckaert and Ellen Wauters.

Know your data

The presentation kicks off with an overview of GDPR principles (see GDPR art. 5) that AI solutions must abide by, whether they use plain personal data (this is allowed!) or non-personal data that nevertheless enables identification (AI is very clever at detecting patterns). AI solutions must restrict themselves to the minimum amount of relevant data, used only for a strict and lawful purpose, in an accountable way, and only for as long as necessary. Furthermore, transparency is imperative (no ‘black box’ or opaque processing). Developers and users alike cannot treat these requirements negligently (no ‘check-box’-only approach), but must build them into the data processes ‘by design’. In fact, by selecting data more carefully, performing data cleansing (removing unneeded, outdated etc. data) and keeping AI solutions ‘explainable’, one can avoid ‘garbage’ results (as in ‘garbage in, garbage out’).
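The minimization and storage-limitation ideas above can be sketched in a few lines of code. This is only an illustration, not anything shown in the webinar; the field names, retention period and records are invented for the example.

```python
from datetime import date, timedelta

# Hypothetical raw records; all field names and values are illustrative only.
records = [
    {"name": "Alice", "email": "a@example.com", "birthdate": "1990-01-01",
     "purpose_field": "newsletter", "last_active": "2024-03-01"},
    {"name": "Bob", "email": "b@example.com", "birthdate": "1985-06-15",
     "purpose_field": "newsletter", "last_active": "2018-07-20"},
]

NEEDED_FIELDS = {"email", "purpose_field"}   # data minimization: keep only what the purpose requires
RETENTION = timedelta(days=2 * 365)          # storage limitation: an assumed two-year retention period

def minimise(record, today=date(2025, 1, 1)):
    """Return a trimmed record, or None if it is past its retention period."""
    last_active = date.fromisoformat(record["last_active"])
    if today - last_active > RETENTION:
        return None                          # outdated: drop the record entirely
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

cleaned = [r for r in map(minimise, records) if r is not None]
```

After this pass, Alice’s record survives with only the fields the stated purpose needs, while Bob’s stale record is removed: exactly the kind of data cleansing the speakers recommend building in ‘by design’.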

In practice
Data protection in ‘real life’ requires a ‘by design’ and ‘by default’ effort with appropriate measures. The webinar lists several points of attention, such as risk-based modeling (e.g., LINDDUN), explainable processes, the use of anonymization/pseudonymization/encryption, data security, learning methods and the absolute need to document all efforts (‘if you didn’t document it, you didn’t do it’!). AI solutions tread on thin ice particularly when applied to ‘profiling’ (allowed if the GDPR principles are respected) and ‘automated decision-making’ (prohibited except in three situations). A Data Protection Impact Assessment may also be indicated (cf. the UK ICO’s nine-step approach). Do check out these parts of the webinar. Also of interest are the remarks on the use of AI solutions in scientific research and for statistical purposes: various exceptions and rules apply, including specific Belgian legislation (e.g., non-pseudonymised data only as a last option).
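To make the pseudonymization point concrete, here is a minimal sketch using a keyed hash (HMAC-SHA256). It is not taken from the webinar; the key name and record fields are assumptions for illustration. Note the crucial GDPR distinction: this is pseudonymization, not anonymization, because whoever holds the key can still link tokens back to individuals.

```python
import hashlib
import hmac

# Hypothetical secret, to be stored separately from the data set (e.g. in a
# key vault). Holding the key is what makes re-identification possible, which
# is why pseudonymised data still counts as personal data under the GDPR.
SECRET_KEY = b"store-me-separately-in-a-key-vault"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Illustrative record: the email is replaced by a token before analysis.
row = {"email": "jane@example.com", "segment": "newsletter"}
row["email"] = pseudonymise(row["email"])
```

Because the same input always maps to the same token, records can still be linked across data sets for research or statistics without exposing the raw identifier.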

Transparency is a must
Even more than ‘everyday’ applications processing personal data, AI solutions must provide the utmost transparency, both ‘internal’ and ‘external’. Internal transparency relates to the understanding of AI systems by and within the companies themselves (policies, guidelines, accountability). External transparency implies extensive, intelligible and easily accessible information (in clear and plain language) on a list of topics (e.g., which categories of personal data are collected, the sources of these data, the purpose, consent, data subject rights…) for the people impacted by the AI solutions. This is of particular importance when ‘automated decision-making’ processes are involved! The burden of proving sufficient transparency efforts rests with those responsible for the AI solutions, notwithstanding the potential or probable laziness and negligence of the data subjects involved (an interesting example of this ‘negligence’: on 1 April 2010, GameStation found that 88% of shoppers did not read or care that the ‘terms and conditions’ included the transfer of their immortal soul…).

Need for a European AI Act?
Considering the scope of the GDPR, the question was raised whether there is still a need for a specific European AI Act. Apparently ‘yes’: the AI Act is a necessary complement to the GDPR. The latter regulation covers situations involving personal data, but not, for example, AI solutions without personal data that can nevertheless have a negative impact on people. “The concept of harm and the scope of processes clearly exceed pure personal data. […] Though there is overlap with the GDPR, the AI Act offers broader coverage.” At the time of the webinar, the AI Act was still a proposal with ‘yet work to be done’. It will be necessary to strike a balance between data protection and the possibility of using AI without infringing the law. The changing nature of AI must also be taken into account.

Artificial Intelligence, in combination with privacy, is still very much unknown territory for developers, users and data protection officers alike. This webinar helps you find your way!

Useful links

Additional information on this subject by the Knowledge Centre:

The text of the EU’s Artificial Intelligence Act (proposal) (both text and annexes) can be downloaded here.

About the author
Guy Kindermans

Information technology journalist
Guy Kindermans is a freelance journalist specialized in information technology, privacy and business continuity. From 1985 to 2014 he was a senior staff writer at Data News (Roularta Media Group).