Skip to content Skip to sidebar Skip to footer
Home Resources Blog Forging digital trust in the Age of AI

Forging digital trust in the Age of AI

7 minutes reading time

Forging digital trust in the Age of AI

At the recent GRC: Be Connected! Lustrum event, Allan Boardman, an Independent Business Advisor with CyberAdvisor.London, 2023 ISACA Hall of Fame Inductee and seasoned professional with extensive experience in audit, risk, security, and governance, delivered a compelling presentation. He explored the transformative power of Artificial Intelligence (AI) within the industry and the significant opportunities and challenges it presents for GRC and security professionals. We sat down with Mr. Boardman to delve deeper into the key aspects of his presentation and gain further insights into this rapidly evolving landscape and the need for digital trust.

Your presentation highlighted that “The Age of AI has Begun…”. What does this mean for you?  

Allan Boardman: I firmly believe that the development of AI is as fundamental as, or even more so than the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. Entire industries are poised to be reoriented around it, and businesses will increasingly distinguish themselves by the effectiveness of their AI adoption. The public release of ChatGPT in November 2022 ignited a revolution. Since then, AI adoption has surged, and new competitors have emerged, most recently DeepSeek in January 2025. This feels like the most significant digital disruption of our era. 

You also outlined several key risks and challenges. Could you highlight some of the most pressing concerns and how they should be mitigated? 

Allan Boardman: “Well, there’s quite a spectrum of risks organisations need to be mindful of. Forbes highlighted fifteen key risks, with the top three being a lack of transparency, bias and discrimination, and privacy concerns. This indicates a clear need to develop AI in an ethical way proactively. Specifically, addressing various biases—whether they are gender, racial, socioeconomic, or content-related—is paramount for ensuring fairness and equality in AI outcomes. Building trust hinges on transparency, meaning clear visibility into the data used, the models employed, and the decision-making processes. 

And then there’s privacy, which is absolutely critical given the use of personal data by AI. This requires robust security measures to prevent breaches and unauthorised access. Even anonymised data can pose risks of re-identification, and AI tracking raises legitimate surveillance concerns. Furthermore, poor data handling can amplify existing biases and lead to discriminatory outcomes. So, in essence, organisations need to implement strong data protection measures, establish transparent consent protocols, and develop clear policies for how AI is used.” 

The spread of misinformation is increasingly prominent. What are your thoughts on these matters? 

Allan Boardman: This is a significant area of concern. The rapid spread of false information through AI can erode trust in media and institutions. AI can be used for manipulation, fraud, and even political interference, and deepfakes are often challenging to detect and verify. Developing advanced detection tools, promoting digital literacy, and enforcing stringent regulations are essential to combat this.

AI security risks also present unique challenges 

Allan Boardman: “Certainly. When we discuss security risks, organisations are navigating a complex landscape. We are talking about threats like data breaches, malware and hacking specifically targeting AI systems, exploitation of vulnerabilities within those systems, and adversarial attacks designed to manipulate AI inputs. And of course, there are insider threats, where individuals misuse AI capabilities. A particularly concerning area is what’s called ‘Shadow AI’ – the unauthorized use of AI tools. This introduces a whole new set of risks, including potential data security breaches, compliance violations and intellectual property risks. To effectively mitigate these risks, organisations must implement robust security protocols and maintain continuous monitoring. 

Furthermore, we’re seeing the emergence of so-called jailbreaks with large language models, or LLMs. This is where users cleverly manipulate prompts to bypass the safeguards and content policies built into these models. These jailbreaking patterns are often shared widely on social media, which exacerbates the problem.” 

The legal and regulatory landscape surrounding AI is clearly in flux. What are some of the key approaches being taken globally? 

Allan Boardman: “The legal landscape is indeed changing rapidly as regulators and lawmakers around the globe strive to keep pace with AI development. We are seeing three distinct approaches to regulating AI. The first one is a single risk-based law to regulate AI systems broadly, as seen in the EU, Canada, Brazil, and South Korea. The EU’s AI Act, passed in March and approved in May 2024, is a prime example, focusing on managing risk from AI systems by classifying them into different risk levels with corresponding controls.  

The second approach involves various narrow laws to regulate specific applications or domains of AI, which is the predominant approach in the US and China. In the USA, there is no single federal AI law, with regulations issued by different agencies depending on the sector. However, key laws like the National AI Initiative Act, the Fair Credit Reporting Act, the Americans with Disabilities Act are relevant in this respect. 

The third approach is based on regulator-led initiatives supported by frameworks and strategies, as adopted by the UK, Australia, Singapore, and Japan. The UK, for instance, published an AI Strategy focusing on innovation and responsible development and is considering a legal framework for high-risk AI. Interestingly, India announced in late 2024 that it would not directly regulate AI but focus on voluntary codes instead.” 

Given these varying approaches and the inherent risks of AI, are there any established frameworks that organisations can adopt to manage these challenges effectively? 

Allan Boardman: Yes, there are certainly valuable frameworks available. The NIST AI Risk Management Framework (AI RMF 1.0) provides a comprehensive and flexible approach to managing risks associated with AI systems. It aims to foster trustworthy AI by focusing on ethical, reliable, and transparent development and deployment practices. Another important standard is ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence Management System (AIMS). This provides guidelines for establishing, implementing, maintaining, and continually improving an AIMS. It is the world’s first AI management system standard and offers valuable guidance for responsible AI adoption and governance. 

The title of your presentation also mentioned the intersection of AI and cybersecurity, and the concept of digital trust. How do these elements connect? 

Allan Boardman:” So, the question we need to ask ourselves is whether AI poses a threat or turns out to be an asset in cybersecurity. AI offers immense potential for automated threat detection, rapid response and adaptive defence mechanisms, boosting security. However, we must address data quality, privacy, over-reliance on AI, and the danger of AI systems operating autonomously. Careful implementation is key to leveraging AI as an asset, rather than a threat, in cybersecurity. 

As GRC professionals we all want to be building trustworthy systems and help organisations implement AI safely, securely, ethically and responsibly. Digital Trust encompasses the ability of people, organisations, processes, information, and technology to create and maintain a trustworthy digital world. ISACA, for instance, has developed a framework to develop such a digitally trustworthy ecosystem.  

In the context of AI and cybersecurity, building digital trust requires prioritising governance and accountability, ensuring transparency and explainability of AI algorithms, and upholding data privacy and security. We cannot achieve this in isolation; collaborative engagement among industry, government, and academia is critical.” 

Could you provide some concrete examples of how AI is being applied in GRC? 

Allan Boardman: “Certainly. In audit, AI use cases include fraud detection, risk assessment, ensuring regulatory compliance through automated checks, process automation, anomaly detection, continuous monitoring, and even automating the generation of audit reports. In cybersecurity, tools like Microsoft Security Copilot demonstrate how AI can assist with incident response by providing guidance and insights, threat hunting by analysing large datasets, intelligence gathering from various sources, providing policy insights and resolutions, automating tedious tasks, building queries and analysing scripts, and managing overall security posture. For risk management, AI can be used, among others, for risk identification through data analysis, predictive analytics to forecast future risks, automated monitoring of risk indicators, scenario analysis, ensuring regulatory compliance, and fraud detection.” 

What are your concluding thoughts for GRC professionals navigating this era of rapid AI advancement? 

Allan Boardman: “My final thoughts echo the need to engage, embrace, and empower ourselves with AI. Generative AI offers extraordinary possibilities, but addressing the associated risks is crucial for building trust and ensuring a positive user experience. Regular updates and continuous monitoring of AI systems are essential for their effectiveness. Prioritising security and privacy is paramount to protect users and ensure compliance. Ultimately, balancing the benefits and challenges will enable the responsible and ethical use of AI. As others have noted, the internal audit profession, for example, is at a crucial juncture. We must embrace AI as both a catalyst and a tool to remain relevant and deliver strategic value in this ever-changing world. Therefore, I urge everyone to explore and embrace AI and feel empowered to supercharge their careers!” 

 

GRC: Be Connected! 27-03-25
About the author
Jo De Brabandere

Jo De Brabandere

Experienced Marketing & Communications Expert and Strategist
Jo De Brabandere is an experienced marketing & communications expert and strategist.
Join our podcast
Please choose your preferred listening platform and language

Spotify

EN

FR

NL

Apple

EN

FR

NL

Join our newsletter

Cyber Pulse keeps you up-to-date on the latest cybersecurity news, community actions and member stories.