News

AI literacy: the key to a responsible digital future

January 20th, 2025
By: Conclusion


'How many AI applications do you actually use from the moment your alarm goes off until you go to bed again – consciously and unconsciously?', asked Adil Bohoudi, Managing Director of Conclusion AI 360, during the AI literacy event on 15 January in the Mauritshuis. 'More than a hundred?' Your smartphone, public transport, social media, web shops and streaming services. AI is everywhere. AI is no longer a thing of the future; in fact, it is already deeply interwoven into our daily lives.

'63 percent of employees use ChatGPT without permission', says Bohoudi. 'It has become an essential part of our daily lives. That's why we must make ourselves and our employees aware of the pros and cons, dangers and opportunities. Companies struggle with two major fears: on the one hand, FOMO - fear of missing out - and on the other hand, the fear of ending up on the front page of the newspaper with bad AI news.' Consider, for instance, the chatbot of DPD or the court case lost by Air Canada. AI literacy helps organizations navigate between these extremes.

AI in the public sector: opportunities and risks

Ewout Irrgang, vice president of the Netherlands Court of Audit, presented the findings of a report on AI use, published last year, based on a study of seventy government organizations and 433 use cases. 'In half of them, the balance between opportunities and risks is completely lacking', he says. 'AI has changed the world in no time, and tasks that used to take days or weeks can now be done in seconds. When everything changes, you can do two things: see what opportunities it offers or wait for it to blow over.'

Becomes mandatory as of next month

He continues: 'We all need to become AI literate. Well, actually, we already should be. As of next month, it will be mandatory. And that is not without reason: government organizations have staff who deal with AI systems. For instance, HR employees need to know what is being used in the field of AI, and a civil servant must understand that he cannot blindly follow the results of AI. If you don't get AI literacy right, the use of AI could turn into a bad science fiction film. My message is: control it, but don't lose sight of the opportunities.'

AI to make reporting easier

And those opportunities are significant: AI makes the government a lot more efficient. 'Out of 70 government agencies, 55 already utilize AI in multiple systems', says Irrgang. Consider tools that help civil servants prepare draft answers to parliamentary questions or summarize meetings. The police are using AI to improve the reporting process, and the Ministry of Economic Affairs and Climate Policy is using AI to perform calculations more efficiently.

Insufficient risk assessment

However, there are risks involved, Irrgang warns, particularly around data protection and digital sovereignty. 'And when you talk about AI, that automatically includes the cloud. Over the past ten years, the central government has thoughtlessly entered the cloud. The ministry doesn't even know whether a quarter of the cloud services are public or private.' Half an hour before the Conclusion event, Irrgang presented the relevant report to the House of Representatives.

Discriminatory algorithms

In addition, there is the danger of discriminatory algorithms. 'Many people think this is a major problem with the government, partly because of the benefits scandal. But our study shows that this risk is manageable if you take the right measures,' Irrgang explains. For example, the Royal Netherlands Military Constabulary uses AI to determine which flights require additional checks. Active consideration is given to preventing bias.

Table talk: How do we increase AI literacy?

During the panel discussion, three experts gave their views on AI literacy within organizations. Privacy expert Emerald de Leeuw-Goggin (Global Head of Privacy & AI Governance at Logitech) emphasized the importance of training, and that it should be created from different perspectives and by different disciplines. 'During these training sessions, we shouldn't only identify the risks, but also the opportunities. Mandatory training on rules is boring and often counter-productive. We need to show people how AI can make their jobs easier.' In addition, she emphasized that good AI core values are essential: 'Organizations often write beautiful AI policies, pages long, but no one remembers them. And if no one sticks to them, they're basically useless. If you establish four or five core values, this will help employees make the right decision in the moment.'

Vonne Laan, Privacy and AI lawyer at The Data Lawyers, provided insight into the AI Act: 'The AI Act is here and is already in effect. The various articles in the AI Act will be applied in phases. In February, AI literacy actions become mandatory. The prohibitions for high-risk applications will also apply. To determine what this means for an organization, an organization must first know what it's doing with AI. Although many parties do not develop AI themselves, they do purchase AI, consciously or unconsciously. Much of the software that is purchased runs on AI in the background. And apart from that, employees often use free AI tools such as ChatGPT and image generators. The rules of the AI Act are therefore relevant for almost all organizations. And that also offers opportunities. After all, the strategy within which the AI Act was developed is aimed at promoting innovation and not hindering it. We also advise our clients from that perspective.'

Esther van Egerschot, professor of AI Governance and Ethics, AI expert at Women4Ethical AI at UNESCO and Lead AI Strategist at Conclusion AI 360, warns that many companies have no idea which AI applications they are using: 'Ask within your company how many AI applications there are. Usually, you won't get an answer. If you don't know what you have, you can't check whether you comply with laws and regulations.'

Where to start?

Many companies and governments are struggling with fragmented AI policies. 'We came across 24 different ways of weighing AI risks. That makes it unclear for everyone', says Irrgang. There is therefore an urgent need for clear frameworks and standardization. The first step? Start with what is already in order. 'You can't wait three years for the basics to be perfect. Start now and continue to build', Bohoudi advises. AI literacy isn't just about compliance, it's also about competitive advantage. 'Companies that take this seriously create up to 25 percent more value', Van Egerschot adds. 'I also believe it's the moral duty of organizations and institutions to include everyone. AI is here to stay, it will impact our lives. We all have to deal with it.'

The time for hesitation is over

Government and business are at a crossroads. AI offers enormous opportunities, but also brings risks. Without AI literacy, it is impossible to find the right balance. Van Egerschot: 'Train, train, train. That's my advice. Not only the employees, but especially the board and the supervisory board. The Dutch Data Protection Authority is investigating whether senior management can be held personally liable. So make sure that the top layer of the organization is extremely well informed.'

Irrgang closes with a line from Bob Dylan: 'You better start swimming, or you'll sink like a stone, for the times they are a-changin'.' AI is not a thing of the future, it is today's reality. Playtime is over, now it's time to become AI literate.

Whitepaper: AI literacy

The new legal obligation that offers opportunities, with risks attached

Every organization using AI must comply with Article 4 of the EU AI Act from February '25: promoting AI literacy. Start working on AI literacy. Leverage the opportunities, understand the risks, and comply with the law. Get inspired by our whitepaper.