AI Literacy in Action: What It Is, How to Deliver It, and Why It Matters
Published on: 15/05/2025

Is your organisation prepared for the new AI compliance era?

With the EU AI Act now in force, for many employers AI literacy is no longer a “nice to have”—it’s a legal and operational necessity. This webinar explains exactly what AI literacy means in the context of today’s workplace, how to effectively deliver it across your organisation, and the consequences of failing to act. 
This is a practical and informative session tailored to HR, compliance, learning & development, and leadership teams. Learn how to safeguard your organisation while empowering staff to use AI responsibly, ethically, and in compliance with evolving legislation.

You’ll gain insight into: 
•    Why everyone is suddenly talking about AI literacy
•    What AI literacy looks like in practice—and why it’s broader than just tech know-how 
•    The risks of low literacy levels for individuals and organisations alike 
•    The legal implications of the EU AI Act and data protection rules 
•    Strategies for embedding AI literacy through eLearning and cultural transformation 
•    How to future-proof your organisation’s use of AI and GenAI tools
•    How to measure improvements in AI literacy
•    The costs and risks of doing nothing

Speaker: Barry Phillips, Chairman, Legal Island 
Barry is a passionate advocate of the responsible use of AI in the workplace, arguing that the goal for employers now should be to become “Super worker” organisations where all employees are empowered and upskilled by AI.
He has delivered numerous presentations, webinars and workshops to hundreds of organisations on GenAI since the beginning of 2023. He is also author of “ChatGPT in HR – A Practical Guide for Employers and HR Professionals” (2025) available on Amazon and at other booksellers online.

Transcript:

Barry: Well, good morning to everybody. You're very welcome to this webinar on AI Literacy. I hope it's glorious weather where you are and it's as sunny as it is here in County Antrim.

Can I start by saying thank you to MCS Group for agreeing to sponsor this webinar.

My name is Barry Phillips. I'm the founder of Legal-Island. I'm also an author of a recent book on "ChatGPT in HR", which was published just a few months ago via Amazon.

I'd like to start this by asking this question of everybody, and that's simply this. Is there anybody that's feeling overwhelmed by AI at the moment? And if you are, perhaps you can just give me a thumbs up in the chat. Anyone feeling that the AI train is just going too fast for them? If that applies to you, then just give us a thumbs up, and I'll see what the reaction is to that question.

It does seem to me that you have to sprint just to stand still with the fast pace of AI. It really is extraordinary.

Well, if you are feeling that, then I've got good news for you, but I've also got bad news.

Let me start with the bad news. And the bad news is simply this: I'm afraid it's not going to get any slower. If anything, it's going to speed up even more, and that's because the level of investment in AI is just increasing year on year. It's also to do with the fact that chips are getting faster, cheaper, and more powerful.

But the good news is that you're not on your own if you have this feeling. Every time I speak to anyone in HR or every time that we do a survey of HR at Legal-Island, the most common complaint in the area of AI is just the sheer speed of development.

But I have to say that there is a great career opportunity for us here in HR. And I've been saying that ever since ChatGPT 3.5 dropped in November of 2022. Yes, it is fast, but if we can take the time to really understand it and you in HR can really embrace it, then there are real career opportunities for you, and every reason to see your career accelerate right up to the C-suite, if you're not already there.

So, what I'd like to do is just to start with a couple of polls just to get an indication of where you are in your organisation and where your contemporaries are.

The first question that I'm going to be asking is "To what extent is AI or GenAI currently being used across your organisation?" And if you'd like to complete that answer now, then we can go on to the next poll.

I'm just going to take a photo of that because it's quite interesting. We'll see there that 67% are saying minimal use of AI at the moment, with 28% actually saying no use of it at all. That's interesting.

And if we can go to the next poll, if you could drop that in for us there, Gosia. Has your organisation provided staff with any formal guidance or training on the safe and ethical use of AI tools at work? And we've got about 15% saying yes, we've implemented structured training, 20% are saying partially, and a huge majority have actually said no to that. So that's interesting, and I'd like to comment on that in a moment.

This question, "Why is everyone talking about AI literacy?" Well, in part because everybody seems to be talking about AI at the moment. And certainly, from an investment point of view, everybody is talking about it because the investment in AI is absolutely huge at the moment.

Last year, in the U.S. alone, it amounted to $109 billion. Now, to give you an idea of how much that is, that's roughly equivalent to the GDP of a country like Croatia. I mean, it's absolutely nuts. It's huge.

And if you were to compare it with, let's say, IKEA, IKEA would have to run at its current turnover for roughly 14 years straight, not just in the U.S. but globally, to match that level of investment. It's just absolutely extraordinary.

Is it becoming a new must-have skill? Well, I think it is, and we're fairly close to it becoming a must-have skill.

There may be people in attendance today that are old enough to remember the '80s when Microsoft Word first appeared in the office. And suddenly, it just became a skill that everybody had to have, at least in the knowledge economy.

And so, it was Microsoft Word, followed by Outlook, followed probably by PowerPoint, and then Excel. You needed that bundle of four on your CV. I think we're perilously close to a situation now in which everyone is going to be expected to have at least an outline understanding of AI.

And certainly, that is the indication from that Shopify memo that circulated recently. If you haven't heard about that, that's a memo that came from the CEO of Shopify, Tobi Lütke, and he basically said in the memo, "Look, it is now a baseline expectation that every member of staff in Shopify has at least an outline understanding of AI and how to use it".

And he made it clear in that memo that it is now part of the Shopify performance review criteria, and that managers will be asking employees how they are using AI to upskill themselves.

He also went on to say, "Look, we're at a point now that we are not going to add to the headcount until you can actually prove to me or give me a reason that this extra work can't be done by AI".

Now, this caused a bit of a rumble, but to be frank, it shouldn't have surprised a lot of people because it's very reflective of where a lot of organisations are at the moment in terms of their approach to AI.

Of course, everybody is talking about it, too, because there is now a legal obligation to have AI literacy as a result of the EU AI Act. And I'll talk about that in a moment.

But the AI Act and surrounding documents use this term "AI literacy" a lot. So it has dropped, if you like, into common parlance, and we can expect it to stay around for a very, very long time.

Now, let's just talk about the EU AI Act for a moment. It's rare that somebody like me will take a moment to stand back and actually admire a piece of legislation. I used to be a practising lawyer. I've now stepped back from that. But I have to say I really do actually admire this piece of legislation. It really is quite extraordinary in terms of its reach and what it set about to achieve.

What is particularly interesting and impressive is that this is a rare example of pre-emptive legislation. Most legislation appears as a reaction to a system or a situation that has broken, in an attempt to fix it, give clarity, and regulate. This was pre-emptive: it anticipated that AI was going to be adopted in a big way and would need to be regulated.

Now, let's imagine you had to get agreement in your organisation for a really big important document, and there were, let's say, eight stakeholders that you had to go to, to get their views on a particular document. You would probably be thinking, "This is going to be hard work", wouldn't you?

Well, let's imagine that somebody responsible for driving the draft of the EU AI Act had to go to 27 EU member states representing 448 million people with a document that, by the end of it, was 144 pages long, working in 24 different languages. Can you just imagine the size of that exercise? But they still managed to do it.

And it regulates, of course, a huge amount of AI, not just us in HR. Now, you may be forgiven for thinking that it targets just HR because of the impact, but it doesn't. It goes across many sectors, and it is applicable to engineering, to the health sector, to pharma, to agriculture, and so on.

And not only that, but it had to be redrafted even before it became a final piece of draft legislation, because it was moving nicely towards the finish line when suddenly, at the end of 2022, ChatGPT 3.5 dropped. Those looking after it must have looked on in sheer astonishment and horror, thinking, "Goodness me, this is a completely different type of AI that we hadn't regulated for in this draft. What do we do?"

And within just a few months, they had an extra section which was governing this new kid on the block, if you like, which was GenAI, the likes of ChatGPT, Copilot, and so on and so forth.

So, are you caught by the terms of the EU AI Act? Yes, if you are within the EU. But possibly yes if you are outside of the EU. You may be subject to what is commonly referred to as the long reach or the tentacles of the EU AI Act, or the Brussels Effect, as it's also sometimes known.

It means, for example, in HR, if you're operating an AI software system that, let's say, is a recruitment AI tool, but you recruit somebody from the EU, this will apply to you.

If you're operating an AI performance management system and you're based in Belfast, but you've got staff in Warrington in England, but you've also got a handful of staff in Kildare and Cork, then you will be caught by this.

And that's why many lawyers are advising clients that are, strictly speaking, outside the EU to basically act and behave as though they are governed by the EU AI Act, rather than approach it on a case-by-case basis, which is going to be quite time-consuming and also very risk-prone.
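To make the scoping idea above concrete, here is a deliberately simplified sketch of that kind of first-pass check. All names and fields are hypothetical illustrations of the logic described (an AI system plus an EU establishment or EU-based people affected); it is not legal advice and real scoping is far more nuanced.

```python
from dataclasses import dataclass

# Illustrative only: a crude first-pass model of the "Brussels Effect"
# scoping described above. Field names are hypothetical.
@dataclass
class AIUseCase:
    uses_ai_system: bool      # e.g. an AI recruitment or performance tool
    based_in_eu: bool         # organisation established in the EU
    affects_eu_persons: bool  # e.g. recruits or manages staff in the EU

def likely_in_scope(case: AIUseCase) -> bool:
    """Rough first-pass check for potential EU AI Act exposure."""
    return case.uses_ai_system and (case.based_in_eu or case.affects_eu_persons)

# The Belfast employer with staff in Kildare and Cork from the example above:
print(likely_in_scope(AIUseCase(True, False, True)))  # → True
```

Even this toy version shows why lawyers suggest acting as if governed by the Act: once any EU-based person is affected, the case-by-case answer keeps coming back "in scope".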

So, AI literacy defined. What does the EU AI Act say about AI literacy? Well, in Article 3(56), it describes it as "skills, knowledge, and understanding that allow providers, deployers, and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and the risks of AI and the possible harm it can cause".

And in Article 4, it gives further elaboration of AI literacy by saying the Act "mandates that providers and deployers of AI systems must ensure, to the best of their ability, that their staff and others involved in operating these systems possess a sufficient level of AI literacy. And this involves considering factors like technical knowledge, experience, education, and training, as well as the context in which the AI systems are used and the individuals or groups that they impact".

So, in other words, it's not enough to know how to open up ChatGPT and how to prompt it. You need to know when ChatGPT can be used, when it shouldn't be used. You need to know how to use it without jeopardising the security, for example, of personal data.

When operating specialised AI tools, such as an AI recruitment and selection tool, you need to understand the role that it is performing and when human intervention is going to be required.

I think a good way of breaking down AI literacy is to look at three pillars: the first is mindset, the second is methods, and the third is meaning.

Mindset is really about curiosity and encouraging the employee to get curious with GenAI particularly, and really to go away and to experiment and to play with it to see what its capabilities are and how it is that it can best be used, whilst at the same time remaining sceptical, understanding that you have to be constantly vigilant for hallucination and also for the prospect of bias and inaccurate results.

Methods is about prompt craft, it's about tool selection, and it's about risk checks.

Now, on prompt craft, I just want to mention Ethan Mollick, who is an expert on AI in the workplace. He says something quite interesting about prompting: he would dispute the importance of going after the perfect prompt. He says that good prompting is good enough.

And his approach to AI literacy in the workplace is to encourage employees to be curious, to experiment, and to see what works for them rather than to go after the perfect prompt and the perfect words to use.

Tool selection is about whether it's going to be ChatGPT or Google Gemini or Copilot.

And it's also going to be about risk checking and making sure that when you're using the tool, you're not causing some harm. On that one particularly, using an LLM but keeping the data safe is really important.

And I just wanted to share with you at this moment something that we use at Legal-Island with our staff when using ChatGPT. We encourage them to use what we call the ABC rule. And that is when using ChatGPT, you avoid entering personal or commercially-sensitive data. You be sure that the training function is turned off before use. And C, you check for accuracy before relying on the response.

The B for "be sure that the training function is turned off" is really important. And it's surprising, when I'm in workshops working with HR people, just how many still don't understand the importance of the training function and making sure it's turned off.

If it's left on, then ChatGPT will use the information put into the prompt bar for its own training purposes. So, there's every danger that that information could be recycled and reused somewhere else, which, obviously, from a data protection point of view, is a complete disaster.

So, it's A, avoid; B, be sure about the training function; and C, check.

And if you don't know how to turn off the training function, it's not difficult. Google it and you'll get some instructions; you'll see it's in the settings.

Finally, meaning. Ethics, bias awareness, and impact on humans.

Ethics is about the AI being used in a way that aligns with the values of the employer.

And bias awareness is not only to do with algorithms that you may be deploying in your software, but also, of course, about being very sceptical and aware that many of the big models, like ChatGPT and Gemini Pro, are trained on data that is inherently biased or skewed.

They talk about this acronym of WEIRD data, which stands for Western, Educated, Industrialised, Rich, Democratic. The training data comes mainly from Western, Northern Hemisphere countries, so inevitably it comes with bias that you need to watch out for.

AI literacy training. There is no requirement from the EU that it has to be done in a particular format. So, in other words, they don't insist that it has to be done face-to-face. They don't insist that it has to be done online. It could be a combination of those two.

We recommend at Legal-Island that it's kept to short bursts of 20 minutes, or 30 or 40 at the most, and that it's layered. In other words, you do training that is for everybody to ensure a minimum level of literacy, and then you could go to a further level of practitioner, and a final level of champion or master of AI. That tends to be where you have specialist teams that are using specialist AI software.

You can also gamify with prompt-offs and AI ethics escape rooms. I don't have time to explain those, but if you're curious, drop those terms into ChatGPT and it will come up with some very interesting examples of how you can do that.

I can't do this webinar without mentioning our own AI literacy eLearning course. I'd be in big trouble from my marketing team if I forgot to do it. So here it is. It's the new eLearning AI Literacy Skills at Work course, which is designed to take into account the legal requirement from the EU.

It is targeted at all employees, and it takes about 40 minutes to complete the training, but the great thing about it is you can pop in again and again and do it in bursts of 5 or 10 minutes to suit your own schedule.

My colleague will be dropping in a link. If you would like information about this eLearning, then you can use the link to leave your details, and we will reach out to you. The link will also be sent to you in a follow-up email.

So, what are the risks in doing nothing here? Well, I don't think we have a choice to do nothing, but let's see what happens if you do. Well, you risk a fine of up to €15 million or 3% of global annual turnover.

UK SMEs that adopted AI early have seen productivity gains ranging from 27% up to 133% compared to non-adopters. That's according to an ESRC-funded study. That's huge. So, with this comes an opportunity to really outcompete your competitors.

And so, if you're in the private sector, of course it could well be that you just don't have the option of doing nothing here and that you really have to embrace this. And we'll see that even in the public sector, it throws up challenges if you choose to do nothing or are slow to adopt.

Talent retention is one thing I just wanted to talk about. I was at a breakfast seminar just a few weeks ago, and there was a representative of a public sector organisation, quite a well-known one. I won't mention this person's name, but he was saying in their organisation they have banned the use of GenAI completely.

His concern was that a lot of staff in the organisation couldn't use what they considered to be the best tools to complete their work, and yet when they went home, they had complete and free access to them. He worried that they may have a retention and also an attraction problem if they are seen to be restricting their employees to systems which are effectively very out of date and inefficient.

How do you measure AI literacy? Of course, one of the best ways is to ask employees how they feel by doing pre- and post-pulse surveys. Ask them about their confidence in using AI, and also their competence.
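For anyone who wants to turn those pulse surveys into a simple number, here is a minimal illustrative sketch. The survey questions, scores, and column names are all hypothetical; it just shows the pre/post comparison the text describes.

```python
from statistics import mean

# Hypothetical pulse-survey scores (1–5 Likert scale) collected
# before and after AI literacy training; the data is illustrative.
pre_survey = {"confidence": [2, 3, 2, 4, 3], "competence": [2, 2, 3, 3, 2]}
post_survey = {"confidence": [4, 4, 3, 5, 4], "competence": [3, 4, 4, 4, 2]}

def average_shift(pre, post):
    """Average point change per question between the two pulse surveys."""
    return {q: round(mean(post[q]) - mean(pre[q]), 2) for q in pre}

print(average_shift(pre_survey, post_survey))  # → {'confidence': 1.2, 'competence': 1.0}
```

A per-question average shift like this is crude, but it gives you a baseline you can report quarter on quarter alongside any usage tracking.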

You can track usage of AI, but it's not easy, particularly with LLMs such as ChatGPT, and particularly if employees are using it in a sort of single-account capacity.

If that is the case, then really the only way you can track it is by actually going into users' folders and finding out how often they're using it and for what reasons. And if you want to do that, it might be best to warn them or ask for their permission in advance.

If you are keen to track, then probably enterprise use of something like ChatGPT is going to be the best way of tracking it. But not every organisation or even many have that enterprise-level use of ChatGPT because it is so expensive.

Of course, the other problem that you have is that if you ban something like ChatGPT in the workplace, you can't be sure it's not going to be used. You just then get the dark use of something like ChatGPT, and then it's almost impossible to track it because, of course, you can't say to staff, "Look, I know we've banned it, but I also know that you are using it. Would you mind giving me your use stats and statistics?"

You can tie learning badges to real KPIs, and you can publish a quarterly AI health report, which some companies are beginning to do, including usage of the top AI tools.

Let's have a look at a couple of organisations/employers that are already doing a lot in the AI literacy space.

The EU has what it refers to as the Living Repository of AI Literacy Practices, which is basically there for employers to register their AI literacy practices and share them with others.

There are currently two Irish employers that have done just that. One is OpenSky Data Systems, based, I think, in Kildare. They're in the medium-sized organisation category of 50 to 249 employees. And in the paperwork they submitted, they say that they deliver AI training to employees and clients. They do technical and non-technical training, so that's basic training for everybody, and then they'll do specific training for various teams on AI.

They've seen a 65% increase in usage of specific AI tools, and they would say that they have a culture of innovation and encouraging employees to feel empowered and to explore AI solutions leading to better problem solving.

There's also a company in Belfast, based on the Ormeau Road, called Enzai. They are a microcompany, with up to 15 employees. They do two elements of AI literacy. One is in the form of guides that they send out to their clients as well as employees. And then they do an AI governance training section, which is instructor-guided learning, consisting of the general-level training that we talked about before and also role-specific training, which goes into much more detail.

They mentioned in their submission that they are also tackling the challenge of the sheer pace of change and development and also tailoring training for specific use cases.

I mentioned at the very beginning of this webinar a career opportunity that I think comes with the appearance of AI. We have a certificate in AI for HR, which is CPD-certified. It's the only one that we're aware of that's CPD-certified, which is AI specifically for HR. It starts on 12 June.

It's ideal, I think, for those in HR that want to make a real statement on their CV or application for promotion that they get the importance of AI, both generally and for their organisation.

This is the second time that we've run it this year. The first time, we were lucky enough to get really good reviews. So, it's a great course.

I think there are just a few places left. So, if you would like one, please be sure to go online quickly. There is also a code that you can use if you're paying in sterling and a code that you can use if you're paying in euro, which will qualify you for a 10% discount.

So, that's us at 11:30. We were due to finish in 30 minutes, so that's a great time to leave it. Unfortunately, I don't have time for questions, but that's my email address. If there are any questions that you have and want to fire to me, you can get me at barry@legal-island.com.

And if you fire it off by lunchtime, I promise I will respond to you by the end of the day. Thereafter, I'm off to Donegal to Murder Hole Bay for a couple of days to really enjoy the sunshine. So, can't wait for that. I hope you make the most of the glorious weather too.

Thank you for joining the webinar. I hope you found it useful, and I look forward to seeing you again very soon. Thank you. Bye-bye.

Disclaimer The information in this article is provided as part of Legal Island's Employment Law Hub. We regret we are not able to respond to requests for specific legal or HR queries and recommend that professional advice is obtained before relying on information supplied anywhere within this article. This article is correct at 15/05/2025