Barry Phillips BEM (CEO) founded Legal Island in 1998. Since then, the company has become the leading workplace compliance training company on the island of Ireland. He was awarded a British Empire Medal in the New Year’s Honours List 2020 for services to employment and equality.
Barry is a qualified barrister, coach and meditator, and a regular speaker both at home and abroad. He also volunteers as a mentor to aspiring law students on the Migrant Leaders Programme.
Barry is an author; his latest book, 'Mastering Small Business Employee Engagement: 30 Quick Wins & HR Hacks from an IIP Platinum Employer', was published in 2020 and co-written with Legal Island MD Jayne Gallagher.
Barry worked at the European Parliament, the European Court of Human Rights and the International Labour Organisation in Geneva before qualifying as a lawyer in 1993.
He has travelled extensively and lived in a total of eight different countries, and considers himself a global citizen first, a European second and a British/Irish citizen last of all. His guiding mantra in life is “Never react but respond. Get curious not furious.”
Barry is an Ironman and lists Russian language and wild camping as his favourite pastimes.
This week Barry Phillips argues that many organisations just aren’t providing HR with the clarity needed to do a good job implementing AI.
Transcript:
Hello Humans!
And welcome to the podcast that aims to summarise each week an important development in AI relevant to the world of HR. My name is Barry Phillips.
Today I want to talk about fog.
Not the atmospheric kind. The organisational kind. The type that descends the moment someone in a senior leadership meeting says the words "we need to be doing more with AI" and everyone nods, types it in their notebook, and then… nothing happens. Or worse, everything happens, in seventeen different directions at once.
Here's the uncomfortable truth: most organisations don't have an AI problem. They have a clarity problem. And HR is sitting right in the middle of it, wondering whether to lead, follow, or just quietly request a transfer.
So today, let's talk about what perfect, or at least functional, clarity on AI actually looks like for HR. And why, without it, we're essentially building a house in fog.
First things first: does your organisation have a written AI strategy?
Not a Slack message from the CEO. Not a slide deck someone found on the internet and emailed round. An actual, considered document that says: this is why we are using AI, and this is what we expect it to do.
Because here's where the critical fork appears, and it's bigger than most people admit.
Is AI there to drive efficiencies? Automate the repetitive, shorten the slow, make the existing machine run faster? That's already a huge undertaking. Complex, political, and full of tripwires.
Or, and this is the bolder play, is the organisation using AI to fundamentally rethink how work gets done? Not just speed up the current model, but question whether the current model deserves to survive at all?
The first is hard. The second is all of that, plus vision, plus a tolerance for genuine uncertainty. If your organisation's strategy doesn't make clear which it is, HR is going to be preparing people for a journey without knowing the destination. And that, as any seasoned HR professional knows, is a brilliant way to lose people before they’ve sat down to start work.
Now, the strategy also needs to answer a second question, one that HR should be pushing hard on, even if it makes the finance director wince.
What is the objective of AI? Is it purely about the bottom line? Margin improvement, cost reduction, shareholder value?
If so, say that. Clearly. Because the alternative, pretending it's about growth and opportunity while quietly eliminating roles, is the fastest way to destroy trust in any workforce. Staff aren't naive. They know efficiency programmes when they see them, even when they're wearing a shiny AI badge.
But if the genuine goal is sustainable growth, expanding what the business can do, opening new possibilities, genuinely creating as well as cutting, then HR has something to work with. That's a narrative you can build engagement around. That's a strategy you can get people to adopt, not just endure.
The objective shapes everything: your communication, your change management, and, critically, your credibility. Get this wrong and HR becomes the messenger everyone blames. Get it right and you become the function that helped the organisation land the change.
Which brings us to the question HR should be asking loudly and often: what exactly is our role here?
When it comes to AI and upskilling, helping people build new skills, adapt and stay relevant, is that HR's responsibility or L&D's, if your organisation is fortunate enough to have that as a distinct function? Either way, HR needs to know, because leading on this means close coordination with IT and data protection colleagues. Governance matters. It's not glamorous, but neither is a data breach at 11pm on a Friday.
Then there's the other lane: AI applied to processes. Identifying which workflows might be shortened, transformed, or eliminated by AI. And here's where HR needs absolute clarity: are you leading this effort, or supporting it? Because the two require completely different postures.
And here's the detail no one wants to discuss in the strategy meeting: sometimes the process doesn't suit AI at all. Not because AI isn't capable, but because the process itself is broken. Automating a bad process doesn't fix it. It just breaks faster. Before HR signs off on any AI-led transformation, someone needs to ask whether the thing we're planning to turbocharge actually works in the first place.
So here's my closing thought, and it might sting slightly.
We talk a lot in HR about the risk of being left behind by AI. Of not moving fast enough. Of being disrupted.
But I'd argue the bigger risk right now isn't moving too slowly.
It's moving confidently in the wrong direction.
Because an HR function that charges into AI adoption without strategic clarity, without role clarity, without knowing whether it's leading or supporting: that function doesn't look pioneering. It looks like it's been handed a fog machine and told to call it innovation.
Clarity first. Action second. And in that order.