Barry Phillips BEM (CEO) founded Legal Island in 1998. Since then, the company has become the leading workplace compliance training company on the island of Ireland. He was awarded a British Empire Medal in the New Year's Honours List 2020 for services to employment and equality.
Barry is a qualified barrister, coach and meditator, and a regular speaker both at home and abroad. He also volunteers as a mentor to aspiring law students on the Migrant Leaders Programme.
Barry is an author; his latest book, 'Mastering Small Business Employee Engagement: 30 Quick Wins & HR Hacks from an IIP Platinum Employer', co-written with Legal Island MD Jayne Gallagher, was published in 2020.
Barry worked at the European Parliament, the European Court of Human Rights and the International Labour Organisation in Geneva before qualifying as a lawyer in 1993.
He has travelled extensively and has lived in eight different countries, considering himself a global citizen first, a European second and a British/Irish citizen last of all. His guiding mantra in life is "Never react but respond. Get curious not furious."
Barry is an Ironman and lists Russian language and wild camping as his favourite pastimes.
This week Barry Phillips asks what will finally persuade employers to properly regulate AI usage in the workplace.
Hello Humans
And welcome to the weekly podcast that aims to summarise in around five minutes an important issue relating to AI in the workplace. My name is Barry Phillips.
Last week, at Legal Island we surveyed over two dozen employers throughout the island of Ireland, and the results were striking: 54% admitted they had no idea which large language model their employees were using—if any.
But let's be honest about that "if any" qualifier. They're using them. Research from Wharton Business School reveals that 81% of senior employees use generative AI at least once a week, with nearly half using it daily or more.
If you're old enough to remember the early days of workplace internet adoption, this should sound familiar. Usage came first. Regulation arrived much later. We learned to email before we learned about viruses and the risks of sending sensitive documents to the wrong recipients. We embraced the web—that curious "information superhighway"—as though it were an infinite library. Only later did we discover its addictive qualities and time-wasting potential, prompting employers to block access to Friends Reunited, Myspace, and later, Facebook.
So what will force us to take AI regulation seriously this time?
My prediction: external pressure from data protection authorities, namely the Information Commissioner's Office in the UK and the Data Protection Commission in Ireland. While neither has been formally tasked with regulating workplace AI, the misuse or reckless deployment of AI by employees will inevitably fall under their purview. It's not a question of if, but when.
Consider the risks: an employee using ChatGPT or Gemini Pro with data sharing for model training enabled could inadvertently feed company data, including personal information, into a system that might reproduce it elsewhere, publicly. The employer who pleads ignorance, claiming an innocent mistake, won't get far when investigators discover they had no idea which AI tools their employees were using and no evidence of AI literacy training.
Here's what should keep every employer awake at night:
We're not in the early days of the internet anymore, where we could afford to learn by making mistakes. The stakes are exponentially higher now. A single misstep with AI can expose thousands of customers' personal data, violate GDPR with penalties reaching into the millions, or compromise intellectual property that took decades to build.
The wild west days of AI in the workplace aren't just ending; they're giving way to full-on scrutiny from external enforcement bodies. The only question is this: how will your organisation fare when the spotlight lands on you?
Because unlike the slow rollout of internet policies in the 1990s, AI regulation isn't coming with a grace period. It's coming with precedent, established privacy laws, and regulators ready to sharpen their teeth on organisations demonstrating a cavalier approach to AI governance.
The time to act isn't after the first data breach, or after the first regulatory fine, or after you see your competitor's name in the headlines for all the wrong reasons.
The time to act is now. Before your 54% becomes 100% liability.