Barry Phillips BEM (CEO) founded Legal Island in 1998. Since then, the company has become the leading workplace compliance training company on the island of Ireland. He was awarded a British Empire Medal in the New Year's Honours List 2020 for services to employment and equality.
Barry is a qualified barrister, coach and mediator, and a regular speaker both at home and abroad. He also volunteers as a mentor to aspiring law students on the Migrant Leaders Programme.
Barry is also an author; his latest book, 'Mastering Small Business Employee Engagement: 30 Quick Wins & HR Hacks from an IIP Platinum Employer', co-written with Legal Island MD Jayne Gallagher, was published in 2020.
Barry worked at the European Parliament, the European Court of Human Rights and the International Labour Organisation in Geneva before qualifying as a lawyer in 1993.
He has travelled extensively and lived in eight different countries, and considers himself a global citizen first, a European second and a British/Irish citizen last of all. His guiding mantra in life is "Never react but respond. Get curious, not furious."
Barry is an Ironman and lists learning Russian and wild camping as his favourite pastimes.
This week Barry Phillips argues that it may be time to relax the rules relating to the use of ChatGPT in the workplace.
Transcript:
Hello Humans!
And welcome to the weekly podcast that breaks down critical AI developments for HR professionals—in five minutes or less. My name is Barry Phillips.
Let's talk about a risk that many organisations still don't fully understand: data training in generative AI tools.
Here's something that might surprise you. Until April 2023—five months after ChatGPT launched—OpenAI didn't offer users a way to opt out of data training. Every prompt, every question, every piece of information you typed was automatically used to train future models. That included personal data, commercially sensitive information, confidential HR records—all potentially incorporated into the system and, theoretically, capable of appearing in responses to other users anywhere in the world.
When OpenAI finally introduced the opt-out feature in April 2023, they did so quietly. Almost embarrassingly so. No fanfare, no announcements—just a small toggle buried in the settings.
Since then, organisations have scrambled to address this risk. At Legal Island, we've developed what we call the ABC Rule for AI safety:
A – Avoid inputting personal or commercially sensitive data
B – Be sure to turn off the training function
C – Check for accuracy every time
We've trained hundreds of employees on this approach. But here's the problem: it relies entirely on human memory and diligence. What happens when someone forgets to toggle that switch? What if they ignore the ABC rule in a moment of convenience?
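By way of illustration, consider what it might look like to take the "A" of the ABC Rule at least partly out of human hands. Below is a minimal sketch, in Python, of a pre-submission filter that redacts obvious personal data before a prompt ever leaves the organisation. The patterns, function names and example text are illustrative assumptions only, not a production data loss prevention tool, and nothing here is specific to any one AI vendor.

```python
import re

# Illustrative patterns only -- a real deployment would rely on a proper
# data loss prevention (DLP) tool, not a handful of regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "NI_NUMBER": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    # Hypothetical HR prompt containing personal data that should never
    # reach an external AI tool unfiltered.
    raw = ("Draft a capability letter for jane.doe@example.com, "
           "NI number QQ 12 34 56 C, mobile 07700 900123.")
    print(redact(raw))
    # Draft a capability letter for [REDACTED EMAIL],
    # NI number [REDACTED NI_NUMBER], mobile [REDACTED UK_PHONE].
```

The point isn't the code itself. It's that the "forgot to toggle the switch" problem is, at least in part, an engineering problem rather than a memory problem.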
OpenAI's response came in September with their Business account: £25 per month, with training turned off by default. A step forward, certainly. Combined with end-to-end encryption that even MI5 couldn't crack and comprehensive ABC training, you'd think that would be enough reassurance.
For some organisations, it is. But for many others—particularly in the public sector—it simply doesn't go far enough. The stakes are too high. The risks too unpredictable.
But here's the uncomfortable truth: while the public sector waits for perfect security that may never come, the private sector is racing ahead. They're automating recruitment, streamlining HR processes, and gaining competitive advantages that grow larger every day. At some point, caution becomes paralysis. Perhaps it's time for public sector regulators to ask themselves: are we protecting data, or are we just preventing progress?
The goalposts have moved significantly since 2023. Maybe our risk tolerance should move with them?
That's it for this week. Until next time, bye for now.