As AI governance frameworks tighten, overeager executives pose the biggest risk through their use of unapproved AI tools and models.
It has been an uphill battle to establish effective guardrails for the use of generative AI and LLM-powered tools in the workplace since they burst onto the scene in 2022. But what if the people responsible for setting those guardrails are the most likely to ignore them?
A recent survey suggests that the higher up the hierarchy you are within an organization, the more likely you are to dabble with ‘shadow AI’ outside the sightlines of colleagues.
Some 93% of executive-level staff have used unapproved tools at work, according to a CyberNews survey, compared with 62% of professionals – which the survey defined as employees below managerial level. That comes on top of a separate study that suggests six in 10 managers use AI to make decisions about their direct reports – with one in five saying they rely on AI “often” or “all of the time.”
Microsoft data suggests half of workers in UK businesses alone use unapproved consumer AI tools like Copilot or ChatGPT at work every week. But management’s worries about their teams may be misguided. It’s the bosses we need to worry about.
“Clown car” deployment
“At my company I saw executives buying their own subscriptions to tools like Claude in violation of the policy they themselves wrote,” says one developer at a midwestern US company, who was granted anonymity to speak freely. The developer compared the ad hoc use of AI in their company to bosses driving a “disorganized clown car.”
The developer – whose company employs fewer than 100 people, so gets up close and personal with the executive level – is less worried about their immediate bosses than about executives whose tech literacy isn’t at the level of the engineering staff. “It’s ironic that we’re told the risks of hallucinations and vibe coding are too high for us to deploy AI code, while these people are using it to process reports and business-critical information,” they say.
“Issues around governance or lack thereof are pretty relevant here,” says Joe Peppard, academic director at University College Dublin’s Michael Smurfit Graduate Business School. “Obviously there’s a reason why employees are not being allowed to use AI.” Feeding proprietary data into an LLM can damage a business – as can relying on outputs that hallucinate or contain errors in business-critical work.
Yet AI use by bosses carries even greater risk, reckons Peppard, who recently released a study suggesting that many digital transformation efforts fail because of blind spots that stop executives from seeing risks others might well spot.
Executive dysfunction
Those who are least likely to be interacting with AI tools day-in, day-out are also those who are least likely to understand their strengths and limitations. “It’s rarely malicious,” says Phil Chapman, cybersecurity expert at Firebrand Training. “Usually it’s someone curious about the technology or just trying to get through their workload faster.”
Chapman points out that executives come unstuck because of their position within organizations. “Senior leaders assume they understand the risks because they’re experienced decision-makers, but AI governance is a technical and risk management area where seniority doesn’t equal expertise,” he says. “They’re becoming reliant on AI for increasingly sensitive tasks, but they haven’t learnt the fundamentals about data handling or privacy implications.”
The reason why
Managers are under pressure to do more with less: headcount freezes, hiring delays, and endless reporting cycles leave them overstretched. AI tools promise shortcuts to maintain output without official headcount increases. It’s probably also partly ego – a desire to appear innovative, or at least not to be left behind by their peers.
“Executives see their counterparts experimenting with AI and feel they can’t be the ones caught flat-footed,” says Peppard. “But if the corporate policy is ‘No AI’, and a manager’s using it, that shouldn’t be the case.”
Unchecked, that tension between caution and curiosity is becoming the real AI risk: not the machines themselves, but the people at the top who can’t resist using them.