What do you do if you find yourself in a scenario where an exec is pushing for AI adoption?
In today’s landscape, the drive for AI integration may feel relentless. Whether it’s product features, internal initiatives, or developer tools, this AI-first approach is creating tension between execs and engineering leaders. For execs, AI stands as an avenue of innovation. For devs, rapid and often careless adoption is a recipe for disaster.
When it feels like AI is being forced on you from all sides, what is the most effective way to push back against upper management?
Understanding the proposed AI solution
You may have already encountered the scenario: you sit down in a meeting with senior execs, and someone broaches the idea of jumping on the hype train with a new AI-focused product or solution. The idea, at least to you, feels half-baked, having failed to take into consideration quality, complexity, or even technical feasibility.
Start by taking a breath and letting go of any instinctive frustration. Instead, approach the situation with curiosity – try to understand what is being proposed. In doing so, you'll gain the context needed to vet the solution properly at this stage.
Most conversations around leveraging AI fall into one of three categories:
1. When you’re asked the open-ended AI question
This scenario almost always starts with a senior leader posing an open-ended question to see where AI can be leveraged in the organization.
If you’re going to enter a conversation about leveraging AI, this is the ideal. When there aren’t preconceived notions or specific solutions being proposed, the open-ended AI question becomes a brainstorming opportunity. Treat it like a creative exercise: if you could use AI anywhere, where would it be most valuable? What tasks could it solve?
For example, you might identify a small, yet useful, way to inject AI to help users format specific payloads correctly. Something tactical and scoped as a first use case can be a great idea.
Most importantly, you should remember – given the nature of such open-ended questions – that concluding that AI is not a viable solution is also valid. If you reach this conclusion, just be prepared to explain why, outlining the risks and concerns that led you to that decision.
2. When execs think AI can solve a specific problem
Let’s say that the exec comes into a meeting and raises a specific problem within the engineering org that they think can be solved with AI. This scenario isn’t that different from the first! It should still be treated as an opportunity to brainstorm. Your initial internal question should be: “If this problem were to be solved by AI, what would that solution look like?”
Let’s say the problem is that customers are hesitant to switch over to your tool from a competitor’s. Your main hypothesis as to the cause of this is that your product’s migration process is too onerous. An AI-powered tool that helps move customer data from competitor tooling to yours could be a great first-step solution.
Much like with the first scenario, an executive’s proposed solution isn’t always the best one. This is a perfect opportunity to discuss the pros, cons, and limitations of using AI for that specific problem.
3. When someone has a specific solution in mind for AI
The final common scenario you might encounter is dealing with someone who has proposed an idea for a specific AI solution. Take AI out of the sentence, replace it with any other popular technology or productivity tool, and you’ll realize that, at its core, this is a solution in search of a problem.
Solutions in search of problems are something engineers encounter often, and your role as the voice of pragmatism doesn’t change just because AI is the latest trend.
Instead of immediately pushing back, validate whether AI can actually do the work that’s being pitched. Abstract AI solutions often sound more capable than they are, and doing some initial research into the fidelity of the proposed solution can ground the conversation. If AI can handle part of the solution but not all of it, surface that as part of the ongoing conversation. If it can only handle the task 70% of the time, make that clear and determine whether that’s good enough.
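One lightweight way to put a number on that “70% of the time” question is a small evaluation harness run against a golden set: inputs paired with the outputs a human reviewer considers correct. The sketch below is a minimal, hypothetical version – `run_model` is a stand-in for whatever AI integration is being pitched, and the golden set here is made-up data for illustration.

```python
# Minimal sketch of an evaluation harness for a proposed AI feature.
# "run_model" is a stand-in: in practice it would call the actual
# AI integration; here it is any callable mapping an input to an
# output that we compare against the expected result.

def pass_rate(golden_set, run_model):
    """Fraction of golden-set cases the model handles correctly."""
    if not golden_set:
        raise ValueError("golden set is empty")
    passed = sum(
        1 for case in golden_set
        if run_model(case["input"]) == case["expected"]
    )
    return passed / len(golden_set)

# Hypothetical golden set; the third case is one the stand-in
# "model" gets wrong, to show a sub-100% score.
golden = [
    {"input": "a", "expected": "A"},
    {"input": "b", "expected": "B"},
    {"input": "c", "expected": "X"},
]

rate = pass_rate(golden, lambda s: s.upper())
print(f"model handles {rate:.0%} of cases")  # prints "model handles 67% of cases"
```

A number like this turns “it mostly works” into something leaders can weigh against the cost of handling the failures, which is usually the real decision.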
Once you’ve completed the validation exercise, think about the entire lifecycle. AI is a different beast, and a lot of the default expectations around code deliverables look different in an AI-powered solution. Consider whether it meets other typical non-functional requirements within your organization. What does scalability look like? Does it meet reliability requirements? What about performance?
Don’t stop there; factor in the practicalities. What will it take to test, deploy, and maintain this solution? Comparing AI workflows to your standard processes will reveal necessary compromises. This leads to an important conversation around what tradeoffs leaders are willing to make to use this technology.
Explicitly define the problem you’re solving
Once you’ve fully understood the scope of the new AI solution or product, the next step is to define the problem the solution claims to be solving. What is this a solution for? Can you define it? Is it just bells and whistles? Can the person proposing the solution define the problem? If you don’t know what problem you’re solving for your users, then you shouldn’t be investing in it.
A good question to ask is whether adding a “powered by AI” bit of text to the screen would be just as valuable to the organization as actually powering something by AI. If it’s purely about optics, you want to understand that, or you’re not going to be able to have an effective discussion.
Let’s say you can define the problem. How much of a problem is this? Do you have data or metrics on this pain point? If the solution weren’t AI, would this problem be a high priority for the team? We set priority based on the problem to be solved, not the technology chosen for the solution.
You can measure problems in myriad ways. Look at the monetary cost, the time cost, the opportunity cost, etc. Once you’ve found a metric you like, create a baseline that measures the current state of the problem. Work with product on this if you can, as they are typically well-versed in defining problems and opportunities. If the baseline is acceptable, then you likely don’t have a high-priority problem to solve.
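As a sketch of what such a baseline might look like, suppose the pain point is the onerous migration process from earlier: the numbers and field names below are hypothetical, but the idea is to capture the current state in a few metrics you can revisit after any solution ships.

```python
# Hypothetical baseline for a pain point, measured before proposing
# any solution. The "problem" here is migration time; the data is
# made up for illustration.
from statistics import median

# Hours each recent customer migration took (illustrative sample).
migration_hours = [12.5, 8.0, 40.0, 22.0, 15.5, 30.0]

baseline = {
    "migrations_sampled": len(migration_hours),
    "median_hours": median(migration_hours),
    "worst_case_hours": max(migration_hours),
}
print(baseline)
```

If the median here were an hour rather than nearly a day, that would be a strong signal the problem isn’t high priority, no matter how appealing the technology.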
What if “using AI” is the priority?
Sometimes, the real goal isn’t to solve a problem, but simply to “use AI.” The drivers for this might be external marketing, an assumption that investment in and familiarity with AI will pay dividends, or a partnership deal where using the AI offering is lucrative to the organization. Understanding this context can help frame the discussion.
If you find yourself in this situation, then you have an opportunity to treat this as an open-ended question to brainstorm. A potential dialogue is, “It seems like our ultimate goal is to show that we are integrating with AI and staying current. Perhaps we can step back from this specific solution and brainstorm together about the best opportunities for AI in our space.” This shifts the focus from one tactical opportunity to finding the best overall strategy.
What if you have to do it anyway?
Sometimes discussions don’t land in the place you want them to. Ultimately, this AI solution may be something you have to build. If the above approaches aren’t effective, consider the benefits of actually bringing in this new technology to your stack and treating it like a professional development opportunity.
You can learn to understand the different types of AI and their corresponding models, e.g., machine learning, natural language processing, and computer vision. You can also build data skills, learn frameworks like TensorFlow, or discover new integration opportunities with AI for user interaction, like chatbots.
As you work with AI, consider its limitations. What can’t it do well? Consider ethics and compliance, and how that should inform future AI opportunities. Track the performance of the AI solution you build over time and consider how that data can power future discussions.
Final thoughts
I think we can all be confident that AI has changed the nature of our industry for good. Exactly what that looks like, and how it evolves, is yet to be seen. What we can control is how we enter discussions around introducing AI to the technologies we build, and how we continue to build products and services that provide value and prioritize humans, whether they’re AI-integrated or not. The goal is to ensure an open dialogue with our leaders around how we step into this ever-changing ecosystem.