Framing AI rollouts in the right light

How can you introduce AI into teams in a way that doesn't invite unrest within your org? It starts with presenting it in the right light.
December 20, 2024

Sabrina Farmer, CTO of GitLab, on how to throw a psychological safety net around AI to frame initiatives in a positive light.

There’s a lot of doom and gloom associated with AI. Media coverage typically focuses on dystopian outcomes like lost jobs, which stokes anxiety and fear of change. With the technology inextricably tied to Terminator’s Skynet and machine-driven end times, it’s reasonable for programmers to feel a bit anxious when leadership starts to get excited about AI.

“Leadership is all about framing,” says GitLab’s CTO, Sabrina Farmer. “I think AI will change jobs and force evolution. From a leader’s perspective, we need to talk in a way that makes people want to embrace it.”

Generative AI is powerful and an incredible time-saver. It has the potential to automate toil, detect helpful patterns, streamline DevOps tasks, and keep developers in a flow state for longer. Leaving these capabilities on the table is no longer a real option. Yet introducing them comes with roadblocks. “In reality, changing how you work is really hard,” says Farmer.

Given how AI is perceived, rollouts require precise positioning. Farmer is stressing positive net gains, sharing cool stories, learning through trial and error, and having leaders embed AI into their own day-to-day. Through a combination of de-escalating AI anxiety and choose-your-own-model flexibility, she’s setting the stage for a more open embrace of AI.

Frame it right

Instead of viewing AI as a replacement for creativity, Farmer is focusing on how it alleviates operational tasks engineers don’t want to do, such as documentation, testing, or vulnerability scanning. Engineers also respond better when AI is framed as a part of upskilling and career development conversations.

The security argument often resonates, too, she adds. For instance, in a case like Log4j that affects software packages universally, AI agents would be incredibly useful in detecting security vulnerabilities and automatically pushing changes.

The reality is that while code completion and generation can work wonders, you still need humans in the loop who know what they’re doing. Talk to your engineers and assure them that their role is valued and secure. This should help team members embrace and engage with new time-saving efforts with less apprehension.

Geek out with it

Encouraging developers to experiment and share impressive stories is another way to break down barriers and foster positive excitement around AI. For instance, Farmer recalls how one engineer at GitLab used AI to create embeddings to analyze their backlog for patterns and discover what code changes would make the biggest impact.

In a previous life, such a project would have taken ages of manual effort to scour through thousands, if not millions, of bugs in a backlog. But with AI, this grunt work can be completely automated. “AI allows you to find patterns, categorize them, and then you can make a priority decision at a high level,” says Farmer.
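Farmer doesn’t describe the implementation, but the idea can be sketched: embed each backlog item’s text with an embedding model, then cluster the vectors to surface recurring themes. Below is a minimal, self-contained illustration using toy two-dimensional vectors in place of real embeddings (which would come from an embedding model and have hundreds of dimensions) and a hand-rolled k-means:

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means: assign each vector to the nearest of k centroids."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its closest centroid.
        dists = np.linalg.norm(vectors[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = vectors[labels == j].mean(axis=0)
    return labels

# Toy stand-ins for issue-text embeddings: two obvious groups,
# e.g. performance-related issues vs. UI-related issues.
issues = np.array([
    [0.9, 0.1], [1.0, 0.0], [0.8, 0.2],   # performance-like
    [0.1, 0.9], [0.0, 1.0], [0.2, 0.8],   # UI-like
])
labels = kmeans(issues, k=2)
# The two groups end up in different clusters, which is the pattern
# a prioritization discussion would start from.
```

At backlog scale, the cluster sizes themselves become the signal: the largest clusters point at the code changes with the biggest potential impact.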

AI has also proven itself useful at GitLab for making sense of performance trends. Monitoring produces a lot of signals, which can be challenging to manually sift through. GitLab engineers dumped a ton of statistics and metrics data into GitLab Duo, GitLab’s own AI code assistant, to summarize them and produce a visual dashboard. The result proved something engineers had sensed but not yet validated: that continuously prioritizing feature development over maintenance was making features slower. These insights were then shared with product teams to guide prioritization on improving feature performance.

“AI skips us to the end result,” says Farmer.

Make leadership use it

It’s easy to tell people to change – actually doing it yourself is another thing entirely. As such, Farmer believes everyone needs to use AI, including executives. This grounds mandates in the real world and makes the cascade of communication from upper management down that much stronger.

The best leaders get their hands dirty with new tech and pass down the knowledge. “You have to educate yourself and understand how it works and what it means.” For instance, engineering leaders at GitLab will often record a video of themselves tinkering with new AI features or testing a new development workflow. They’ll often share the entire journey and what was learned, even if it’s a bit clunky or doesn’t work at first.

Getting leaders to learn and change in the open builds empathy and fosters more engagement. In an environment where employees feel safe from ridicule, they are more likely to experiment and collaborate effectively.

Be versatile but smart

Flexibility goes hand in hand with empathy. Sweeping, company-wide AI rules aren’t always realistic – the best practices for AI usage might look different for backend engineers or operational engineers versus frontend developers. As such, you may need to identify standards on a team-by-team basis.

To enable this, GitLab is fairly vendor agnostic, allowing teams to pair a job with the best model, such as Google’s Gemini, Anthropic’s Claude, or OpenAI’s ChatGPT. Versatility is key, since some models are finely tuned to specific tasks – security scanning, code completion, or data analysis – and may outperform others in speed or quality depending on the job at hand. To gauge the quality of various large language models, GitLab has even created an internal working group to compare test sets across models.
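GitLab’s actual setup isn’t described in detail, so the following is only a sketch of the idea: a per-team lookup table mapping each job to a preferred model, with a fallback default. The task names and model pairings here are illustrative assumptions, not GitLab’s real configuration.

```python
# Hypothetical task-to-model routing table. The pairings below are
# illustrative assumptions, not a claim about which model is best.
MODEL_FOR_TASK = {
    "security_scanning": "anthropic/claude",
    "code_completion": "google/gemini",
    "data_analysis": "openai/gpt",
}

def pick_model(task: str, default: str = "google/gemini") -> str:
    """Return the team's preferred model for a task, else a default."""
    return MODEL_FOR_TASK.get(task, default)

print(pick_model("security_scanning"))  # the model mapped above
print(pick_model("refactoring"))        # unknown task -> fallback default
```

Keeping the mapping per team, rather than company-wide, is what lets backend, operational, and frontend groups standardize on different choices without a sweeping global rule.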

Whether you like it or not, engineers are already using AI in your environment, so fighting the trend is a losing battle. At the same time, you want to have visibility into what tools engineers are using and apply guardrails. “It’s out in the ecosystem, and as a leader, you’re better off knowing it’s being used and how than thinking it’s not,” she says. “The best bet is to allow AI and put controls in place.”

View change as an opportunity

Generative AI has been prone to hallucinations, code quality issues, data privacy or IP leakage, and other concerns, leaving some teams cautious. While Farmer admits there are current drawbacks, she focuses on the opportunity for growth. “I’ve approached change as an opportunity.”

Farmer encourages a healthy outlook, acknowledging AI implementation is a journey that requires a community of support, teamwork, and group problem-solving. It’s going to be cool, it’s going to be valuable, she says, but the rollout must be very intentional.