Onboarding looks a little different than it did a few years ago. But is that for better or for worse?
I recently transitioned roles and quickly discovered that onboarding onto a new team looks a bit different than it used to. As it turns out, AI is quite helpful for learning and absorbing new information. But discernment is key, and engineers should be aware of common pitfalls when relying on AI in this manner.
Hazards aside, I found that AI-assisted onboarding meant I was ultimately able to start contributing to my new organization much faster. However, the impact of my contributions also stalled out sooner, because I wasn't deeply absorbing the problems and context of the system and the business. But does that mean I don't recommend AI until you're fully onboarded? Not quite.
How to leverage AI as a newly-onboarded engineer
Synthesizing data
One of the most intimidating elements of onboarding is the vast number of things you just don't know. Luckily, AI is particularly good at summarizing large amounts of data, making it a powerful tool to leverage.
When I started in my new role, I found that a lot of the documentation was out of date, no longer reflecting current processes. To tackle the issue, I could have dug into any number of files and pieced together all the information myself, but AI could do the same in a fraction of the time. I was then able to validate whether the model's conclusions held up, allowing me to update the documentation for whoever was onboarded next.
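For illustration, here's roughly the shape of that synthesis step when scripted rather than run through a chat window. This is a minimal sketch, not my exact workflow: it assumes the OpenAI Python SDK with an API key configured, and a hypothetical docs/ directory of Markdown files.

```python
# Minimal sketch: ask a model to reconcile possibly-stale docs.
# Assumes OPENAI_API_KEY is set; the docs/ path is a hypothetical stand-in.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Concatenate the docs, labeling each with its file path for traceability.
docs = "\n\n".join(
    f"--- {p} ---\n{p.read_text()}" for p in Path("docs").glob("*.md")
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are helping a new engineer onboard."},
        {
            "role": "user",
            "content": "Summarize the current deploy process described in these "
            "docs, and flag any steps that contradict each other (a likely sign "
            "the docs are out of date):\n\n" + docs,
        },
    ],
)
print(response.choices[0].message.content)
```

Whatever the model produces, treat it as a hypothesis to verify against the real system, then fold the corrections back into the docs.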
Like any onboarding journey, there inevitably came a time when I had to prepare for my first on-call shift. The team had extensive runbooks, broken down by domain to make them more consumable. As a newbie, I didn't always know which questions and keywords fit in which domain, and AI was great at scanning the docs and pointing me to the domain that matched what I was looking for.
Sometimes the problem ran in the opposite direction. I had recently learned about a specific product and domain that my team was responsible for, but when looking at our large back-end codebase, it wasn't immediately clear which APIs were relevant to that domain and which weren't. Giving AI the task of narrowing my scope of concern was highly effective. By looking at connected code and database models, it surfaced which top-level queries and mutations were likely part of the domain I was trying to dig into.
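The AI was tracing connected code and database models, which a simple keyword scan can't replicate, but a crude sketch still shows the shape of the question being asked. The schema.graphql file and the "billing" keyword below are hypothetical stand-ins:

```python
# A crude stand-in for the scoping task described above: list the top-level
# GraphQL queries and mutations whose definitions mention a domain keyword.
# "schema.graphql" and "billing" are hypothetical examples.
import re
from pathlib import Path

KEYWORD = "billing"
schema = Path("schema.graphql").read_text()

for root in ("Query", "Mutation"):
    # Pull out the body of `type Query { ... }` / `type Mutation { ... }`.
    body = re.search(rf"type {root}\s*{{(.*?)}}", schema, re.DOTALL)
    if not body:
        continue
    for line in body.group(1).splitlines():
        if KEYWORD in line.lower():
            print(f"{root}: {line.strip()}")
```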
The devil is in the details
One unexpected benefit of using AI as a tool was that it helped me navigate codebases much more efficiently. Oftentimes, when learning a new system, it takes a bit to understand the directory structure and levels of abstraction.
The first time I realized AI could be helpful for this was when I was looking at a live user interface (UI) and trying to figure out where in the code that functionality lived. In a pre-AI world, I'd search the codebase for a constant string on the page that seemed relatively unique and dig through the results until I'd traced my way back to the component I was looking at. While this approach worked, it took a while to complete. But with AI, I could say, "find me the code that controls this table cell on the page with this route." Once I knew where to start digging, things moved much faster.
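To make the contrast concrete, here's a minimal sketch of that pre-AI approach: a brute-force scan of the frontend for a string copied from the live page. The src/ path, the .tsx extension, and the needle string are all hypothetical stand-ins.

```python
# Minimal sketch of the pre-AI approach: scan the codebase for a unique
# UI string and print every file and line that mentions it.
from pathlib import Path

NEEDLE = "Outstanding balance"  # a reasonably unique string from the live UI

for path in Path("src").rglob("*.tsx"):
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        if NEEDLE in line:
            print(f"{path}:{lineno}: {line.strip()}")
```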
Keep me honest
Every codebase we work on has different patterns. Typically, one of the hardest parts of contributing to a codebase you’re not familiar with is determining how to adhere to the norms of the core development team.
With AI in the mix, however, I could ask about the overall approach I'd taken. Does this new code adhere to the patterns of the existing codebase? Is the codebase even consistent with itself? In the latter case, the AI outlined what percentage of the code used each pattern and which pattern had been introduced most recently. From there, I could bring the conversation to my team to align on best practices if needed.
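A rough approximation of that consistency check is easy to script, which is also a handy way to sanity-check the percentages an AI reports. The patterns below (React class components versus hooks) and the src/ path are hypothetical examples, not my team's actual conventions:

```python
# Minimal sketch: count how often two competing patterns appear in a
# codebase. The patterns and path here are hypothetical examples.
import re
from pathlib import Path

patterns = {
    "class component": re.compile(r"class \w+ extends React\.Component"),
    "function component with hooks": re.compile(r"use(State|Effect)\("),
}

counts = {name: 0 for name in patterns}
for path in Path("src").rglob("*.tsx"):
    text = path.read_text(errors="ignore")
    for name, pattern in patterns.items():
        counts[name] += len(pattern.findall(text))

total = sum(counts.values()) or 1
for name, count in counts.items():
    print(f"{name}: {count} ({100 * count / total:.0f}%)")
```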
AI could do a lot to arm me with information, but it didn’t give me all the answers. I still had to interpret the information and choose how to act based on the data presented to me. It quickly became apparent to me where I could rely on the AI and where human judgment should take the lead.
Where AI will fall short
Foundational knowledge and AI trust
AI may be fast, and it may even be accurate given how it reads the information in front of it and the prompt it's given. But that also means its accuracy is contingent: it's only as reliable as its interpretation of the context.
An engineer with deep knowledge of the codebase would be able to spot the AI’s mistakes immediately, but that was harder for someone who was seeing it for the first time. I had to rely on core software engineering principles and ask clarifying questions about the AI’s decisions. I asked why it chose one approach over another, pointed out patterns it didn’t use, and shared my high-level understanding of the feature to check how it handled edge cases.
I made sure that I didn’t accept the AI’s output at face value – I needed to continue the conversation and iterate on what I wrote until I was comfortable that I understood it.
Diagrams and visuals
One thing I often turn to when onboarding is visual representations of data flows, product behaviors, and systems integration points. This was an area where I found AI lacking. While it could produce basic, text-based flowcharts or plug information into tools like Obsidian, it couldn’t generate the type of clear, dynamic diagrams I’d normally sketch in a whiteboarding tool like Excalidraw.
The risks
Before recently switching teams, the AI hype was largely lost on me. It was constantly getting things wrong, and I was much faster at writing and iterating on code than it was. But once my domain knowledge, and the inherent advantage it gave me, was stripped away, AI became a great tool for iterating on my thoughts and intentions.
But there were unexpected consequences. Yes, AI knew more about the systems and code than I did. And yes, using it let me increase my velocity despite being new. But I wasn't closing the knowledge gap between me and the AI as quickly as I would have without it. I was still learning about the system, the code, and the domain, but I didn't need to learn them as deeply or make as many mistakes along the way. Without those stumbling blocks, the deeper kind of onboarding happened more slowly than it otherwise would have.

My recommendations
AI has a lot to offer when it comes to onboarding, but you can't sit back and watch it run. Using it to synthesize data or to find a line of code is perfectly reasonable. But the next time you face a similar problem, check what you've retained and can solve on your own. If you want to use AI for efficiency while also testing your knowledge, see how specific you can get with your prompts. The more detailed your prompts are, referencing the existing code as you go, the more confident you can be that you're actually learning the space you're working in.