
Managing an effective hiring process in an all-remote company that’s doubling year over year is not easy.

That was one of my challenges during my time at GitLab as a frontend engineering manager. It was a period of hypergrowth, and it was critical for the organization that we hired the best talent so that we could continue accelerating the delivery of great features to our users.

The problem at hand

In the early days of hypergrowth, there was little standardization in how we evaluated candidates at the technical interview stage. The format a candidate received often depended on who their interviewer happened to be, which led to inconsistency and bias: some interviewers evaluated more strictly than others, and some weighted certain software topics more heavily in the interview. The randomly assigned interviewer had too much influence over a candidate’s ability to pass the technical interview stage.

As a result, after getting buy-in from my peers and other engineering management leaders, I started investing some of my time into developing a structured technical interview process and evaluation for GitLab. It was important for us to build a system that was scalable, so that we could easily onboard interviewers and provide an objective evaluation of each candidate. As an organization, we valued the concept of iteration; as such, the new process didn’t need to be perfect but had to be robust enough to be easily adopted and show improvement compared to the existing interview setup.

After spending some time observing different interviewing methods in the industry, and internally, I noticed that our backend engineering teams had built a fairly robust interview model: prior to the technical interview session, candidates would be granted access to a predefined interview project containing a merge request. Candidates would then be tasked with running the code locally and performing a code review before the interview session. During the synchronous interview session, candidates would take the observations made in the code review and fix up the merge request to the best of their ability during the time allotted. The interviewer would use a rubric to check off the tasks each candidate completed and determine a pass or a fail.

Forming an effective model

I was really intrigued by the backend model, especially since GitLab had always strived to make the technical interview resemble the daily work of engineers. (In the earlier days of the organization, candidates would pick up an existing issue from the product backlog and implement it during the time-boxed technical interview. I wasn’t involved in the decision-making back then, but I presume the lack of consistency in evaluation, and the optics of candidates doing free work for the company, led to that method being discontinued.)

After my observations, I decided to take the frontend technical interview in a similar direction. I built a sample project using the same technology stack as our codebase (Ruby on Rails, HAML, Sass, Vue) and created a half-implemented merge request with some mistakes for candidates to review and fix. I also added some tests to the merge request to evaluate candidates’ ability to write tests and understand how to test their code. This was something we often did not evaluate in our previous method, and in my personal experience, not something that companies often evaluate during their technical interview process.
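To give a rough sense of what such a merge request could contain, here is a minimal sketch, assuming a Jest-style test runner and TypeScript. The utility, its intentionally missing guard, and the tests are invented for this article; they are not the actual GitLab exercise.

```typescript
import { describe, it, expect } from '@jest/globals';

// Hypothetical utility of the kind a candidate might find half-implemented in
// the interview merge request. In the exercise, the empty-list guard below
// would be intentionally left out, so the first test fails until it is added.
export function formatAssigneeNames(names: string[]): string {
  if (names.length === 0) return 'No assignees'; // the fix a candidate would be expected to add
  if (names.length === 1) return names[0];
  return `${names.slice(0, -1).join(', ')} and ${names[names.length - 1]}`;
}

// Pre-written tests that document the expected behaviour.
describe('formatAssigneeNames', () => {
  it('returns a placeholder when there are no assignees', () => {
    expect(formatAssigneeNames([])).toBe('No assignees');
  });

  it('joins multiple assignees with commas and a final "and"', () => {
    expect(formatAssigneeNames(['Ada', 'Grace', 'Lin'])).toBe('Ada, Grace and Lin');
  });
});
```

Shipping the tests alongside the broken code meant candidates were evaluated on reading and running a test suite, not just on writing new code from scratch.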

Since working on the frontend typically involves a wider breadth of technologies than the backend, the rubric we used for evaluation was first segmented into categories (e.g. testing) with specific task objectives (e.g. get the test passing) and an accompanying point value based on the difficulty of the task. At the end of each interview, we would tabulate the point values and use the total as an objective way of determining whether a candidate would pass the interview stage. The rubric was created in Google Sheets, which allowed us to easily create a shareable template with predefined Google Drive permissions. We also leveraged Google Apps Script to automate the collection of each rubric’s point values into one primary Google Sheet that could be used to monitor how candidates were performing over time. We would tweak the point values and add more task objectives as candidates went through the new process.
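To make the scoring mechanics concrete, here is a minimal sketch of how such a rubric could be modelled and tabulated. The categories, objectives, and point values are hypothetical, and the real rubric lived in Google Sheets with an Apps Script collector rather than in application code.

```typescript
// Hypothetical rubric structure: categories contain task objectives,
// each worth a number of points based on difficulty.
interface RubricTask {
  objective: string;
  points: number;
  completed: boolean;
}

interface RubricCategory {
  name: string; // e.g. "Testing", "Code review"
  tasks: RubricTask[];
}

// Tabulate the points a candidate earned, per category and overall.
function tabulate(rubric: RubricCategory[]) {
  const perCategory = rubric.map((category) => ({
    name: category.name,
    earned: category.tasks
      .filter((task) => task.completed)
      .reduce((sum, task) => sum + task.points, 0),
    possible: category.tasks.reduce((sum, task) => sum + task.points, 0),
  }));

  const total = perCategory.reduce((sum, category) => sum + category.earned, 0);
  return { perCategory, total };
}

// Example: one candidate's interview.
const candidate: RubricCategory[] = [
  {
    name: 'Testing',
    tasks: [
      { objective: 'Get the existing test passing', points: 3, completed: true },
      { objective: 'Add a test for the new behaviour', points: 5, completed: false },
    ],
  },
  {
    name: 'Code review',
    tasks: [{ objective: 'Identify the broken edge case', points: 2, completed: true }],
  },
];

console.log(tabulate(candidate).total); // 5
```

Keeping points attached to individual tasks makes both the per-category and the overall totals cheap to compute, which matters later when deciding whether to read the rubric as a generalist or a specialist evaluation.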

The impact of a structured interview process

Although the new interview process wasn’t perfect, it allowed us to standardize evaluation, test our assumptions, and find opportunities to raise the bar. After we rolled out this interview format, engineering managers had a better baseline: they knew incoming candidates would be proficient in the categories we evaluated. We were also able to implicitly test candidates on topics we wouldn’t have explicitly asked about in the past, for example, the ability to use Git. Although Git wasn’t listed as a requirement for the job, proficiency with it was significant enough to make a difference in an evaluation, yet previously not significant enough to warrant an explicit question during the technical interview.

In addition, we were able to test a hypothesis we had about incoming candidates. There was a period when we were trying to determine whether we should make professional Vue experience a requirement for the job. According to surveys, only 30% of frontend engineers have professional Vue experience, so we had to be cautious: such a requirement would reduce our inbound application pool by 70%. Thankfully, through our new process and the ability to monitor candidate performance over time, we discovered that candidates with experience in other modern frameworks (e.g. Angular, React) performed similarly on the technical interview to candidates with Vue experience. This enabled us to verify our assumptions without negatively impacting our hiring process.
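The cohort comparison can be sketched roughly as follows; the result shape and field names are hypothetical, since the actual monitoring happened in the aggregated Google Sheet rather than in code.

```typescript
// Hypothetical shape of the aggregated interview results.
interface CandidateResult {
  framework: 'Vue' | 'React' | 'Angular' | 'Other';
  totalScore: number;
}

// Average rubric score per framework background, to compare cohorts.
function averageScoreByFramework(results: CandidateResult[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const { framework, totalScore } of results) {
    const scores = buckets.get(framework) ?? [];
    scores.push(totalScore);
    buckets.set(framework, scores);
  }

  const averages = new Map<string, number>();
  for (const [framework, scores] of buckets) {
    averages.set(framework, scores.reduce((a, b) => a + b, 0) / scores.length);
  }
  return averages;
}
```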

Our structured process also paved the way for us to rethink how we evaluated whether a candidate was at an intermediate or a senior level. Although years of experience is the generally accepted way of gauging seniority, it is not always the most accurate. At GitLab, we valued results more than input, so we wanted to lean on the technical interview to evaluate a candidate’s seniority. With the new interview process, engineering management had to decide whether senior engineers should score more points on average overall, or more points in specific categories. In other words, should senior engineers be specialists or generalists? In the end (while I was still at GitLab), we decided to treat senior engineers as more advanced generalists (more points overall rather than more points in specific categories). Either approach has its tradeoffs; however, without this new setup, we would never have been able to reach that conclusion.
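To make the tradeoff concrete, here is a minimal sketch of the two readings of the same rubric. The thresholds and category names are invented for illustration and are not GitLab’s actual criteria.

```typescript
// Points earned per rubric category, e.g. { Testing: 8, 'Code review': 12 }.
type CategoryScores = Record<string, number>;

// Generalist view: a senior candidate scores more points overall.
function isSeniorGeneralist(scores: CategoryScores, overallThreshold = 30): boolean {
  const total = Object.values(scores).reduce((sum, points) => sum + points, 0);
  return total >= overallThreshold;
}

// Specialist view: a senior candidate clears a higher bar in specific categories.
function isSeniorSpecialist(
  scores: CategoryScores,
  specialistCategories = ['Vue', 'Testing'],
  categoryThreshold = 10,
): boolean {
  return specialistCategories.every((name) => (scores[name] ?? 0) >= categoryThreshold);
}
```

The generalist reading rewards breadth across every category, while the specialist reading rewards depth in a few; we chose the former, but either can be computed from the same per-task point values.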

Overall, overhauling the frontend technical interview process at GitLab was a very rewarding moment in my career. It may not be the ideal strategy for your organization to implement, but hopefully it encourages you to find ways to improve your own interview process so that you can reduce bias and raise the bar in hiring.
