How to create an interview rubric that actually works

Interview rubrics are a great way to reduce bias. Here's how to build a rubric for any technical role.
November 07, 2022

Hiring software engineers is a time-intensive and challenging process. It can also feel subjective, with ratings for the same candidates often varying from interviewer to interviewer. In fact, one study found that 65% of technical recruiters see bias in their current hiring processes. Consistent processes are key to ensuring that your engineering team can interview efficiently while also providing fair and objective evaluations of candidates.

To improve efficiency and limit bias, a structured rubric with a concrete scoring guide can help you evaluate a candidate’s competencies for a technical role. However, developing a rubric that fits your hiring needs can be tricky – especially for mid- to senior-level engineering roles, where leadership and other non-technical skills matter greatly.

How do you actually build an interview rubric? The following four tips will help you create an effective rubric for on-site or remote interviews for any level of technical hire, from early career to senior-level.

1. Quantify all job-relevant skills

Interviewers often have to screen and debrief on a dizzying number of candidates to find someone who is the right fit for a role. To save time, it’s easy to fall back on quick qualitative feedback. However, this often makes fair comparison between candidates nearly impossible (is a “good” candidate better than a “solid” one?), and leaves recruiters looking back on notes with the ambiguous job of interpreting what raters’ subjective comments meant.

With a numerical system, where level of performance is mapped to a specific number, you can create a single, quantified final score to compare skills between candidates. Better yet: ask interviewers to provide concrete observations of behavior – what the candidate did – that led them to their choice of score. Debrief sessions can be focused on hard numbers rather than opinions that may have little to do with the job-relevant skills you want to assess. Using numbers both streamlines evaluation and centers discussion on more objective measures.
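To make this concrete, here is a minimal sketch in Python of what a weighted final score might look like. The skill names, weights, and 1–5 scale below are illustrative assumptions, not a prescribed standard:

```python
# A minimal sketch: map each skill to a weight, then collapse per-skill
# ratings (1-5) into one weighted score per candidate.
# Skill names and weights are illustrative assumptions.

SKILL_WEIGHTS = {
    "problem_solving": 0.4,
    "communication": 0.3,
    "collaboration": 0.3,
}

def final_score(ratings: dict[str, int]) -> float:
    """Combine per-skill ratings (1-5) into a single weighted score."""
    return round(sum(SKILL_WEIGHTS[skill] * r for skill, r in ratings.items()), 2)

# Candidates rated on the same skills can now be compared directly.
print(final_score({"problem_solving": 4, "communication": 3, "collaboration": 5}))  # 4.0
print(final_score({"problem_solving": 5, "communication": 4, "collaboration": 3}))  # 4.1
```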

2. Define what each score means to ensure consistency across interviewers

A rubric is useful only if each interviewer knows what each score means. Every interviewer approaching a rubric should be able to understand what observable behaviors or answers (not just subjective impressions) merit a particular score. For example, when evaluating a candidate’s collaboration skills, an observation like, “The candidate gave several specific examples of working with others to achieve a desired outcome,” is much stronger than, “The candidate seemed like a team player.” This is critical for consistency in ratings across candidates. When all candidates are being compared on the same criteria, they have a fairer and more equitable shot at success.

When designing the rubric, outline the range of scores possible for each skill and which observable behaviors the candidate should demonstrate in order to achieve each score. The more specific you can get, the better.
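As a rough sketch of what this might look like in practice, a single rubric entry could anchor each numeric score to an observable behavior. The anchor text below is hypothetical and should be tailored to your role:

```python
# Illustrative rubric entry for one skill: each score is anchored to an
# observable behavior rather than a subjective impression.
# All anchor text here is hypothetical.

COLLABORATION_RUBRIC = {
    1: "Could not give a concrete example of working with others.",
    2: "Described collaboration only in vague or generic terms.",
    3: "Gave one specific example of working with others toward an outcome.",
    4: "Gave several specific examples, including how disagreements were resolved.",
    5: "Gave several specific examples with clear, measurable team outcomes.",
}

# Interviewers pick the anchor that matches what they observed, not a gut feeling.
observed = 4
print(f"Collaboration = {observed}: {COLLABORATION_RUBRIC[observed]}")
```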

3. Evaluate both technical and soft skills

Technical competency is not the only thing an engineer needs to be successful, so it shouldn’t be the only area you assess candidates on – particularly when it comes to mid- and senior-level hires. Engineers need a wide range of both hard and soft skills to succeed in their roles, including effective communication, collaboration, and leadership. Studies show, for example, that teams that communicate effectively increase their productivity by as much as 25%.

Here’s an example of the level of detail you’ll want to achieve when defining skills to measure. Jamie Talbot, former Director of Engineering at Medium, developed a rubric that assessed for the following areas (and many others) when interviewing software engineers for his team:

  • Problem-solving
  • Autonomy
  • System design
  • Resoluteness
  • Curiosity
  • Collaboration
  • Values alignment

Consider all of the relevant competencies that are important to the role you’re hiring for, and pick the most important ones to include in your rubric. Then, spell out what each score looks like (“strong no” through “strong yes,” for instance) for each of these competency areas. The interview process is a great opportunity to evaluate the candidate on any skill – whether hard or soft – that matters to your decision-making.
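As a sketch of how those labels can feed into the numeric scoring from tip 1, here is one way to translate them into numbers. The four-point scale is an assumption; the competency list follows the example above:

```python
# A sketch of translating decision labels into numbers, assuming a
# four-point "strong no" through "strong yes" scale. Adapt the scale
# and competency list to your own rubric.

DECISION_SCALE = {"strong no": 1, "no": 2, "yes": 3, "strong yes": 4}

COMPETENCIES = [
    "Problem-solving", "Autonomy", "System design",
    "Resoluteness", "Curiosity", "Collaboration", "Values alignment",
]

def to_numeric(labels: dict[str, str]) -> dict[str, int]:
    """Convert one interviewer's labels into numeric scores per competency."""
    return {c: DECISION_SCALE[labels[c]] for c in COMPETENCIES if c in labels}

print(to_numeric({"Problem-solving": "strong yes", "Collaboration": "yes"}))
# {'Problem-solving': 4, 'Collaboration': 3}
```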

4. Use early interviews to calibrate your rubric

Once you’ve built your rubric, check that it’s working as intended. We recommend having every interviewer independently score the same interview using the new rubric. Then look for score discrepancies, and identify which rubric items were most open to different interpretations. If you see places where you can improve the rubric or make it more specific, now is the time to do so.

This process will ensure that each interviewer is interpreting the scoring criteria in the same way, and that you’re ultimately selecting the best candidate for the role in a consistent, fair, and objective manner. If you have time, we recommend repeating this process with new interviews.
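One lightweight way to spot those discrepancies is to compare the spread of scores per rubric item across interviewers. Here is an illustrative sketch; the scores below are made up:

```python
# A sketch of a calibration check: every interviewer scores the same
# interview, and we flag the rubric items with the widest score spread.
# The scores below are made up for illustration.

from statistics import stdev

scores_by_item = {
    "Problem-solving": [4, 4, 5],   # one score per interviewer
    "Collaboration":   [2, 4, 5],
    "System design":   [3, 3, 3],
}

# Items with the largest standard deviation were interpreted least
# consistently and are the first candidates for tighter scoring criteria.
for item, scores in sorted(scores_by_item.items(),
                           key=lambda kv: stdev(kv[1]), reverse=True):
    print(f"{item}: spread {stdev(scores):.2f}")
```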

A great rubric helps interviewers approach recruiting in an efficient, standardized way and make more objective decisions. The result is a hiring process that gives all candidates a fair chance at success, and ensures engineering teams land the hires who are the best-qualified fit for the role.
