AI is changing the face of the tech industry. Learn about the current AI climate, its pros and cons, and how AI tools can help in your company’s interview processes.
When OpenAI released ChatGPT in November 2022, global awareness of artificial intelligence (AI) rose almost overnight. It was the first system to harness the power of large language models (LLMs) and make them accessible to mainstream users. Digital media quickly became saturated with predictions and examples of how AI systems would significantly impact the global workforce through their ability to generate working code, write articles, and perform numerous other tasks.
Given this reality, we anticipate that AI will have a substantial effect on technical recruiting and has the potential to be successfully integrated into technical interviews.
The current AI landscape
While past technological breakthroughs created new jobs and limited worker displacement, LLMs, including OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA, may fundamentally change the world of work. A 2023 Goldman Sachs report estimates that automation could take over some of the duties of 300 million jobs globally, arguing that these advancements will improve workplace productivity but also lead to job loss.
The speed at which LLMs are improving is likely to deepen economic impacts. Since the transformer model was introduced in 2017, LLMs have developed an unprecedented ability to understand user prompts and provide complex responses to a wide range of topics and challenges. Overall, the impact could be especially significant for programming professionals, including software engineers.
Contrary to concerns about job displacement, previous disruptive technologies, including the invention of the first compiler, knowledge-base resources like Google and Stack Overflow, as well as cloud computing, have opened up more opportunities. These innovations, in fact, have made computer science professions increasingly accessible to a broader range of people, fostering an environment that encourages more software engineers to enter the field.
What AI does well
The possibilities for incorporating LLMs into software engineering are substantial. Among other things, these tools can produce code snippets based on descriptions of what a piece of code needs to do, solving the “blank-page problem”. While large code blocks remain a challenge for LLMs, the “first drafts” of code they create can easily be modified for use.
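As an illustration (the prompt and function name are hypothetical), a one-line description such as “remove duplicates from a list while keeping first-seen order” might yield a first draft like the following, which an engineer can then adapt to their code base:

```python
def dedupe_preserve_order(items):
    """Remove duplicates from a list while keeping first-seen order.

    The kind of "first draft" an LLM might produce from a one-line
    description; it works, but may need tweaks for a real code base
    (for example, it assumes all items are hashable).
    """
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

A draft like this gets a candidate or engineer past the blank page; the refinement work that follows is still theirs.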
LLMs can also be used for code troubleshooting to save software engineers significant amounts of time. Some possible benefits include:
- Code completion, an important tactic for programmers who are stuck and unsure of where to go next
- Identification of the appropriate language-specific library to address a unique coding challenge
- Translation of error messages to debug existing code
In addition to troubleshooting, LLMs can relieve software engineers of tedious or mundane tasks. Examples include unit and functional testing, the generation of filler content such as data sets, and the creation of code documentation.
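For instance, given only a short docstring, an LLM can often draft serviceable unit tests. The function and tests below are a hypothetical sketch of that workflow:

```python
def slugify(title):
    """Convert a title to a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

# Tests of the kind an LLM might generate from the docstring alone.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_extra_whitespace():
    assert slugify("  Trim   Me  ") == "trim-me"

test_basic()
test_extra_whitespace()
```

The engineer still reviews the generated tests for correctness and coverage, but the tedious scaffolding is done.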
Where AI falls short
LLMs may offer a number of benefits, but they also come with considerable limitations. They retain no memory of previous solutions beyond what fits in their context window, and because they only return words based on statistical probabilities, they cannot generate genuinely new ideas. LLMs also cannot create long-form, structured content such as large code blocks or applications, perform multiple simultaneous tasks, or verify the correctness of their own outputs.
These shortcomings indicate that LLMs are unlikely to completely replace software engineers in the near future. Moreover, they highlight serious concerns regarding bias, accuracy, and security.
Ultimately, the quality of an LLM’s output is only as good as its training data. If error and bias exist in the training data, the LLM will reflect them in its responses. Most training data over-represents content generated by those who hold social power, particularly white men. As a result, LLM outputs often favor majority groups and existing power structures while overlooking data generated by marginalized groups.
The way LLMs connect inputs to outputs is an unobservable black box: how those connections are formed is usually unclear, and the models are known to produce confident falsehoods when they recombine their training data. Users have also found clever ways to bypass the safeguards meant to protect against toxic and harmful output, a practice known as jailbreaking.
Concerningly, LLMs have little awareness of code security. When asked to develop code in a number of different languages, ChatGPT frequently produces code with security flaws, and it struggles to correct that insecure code even when explicitly told to do so. The potential risk: if a user plugs insecure, LLM-generated code into an organization’s existing code base, it could open a security hole and cause a breach.
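To make the risk concrete, here is a classic flaw of the kind LLM-generated code often contains: building a SQL query through string interpolation instead of bound parameters. This is an illustrative sketch using Python’s built-in sqlite3 module, not code from any particular model:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Interpolating user input into the query string lets input like
    # "' OR '1'='1" rewrite the query's meaning (SQL injection).
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injection string returns every row through the unsafe path...
assert find_user_unsafe("' OR '1'='1") == [("admin",)]
# ...but matches nothing when passed as a bound parameter.
assert find_user_safe("' OR '1'='1") == []
```

An engineer who reviews generated code with flaws like this in mind can catch the problem before it reaches production; one who pastes it in blindly cannot.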
AI and the future of technical hiring
At Karat, we advocate for a more human-centered approach to integrating AI into technical hiring. We see LLMs as another valuable tool in the programming toolbox – much like Google and Stack Overflow. If developers use these tools in their day-to-day work, candidates should also be able to use them during technical interviews. Furthermore, an intentional, focused approach to the use of AI tools in interviews ensures that a candidate’s true expertise and skill can be accurately assessed.
Our current interview assessments pressure-test real-world technical abilities, with scope for the inclusion of AI tools. For instance, if employees are likely to use Google and Stack Overflow to look up documentation as part of their work, ChatGPT could be used in the same way during an interview (alongside the interviewer). This has proven to be a successful tactic in one of the scenarios we have tested in our mock interviews with ChatGPT.
However, if the use of AI tools is prohibited in the workplace, there’s no point in allowing candidates to use them during interviews. This philosophy applies to organizations that have banned AI use outright – as well as to specific jobs within an organization where AI use is restricted or discouraged.
Emerging best practices for AI-powered technical interviews
While LLMs’ benefits and shortcomings are well-documented, research on their use in technical interviews is virtually non-existent. Most content surfaced after ChatGPT’s release and focuses on preventing candidates from cheating. Given ChatGPT’s reported success in passing a technical interview for a level-three software engineering position at Google, its potential influence is all the more important to consider.
Based on our ongoing research and expertise, we’ve developed guidance for building AI-enabled interviews. We’re also integrating these recommendations into our best practices to reflect the growing reality of the AI-powered workplace.
Offer an opportunity to build
Invite candidates to use LLMs to create starter code that accelerates other tasks during the interview. This makes it possible to build enough working code for an entire application or basic interface within a one-hour time frame. Because LLMs cannot build something complex without direction from a user, there is little risk of a candidate using the tool to gain an unfair advantage.
Design questions that require multiple steps
Create interviews that require completion of a multi-step task, because working with an LLM on a complex task requires breaking the task down into smaller components. This design will gauge a candidate’s ability to explain the task well enough to break it down into pieces small enough to prompt AI tools to offer the right assistance.
Provide large blocks of code
LLMs are great at summarizing sizable amounts of text. Optimize interview time by allowing candidates to use them to quickly read and comprehend a large code block. This not only gives candidates more time to work directly with the code, but also closely mirrors the skills a software engineering role actually requires.
Incorporate code security
LLMs cannot currently generate code that is consistently secure. Challenge candidates to leverage an LLM to produce code without security vulnerabilities. Software engineers who are familiar with secure coding practices will be able to successfully complete this task.
Encourage discussion that explains rationale and decision-making
Asking candidates to explain the rationale behind their decisions requires skill and experience that cannot be borrowed from an LLM. Structuring technical interviews to include time for explaining the steps of a complex solution will help your organization hire candidates who are not overly reliant on AI.
Assess prompt engineering
In addition to evaluating a candidate’s output, an effective AI-enabled interview also assesses how effectively the LLM was used to achieve those results. Prompt engineering is a skill that novice LLM users often struggle with. Designing a scoring rubric that measures a candidate’s prompt-engineering proficiency is critical to the quality of this kind of technical interview.
We are investing in the development of interviews that focus on the candidate’s ability to use LLM tools effectively, rather than designing interviews that can outmaneuver candidates who try to cheat using AI tools.
We are currently applying this philosophy to pilot AI-enabled interviews for entry-level, back-end software engineering roles. Our approach reflects the increasingly digital workplace and the growing global adoption of AI tools that boost productivity and drive innovation.