Think of the best feedback you’ve ever received in your career — the kind that has stayed with you across roles and companies.
It might have been a manager naming a strength you hadn’t seen in yourself, or a peer pointing out a blind spot with enough care that it actually changed how you worked.
What made that feedback land wasn’t just the content. It was the person behind the words – their tone, their presence, their genuine care. They knew your work well enough to point to something real. They cared about your growth, not just the output you produced. You knew they wrestled with how to say it, because they cared about the outcome.
There’s a reason certain feedback stays with us long after the performance cycle ends. It reveals how closely someone was paying attention, how invested they were in our success, and what our relationship with them allowed them to say. This is work that cannot be automated.
Don’t outsource human connection
As managers, giving meaningful feedback is one of the few chances we have to show someone that we’ve truly noticed their work. In many ways, this is the most human part of the job.
So, what happens when we start to outsource the most human part of our job to large language models (LLMs)?
We’ve all been there. Performance review deadline looming, several peer reviews still unwritten, and a project deliverable at risk. In that moment, the temptation is real: why wouldn’t we reach for something that promises to turn a messy year of work into polished, coherent, vaguely emotionally intelligent feedback?
LLMs are exceptionally good at that. They take the chaos of notes, Slack threads, Jira tickets, PRs and messy context… and return something that sounds like coherent, attentive feedback. And this is how we end up with phrases like:
- “Showed commendable execution and played a pivotal role in delivery.”
- “Consistently went above and beyond to ensure cross-functional alignment.”
- “Demonstrated strong ownership and drove meaningful impact across initiatives.”
Let’s be honest – no one talks like that in real life.
The issue isn’t that these phrases are inaccurate; they are often technically correct. The issue is that they lack sincerity cues.
Research suggests that when people suspect a machine is behind a judgment call, their trust in that judgment drops – a phenomenon known as algorithm aversion. Generic phrases like these signal that little specific attention was paid, turning a moment of connection into a checkbox exercise.
The friction of writing a review – struggling to find the right word to describe someone’s unique strengths, recalling the specific Tuesday afternoon when they did something that genuinely amazed you – is part of the process. We cannot deeply appreciate someone’s work or help them grow while simultaneously looking for a shortcut to describe it.
AI as a partner, not a ghostwriter
Does this mean we should cut LLMs out of the performance process completely? Not really. The problem isn’t the tool; it’s using the tool to replace human judgment rather than support it.
Here is how to use AI to enhance the process, without losing the human element:
1. Use AI to organize, not to articulate. Human memory is fallible. We suffer from recency bias, remembering only the last few weeks of work. LLMs are excellent at surfacing patterns across scattered thoughts, tickets, PRs, and messages. Let the model help you remember what happened, but do not let it decide what it meant.
2. Use AI to challenge blind spots. We often attribute an employee’s mistakes to their character, but our own mistakes to circumstance (the fundamental attribution error). Prompt the model to play devil’s advocate: “Here is my observation of this behavior. What are three alternative explanations for why this might have happened?” This broadens your perspective before you write a single word (see the sketch after this list).
3. Use AI to draft. Staring at a blank page is difficult. Research on the writing process shows that editing is a cognitive activity distinct from generating (Flower and Hayes’ A Cognitive Process Theory of Writing). Let an LLM generate a messy first draft to break the writer’s block. Then, rewrite. Inject your voice. Add the specific context only you know. If a sentence doesn’t sound like you, cut it.
4. Use AI for calibration. LLMs can help identify inconsistencies in expectations across levels, surface missing competencies, or spot places where you may be overweighting a single event. This helps ensure you’re writing fair feedback rooted in your competency framework.
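To make the second point concrete, here is a minimal sketch of what a devil’s-advocate prompt could look like in code. It assumes the OpenAI Python SDK and an illustrative model name; the employee scenario and observation text are invented for the example, and any chat-style LLM API would work the same way.

```python
# A minimal sketch, assuming the OpenAI Python SDK; the scenario and
# observation text below are invented purely for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical observation a manager might bring to a review.
observation = (
    "During the Q3 migration, this engineer missed two review deadlines "
    "and several of their PRs needed rework."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping a manager check their own bias before "
                "writing a performance review. Do not write the review; "
                "only offer alternative explanations."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Here is my observation of this behavior: {observation} "
                "What are three alternative explanations for why this "
                "might have happened?"
            ),
        },
    ],
)

# The model's answer is input to your judgment, not a verdict.
print(response.choices[0].message.content)
```

Note the design choice: the prompt asks for alternative explanations, not for prose, so the human still writes every word of the review.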

Final thoughts
Ultimately, what makes feedback land isn’t polished prose – it’s authenticity.
Feedback builds trust when it feels personal, timely, and grounded in observation. Use AI to handle the data, the organization, and the bias-checking. But keep the judgment in your own hands.