AI-assisted coding isn’t just a trendy concept for startups – it’s becoming a bona fide enterprise development practice.
According to Stack Overflow’s developer survey, 76% of developers use or plan to use AI tools in their development process.
But there’s an art to working with AI coding assistants like Cursor, Windsurf, or GitHub Copilot. Without the right guidance, these tools can erase chunks of code, introduce security flaws, create technical debt, or hallucinate entire sections. Using natural language to vibe code your way through a feature leaves plenty of room for misinterpretation.
Fortunately, prompting best practices are emerging to help developers avoid these pitfalls. The way you prompt, the context you provide, the specificity of your instructions, even the persona you assign the agent can all dramatically improve results. Choosing the right type of underlying large language model (LLM) for the job matters, too.
Developer productivity company DX recently released a Guide to AI Assisted Engineering, covering prompting techniques ranging from meta prompting and chaining to one-shot examples, multi-model use, media inputs, and more. Let’s get into it.
1. Meta prompting
Meta prompting means being more intentional with how you structure prompts to shape the model’s behavior and output. Meta prompts provide specific instructions for how the LLM should process the request and format the output.
“It’s something not a lot of people think about since they expect a natural language process,” says Justin Reock, DX’s deputy CTO and author of the guide. “But if you’re thoughtful in how you present ideas in a clear and structured way, including how you want the output of the structure to look, you get way better results.”
Consider a low-effort prompt, like ‘fix this issue’ paired with an error log. A meta prompt would take this further, asking the agent to first attempt to debug the code, then explain the error, provide a fix, and suggest best practices for the future, all with formatting instructions included.
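As a rough sketch of what that could look like in practice, here is a meta prompt wired into the OpenAI Python client. The wording, model name, and file names are illustrative assumptions, not prescriptions from the DX guide:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

META_PROMPT = """You are a debugging assistant for a Python codebase.
Work through every request in this order:
1. Debug the failure described by the error log.
2. Explain the root cause in plain language.
3. Provide a corrected version of the affected code.
4. Suggest best practices to prevent this class of bug.
Format the answer as Markdown, one section per step, with all code in fenced blocks."""

with open("error.log") as f:  # the low-effort "fix this issue" input
    error_log = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; a reasoning model also suits step-by-step work
    messages=[
        {"role": "system", "content": META_PROMPT},
        {"role": "user", "content": f"Fix this issue:\n{error_log}"},
    ],
)
print(response.choices[0].message.content)
```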
Meta prompting works especially well with reasoning models, which excel at step-by-step problem solving. The main benefit: fewer iterative exchanges and more tailored results.
2. Prompt chaining
While meta prompting is simple to learn, prompt chaining – or recursive chaining – is more complex and powerful. It links multiple models together, feeding the output of one into the next to leverage their unique strengths.
Prompt chaining is like talking to a room full of specialists. “The comprehensiveness is mind-boggling,” says Reock. “It finds the gaps you’d forget as a human being.”
Here’s what it might look like:
- Start with a chat model like GPT-4o to brainstorm. Ask it to take the role of a senior engineer who asks probing, perspective-shifting questions to understand your project’s goals and scope.
- Pass the results to a reasoning model like OpenAI o1 to scaffold a blueprint and break it into units of work.
- Then send that to a code-generation agent like Cursor or Copilot to produce the code.
You can extend this approach with mid-loop generation – where the agent writes code inside a function, producing more production-ready output.
According to Reock, prompt chaining can compress what might take a week of back-and-forth between architects and developers into half an hour.
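Here is a stripped-down sketch of that chain in Python, using the OpenAI client. The model names and prompts are placeholders, and in a real workflow the final step would typically hand the blueprint to an agent like Cursor or Copilot rather than a raw completion call:

```python
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Send one prompt to one model and return its text reply."""
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Step 1: brainstorm with a chat model playing the senior engineer.
brief = ask("gpt-4o", "Act as a senior engineer. Ask probing, perspective-shifting "
                      "questions, then summarize the goals and scope of a small "
                      "inventory-tracking service.")

# Step 2: hand the brief to a reasoning model to scaffold a blueprint.
blueprint = ask("o1", "Turn this brief into an architecture blueprint broken into "
                      "units of work:\n" + brief)

# Step 3: send the blueprint to a code-generation step (or paste it into your agent).
code = ask("gpt-4o", "Implement the first unit of work from this blueprint in Python, "
                     "with tests:\n" + blueprint)
print(code)
```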
3. One-shot prompting
One-shot prompting is simple but powerful: you give the LLM an example to guide its response. One-shot prompting (providing one example) and few-shot prompting (providing several) stand in contrast to zero-shot prompting, where you provide no examples at all.
A zero-shot prompt like “generate an ecommerce API for a new store” might yield generic code that ignores your team’s naming or design conventions. A one-shot prompt, on the other hand, might include an API specification from a previous project, guiding the model to produce code more aligned with your structure.
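Concretely, the one-shot version is mostly about pasting the example into the prompt. The spec excerpt below is hypothetical, but it shows the shape:

```python
# Zero-shot: no example, so the model falls back on generic conventions.
zero_shot = "Generate an ecommerce API for a new store."

# One-shot: a single example spec (hypothetical) steers naming and structure.
EXAMPLE_SPEC = """GET  /v1/orders/{order_id}   -> OrderResponse
POST /v1/orders              -> OrderCreateRequest / OrderResponse"""

one_shot = f"""Generate an ecommerce API for a new store.
Follow the conventions from this spec excerpt from our previous project:
{EXAMPLE_SPEC}
Use the same versioning, path style, and request/response naming."""
```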
The same technique could be applied for backend testing, UI, documentation, and more. Supplying a concrete example helps the model generate more accurate, context-aware output aligned with existing development practices.
4. Updating system prompts
Most AI code editors allow developers to include a system prompt. This is a persistent instruction that shapes the assistant’s behavior across all interactions. The concept is simple, but when used strategically, it can impact the entire organization.
A system prompt might be as short as: “You are a Java developer expert obsessed with spotting security flaws.” Or it could span several paragraphs detailing coding standards, compliance rules, or preferred languages.
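At the API level, a system prompt is simply a persistent first message sent with every request. A minimal sketch in Python, with the prompt text and model name as illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()

# Persistent instruction, kept in version control so the team shares one source of truth.
SYSTEM_PROMPT = (
    "You are a Java developer expert obsessed with spotting security flaws. "
    "Follow our internal standards: Java 21, constructor injection, no raw types."
)

def assist(user_request: str) -> str:
    """Every interaction carries the same system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content
```

In an editor-based assistant, the same idea usually lives in the tool’s rules or custom-instructions settings rather than in code; either way, keeping it versioned makes the feedback loop Reock describes easier to run.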
The thing is, system prompts shouldn’t be static. They should evolve alongside your workflows and tools. “This is really an operational thing, but it’s really important,” says Reock. “When a model does something wrong, have a feedback loop in place to inform updates to the system prompt.”
Dynamic system prompts can help enforce org-wide upgrades – like adopting a new language version – or avoid repeated mistakes. By reporting bad model behaviors and iterating over time, you can improve output quality and scale best practices across teams.
5. Adversarial prompting
There are real benefits to prompting across multiple models and comparing their outputs. This head-to-head approach is known as multi-model or adversarial engineering.
Say you prompt both Model A and Model B to generate a function in Go. Each returns a slightly different implementation. Then you feed Model A’s output into Model B and ask it to critique it – and vice versa. The “winner” is the model that catches more issues or offers better optimizations.
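A minimal sketch of that cross-critique loop, assuming the OpenAI Python client and two placeholder model names:

```python
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

TASK = "Write a Go function that deduplicates a slice of strings while preserving order."
MODEL_A, MODEL_B = "gpt-4o", "o1"  # placeholder pairing

impl_a = ask(MODEL_A, TASK)
impl_b = ask(MODEL_B, TASK)

# Cross-critique: each model reviews the other's implementation.
critique_of_a = ask(MODEL_B, "Critique this Go code for bugs, security issues, and "
                             "missed optimizations:\n" + impl_a)
critique_of_b = ask(MODEL_A, "Critique this Go code for bugs, security issues, and "
                             "missed optimizations:\n" + impl_b)

print(critique_of_a, critique_of_b, sep="\n---\n")
```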
This setup could involve any number of general-purpose LLMs or domain-specific small language models (SLMs), depending on the task. The advantage of pitting models against each other is simple: you get critical feedback on code quality and quickly identify which model performs best in your situation.
Reock says adversarial prompting can even flip traditional workflows like test-driven development on their head. “You can start with your test set, then have one model generate code to pass the tests, and another critique it for security flaws or logic gaps,” he says. This is one way to avoid blindly trusting AI – especially for organizations without strong test-driven automation in place.
6. Prompting with media
Voice-driven prompting is another productivity strategy worth considering. DX found that voice-to-text prompting can speed up development by up to 30%.
Beyond audio, images and diagrams also make strong prompt inputs. Say you’re working from a requirements doc for a low-cost, greenfield cloud-native app. Including the architecture diagram can help the model understand your system’s components and suggest relevant open-source tools.
Reock also suggests uploading a decision tree to help an LLM generate a user journey as a React app. Visual cues like this speed things up by grounding the model in context. New Relic engineers similarly reported better results when using schema screenshots instead of raw database text.
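For images, most multimodal chat APIs accept a picture alongside the text. A rough sketch using the OpenAI Python client and a hypothetical local diagram file:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local architecture diagram (hypothetical file name).
with open("architecture-diagram.png", "rb") as f:
    diagram_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Given this architecture diagram for a low-cost, greenfield "
                     "cloud-native app, suggest open-source tools for each component."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{diagram_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```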
7. Adjusting determinism
LLMs are non-deterministic, meaning their outputs often vary, even with the same prompt. This unpredictability powers their creative reasoning, but it can be frustrating in workflows that demand consistency or compliance with strict requirements.
One way to rein in the randomness is by adjusting the model’s temperature, a setting that controls how much randomness goes into each response, usually ranging from 0 to 1. Lowering the temperature (to 0.1, say) makes outputs more predictable and repeatable, while raising it (to 0.9) increases creativity and variability. Cursor allows this adjustment, but most commercial agents don’t expose it.
Reock shows that changing the temperature can lead to drastically different outputs, even for a straightforward prompt like generating a basic JavaScript coloring app. Matching determinism to your goal helps, whether you want creative variety or a repeatable output.
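Where the tooling does expose it, the comparison is a one-parameter change. A quick sketch, with the model name as a placeholder:

```python
from openai import OpenAI

client = OpenAI()
prompt = "Generate a basic JavaScript coloring app."

for temp in (0.1, 0.9):
    response = client.chat.completions.create(
        model="gpt-4o",    # placeholder; not every model or tool exposes temperature
        messages=[{"role": "user", "content": prompt}],
        temperature=temp,  # low = repeatable output, high = more variety
    )
    print(f"--- temperature={temp} ---")
    print(response.choices[0].message.content[:400])
```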
What next?
Looking ahead, Reock believes multi-agent orchestration – tapping multiple purpose-built agents for different parts of the software development lifecycle – represents the next generation of prompt-driven code development. “When this stuff came out it was just a passing thing,” he says. “But I’ve moved from a skeptic to a full-on believer in the last year.”
Prompting tactics are still nascent in most enterprise settings, and developing a productive culture around them requires knowledge sharing. Leveling up will involve additional training on prompting strategies and continual assessment of which techniques perform best.
It’s also safe to say that as the technology evolves, new prompting strategies will continue to emerge. For instance, some developers are seeing strong results from ultra-specific debugging questions or from line-by-line iterative development.
In the end, how you frame your request determines the result you get. So it’s imperative to get it right.