Using experiments to bring security into your software development life cycle

It’s natural to want to bring new security practices into your SDLC, but embracing experiments will help you make the right calls for sustainable success.
November 01, 2023

Application security is more important than ever, but teams are already stretched thin meeting delivery goals. Adding another layer of complexity when things already feel hard can be a nightmare for both your engineering culture and your performance.

Finding practical ways to embrace secure development practices without breaking what is already there is crucial for keeping systems safe without burning out your teams.

Cultural and practical challenges

It’s often not budget that stops your team from embracing application security; it’s a disconnect between how software development teams work and the impact cybersecurity processes have on that world.

From time to time, we stumble upon a new approach, ritual, process, or tool that could improve our software’s security posture and be embedded into our software development life cycle (SDLC). Before you excitedly sign a purchase order for a new tool, or start updating your internal process documentation and playbooks, take a breath. If you want this to succeed, there are some things you need to know.

You can’t pack other people’s bags

Imagine your team’s software development life cycle is like a suitcase.

Some teams will have it perfectly packed and arranged

They have a place for everything and know why everything is there. There is no wasted space. For every item they have added, they understand that something else will have to give. After all, whether you like it or not, there’s only so much you can fit in.

Some teams aren’t quite there yet 

At best, they have some well-managed elements and some understanding of what they want to achieve. At worst, they have chaos. Things have been stuffed in randomly and without thought for the other items in there.

Mature or not, the end result will be the same

Now imagine for a second that you want to put something into the bag as an outsider to this process.

In our organized and mature team, they will immediately see how the change affects what is already there and what the compromises will be if it goes ahead. They may not be happy that someone else has packed items into their neatly curated suitcase.

In a less organized team, they may put it in without discussion and discover the impact over time. The response may be slower, gradually shifting from curiosity to frustration and resentment, depending on what happens.

Both could be better.

Understanding the purpose, outcome, and impact of processes 

Every element in an SDLC has a purpose, outcome, and set of impacts.

Purpose

The reason this action needs to happen.

Outcome

The result of the process, tool, or approach – often the input to another stage of the life cycle.

Impacts

The influence this element will have on people, money, time, security, performance, or quality.

Any new element must provide a concrete, useful output, and its purpose and benefit must outweigh its impacts on the rest of the life cycle. Too little benefit, no useful output, or an unclear purpose will lead to resentment and confusion; too much impact will do the same.

Framing your new addition in terms of these criteria is not as simple as just handing over the tool and wishing the team luck.

Using experimental thinking to introduce a new practice

How can you navigate these criteria and help implement new processes, tools, or rituals in an SDLC?

You can’t change what you don’t understand

If you are hoping to influence or change something, you should understand it first. If you aren’t actively part of the software team or using these processes day-to-day, now is the time to learn.

Spend time with your engineers and walk through the current processes. This isn’t the time to suggest changes; listen and learn.

Questions you should be asking:

  • What tools and processes are currently in this SDLC?
  • Who runs them? How much time does it take?
  • What is an acceptable completion time for each phase? This is especially important for tools embedded into deployment pipelines.
  • What is currently working well?
  • What is hurting the team, and what would they change if they could?

This process is valuable not only as you prepare to weave cybersecurity through the SDLC, but also for your relationships with the engineering team. Security is about collaboration, which starts with understanding and empathy.

Structure and define your experiment

Engineers are practical people and typically slow to develop trust. We know that even the most popular tools from the most reputable companies can behave differently when deployed into our environment.

The rollout of a new process, tool, or ritual needs to start as an experiment. By doing this, we create room for the engineering team to provide feedback and assess it from their perspective. It also gives you time as a leader to understand if this is the right approach. The last thing you want to do is spend your social capital with the team on something that causes pain or damages the relationship.

Like any experiment, you need to start with a hypothesis and some criteria that you can measure. Additionally, you will need an understanding of what success and failure would look like.

For example, an experiment for rolling out a source code review tool into the CI/CD pipeline would need the following:

Hypothesis:

Using a source code review tool in the CI/CD pipeline would:

  • Allow the team to review all source code on commit
  • Identify cyber security vulnerabilities in our specific language set and technology stack
  • Reduce the time taken to check code for cyber security issues

Success criteria:

  • Run time: The tool must run in less than 10 minutes so that the deployment pipeline is not held up.
  • Run frequency: Tool can run on every commit and be triggered from our existing tools.
  • Output: The tool output can be automatically raised with engineers and also recorded in our ticketing or issue tracking system.
  • Exceptions: The tool can be configured to allow exceptions that are specific to our environment.

Scope:

Ideally, experiments should include brand-new code and projects as well as legacy systems, to ensure the full range of the tool’s capability is understood.

Length:

Experiments will typically run for between one and three months, long enough to test over a sustained period and across a range of development milestones. If an experiment is too short it will often be unrepresentative; if it runs too long, you may lose momentum.
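
To make criteria like these measurable rather than anecdotal, it can help to wire them directly into the experiment itself. Below is a minimal sketch, in Python, of a pipeline step that wraps a scanner, times it, and fails the step if it exceeds the 10-minute budget from the success criteria above. The "scan-tool" command, its arguments, and the way output is surfaced are placeholders for illustration, not a real tool; substitute whatever you are actually evaluating and your own reporting mechanism.

    import subprocess
    import time

    # "scan-tool" is a placeholder for whichever scanner the experiment is
    # evaluating; the 10-minute budget comes from the run-time criterion above.
    SCAN_COMMAND = ["scan-tool", "--project", "."]
    TIME_BUDGET_SECONDS = 10 * 60

    def run_scan_experiment() -> bool:
        """Run one scan and report whether the run-time criterion was met."""
        start = time.monotonic()
        result = subprocess.run(SCAN_COMMAND, capture_output=True, text=True)
        elapsed = time.monotonic() - start

        within_budget = elapsed < TIME_BUDGET_SECONDS
        print(f"Scan finished in {elapsed:.0f}s; exit code {result.returncode}")
        print("Within the 10-minute budget" if within_budget else "Over the 10-minute budget")

        # In a real experiment this output would be raised with engineers and
        # recorded in the ticketing or issue-tracking system; printing it
        # keeps the sketch self-contained.
        print(result.stdout)

        return within_budget

    if __name__ == "__main__":
        # Fail the pipeline step when the run-time criterion is missed, so the
        # experiment surfaces its impact on the deployment pipeline immediately.
        raise SystemExit(0 if run_scan_experiment() else 1)

Capturing numbers like this on every commit gives the retrospective at the end of the experiment real data to work with, rather than impressions.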

Plan your experiments with success in mind

If the experiments succeed and you and the team are satisfied with the outcomes, it’s time to roll out. Rather than just turning things on and calling it a job well done, consider the following.

Training

Provide training for all engineers using, operating, configuring, or otherwise interacting with the process or tool. Make sure the training links back to clear explanations of the tool’s purpose, outcomes, and impacts, as described above. Skipping this step often leaves a process or tool with a single owner, or a team without enough experience to support it for the long term. You may also find that teams forget why the process is there in the first place, which increases the chance of it being removed later.

Support 

Have a plan for when things change, issues arise, or the team needs help. If they find it is no longer working and there is no clear support system, the process or tool will be removed or avoided. Once this happens, getting it back in takes more than flicking a switch; it takes negotiation and relationship repair.

Review

Tools and processes that work today will not be the right solution forever. Run an annual review of the effectiveness of the tools and processes you have in place. This gives everyone involved a chance to address issues and identify any changes that need to be made.

Accept that not all experiments are successful

If your experiment fails to meet its requirements, whether through its impact on the wider SDLC or because the original hypothesis went unmet, it is critical that you revisit the experiment definition and evaluation process. All experiments must be allowed to fail.

Hold a review or retrospective of the experiment and capture your findings. Use these to refine the constraints of your experiment and define a new approach to try.

Even the best of us can be tempted by the sunk cost fallacy, feeling that after three months of experimentation we have invested too much time not to roll the solution out more widely. It is always better to take the lessons from three months and choose not to proceed than to roll out an inadequate solution.

Collaboration is the key

No matter how big or mature your team is, introducing new elements to an existing software development life cycle can take time and effort. Remember, however, that you can make these changes together by framing your approaches as scoped experiments, collaborating with your wider team, and focusing on the purpose, outcome, and impacts of the proposed changes.