When I started working in DevSecOps, I hadn’t even done DevOps before (gasp!). I had been following the Waterfall software development methodology in the Canadian Public Service (with a tiny bit of agile thrown in) for over a decade, before I joined Microsoft and my entire world changed. I had been working on the OWASP DevSlop open-source project with my friend Nikky Becher at the time, muddling our way through, trying to figure out how to pentest in a DevOps environment, and suddenly I was supposed to be a DevOps expert. I thought “How can I catch up? And how can I do it FAST?”
I decided I would create an app using Azure DevOps CI/CD, and ‘share’ the pipeline I used to build it. I quickly figured out there wasn’t a way in Azure DevOps (at the time) to open-source or ‘share’ a pipeline, so I decided I would film myself, live, as I built. I made a basic app and a basic dev -> QA -> UAT -> Prod deployment, and I was off to the races. I started coding live on Twitch, adding every security tool to my pipeline that I could get my hands on.
Not every episode of the OWASP DevSlop show went well, and neither did every tool. The DAST I chose threw a false positive while I was presenting at Microsoft in Ottawa, Canada (live on stage, naturally). There’s one episode with Nancy Gariché (one of the project leaders) where we just FAILED the whole time; after 3 hours we never got the tool working at all. There was a 5-hour episode where I updated the .NET Core framework, and all of my dependencies that had vulnerabilities in them, and that was exhausting. But I LEARNED. And I learned fast.
In 2018 I joined a company called IANS Research, doing ‘Ask an Expert’ calls, helping clients with Azure and AppSec problems. As I learned more about DevOps and DevSecOps, I started helping clients with those topics as well. I would often have to do research to figure out the solution before a call, but I didn’t mind. All of their questions helped give direction to my learning. Over the years I have helped hundreds of AppSec teams crush their problems, and learned a LOT along the way.
In 2020 I started coaching two different companies, a few hours a week each, as they built out their DevSecOps programs. In contrast with the short engagements I was doing for IANS clients, these long-term relationships let me see programs evolve over extended periods of time. And I got my hands dirty on a weekly basis, trying many different types of tools (for better or worse).
I also attended numerous conference talks and read lots of articles to see what others were doing, learning best practices along the way.
All of this learning was A LOT of work. One day an IANS client told me they were going to start in DevSecOps, and asked if I could make them a “do not do” list: a list of pitfalls that they should work to avoid. Dear reader, I dove deep down this rabbit hole. It had never occurred to me to start with “what not to do”, rather than “this is the way”. And how many headaches could be avoided…
With this in mind, I wrote a conference talk (video below) on this topic. And this blog series is going to explore each of the 15 ‘worst practices’ that I cover in the talk.
Me, keynoting the #RSAC Conference, April 2023
The 15 items I will present in this series are as follows:
Breaking Builds on False Positives
Untested Tools
Artificial Gates
Missing Test Results
Runaway Tests
Impossible SLAs
Untrained Staff
Forgotten Bugs
No Positive Reinforcement
Only Worrying About Your Part
Multiple Bug Trackers
Insecure SDLC
Overly Permissive CI/CD
Automation Only in the CI/CD
Hiding Mistakes and Errors
I hope that by sharing mistakes that I have seen and made, all of us can avoid these issues going forward.
~ The next article, ‘The Boy Who Cried Wolf’, is here. ~
The first terrible practice that we will examine in this series is breaking builds on false positives. Whenever I explain this to people who are new to DevOps, I remind them of the story of ‘the boy who cried wolf’. In the age-old story, a young boy lies and says a wolf has come into the village he lives in, and scares all the villagers. The villagers snap into action, to protect everyone from the wolf. After a brief search they realize there is no wolf, and that the child was lying. The boy plays this trick on the villagers more than once, and the villagers become very angry with the boy. At the end of the story, a wolf does enter the village, and the boy attempts to warn everyone, but no one is willing to listen to him anymore. He has been labeled a liar; someone to ignore. But this time the wolf was real, and people were hurt.
The takeaway of the story is that no one wins when trust is broken. People tell the story to children, to discourage them from lying. I tell the story to security professionals, so that we prioritize building trust with development teams, and thus avoid having our warnings ignored.
Originating from the word for a paper lantern, Andon is a term that refers to an illuminated signal notifying others of a problem within the quality-control or production streams. Activation of the alert – usually by a pull-cord or button – automatically halts production so that a solution can be found.
Toyota
DevOps pipelines are built to model real-life, physical assembly lines. Each assembly line has something called an “Andon cord”, which is pulled when there is an emergency to stop the line. The button or pull cord can save lives, and millions of dollars (imagine cars accidentally piling on top of each other and the potential cost). The cord is only pulled if something extremely dangerous is happening. When we “break the build” in a DevOps pipeline, we are pulling a digital Andon cord, which stops the entire process from continuing. And when we do this, we had better have a good reason.
When a test fails in the CI/CD pipeline, it doesn’t always break the build (stop the pipeline from continuing). It depends on how important the finding is, how badly the test failed, the risk profile of the app, etc. It breaks the build if the person who put the test into the pipeline feels it’s important enough; that it’s (literally) a show-stopper, and that they are willing to stop every other person’s work as a result of this test. It’s a big decision.
Now imagine you have put a lot of thought into all the different tests in your pipeline, weighing whether each one is important enough to break the build, or should just let the build continue and send notifications or alerts instead. You and your team use this pipeline 10+ times a day to test your work, and you depend on it to help you ensure your work is of extremely high quality.
Now imagine someone from the security team comes along and puts a new security tool into your carefully-tuned pipeline, and it starts throwing false positives. All the time. How would that make you feel? Probably not very good.
I have seen this situation more times than I care to count, and (embarrassingly) I have been the cause of it at least once in my life. While working on the OWASP DevSlop project I added a DAST to our Patty-the-pipeline module (an Azure DevOps pipeline with every AppSec tool I could get my hands on). One evening Abel had done an update to the code, and he messaged me to say my scanner had picked something up. I didn’t notice his email, then went to Microsoft to give a presentation for a meetup the next day and… Found out on stage.
When my build broke I thought “OH NO, HOW EMBARRASSING”. But then I had another thought, and proudly announced “wait, it did what it was supposed to do. It stopped a security bug from being released into the wild”. Then we started troubleshooting (40+ nerds in a room, of course we did!), and we figured out it was a false positive. Now that really was embarrassing… I had been trying to convince them that putting a DAST into a CI/CD was a good thing. I did not win my argument that day. Le sigh.
Fast forward a couple of years, and I have seen this mistake over and over at various companies (not open-source projects made up of volunteer novices, but real, live, paid professionals). Vendors tell their customers that they can click a few buttons and voilà! They are all set! When in fact, generally, we should test tools and tune them before we put them into another team’s pipeline.
Tuning your tools means making lots of adjustments until they work ‘just right’. Sometimes this means suppressing false positives, sometimes this means configuration changes, and sometimes it means throwing it in the garbage and buying something else that works better for the way your teams do their everyday work.
In 2020, I was doing consulting, helping with an AppSec program, and their only full-time AppSec person proudly told me that they had a well-known first-generation SAST tool running in the CI/CD on every build, and that if it found anything high severity or above, it broke the build. I said “COOL! Show me!” Obviously I wanted to see this awesomeness.
We logged into the system and noticed something weird: the SAST tool was installed into the pipeline, but it was disabled. “That’s weird” we both said, and went on to the next one. It was uninstalled. HMMMMM. We opened a third, it was disabled. We sat there looking and looking. We found one that was installed and running, but it was just in alerting mode.
The next time I saw him, his face was long. He told me that in almost 100% of the pipelines his tool had been uninstalled or disabled, except 2 or 3 where it was in alerting mode (running, but unable to break the build). We investigated further and found that the teams that had it in alerting mode were not checking the notifications; none of them had ever logged into the tool to see the bugs it had found.
To say the guy was heartbroken would be an understatement. He had been so proud to show me all the amazing work he had done. It had taken him over a year to get this tool installed all over his organization. Only to find out, with a peer watching, that behind his back the developers had undone his hard-earned security work. This was sad, uncomfortable, and I felt so much empathy for him. He did not deserve this.
We met with the management of the developer teams to discuss. They all said the right things, but meeting after meeting, nothing actually changed. After about 3 months the AppSec guy quit. I was sad, but not surprised at all. HE was great. But the situation was not.
I kept on consulting there for a while, and discovered a few things:
The SAST tool constantly threw false positives, no matter what the AppSec guy did, even though he worked very closely with the vendor for over a year. It was not him; it was the tool.
The SAST tool had been selected by the previous CISO, without consultation from the AppSec team (huge mistake), and was licensed for 3 years. So the AppSec guy HAD to use it.
The AppSec guy had spent several hours a week just trying to keep the SAST server up and running, and it was a Windows 2012 server (despite it being 2020, the SAST provider did not support newer operating systems). He also wasn’t allowed to apply most patches, which meant he had to add a lot of extra security to keep it safe. It was not a great situation.
The developers had been extremely displeased with the tool, having it report false positives over and over, and they had turned it off in frustration. It was not malice or anger; they felt they couldn’t get their jobs done. They really liked the AppSec guy. When I talked to them about it, they all felt bad that he had quit. It was clear they had respected him quite a lot, and had given the tool more of a chance because of him.
It took over a year, but I eventually convinced them to switch from that original SAST to a next-generation SAST (read more on the difference between first and second gen here). The new tool reported almost entirely true positives, which made the developers a lot happier. It was also able to run upon code check-in, which worked better for the way they liked to do their work in that shop. By the time I left, it was scanning every new check-in, then emailing whoever checked the code in with a report if any bugs were introduced. Although I didn’t have it breaking builds by the time I left, we went from zero SAST to SAST-on-every-new-commit. And devs were actually fixing the bugs! Not all the bugs, but quite a few, which was a giant improvement from when I arrived. To me this was a success.
To avoid this fate, carefully pick your toolset (make a list of requirements with the developers, and stick to it), then test it out first on your own, then with developers, before purchase. Next, test the tool manually with a friendly developer team and work out as many kinks as you can before putting it into a CI. Then put it in alerting mode in the CI with that team, again watching for issues. If it runs well, start adding it for more teams, a few at a time. Pause if you run into problems, work them out, then continue. The sketch below shows what the difference between alerting mode and blocking mode can look like.
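To make ‘alerting mode’ versus ‘breaking the build’ concrete, here is a minimal sketch of a pipeline gate script. Everything in it is a placeholder I made up for illustration: the scanner command, the report format, and the severity names. Real tools have their own report formats and often their own built-in gating options.

```python
#!/usr/bin/env python3
"""Toy CI gate: run a scanner, then warn or fail depending on the mode.

'my-sast-tool' and its JSON report format are invented for this sketch;
substitute whatever your real tool actually produces.
"""
import json
import subprocess
import sys

MODE = "alert"  # start in "alert" mode; flip to "block" once the tool is tuned
FAIL_ON = {"critical", "high"}  # severities serious enough to stop the line

# Hypothetical scanner invocation that writes findings to report.json
subprocess.run(["my-sast-tool", "--output", "report.json"], check=False)

with open("report.json") as report:
    findings = json.load(report)  # assumed: list of {"severity": ..., "title": ...}

serious = [item for item in findings if item["severity"] in FAIL_ON]
for finding in serious:
    print(f"[{finding['severity'].upper()}] {finding['title']}")

if serious and MODE == "block":
    sys.exit(1)  # non-zero exit = pulling the digital Andon cord
if serious:
    print(f"{len(serious)} serious finding(s) -- alert mode, build continues")
```

The important part is the MODE switch: the exact same test can notify quietly for weeks while you tune out false positives, and only then be promoted to breaking the build.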
Tip: You can also set up most static tools (ones that look at written code, not running code) to automatically scan your code repository. This is even further ‘left’ than the CI/CD, because it happens earlier in the system development life cycle (SDLC). You can scan the code as it is checked in, or on a daily, weekly or monthly basis, whatever works best for you and your developers!
Right now, the concept of the software supply chain and securing it is quite trendy. After the SolarWinds breach, the attack on the crypto wallet, and the Log4j fiasco, the entire world appears to be focused on securing the software supply chain. I’m not complaining. If anything, as an application security nerd, I am quite pleased that I am finally getting buy-in that these things need to be protected, and that vulnerable dependencies need to be avoided. Folks, this is GREAT.
Software composition analysis, often called SCA, means figuring out which dependencies your software has, and of those, which contain vulnerabilities. When we create software, we include third-party components, often called libraries, plugins, packages, etc. All third-party components are made up of code that you and your team did not write. But because you have included them inside your software, you have added at least some of their risk into your product.
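If you’ve never watched SCA happen, the core idea fits in a few lines. This sketch asks the free OSV.dev vulnerability database about a single dependency; the package name and version below are just placeholders, and a real SCA tool walks your entire dependency tree, transitive dependencies included.

```python
import json
import urllib.request

def check_package(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask the OSV.dev database for known vulnerabilities in one dependency."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

# Placeholder dependency: swap in entries from your own manifest file
for vuln in check_package("requests", "2.19.1"):
    print(vuln["id"], "-", vuln.get("summary", "(no summary provided)"))
```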
A ‘supply chain’ means all of the things that you need to create an end product. If you were creating soup, you would need all of the ingredients of the soup, you would need things like pots and pans in order to cook and prepare the ingredients of the soup, you would need a can or a jar to put it in, and likely a label on top to tell everyone what type of soup it is. All of those things would be considered your supply chain.
Imagine one of the ingredients in your soup is flour. Chances are the wheat was grown in a field, then harvested, then ground down into flour, then perhaps processed even further, and only then sent to you, so that you could create your soup. Every step along the way could have been contaminated, or the wheat could have rotted or been otherwise spoiled. You have to protect the wheat all along the way before it gets to you, and once you make the soup, in order to ensure the end product is safe to eat.
Protecting all of the parts along the supply chain, from ensuring that there aren’t terrible chemicals sprayed on the ingredients as they grow, to ensuring that the can or jar that you put the soup into has been properly sterilized, is you securing your supply chain.
When we build software, we need to secure our software supply chain. That means not only ensuring the third-party components we’re putting into our software are safe to use, but that the way we are using them is secure [more on this later]. We also have to ensure the way we build the software is safe, and this can mean using version control to store our code, ensuring any CI/CD we use is protected from people meddling with and changing it, and that every other tool we use and process we follow is also safe.
If you’ve followed my work a long time, I am sure you know that I think this includes a secure system development life cycle (S-SDLC). This means each step of the SDLC (requirements, design, coding, testing and release/deploy/maintain) contains at least one security activity (providing security requirements, threat modelling, design review, secure coding training, static or dynamic analysis, penetration testing, manual code review, logging & monitoring, etc.) A secure SDLC is the only way to be sure that you are releasing secure software, every time.
Tanya Janca
With this in mind, the difference between the two is that SCA only covers third party dependencies, while supply chain security also covers the CI/CD, your IDE (and all your nifty plugins), version control, and everything else you need in order to make your software.
It is my hope that our industry learns to secure every single part of the software supply chain, as opposed to only worrying about the dependencies. I want securing these systems to be a habit; I want it to be the norm. I want the default IAM (identity and access management) settings for every CI/CD to be locked down. I want checking your changes into source control to be as natural as breathing. I want all new code check-ins to be scanned for vulnerabilities, including their components. I want us to make software that is SAFE.
If you read my blog, you are likely aware that I recently started working at Semgrep **, a company that creates a static analysis tool, and recently released a software supply chain security tool. If you’ve seen their SAST tool, you know it’s pretty different from all the other similar tools on the market, and their new supply chain tool is also pretty unique: it tells you if your app is calling the vulnerable part of your dependencies. They call it ‘reachability’. If your app is calling a vulnerable library, but it’s not calling the function inside that library where the vulnerability lives, you’re usually safe (meaning it’s not exploitable). If you ARE calling the function where the vulnerability is located, there’s a strong likelihood that the vulnerability could be exploited from within your application (meaning you are probably not safe). We added this to the product to help teams prioritize which bugs to fix, because although we all want to fix every bug, we know there isn’t always time. In summary, if the vulnerability is reachable in your code, you should run, not walk, back to your desk to fix that bug.
Me, again
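To make ‘reachability’ a little more concrete, here is a toy illustration. Every name in it is invented, and the “dependency” is defined inline so the sketch runs on its own; real reachability analysis is done by the tool, on real packages.

```python
# Pretend "some_lib" is a third-party package your SCA tool just flagged,
# and that the known vulnerability lives only in parse_xml().
class some_lib:
    @staticmethod
    def format_date(date: str) -> str:  # a harmless function
        return date.replace("-", "/")

    @staticmethod
    def parse_xml(doc: str) -> None:  # pretend the CVE lives in here
        raise RuntimeError("pretend this code path is exploitable")

# App A only ever calls the harmless function: the vulnerability is NOT
# reachable from this code, so it is probably not exploitable here, and
# a reachability-aware tool would deprioritize the finding.
print(some_lib.format_date("2023-04-01"))

# App B feeds untrusted input into the vulnerable function: the
# vulnerability IS reachable. Run, don't walk, back to your desk.
untrusted_doc = "<xml>attacker controlled</xml>"
# some_lib.parse_xml(untrusted_doc)  # left commented out on purpose
```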
I have worked with more than one company who had programmers who did not check in their code regularly (or at all) to source control. Let me tell you, every single time it was expensive! Losing years of hard work will break your heart, not just your budget. Supply chain security matters.
Join me in this adventure by starting at your own office! Whether you have budget or not, there are paid and free tools that can help you check to see if your supply chain is safe! You can also check some of this stuff manually, easily (the IAM settings on your CI/CD are just a few clicks away). Reviewing the setup for your systems, and ensuring you have everything important backed up, will make your future less stressful, trust me.
You can literally join me on this adventure, by signing up for the Semgrep newsletter! The Semgrep Community is about to launch live free events, including training on topics like this, and we can learn together. First email goes out next week, don’t miss out!
~ fin ~
** I work at Semgrep. This means I am positively biased towards our products and my teammates (I think they are awesome!). That said, with 27+ years’ experience in IT, being a best-selling author and world-renowned public speaker, there are a LOT of companies that would be happy to let me work for them. I chose Semgrep for a reason; my choice to work there was intentional. Even so, I will try not to be annoying by only talking about work on my blog, promise!
The Second Way of DevOps is fast feedback. In security, when we see this we should all be thinking the same thing: Pushing Left. We want to start security at the beginning of the system development life cycle (SDLC) and ensure we are there (providing feedback, support and solutions) the whole way through!
Pushing Left, Tanya’s Favorite Thing
Fast feedback loops mean getting important information to the right people, quickly and regularly. One of the main reasons that Waterfall projects failed in the past was the lack of timely feedback; no one wants to find out twelve months after they made a mistake, when it’s too late to fix it.
The goal of security activities in a DevOps environment must be to shorten and amplify feedback loops so security flaws (design issues) and bugs (code issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job. These DevOps people are really onto something!
Let’s go over several ideas of how to achieve this.
Activities to create fast feedback loops.
Automate as much as humanly possible. Inside or outside the pipeline, automation is key.
Whenever possible integrate your tools with the Dev and Ops team’s tools. For instance, have the issues found by your IAST tool turned into tickets in the developer’s bug tracking system, automagically.
When you have a pentest done, check all your other apps for the things found in the report, then create unit tests to look for these things and prevent them from coming back.
Rename insecure functions or libraries as “insecure” with a wrapper, so programmers see immediately that there is an issue.
Add security sprints to your project schedule (to fix all security bugs in backlog).
Ask the Dev and Ops teams what they are concerned about (in relation to security), so you can fix any problems the security team might be causing them.
Add important security tests that are quick and accurate to the pipeline. For instance, scan for secrets in the code that is being checked in. That is an important test!
If an important security test fails in the pipeline, the continuous integration server must break the build, just like quality tests. This is loud feedback.
Create a second pipeline that doesn’t release any code, but runs all the long and slow security tests, then have the security team review the results afterwards and turn the important things into tickets for the Devs.
Tune all security tools as much as possible and validate all results so that the feedback you are giving is *accurate*. There is no point in sending lots of feedback if half of it is wrong.
Work with developers to create negative unit tests (sometimes known as abuse tests). Create copies of regular unit tests, rename them with “Abuse” at the end, then add malicious payloads and ensure that your app fails gracefully and handles bad input well (there’s a sketch of this just after the list).
Have your security tools automatically send their results to a vulnerability management tool such as DefectDojo or ThreadFix, to keep metrics and use them to improve all of your work. You need feedback too.
Be creative. Any way that you can get feedback faster to other teams is a huge win for your team too!
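Here is the negative unit test idea from the list above, sketched with Python’s unittest. The greet() function and the payloads are stand-ins for your own code and your own test data; the copy-and-rename pattern is what matters.

```python
import html
import unittest

def greet(name: str) -> str:
    """Function under test: output-encodes so input can't become markup."""
    return "Hello, " + html.escape(name)

class TestGreet(unittest.TestCase):
    """The original 'happy path' unit test."""
    def test_greet(self):
        self.assertEqual(greet("Tanya"), "Hello, Tanya")

class TestGreetAbuse(unittest.TestCase):
    """The copy, renamed with 'Abuse', fed malicious payloads instead."""
    def test_greet_abuse(self):
        payloads = [
            "<script>alert(1)</script>",
            '"><img src=x onerror=alert(1)>',
        ]
        for payload in payloads:
            result = greet(payload)
            # Failing gracefully: no raw markup characters survive into
            # the output, and no exception is thrown.
            self.assertNotIn("<", result)
            self.assertNotIn('"', result)

if __name__ == "__main__":
    unittest.main()
```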
The first “Way” of DevOps is emphasizing the efficiency of the entire system. Many of us tend to focus only on our part of a giant system, and get bogged down improving only our own contributions to the larger process. It’s rare that we stand back, look at the entire thing, and realize that if we helped another team or changed something small within our part, that it could improve other areas for the better. The first way of DevOps is about looking at the entire system, and making sure the entire thing is as efficient as possible. #speed
When we worked in Waterfall development environments, security often acted as a gate. You had to jump through their hoops, then you were let through, and you could push your code to prod. Awesome, right? Not really. It was slow. Security activities took FOREVER. And things got missed. It was rigid and unpleasant, and didn’t result in reliably secure software.
It may seem obvious to new developers that security should not slow down the SDLC, but I assure you, this concept is very, very new. When I was a software developer I referred to the security team as “Those who say no”, and I found almost all of my interactions with them left me frustrated and without helpful answers.
When we (security practitioners) think about The First Way, we must figure out how to get our work done, without slowing down all the other teams. They won’t wait for us, and we can’t set up gates. We have to learn to work the way they do; FAST.
Below I will offer suggestions for how we can work together with the dev and ops teams to ensure we get our mandate done, within the DevOps workflows and processes.
Tools:
First of all, we need to use modern tooling that is made for DevOps pipelines if we are going to put anything into the CI/CD pipeline. Never take an old tool and toss it in there; no DevOps team is going to wait 5 hours for your SAST tool to run. Tune your tools and ensure you select tools that are made for pipelines if that is how you are going to use them. Whenever possible, only run your tools on the ‘delta’ (the code changed in that release, not the entire code base).
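As a sketch of what ‘only run on the delta’ can look like (the scanner command is a placeholder, and how your CI exposes the comparison branch will vary):

```python
import subprocess

# Ask git which files changed compared to the main branch. Depending on
# your CI system, the base reference may be a merge base or commit range.
changed = subprocess.run(
    ["git", "diff", "--name-only", "origin/main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.split()

# Only hand source files to the (placeholder) scanner; skip the rest.
to_scan = [path for path in changed if path.endswith((".py", ".js", ".ts"))]
if to_scan:
    subprocess.run(["my-sast-tool", "--files", *to_scan], check=True)
else:
    print("No source files changed; skipping the scan this run")
```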
Tool Selection:
When selecting tools, remember that not every tool needs to be put in the pipeline. In fact, having tools that are out-of-band, but located on the ‘left’ of the SDLC, can offer even more value and save time. Examples:
Package management tools that only serve packages that are not known to be insecure (pre-approved by a security research team)
Adding security tests to your unit tests, which are often run before the code arrives in the pipeline (for instance, write input validation tests that ensure your code properly handles input taken from the XSS Filter Evasion Cheat Sheet; see the sketch after this list)
Adding security tooling to the check-in process, such as secret scans (don’t even let them check it in if it looks like there’s a secret in the code)
Scanning your code repository for known-insecure components. It’s just sitting there, why not use it?
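And here is the unit-test idea from the list above, sketched in Python. The validator and its allow-list rule are placeholders; for real coverage, pull the full payload list from the actual XSS Filter Evasion Cheat Sheet.

```python
import re
import unittest

# Strict allow-list rule for one field: letters, digits, underscore, hyphen.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,30}$")

def is_valid_username(value: str) -> bool:
    """Allow-list validation: accept only what we expect, reject the rest."""
    return bool(USERNAME_RE.fullmatch(value))

class TestUsernameValidation(unittest.TestCase):
    def test_rejects_xss_style_payloads(self):
        # A few payloads in the spirit of the XSS Filter Evasion Cheat Sheet
        for payload in ["<script>alert(1)</script>",
                        "<img src=x onerror=alert(1)>",
                        "javascript:alert(1)"]:
            self.assertFalse(is_valid_username(payload))

    def test_accepts_normal_input(self):
        self.assertTrue(is_valid_username("tanya_j"))

if __name__ == "__main__":
    unittest.main()
```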
Bugs:
This also means that security bugs should be placed in the same bug tracker or ticketing system that the developers and ops teams are using. They shouldn’t have to check two systems; that is not efficient.
Finding Vulnerabilities:
If at all possible, we should be providing and/or approving tools that assist in finding vulnerabilities in written code (both the code your team wrote, and the code from dependencies) and running code. This could be SAST + SCA + DAST, or it could be SCA + IAST (run during unit testing, QA and in prod). It could also mean manual secure code review plus a PenTest the week before going live (this is the least-efficient of the three options presented here).
Templates and Code Reuse:
If it makes sense, create templates and provide secure code samples; there’s no need to reinvent the wheel. Also, enable the developers and ops teams to scan their own code by providing tools for them (and training on how to use them safely and effectively).
Think Outside the Box:
We (security) can no longer be a bottleneck; we must work to enable them to get their jobs done securely, in any way we can. Examine your processes to ensure they are efficient; create a second asynchronous pipeline (one that does not release to prod) to automate your longer tests; write your own tools if you absolutely have to. The sky is the limit.
There are many definitions of DevOps, too many, some might say. Some people say it’s “People, Processes, and Products”, and that sounds great, but I don’t know what I’m supposed to do with that. When I did waterfall I also had people, processes, and products, and that was not great. I thought DevOps was supposed to be a huge improvement?
I’ve heard other people say that it’s paying one person to do two jobs (Dev and Ops), which can’t be right… Can it? I’ve also been told once by a CEO that their product was “made out of DevOps”, as though it was a substance. I decided not to work there, but that’s another story. Let’s look at some better sources.
DevOps is a set of practices that combines software development and information-technology operations which aims to shorten the systems development life cycle and provide continuous delivery with high software quality.
But what are the practices? Why are we aiming to shorten the SDLC? Are we making smaller software? What is ‘continuous delivery’?
To get to the bottom of it I decided to read The DevOps Handbook. Then I knew what DevOps was, and I knew how to do it. And I discovered that I LOVED DevOps.
According to the DevOps Handbook, DevOps has three goals.
Improved deployment frequency; Shortened lead time between fixes;
Awesome! This means if a security bug is found it can be fixed extremely quickly. I like this.
Lower failure rate of new releases and faster recovery time;
Meaning better availability, a key security concern for any application. Fewer failed releases mean systems are available more often (the ‘A’ in CIA), and that’s definitely in the security wheelhouse. So far, so good.
Faster time to market; meaning the business gets what they want.
Sometimes we forget that the entire purpose of every security team is to enable the business to get the job done securely. And if we are doing DevSecOps, getting them products that are more secure, faster, is a win for everyone. Again, a big checkmark for security.
Great! Now I think the DevOps people want the same things that I, as a security person, want. Excellent! Now: How do I *DO* DevOps?
That is where The Three Ways of DevOps comes in.
Emphasize the efficiency of the entire system, not just one part.
Fast feedback loops.
Continuous learning, risk-taking and experimentation (failing fast)
In the next post we will talk more in detail about The 3 Ways (and how security fits in perfectly).
DevSecOps is the security activities that application security professionals perform, in order to ensure the systems created by DevOps practices are secure. It’s the same thing we (AppSec professionals) have always done, with a new twist. Thanks Imran!
Emphasize the efficiency of the entire system, not just your part.
Fast feedback loops.
Continuous learning, risk taking and experimentation (failing fast). Taking time to improve your daily work.
Let’s dig in, shall we?
1. Emphasize the efficiency of the entire system, not just one part.
This means that Security CANNOT slow down or stop the entire pipeline (break the build/block a release), unless it’s a true emergency. This means Security learning to sprint, just like Ops and Dev are doing. It means focusing on improving ALL value streams, and sharing how securing the final product offers value to all the other streams. It means fitting security activities into the Dev and Ops processes, and making sure we are fast.
2. Fast feedback loops.
Fast feedback loops = “Pushing Left” (in application security)
Pushing or shifting “left” means starting security earlier in the System Development Life Cycle (SDLC). We want security activities to happen sooner in order to provide feedback earlier, which means this goal is 100% in line with what we want. The goal of security activities must be to shorten and amplify feedback loops so security flaws (design/architecture issues) and bugs (code/implementation issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job.
3. Continuous learning, risk taking and experimentation
For most security teams this means serious culture change; my favorite thing! InfoSec really needs some culture change if we are going to do DevOps well. In fact, all of IT does (including Dev and Ops) if we want to make security everybody’s job.
Part of The Third Way:
Allocating time for the improvement of daily work
Creating rituals that reward the team for taking risks: celebrate successes
Introducing faults into the system to increase resilience: red team exercises
We are going to delve deep into each of the three ways over the next several articles, exploring several ways that we can weave security through the DevOps processes to ensure we are creating more secure software, without breaking the flow.
If you are itching for more, but can’t wait until the next post, watch this video by Tanya Janca. She will explain this and much more in her talk ‘Security Learns To Sprint’.
Application Security is every action you take towards ensuring the software that you (or someone else) create is secure.
This can mean a formal secure code review, hiring someone to come in and perform a penetration test, or updating your framework because you heard it has a serious security flaw. It doesn’t need to be extremely formal, it just needs to have the goal of ensuring your systems are more secure.
Now that we know what AppSec is, why is it important?
For starters, insecure software is (unfortunately), the #1 cause of data breaches (according to the Verizon Breach Reports, 2016, 2017, 2018 and 2019). This is not a list that anyone wants to be #1 on. According to the reports, insecure software causes 30–40% of breaches, year after year, yet 30–40% of the security budget at most organizations is certainly not being spent on AppSec. This is one part of the problem.
The graph above is from the Verizon Breach Report 2017. Hats off to Verizon for creating and freely sharing such a helpful report, year after year.
If the problem is that insecure software causes breaches, and one of the causes is that security budgets don’t appear to prioritize software, what are some of the other root causes of this issue?
For starters, universities, colleges, and programming bootcamps are not teaching students how to ensure that they are creating secure software. Imagine if electricians went to trade school, but no one taught them safety: they would twist two cables together and just push them into the wall, unaware that they need two more layers of safety (electrical tape, and then a marrette). This is what we are doing with our software developers; we teach them how to make insecure code from their very first lesson.
Hello (insecure) World
Lesson #1 for every bootcamp or programming course: Hello World.
Step 1) Output “Hello World” to screen
Step 2) Output “What is your name?” to screen
Step 3) Read the user’s input into a variable (note: we skip teaching input validation)
Step 4) Output the user’s input to the screen with a hello message (note: we skip output encoding)
The above lesson teaches new programmers the best possible recipe for including reflected Cross-Site Scripting (XSS) in their application. As far as I know, there is no follow-up lesson provided on how to ensure the code is secure.
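For contrast, here is roughly what a security-aware version of that very first lesson could look like in Python. This is a sketch; a real web app would lean on its framework’s validation and templating rather than hand-rolling this.

```python
import html

print("Hello World")
name = input("What is your name? ")

# The step the lesson skips: input validation, using an allow-list of
# characters we expect in a name, plus a sane length limit.
if not (0 < len(name) <= 50) or not all(c.isalpha() or c in " -'" for c in name):
    print("Sorry, that doesn't look like a name.")
else:
    # The other skipped step: output encoding, so the input can never be
    # interpreted as markup if this string ever lands in a web page.
    print("Hello, " + html.escape(name))
```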
“Hello World” is the most-taught lesson for starting a new programming language, I’m sure you’ve seen it everywhere. More like “Hello Insecure World”.
Although there has been some headway in universities and colleges recently, most of them barely scratch the surface when it comes to application security.
So that’s reason #2. Let’s move onto reason #3.
Another issue that contributes to this problem is that security training for developers is costly. While this is true for all types of professional training, security training is significantly more expensive than other forms of technical training. A single course at the SANS Institute (a well-respected training establishment that specializes in all things cyber) could cost an attendee as much as $5,000–$7,000 USD for one week of training. There are less pricy options, such as taking a course when attending a conference, which usually range from $2,000–$5,000; however, those are of varying quality, and there is no standardized curriculum, making them a bit of a gamble. I’ve taken several trainings at various conferences over the years, and I’d say about half were good.
There are much cheaper alternatives to the options above (such as the courses from We Hack Purple and other training providers), but they are of widely varying quality. I’ve seen good free courses, and some so bad I wished I could have my time back. Most of them do not provide a curriculum to follow either, meaning it is often unclear to the student which courses they should take in order to get the specific job they want. It is very easy to waste quite a bit of time and money; I know, because that is how I started my AppSec career… although I was quite lucky to have two professional mentors guiding me, which made it a lot easier. Not everyone has a mentor.
See how lonely she looks? She’s the ENTIRE security team! #WOCTechChat
Another cause (#4) of insecure software is that the security team is usually grossly outnumbered. According to several sources, there are usually 100 software developers for every 10 operations employees, for every single (1) security professional.
Let me repeat that. There are 100/10/1, Dev/Ops/Sec. With numbers like that you can’t work harder, you have to work smarter. Which is where we are going with this series.
Now we know the problem and several of the causes, what can we do about it? The short answer is DevSecOps, and the long answer is ‘read the rest of the blog series’.
For now though, let’s define DevSecOps, before we dive into what DevOps is, The Three Ways, and so much more, in the next article.
DevSecOps: performing AppSec, adjusted for a DevOps Environment. The same goals, but with different tactics and strategies to get us there. Changing the way we do things, so that we weave ourselves into the DevOps culture and processes.
This is the first in a many-part blog series on the topic of DevSecOps. Throughout the series we will discuss weaving security through DevOps in effective and efficient ways. We will also discuss the ideas that security is everybody’s job, that it is everyone’s duty to perform their jobs in the most secure way they know how, and that it is the security team’s responsibility to enable everyone else in their organization to get their jobs done, securely. We will define DevOps, ‘The Three Ways’, AppSec and DevSecOps. We will go in depth on the many ways we can adjust security activities for DevOps environments, while still reaching our goals of ensuring that we reliably create and release secure software.
In summary: we will discuss how to make security a part of our daily work.
It cannot be bolted on later, after the fact; it needs to be a part of everything.
But let’s not get ahead of ourselves, I have many more posts planned where I will attempt to sway your opinion my way.
Tanya Janca, also known as SheHacksPurple, presenting her ideas in Sydney, Australia, 2019. Artwork by the talented Ashley Willis.
Before we get too deep into anything I’d like to dispel some myths. Look at the image below. This is how *some* security professionals see DevOps.
This slide’s author, Pete Cheslock, is highly intelligent and experienced; this mention is not meant to insult him in any way. The slide is social commentary, not a literal depiction. That said, many people I’ve met truly feel this way: that DevOps engineers are running around making security messes wherever they go, and that we (security professionals) are left to clean up after them. I disagree with that opinion.
Luckily for me, my introduction to DevOps was at DevSecCon, where they introduced me to this image. Below you can see the security team teaching, providing tooling, and enabling the magical DevOps unicorns to do their jobs, securely. This is how I view DevOps: the security team enabling everyone, working within the confines of the processes and systems that all the other teams use.
This series will be loosely based off a conference talk which I have delivered at countless events, all over the planet, ‘Security is Everybody’s Job’. You can watch the video here.
When working in a DevOps environment security professionals are sometimes overwhelmed with just how fast the dev and ops teams are moving. We’re used to having more control, more time, and more… Time!
Personally, I LOVE DevSecOps (the security team weaving security throughout the processes that Dev and Ops are doing). Due to my enthusiasm I am often asked by clients when, how and where to inject various types of tests and other security activities. Below is my list of options that I offer to clients for automated testing (there’s lots more security to do in DevOps, this is only automated tests). They analyze the list together and decide which places make the most sense based on their current status, and choose tools based on their current concerns.
Tools that check your code as you write it, almost like a spell checker (sometimes called SAST, often delivered as an IDE plugin)
Proxy management & dependency tools that only allow you to download secure packages
API and other linting tools that explain where you are not following the definition file
Pre-commit hook:
secret scanning, let’s stop security incidents before they happen (there’s a sketch of a toy pre-commit hook below this list)
At the code repository level:
weekly schedule: SCA & SAST
In the Pipeline: Must be fast and accurate (no false positives)
secret scanning – again!
Infrastructure as Code scanning (IaC)
DAST with HAR file from Selenium or just passive
SCA (again if you like, preferably with a different tool than the first time)
Container and infrastructure scanning, plus their dependencies
Outside the Pipeline:
DAST & fuzzing, automate to run weekly!
VA scanning/infra – weekly
IAST – install during QA testing and PenTests, or in prod if you feel confident
SAST – test for everything for each major release or after each large change – manual review of results
Unit-Tests:
take the dev’s tests and turn them into negative tests/abuse cases
Continuously:
Vulnerability Management. You should be uploading all your scan data into some sort of system to look for patterns, trends, and (most of all) improvements
You do not need to do all of these, or even half of these. But please do some of them. WHP will try to put out a course on this later on in the year!
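To make the pre-commit idea from the list above concrete, here is a toy secret-scanning hook. A sketch only: the patterns are deliberately simplistic, and purpose-built secret scanners know hundreds more.

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block the commit if staged changes look like secrets.

Save as .git/hooks/pre-commit and make it executable.
"""
import re
import subprocess
import sys

SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(password|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]", re.I),
]

# Look only at the lines being added in this commit
diff = subprocess.run(
    ["git", "diff", "--cached", "--unified=0"],
    capture_output=True, text=True, check=True,
).stdout

hits = [line for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
        and any(p.search(line) for p in SUSPICIOUS)]

if hits:
    print("Possible secret(s) in staged changes -- commit blocked:")
    for line in hits:
        print("  " + line[:80])
    sys.exit(1)  # a non-zero exit code stops the commit
```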
Do YOU do testing in more places than this? Where, when and how? Do you have other tools you use that you find helpful? Please comment below and let us know!