DevSecOps Worst Practices – The Boy Who Cried Wolf


The first terrible practice that we will examine in this series is breaking builds on false positives. Whenever I explain this to people who are new to DevOps, I remind them of the story of ‘the boy who cried wolf’. In the age-old story, a young boy lies and says a wolf has come into his village, scaring all the villagers. The villagers snap into action to protect everyone from the wolf. After a brief search they realize there is no wolf, and that the child was lying. The boy plays this trick on the villagers more than once, and they become very angry with him. At the end of the story, a wolf does enter the village, and the boy attempts to warn everyone, but no one is willing to listen to him anymore. He has been labeled a liar, someone to ignore. But this time the wolf is real, and people are hurt.

The takeaway of the story is that no one wins when trust is broken. People tell the story to children, to discourage them from lying. I tell the story to security professionals, so that we prioritize building trust with development teams, and thus avoid having our warnings ignored.

Originating from the word for a paper lantern, Andon is a term that refers to an illuminated signal notifying others of a problem within the quality-control or production streams. Activation of the alert – usually by a pull-cord or button – automatically halts production so that a solution can be found.

Toyota

DevOps pipelines are built to model real-life, physical assembly lines. Each assembly line has something called an “Andon cord”, which is pulled when there is an emergency to stop the line. The button or pull cord can save lives, and millions of dollars (imagine cars accidentally piling on top of each other and the potential cost). The cord is only pulled if something extremely dangerous is happening. When we “break the build” in a DevOps pipeline, we are pulling a digital Andon cord, which stops the entire process from continuing. And when we do this, we had better have a good reason.

When a test fails in the CI/CD pipeline, it doesn’t always break the build (stop the pipeline from continuing). It depends on how important the finding is, how badly it failed the test, the risk profile of the app, etc. It breaks the build if the person who put the test into the pipeline feels it’s important enough to break the build. That it’s (literally) a show-stopper, and that they are willing to stop every other person’s work as a result of this test. It’s a big decision.
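
To make ‘breaking the build’ concrete, here is a minimal sketch (my own illustration, not any particular vendor’s tool) of the kind of gate script a team might run as a pipeline step. It reads a scanner’s JSON report and exits non-zero, which is what most CI systems treat as a failed build, only when a finding meets the severity bar the team chose; everything below the bar can still be reported as a notification instead of stopping everyone’s work. The report format and field names are assumptions for the example.

```python
import json
import sys

# Hypothetical report format: a JSON list of findings, each with a "severity"
# field. Real scanners each have their own output format; adjust the parsing.
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
BREAK_BUILD_AT = "high"  # the team's chosen bar for pulling the digital Andon cord


def main(report_path: str) -> int:
    with open(report_path) as report_file:
        findings = json.load(report_file)

    blocking = [
        item for item in findings
        if SEVERITY_ORDER.get(item.get("severity", "info"), 0)
        >= SEVERITY_ORDER[BREAK_BUILD_AT]
    ]

    for item in blocking:
        print(f"BLOCKING: {item.get('rule_id')} in {item.get('file')}")

    # A non-zero exit code stops the pipeline; zero lets it continue.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```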

Now imagine you have put a lot of thought into all the different tests in your pipeline, and into whether each one is important enough to break the build, or should just let it continue and send notifications or alerts instead. You and your team use this pipeline 10+ times a day to test your work, and you depend on it to help you ensure your work is of extremely high quality.

Now imagine someone from the security team comes along and puts a new security tool into your carefully-tuned pipeline, and it starts throwing false positives. All the time. How would that make you feel? Probably not very good.


I have seen this situation more times than I care to count, and (embarrassingly) I have been the cause of it at least once in my life. While working on the OWASP DevSlop project I added a DAST to our Patty-the-pipeline module (an Azure DevOps pipeline with every AppSec tool I could get my hands on). One evening Abel updated the code, and he messaged me to say my scanner had picked something up. I didn’t notice his message, went to Microsoft the next day to give a presentation for a meetup, and… found out on stage.

When my build broke I thought “OH NO, HOW EMBARRASSING”. But then I had another thought, and proudly announced “wait, it did what it was supposed to do. It stopped a security bug from being released into the wild”. Then we started troubleshooting (40+ nerds in a room, of course we did!), and we figured out it was a false positive. Now that really was embarrassing… I had been trying to convince them that putting a DAST into a CI/CD was a good thing. I did not win my argument that day. Le sigh.

Fast forward a couple of years, and I have seen this mistake over and over at various companies (not open source projects made up of volunteer novices, but real, live, paid professionals). Vendors tell their customers that they can click a few buttons and voilà! They are all set! In fact, we should generally test tools and tune them before we put them into another team’s pipeline.

Tuning your tools means making lots of adjustments until they work ‘just right’. Sometimes this means suppressing false positives, sometimes this means configuration changes, and sometimes it means throwing it in the garbage and buying something else that works better for the way your teams do their everyday work. 
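
As an example of what ‘suppressing false positives’ can look like in practice, many teams keep a baseline or suppression file of findings they have triaged as noise, and filter those out before deciding anything about the build. Here is a small sketch of that idea; the ‘fingerprint’ field and file names are made up for illustration, and many commercial and open source tools have their own built-in mechanism for this.

```python
import json


def load_suppressions(path: str) -> set:
    """Load fingerprints of findings the team has triaged as false positives."""
    with open(path) as suppression_file:
        return set(json.load(suppression_file))


def filter_findings(findings: list, suppressed: set) -> list:
    """Keep only findings that have not been marked as false positives."""
    return [item for item in findings if item.get("fingerprint") not in suppressed]


# Usage sketch:
# suppressed = load_suppressions("suppressions.json")
# with open("scan-report.json") as report_file:
#     findings = json.load(report_file)
# actionable = filter_findings(findings, suppressed)
```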


In 2020, I was doing consulting, helping with an AppSec program, and their only full-time AppSec person proudly told me that they had a well-known first-generation SAST tool running in CI/CD on every build, and that if it found anything that was high severity or above, it broke the build. I said “COOL! Show me!” Obviously I wanted to see this awesomeness.

We logged into the system and noticed something weird: the SAST tool was installed into the pipeline, but it was disabled. “That’s weird” we both said, and went on to the next one. It was uninstalled. HMMMMM. We opened a third, it was disabled. We sat there looking and looking. We found one that was installed and running, but it was just in alerting mode. 

The next time I saw him his face was long. He told me that in almost 100% of the pipelines his tool had been uninstalled or disabled, except in 2 or 3 where it was in alerting mode (running, but unable to break the build). We investigated further to find out that the teams that had it in alerting mode were not checking the notifications; none of them had ever logged into the tool to see the bugs it had found.

To say the guy was heartbroken would be an understatement. He had been so proud to show me all the amazing work he had done. It had taken him over a year to get this tool installed all over his organization. Only to find out, with a peer watching, that behind his back the developers had undone his hard-earned security work. This was sad, uncomfortable, and I felt so much empathy for him. He did not deserve this.

We met with the management of the developer teams to discuss. They all said the right things, but meeting after meeting, nothing actually changed. After about 3 months the AppSec guy quit. I was sad, but not surprised at all. He was great. But the situation was not.

I kept on consulting there for a while, and discovered a few things:

  1. The SAST tool constantly threw false positives, no matter what the AppSec guy had done, even while working very closely with the vendor for over a year. It was not him; it was the tool.
  2. The SAST tool had been selected by the previous CISO, without consultation from the AppSec team (huge mistake), and was licensed for 3 years. So the AppSec guy HAD to use it.
  3. The AppSec guy had spent several hours a week just trying to keep the SAST server up and running, and it was a Windows 2012 server (despite it being 2020, the SAST provider did not support newer operating systems). He also wasn’t allowed to apply most patches, which meant he had to add a lot of extra security to keep it safe. It was not a great situation.
  4. The developers had been extremely displeased with the tool, having it report false positives over and over, and they turned it off in frustration. It was not malice or anger; they felt they couldn’t get their jobs done. They really liked the AppSec guy. When I talked to them about it, they all felt bad that he had quit. It was clear they had respected him quite a lot, and had given the tool more of a chance because of him.

It took over a year, but I eventually convinced them to switch from that original SAST to a next generation SAST (read more on the difference between first and second gen here). The new tool produced almost entirely true positives, which made the developers a lot happier. It was also able to run upon code check-in, which worked better for the way they liked to do their work in that shop. By the time I left, it was scanning every new check-in, then sending an email to whoever checked the code in, with a report if any bugs were introduced. Although I didn’t have it breaking builds by the time I left, we went from zero SAST to SAST-on-every-new-commit. And devs were actually fixing the bugs! Not all the bugs, but quite a few, which was a giant improvement from when I arrived. To me this was a success.


Avoiding this fate…

To avoid this fate, carefully pick your toolset (make a list of requirements with the developers, and stick to it), then test it out first on your own, then with developers, before purchase. Next, test the tool manually with a friendly developer team and work out as many kinks as you can before putting it into a CI. Then put it in alerting mode in the CI with that team, again watching for issues. If it runs well, start adding it for more teams, a few at a time. Pause if you run into problems, work them out, then continue.
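
If you wrap your scanner in a gate script like the sketch earlier in this post, ‘alerting mode’ can be as simple as a flag that reports findings but never fails the job; you remove it once the team trusts the results and is ready to let the tool break the build. A tiny sketch of that idea (the flag name is an assumption):

```python
import sys

# Start every new team in alerting mode; remove the flag once the tool has
# earned the team's trust and you are ready to let it break the build.
ALERT_ONLY = "--alert-only" in sys.argv


def decide_exit_code(blocking_findings: list) -> int:
    for finding in blocking_findings:
        print(f"Security finding: {finding}")
    if ALERT_ONLY:
        return 0  # notify (email, chat, ticket) but let the pipeline continue
    return 1 if blocking_findings else 0  # blocking mode: non-zero stops the pipeline
```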

Tip: You can also set up most static tools (ones that look at written code, not running code) to automatically scan your code repository. This is even further ‘left’ than the CI/CD, because it is earlier in the system development life cycle (SDLC). You can scan the code as it is checked in, or on a daily, weekly, or monthly basis, whatever works best for you and your developers!

The next post in this series is Untested Tools.

The Difference Between SCA and Supply Chain Security

Right now, the concept of the software supply chain and securing it is quite trendy. After the SolarWinds breach, the attack on the crypto wallet, and the Log4j fiasco, the entire world appears to be focused on securing the software supply chain. I’m not complaining. If anything, as an application security nerd, I am quite pleased that I am finally getting buy-in that these things need to be protected, and that vulnerable dependencies need to be avoided. Folks, this is GREAT.


Software composition analysis, often called SCA, means figuring out which dependencies your software has, and which of those contain vulnerabilities. When we create software, we include third-party components, often called libraries, plugins, packages, etc. All third-party components are made up of code that you and your team did not write. That said, because you have included them inside of your software, you have added (at least some of) their risk into your product.
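
Conceptually, an SCA tool reads your dependency manifest or lock file, then compares each pinned package and version against a database of known vulnerabilities. Here is a toy sketch of that idea; the package name and advisory ID are made up, and real tools query real advisory feeds (such as OSV or the NVD) rather than a hard-coded list.

```python
# Toy illustration of what SCA does under the hood.
KNOWN_VULNERABLE = {
    # (package, version): advisory id -- made-up data for illustration only
    ("example-logging-lib", "2.14.0"): "EXAMPLE-2021-0001",
}


def parse_requirements(text: str) -> list:
    """Parse simple 'name==version' lines, like a Python requirements.txt."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.strip(), version.strip()))
    return deps


def find_vulnerable(deps: list) -> list:
    return [
        f"{name}=={version} matches advisory {KNOWN_VULNERABLE[(name, version)]}"
        for name, version in deps
        if (name, version) in KNOWN_VULNERABLE
    ]


manifest = "example-logging-lib==2.14.0\nrequests==2.31.0"
print(find_vulnerable(parse_requirements(manifest)))
```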

A ‘supply chain’ means all of the things that you need to create an end product. If you were creating soup, you would need all of the ingredients of the soup, you would need things like pots and pans in order to cook and prepare the ingredients of the soup, you would need a can or a jar to put it in, and likely a label on top to tell everyone what type of soup it is. All of those things would be considered your supply chain. 


Imagine one of the ingredients in your soup is flour. Chances are the wheat was grown in a field, then harvested, then ground down into flour, then perhaps processed even further, and only then sent to you, so that you could create your soup. Any of the steps along the way could have been contaminated, or perhaps the wheat could have rotted, or been otherwise spoiled. You have to protect the wheat all along the way before it gets to you, and once you make the soup, in order to ensure the end product is safe to eat.

Protecting all of the parts along the supply chain, from ensuring that there aren’t terrible chemicals sprayed on the ingredients as they grow, to ensuring that the can or jar that you put the soup into has been properly sterilized, is you securing your supply chain.

When we build software, we need to secure our software supply chain. That means not only ensuring the third-party components that we’re putting into our software are safe to use, but also that the way we are using them is secure [more on this later]. We also have to ensure that how we build the software is safe, and this can mean using version control to store our code, ensuring any CI/CD that we use is protected from people meddling and changing it, and ensuring that every single other tool we use and process we follow is also safe.

If you’ve followed my work a long time, I am sure you know that I think this includes a secure system development life cycle (S-SDLC). This means each step of the SDLC (requirements, design, coding, testing and release/deploy/maintain) contains at least one security activity (providing security requirements, threat modelling, design review, secure coding training, static or dynamic analysis, penetration testing, manual code review, logging & monitoring, etc.). A secure SDLC is the only way to be sure that you are releasing secure software, every time.

Tanya Janca

With this in mind, the difference between the two is that SCA only covers third-party dependencies, while supply chain security also covers the CI/CD, your IDE (and all your nifty plugins), version control, and everything else you need in order to make your software.

It is my hope that our industry learns to secure every single part of the software supply chain, as opposed to only worrying about the dependencies. I want securing these systems to be a habit; I want it to be the norm. I want the default IAM (identity and access management) settings for every CI/CD to be locked down. I want checking your changes into source control to be as natural as breathing. I want all new code check-ins to be scanned for vulnerabilities, including their components. I want us to make software that is SAFE.

If you read my blog, you are likely aware that I recently started working at Semgrep **, a company that creates a static analysis tool, and recently released a software supply chain security tool. If you’ve seen their SAST tool, you know they’re pretty different than all the other similar tools on the market, and their new supply chain tool is also pretty unique: it tells you if your app is calling the vulnerable part of your dependencies. They call it ‘reachability’. If your app is calling a vulnerable library, but it’s not calling the function inside of that library where the vulnerability lives, you’re usually safe (meaning it’s not exploitable). If you ARE calling the function that is inside your library where the vulnerability is located, there’s a strong likelihood that the vulnerability could be exploitable from within your application (meaning you are probably not safe). We added this to the product to help teams prioritize which bugs to fix, because although we all want to fix every bug, we know there isn’t always time. In summary, if the vulnerability is reachable in your code, you should run, not walk, back to your desk to fix that bug. 
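
To illustrate the reachability idea with a toy example (this is my simplification, not how Semgrep actually implements it): once you know which function inside a dependency holds the vulnerability, you can check whether your own code ever calls it. A real tool has to resolve imports, aliases, call chains, and more; the module and function names below are invented.

```python
import ast

# Invented names for illustration: pretend the advisory says the vulnerability
# lives in unsafe_yaml.load_unsafely().
VULNERABLE_MODULE = "unsafe_yaml"
VULNERABLE_FUNCTION = "load_unsafely"


def calls_vulnerable_function(source: str) -> bool:
    """Return True if the source ever calls unsafe_yaml.load_unsafely(...)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == VULNERABLE_FUNCTION
            and isinstance(node.func.value, ast.Name)
            and node.func.value.id == VULNERABLE_MODULE
        ):
            return True
    return False


app_code = """
import unsafe_yaml
config = unsafe_yaml.load_unsafely(user_input)
"""
print(calls_vulnerable_function(app_code))  # True -> the vulnerability is reachable
```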

Me, again
I have worked with more than one company that had programmers who did not check in their code regularly (or at all) to source control. Let me tell you, every single time it was expensive! Losing years of hard work will break your heart, not just your budget. Supply chain security matters.

Join me in this adventure by starting at your own office! Whether you have budget or not, there are paid and free tools that can help you check to see if your supply chain is safe! You can also check some of this stuff manually, easily (the IAM settings on your CI/CD are just a few clicks away). Reviewing the setup for your systems, and ensuring you have everything important backed up, will make your future less stressful, trust me. 

You can literally join me on this adventure, by signing up for the Semgrep newsletter! The Semgrep Community is about to launch live free events, including training on topics like this, and we can learn together. First email goes out next week, don’t miss out!

~ fin ~


** I work at Semgrep. This means I am positively biased towards our products and my teammates (I think they are awesome!). That said, with 27+ years’ experience in IT, being a best-selling author and a world-renowned public speaker, there are a LOT of companies that would be happy to let me work for them. I chose Semgrep for a reason; my choice to work there was intentional. Even so, I will try not to be annoying by only talking about work on my blog, promise!

We Hack Purple Joins Forces with Semgrep!

Hey there, fellow developers, AppSec pros, and InfoSec professionals! Have you heard the news? Semgrep and We Hack Purple are joining forces to create an all-star team for securing the world’s software!

At first glance, you might be wondering why this is such a big deal. Well, let me tell you – this is not your average collaboration. Semgrep is a well-known and respected provider of open-source static analysis tools that help identify potential security vulnerabilities in code early on. They also just launched their earth-shattering supply chain security tool that tracks which vulnerabilities are reachable in your code. On the other hand, We Hack Purple is a platform that offers training, community, and practical advice to developers, AppSec Folks, and the rest of IT on how to build secure applications.

By combining their respective strengths, Semgrep and We Hack Purple are creating a powerhouse of application security expertise. With Semgrep’s steadily growing toolset and stellar security research team, plus We Hack Purple’s extensive library of security and developer training, organizations will have application security support like never before. This makes it easier for developers to write secure code throughout the entire system development life cycle.

Will you please do me a small favour? Sign up for the Semgrep Newsletter. I’m going to be inviting everyone to free training and events, sharing content, and more via the newsletter, and I want to ensure you get it. 😀

Tanya Janca

You might be wondering about the impact on We Hack Purple’s existing offerings. Well, fear not. We Hack Purple Community will continue to be led by Tanya Janca, and will slowly meld together with the Semgrep community, with new content and events being joint efforts. This means EVEN MORE community, events, and content! The academy will remain open for current students, giving everyone a year to finish their courses, but will be closed to new students. Most excitingly: we will be offering free live security training to all Semgrep customers, as well as free training for the public! By joining together, we’ll expand our reach and offer even more free and valuable application security resources for developers and security professionals alike.

Overall, this is an exciting development in the world of application security. Through collaboration and innovation, Semgrep and We Hack Purple have shown us how much more we can achieve when we work together. Let’s raise our coffee cups and toast to a brighter future for application security!


What’s the difference between Product Security and Application Security?

Recently I have started seeing new job titles in the information security industry, and the one that stuck out the most to me is product security engineer. I started seeing people who were previously called application security engineers having their titles changed to product security, and I was curious. Some of you may remember that I had Ariel Shin on the We Hack Purple podcast, and although she does product security and I did ask her a few questions about it, I wasn’t satisfied. I wanted to learn more!


I also started a Twitter thread, which you can read here.

From what I understand, after speaking to many people about this, product security means a person who is dedicated solely to the security of one or more products. This means that if the product has hardware and software, they must understand how to secure both hardware and software. They also need to be extremely well versed in the threats that it faces, the personalities of the users, and anything else that might affect the reliability, confidentiality, and integrity of that system.

An application security professional is concerned with securing the software of the entire organization. If they happen to only have one product, and the product is software, they could be called an application security professional or a product security professional. However, most of the time an application security engineer is expected to do projects with a broader scope, trying to secure several/all applications, trying to ensure that every project follows the secure system development life cycle, and all the other things you’ve heard me drone on about in this blog, in my talks, in my book, etc.

A product security professional, on the other hand, dives extremely deep into one or more products. For example, imagine a company that does e-commerce. It has one gigantic site, where merchants and purchasers both use the site in different ways, but it’s one big system. It may contain APIs, a beautiful GUI front end, one or more databases, a serverless app, and maybe even an integration with Stripe to run the credit cards for them. This could be called one big product, and if a product security person was assigned to it, they would be expected to understand how the entire system works, and how to keep the system, its data, and all of its users safe.

From Adrian Sanabria we have this definition, which I also agree with:

Looking at it from a business/organizational perspective: AppSec is a sub-branch of infosec. Product security is a sub-branch of product.

Adrian Sanabria

Although you may not have heard of a product security professional who reports directly to the product group only (they often report to the information security team, but are embedded in the product team), this also makes a lot of sense. Embedding the product security person in with the product team helps ensure from the very first meeting that the product is secure. This is a huge #SecurityWin!

Continuing down this line of thought, this would mean that the product security person would also be responsible not only for the software, but the infrastructure it’s hosted on, the entire supply chain that leads up to the building of that software, hardware, deployment, etc. Way more than just the software component.

Product security includes the security features of products.   

Ray LeBlanc, of the Hella Secure Blog

Product security being responsible for the product itself having security features for the end users is also an interesting idea, which I had not thought of before Ray pointed it out. I like this as well.

Facts about Product Security

  • ProdSec professionals are embedded in the product team
  • ProdSec pros need to know:
    ⁃ Architecture and design
    ⁃ Threat modelling
    ⁃ Secure coding principles
    ⁃ How to use the basic AppSec toolset: DAST, SAST, and SCA
    ⁃ How and when to hire a pentester
    ⁃ All the steps of the secure SDLC, and how to arrange for them to get done (even if they hire it out)
    ⁃ Any policies you have that apply to your product
    ⁃ The product inside and out

To echo/add: Product Security (aka Platform Security) could involve more complex external IAM functions, secrets and cryptographic infrastructure, very closely interlinked and overlapping depending on the org.

@vect0rx

Another resource that may interest you, a podcast with Anshuman Bhartiya on this topic: https://tromzo.com/podcasts/anshuman-bhartiya-product-security. He was also previously on the We Hack Purple Podcast, where we spoke about SAST.

I hope that clarifying the difference between #ProdSec and #AppSec has been helpful. Do you agree? Do you disagree? We’d love to hear from you in the comments below!