Pushing Left, Like a Boss – Part 9: An AppSec Program

In my talk that this blog series is based on, “Pushing Left, Like a Boss”, I detailed what I felt an AppSec program should and could be. Since then, I've learned a lot and now see that there are quite a few activities that you can do, but it's the goals and the outcomes that actually matter. Our industry has also changed quite a bit since I wrote that talk (written in 2016, first seen in public 2017, this article first published in 2019 and republished here in 2021).

My first international talk, at AppSec EU, 2017. It feels so long ago.

My previous thoughts on what a basic AppSec Program should be:

For bonus items I had listed:

And for “extra special situations” I recommended the following (which will be explained in the next blog post):

  • Bug Bounty Programs
  • Capture the Flag Contests (CTFs)
  • Red Team exercises

Anne Gauthier of OWASP Montreal, myself (pre-Microsoft) and Nancy Gariché of Secure That Cert and OWASP DevSlop. In the background is Christian Folini of the OWASP CRS project. I had no idea how important these people would become to me at the time.


I'm going to preface this next part with two thoughts.

You can't do security “right” if you aren't doing IT “right”. If you can't publish fixes for a year or more because your processes are broken, if you are underwater in technical debt, or if your IT shop is already dysfunctional, this is going to be very hard. I suggest modernizing your systems and your entire IT team hand-in-hand with modernizing your security approaches. Don't give up, you can do this! Pick one item, aim for it, and keep going until you're doing it well.

If you have poor communication between the security team and the rest of IT, that will be another hurdle to work on. Culture plays a big part in ensuring your efforts are successful. I've released a bunch of videos on my YouTube channel on this topic; start with this one.

My new vision for an AppSec program:

  • A complete picture of all of your apps. Bonus: alerting, monitoring and logging of those apps.
  • Capability to find vulnerabilities in written code, running code, and 3rd party code. Bonus: the ability to quickly release fixes for said issues.
  • The knowledge to fix the vulnerabilities that you have found. Bonus: eliminating entire bug classes.
  • Education and reference materials for developers about security. Bonus: an advocacy program, a security champion program, and repeated reinforcement of a positive security culture.
  • Providing developers with security tools to help them do better. Bonus: writing your own tools or libraries.
  • Having one or more security activities during each phase of your SDLC. Bonus: having security sprints and/or using the partnership model (assigning and/or embedding a security person to/within a project team).
  • Implementing useful and effective application security tooling. Bonus: automating as much as possible to avoid errors and toil.
  • Having a trained incident response team that understands AppSec. Bonus: implementing tools to prevent and/or detect application security incidents (can be homemade), providing job-specific security training to all of IT, including what to do during an incident.
  • Continuously improve your program based on metrics, experimentation and feedback from any and all stakeholders. All feedback is important.

I'd love to hear your thoughts on my new application security ‘prescription'. Please comment below.

Up next in this series we will discuss the AppSec “extras” and special AppSec programs; I will discuss all the things in this article that I have not previously defined for you.

Pushing Left, Like a Boss – Part 8: Testing

Testing can happen as soon as you have something to test.

Suggestion: provide developers with security scanning software (such as OWASP ZAP), teach them to use it, and ask them to fix everything it finds before sending their work to QA.

You can add automated security testing into your pipeline, specifically:
  • VA scanning of infrastructure (missing patches/bad config); this applies to containers and VMs alike, though you often need different tools for each
  • 3rd party components and libraries for known vulnerabilities (SCA)
  • Dynamic Application Security Testing (DAST) - run only a passive scan so you don't make the pipeline too slow, or use a HAR file to control which parts are tested and which are not.
  • Static Application Security Testing (SAST) - do this carefully; it can be incredibly slow. Usually people only scan the delta in a pipeline (the newly changed code) and do the rest outside of the pipeline.
  • Security Hygiene - verify your encryption settings, that you are using appropriate security headers, that your cookie settings are good, that HTTPS is forced, etc.
  • Anything else you can think of, as long as it's fast. If you slow the pipeline down a lot you will lose friends in the Dev team.
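The "Security Hygiene" step above can be sketched as a tiny, fast pipeline check. This is a minimal illustration, assuming you capture response headers from a test request elsewhere in the pipeline; the required-header list and the `check_headers()` helper are my own examples, not a standard.

```python
# Minimal sketch of a "security hygiene" pipeline check: fail the build
# if the app's responses are missing basic security headers. The header
# list and check_headers() are illustrative, not a complete policy.

REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # force HTTPS on future visits
    "X-Content-Type-Options",     # stop MIME-type sniffing
    "Content-Security-Policy",    # restrict where content can load from
}

def check_headers(headers: dict) -> list:
    """Return the required security headers missing from a response."""
    present = {name.title() for name in headers}
    return sorted(h for h in REQUIRED_HEADERS if h not in present)

# Example: headers captured from a test request to your app.
observed = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
missing = check_headers(observed)
if missing:
    print("Missing security headers:", ", ".join(missing))
```

Because a check like this runs in milliseconds, it costs you nothing in pipeline time, which keeps the Dev team happy.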

Q&A at #DevSecCon Seattle, 2019

During the testing phase I suggest doing a proper Vulnerability/Security Assessment (VA), or a PenTest if you need management's attention, but early enough that if you find something, you can fix it before release. More ideas on this:

  • Repurpose unit tests into security regression tests: for each test, create an opposite test that verifies the app can handle malformed or malicious input
  • For each finding in the security assessment you performed, create a unit test that ensures the bug does not re-appear
  • Ensure developers run and pass all unit tests before even considering pushing to the pipeline
  • Perform all the same testing that you normally would: stress and performance testing, user acceptance testing (hopefully you started with A/B testing earlier in the process), and anything else you would normally do.

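The "opposite test" idea above can be sketched in plain Python. The `validate_username()` function and its tests are hypothetical examples; the point is that each happy-path unit test gains a sibling that feeds in malicious input and asserts it is rejected, so the fix can never silently regress.

```python
def validate_username(name: str) -> bool:
    """Hypothetical input-validation routine under test: accept only
    3-20 character alphanumeric usernames, reject everything else."""
    return name.isalnum() and 3 <= len(name) <= 20

# The "normal" unit test: well-formed input is accepted.
def test_valid_username():
    assert validate_username("alice42")

# The "opposite" tests (security regression tests): malformed or
# malicious input is rejected, and stays rejected in every future build.
def test_rejects_sql_injection():
    assert not validate_username("alice'; DROP TABLE users;--")

def test_rejects_script_tag():
    assert not validate_username("<script>alert(1)</script>")

def test_rejects_empty_and_oversized():
    assert not validate_username("")
    assert not validate_username("a" * 500)

# A test runner like pytest would discover these automatically;
# here we simply call them directly.
for test in (test_valid_username, test_rejects_sql_injection,
             test_rejects_script_tag, test_rejects_empty_and_oversized):
    test()
```

Once these run in your pipeline, a developer cannot push code that re-introduces the original bug without a red build telling them so.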
Penetration testing is an authorized simulated attack on a computer system, performed to evaluate its security. The idea is to find out how far an attacker could get. This differs from a security or vulnerability assessment in that the penetration tester prioritizes exploiting the vulnerabilities they find, rather than just reporting them. If you want to shock management and get some buy-in, a PenTest is the way to go. But if you just want to find what's wrong with your app and lower the risk to your systems, I would recommend a security/vulnerability assessment instead. It depends on your situation.

Up next in this series we will discuss what a formal AppSec program should include, followed by AppSec “extras” and special AppSec programs, which will end this series.

Pushing Left, Like a Boss – Part 7: Code Review and Static Code Analysis

This article is about secure code review and Static Application Security Testing (SAST). Static analysis is a highly valuable activity which can find a lot of security problems, far before you get to the testing or release stages, potentially saving both time and money.

Note: SCA is Software Composition Analysis, verifying that your dependencies are not known to be vulnerable. I have heard many say "static code analysis" when referring to code review/SAST tools, shortening it to SCA for simplicity. We will not do that; here, SCA refers only to software composition analysis.

When application security folks say ‘static' analysis, we mean analyzing written code, as opposed to ‘dynamic' analysis, which examines your code while it is running on a web server.

Since I wrote this article a few years ago, I have had a chance to do more in the code review space and spend some time working with SAST tools. Although my attention span is short, and I can be impatient at times (such as, for example, when I am awake), I can now spot several types of problems fairly easily. If you had asked me a few years ago if I would ever find code review pleasurable, I would have laughed, but now I find validating SAST results rather satisfying. It's funny how much our opinions can change over time.

Code review can happen during both the coding and the testing phases of the system development life cycle.

There are two options for doing code review: manually or with a tool. There are pros and cons to each, and using both will get you the best results.

When reviewing code manually for security you don't read every line of code; you review the security controls to ensure they are in all the places they should be and that they are implemented correctly. Although I have not completed hundreds of secure code reviews in my career, I do recall my delight at discovering that there was no input validation on a data field, or that an app was using inline SQL instead of stored procedures, both of which are big no-nos. It was so obvious once I knew where to look and what to look for. However, most code reviews are not so simple, and many bugs are difficult or nearly impossible to spot with the naked eye.

Note: when I say “only review the security controls” I mean things like a login, input validation, authorization, authentication, integration points, etc. Anything that has to do with the security of the app.
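To illustrate the inline-SQL no-no mentioned above, here is a small, self-contained sketch using Python's sqlite3 module. Parameterized queries play the role that stored procedures play in the story above: both keep user input out of the query's structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Inline SQL (the "big no-no"): the payload changes the meaning of the
# query, so the WHERE clause becomes always-true and every row matches.
inline = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Parameterized query: the payload is treated as plain data, so it
# matches no username and returns nothing.
parameterized = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(inline)         # [('alice',)]
print(parameterized)  # []
```

This is exactly the kind of finding a manual reviewer can spot quickly: any query built with string concatenation deserves a closer look.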

When using a tool for code review you would use something called a ‘static code analyzer', or SAST (Static Application Security Testing) tool. This special kind of software parses your code into areas of concern and attempts to follow every possible outcome. It takes a lot of processing power and can take hours or even days to complete. It then produces a report containing approximately 60-80% false positives.

Note: since I first wrote this article, new types of static code analysis tools have been created that allow for very fancy grepping (regex searching) of the code base, with templates to help you find problematic code. How cool is that!

I know what you are thinking right now: 80% false positives?! Why would anyone want to use a tool like that? Let me explain.

The key to looking at the results of a SAST tool is that the items it lists are not answers; they are hints about where to look for problems, which the code reviewer (hopefully a security expert) can investigate. This means instead of reading 20,000 lines of code, the code reviewer uses the tool, it finds 200 ‘clues', and from those 200 ‘clues' they end up finding 20 real bugs. Many of those bugs they could not have found with just their eyes, because SAST tools can go several layers deep into the code in a way humans just can't.

When performing code review it is possible to find all sorts of other problems with your application, not just security issues. During one of my projects the code reviewer found several memory leaks. When we fixed them, our application became lightning fast, which made our project team look amazing. There is so much more than just security problems that a good code reviewer can find; it is definitely a worthwhile task if you want to build truly resilient and secure software.

My best 'superwoman' pose at #DevSecCon Seattle, Sept 2019

Although we already discussed this in Part 5.2, Using Safe Dependencies, I'm going to bring it up again: everyone needs to verify the security of their 3rd party code/components/frameworks/libraries/whatever-you-want-to-call-the-code-in-your-app-that-you-didn't-write. You must verify that these 3rd party components are not known to be vulnerable. When I say ‘known to be vulnerable', I mean there is information currently available on the internet documenting what the problem is and/or how to exploit it.
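At its core, what SCA tooling does can be sketched in a few lines: compare your dependency versions against an advisory database. The package names and advisory data below are invented for illustration; real tools query feeds such as the National Vulnerability Database or the GitHub Advisory Database.

```python
# Hypothetical advisory data: package -> set of affected versions.
# Real SCA tools pull this from public vulnerability feeds.
KNOWN_VULNERABLE = {
    "exampleframework": {"1.0.0", "1.0.1"},
    "oldcryptolib": {"2.3.0"},
}

def audit(dependencies: dict) -> list:
    """Return (package, version) pairs with a known vulnerability."""
    return [
        (pkg, ver)
        for pkg, ver in dependencies.items()
        if ver in KNOWN_VULNERABLE.get(pkg.lower(), set())
    ]

# Example: the dependency list of a hypothetical app.
my_app = {"exampleframework": "1.0.1", "oldcryptolib": "2.4.0"}
for pkg, ver in audit(my_app):
    print(f"{pkg} {ver} is known to be vulnerable - upgrade it!")
```

The hard part in practice is not this comparison, it is keeping the advisory data fresh and actually upgrading the flagged components, which is why this check belongs in your pipeline rather than on someone's to-do list.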

Many organizations and industry spokespeople create a lot of fear, uncertainty and doubt (FUD) around zero days (vulnerabilities in popular software for which there is no existing patch), advanced persistent threats (APT - someone living on your network for an extended period of time, spying on you), or very advanced attackers, such as nation states. In reality, almost all serious security incidents are a result of our industry not keeping up with the basics: missing patches, people with admin privileges clicking on a phishing email while logged in, and well-known (and therefore preventable) software vulnerabilities, such as using code with known vulnerabilities in it. Essentially: basic security hygiene.

Bonus resource: My friend Paul Ionescu created a code review series.

Up next we will talk about the Testing Phase of the SDLC!