The training you have selected is too ‘off topic’

In this series we are discussing how to get your technical training approved at work. This is not the first article; you may want to go back and read the series from the start.

In the previous article, we talked about how we need to explain to our boss not only which training we want, but also how to overcome any objections if we are going to get it approved. Let’s look at the second objection in our list.

Back in the day I requested approval to take a web-app hacking course, and I recall my boss saying, “You don’t need to learn that; you just need to run the scanner.” My job was web app penetration testing, and it was clear my boss had no idea what I did all day. He seemed to think that manual security testing was unnecessary, and at the time I had no idea how to explain that we needed a lot more if we wanted to ensure our apps were truly secure. I ended up watching a lot of videos on the internet, playing around, and wasting a ton of time.

When I switched over into Application Security, it got even more difficult, as most of the courses only offered to teach me “the OWASP Top Ten” (which I already knew well), and then the main security controls (authentication, authorization, encryption, identity) and little else. I wanted to know how to do my job, not theory and not basic web app hacking (I already knew that). Plus, they always seemed to go really deep into encryption, but I already knew my teams would never be writing their own encryption, so I didn’t get why they felt the need to always cover it…

I found the entire situation infuriating. I felt like I couldn't win.

Image of a woman holding a computer, provided by #WOCTechChat

If you are asking your boss to approve training, it must fall into one of two categories:

1. It will help you do your current/new job better

2. It will help you grow within the organization so you can get a promotion someday (also known as career development)

Note: If you are a Ruby developer and you asked your boss to pay for you to take a basket weaving course, this blog article is not going to help you. That said, if you are a Ruby developer and you asked your boss to pay for a secure-coding-in-Ruby course or an application security course, then this article can help.

Remember I said in the first article that you needed to read the syllabus and keep track of what’s in the course and how it relates to your job? Now it’s time to get that info so we can write your justification letter. Just like in the previous article, I am going to use the Application Security Foundations Program from We Hack Purple as the example, but you should be able to use whichever training you have chosen.

Dear Boss,

I want to take the We Hack Purple Application Security Foundations Program for my training this year. I know you told me that it’s too ‘off topic’ for my job, but I wanted to explain to you how it will definitely help me do my job better.

Right now, our InfoSec team keeps bringing in a PenTester to test our apps right before we release them. They always find 100 things wrong, because none of our devs know anything about security. We always end up with late projects and everyone freaking out, because it’s so last minute. Lots of overtime and stress. I could help teach them what they need to know, so we could create more-secure software. The PenTesters will find a lot less wrong with our apps.

Also, the program comes with a copy of Alice and Bob Learn Application Security. I know the one copy we have is currently constantly being used by my team for reference, so having a second copy would be really great.

If I took this course, I would know how we could do this better. The dev team could do some testing ourselves (carefully), and other stuff to make sure our apps are in better shape by the time the PenTester comes. The program has a secure coding guideline we could adopt, and even an API best practices guide. We currently have no idea how to secure our APIs, and we keep reading on the internet and we’re lost. This course would help me understand so much! And then I could be the ‘security champion’ on our dev team, the one everyone can turn to when they need help. I know you feel this is outside my job description, but someone has to do it. I want that someone to be me.

Sincerely,
Your-Name-Here

Up next we will cover Objection 3: There’s no time with your current workload for you to take training.

How to get your boss to approve the training you want

*This is a series.*

We’ve all been there. There’s a training you really want to take, but your boss isn’t so sure. This can be because it’s out of budget, they feel it’s too ‘off topic’ from your current job, there’s no time with your current workload, they are afraid they will lose you if you have new skills, or some other reason they won’t tell you. Let’s go through all of these reasons and figure out how YOU can get your training approved.

Photo: #WOCTechChat

Note: I run my own training company, We Hack Purple, that specializes in Application Security, Secure Coding and DevSecOps training. While I am definitely hoping this article helps our customers, I’m also hoping it helps everyone else who needs training! For our examples we will use the Application Security Foundations Program from We Hack Purple, and we will try to justify taking it to your boss.

The first thing you need to do is make sure you are selecting the *best* training for your specific job or career development. Don’t take the popular one, or ‘the cool one’ that people are talking about on Twitter. Evaluate very carefully which one will help you level-up in your career and your current job.

Next, read about the content of the training you are taking. Make notes of what’s in there and keep the syllabus handy, as you will likely need to reference it as you write your justification. You also want links to a few other courses for comparison, both to explain why the one you have selected is better and why it’s (hopefully) more cost-effective.

Let’s start creating our defenses for your boss’s potential objections.

Objection 1: We don’t have the budget/it’s too expensive.

This is the one that I personally have received the most often in my career. I have actually had a boss laugh in my face when I suggested one single course (because it would have cost my combined training budget for 5 years). I explained that cyber security courses are quite costly, and all of my bosses continued to reject my requests. I ended up selecting training from several different places that was cheaper, but nowhere near as good as what I had asked for. At the time, I didn’t know how to get around this hurdle.

With a little more industry experience and a chance to see a lot more training, I realized that I needed to explain that the value of what we were getting was greater than what we were spending. Let me explain using the We Hack Purple Application Security Foundations Program as the example (but this should work with whatever you have chosen, if you have chosen the best training for your situation).

Dear Boss,

I want to take the Application Security Foundations Program from We Hack Purple for my training this year. I know you feel it’s too expensive and that we might not have the budget, but let me explain how I think it will save us more money than it costs.

We keep hiring consultants to help us with our AppSec Program, and that is very expensive, and we haven’t been getting the results we want; they show up and write one policy or one guideline, then leave. This program will provide some starter policies, standards and guidelines, so we don’t need to pay that consultant anymore. After taking the training I will know what to do and have tools to start with, so I can hit the ground running.

We also keep changing our strategy, because we haven’t been getting the results we want, and the dev teams don’t seem to be ‘on board’ with what we have been doing. This program will help me build and plan an entire AppSec program throughout the three courses; in Level 2 of the program there’s an entire module to teach me how to support our culture change (advocacy), how to build a security champions program, AND how to make presentations that aren’t the death-by-PowerPoint we are used to giving. They even show us how to measure the effectiveness of our program, so we know if the strategy we are using is actually *working*, and when we need to change or stay the course. Right now, we are just guessing at what to do to make sure our software is secure, but with this program, I would know.

I realize that $999 USD is a lot, and we are a small company. But this is the only training I could find like this on the internet, one that will teach me how to build and launch an AppSec program. That’s what the company needs me to do. Please approve this training so I can get started.

Sincerely,

Your-Name-Here

Up next we will talk about training that is ‘too off topic'.

Security is Everybody’s Job — Part 6 — The Second Way

The Second Way of DevOps is fast feedback. In security, when we see this we should all be thinking the same thing: Pushing Left. We want to start security at the beginning of the system development life cycle (SDLC) and ensure we are there (providing feedback, support and solutions) the whole way through!

Pushing Left, Tanya’s Favorite Thing

Fast feedback loops mean getting important information to the right people, quickly and regularly. One of the main reasons that Waterfall projects failed in the past was the lack of timely feedback; no one wants to find out twelve months after they made a mistake, when it's too late to fix it.

The goal of security activities in a DevOps environment must be to shorten and amplify feedback loops so security flaws (design issues) and bugs (code issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job. These DevOps people are really onto something!

Let’s go over several ideas of how to achieve this.

Activities to create fast feedback loops.

  • Automate as much as humanly possible. Inside or outside the pipeline, automation is key.
  • Whenever possible integrate your tools with the Dev and Ops teams’ tools. For instance, have the issues found by your IAST tool turned into tickets in the developers’ bug tracking system, automagically.
  • When you have a PenTest done, check all your other apps for the things found in the report, then create unit tests to look for these things and prevent them from coming back.
  • Rename insecure functions or libraries as “insecure” with a wrapper, so programmers see immediately that there is an issue.
  • Add security sprints to your project schedule (to fix all security bugs in backlog).
  • Ask the Dev and Ops teams what they are concerned about (in relation to security), so you can fix any problems the security team might be causing them.
  • Add important security tests that are quick and accurate to the pipeline. For instance, scan for secrets in the code that is being checked in. That is an important test!
  • If an important security test fails in the pipeline, the continuous integration server must break the build, just like quality tests do. This is loud feedback.
  • Create a second pipeline that doesn’t release any code, but runs all the long and slow security tests, then have the security team review the results after and turn the important things into tickets for the Devs.
  • Tune all security tools as much as possible and validate all results so that the feedback you are giving is *accurate*. There is no point in sending lots of feedback if half of it is wrong.
  • Work with developers to create negative unit tests (sometimes known as abuse tests). Create copies of regular unit tests, rename them with “Abuse” at the end, then add malicious payloads and ensure that your app fails gracefully and handles bad input well.
  • Have your security tools automatically send their results to a vulnerability management tool such as Defect Dojo or ThreadFix to keep metrics, and use them to improve all of your work. You need feedback too.
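To make the negative unit test (abuse test) idea concrete, here is a minimal sketch in Python. The `greet` function and the payload are my own invented example, not from any particular codebase:

```python
import html

def greet(name: str) -> str:
    """Hypothetical app function: builds a greeting for display in a web page."""
    # Output encoding: escape HTML metacharacters so user input can't become markup.
    return "Hello, " + html.escape(name)

# Regular unit test (the happy path).
def test_greet():
    assert greet("Alice") == "Hello, Alice"

# Copy of the test above, renamed with "abuse", fed a malicious payload.
def test_greet_abuse():
    payload = "<script>alert(1)</script>"  # classic reflected-XSS probe
    result = greet(payload)
    assert "<script>" not in result  # the payload must come back inert
    assert result == "Hello, &lt;script&gt;alert(1)&lt;/script&gt;"
```

A test runner such as pytest picks both up automatically, so the abuse case fails loudly, on the developer’s own machine, the moment someone removes the encoding.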

Be creative. Any way that you can get feedback faster to other teams is a huge win for your team too!
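Another idea from the list above, renaming insecure functions with a wrapper, could look something like this sketch. `pickle.loads` really is dangerous on untrusted input; the wrapper name and warning message are my own invention:

```python
import pickle
import warnings

def insecure_pickle_loads(data: bytes):
    """Wrapper that makes the danger visible at the call site.

    pickle.loads can execute arbitrary code if the data is attacker-controlled,
    so we re-expose it under a name that screams "insecure" and emit a warning.
    """
    warnings.warn(
        "insecure_pickle_loads: never call this on untrusted input",
        category=UserWarning,
        stacklevel=2,
    )
    return pickle.loads(data)

# At the call site, the name itself is the feedback:
#   obj = insecure_pickle_loads(blob)
```

Combined with a linter rule that bans the original name, programmers see immediately that there is an issue, with no security review meeting required.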

Security is Everybody’s Job — Part 5 — The First Way

The First Way of DevOps

The first “Way” of DevOps is emphasizing the efficiency of the entire system. Many of us tend to focus only on our part of a giant system, and get bogged down improving only our own contributions to the larger process. It’s rare that we stand back, look at the entire thing, and realize that if we helped another team or changed something small within our part, that it could improve other areas for the better. The first way of DevOps is about looking at the entire system, and making sure the entire thing is as efficient as possible. #speed

When we worked in Waterfall development environments, security often acted as a gate. You had to jump through their hoops before you were let through and could push your code to prod. Awesome, right? Not really. It was slow. Security activities took FOREVER. And things got missed. It was rigid and unpleasant and didn’t result in reliably secure software.

It may seem obvious to new developers that security should not slow down the SDLC, but I assure you, this concept is very, very new. When I was a software developer I referred to the security team as “Those who say no”, and I found almost all of my interactions with them left me frustrated and without helpful answers.

When we (security practitioners) think about The First Way, we must figure out how to get our work done, without slowing down all the other teams. They won’t wait for us, and we can’t set up gates. We have to learn to work the way they do; FAST.

Tools:

First of all, we need to use modern tooling that is made for DevOps pipelines if we are going to put anything into the CI/CD pipeline. Never take an old tool and toss it in there; no DevOps team is going to wait 5 hours for your SAST tool to run. Tune your tools and ensure you select tools that are made for pipelines if that is how you are going to use them. Whenever possible, only run your tools on the ‘delta’ (the code changed in that release, not the entire code base).
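As a rough illustration of scanning only the ‘delta’, a sketch like the following (my own, using plain `git diff`) gathers just the files changed in a release so a slower tool doesn’t have to crawl the whole code base:

```python
import subprocess

def parse_delta(diff_output: str) -> list[str]:
    """Turn `git diff --name-only` output into a clean list of file paths."""
    return [line.strip() for line in diff_output.splitlines() if line.strip()]

def changed_files(base: str = "HEAD~1", head: str = "HEAD") -> list[str]:
    """Return only the files changed in this release (the delta)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, head],
        capture_output=True, text=True, check=True,
    )
    return parse_delta(out.stdout)

# Feed only the delta to your scanner, e.g.:
#   for path in changed_files():
#       run_sast_on(path)  # hypothetical scanner call
```

Even a tool that takes seconds per file becomes pipeline-friendly when it only sees the handful of files that actually changed.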

Tool Selection:

When selecting tools, remember that not every tool needs to be put in the pipeline. In fact, having tools that are out-of-band, but located on the ‘left’ of the SDLC, can offer even more value and save time. Examples:

  • Package management tools that only serve packages that are not known to be insecure (pre-approved by a security research team)
  • Adding security tests to your unit tests, which are often run before the code arrives in the pipeline (for instance, write input validation tests that ensures your code properly handles input taken from the XSS Filter Evasion Cheat Sheet)
  • Adding security tooling to the check-in process, such as secret scans (don’t even let them check it in if it looks like there’s a secret in the code)
  • Scanning your code repository for known-insecure components. It’s just sitting there, why not use it?
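To show what a check-in secret scan could look like, here is a minimal sketch of my own. The patterns are illustrative only; real scanners (gitleaks, trufflehog, and the like) ship hundreds of rules:

```python
import re
from pathlib import Path

# Illustrative patterns only; real scanners ship hundreds of rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # hard-coded password
]

def find_secrets(text: str) -> list[str]:
    """Return every suspicious match in the given file contents."""
    return [m.group(0) for p in SECRET_PATTERNS for m in p.finditer(text)]

def main(paths: list[str]) -> int:
    """Pre-commit entry point: a non-zero return code blocks the commit."""
    exit_code = 0
    for path in paths:
        found = find_secrets(Path(path).read_text(errors="ignore"))
        if found:
            print(f"{path}: possible secret(s): {found}")
            exit_code = 1
    return exit_code

# Wired into a pre-commit hook, e.g.:
#   sys.exit(main(staged_file_paths))
```

Blocking the commit itself means the secret never lands in the repository history, which is far cheaper than rotating a leaked key later.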

Bugs:

This also means that security bugs should be placed in the same bug tracker or ticketing system that the developers and ops teams are using. They shouldn’t have to check two systems; that is not efficient.

Finding Vulnerabilities:

If at all possible, we should be providing and/or approving tools that assist in finding vulnerabilities in written code (both the code your team wrote, and the code from dependencies) and running code. This could be SAST + SCA + DAST, or it could be SCA + IAST (run during unit testing, QA and in prod). It could also mean manual secure code review plus a PenTest the week before going live (this is the least-efficient of the three options presented here).

Templates and Code Reuse:

If it makes sense, create templates and provide secure code samples; there’s no need to reinvent the wheel. Also, enable the developers and ops teams to scan their own code by providing tools for them (and training on how to use them safely and effectively).

Think Outside The Box

We (security) can no longer be a bottleneck; we must work to enable the other teams to get their jobs done securely, in any way we can. Examine your processes to ensure they are efficient; create a second asynchronous pipeline (which does not release to prod) to automate your longer tests; write your own tools if you absolutely have to. The sky is the limit.

Security is Everybody’s Job — Part 4 — What is DevSecOps?

In this post we will explore The 3 Ways of DevOps. But first, a definition from a friend.

DevSecOps is Application Security, adjusted for a DevOps environment.

Imran A Mohammed

DevSecOps is the security activities that application security professionals perform, in order to ensure the systems created by DevOps practices are secure. It’s the same thing we (AppSec professionals) have always done, with a new twist. Thanks Imran!

Photo by Marvin Meyer on Unsplash

Refresher on The Three Ways:

  1. Emphasize the efficiency of the entire system, not just your part.
  2. Fast feedback loops.
  3. Continuous learning, risk taking and experimentation (failing fast). Taking time to improve your daily work.


Let’s dig in, shall we?


1. Emphasize the efficiency of the entire system, not just one part.

This means that Security CANNOT slow down or stop the entire pipeline (break the build/block a release), unless it’s a true emergency. This means Security learning to sprint, just like Ops and Dev are doing. It means focusing on improving ALL value streams, and sharing how securing the final product offers value to all the other streams. It means fitting security activities into the Dev and Ops processes, and making sure we are fast.

2. Fast feedback loops.

Fast feedback loops = “Pushing Left” (in application security)

Pushing or shifting “left” means starting security earlier in the System Development Life Cycle (SDLC). We want security activities to happen sooner in order to provide feedback earlier, which means this goal is 100% in line with what we want. The goal of security activities must be to shorten and amplify feedback loops so security flaws (design/architecture issues) and bugs (code/implementation issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job.

3. Continuous learning, risk taking and experimentation

For most security teams this means serious culture change; my favorite thing! InfoSec really needs some culture change if we are going to do DevOps well. In fact, all of IT does (including Dev and Ops) if we want to make security everybody’s job.

Part of The Third Way:

  • Allocating time for the improvement of daily work
  • Creating rituals that reward the team for taking risks: celebrate successes
  • Introducing faults into the system to increase resilience: red team exercises

We are going to delve deep into each of the three ways over the next several articles, exploring several ways that we can weave security through the DevOps processes to ensure we are creating more secure software, without breaking the flow.

If you are itching for more, but can’t wait until the next post, watch this video by Tanya Janca. She will explain this and much more in her talk ‘Security Learns To Sprint'.

Security is Everybody’s Job — Part 3 — What IS DevOps?

What IS DevOps?

There are many definitions of DevOps, too many, some might say. Some people say it’s “People, Processes, and Products”, and that sounds great, but I don’t know what I’m supposed to do with that. When I did waterfall I also had people, processes, and products, and that was not great. I thought DevOps was supposed to be a huge improvement?

I’ve heard other people say that it’s paying one person to do two jobs (Dev and Ops), which can’t be right… Can it? I’ve also been told once by a CEO that their product was “made out of DevOps”, as though it was a substance. I decided not to work there, but that’s another story. Let’s look at some better sources.

Wikipedia says:

DevOps is a set of practices that combines software development and information-technology operations which aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

But what are the practices? Why are we aiming to shorten the SDLC? Are we making smaller software? What is ‘continuous delivery’?

To get to the bottom of it I decided to read The DevOps Handbook. Then I knew what DevOps was, and I knew how to do it. And I discovered that I LOVED DevOps.

According to the DevOps Handbook, DevOps has three goals.

Improved deployment frequency; Shortened lead time between fixes;

Awesome! This means if a security bug is found it can be fixed extremely quickly. I like this.

Lower failure rate of new releases and faster recovery time;

Meaning better availability, which is a key security concern with any application (CIA). Lower failures mean things are available more often (the ‘A’ in CIA), and that’s definitely in the security wheelhouse. So far, so good.

Faster time to market; meaning the business gets what they want.

Sometimes we forget that the entire purpose of every security team is to enable the business to get the job done securely. And if we are doing DevSecOps, getting them products that are more secure, faster, is a win for everyone. Again, a big checkmark for security.

Great! Now I think the DevOps people want the same things that I, as a security person, want. Excellent! Now: How do I *DO* DevOps?

That is where The Three Ways of DevOps comes in.

  1. Emphasize the efficiency of the entire system, not just one part.

In the next post we will talk more in detail about The 3 Ways (and how security fits in perfectly).

Security is Everybody’s Job — Part 2 — What is Application Security?

Application Security is every action you take towards ensuring the software that you (or someone else) create is secure.

This can mean a formal secure code review, hiring someone to come in and perform a penetration test, or updating your framework because you heard it has a serious security flaw. It doesn’t need to be extremely formal, it just needs to have the goal of ensuring your systems are more secure.

Now that we know what AppSec is, why is it important?

For starters, insecure software is (unfortunately), the #1 cause of data breaches (according to the Verizon Breach Reports, 2016, 2017, 2018 and 2019). This is not a list that anyone wants to be #1 on. According to the reports, insecure software causes 30–40% of breaches, year after year, yet 30–40% of the security budget at most organizations is certainly not being spent on AppSec. This is one part of the problem.

The graph above is from the Verizon Breach Report 2017. Hats off to Verizon for creating and freely sharing such a helpful report, year after year.

If the problem is that insecure software causes breaches, and one of the causes is that security budgets don’t appear to prioritize software, what are some of the other root causes of this issue?

For starters, universities, colleges, and programming bootcamps are not teaching students how to ensure that they are creating secure software. Imagine if electricians went to trade school, but no one taught them safety. They would twist two cables together and just push them into the wall, unaware that they need two more layers of safety (electrical tape, and then a marrette). This is what we are doing with our software developers; we teach them from their very first lesson how to write insecure code.

Hello (insecure) World

Lesson #1 for every bootcamp or programming course: Hello World.
Step 1) Output “Hello World” to the screen
Step 2) Output “What is your name?” to the screen
Step 3) Read the user’s input into a variable (note: we skip teaching input validation)
Step 4) Output the user’s input to the screen with a hello message (note: we skip output encoding)

The above lesson teaches new programmers the best possible recipe for including reflected Cross Site Scripting (XSS) in their application. As far as I know there is no follow-up lesson provided on how to ensure the code is secure.
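For contrast, a follow-up lesson could add the two missing steps. This is my own minimal sketch in Python; the validation rule is deliberately simplistic (real-world name validation is famously subtle):

```python
import html
import re

# Deliberately simplistic "name" rule: letters, spaces, hyphens, apostrophes,
# up to 50 characters. Illustrative only, for the lesson.
NAME_RE = re.compile(r"[A-Za-z][A-Za-z '\-]{0,49}")

def hello(name: str) -> str:
    # Step 3, revisited: input validation -- reject anything implausible.
    if not NAME_RE.fullmatch(name):
        raise ValueError("that does not look like a name")
    # Step 4, revisited: output encoding -- escape before writing to an HTML page.
    # (Defense in depth: validation already blocks markup, but encode anyway.)
    return "Hello " + html.escape(name) + "!"
```

With this version, a normal name comes back as a safe greeting, while the classic `<script>` payload is rejected before it ever reaches the page.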

“Hello World” is the most-taught lesson for starting a new programming language, I’m sure you’ve seen it everywhere. More like “Hello Insecure World”.

Although there has been some headway in universities and colleges recently, most of them barely scratch the surface in regards to application security.

So that's reason #2. Let's move onto reason #3.

Another issue that contributes to this problem is that security training for developers is costly. While this is true for all types of professional training, security training is significantly more expensive than other forms of technical training. A single course at the SANS Institute (a well-respected training establishment that specializes in all things cyber) could cost an attendee as much as $5000-$7000 USD for one week of training. There are less-pricey options, such as taking a course when attending a conference, which usually range from $2000-$5000; however, those are of varying quality, and there is no standardized curriculum, making them a bit of a gamble. I’ve taken several trainings at various conferences over the years, and I’d say about half were good.

There are much cheaper alternatives to the options above (such as the courses from We Hack Purple and other training providers), but they are of widely varying quality. I’ve seen good free courses, and some so bad I wished I could have my time back. Most of them do not provide a curriculum to follow either, meaning it is often unclear to the student which other courses they should take in order to get the specific job they want. It is very easy to waste quite a bit of time and money; I know, because that is how I started my AppSec career… Although I was quite lucky to have two professional mentors guiding me, which made it a lot easier. Not everyone has a mentor.

See how lonely she looks? She’s the ENTIRE security team! #WOCTechChat

Another cause (#4) of insecure software is that the security team is usually grossly outnumbered. According to several sources, there are usually 100 software developers for every 10 operations employees and every single (1) security professional.

Let me repeat that. There are 100/10/1, Dev/Ops/Sec. With numbers like that you can’t work harder, you have to work smarter. Which is where we are going with this series.

Now we know the problem and several of the causes, what can we do about it? The short answer is DevSecOps, and the long answer is ‘read the rest of the blog series’.

For now though, let’s define DevSecOps, before we dive into what DevOps is, The Three Ways, and so much more, in the next article.

DevSecOps: performing AppSec, adjusted for a DevOps Environment. The same goals, but with different tactics and strategies to get us there. Changing the way we do things, so that we weave ourselves into the DevOps culture and processes.

Security is Everybody’s Job — Part 1 — DevSecOps

This is the first in a many-part blog series on the topic of DevSecOps. Throughout the series we will discuss weaving security through DevOps in effective and efficient ways. We will also discuss the ideas that security is everybody’s job, that it is everyone’s duty to perform their jobs in the most secure way they know how, and that it is the security team’s responsibility to enable everyone else in their organization to get their jobs done, securely. We will define DevOps, ‘The Three Ways’, AppSec and DevSecOps. We will go deep into the many strategies for adjusting security activities to DevOps environments, while still reaching our goals of ensuring that we reliably create and release secure software.

In summary: we will discuss how to make security a part of our daily work. It cannot be added later or after; it needs to be a part of everything.

But let’s not get ahead of ourselves, I have many more posts planned where I will attempt to sway your opinion my way.

Tanya Janca, also known as SheHacksPurple, presenting her ideas in Sydney Australia, 2019. Artwork by the talented Ashley Willis.

Before we get too deep into anything I’d like to dispel some myths. Look at the image below. This is how *some* security professionals see DevOps.

Slide credit: Pete Cheslock

This slide’s author, Pete Cheslock, is highly intelligent and experienced; this mention is not meant to insult him in any way. The slide is social commentary, not literal. That said, many people I’ve met truly feel this way: that DevOps engineers are running around making security messes wherever they go, and that we (security professionals) are left to clean up the mess. I disagree with that opinion.

Luckily for me, my introduction to DevOps was at DevSecCon, where they introduced me to the image below. You can see the security team teaching, providing tooling, and enabling the magical DevOps unicorns to do their jobs securely. This is how I view DevOps: the security team enabling everyone, working within the confines of the processes and systems that all the other teams use.

Slide Credit: Francois Raynaud & DevSecCon

This series will be loosely based on a conference talk which I have delivered at countless events, all over the planet: ‘Security is Everybody’s Job’. You can watch the video here.

7 Places to do Automated Security Tests

When working in a DevOps environment security professionals are sometimes overwhelmed with just how fast the dev and ops teams are moving. We're used to having more control, more time, and more… Time!

Personally, I LOVE DevSecOps (the security team weaving security throughout the processes that Dev and Ops are doing). Due to my enthusiasm I am often asked by clients when, how and where to inject various types of tests and other security activities. Below is my list of options that I offer to clients for automated testing (there's lots more security to do in DevOps, this is only automated tests). They analyze the list together and decide which places make the most sense based on their current status, and choose tools based on their current concerns.

Photo by rupixen.com on Unsplash

Seven Places For Automated Testing

  1. In the IDE:
    • Tools that check your code as you write it, almost like a spell checker (sometimes called SAST)
    • Proxy management & dependency tools that only allow you to download secure packages
    • API and other linting tools that explain where you are not following the definition file
  2. Pre-commit hook:
    • secret scanning, let's stop security incidents before they happen
  3. At the code repository level:
    • weekly schedule: SCA & SAST
  4. In the Pipeline (tests here must be fast and accurate, with no false positives):
    • secret scanning – again!
    • Infrastructure as Code scanning (IaC)
    • DAST with HAR file from Selenium or just passive
    • SCA (again if you like, preferably with a different tool than the first time)
    • Container and infra scanning, + their dependencies
  5. Outside the Pipeline:
    • DAST & fuzzing, automate to run weekly!
    • VA scanning/infra – weekly
    • IAST – install during QA testing and PenTests, or in prod if you feel confident
    • SAST – test for everything for each major release or after each large change – manual review of results
  6. Unit-Tests:
    • take the dev's tests and turn them into negative tests/abuse cases
  7. Continuously:
    • Vulnerability Management. You should be uploading all your scan data into some sort of system to look for patterns, trends, and (most of all) improvements

You do not need to do all of these, or even half of these. But please do some of them. WHP will try to put out a course on this later on in the year!

Do YOU do testing in more places than this? Where, when and how? Do you have other tools you use that you find helpful? Please comment below and let us know!

 

Pushing Left, Like a Boss – Part 10: Special AppSec Activities and Situations

Special Situations

Not all application security programs are the same, and not all security needs are equal. In this article we will compare security for a small family business, a government and Apple.

Think about this: Not only does Apple make two popular consumer operating systems (OSX for desktop and laptop computers and iOS for phones and iPads), they also make a popular cloud platform (iCloud), a popular programming IDE (Xcode), hardware for several types of laptops, phones, tablets, watches, and so, so much more. They also build physical security features directly into their products. It wasn't until I decided to re-publish this article that I realized just how many things depend on Apple. It's staggering.

What this means is that Apple has very special security needs. Their operating system, cloud and other products that we depend on must be secure. They must go far beyond the average company in their efforts to ensure this, and they do.

 

 

Tanya Janca teaching

See that computer to my left? It's an Apple. I own 3 laptops that run OSX, and even used to work at an Apple repair shop back in the day. I learned to program on an Apple computer.

But the average company is not Apple. Which means they don't need to take the same precautions. As a second example, let's take “Alice's Flowers”.

Alice has a website for her floral shop that delivers flowers in her small town. It shows basic info, such as where the shop is located, its phone number, and when it is open. It also has a link to her Shopify shop for online orders (meaning she does not need to secure that; Shopify does. Alice is smart to have outsourced the hard part – this is called Risk Transference). The rest of Alice's website, in the big scheme of things, is not very important. If her site goes down for a day or two, it would be inconvenient, but it would not be the end of the world.

Most companies fall somewhere between Apple and “Alice's Flowers” in regard to their risk. It has been my experience that many places, when I look at where they spend their security dollars, seem to be very confused as to where they sit on this scale. This is not my attempt to make fun of or insult any company; I think it's a sign of our times that not all companies are receiving good (and unbiased) advice.

The AppSec activities listed below do not apply to all IT shops. I invite you, reader, to try to imagine where your workplace would be on this scale. Please remember your place on the scale as you read the rest of these examples, to help you decide if any of these activities may apply to your place of work.

Special AppSec Activities

Responsible Disclosure

Responsible Disclosure (also known as Coordinated Disclosure) is a process where someone finds a security problem in a product or site and reports it to the company, and the company 1) does not sue them, 2) thanks them, 3) sometimes offers a token of appreciation (but generally does not offer money), and/or 4) sometimes publicly acknowledges the person who reported it, or formally reports their bug as a CVE to MITRE.

Last week (when I wrote the first version of this article, in 2019) I used a government website, I saw a bug, I figured out who to talk to (the Canadian Government doesn't have a disclosure process, of any kind), and I emailed it to them. They said thanks, I offered ideas on how to fix it, and they were great. This is one version of responsible disclosure. See how I was responsible?

Some places have a formal program, whereby security researchers (or normal users like me) can report issues to them in a secure manner (me sending details over Twitter and then an email to the government employee was not very ‘secure'). If the product they found the issue in is something well-known or used often, they may file a CVE (Common Vulnerabilities and Exposures entry) so that others are aware that that version of the product is known to be insecure. But also for credit; having your name on a CVE is pretty cool.

Industry standard for fixing such things is (theoretically) 90 days, but not every company complies, and not every person who reports such an issue is so patient. When you hear that someone “dropped 0 day”, what they mean is that they released the info about a vulnerability onto the internet when there is no known patch for it (a vulnerability without a patch is also known as a ‘zero day'). This is often done in order to pressure a company to fix the issue. Because if one person found it, that means others might have found it (and they may be exploiting it in the wild, causing people problems, and that's no good).

Note: “dropping 0 day” is NOT a part of responsible disclosure.

Bug Bounties

Katie Moussouris basically invented Bug Bounties as we know them today; she speaks on this topic often and is a wealth of knowledge on this and many other security topics. Since then, several large tech companies have started their own programs, including Shopify, Apple, and Netflix.

The invention of bug bounties spawned an entirely new industry: dedicated security researchers, or “bug hunters”, as well as large companies that sell these people's services on a pay-per-find basis.

The thing about working as part of a bounty program is you only get paid if you find something, if no one else has found it before, if your finding is in scope, and if your report actually makes sense. Submitting things that aren't in scope is a great way to get yourself banned (such as taking over accounts of employees at the company you are supposed to be finding bugs for, don't do that). What this means is that many, many bug hunters make little-to-no money, and a small few do quite well. I've heard people call this “a gig economy”, which means no job security, benefits or anything to fall back on if you have a bad month.

The economics for the researchers aside, this is an advanced AppSec activity. I've been asked many times “Should we do a bounty?” to which I have responded “How is your disclosure program going? Oh, you don't have one, okay. Ummm, how is your AppSec program going? Oh, you don't really have a formal program you just hire a pentester from time-to-time, okay. Hmmmm, do you have any security-savvy developers that could fix the bugs the bounty finds? No, okay, ummmmm. So you already know that you should DEFINITELY NOT DO A BOUNTY, right? Okay, yeah, thanks.”

I realize that doing a bounty program is “hot” right now, and that the companies that sell bounty programs are happy to tell you that it's good value for your money to start no matter where you are at in your AppSec program. I disagree. I often sugar-coat things in my blog, but for this I can't. If you don't already have an AppSec program and you do a bug bounty program you are setting your money on fire. If you want to hear from an expert on the topic though, you should watch Katie Moussouris explain it much more gracefully than I, here: Bug Bounty Botox Versus Natural Beauty.

Capture The Flag, Cyber Ranges, and other forms of Gamification

Capture the Flag contests, also known as CTFs, are not a bunch of security people running around in a field with flags; they are contests made up of security puzzles. Sometimes it's vulnerable systems you need to exploit, sometimes it's intentional puzzles for you to solve. When you ‘solve' one of the challenges you get a ‘flag', which means points. The person or team with the most points at the end wins.

Cyber Ranges and other gamification systems are similar: you play, solve security problems, and learn at the same time.

Why do security professionals sit around playing games and solving puzzles? Because this is a great way to learn. And it's FUN! Also: if you find a vulnerability and it's something you have done before in your code you will never, ever make that mistake again. Trust me on that one.

Inviting your developers to participate in security gamification can be a great team building exercise and it can teach them many of the lessons you wish they knew!

Snowflakes

There are many more special situations that demand interesting and exciting AppSec activities, such as chaos engineering, red teaming, and so much more. You can read a lot more about it in my book, Alice and Bob Learn Application Security.

Thank you

Thank you for reading my first blog series; this is the end. When I started my blog I honestly wasn't sure anyone would read it, but I wanted to share all of the things I had learned so I went for it. Thank you for coming on this journey with me, I hope you follow me on many more.

Pushing Left, Like a Boss – Part 9: An AppSec Program

In my talk that this blog series is based on, “Pushing Left, Like a Boss”, I detailed what I felt an AppSec program should and could be. Since then, I've learned a lot and now see that there are quite a few activities that you can do, but it's the goals and the outcomes that actually matter. Our industry has also changed quite a bit since I wrote that talk (written in 2016, first seen in public 2017, this article first published in 2019 and republished here in 2021).

My first international talk, at AppSec EU, 2017. It feels so long ago.

My previous thoughts on what a basic AppSec Program should be:

For bonus items I had listed:

And for “extra special situations” I recommended the following (which will be explained in the next blog post):

  • Bug Bounty Programs
  • Capture the Flag Contests (CTFs)
  • Red Team exercises

Anne Gauthier of OWASP Montreal, myself (pre-Microsoft) and Nancy Gariché of Secure That Cert and OWASP DevSlop. In the background is Christian Folini of the OWASP CRS project. I had no idea how important these people would become to me at the time.

I'm going to preface this next part with two thoughts.

You can't do security “right” if you aren't doing IT “right”. If you can't publish fixes for a year+ because your processes are broken, if you are underwater in technical debt, if you have dysfunction within your IT shop already, this is going to be very hard. I suggest starting with modernizing your systems and entire IT team as you modernize your security approaches, hand-in-hand. Don't give up, you can do this! Take one item, aim for it, and continue on until you're doing well.

If you have poor communications between the security team and the rest of IT this will be another hurdle that you have to work on. Culture plays a big part in ensuring your efforts are successful. I've released a bunch of videos on my YouTube channel on this topic, start with this one.

My new vision for an AppSec program:

  • A complete picture of all of your apps. Bonus: alerting, monitoring and logging of those apps.
  • Capability to find vulnerabilities in written code, running code, and 3rd party code. Bonus: the ability to quickly release fixes for said issues.
  • The knowledge to fix the vulnerabilities that you have found. Bonus: eliminating entire bug classes.
  • Education and reference materials for developers about security. Bonus: an advocacy program, creating a security champion program, and repetitive re-enforcement of positive security culture.
  • Providing developers with security tools to help them do better. Bonus: writing your own tools or libraries.
  • Having one or more security activities during each phase of your SDLC. Bonus: having security sprints and/or using the partnership model (assigned and/or embedding a security person to/within a project team).
  • Implementing useful and effective application security tooling. Bonus: automating as much as possible to avoid errors and toil.
  • Having a trained incident response team that understands AppSec. Bonus: implementing tools to prevent and/or detect application security incidents (can be homemade), providing job-specific security training to all of IT, including what to do during an incident.
  • Continuously improve your program based on metrics, experimentation and feedback from any and all stakeholders. All feedback is important.

I'd love to hear your thoughts on my new application security ‘prescription'. Please comment below.

Up next in this series we will discuss the AppSec “extras” and special AppSec programs; I will discuss all the things in this article that I have not previously defined for you.

Pushing Left, Like a Boss – Part 8: Testing

Testing can happen as soon as you have something to test.

Suggestion: Provide developers with security scanning software (such as OWASP ZAP), teach them to use it, and ask them to fix everything it finds before sending it to QA.

You can add automated security testing into your pipeline, specifically:
  • VA scanning of infrastructure (missing patches/bad config – this applies to containers and VMs alike, though you usually use different tools for each)
  • 3rd party components and libraries for known vulnerabilities (SCA)
  • Dynamic Application Security Testing (DAST) - only do a passive scan so that you don't make the pipeline too slow, or use a HAR file to automate which parts are tested and which are not.
  • Static Application Security Testing (SAST) - do this carefully, it can be incredibly slow. Usually people only scan the delta in a pipeline (the newly changed code), and do the rest outside of a pipeline.
  • Security Hygiene - verify your encryption settings, that you are using appropriate security headers, your cookie settings are good, that HTTPS is forced, etc.
  • Anything else you can think of, as long as it's fast. If you slow the pipeline down a lot you will lose friends in the Dev team.
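
As a rough illustration of the “Security Hygiene” bullet above, here is a small sketch that flags missing baseline security headers in a response. The list of required headers is my assumption for the example; tune it to your own policy, and note that real pipelines usually lean on a dedicated scanner rather than a hand-rolled check like this.

```python
# Baseline headers this sketch expects; adjust to your own policy.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # force HTTPS on future visits
    "X-Content-Type-Options",     # block MIME-type sniffing
    "Content-Security-Policy",    # restrict where scripts/styles load from
    "X-Frame-Options",            # clickjacking defence
}

def missing_security_headers(response_headers: dict) -> set:
    """Return which baseline security headers are absent from a response.

    Header names are compared case-insensitively, since HTTP allows
    any casing on the wire.
    """
    present = {name.lower() for name in response_headers}
    return {h for h in REQUIRED_HEADERS if h.lower() not in present}
```

In a pipeline step, a non-empty result set would fail the build (or at least raise a warning), which keeps the check fast enough not to annoy the Dev team.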

Q&A at #DevSecCon Seattle, 2019

During the testing phase I suggest doing a proper Vulnerability/Security Assessment (VA) or PenTest (if you need management's attention), but early enough that if you find something you can fix it before it's published. More ideas on this:

  • Repurpose unit tests into security regression tests: for each test create an opposite test, that verifies the app can handle poorly formed or malicious input
  • For each result in the Security Assessment that you performed create a unit test that will ensure that bug does not re-appear
  • Ensure developers run and pass all unit tests before even considering pushing to the pipeline
  • Perform all the same testing that you normally would, stress and performance testing, user acceptance testing (hopefully you started with AB testing earlier in the process), and anything else you would normally do.
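
The first two bullets above can be sketched like this. Everything here is hypothetical and invented for illustration – `parse_quantity` stands in for whatever input validation your app already has, and the test names are made up – but it shows the shape of flipping a “happy path” unit test into negative tests / abuse cases.

```python
# A hypothetical input validator a team might already unit-test.
def parse_quantity(raw: str) -> int:
    """Accept 1-999 as an order quantity; reject everything else."""
    if not raw.isdigit():
        raise ValueError("quantity must be numeric")
    value = int(raw)
    if not 1 <= value <= 999:
        raise ValueError("quantity out of range")
    return value

def rejects(raw: str) -> bool:
    """True if the validator refuses the input (what an abuse case wants)."""
    try:
        parse_quantity(raw)
        return False
    except ValueError:
        return True

# The dev's original "happy path" test...
def test_accepts_valid_quantity():
    assert parse_quantity("7") == 7

# ...flipped into negative tests: malformed, hostile, boundary-breaking input.
def test_rejects_abuse_cases():
    for payload in ["-1", "0", "1000", "", "7; DROP TABLE orders",
                    "<script>alert(1)</script>", "9" * 40]:
        assert rejects(payload), f"validator accepted hostile input: {payload!r}"
```

When a security assessment later finds a bug, the fix gets a matching test of this kind, so the bug cannot quietly re-appear in a future release.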

Penetration testing is an authorized, simulated attack on a computer system, performed to evaluate the security of that system. The idea is to find out how far an attacker could get. This differs from a security or vulnerability assessment in that the testers prioritize exploiting the vulnerabilities they find, rather than just reporting them. If you want to shock management and get some buy-in, a PenTest is the way to go. But if you just want to find the things wrong with your app, and ensure lower risk to your systems, I would recommend a security/vulnerability assessment instead. It depends on your situation.

Up next in this series we will discuss what a formal AppSec program should include, followed by AppSec “extras” and special AppSec programs, which will end this series.