CIA: Confidentiality, Integrity and Availability

** This article is for beginners in security or other IT folk, not experts. 😀

The mandate and purpose of every IT security team is to protect the confidentiality, integrity and availability of the systems and data of the company, government or organization they work for. That is why the security team hassles you about having administrator rights on your work machine, won’t let you plug unknown devices into the network, and all the other things that feel inconvenient; they want to ensure these three things. We call it “The CIA Triad” for short.

The CIA Triad is the reason IT Security teams exist.

Let’s examine this using examples with our friends Alice and Bob. Alice has type 1 diabetes and uses a tiny device implanted in her arm to check her insulin levels several times a day, while Bob has a ‘smart’ pacemaker that regulates his heart, which he accesses via a mobile app on his phone.

Availability

If Alice’s insulin measuring device were unavailable due to malfunction, tampering, or dead batteries, she would be inconvenienced. Since Alice usually checks her insulin levels several times a day, but is able to test manually when she needs to, it is only somewhat important to her that this service is available.

Bob, on the other hand, has irregular heartbeats from time to time, and never knows when his arrhythmia will strike. If Bob’s pacemaker were not available when his heart is behaving erratically, it could be a life-or-death situation if enough time elapses. It is very important that his pacemaker is available, and that it reacts in real-time when an emergency happens. Bob has worked for the federal government for many years as a clerk managing secret and top-secret documents. He is a proud grandfather and is trying hard to live a healthy life since his pacemaker was installed.

Many customers move to ‘the cloud’ for the sole reason that it is extremely reliable when compared to more traditional in-house data center service levels.

Confidentiality

Alice is the CEO of a large Fortune 500 company, and although she is not ashamed that she is a type 1 diabetic, she does not want this information to become public. She is often interviewed by the media and does public speaking, serving as a role model for many other women in business. Alice works hard to keep her personal life private, and this includes her health condition. She believes that some people within her organization are after her job and would do anything to try to portray her as ‘weak’ in efforts to undermine her authority. If her device were to accidentally leak her information, appear on public networks, or if her account information became part of a breach, this would be highly embarrassing for her, and potentially damaging to her career. Keeping her private life private is very important to Alice.

Confidentiality is often undervalued in our personal lives. Many people tell me they ‘have nothing to hide’. Then I ask “Do you have curtains on your windows at home? Why? I thought that you had nothing to hide?” — I’m great at parties.

Bob, on the other hand, is very open about his heart condition and happy to tell anyone that he has a pacemaker. He has a great insurance plan with the federal government and is very grateful that he can continue with his plan when he retires, despite his pre-existing condition. Confidentiality is not a priority for Bob in this respect.

Integrity

Integrity in data means that the data is correct and accurate. Integrity in a computer system means that the results it gives you are precise and factual. For Bob and Alice, this may be the most important of the CIA factors: if either of their systems gives them incorrect treatment, it could result in death. For a human being (as opposed to a company or nation-state), there is no more serious repercussion than the end of life. The integrity of their health systems is crucial to ensuring they both remain in good health.

Integrity means accuracy.

The next time your IT Security team is enforcing a new policy or asking for security requirements as part of your project, I challenge you to try to figure out which part of the CIA triad they might be trying to protect with their actions. Sometimes it is a combination of two, or even all three. More importantly, I challenge you to ask yourself whether you are ensuring each of these three factors of security is protected by your own actions, as the IT Security team cannot succeed alone.

The Importance of Inventory

Tanya Janca, OWASP AppSecDay, Melbourne, Australia, October 2019

The real reason companies do software inventory exercises is to have a sense of control over their application portfolio. Because you need to KNOW that everything is tested, monitored and patched. That feeling of being certain, rather than ‘fairly confident’, is the difference between knowing that you are doing a good job of securing your software and just hoping that you are.

Most enterprise-level companies have hundreds or even thousands of assets, and do not have an accurate inventory of them. This happens for many reasons: developers don’t always report their work; no one is assigned to keep track of application updates; shadow IT; people not realizing that an API or SaaS (Software as a Service) product qualifies as an ‘app’; APIs being split into multiple APIs and the new one(s) not being recorded; lost Excel spreadsheets; other priorities getting in the way; employee turnover; etc. There are too many reasons to list, and the cause of the problem doesn’t matter as much as what can be done about it.

Why is having an incomplete inventory of your web assets a problem? Because you can’t protect something if you don’t know you have it.

Buying application security tools and then using them on only 60%-90% of your apps means many of your web assets are not hardened, protected, monitored or logged. This presents a potentially large hole in your web-armour. It’s almost as if there’s a good reason that the NIST Cyber Security Framework has asset management (Identify) in the #1 spot, SC Magazine lists improper inventory as one of the top 5 risks to application security, IBM lists it as #1 in Five Steps to Achieve Risk-Based Application Security Management, Contrast Security lists it as the #1 step of The 4 Dimensions of a Sound Application Security Strategy, and Veracode lists it as a precursor to even starting the Five Best Practices of Vendor Application Security Management. Ensuring you have a current, complete, and accurate inventory is the foundational step for every application security program.

As you might have guessed by this point, I was working at a company that did application inventory when I wrote this article. Trust me, dear reader: even though I don’t work there anymore, inventory is still a very important issue.

Story from Tanya:

Back in the day when I was a software developer, one of the government departments that I worked for hired a consultant to do ‘application portfolio management’; his name was Sanjeev. Sanjeev was the coolest guy in our office, well-dressed, charming, always in the right meetings with the powerful people. As you can guess, Sanjeev was paid a lot more than I was. His entire project, for the whole year, was to interview my 14 team members and me about which apps we had, to get links to where they were, and gather any documentation we had about them. He was a patient guy, talking to developers all day and asking us for information that we just didn’t have. Also, none of us thought what he was doing was important, and he was sometimes treated as a pest.

Long story short, we thought we had 200 apps, but he only found evidence of 72. Knowing what I know now…. I wonder how many we really had.

In order to perform a manual application inventory, you could interview members of your dev teams and ask them what apps they have. Although having good communication with the dev team is always a good idea, this would be a long and tedious process at a large company. If you are reading this blog, you are likely looking for a more technical solution, so let’s look at how we can try to automate this.

For a DevOps shop (those using automated pipelines for releases), inventorying all of the apps and APIs released from various pipelines would be a good starting point. For example: add triggers to the git `pre-push` hook or to a Jenkins pipeline. Any that are not released via a pipeline would not be included, but ideally this would cover the majority of your newer applications.
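
As a sketch of that idea: the snippet below shows the kind of recording step a git `pre-push` hook or a Jenkins stage could call, appending each released app to a JSON-lines inventory file. The file format, field names, and example URL here are illustrative assumptions, not a standard — adapt them to whatever inventory store you actually use.

```python
# Sketch: a recording step a pre-push hook or pipeline stage could call.
# The inventory file format (JSON lines) and field names are assumptions.
import json
import time
from pathlib import Path

def record_release(app_name: str, repo_url: str, inventory_path: str) -> dict:
    """Append one release record to a JSON-lines inventory file."""
    record = {
        "app": app_name,
        "repo": repo_url,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with Path(inventory_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

A real hook would pull the app name and repo URL from the git environment rather than hard-coding them, and would probably ship the record to a central service instead of a local file.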

From there you could look at this from a network angle. If you have the IP range(s) for your network and you have permission to scan them, you may run a network scanning tool such as nmap or masscan against the range and look for open TCP ports 80 (HTTP) and 443 (HTTPS). If either of these ports is open, the host is likely running something web-related. From there you could use an automated tool such as EyeWitness to visit each of the IP addresses, one at a time, and take a snapshot of the page to see what’s there. Or you could hire someone to visit each one manually and find out what is there.
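
The core of that check can be sketched with nothing but the standard library (a real inventory effort would use nmap or masscan, which are far faster and more thorough). The helper names below are invented for this example, and you should only point this at ranges you have permission to scan:

```python
# Minimal sketch of the network-scanning idea: can we open a TCP
# connection to port 80 or 443 on a host? Only run this against
# hosts you have explicit permission to scan.
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def find_web_hosts(hosts, ports=(80, 443)):
    """Yield (host, port) pairs that accepted a TCP connection."""
    for host in hosts:
        for port in ports:
            if is_port_open(host, port):
                yield host, port
```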

There are a few problems with the network scanning approach.

Web applications are not restricted to running only on port 80 or 443 (ask most developers where they test locally, and they’ll probably tell you port 8080). Web servers can also host several APIs and apps at once; these can be separated by port or by the host name sent in the request. This means that a simple network scan may not even reveal all the applications or APIs at a given IP address.

You may not even be able to reach all your IP addresses due to firewalls, VLANS, or physical network separation.

What if your applications are hosted in the cloud? In cloud environments, IP addresses change frequently, and applications cycle rapidly (for example — serverless applications). You’re certainly not going to scan all of your cloud provider’s network hosts; you neither have the time (that’s a lot of IP addresses) nor the legal permission (remember — the “cloud” is just someone else’s computer) to do so.

Another approach could be to do an external scan of your domain name(s). If you know all of your domain names, you could use a tool like OWASP ZAP or Burp Suite to crawl all the links on your domains and find as many pages as it can.
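
To make the crawling idea concrete, here is a minimal sketch of the link-extraction step a crawler performs, using only the Python standard library. It is not a substitute for ZAP or Burp Suite, and the function names are invented for this example:

```python
# Sketch of what crawling software does at each page: pull out every
# link, keeping only the ones that stay on your own domain.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags, resolved against a base URL."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def same_domain_links(html: str, base_url: str):
    """Return links in html that stay on base_url's domain."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    domain = urlparse(base_url).netloc
    return [link for link in parser.links if urlparse(link).netloc == domain]
```

A full crawler would fetch each returned link in turn and repeat until no new pages appear; the limitations discussed below apply either way.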

Again, there are issues with this method as well, mostly around visibility.

Crawling software can only see what it has access to. If there is no link on a page to an API call, it will never see that API. Are you using MFA (multi-factor authentication)? If so, the crawler won’t be able to get around it (that’s literally what MFA was designed for). Do you have authentication or authorization in your applications? If so, you probably don’t want to write evergreen login credentials into an application (also known as a “backdoor”) and then give those credentials to a crawler. That crawler may also hit some sensitive parts of your application (e.g. admin functionality) and wreak wanton destruction on your database, users, or critical business functions.

An external network agent can only access what an external network user can access. This means it does not give a complete and accurate view of your applications.

If your domain list is incomplete, or there are subdomains you didn’t know about, again — the crawler will completely miss these too.

These approaches all come down to two main problems:

1. These actions require significant overhead to continuously update your targets and initiate scans.

2. You will never be able to get a complete view of your applications and their functionality.

I ask you to ponder the following question:

How do you create and maintain your application inventory?

The training you have selected is too ‘off topic’

In this series we are discussing how to get your technical training approved at work. This is not the first article, and you may want to go back and read it from the start.

In the previous article, we talked about how we need to explain to our boss not only which training we want, but also how to overcome any objections if we are going to get it approved. Let’s look at the second objection in our list.

Back in the day I requested approval to take a web-app hacking course, and I recall my boss saying, “You don’t need to learn that; you just need to run the scanner.” My job was web app penetration testing, and it was clear my boss had no idea what I did all day. He seemed to think that manual security testing was unnecessary, and at the time I had no idea how to explain that we needed a lot more than that if we wanted to ensure our apps were secure. I ended up watching a lot of videos on the internet, playing around, and wasting a ton of time.

When I switched over into Application Security, it got even more difficult, as most of the courses only offered to teach me “the OWASP Top Ten” (which I already knew well), and then the main security controls (authentication, authorization, encryption, identity) and little else. I wanted to know how to do my job, not theory and not basic web app hacking (I already knew that). Plus, they always seemed to go really deep into encryption, but I already knew my teams would never be writing their own encryption, so I didn’t get why they felt the need to always cover it…

I found the entire situation infuriating. I felt like I couldn’t win.

Image of a woman holding a computer, provided by #WOCTechChat

If you are asking your boss to take training it must fall into one of two categories:

1. It will help you do your current/new job better

2. It will help you grow within the organization so you can get a promotion someday (also known as career development)

Note: If you are a Ruby developer and you asked your boss to pay for you to take a basket weaving course, this blog article is not going to help you. That said, if you are a Ruby developer and you asked your boss to pay for a secure-coding-in-Ruby course or an application security course, then this article can help.

Remember I said in the first article that you needed to read the syllabus and keep track of what’s on the course and how it relates to your job? Now it’s time to get that info so we can write your justification letter. Just like in the previous article, I am going to use the Application Security Foundations Program from We Hack Purple as the example, but you should be able to use whichever training you have chosen.

Dear Boss,

I want to take the We Hack Purple Application Security Foundations Program for my training this year. I know you told me that it’s too ‘off topic’ for my job, but I wanted to explain to you how it will definitely help me do my job better.

Right now, our InfoSec team keeps bringing in a PenTester to test our apps right before we release them. They always find 100 things wrong, because none of our devs know anything about security. We always end up with late projects and everyone freaking out, because it’s so last minute. Lots of overtime and stress. I could help teach them what they need to know, so we could create more-secure software. The PenTesters will find a lot less wrong with our apps.

Also, the program comes with a copy of Alice and Bob Learn Application Security. I know our one copy is constantly in use by my team for reference, so having a second copy would be really great.

If I took this course, I would know how we could do this better. The dev team could do some testing ourselves (carefully), and other stuff to make sure our apps are in better shape by the time the PenTester comes. The program has a secure coding guideline we could adopt, and even an API best practices guide. We currently have no idea how to secure our APIs, and we keep reading on the internet and we’re lost. This course would help me understand so much! And then I could be the ‘security champion’ on our dev team, the one everyone can turn to when they need help. I know you feel this is outside my job description, but someone has to do it. I want that someone to be me.

Sincerely,
Your-Name-Here

Up next we will cover Objection 3: There’s no time with your current workload for you to take training.

How to get your boss to approve the training you want

*This is a series.*

We’ve all been there. There’s a training you really want to take, but your boss isn’t so sure. This can be because it’s out of budget, they feel it’s too ‘off topic’ from your current job, there’s no time with your current workload, they are afraid they will lose you if you have new skills, or some other reason they won’t tell you. Let’s go through all of these reasons and figure out how YOU can get your training approved.

Photo: #WOCTechChat

Note: I run my own training company, We Hack Purple, that specializes in Application Security, Secure Coding and DevSecOps training. While I am definitely hoping this article helps our customers, I’m also hoping it helps everyone else who needs training! For our examples we will use the Application Security Foundations Program from We Hack Purple, and we will try to justify taking it to your boss.

The first thing you need to do is make sure you are selecting the *best* training for your specific job or career development. Don’t take the popular one, or ‘the cool one’ that people are talking about on Twitter. Evaluate very carefully which one will help you level-up in your career and your current job.

Next, read about the content of the training you are taking. Make notes of what’s in there and keep the syllabus handy, as you will likely need to reference it as you write your justification. You also want links to some other courses to compare it to, both to explain why the one you have selected is better and why it’s (hopefully) more cost-effective.

Let’s start creating our defenses for your boss’s potential objections.

Objection 1: We don’t have the budget/it’s too expensive.

This is the one that I personally have received the most often in my career. I have actually had a boss laugh in my face when I suggested one single course (because it would have cost my combined training budget for 5 years). I explained that cyber security courses are quite costly, and all of my bosses continued to reject my requests. I ended up selecting training from several different places that was cheaper, but nowhere near as good as what I had asked for. At the time, I didn’t know how to get around this hurdle.

With a little more industry experience and a chance to see a lot more training, I realized that I needed to explain that the value of what we were getting was greater than what we were spending. Let me explain using the We Hack Purple Application Security Foundations Program as the example (but this should work with whatever you have chosen, if you have chosen the best training for your situation).

Dear Boss,

I want to take the Application Security Foundations Program from We Hack Purple for my training this year. I know you feel it’s too expensive and that we might not have the budget, but let me explain how I think it will save us more money than it costs.

We keep hiring consultants to help us with our AppSec Program, and that is very expensive. And we haven’t been getting the results we want; they show up, write one policy or one guideline, then leave. This program will provide some starter policies, standards and guidelines, so we don’t need to pay that consultant anymore. After taking the training I will know what to do and have tools to start with, so I can hit the ground running.

We also keep changing our strategy, because we haven’t been getting the results we want, and the dev teams don’t seem to be ‘on board’ with what we have been doing. This program will not only help me build and plan an entire AppSec program throughout the three courses; Level 2 of the program also has an entire module to teach me how to support our culture change (advocacy), how to build a security champions program, AND how to make presentations that aren’t the death-by-PowerPoint we are used to giving. They even show us how to measure the effectiveness of our program, so we know if the strategy we are using is actually *working*, and when we need to change or stay the course. Right now, we are just guessing at what to do to make sure our software is secure, but with this program, I would know.

I realize that $999 USD is a lot, and we are a small company. But this is the only training I could find like this on the internet, one that will teach me how to build and launch an AppSec program. That’s what the company needs me to do. Please approve this training so I can get started.

Sincerely,

Your-Name-Here

Up next we will talk about training that is ‘too off topic’.

Security is Everybody’s Job — Part 6 — The Second Way

The Second Way of DevOps is fast feedback. In security, when we see this we should all be thinking the same thing: Pushing Left. We want to start security at the beginning of the system development life cycle (SDLC) and ensure we are there (providing feedback, support and solutions) the whole way through!

Pushing Left, Tanya’s Favorite Thing

Fast feedback loops mean getting important information to the right people, quickly and regularly. One of the main reasons that Waterfall projects failed in the past was the lack of timely feedback; no one wants to find out twelve months after they made a mistake, when it’s too late to fix it.

The goal of security activities in a DevOps environment must be to shorten and amplify feedback loops so security flaws (design issues) and bugs (code issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job. These DevOps people are really onto something!

Let’s go over several ideas of how to achieve this.

Activities to create fast feedback loops.

  • Automate as much as humanly possible. Inside or outside the pipeline, automation is key.
  • Whenever possible integrate your tools with the Dev and Ops teams’ tools. For instance, have the issues found by your IAST tool turned into tickets in the developers’ bug tracking system, automagically.
  • When you have a PenTest done, check all your other apps for the things found in the report, then create unit tests to look for those things and prevent them from coming back.
  • Rename insecure functions or libraries as “insecure” with a wrapper, so programmers see immediately that there is an issue.
  • Add security sprints to your project schedule (to fix all security bugs in backlog).
  • Ask the Dev and Ops teams what they are concerned about (in relation to security), so you can fix any problems the security team might be causing them.
  • Add important security tests that are quick and accurate to the pipeline. For instance, scan for secrets in the code that is being checked in. That is an important test!
  • If an important security test fails in the pipeline, the continuous integration server must break the build, just like quality tests. This is loud feedback.
  • Create a second pipeline that doesn’t release any code but runs all the long and slow security tests, then have the security team review the results afterward and turn the important findings into tickets for the Devs.
  • Tune all security tools as much as possible and validate all results so that the feedback you are giving is *accurate*. There is no point in sending lots of feedback if half of it is wrong.
  • Work with developers to create negative unit tests (sometimes known as abuse tests). Create copies of regular unit tests, rename them with “Abuse” at the end, then add malicious payloads and ensure that your app fails gracefully and handles bad input well.
  • Have your security tools automatically send their results to a vulnerability management tool such as Defect Dojo or ThreadFix, so you can keep metrics and use them to improve all of your work. You need feedback too.
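
The negative (abuse) unit test idea in the list above can be sketched like this. The `validate_username` function is a hypothetical stand-in for application code; the payloads are the kind of malicious input an abuse test would feed it:

```python
import re
import unittest

# Hypothetical application code under test: a strict allow-list validator.
def validate_username(value: str) -> bool:
    """Accept only 3-20 letters, digits, or underscores."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", value))

class TestUsernameAbuse(unittest.TestCase):
    """A copy of a regular unit test, renamed with 'Abuse', fed bad input."""

    def test_valid_username(self):
        # The 'happy path' test this abuse test was copied from.
        self.assertTrue(validate_username("alice_99"))

    def test_username_abuse(self):
        payloads = [
            "<script>alert(1)</script>",  # XSS attempt
            "' OR '1'='1",                # SQL injection attempt
            "../../etc/passwd",           # path traversal attempt
            "a" * 10_000,                 # oversized input
            "",                           # empty input
        ]
        for payload in payloads:
            # The app should reject malicious input gracefully, not crash.
            self.assertFalse(validate_username(payload))
```

Run it with `python -m unittest` alongside your regular unit tests, so the abuse cases give feedback on every build.
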

Be creative. Any way that you can get feedback faster to other teams is a huge win for your team too!

Security is Everybody’s Job — Part 5 — The First Way

The First Way of DevOps

The first “Way” of DevOps is emphasizing the efficiency of the entire system. Many of us tend to focus only on our part of a giant system, and get bogged down improving only our own contributions to the larger process. It’s rare that we stand back, look at the entire thing, and realize that if we helped another team or changed something small within our part, that it could improve other areas for the better. The first way of DevOps is about looking at the entire system, and making sure the entire thing is as efficient as possible. #speed

When we worked in Waterfall development environments security often acted as a gate. You had to jump through their hoops, then you were let through, and you could push your code to prod. Awesome, right? Not really. It was slow. Security activities took FOREVER. And things got missed. It was rigid and unpleasant and didn’t result in reliably secure software.

It may seem obvious to new developers that security should not slow down the SDLC, but I assure you, this concept is very, very new. When I was a software developer I referred to the security team as “Those who say no”, and I found almost all of my interactions with them left me frustrated and without helpful answers.

When we (security practitioners) think about The First Way, we must figure out how to get our work done, without slowing down all the other teams. They won’t wait for us, and we can’t set up gates. We have to learn to work the way they do; FAST.

Tools:

First of all, we need to use modern tooling that is made for DevOps pipelines if we are going to put anything into the CI/CD pipeline. Never take an old tool and toss it in there; no DevOps team is going to wait 5 hours for your SAST tool to run. Tune your tools and ensure you select tools that are made for pipelines if that is how you are going to use them. Whenever possible, only run your tools on the ‘delta’ (the code changed in that release, not the entire code base).

Tool Selection:

When selecting tools, remember that not every tool needs to be put in the pipeline. In fact, having tools that are out-of-band, but located on the ‘left’ of the SDLC, can offer even more value and save time. Examples:

  • Package management tools that only serve packages that are not known to be insecure (pre-approved by a security research team)
  • Adding security tests to your unit tests, which are often run before the code arrives in the pipeline (for instance, write input validation tests that ensure your code properly handles input taken from the XSS Filter Evasion Cheat Sheet)
  • Adding security tooling to the check-in process, such as secret scans (don’t even let them check it in if it looks like there’s a secret in the code)
  • Scanning your code repository for known-insecure components. It’s just sitting there, why not use it?
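
As a sketch of the check-in secret scan mentioned above: the two regex rules below (AWS-style access key IDs and hardcoded password assignments) are illustrative assumptions only; real secret scanners such as gitleaks or truffleHog ship far richer rule sets.

```python
# Toy secret scan: two illustrative regex rules, not a complete rule set.
import re

SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "hardcoded_password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
}

def scan_for_secrets(text: str):
    """Return a list of (rule_name, line_number) hits for suspected secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((rule, lineno))
    return hits
```

A pre-commit or pre-receive hook could run this over the changed files and refuse the check-in whenever the result is non-empty.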

Bugs:

This also means that security bugs should be placed in the same bug tracker or ticketing system that the developers and ops teams are using. They shouldn’t have to check two systems; that is not efficient.

Finding Vulnerabilities:

If at all possible, we should be providing and/or approving tools that assist in finding vulnerabilities in written code (both the code your team wrote, and the code from dependencies) and running code. This could be SAST + SCA + DAST, or it could be SCA + IAST (run during unit testing, QA and in prod). It could also mean manual secure code review plus a PenTest the week before going live (this is the least-efficient of the three options presented here).

Templates and Code Reuse:

If it makes sense, create templates and provide secure code samples; there’s no need to reinvent the wheel. Also, enable the developers and ops teams to scan their own code by providing tools for them (and training on how to use them safely and effectively).

Think Outside The Box

We (security) can no longer be a bottleneck; we must work to enable them to get their jobs done securely, in any way we can. Examine your processes to ensure they are efficient; create a second asynchronous pipeline (which does not release to prod) to automate your longer tests; write your own tools if you absolutely have to. The sky is the limit.

Security is Everybody’s Job — Part 3 — What IS DevOps?

What IS DevOps?

There are many definitions of DevOps, too many, some might say. Some people say it’s “People, Processes, and Products”, and that sounds great, but I don’t know what I’m supposed to do with that. When I did waterfall I also had people, processes, and products, and that was not great. I thought DevOps was supposed to be a huge improvement?

I’ve heard other people say that it’s paying one person to do two jobs (Dev and Ops), which can’t be right… Can it? I’ve also been told once by a CEO that their product was “made out of DevOps”, as though it was a substance. I decided not to work there, but that’s another story. Let’s look at some better sources.

Wikipedia says:

DevOps is a set of practices that combines software development and information-technology operations which aims to shorten the systems development life cycle and provide continuous delivery with high software quality.

But what are the practices? Why are we aiming to shorten the SDLC? Are we making smaller software? What is ‘continuous delivery’?

To get to the bottom of it I decided to read The DevOps Handbook. Then I knew what DevOps was, and I knew how to do it. And I discovered that I LOVED DevOps.

According to the DevOps Handbook, DevOps has three goals.

Improved deployment frequency; Shortened lead time between fixes;

Awesome! This means if a security bug is found it can be fixed extremely quickly. I like this.

Lower failure rate of new releases and faster recovery time;

Meaning better availability, which is a key security concern with any application. Lower failure rates mean things are available more often (the ‘A’ in the CIA triad), and that’s definitely in the security wheelhouse. So far, so good.

Faster time to market; meaning the business gets what they want.

Sometimes we forget that the entire purpose of every security team is to enable the business to get the job done securely. And if we are doing DevSecOps, getting them products that are more secure, faster, is a win for everyone. Again, a big checkmark for security.

Great! Now I think the DevOps people want the same things that I, as a security person, want. Excellent! Now: How do I *DO* DevOps?

That is where The Three Ways of DevOps comes in.

  1. Emphasize the efficiency of the entire system, not just one part.
  2. Fast feedback loops.
  3. Continuous learning, risk taking and experimentation (failing fast).

In the next post we will talk more in detail about The 3 Ways (and how security fits in perfectly).

Security is Everybody’s Job — Part 4 — What is DevSecOps?

In this post we will explore The 3 Ways of DevOps. But first, a definition from a friend.

DevSecOps is Application Security, adjusted for a DevOps environment.

Imran A Mohammed

DevSecOps is the security activities that application security professionals perform, in order to ensure the systems created by DevOps practices are secure. It’s the same thing we (AppSec professionals) have always done, with a new twist. Thanks Imran!

Photo by Marvin Meyer on Unsplash

Refresher on The Three Ways:

  1. Emphasize the efficiency of the entire system, not just your part.
  2. Fast feedback loops.
  3. Continuous learning, risk taking and experimentation (failing fast). Taking time to improve your daily work.


Let’s dig in, shall we?


1. Emphasize the efficiency of the entire system, not just one part.

This means that Security CANNOT slow down or stop the entire pipeline (break the build/block a release), unless it’s a true emergency. It means Security learning to sprint, just like Ops and Dev are doing. It means focusing on improving ALL value streams, and sharing how securing the final product offers value to all the other streams. It means fitting security activities into the Dev and Ops processes, and making sure we are fast.

2. Fast feedback loops.

Fast feedback loops = “Pushing Left” (in application security)

Pushing or shifting “left” means starting security earlier in the System Development Life Cycle (SDLC). We want security activities to happen sooner in order to provide feedback earlier, which means this goal is 100% in line with what we want. The goal of security activities must be to shorten and amplify feedback loops so security flaws (design/architecture issues) and bugs (code/implementation issues) are fixed as early as possible, when it’s faster, cheaper and easier to do a better job.

3. Continuous learning, risk taking and experimentation

For most security teams this means serious culture change; my favorite thing! InfoSec really needs some culture change if we are going to do DevOps well. In fact, all of IT does (including Dev and Ops) if we want to make security everybody’s job.

Part of The Third Way:

  • Allocating time for the improvement of daily work
  • Creating rituals that reward the team for taking risks: celebrate successes
  • Introducing faults into the system to increase resilience: red team exercises

We are going to delve deep into each of the three ways over the next several articles, exploring several ways that we can weave security through the DevOps processes to ensure we are creating more secure software, without breaking the flow.

If you are itching for more, but can’t wait until the next post, watch this video by Tanya Janca. She will explain this and much more in her talk ‘Security Learns To Sprint’.

Security is Everybody’s Job — Part 2 — What is Application Security?

Application Security is every action you take toward ensuring that the software you (or someone else) create is secure.

This can mean a formal secure code review, hiring someone to come in and perform a penetration test, or updating your framework because you heard it has a serious security flaw. It doesn’t need to be extremely formal, it just needs to have the goal of ensuring your systems are more secure.

Now that we know what AppSec is, why is it important?

For starters, insecure software is, unfortunately, the #1 cause of data breaches (according to the Verizon Breach Reports from 2016, 2017, 2018 and 2019). This is not a list that anyone wants to be #1 on. According to the reports, insecure software causes 30–40% of breaches, year after year, yet 30–40% of the security budget at most organizations is certainly not being spent on AppSec. This is one part of the problem.

The graph above is from the Verizon Breach Report 2017. Hats off to Verizon for creating and freely sharing such a helpful report, year after year.

If the problem is that insecure software causes breaches, and one of the causes is that security budgets don’t appear to prioritize software, what are some of the other root causes of this issue?

For starters, universities, colleges, and programming bootcamps are not teaching their students how to create secure software. Imagine if electricians went to trade school but were never taught safety. They would twist two cables together and just push them into the wall, unaware that they need two more layers of protection (electrical tape, and then a marrette). This is what we are doing with our software developers: we teach them, from their very first lesson, how to write insecure code.

Hello (insecure) World

Lesson #1 for every bootcamp or programming course: Hello World.
Step 1) Output “Hello World” to the screen
Step 2) Output “What is your name?” to the screen
Step 3) Read the user’s input into a variable (note: we skip teaching input validation)
Step 4) Output the user’s input to the screen with a hello message (note: we skip output encoding)

The above lesson teaches new programmers the best possible recipe for including reflected Cross-Site Scripting (XSS) in their application. As far as I know, there is no follow-up lesson provided on how to ensure the code is secure.

“Hello World” is the most-taught first lesson for any new programming language; I’m sure you’ve seen it everywhere. More like “Hello Insecure World”.
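For the curious, here is what those two skipped steps might look like in practice, as a minimal Python sketch. The `greet` helper and its validation rules are illustrative assumptions, not a complete defense; real applications should encode for their specific output context:

```python
import html

def greet(name: str) -> str:
    """Return a greeting, with input validation and output encoding."""
    # Input validation (the step the classic lesson skips): allow only
    # letters, spaces, hyphens and apostrophes, and cap the length.
    cleaned = name.strip()
    if not (0 < len(cleaned) <= 50) or not all(
        c.isalpha() or c in " -'" for c in cleaned
    ):
        raise ValueError("That doesn't look like a name.")

    # Output encoding (the other skipped step): escape for the output
    # context (HTML here), so input is treated as data, not markup.
    return f"Hello, {html.escape(cleaned)}!"

print(greet("Alice"))  # Hello, Alice!
# greet("<script>alert(1)</script>") raises ValueError instead of
# echoing attacker-controlled markup back into the page.
```

With these two extra steps, lesson #1 would teach the secure habit from day one instead of the XSS recipe.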

Although there has been some headway in universities and colleges recently, most of them barely scratch the surface in regards to application security.

So that’s reason #2. Let’s move onto reason #3.

Another issue that contributes to this problem is that security training for developers is costly. While this is true for all types of professional training, security training is significantly more expensive than other forms of technical training. A single course at the SANS Institute (a well-respected training establishment that specializes in all things cyber) can cost an attendee as much as $5,000–$7,000 USD for one week of training. There are less pricey options, such as taking a course while attending a conference, which usually runs $2,000–$5,000; however, those are of varying quality, and there is no standardized curriculum, making them a bit of a gamble. I’ve taken several trainings at various conferences over the years, and I’d say about half were good.

There are much cheaper alternatives to the options above (such as the courses from We Hack Purple and other training providers), but their quality varies widely. I’ve seen good free courses, and some so bad I wished I could have my time back. Most of them do not provide a curriculum to follow either, meaning it is often unclear to the student which other courses they should take in order to get the specific job they want. It is very easy to waste quite a bit of time and money; I know, because that is how I started my AppSec career… although I was quite lucky to have two professional mentors guiding me, which made it a lot easier. Not everyone has a mentor.

See how lonely she looks? She’s the ENTIRE security team! #WOCTechChat

Another cause (#4) of insecure software is that the security team is usually grossly outnumbered. According to several sources, there are usually 100 software developers for every 10 operations employees, for every single (1) security professional.

Let me repeat that. There are 100/10/1, Dev/Ops/Sec. With numbers like that you can’t work harder, you have to work smarter. Which is where we are going with this series.

Now we know the problem and several of the causes, what can we do about it? The short answer is DevSecOps, and the long answer is ‘read the rest of the blog series’.

For now though, let’s define DevSecOps, before we dive into what DevOps is, The Three Ways, and so much more, in the next article.

DevSecOps: performing AppSec, adjusted for a DevOps Environment. The same goals, but with different tactics and strategies to get us there. Changing the way we do things, so that we weave ourselves into the DevOps culture and processes.

Security is Everybody’s Job — Part 1 — DevSecOps

This is the first in a many-part blog series on the topic of DevSecOps. Throughout the series we will discuss weaving security through DevOps in effective and efficient ways. We will also discuss the ideas that security is everybody’s job, that it is everyone’s duty to perform their job in the most secure way they know how, and that it is the security team’s responsibility to enable everyone else in the organization to get their jobs done securely. We will define DevOps, ‘The Three Ways’, AppSec, and DevSecOps. We will dig deep into the many strategies we can use to adjust security activities for DevOps environments, while still reaching our goal of reliably creating and releasing secure software.

In summary: we will discuss how to make security a part of our daily work. It cannot be bolted on later or after the fact; it needs to be a part of everything.

But let’s not get ahead of ourselves, I have many more posts planned where I will attempt to sway your opinion my way.

Tanya Janca, also known as SheHacksPurple, presenting her ideas in Sydney Australia, 2019. Artwork by the talented Ashley Willis.

Before we get too deep into anything I’d like to dispel some myths. Look at the image below. This is how *some* security professionals see DevOps.

Slide credit: Pete Cheslock

This slide’s author, Pete Cheslock, is highly intelligent and experienced; this mention is not meant to insult him in any way. The slide is social commentary, not a literal claim. That said, many people I’ve met truly feel this way: that DevOps engineers are running around making security messes wherever they go, and that we (security professionals) are left to clean up the mess. I disagree with that opinion.

Luckily for me, my introduction to DevOps was at DevSecCon, where they introduced me to this image. Below you can see the security team teaching, providing tooling, and enabling the magical DevOps unicorns to do their jobs securely. This is how I view DevOps: the security team enabling everyone, working within the confines of the processes and systems that all the other teams use.

Slide Credit: Francois Raynaud & DevSecCon

This series is loosely based on a conference talk which I have delivered at countless events all over the planet, ‘Security is Everybody’s Job’. You can watch the video here.

7 Places to do Automated Security Tests

When working in a DevOps environment, security professionals are sometimes overwhelmed by just how fast the dev and ops teams are moving. We’re used to having more control, more time, and more… time!

Personally, I LOVE DevSecOps (the security team weaving security throughout the processes that Dev and Ops are doing). Due to my enthusiasm, I am often asked by clients when, how and where to inject various types of tests and other security activities. Below is the list of options I offer to clients for automated testing (there’s lots more security to do in DevOps; this is only automated tests). We analyze the list together, and they decide which places make the most sense based on their current status, choosing tools based on their current concerns.

Photo by rupixen.com on Unsplash

Seven Places For Automated Testing

  1. In the IDE:
    • Tools that check your code as you write it, almost like a spell checker (often a lightweight form of SAST)
    • Proxy management & dependency tools that only allow you to download secure packages
    • API and other linting tools that explain where you are not following the definition file
  2. Pre-commit hook:
    • secret scanning, let’s stop security incidents before they happen
  3. At the code repository level:
    • weekly schedule: SCA & SAST
  4. In the Pipeline (must be fast and accurate, with no false positives):
    • secret scanning – again!
    • Infrastructure as Code scanning (IaC)
    • DAST with HAR file from Selenium or just passive
    • SCA (again if you like, preferably with a different tool than the first time)
    • Container and infra scanning, + their dependencies
  5. Outside the Pipeline:
    • DAST & fuzzing, automate to run weekly!
    • VA scanning/infra – weekly
    • IAST – install during QA testing and PenTests, or in prod if you feel confident
    • SAST – test for everything for each major release or after each large change – manual review of results
  6. Unit-Tests:
    • take the dev’s tests and turn them into negative tests/abuse cases
  7. Continuously:
    • Vulnerability Management. You should be uploading all your scan data into some sort of system to look for patterns, trends, and (most of all) improvements
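To make item 2 (pre-commit secret scanning) concrete, here is a toy Python sketch of the idea. The regex patterns and the `find_secrets` helper are illustrative assumptions only; a real hook would usually wire in a dedicated tool such as gitleaks or truffleHog rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only; real scanners ship hundreds of rules
# plus entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key header
    re.compile(r"(?i)(?:password|api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(text: str) -> list[str]:
    """Return every secret-looking string found in text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

# A real pre-commit hook would run this over the staged changes
# (`git diff --cached`) and block the commit when anything is found.
print(find_secrets('db_password = "hunter2hunter2"'))
```

Catching a hard-coded credential at commit time costs seconds; rotating a leaked one after it hits the repo costs far more.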

You do not need to do all of these, or even half of these. But please do some of them. We Hack Purple will try to put out a course on this later in the year!
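As a sketch of item 6, here is how a developer’s happy-path unit test might be flipped into negative tests/abuse cases. The `transfer` function is a hypothetical example invented for illustration, not from any real codebase:

```python
import unittest

def transfer(balance: float, amount: float) -> float:
    """Hypothetical example: debit `amount` from `balance`."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class TransferAbuseCases(unittest.TestCase):
    # The developer's original "happy path" test:
    def test_valid_transfer(self):
        self.assertEqual(transfer(100.0, 30.0), 70.0)

    # Negative tests / abuse cases, flipped from the happy path:
    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            transfer(100.0, -30.0)  # would otherwise credit the attacker

    def test_overdraw_rejected(self):
        with self.assertRaises(ValueError):
            transfer(100.0, 1_000_000.0)

if __name__ == "__main__":
    unittest.main()
```

The happy-path test proves the feature works; the abuse cases prove it fails safely when someone misuses it on purpose.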

Do YOU do testing in more places than this? Where, when and how? Do you have other tools you use that you find helpful? Please comment below and let us know!

 

Pushing Left, Like a Boss – Part 10: Special AppSec Activities and Situations

Special Situations

Not all application security programs are the same, and not all security needs are equal. In this article we will compare security for a small family business, a government and Apple.

Think about this: not only does Apple make two popular consumer operating systems (macOS for desktop and laptop computers and iOS for phones and iPads), they also make a popular cloud platform (iCloud), a popular programming IDE (Xcode), hardware for several types of laptops, phones, tablets, watches, and so, so much more. They also build physical security features directly into their products. It wasn’t until I decided to re-publish this article that I realized just how many things depend on Apple. It’s staggering.

What this means is that Apple has very special security needs. Their operating system, cloud and other products that we depend on must be secure. They must go far beyond the average company in their efforts to ensure this, and they do.

Tanya Janca teaching

See that computer to my left? It's an Apple. I own 3 laptops that run macOS, and I even used to work at an Apple repair shop back in the day. I learned to program on an Apple computer.

But the average company is not Apple. Which means they don’t need to take the same precautions. As a second example, let’s take “Alice’s Flowers”.

Alice has a website for her floral shop, which delivers flowers in her small town. It shows basic info, such as where the shop is located, its phone number, and when it is open. It also has a link to her Shopify shop for online orders (meaning she does not need to secure that part; Shopify does. Alice was smart to outsource the hard part; this is called risk transference). The rest of Alice’s website, in the big scheme of things, is not very important. If her site goes down for a day or two, it would be inconvenient, but it would not be the end of the world.

Most companies fall somewhere between Apple and “Alice’s Flowers” in regard to their risk. It has been my experience that many places, when I look at where they spend their security dollars, seem to be very confused as to where they sit on this scale. This is not my attempt to make fun of or insult any company; I think it’s a sign of our times that not all companies are receiving good (and unbiased) advice.

The AppSec activities listed below do not apply to all IT shops. I invite you, reader, to imagine where your workplace would be on this scale. Please remember your place on the scale as you read the rest of these examples, to help you decide whether any of these activities may apply to your place of work.

Special AppSec Activities

Responsible Disclosure

Responsible Disclosure (also known as Coordinated Disclosure) is a process where someone finds a security problem in a product or site and reports it to the company, and the company: 1) does not sue them, 2) thanks them, 3) sometimes offers a token of appreciation (but generally not money), and/or 4) sometimes publicly acknowledges the person who reported it, or formally reports the bug as a CVE to MITRE.

Last week (when I wrote the first version of this article, in 2019) I used a government website, I saw a bug, I figured out who to talk to (the Canadian Government doesn’t have a disclosure process, of any kind), and I emailed it to them. They said thanks, I offered ideas on how to fix it, and they were great. This is one version of responsible disclosure. See how I was responsible?

Some places have a formal program whereby security researchers (or regular users like me) can report issues to them in a secure manner (me sending details over Twitter and then an email to a government employee was not very ‘secure’). If the product they found the issue in is well-known or used often, they may file a CVE (Common Vulnerabilities and Exposures entry) so that others are aware that that version of the product is known to be insecure. But also for credit; having your name on a CVE is pretty cool.

Industry standard for fixing such things is (theoretically) 90 days, but not every company complies, and not every person who reports such an issue is so patient. When you hear that someone “dropped 0-day”, what they mean is that they released the details of a vulnerability onto the internet while there is no known patch for it (such a vulnerability is called a ‘zero day’). This is often done to pressure a company into fixing the issue; if one person found it, others might have found it too (and may be exploiting it in the wild, causing people problems, and that’s no good).

Note: “dropping 0-day” is NOT a part of responsible disclosure.

Bug Bounties

Katie Moussouris basically invented bug bounties as we know them today; she speaks on this topic often and is a wealth of knowledge on this and many other security topics. Since then, several large tech companies have started their own programs, including Shopify, Apple, and Netflix.

The invention of bug bounties spawned an entirely new industry; dedicated security researchers or “bug hunters”, as well as large companies that sell these people’s services on a pay-per-find basis.

The thing about working as part of a bounty program is that you only get paid if you find something, if no one else has found it before, if your finding is in scope, and if your report actually makes sense. Submitting things that aren’t in scope is a great way to get yourself banned (such as taking over accounts of employees at the company you are supposed to be finding bugs for; don’t do that). What this means is that many, many bug hunters make little to no money, and a small few do quite well. I’ve heard people call this “a gig economy”, which means no job security, benefits, or anything to fall back on if you have a bad month.

The economics for the researchers aside, this is an advanced AppSec activity. I’ve been asked many times “Should we do a bounty?” to which I have responded “How is your disclosure program going? Oh, you don’t have one, okay. Ummm, how is your AppSec program going? Oh, you don’t really have a formal program you just hire a pentester from time-to-time, okay. Hmmmm, do you have any security-savvy developers that could fix the bugs the bounty finds? No, okay, ummmmm. So you already know that you should DEFINITELY NOT DO A BOUNTY, right? Okay, yeah, thanks.”

I realize that doing a bounty program is “hot” right now, and that the companies that sell bounty programs are happy to tell you that it’s good value for your money to start no matter where you are at in your AppSec program. I disagree. I often sugar-coat things in my blog, but for this I can’t. If you don’t already have an AppSec program and you do a bug bounty program you are setting your money on fire. If you want to hear from an expert on the topic though, you should watch Katie Moussouris explain it much more gracefully than I, here: Bug Bounty Botox Versus Natural Beauty.

Capture The Flag, Cyber Ranges, and other forms of Gamification

Capture the Flag contests, also known as CTFs, are not a bunch of security people running around in a field with flags; they are contests made up of security puzzles. Sometimes it’s vulnerable systems you need to exploit, sometimes it’s intentional puzzles for you to solve. When you ‘solve’ one of the challenges you get a ‘flag’, which means points. The person or team with the most points at the end wins.

Cyber Ranges and other gamification systems are similar: you play, solve security problems, and learn at the same time.

Why do security professionals sit around playing games and solving puzzles? Because this is a great way to learn. And it’s FUN! Also: if you find a vulnerability and it’s something you have done before in your code you will never, ever make that mistake again. Trust me on that one.

Inviting your developers to participate in security gamification can be a great team building exercise and it can teach them many of the lessons you wish they knew!

Snowflakes

There are many more special situations that demand interesting and exciting AppSec activities, such as chaos engineering, red teaming, and so much more. You can read a lot more about these in my book, Alice and Bob Learn Application Security.

Thank you

Thank you for reading my first blog series; this is the end. When I started my blog I honestly wasn’t sure anyone would read it, but I wanted to share all of the things I had learned so I went for it. Thank you for coming on this journey with me, I hope you follow me on many more.