Pushing Left, Like a Boss — Part 5.3 — Browser and Client-Side Hardening

Disabling ‘Remember Me’ Features

Do Not Allow Caching of Sensitive Data

HTTPS Everywhere

I feel strongly about client-side hardening. — http://sector.ca/

Use Every Possible Security Header

Content-Security-Policy: default-src 'self'; block-all-mixed-content;
Referrer-Policy: strict-origin-when-cross-origin
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Permitted-Cross-Domain-Policies: none
Access-Control-Allow-Origin: https://site-that-you-trust.com
Expect-CT: max-age=86400, enforce, report-uri="https://myserver/report"
Feature-Policy: camera 'none'; microphone 'none'; speaker 'self'; vibrate 'none'; geolocation 'none'; accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; encrypted-media 'none'; gyroscope 'none'; magnetometer 'none'; midi 'none'; payment 'none'; picture-in-picture 'none'; usb 'none'; vr 'none'; fullscreen *;
<httpRuntime enableVersionHeader="false" />
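If your framework does not set these headers for you, set them once, centrally, rather than per page. As a minimal sketch using only Python's standard WSGI conventions (the values mirror the examples above; site-that-you-trust.com is a placeholder, and the middleware name is my own):

```python
# Security headers applied centrally by a WSGI middleware, so every
# response carries them. Values mirror the examples in this article;
# "site-that-you-trust.com" is a placeholder for a real trusted origin.

SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'; block-all-mixed-content",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Permitted-Cross-Domain-Policies": "none",
    "Access-Control-Allow-Origin": "https://site-that-you-trust.com",
}

def add_security_headers(app):
    """Wrap a WSGI app so every response includes the headers above."""
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            for name, value in SECURITY_HEADERS.items():
                # Don't clobber a value the application set deliberately.
                if name.lower() not in present:
                    headers.append((name, value))
            return start_response(status, headers, exc_info)
        return app(environ, start_with_headers)
    return wrapped
```

Most frameworks offer an equivalent central hook (an ASP.NET web.config section, a Flask after_request handler, and so on); the point is that the headers are applied in one place for the whole application.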

A special note on the X-XSS-Protection header: this header is now considered legacy and deprecated. Its implementation has had vulnerabilities of its own, it exists only for backward compatibility, and as of 2019 it was being actively abused in older browsers. It is no longer advisable to use this security header.

Pushing Left, Like a Boss — Part 5.2 — Use Safe Dependencies

<rant>

Automating this should be part of every CI/CD pipeline. You should also automate scanning of your source code repository on a regular basis. Everyone should do this, for every project, no matter how small. It's so easy, and such a huge win for the security of your applications, that there is no excuse not to do it.

</rant>
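The gist of such a pipeline step can be sketched in a few lines of Python. The KNOWN_VULNERABLE table below is made up for illustration; a real pipeline step uses a dependency-checking tool that pulls its advisories from a vulnerability database such as the CVE list:

```python
# Sketch of a CI/CD dependency-scanning step: compare pinned dependencies
# against known-vulnerable versions. The advisory data here is entirely
# hypothetical; real tools fetch it from a vulnerability database.

KNOWN_VULNERABLE = {
    ("example-lib", "1.0.0"): "CVE-0000-0000 (hypothetical)",
}

def scan(pinned_requirements):
    """Return (name, version, advisory) for every vulnerable pin."""
    findings = []
    for line in pinned_requirements:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries
        name, version = line.split("==", 1)
        advisory = KNOWN_VULNERABLE.get((name.lower(), version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

for name, version, advisory in scan(["requests==2.31.0", "example-lib==1.0.0"]):
    print(f"VULNERABLE: {name}=={version}: {advisory}")
```

In a real pipeline the step exits with a non-zero status whenever it has findings, which fails the build and forces the vulnerable dependency to be upgraded before release.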

** The CVE list of vulnerabilities is not exhaustive. Many nation-states (including the one you live in), as well as criminal, terrorist, hacktivist, and other malicious groups, actors or organizations do not report the zero-days that they find (vulnerabilities that are not known to the public and for which there is no known patch), in order to keep them for use in their own nefarious activities. Just because you have scanned your third-party components for known vulnerabilities does not mean they are bulletproof. Sorry, folks.

Photo: #WOCTechChat

Pushing Left, Like a Boss — Part 5.1 — Input Validation, Output Encoding and Parameterized Queries


Input Validation

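A minimal sketch of allow-list input validation (the field and the regex are illustrative): accept only values that match what the field is supposed to contain, and reject everything else, rather than trying to enumerate "bad" characters.

```python
import re

# Allow-list validation: a Canadian postal code field only ever contains
# letter-digit-letter digit-letter-digit, so reject anything else outright.
POSTAL_CODE = re.compile(r"^[A-Za-z]\d[A-Za-z] ?\d[A-Za-z]\d$")

def validate_postal_code(value):
    """Return the normalized postal code, or raise ValueError."""
    if not isinstance(value, str) or not POSTAL_CODE.match(value.strip()):
        raise ValueError("invalid postal code")
    return value.strip().upper()
```

The same pattern applies to every field: define exactly what valid input looks like (type, length, format, range), validate on the server side, and fail safe by rejecting anything that does not match.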

Output Encoding

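A minimal output-encoding sketch using Python's standard library: encode untrusted data for the context it is being written into (here, HTML), so the browser renders it as inert text instead of executing it.

```python
import html

# Untrusted data headed for an HTML page must be HTML-encoded first;
# quote=True also encodes quotation marks, for use inside attributes.
untrusted = '<script>alert("xss")</script>'
encoded = html.escape(untrusted, quote=True)
print(encoded)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Note that the correct encoding depends on the output context: HTML body, HTML attribute, JavaScript, CSS and URL contexts each need their own encoding, which is why templating frameworks that encode automatically are the safer default.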

Parameterized Queries
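A minimal parameterized-query sketch using the standard library's sqlite3 module (the table and data are illustrative): the user-supplied value is passed as a bound parameter and never concatenated into the SQL string, so an injection payload is treated as a literal value.

```python
import sqlite3

# An in-memory database with one illustrative row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

user_input = "alice' OR '1'='1"  # a classic injection attempt
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?",  # placeholder, not concatenation
    (user_input,),
).fetchall()
print(rows)  # [] -- the payload is matched as a (nonexistent) name, not as SQL
```

Had the query been built with string concatenation, the same input would have returned every row in the table; with the bound parameter it returns nothing.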

Pushing Left, Like a Boss: Part 4 — Secure Coding

In the previous article in this series we discussed secure design concepts such as least privilege, reducing attack surface, failing safe and defense in depth (layered protection). In this article, we are going to talk about secure coding principles which could be used to help guide developers when implementing security controls within the software they create.

Coding Phase of the SDLC 

What is “secure coding”?

The one thing you should always remember when coding defensively is that you need to assume users will do things that you did not plan. Strange things that you would never have dreamed of. Trust me.
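As a small sketch of what that assumption looks like in practice (the field name and limits are illustrative): treat a user-submitted "quantity" as something that could be negative, enormous, or not a number at all, and constrain it instead of trusting it.

```python
# Defensive parsing of an untrusted "quantity" field: coerce carefully,
# then enforce explicit bounds, raising on anything unexpected.

def parse_quantity(raw, maximum=100):
    """Return a quantity between 1 and maximum, or raise ValueError."""
    try:
        value = int(str(raw).strip())
    except (ValueError, TypeError):
        raise ValueError("quantity must be a whole number")
    if not 1 <= value <= maximum:
        raise ValueError(f"quantity must be between 1 and {maximum}")
    return value
```

The caller then handles the ValueError by failing safe (rejecting the request with a helpful message), rather than letting an order for -3 or 999999999 items flow through the system.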

Why is ‘secure coding’ important?

I’m not going to answer that. If you are reading this blog, you already understand why secure coding is important. I think the real question here is: “How do I explain how important it is to developers? To project managers? To executives? How do I get them to give me time in the project for it?” I’m asked this quite often, so let me give you a few options.

  1. You can explain using statistics and numbers, to predict the financial implications of a major security incident or breach. You can provide a cost/benefit analysis showing how much less an AppSec program would cost, and how your business would receive a good return on investment (ROI). I used this approach and was approved to launch my very first AppSec program.
  2. You can explain the business implications of a major incident, the loss of reputation, financial loss or legal implications that would result from a major incident or data breach. I tend to use this when trying to justify large changes such as creating a disaster recovery plan/site, or an AppSec advocacy program, or giving developers security tools (that tends to scare the pants off of most management types).
  3. You can create a proof of concept to explain a current vulnerability you have in your product, to show them directly the consequences that can occur. This might lose you some friends, but it will certainly get your point across.
  4. You can sit down with whoever is blocking you and have a real discussion about why you are worried about your current security posture. Explain it to them like they are a highly intelligent person, who happens to not know much about security (which means respectfully, and with enough detail that they understand the gravity of the situation.) It is at this point that I would tell them that I need them to sign off on the risk if we do not correct the problem, and that I can no longer be responsible for this. It is at this point that either 1) I get what I want or 2) I know this is no longer my responsibility.

If I get to step 4, I start to look for a new job.

 

Why are users the absolute worst?

Broken security control

Pushing Left, Like a Boss: Part 1

In all of the talks and articles I have ever written and all the advice I have ever given, I am always telling people they should “push left”. When security people say they want to “shift left”, they are referring to the left side of the System Development Life Cycle (SDLC), which is the way software engineers describe the methodology or process for making software. I say “push” because sometimes I am not invited to “shift”.

If you look at the image below, the further “left” you look, the earlier you are in the process. When we say we want to “push left”, we mean we want to start security at the very beginning and perform security in every step of the SDLC.

You might be reading this and thinking “Of course! Doesn’t everyone do that? It’s so obvious.” But from what I’ve seen in the industry, I have to tell you, it’s not obvious. And it’s definitely not what software developers are being taught in school.

As someone who was previously a web application penetration tester, I was generally asked to come in during either the testing phase, or after the product was already in production. At many places where I was hired, I was also the very first person to look at the security of the application at all. When I was not the first person, often the person before me had been an auditor or compliance person, who had sent the developers a policy that was practically unreadable, generally unactionable, and often left the developers confused. As you can probably tell, I have not had productive experiences when interacting with compliance-focused security professionals.

Due to this situation, during engagements I looked AWESOME! I would always find a long list of things that were wrong. Not because I’m an incredibly talented hacker, but because no one else had looked. I seemed like a hero, swooping in at the last minute and saving the day. When in fact, coming in late meant the dev team didn’t have time to fix most of the problems I found, and that the ops team would have to scramble to try to patch servers or fix bad configs. They almost never had time to address all of the issues before the product went live, which meant an insecure application went out onto the internet.

As a security person, this is not a good experience. This is never the outcome I am hoping for. I want people releasing bulletproof apps.

You might be thinking: why are developers writing insecure code in the first place? Why can’t they just “make secure applications”? The answer is somewhat complicated, but bear with me.

One of the reasons is that most software development and engineering programs in university or college do not teach security at all. If they do, it’s often very short, and/or concentrates on enterprise security or network security, as opposed to the security of software (keeping in mind that they are learning how to make software, this should seem strange). Could you imagine if someone studied to become an electrician, and they were taught to twist the wires together and then push them directly into the wall? Skipping the step of wrapping them in electrical tape (safeguard #1) and then twisting a marrette around it (safeguard #2)? This is what is happening in universities and colleges all over the world, every day. They teach developers to write “Hello world” to the screen, but skip teaching them how to ensure the software that they are creating is safe to use.

Another reason we are having issues securing software is that traditional security teams have focused on enterprise security (locking down admin rights, ensuring you don’t click on phishing emails and that you reset your password every X number of days) and network security (ensuring we have a firewall and that servers are patched and have secure settings). Most people who work on security teams have a networking or system administrator background, meaning many of them don’t know how to code, and don’t have experience performing the formalized process of developing software. The result of this is that most people on a traditional security team don’t have the background to make them a good application security professional. I’m not saying it’s impossible for them to learn to become an AppSec person, but it would be quite a bit of work. More importantly, this means we have security teams full of people who don’t know what advice to give to a developer who is trying to create a secure application. This often leaves developers with very few resources as they try to accomplish the very difficult task of creating perfectly safe software.

The last reason that I will present here is that application security is HARD. When you go to the hardware store and buy a 2×4 that is 8 feet long, it should be the exact same no matter where you buy it. If you get instructions for how to build a shed, you know that each thing you buy to build it will be standardized. That is not so with software. When building software, developers can (and should) use a framework to help them write their code, but each framework is changing all the time, and upgrading regularly can be tricky. On top of that, most apps contain many libraries and other third-party components, and often those contain vulnerabilities that users are unaware of (but still responsible for). Add to this the fact that every software application they build is a custom snowflake, meaning they can’t just copy what someone else made. And all the while, attackers are attacking websites on the internet constantly, all day, every day. This makes creating software that is secure a very difficult task.

With this in mind, I’m writing a series of blogs on the topic of “Pushing Left, Like a Boss”, which will outline what different types of application security activities exist that you could implement in your workplace, and what types of activities that developers can start doing themselves, right now, to start creating more secure code.

In part 2 of this series we will discuss Security Requirements.