Pushing Left, Like a Boss! — Part 3: Secure Design

In the previous article in this series we discussed security requirements. When making any product, requirements are a must, and building security into your requirements from the beginning is the first step toward ensuring your final product will be of high quality. In this article we will discuss the next phase of the system development life cycle: Design.

As you recall, the system development life cycle generally looks like the image below:

System Development Life Cycle (SDLC)

When designing software applications, software architects need to worry not only about what the customer has asked for (business requirements) and the functional requirements (user requirements, scheduling, system requirements), but also about non-functional requirements that are often taken for granted, such as usability, quality, and of course, security.
Unfortunately, when we design applications we often forget to think of all the angles, focusing more on ensuring it works than on ensuring that it only works the way we intended. This is where threat modelling comes in: the process of identifying potential threats to your business and application, and ensuring that proper mitigations are in place. This article will focus on the concepts we need to consider when designing for security, and in a future article we will discuss threat modelling.

*Secure by design, in software engineering, means that the software has been designed from the ground up to be secure. Malicious practices are taken for granted and care is taken to minimize impact when a security vulnerability is discovered or on invalid user input.* — Wikipedia

Design Flaw vs. Security Bug

A security flaw is an error in the design of the application that allows a user to perform actions they should not be allowed to perform: malicious or damaging actions. This is a flaw, a problem with the design. We use secure design concepts and security project requirements, and we perform threat modelling, in an attempt to avoid or minimize opportunities for design flaws.

A security bug is an implementation issue, a problem with the code, that allows a user to use the application in a malicious way. We perform code review and security testing (many types, during different stages of the project), provide secure coding training, and use secure coding concepts and guidelines in order to protect against security bugs.

Tanya Janca recording a lesson for We Hack Purple Academy.

Discovering a flaw late

The later you fix a problem in the SDLC, the more it will cost. An article from Slashdot states that a bug found during requirements may cost $1 to fix, while the same bug costs $10 in design, $100 in coding, and $1,000 in testing or release. There are many different estimates of cost all over the internet, but instead of using ‘guesstimates’ to try to explain the idea, let me tell you a story.

Imagine you and your spouse have been saving for years and you are having your dream home built for you. It’s almost done, they are putting on the handles for the cupboards, and rolling out the carpets. It’s at this point that you look at your partner and say, “Oh honey, we have seven children, maybe we should have asked for more than one bathroom?”

Adding a bathroom this late in the construction will cost quite a bit and make your project late, but you know you cannot continue with only one bathroom. You speak to the construction company and they explain that you will have to sacrifice a bedroom to add two more bathrooms, or make the living room half the size. It will also mean your family can’t move in for another month. It will cost an arm and a leg.

This is the same situation for software. When you make design changes last minute they aren’t always pretty, they almost always make you miss deadlines, and they are extremely expensive.

The “not enough bathrooms” problem is something that threat modelling would have found. It is also something that secure design concepts might have made visible much earlier on. This problem is the reason that we need to begin security at the start, not the end, of all projects. This is why we need to ‘push left’.

Secure Design Concepts

With this in mind, let’s talk about several secure design concepts that should be discussed when designing software applications.

Defence in Depth

(using multiple layers of security precautions)

The idea of defence in depth is that security should be applied in layers; one level of defence is not necessarily enough. What happens if an attack gets past your Web Application Firewall (WAF)? I certainly hope you have secure code back there. It just doesn’t make sense to only use one precaution if you can use two or more (assuming it’s not “too expensive”).

For instance, if you call the same input sanitization function every time, why not call it for data from the database? Who knows if whoever put it there sanitized it first? Maybe something was missed? Maybe data was dumped in there from a 3rd party? Sanitizing it as it comes out of the database will take fractions of a millisecond. I wouldn’t call that expensive.
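
To make this concrete, here is a minimal sketch in Python of what “sanitize on the way in and on the way out” could look like. The `sanitize` helper and the field names are hypothetical; in a real application you would use whatever vetted sanitization or encoding function your framework provides.

```python
import html

def sanitize(value: str) -> str:
    # Hypothetical helper: trim whitespace and HTML-encode special characters.
    # In a real app, prefer the sanitization/encoding functions in your framework.
    return html.escape(value.strip())

def save_comment(raw_comment: str) -> str:
    # Layer 1: sanitize user input as it arrives.
    return sanitize(raw_comment)

def render_comment(row: dict) -> str:
    # Layer 2: sanitize again on the way out of the database. We don't know
    # whether whoever wrote this row sanitized it first, or whether it was
    # bulk-loaded from a third party, so we don't trust it either.
    return sanitize(row["comment"])
```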

Minimize Attack Surface

(removing unused resources and code)

The smaller your app, your network, or even your country, the less you have to worry about protecting. If you haven’t released that new feature yet, why do you have the code in your app, but the button “hidden”? If you have a secret page, attackers could find it. If you have a ton of your code commented out, why is it still in the final product? If you have virtual machines or other resources on your network, but you aren’t using them, why are they still there (and likely on the internet)? Doing regular “clean up” of your resources, removing commented-out code, and removing unused or “secret” features are all great ways to ensure there are fewer options for malicious actors to attack.

Least Privilege

Giving everyone exactly as much access and control as they need to do their jobs, but nothing more, is the concept of least privilege. Why would a software developer need domain admin rights? Why would an administrative assistant need administrative controls on their PC? This is no different for software. If you are using Role Based Access Control (RBAC) to give users different abilities and powers within your application, you wouldn’t give everyone access to everything, would you? Of course not. Because the more people with access, the more risk there is of someone causing a security issue.
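
As a small illustration, here is a hedged Python sketch of a deny-by-default permission check; the roles and permission names are made up for the example, and a real application would load them from configuration or an identity provider.

```python
# Hypothetical role-to-permission mapping, for illustration only.
ROLE_PERMISSIONS = {
    "reader": {"view_report"},
    "editor": {"view_report", "edit_report"},
    "admin":  {"view_report", "edit_report", "manage_users"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Deny by default: unknown roles and unknown permissions get nothing.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("editor", "edit_report")       # editors can edit
assert not is_allowed("reader", "manage_users")  # readers get only what they need
```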

This means several things in regard to developing software, and some of it you’re probably not going to like.

Not only does the software itself need to follow the rules of least privilege, but least privilege must also apply to the people creating the software. Software developers are a huge risk to IT security: if one has malicious intent, or has a bad day and acts carelessly, and they have been given too much access, the consequences can be severe.

Let’s leave that there for now and continue further into the secure design rabbit hole.

Fail Safe or Fail “Closed”

Whenever something fails in your application it must always fail to a known state, preferably its original one. Let’s say you’ve run a transaction to transfer money from one account to another, and there’s an error partway through; you certainly wouldn’t want that money to be in limbo. You would want the money returned to the original account, the user given an error telling them to try again, and the system to log whatever happened. You would not want it to fail into an unknown state, uncertain of where the money is, whether it was transferred multiple times, or whether it disappeared altogether. Failing safe means rolling back the transaction and starting again, and handling errors gracefully.
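
Here is a minimal sketch of that idea using Python and SQLite; the `accounts` table is hypothetical, and the point is simply that the whole transfer either commits or rolls back to the original, known state.

```python
import logging
import sqlite3

log = logging.getLogger("payments")

def transfer(conn: sqlite3.Connection, src: int, dst: int, amount: int) -> None:
    try:
        # The connection used as a context manager commits on success
        # and rolls back automatically if anything inside raises.
        with conn:
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except sqlite3.Error:
        # The transaction was rolled back; the data is in its original state.
        # Log what happened, show the user a friendly error, and let them retry.
        log.exception("transfer failed; transaction rolled back")
        raise
```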

Use Existing Security Controls

(do not write your own)

I’m sure that many of you were just like me when I was a new software developer: I thought I was the bee’s knees. I was sure that whatever I wrote was THE BEST version ever created. The fastest and definitely the most efficient. But now that I’ve got a few more years under my belt, and perhaps a bit of maturity, I’ve realized that it’s usually best to leave certain things to the experts, and only write custom code when it is truly needed. This means if you are going to perform encryption, input sanitization, or output encoding, use keys or connection strings, or do anything else that would be considered a security control, you should use the one available to you in your framework or platform.
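
For example, rather than inventing your own encryption routine, you could lean on a vetted library. This sketch assumes the widely used Python `cryptography` package; the exact control you reach for will depend on your framework and platform.

```python
from cryptography.fernet import Fernet

# A vetted, authenticated encryption recipe: no home-made ciphers.
key = Fernet.generate_key()      # in real life, load the key from a secret store
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"card ending in 4242")
plaintext = fernet.decrypt(ciphertext)   # raises InvalidToken if tampered with
assert plaintext == b"card ending in 4242"
```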

Hardcoding (not ever, not never)

Just don’t.

Careful Comments

When you put comments in your code, ensure that you never save passwords, connection strings, or anything else sensitive. This includes your email address, insider-information about your application, and anything else that could allow an attacker a leg up in regards to attacking your application or organization.

Re-authentication for Important Transactions (avoiding CSRF)

Cross-Site Request Forgery (CSRF) is a well-known vulnerability, documented extensively by OWASP, in which an attacker convinces the victim to click on a link that triggers a transaction within an application (let’s say the purchase of a fancy new TV, to be shipped to the attacker). Because the user was already logged into that account (who doesn’t leave their browser open for days on end?), the vulnerable web application completes the transaction (purchase) and the user is none the wiser, until the bill arrives and it is already too late.

The best way to defend against this is to ask the user for something that only the user could provide, before every important transaction (purchase, account deactivation, password change, etc.). This could be asking the user to re-enter their password, complete a captcha, or for a secret token that only the user would have. The most common approach is the secret token.
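
Here is a hedged sketch of the secret-token approach in Python; the `session` dictionary stands in for whatever server-side session storage your framework provides, which most likely already has anti-CSRF support built in.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate an unguessable token, store it server-side in the session,
    # and embed it in a hidden field of every form for important transactions.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # Constant-time comparison; reject the transaction if missing or mismatched.
    return bool(expected) and hmac.compare_digest(expected, submitted)
```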

Pro Tip: users hate captchas.

Authorization

Always use the authorization functionality available to you in your framework. I know we covered this before, but there’s a reason why everyone does it in one of the following ways:

  • Role-Based
  • Claims-Based
  • Policy-Based
  • Resource-Based

Segregation of Production Data

Your production data should not be used for testing, development, or any other purpose than what the business intended. This means a masked (anonymized) dataset should be used for all development and testing, and only your ‘real’ data should be in prod.

This means fewer people will have access to your data: a reduced attack surface. It also means fewer employees peeking at personal data. Imagine if you had been using a popular messaging platform and found out that employees were reading your messages, which you thought were private. This would be a violation of your privacy, and most likely also of the user agreement. Segregation of production data would eliminate most opportunities for this type of threat.
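
A tiny Python sketch of the idea, using made-up field names: before data ever leaves production for a test or dev environment, the directly identifying parts are replaced or pseudonymized, while non-identifying fields keep their shape for realistic testing.

```python
import hashlib

def mask_record(record: dict) -> dict:
    # Replace direct identifiers and pseudonymize the key so the shape of the
    # data survives for testing, but the real person is not exposed.
    return {
        "id": hashlib.sha256(str(record["id"]).encode()).hexdigest()[:12],
        "name": "Test User",
        "email": "user@example.com",
        "purchase_total": record["purchase_total"],  # non-identifying fields can stay
    }

prod_row = {"id": 4812, "name": "Ada Lovelace",
            "email": "ada@example.org", "purchase_total": 99.50}
dev_row = mask_record(prod_row)   # this is what leaves production, not prod_row
```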

Threat Modelling (affectionately known as ‘evil brainstorming’)

Threat modelling, in its simplest form, is a brainstorming session aimed at identifying all the threats that your application, system or product will likely face. Will people try to intercept your data and sell it on the dark web? Would it have any value if they did? What harm could come if it was sold? How can we protect against this? These are some of the types of questions you may find yourself asking during a session. You would then test your app and review its design to ensure you have properly mitigated these threats.

Threat modelling is such a large topic that it merits its own blog post, as mentioned earlier.

Protection of Source Code

I realize that many people will argue with me that “Security Through Obscurity” is not a true defence tactic, but I beg to differ. It should never be your only defence, but if it is one of many, why not? Many companies do not put their code in open repositories in order to make it much more difficult for competing companies to try to replicate their products. Yes, a malicious actor can try to reverse engineer Windows 10, but who has that kind of time?

Is this defence foolproof? Certainly not. Would I put my code for an unreleased and/or highly valuable product in a public GitHub repo? I think not.

Error Handling

In order to make our applications appear professional we should always catch our errors; no one wants to see a stack trace all over the screen. But there are security concerns to be considered as well.

When a stack trace or unhandled error is shown to the user, it gives details to malicious actors as to what technology stack you are running or other information that could potentially help them plan a better attack against you.

Always catch your errors.
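
A small Python sketch of the idea, with a hypothetical `process` function standing in for your real business logic: the details go to the server-side log, and the user only ever sees a generic message and a reference number.

```python
import logging
import uuid

log = logging.getLogger("app")

def process(request):
    # Hypothetical stand-in for real business logic.
    raise RuntimeError("database connection refused")

def safe_handler(request):
    try:
        return process(request)
    except Exception:
        error_id = uuid.uuid4().hex[:8]
        # Full details, including the stack trace, go to the server-side log only.
        log.exception("unhandled error, reference %s", error_id)
        # The user never sees the stack trace or technology details.
        return {"status": 500, "message": f"Something went wrong (ref {error_id})."}
```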

Logging and Alerting

We log security issues so that others may have the joy of auditing later… All kidding aside, if important things are not logged, then when there is a security investigation, investigators have nothing to work with. Alerting ensures people know about problems in a timely manner.

Ensure you log anything important that an investigator may need, but be careful not to log any sensitive information such as Social Insurance Numbers (SINs), passwords, etc.
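
As an illustration, here is a short Python sketch of logging a security event without sensitive data and raising an alert when it repeats; the alerting hook is hypothetical, since in practice it would page on-call staff or feed your SIEM.

```python
import logging

security_log = logging.getLogger("security")

def notify_security_team(message: str) -> None:
    # Hypothetical alerting hook; wire this to paging or your SIEM in real life.
    security_log.critical("ALERT: %s", message)

def record_failed_login(username: str, source_ip: str, attempt_count: int) -> None:
    # Log who, from where, and how many times; never log the password that was tried.
    security_log.warning("failed login for user=%s from ip=%s (attempt %d)",
                         username, source_ip, attempt_count)
    if attempt_count >= 5:
        notify_security_team(f"possible brute force against {username} from {source_ip}")
```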

Sensitive Data

Label all of your applicable data as sensitive when you design your data formats and ensure the application treats it that way. Design your app with protecting sensitive data in mind.
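
One hedged way to make “treat it as sensitive” concrete in code, with made-up field names: keep sensitive fields out of string representations and logs by default.

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    name: str
    # repr=False keeps the sensitive field out of the default string
    # representation, so a stray print() or log line does not leak it.
    health_number: str = field(repr=False)

p = Patient(name="A. Example", health_number="9876-543-210")
print(p)   # prints Patient(name='A. Example'); the health number is not shown
```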

Up next in part 4 we will discuss secure coding concepts that can be adopted in order to avoid common security bugs (implementation issues).

Pushing Left, Like a Boss! — Part 2: Security Requirements

In the previous article in this series we discussed why ensuring the security of software is an elusive task; application security is hard to achieve with how the InfoSec and software development industries and education system(s) currently work. We talked about the importance of starting security activities early in the SDLC and formalizing them as part of your process. But what ARE these activities? How do they work, and when do we do what? That, dear reader, is what this article is about.

As you recall from the previous article, the system development life cycle generally looks like the image below:

System Development Life Cycle (SDLC)

Whether you are doing Agile, Waterfall, or if you have a DevOps culture at your office, you always need to know what you are building (requirements), you need a plan (design), you need to code it (the fun part), testing is obviously a must, and then you release it out into the wild (hopefully you also maintain and monitor it as well, which is all part of the “release” phase). Each one of these phases should involve security activities. Let’s look a little deeper, shall we?

Requirements

When writing requirements there will always be security questions, such as: does it contain sensitive or Personally Identifiable Information (PII)? Where and how is the data being stored? Will this application be available to the public (Internet) or internally only (intranet)? Does this application perform sensitive or important tasks (such as transferring money, unlocking doors or delivering medicine)? Does this application perform any risky software activities (such as allowing users to upload files or other data)? What level of availability do you need? 99.999% uptime? These and many more are the questions that security professionals should be asking when assisting with requirements gathering and analysis.

Here is a list of default security requirements that I would suggest for most software development projects:

  • Encrypt all data at rest (while in the database)
  • Encrypt all data in transit (on its way to and from the user, the database, an API, etc)
  • Trust no one: validate (and sanitize if special circumstances apply) all data, even data from your own database
  • Encode (and escape if need be) all output
  • Scan all libraries and third-party components for vulnerable components before use, and regularly after use (new vulnerabilities and versions are released all the time). To do this you can use any one of the following tools: OWASP Dependency Check, Snyk, Synopsys, etc.
  • Use all applicable security headers
  • Hash and salt all passwords. Make the salt at least 28 characters.
  • Only allow your site to be accessible via HTTPS. Redirect from HTTP to HTTPS.
  • Ensure you are using the latest version of TLS for encryption (currently 1.3)
  • Never hardcode anything. Ever.
  • Never put sensitive information in comments, ever. This includes connection strings and passwords.
  • Use all the security features within your framework, for instance session management features or input sanitization functions, never write your own.
  • Use only the latest version of your framework of choice, and keep it up to date
  • If performing a file upload, ensure you are following the advice from OWASP for this highly risky activity. This includes scanning all uploaded files with a scanner such as AssemblyLine, available for free from the Communications Security Establishment of Canada (CSE).
  • Ensure all errors are logged (but not any sensitive information), and if any security errors happen, trigger an alert
  • All sanitization must be performed server-side, using a whitelist (not blacklist) approach
  • Security testing must be performed on your application before being released
  • Threat modelling must be performed on your application before being released
  • Code review (specifically of security functions) must be performed on your application before being released
  • If the application errors it must catch all errors and fail safe or closed (never fail into an unknown state)
  • Specifics on role based authorization
  • Specifics on what authentication methods will be used. Will you use Azure Active Directory? There are many options, and it’s a good idea to ensure whatever you choose works with how you are managing identity for your enterprise and/or other apps
  • Only using parameterized queries, never inline SQL (see the sketch after this list)
  • Forbid passing variables that are of any importance in the URL. For example, you can pass which language (“en”, “fr”, “sp”) but not your userid, bank account number or anything of any importance within your application or your life
  • Ensure your application enforces least privilege, especially in regard to accessing the database or APIs
  • Minimize your attack surface whenever possible
  • Allow users to cut and paste into the password field, which will allow for use of password managers. Disable password autocomplete features in browsers, to ensure users do not save their passwords into the browser.
  • Disable caching on pages that contain sensitive information
  • Ensure passwords for your application’s users are long, but not necessarily complex. The longer the better; encourage use of passphrases. Do not force users to change their passwords after a certain amount of time, unless a breach is suspected. Verify that new users’ passwords have not previously been in a breach by comparing SHA-1 hashes using the HaveIBeenPwned API service.
  • All connection strings, certificates, passwords and secrets must be kept in a secret store, such as Vault by HashiCorp.
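
To illustrate the parameterized-query requirement above, here is a minimal Python/SQLite sketch; the `users` table is hypothetical, and the same pattern exists in every mainstream database driver and ORM.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # Parameterized: the driver treats `email` strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE email = ?", (email,)
    ).fetchone()

# Never build queries with string concatenation; attacker-controlled input
# would become part of the SQL statement itself (SQL injection):
# conn.execute("SELECT id, name FROM users WHERE email = '" + email + "'")
```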

Depending upon what your application does, you may want to add more requirements, or remove some. The point of this article is to get you thinking about security while you are writing up your requirements. If developers know from the beginning that they need to adhere to the above requirements, you are already on your way to creating more secure software.

Up next in part 3 we will discuss secure design.

Pushing Left, Like a Boss: Part 1

In all of the talks and articles I have ever written and all the advice I have ever given, I am always telling people they should “push left”. When security people say they want to “shift left”, they are referring to the left side of the System Development Life Cycle (SDLC), which is the way software engineers describe the methodology or process for making software. I say “push” because sometimes I am not invited to “shift”.

If you look at the image below, the further “left” you look, the earlier you are in the process. When we say we want to “push left”, we mean we want to start security at the very beginning and perform security in every step of the SDLC.

You might be reading this and thinking “Of course! Doesn’t everyone do that? It’s so obvious.” But from what I’ve seen in industry, I have to tell you, it’s not obvious. And it’s definitely not what software developers are being taught in school.

As someone who was previously a web application penetration tester, I was generally asked to come in during either the testing phase, or after the product was already in production. At many places that I was hired I was also the very first person to look at the security of the application at all. When I was not the first person, often the person before me had been an auditor or compliance person, who had sent the developers a policy that was practically unreadable, generally unactionable, and often left the developers confused. As you can probably tell, I have not had productive experiences when interacting with compliance-focused security professionals.

Due to this situation, during engagements I looked AWESOME! I would always find a long list of things that were wrong. Not because I’m an incredibly talented hacker, but because no one else had looked. I seemed like a hero, swooping in at the last minute and saving the day. When in fact, coming in late meant the dev team didn’t have time to fix most of the problems I found, and that the ops team would have to scramble to try to patch servers or fix bad configs. They almost never had time to address all of the issues before the product went live, which meant an insecure application went out on the internet.

As a security person, this is not a good experience. This is never the outcome I am hoping for. I want people releasing bulletproof apps.

You might be thinking: why are developers writing insecure code in the first place? Why can’t they just “make secure applications”? The answer is somewhat complicated, but bear with me.

One of the reasons is that most software development and engineering programs in university or college do not teach security at all. If they do, it’s often very short, and/or concentrates on enterprise security or network security, as opposed to the security of software (keeping in mind that they are learning how to make software, this should seem strange). Could you imagine if someone studied to become an electrician, and they were taught to twist the wires together and then push them directly into the wall? Skipping the step of wrapping them in electrical tape (safeguard #1) and then twisting a marrette around it (safeguard #2)? This is what is happening in universities and colleges all over the world, every day. They teach developers to write “Hello world” to the screen, but skip teaching them how to ensure the software that they are creating is safe to use.

Another reason we are having issues securing software is that traditional security teams have focused on enterprise security (locking down admin rights, ensuring you don’t click on phishing emails and that you reset your password every X number of days) and network security (ensuring we have a firewall and that servers are patched and have secure settings). Most people who work on security teams have a networking or system administrator background, meaning many of them don’t know how to code, and don’t have experience performing the formalized process of developing software. The result of this is that most people on a traditional security team don’t have the background to make them a good application security professional. I’m not saying it’s impossible for them to learn to become an AppSec person, but it would be quite a bit of work. More importantly, this means we have security teams full of people who don’t know what advice to give to a developer who is trying to create a secure application. This often leaves developers with very few resources as they try to accomplish the very difficult task of creating perfectly safe software.

The last reason that I will present here is that application security is HARD. When you go to the hardware store and buy a 2×4 that is 8 feet long, it should be exactly the same no matter where you buy it. If you get instructions for how to build a shed, you know that each thing you buy to build it will be standardized. That is not so with software. When building software, developers can (and should) use a framework to help them write their code, but each framework is changing all the time, and upgrading regularly can be tricky. On top of that, most apps contain many libraries and other 3rd party components, which often contain vulnerabilities that their users are unaware of (but still responsible for). Add to this the fact that every application they build is a custom snowflake, meaning they can’t just copy what someone else made… And all the while, attackers are attacking websites on the internet constantly, all day, every day. This makes creating software that is secure a very difficult task.

With this in mind, I’m writing a series of blogs on the topic of “Pushing Left, Like a Boss”, which will outline the different types of application security activities you could implement in your workplace, and the types of activities developers can start doing themselves, right now, to start creating more secure code.

In part 2 of this series we will discuss Security Requirements.

One Year Anniversary of We Hack Purple

One year ago, I decided to start my own company. It’s called We Hack Purple.

When I decided to start this company, I wasn’t actually 100% sure what I wanted to do. I had found myself suddenly unemployed, because my previous startup had failed. Unsure of my next steps, I did what any Internet nerd would do: I posted on Twitter that I didn’t know what I wanted to do with myself and asked if anyone wanted to make suggestions. Person after person after person asked me to start a training company. Several people offered me jobs doing application security or developer relations work on behalf of their companies, but the most common request was “Will you come in and train our devs?”. Since I love public speaking, teaching and mentoring, it seemed like it could be a good fit.

At first, I started by creating a small online community, and posting content for them to read about application security. Before I knew it I had over 100 members! After about 6 months I decided that I wanted to find a new platform for us, one that would allow everyone to talk to each other, not just me. I didn’t want it to just be “The Tanya Show”, I wanted it to be a community where everyone could share and shine. I also wanted a safe place for people to talk about everything to do with security, and know that no one would harass them or be, well, “Twitter” at them. Last week we just relaunched the We Hack Purple Community, with even more members than before, and we now have a mobile application, chat rooms, over 200 articles of content, a content drip, and real human moderators! We also have events planned throughout 2021, and we are planning so much more for the future.

After creating the online community, in April 2020 I released my first online course, titled AppSec 101. It was a hit; we sold over 100! But as we sold them, my perfectionism kicked in; I didn’t think it was ‘good enough’ for our students. My team and I decided to re-record the entire thing and add more quizzes, samples, stories, videos and articles, as well as a textbook and a certification for finishing all three courses. We call it Application Security Foundations, and it’s available on the brand-new We Hack Purple Academy!

(We also quietly announced 2 weeks ago that we are now offering live virtual training, and I am already realizing that I probably need to hire another trainer. It’s a pretty exciting place to be, when there is more demand than supply.)

Another very exciting thing that happened in the past 12 months is that my book was published, Alice and Bob Learn Application Security. It became a bestseller on Amazon in the first week, and We Hack Purple has sold hundreds of copies itself, to our clients and customers. The book has opened a lot of doors for me, and the company, but more importantly, it has helped a lot of people learn how to make more secure software. I could not be happier with the wonderful response from readers. I am starting free online book discussions, on March 20th, 2021. If you want the schedule, and invites to the event, all for free, sign up here.

We Hack Purple also has a swag store now, so you can wear a ‘SheHacksPurple’, ‘He Hacks Purple’, ‘I Hack Purple’ or ‘We Hack Purple’ T-shirt, hoodie, toque, socks or even a baby onesie! Honestly, the most exciting part for me is that people have actually bought them. Also, that I finally have a cute security hoodie that actually fits my lady curves.

In the past year we also started the We Hack Purple Podcast, in August. No one tells you before you start a podcast that the absolute best part is going to be having the chance to meet your amazing guests. It has been such a wonderful opportunity for me to be able to meet and spend time with the outstanding individuals who form our guestlist. Also, it’s been really fun.

Another amazing thing that happened in the past 12 months is people reaching out and asking if they could buy training courses for us to give away to people from underrepresented groups. We have quietly been doing this since the very beginning, but we aren’t being quiet about it anymore. People would write me and say “thanks for the course, I really liked it, but you didn’t charge enough; could I pay for another person to take this course?” I was really surprised at first. People are so generous, and maybe it shouldn’t surprise me, but I was caught off guard. After this happened enough times, we decided to create the We Hack Purple Diversity Scholarship. We have already enrolled over 30 people in the Application Security Foundations program, and we have tentative pledges for over 30 more. It has absolutely humbled me to have so many individuals and companies support a cause so close to my heart, diversity in tech. Thank you to everyone who has been a part of this effort.

For year two we are planning to release courses on the topics of secure coding and DevSecOps, sponsorship of and participation in several community events, more WHP content, live book discussions, and so much more. Thank you for being a part of We Hack Purple!