(Over)Communication With Your Security Champions

As mentioned in the previous article (Recognizing & Rewarding Your Security Champions), the most common reason for failure of a security champions program is the security team losing steam, and/or the champions losing interest. In this article, we will discuss a few ways to avoid this. The best way? Communication.

https://youtu.be/nm1MpTuSNyI

To start off with, pace yourself. Often when I speak to security teams who have a failed program, they tell me how they started off very strong. “We gave them 2 different trainings, 2 workshops, and 3 lunch and learns, all in the first three months. Then we were exhausted. We haven’t done anything with them in over a year.” This scenario is far too common.

To pace yourself, I suggest meeting with each champion once a month, for 30 minutes. Then hold one lunch & learn and send one email to the champions each month. This might not sound like much, but remember: they are already doing a full-time job for your organization.

In my 1:1 meetings I like to ask the following questions (adapted from Ray Leblanc’s Security Champions article on the Hella Secure blog):

  • What are you working on?
  • What are you going to work on next?
  • Do you need any help?

Each of these questions is open-ended, in the hope that it will prompt a meaningful conversation. I usually take notes during the meeting, then send them afterward to both of us, with any action items for either of us highlighted in bold. (Note: I’ve used this technique to get many of my previous bosses to do things for me. Set a reminder for a week later, then reply-all to that email chain and ask: “Any updates on these action items?” It works like a charm!)

In your lunch and learn (which does not need to be at lunch time, or involve food), teach them something you want them to know. Do not teach them things they do not need to know, unless they asked for that topic specifically. During this session you or a teammate can teach, or you can show them a training video you like, or even a recording of a conference talk that really hit home for you. If you show them something pre-recorded, make sure you have watched it first; you don’t want to waste anyone’s time with death-by-PowerPoint. The more fun you can make these sessions, the better. If you’re up for it, invite all of the developers and let everyone learn something new!


Ideas for lunch and learn topics:

  • The specifics on how to apply policies, standards and guidelines. This could be a secure coding workshop, or a threat modelling session.
  • Talks about the top vulnerabilities that you are seeing in your own products, including the risks they pose to your specific business model.
  • Workshops on how to use the tools that your team wants them to be responsible for. Especially how to configure them, how to validate results, and where to find information on how to fix what they find.
  • If they are responsible for design or architecture, give them secure design training.
  • Tell them about a security incident your team had, and how it could have been prevented (assuming you are allowed to share this information).
  • Hold a consultation on the new policy, standard, or guideline your team is considering publishing. Ask for their feedback, then adjust your documents accordingly.
  • Remember to take attendance (for metrics) and note any questions you need to follow up on.

The monthly email:

Sometimes you just don’t have time to hold a lunch and learn or 1:1s, but you can still send the monthly email. The monthly email lets the security champions know what’s going on, and that they still matter to you; the program is visibly still running, because you sent an email. If you don’t send this email, and you haven’t touched base in any other way, you leave a space where your program may start to disappear.

The monthly email does not need to be fancy and doesn’t need to say a lot. Generally, the monthly email says:

  • What events are happening this month at your org (lunch and learn, all staff, any other meeting they should know about)
  • Any updates your team has (new policy, new tool, project updates, etc)
  • Anything interesting from the news that they may find valuable
  • Any local security events they may be interested in
  • Any podcasts, videos, blog posts or any other media that is relevant and you feel relates to them, about security (of course)

I live in Canada, and Canada is a country of immigrants. This means we have many, many different religions represented in most workplaces. In December there’s Hanukkah, Christmas, and more, and people often take time off for these special holidays. This means having a large meeting in December is darn-near impossible. This is exactly the type of situation where you just send the monthly email! It could say something like the following:

Hello Security Champions!

As it is December and many of you will be off celebrating various holidays, we are not going to have any events this month. We also want to wish you happy holidays, and we hope you enjoy all the snow we got this past weekend!

In January we are going to boot the Champions program back up with a lunch and learn on XSS. As some of you are aware, we’ve found it in about 1/3 of our custom apps, and we want to stomp it out in the new year (with your help, of course!). An invitation will arrive later this week.

In the meantime, please check out this XSS Deep Dive by Tanya Janca. We’re going to cover this topic a bit differently than she does, but it gives you a good idea of what we are up against.

Have a great December folks!

Sincerely,

The Security Team

My hope from this blog post is that you remember to continue to communicate with your champions. Don’t let your program slip, it will disappear faster than you think. When in doubt, send them an email and check in. Up next, we will discuss Metrics.

Recognizing & Rewarding Security Champions

If you’ve ever read the book The 5 Love Languages, or articles summarizing the 5 love languages, then you are aware that there are predictable patterns of how people respond to various acts of kindness. Someone’s “love language” is the specific type of kindness that they are most affected by. For example, someone for whom their love language is “words of affirmation” would respond very well to receiving a glowing performance review, a compliment on a new article of clothing, or accolades from their colleagues about a project they worked on.

The previous article in this series is Teaching Security Champions.

You may be wondering at this point if you accidentally clicked on an article from a women’s fashion magazine, not a technical article from We Hack Purple. But please have a bit more faith, and read on.

The 5 love languages are:

  1. Gifts
  2. Words of Affirmation
  3. Physical Affection
  4. Spending Quality Time
  5. Acts of Service
Security Champions at work!

When we create a security champions program, it’s very important that we ensure the champions feel appreciated. We don’t want them to feel squished into doing two jobs for only one paycheck. One of the biggest challenges that security teams face when creating a champions program is having it fall apart after the first few months, either because the security team loses steam or the champions lose interest. We need them to be very aware of our gratitude, and interested in the program itself, for them to continue to want to serve the security team’s agenda.

As you likely already figured out, not all the love languages listed above are work appropriate. We can’t run around giving hugs or holding hands with other employees. That said, we can adapt most of them to work situations, so that we can show the champions they matter to us, in appropriate ways that support our security program.

Below is a non-exhaustive list of several ideas to make your champions feel as valuable as you know they are for your program.

  1. (Security Related) Gifts
  • Physical or digital security-related gifts – books, videos, training, CTFs, perhaps a copy of Alice and Bob Learn AppSec?
  • Create a Certificate to put on their wall.
  • Stickers, posters or any other decoration that is security focused.
  • Tickets to a conference or training.
  2. Words of Affirmation
  • Make sure to put a note in their performance review about them being a champion.
  • Tell their boss every time they do something that makes a big difference.
  • Send them an email when they do something big; let them know that YOU saw.
  • Recognize them in front of their peers (special virtual background, star by their name in Slack, etc.)
  • Digital badges for signature blocks.
  3. Physical Affection
  • High Fives are the only recommended form of physical affection that you should show another employee. High fives signal success, and your approval of whatever they just did.
    • Only do this if you are confident that the employee is comfortable. Please be mindful that some religions and cultures do not allow those of the opposite sex to touch each other, and be respectful if this applies. Never push physical touching at work.
  4. Spending Quality Time
  • Giving them your time is a reward. When you do, give them your undivided attention (put your phone away), and turn your body towards them.
  • Let them see a new tool first, give them a “sneak preview” ahead of everyone else.
  • Let them help you make decisions. Ask for advice from them and feedback, then take it seriously.
  • Invite them to attend security events with you.
  • Whenever you meet with them, this is quality time. Ask them: What are you working on? What are you going to work on next? Do you need any help?
  5. Acts of Service
  • Help them with more than just security. Are you good at design? Help them with it! Are you great at presentations? Offer to let them practice in front of you. You don’t need to do this very often, just once can make a huge impression.
  • Make introductions, where appropriate. “Oh yeah, Chris from QA uses that tool, I’ll introduce you so you can learn.”
  • Find answers they need to security questions and problems. Never leave them hanging.

When people feel appreciated and valued at work, they work harder (many studies show this to be true). Your champions already have full-time jobs on other teams; they are going above and beyond for you. Let them know that you see it, through your actions, not just your words.

In the next article we will discuss communication with your champions!

Teaching Security Champions

In the previous article, we talked about how to engage your champions. We want them interested, revved up and ready to go.

You are in a room full of brand-new security champions and they are itching to learn all about ‘cyber’, what do you do? What do you teach them? How do you impress them?

Only teach them what they need to know. Nothing more.

As someone who creates security training professionally, I have to say, I’ve seen a LOT of filler. Extra content that just does not need to be there. Software developers do not need to know the history of Diffie-Hellman, or the difference between symmetric and asymmetric encryption, unless they are building encryption software. So don’t try to teach it to them unless they have a keen interest and have asked about it.

What they really DO need to know is:

What you need, expect and want from them, as champions.

You should define the goals of your program and share them with your champions. Share your plans for them, as much as you can. Give them timelines, training information or anything else you have. You need to make clear what you are expecting, or you may not get it.

 

Technical topics for teaching your security champions:

  • Formal training on secure coding, with labs!
  • Threat modelling
  • Secure architecture (whiteboarding)
  • Code Review
  • How to fix the bugs they find
  • Repeat yearly as a minimum

 

Topics specific to your organization:

  • Which policies, standards and guidelines apply to them
  • Help them create missing guidelines
  • Teach them how to be compliant, help them get there
  • Their role during an incident
  • Job shadowing

Hold consultations to let them provide input on the policies that will affect them. Trust me, their feedback will be priceless AND it will make them feel heard.

 

The last topic you need to ensure they learn is tooling. If you expect them to use a tool, you need to show them how to use it, how to install and configure it, what the output means, and how to validate the results. It is also your job to either help them pick excellent tools or involve them when you are choosing tools for them.

 

In the next article we are going to discuss how to Recognize Your Champions.

Pushing Left, Like a Boss – Part 10: Special AppSec Activities and Situations

Special Situations

Not all application security programs are the same, and not all security needs are equal. In this article we will compare security for a small family business, a government and Apple.

Think about this: not only does Apple make two popular consumer operating systems (macOS for desktop and laptop computers and iOS for phones and iPads), they also make a popular cloud platform (iCloud), a popular programming IDE (Xcode), hardware for several types of laptops, phones, tablets, watches, and so, so much more. They also build physical security features directly into their products. It wasn’t until I decided to re-publish this article that I realized just how many things depend on Apple. It’s staggering.

What this means is that Apple has very special security needs. Their operating system, cloud and other products that we depend on must be secure. They must go far beyond the average company in their efforts to ensure this, and they do.

 

 

Tanya Janca teaching. See that computer to my left? It's an Apple. I own 3 laptops that run macOS, and I even used to work at an Apple repair shop back in the day. I learned to program on an Apple computer.

But the average company is not Apple. Which means they don’t need to take the same precautions. As a second example, let’s take “Alice’s Flowers”.

Alice has a website for her floral shop, which delivers flowers in her small town. It shows basic info, such as where the shop is located, its phone number, and when it is open. It also has a link to her Shopify shop for online orders (meaning she does not need to secure that part, Shopify does; Alice was smart to outsource the hard part, which is called risk transference). The rest of Alice’s website, in the big scheme of things, is not very important. If her site went down for a day or two, it would be inconvenient, but it would not be the end of the world.

Most companies fall somewhere between Apple and “Alice’s Flowers” in regard to their risk. It has been my experience that many places, when I look at where they spend their security dollars, seem very confused as to where they sit on this scale. This is not an attempt to make fun of or insult any company; I think it’s a sign of our times that not all companies are receiving good (and unbiased) advice.

The AppSec activities listed below do not apply to all IT shops. I invite you, reader, to imagine where your workplace sits on this scale. Please keep your place on the scale in mind as you read the rest of these examples, to help you decide whether any of these activities may apply to your place of work.

Special AppSec Activities

Responsible Disclosure

Responsible Disclosure (also known as Coordinated Disclosure) is a process where someone finds a security problem in a product or site and reports it to the company, and the company 1) does not sue them, 2) thanks them, 3) sometimes offers a token of appreciation (though generally not money), and/or 4) sometimes publicly acknowledges the person who reported it, or formally reports the bug as a CVE to MITRE.

Last week (when I wrote the first version of this article, in 2019) I used a government website, saw a bug, figured out who to talk to (the Canadian Government doesn’t have a disclosure process of any kind), and emailed it to them. They said thanks, I offered ideas on how to fix it, and they were great. This is one version of responsible disclosure. See how I was responsible?

Some places have a formal program, whereby security researchers (or normal users like me) can report issues to them in a secure manner (me sending details over Twitter and then an email to a government employee was not very ‘secure’). If the issue is in a well-known or widely used product, they may file a CVE (Common Vulnerabilities and Exposures entry) so that others are aware that that version of the product is known to be insecure. But also for credit; having your name on a CVE is pretty cool.

The industry standard for fixing such things is (theoretically) 90 days, but not every company complies, and not every person who reports such an issue is so patient. When you hear that someone “dropped 0-day”, it means they released the information about a vulnerability onto the internet while there is no known patch for it (such a vulnerability is also known as a ‘zero day’). This is often done to pressure a company into fixing the issue; if one person found it, others might have found it too (and they may be exploiting it in the wild, causing people problems, and that’s no good).

Note: “dropping 0-day” is NOT a part of responsible disclosure.

Bug Bounties

Katie Moussouris basically invented bug bounties as we know them today; she speaks on this topic often and is a wealth of knowledge on this and many other security topics. Since then, several large tech companies have started their own programs, including Shopify, Apple, and Netflix.

The invention of bug bounties spawned an entirely new industry: dedicated security researchers or “bug hunters”, as well as large companies that sell these people’s services on a pay-per-find basis.

The thing about working in a bounty program is that you only get paid if you find something, if no one else found it first, if your finding is in scope, and if your report actually makes sense. Submitting things that aren’t in scope is a great way to get yourself banned (such as taking over the accounts of employees at the company you are supposed to be finding bugs for; don’t do that). This means that many, many bug hunters make little-to-no money, and a small few do quite well. I’ve heard people call this “a gig economy”, meaning no job security, benefits, or anything to fall back on if you have a bad month.

The economics for the researchers aside, this is an advanced AppSec activity. I’ve been asked many times “Should we do a bounty?” to which I have responded “How is your disclosure program going? Oh, you don’t have one, okay. Ummm, how is your AppSec program going? Oh, you don’t really have a formal program you just hire a pentester from time-to-time, okay. Hmmmm, do you have any security-savvy developers that could fix the bugs the bounty finds? No, okay, ummmmm. So you already know that you should DEFINITELY NOT DO A BOUNTY, right? Okay, yeah, thanks.”

I realize that doing a bounty program is “hot” right now, and that the companies that sell bounty programs are happy to tell you that it’s good value for your money to start no matter where you are at in your AppSec program. I disagree. I often sugar-coat things in my blog, but for this I can’t. If you don’t already have an AppSec program and you do a bug bounty program you are setting your money on fire. If you want to hear from an expert on the topic though, you should watch Katie Moussouris explain it much more gracefully than I, here: Bug Bounty Botox Versus Natural Beauty.

Capture The Flag, Cyber Ranges, and other forms of Gamification

Capture the Flag contests, also known as CTFs, are not a bunch of security people running around in a field with flags; it’s a contest made up of security puzzles. Sometimes it is vulnerable systems you need to exploit, sometimes it’s intentional puzzles for you to solve. When you ‘solve’ one of the challenges you get a ‘flag’, which means points. The person or team at the end with the most points wins.

Cyber Ranges and other gamification systems are similar: you play, solve security problems, and learn at the same time.

Why do security professionals sit around playing games and solving puzzles? Because it is a great way to learn. And it’s FUN! Also: if you find a vulnerability, and it’s a mistake you have made before in your own code, you will never, ever make that mistake again. Trust me on that one.

Inviting your developers to participate in security gamification can be a great team building exercise and it can teach them many of the lessons you wish they knew!

Snowflakes

There are many more special situations that demand interesting and exciting AppSec activities, such as chaos engineering, red teaming, and so much more. You can read a lot more about them in my book, Alice and Bob Learn Application Security.

Thank you

Thank you for reading my first blog series; this is the end. When I started my blog I honestly wasn’t sure anyone would read it, but I wanted to share all of the things I had learned so I went for it. Thank you for coming on this journey with me, I hope you follow me on many more.

Pushing Left, Like a Boss – Part 9: An AppSec Program

In my talk that this blog series is based on, “Pushing Left, Like a Boss”, I detailed what I felt an AppSec program should and could be. Since then, I’ve learned a lot and now see that there are quite a few activities that you can do, but it’s the goals and the outcomes that actually matter. Our industry has also changed quite a bit since I wrote that talk (written in 2016, first seen in public 2017, this article first published in 2019 and republished here in 2021).

My first international talk, at AppSec EU, 2017. It feels so long ago.

My previous thoughts on what a basic AppSec Program should be:

For bonus items I had listed:

And for “extra special situations” I recommended the following (which will be explained in the next blog post):

  • Bug Bounty Programs
  • Capture the Flag Contests (CTFs)
  • Red Team exercises

Anne Gauthier of OWASP Montreal, myself (pre-WeHackPurple) and Nancy Gariché of Secure That Cert and OWASP DevSlop. In the background is Christian Folini of the OWASP CRS project. I had no idea how important these people would become to me at the time.

I’m going to preface this next part with two thoughts.

You can’t do security “right” if you aren’t doing IT “right”. If you can’t publish fixes for a year or more because your processes are broken, if you are underwater in technical debt, or if you already have dysfunction within your IT shop, this is going to be very hard. I suggest modernizing your systems and your entire IT team as you modernize your security approaches, hand-in-hand. Don’t give up, you can do this! Take one item, aim for it, and continue on until you’re doing well.

If you have poor communications between the security team and the rest of IT this will be another hurdle that you have to work on. Culture plays a big part in ensuring your efforts are successful. I’ve released a bunch of videos on my YouTube channel on this topic, start with this one.

My new vision for an AppSec program:

  • A complete picture of all of your apps. Bonus: alerting, monitoring and logging of those apps.
  • Capability to find vulnerabilities in written code, running code, and 3rd party code. Bonus: the ability to quickly release fixes for said issues.
  • The knowledge to fix the vulnerabilities that you have found. Bonus: eliminating entire bug classes.
  • Education and reference materials for developers about security. Bonus: an advocacy program, a security champion program, and repeated reinforcement of positive security culture.
  • Providing developers with security tools to help them do better. Bonus: writing your own tools or libraries.
  • Having one or more security activities during each phase of your SDLC. Bonus: having security sprints and/or using the partnership model (assigning and/or embedding a security person to/within a project team).
  • Implementing useful and effective application security tooling. Bonus: automating as much as possible to avoid errors and toil.
  • Having a trained incident response team that understands AppSec. Bonus: implementing tools to prevent and/or detect application security incidents (can be homemade), providing job-specific security training to all of IT, including what to do during an incident.
  • Continuously improve your program based on metrics, experimentation and feedback from any and all stakeholders. All feedback is important.

I’d love to hear your thoughts on my new application security ‘prescription’. Please comment below.

Up next in this series we will discuss the AppSec “extras” and special AppSec programs; I will discuss all the things in this article that I have not previously defined for you.

Pushing Left, Like a Boss – Part 8: Testing

Testing can happen as soon as you have something to test.

Suggestion: provide developers with security scanning software (such as OWASP ZAP), teach them to use it, and ask them to fix everything it finds before sending their work to QA.

You can add automated security testing into your pipeline, specifically:
  • VA scanning of infrastructure (missing patches/bad config - this applies to containers and VMs too, though you often use different tools for them)
  • 3rd party components and libraries for known vulnerabilities (SCA)
  • Dynamic Application Security Testing (DAST) - only do a passive scan so that you don’t make the pipeline too slow, or use a HAR file to control which parts are tested and which are not.
  • Static Application Security Testing (SAST) - do this carefully, it can be incredibly slow. Usually people only scan the delta in a pipeline (the newly changed code), and do the rest outside of a pipeline.
  • Security Hygiene - verify your encryption settings, that you are using appropriate security headers, that your cookie settings are good, that HTTPS is forced, etc. (a minimal sketch of such a check follows this list)
  • Anything else you can think of, as long as it’s fast. If you slow the pipeline down a lot you will lose friends in the Dev team.
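
As an example of a fast, self-contained pipeline step, here is a minimal sketch of a security hygiene check in Python. It assumes the requests library; the target URL, the header list, and the cookie checks are illustrative placeholders, not a complete or authoritative test.

import sys
import requests

TARGET = "https://staging.example.com"  # hypothetical environment URL

REQUIRED_HEADERS = [
    "Strict-Transport-Security",  # tells browsers to stick to HTTPS
    "X-Content-Type-Options",     # blocks MIME sniffing
    "Content-Security-Policy",    # restricts where content can load from
]

def main() -> int:
    failures = []

    # 1. Plain HTTP should redirect (or refuse to serve) rather than answer.
    plain = requests.get(TARGET.replace("https://", "http://", 1),
                         allow_redirects=False, timeout=10)
    if plain.status_code not in (301, 302, 307, 308):
        failures.append(f"HTTP not redirected (got {plain.status_code})")

    # 2. Required security headers should be present on the HTTPS response.
    resp = requests.get(TARGET, timeout=10)
    for header in REQUIRED_HEADERS:
        if header not in resp.headers:
            failures.append(f"missing header: {header}")

    # 3. Any cookies set should be flagged Secure and HttpOnly.
    for cookie in resp.cookies:
        if not cookie.secure:
            failures.append(f"cookie '{cookie.name}' missing Secure flag")
        if not cookie.has_nonstandard_attr("HttpOnly"):
            failures.append(f"cookie '{cookie.name}' missing HttpOnly flag")

    for failure in failures:
        print(f"FAIL: {failure}")
    return 1 if failures else 0  # non-zero exit code fails the pipeline

if __name__ == "__main__":
    sys.exit(main())

Because it is only a handful of HTTP requests, a check like this adds seconds, not minutes, to a pipeline, which helps you keep your friends on the Dev team.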

Q&A at #DevSecCon Seattle, 2019

During the testing phase I suggest doing a proper Vulnerability/Security Assessment (VA) or PenTest (if you need management’s attention), but early enough that if you find something you can fix it before it’s published. More ideas on this:

  • Repurpose unit tests into security regression tests: for each test, create an opposite test that verifies the app can handle poorly formed or malicious input (see the sketch after this list)
  • For each result in the Security Assessment that you performed create a unit test that will ensure that bug does not re-appear
  • Ensure developers run and pass all unit tests before even considering pushing to the pipeline
  • Perform all the same testing that you normally would, stress and performance testing, user acceptance testing (hopefully you started with AB testing earlier in the process), and anything else you would normally do.
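
Here is a sketch of what such an “opposite” test could look like, using Python and pytest. The validate_username function is a hypothetical stand-in for whatever input handling your app actually has.

import re
import pytest

def validate_username(value: str) -> bool:
    # Hypothetical allow-list validator: 3-30 letters, digits, or underscores.
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,30}", value))

def test_accepts_wellformed_input():
    # The ordinary "happy path" unit test.
    assert validate_username("alice_smith")

@pytest.mark.parametrize("bad_input", [
    "",                               # empty
    "a" * 10_000,                     # absurdly long
    "<script>alert(1)</script>",      # XSS probe
    "alice'; DROP TABLE users; --",   # SQL injection probe
    "../../etc/passwd",               # path traversal probe
])
def test_rejects_malicious_input(bad_input):
    # The "opposite" test: the same validator must refuse hostile input.
    assert not validate_username(bad_input)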

Penetration testing is an authorized simulated attack on a computer system, performed to evaluate the security of the system. The idea is to find out how far an attacker could get. This differs from a security assessment or vulnerability assessment in that the testers prioritize exploiting the vulnerabilities they find, rather than just reporting them. If you want to shock management and get some buy-in, a PenTest is the way to go. But if you just want to find the things wrong with your app, and ensure lower risk to your systems, I would recommend a security/vulnerability assessment instead. It depends on your situation.

Up next in this series we will discuss what a formal AppSec program should include, followed by AppSec “extras” and special AppSec programs, which will end this series.

Pushing Left, Like a Boss – Part 7: Code Review and Static Code Analysis

This article is about secure code review and Static Application Security Testing (SAST). Static analysis is a highly valuable activity which can find a lot of security problems, far before you get to the testing or release stages, potentially saving both time and money.

Note: SCA is Software Composition Analysis, verifying that your dependencies are not known to be vulnerable. I have heard many say "static code analysis" when referring to code review/SAST tools, shortening it to SCA for simplicity. We will not do that; here, SCA will only ever refer to software composition analysis.

When application security folks say ‘static’ analysis, we mean that we will look at written code, as opposed to ‘dynamic’, which means when your code is running on a web server.

Since I wrote this article a few years ago, I have had a chance to do more in the code review space and spend some time working with SAST tools. Although my attention span is short, and I can be impatient at times (such as, for example, when I am awake), I can now spot several types of problems fairly easily. If you had asked me a few years ago if I would ever find code review pleasurable, I would have laughed, but now I find validating SAST results rather satisfying. It’s funny how much our opinions can change over time.

Code review can happen both during the coding phase and during the testing phase of the system development life cycle.

There are two options for doing code review: manually or with a tool. There are pros and cons to each, and using both will get you the best results.

When reviewing code manually for security, you don’t read every line of code; you review the security controls to ensure they are in all the places they should be and that they are implemented correctly. Although I have not completed hundreds of secure code reviews in my career, I do recall the delight of discovering that there was no input validation on a data field, or that an app was using inline SQL instead of stored procedures, both of which are big no-nos. It was so obvious once I knew where to look and what to look for. However, most code reviews are not so simple, and many bugs are difficult or nearly impossible to spot with only the naked eye.

Note: when I say “only review the security controls” I mean things like a login, input validation, authorization, authentication, integration points, etc. Anything that has to do with the security of the app.
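
To illustrate the inline SQL example above, here is a minimal Python sketch (using sqlite3; the table and the hostile input are made up for demonstration) showing what a reviewer would flag on sight, and the parameterized fix:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

name = "alice'; DROP TABLE users; --"  # hostile input from a form field

# BAD: inline SQL built with string formatting; the attacker controls the
# structure of the query itself. A manual reviewer flags this immediately.
#   query = f"SELECT email FROM users WHERE name = '{name}'"
#   conn.execute(query)

# GOOD: a parameterized query; the driver treats the value purely as data.
# (Stored procedures achieve the same separation of code and data.)
rows = conn.execute("SELECT email FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] - no match found, and no injection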

When using a tool for code review you would use something called a ‘static code analyzer’ or a ‘SAST’ (Static Application Security Testing) tool. This special kind of software parses your code into areas of concern and attempts to follow every possible outcome. It takes a lot of processing power and can take hours or even days to complete. It then creates a report with approximately 60-80% false positives.

Note: since I first wrote this article, new types of static code analysis tools have been created that allow very fancy grepping (regex searching) of the code base, with templates to help you find problematic code. How cool is that!

I know what you are thinking right now: 80% false positives!?!?! Why would anyone want to use a tool like that? Let me explain.

The key to reading the results of a SAST tool is that the items it lists are not answers; they are hints about where to look for problems, which the code reviewer (hopefully a security expert) can investigate. This means that instead of reading 20,000 lines of code, the code reviewer uses the tool, it finds 200 ‘clues’, and from those 200 ‘clues’ they end up finding 20 real bugs. Many of those bugs they could not have found with just their eyes, because SAST tools can go several layers deep into the code in a way humans just can’t. (A toy sketch of the ‘fancy grepping’ style of tool follows.)
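
As a toy illustration of that "grep with templates" style of static analysis, here is a tiny Python sketch. The patterns and messages are made up for demonstration; real tools ship with far more sophisticated rules, and every hit is a hint to investigate, not a verdict.

import re
import sys
from pathlib import Path

RULES = {
    r"\beval\s*\(": "eval() on untrusted input can execute arbitrary code",
    r"\bpickle\.loads?\s*\(": "unpickling untrusted data can execute code",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"(password|secret|api_key)\s*=\s*[\"']\w+[\"']": "possible hardcoded secret",
}

def scan(path: Path) -> int:
    findings = 0
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, message in RULES.items():
            if re.search(pattern, line, re.IGNORECASE):
                print(f"{path}:{lineno}: {message}")
                findings += 1
    return findings

if __name__ == "__main__":
    total = sum(scan(p) for p in Path(sys.argv[1]).rglob("*.py"))
    print(f"{total} potential issue(s) - hints to investigate, not verdicts")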

When performing code review, it is possible to find all sorts of other problems with your application, not just security issues. During one of my projects the code reviewer found several memory leaks. When we fixed them, our application became lightning fast, which made our project team look amazing. There is so much more than security problems that a good code reviewer can find; it is definitely a worthwhile task if you want to build truly resilient and secure software.

My best 'superwoman' pose at #DevSecCon Seattle, Sept 2019

Although we already discussed this in Part 5.2 Using Safe Dependencies, I’m going to bring it up again: everyone needs to verify the security of their 3rd party code/components/frameworks/libraries/whatever-you-want-to-call-the-code-in-your-app-that-you-didn’t-write. You must verify that these 3rd party components are not known to be vulnerable. When I say ‘known to be vulnerable’, I mean there is information currently available on the internet documenting what the problem is and/or how to exploit it.

Many organizations and industry spokespeople create a lot of fear, uncertainty and doubt (FUD) around zero days (vulnerabilities in popular software for which there is no existing patch), advanced persistent threat (APT - someone living on your network for an extended period of time, spying on you) or very advanced attackers, such as nation states. In reality, almost all serious security incidents are a result of our industry not keeping up with the basics; missing patches, people with admin privileges clicking on a phishing email while logged in, and well-known (and therefore preventable) software vulnerabilities, such as using code with known vulnerabilities in it. Essentially; basic security hygiene.

Bonus resource: My friend Paul Ionescu created a code review series.

Up next we will talk about the Testing Phase of the SDLC!

Pushing Left, Like a Boss – Part 6: Threat Modelling

The last security-related part of the design phase of the System Development Life Cycle (SDLC) that we will talk about in this blog is threat modelling, affectionately known as “evil brainstorming”.

Threat modelling happens during the design phase of the system development lifecycle.

The purpose of threat modelling is to discuss the possible threats to your system, then do your best to mitigate them, and failing that, to manage or accept the risks. There are multiple formalized methods for doing this, which I will not discuss here; each one already has its own book, advocate, or dedicated blog, likely doing a better job detailing it than I ever could. In fact, my friend Adam Shostack wrote an amazing book about it. Check it out!

That said, dear reader, I want you to understand why threat modelling is important, who needs to do it, as well as when and how you can start.

In order to create a threat model, a representative from each project stakeholder group needs to be present: someone from the business or representing the customer, a security rep, and someone from the development team. Yes, someone from the tech team needs to be there; they often have the most frightening threat ideas!

Then you discuss what the risks are to the system. “What keeps you up at night?”, “If you were going to attack your app, how would you do it?”, “What threat actors should we be aware of? Should we prepare for?”, etc. You want to look at the system from the viewpoint of an attacker, what could go wrong? How could the system be misused? How can we ensure we protect the user (including from us)? What is the worst-case scenario? This session can be incredibly formal (creating attack trees, for instance), or quite informal (which is how I would suggest you start, if you have never done one before). You can read about two informal threat modelling sessions I documented on my blog; Serverless with Bryan Hughes and Robots with Jesse Hones.

Once you have a list of concerns, you will need to evaluate which ones are more (or less) likely and which may require security testing of your app (to see if it is vulnerable or not). You also need to evaluate which ones matter more or less; not all risks are created equal. You may be surprised (and frightened) by the justifications for the value of each risk; recently I had to deliver the news that the potential damage was “absolutely catastrophic”. Even though the risk itself was ‘fairly unlikely”, the project team changed their course of action immediately once they understood the potential long-term ramifications.

When you have your list of risks, and how much each one matters, you need to plan. Will you mitigate (fix/remove) some of the issues? Will you manage some, by keeping an eye on them to see if they get worse? Will you accept some of the risks? Perhaps some are highly unlikely or pose only a tiny threat? Why spend funds on something that is not worrisome?
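
If it helps to make this concrete, here is a hypothetical, minimal "risk register" sketched in Python for recording those decisions. The field names and the 1-to-5 scales are illustrative placeholders, not any formal methodology.

from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (trivial) .. 5 (catastrophic)
    decision: str     # "mitigate", "manage", or "accept"

    @property
    def risk(self) -> int:
        # A simple likelihood-times-impact score, used only for ranking.
        return self.likelihood * self.impact

register = [
    Threat("SQL injection in search form", 4, 5, "mitigate"),
    Threat("Insider exports customer list", 2, 4, "manage"),
    Threat("Defacement of marketing page", 2, 1, "accept"),
]

# Review the highest risks first; big decisions need management sign-off.
for threat in sorted(register, key=lambda t: t.risk, reverse=True):
    print(f"[risk={threat.risk:>2}] {threat.decision:>8}: {threat.description}")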

The entire process should be documented, especially the decisions you make at the end, with management sign-off. These decisions must be made with management, or someone who has the authority to make large decisions like this. A software developer cannot “accept the risk”, nor likely can you as a security engineer; it is probable that you will need a C-level executive to accept any risk above the level of “medium” or “low”.

If you are doing an iterative design, you will need to do several shorter threat modelling sessions, for new features or large changes to existing features. If you are doing a large waterfall-style approach, one thorough session should be enough (assuming no large changes afterward). You will need to decide this for yourself and your org.

I threat model and/or do design reviews all the time, and I really enjoy it. I genuinely feel that the experience I’ve had threat modelling has made me a much better software developer, and likely a slightly more thoughtful person.

If you like what you have read here and want to delve in deeper with threat modelling, I suggest first reading up on STRIDE, and of course read Alice and Bob! After that, using your favorite search engine, look up “OWASP Threat Modelling”, STRIDE, and PASTA, as a start down your new path.

Pushing Left, Like a Boss, Part 5.14 Secure Coding Summary

This article will summarize the previous articles in Part 5 of this series, and is hopefully something that you can use for your organization as a start for a secure coding guideline for your developers.

Secure Coding Guideline

In order to ensure that your developers are following these guidelines, code review is recommended.
Tanya Janca, hugging a giant tree

I’d like to thank all of my professional mentors and the OWASP volunteers who have taught me about application security; that is where and how I learned the majority of what I know on this topic. Without the OWASP community, and its free and vendor-neutral teachings, many of us would not be where we are today. The OWASP community has my unwavering and unending gratitude and support. Thank you.

Special thanks to the following people who have helped me directly in learning these concepts, and so much more: Dominique Righetto, Jim Manico, Sherif Koussa, Adrien de Beaupre, Sonny Wear, Nicole Becher, Chenxi Wang, Zane Lackey and Kim Tremblay. I’d never have gotten this far without them.

If you like this blog series, you will love the OWASP Cheat Sheet project! My favorite OWASP project of all time. Check it out!

Up next, in part 6, we will discuss threat modelling, the last security-related part of the design phase of the SDLC.

Do you have any more secure coding principles that you would like to add? Guidance you’d like to share? Please add it to the comments below!

Pushing Left, Like a Boss — Part 5.13 — HTTPS only

HTTPS only — for every app, everyone, always.

Temple in South Korea, 2019. Photo Credit: Bryan Hughes

Now that encryption is fast, and free, and we know the risks of not using it, there is literally no excuse not to use HTTPS only for every application on the Internet. Literally every application, even for static pages that contain no sensitive information. For everyone (there is no class of user that does not need protection on the internet). Always (there is no time limit, and you can auto-renew your certificates; you don’t even need to really think about it).

Every public website and web application (including APIs) should force the use of HTTPS (and disallow or redirect connections using HTTP). This can be done using security headers in your code or forced on the server.
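
As an illustration, here is a minimal sketch using Flask (one framework among many; the route and the max-age value are just examples) that redirects HTTP to HTTPS and sets the HSTS header on every response:

from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to its HTTPS equivalent.
    # Note: behind a load balancer you may need X-Forwarded-Proto instead.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def set_hsts(response):
    # Tell browsers to use HTTPS only, for the next year, on all subdomains.
    response.headers["Strict-Transport-Security"] = (
        "max-age=31536000; includeSubDomains"
    )
    return response

@app.route("/")
def index():
    return "Hello over HTTPS!"

if __name__ == "__main__":
    app.run()  # local experimentation only; real TLS termination is server config

Many web servers and cloud platforms can do the same thing with a few lines of configuration; the point is that the redirect and the header exist, not where you implement them.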

There is no reasonable excuse for not using HTTPS only for public-facing applications. Feel free to argue with me in the comments. 😀

Up next we will summarize “Part 5: secure coding” of this series.

Pushing Left, Like a Boss — Part 5.12 — Authentication (AuthN), Identity and Access Control

Note: much of this comes from the OWASP Cheat Sheet on Access Control, by Shruti Kulkarni, Adinath Raveendra Raj, Mennouchi Islam Azeddine and Jim Manico. And if not, it may come from one of the other offerings from the amazing OWASP Cheat Sheets Project. For more information on almost any AppSec topic, check out the project, it’s definitely worth your time!

B-Sides Vancouver, 2019

Let’s start with some definitions.

Authentication is ensuring that the user who is using the application is the actual person they purport to be. For instance, when I log into my webmail, it verifies that I am the one-and-only Tanya Janca that owns this account. Not a different person who is also named “Tanya Janca”, and not someone pretending to be me. The real, authentic, me; the person who owns the account.

Identity (digitally speaking) is a hardware- or software-based solution for proof of identity of citizens, users, or organizations. For example, in order to access benefits or services provided by government authorities, banks, or other companies in person, you must verify your identity, usually with a driver’s license, passport, or another physical document. If you are verifying your identity digitally (electronically), however, you must use a software- or hardware-based solution to prove your identity.

Access Control is allowing (or not allowing) users to access systems, features, data, etc., based on the permissions that the system has assigned to that user. For instance, perhaps you have access to the main parts of your building, but there is an electrical room to which you do not have access. Your badge will not get you in. This is access control, and it works the same way with software: granting or restricting access based on your role and/or identity within the system.

As usual, I recommend using the features as provided in your programming framework for AuthN, Identity and Access Management features. I also suggest strenuous testing of your implementation, because if someone breaks these security controls there shall be dire consequences.

General Rules of Authentication (AuthN)

  • Applications that use password-based authentication should follow the standards put forth in my book, Alice and Bob Learn Application Security, and/or the current NIST password standard. For example: do not force users to change their passwords often, allow very long passwords, do not force complexity, and allow and encourage the use of password managers.
  • The principle of least privilege is the practice of limiting access to the minimal level that will allow normal functioning. This principle should be applied not only to the users of web applications, but to the applications themselves, and as they are given access to databases, web services and other resources. For example, it is rare that an application requires a database user that is the database owner (DBO); generally, read/write or CRUD is enough.
  • Passwords will be salted and hashed (one-way), not encrypted (two-way), before storing; see the sketch after this list. Use a strong hashing algorithm (again, refer to my book, Alice and Bob Learn Application Security, or NIST if you are unsure).
  • Passwords will be encrypted in transit (HTTPS only).
  • Re-authentication will be performed for “Sensitive Features”. Sensitive features could include, but are not limited to; changing passwords or security questions, changing bank account information, deleting a user account, transferring large sums of money.
  • Measures must be taken to prevent or circumvent brute force attacks. Measures could be a maximum number of login attempts, requiring a captcha after 5 failed logins attempts or throttling (slowing down) the system to make a brute force attack more difficult.
  • Passwords must be masked (not echoed to the screen) while the user enters the password.
  • Validate that a user is authorized to access every new page and to perform every action. Ensure that this is applied using an approved list of users, not a block list of unapproved users. See blog post Pushing Left 5.1 for more information on input validation and approved lists.
  • Never assume that “hiding” a page or feature means that it is protected or ‘safe’, that is not enough.
  • Login failures and errors should be logged. Ensure you do not log sensitive information, such as the text of the attempted password, as it is likely only one or two characters off from the real password. Refer to blog post Pushing Left 5.9 for more information on what to log.
  • Brute force attempts (defined as 10 or more successive failed login attempts within 1 minute, or 100 or more failed attempts in one 24-hour period) must also be logged. If possible, the IP address of the attacker should be blocked and the account owner notified.
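
Here is a minimal sketch of the salt-and-hash rule above, using only the Python standard library (hashlib.scrypt); bcrypt or Argon2 via a well-vetted library are equally reasonable choices, and the cost parameters here are illustrative.

import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest    # store both; the hash cannot be reversed

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("guess1", salt, stored)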

I realize that this list is not exhaustive, as this is a huge topic that could easily fill an entire book. I invite you, my readers, to provide more thoughts, topics and ideas in the comments section below. Thank you for reading.

Up next in the ‘Pushing Left, Like a Boss’ series: HTTPS only.

 

Pushing Left, Like a Boss — Part 5.11 — Authorization (AuthZ)

Authorization (also known as ‘AuthZ’) is verifying that the user who is trying to perform an action within your application is allowed (is authorized/has permissions) to use that functionality. For instance, is the user an admin user? If so, allow them to view the admin page. If not, block access.

There are several different models used within our industry for authorization, with RBAC (Role-Based Access Control) being the most popular. RBAC means assigning people different roles in your system(s), just as people play different roles within your organization, and giving them access based on the role they are assigned.

For instance, meet Angela, a hypothetical software developer who is new to my project team (pictured below).

#WOCinTechChat: Angela the Software Developer

 

As a software developer, Angela is going to need access to all sorts of things: source control, perhaps permission to publish to the CI/CD pipeline, and various file systems.

Now look at the second image to see our project team: Sarah, Angela, and Jennifer. A project manager, a software developer, and a database administrator (DBA). They all play different roles within the project and our organization, so they need different sets of permissions and access. Angela the software developer should not need Database Owner (DBO) permissions, but the DBA definitely will. The project manager is unlikely to need access to the web server.

This is where Role-Based Access Control (RBAC) is extremely helpful: the system administrator can easily assign the proper roles to each of our project members, to ensure they are only authorized to access the things they need to get their jobs done (least privilege).

Project manager, software developer, and DBA. Photo Credit: #WOCinTechChat

When writing code for authorization within applications, use the features in your framework, and re-verify access for every feature and/or page of your application. Test your implementation thoroughly, with each role, for best results.
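
Below is a framework-agnostic sketch of that idea in Python. The require_role decorator, the role names, and the shape of current_user are hypothetical placeholders, not any particular framework's API; most web frameworks provide an equivalent mechanism.

import functools

class AuthorizationError(Exception):
    pass

def require_role(*allowed_roles):
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(current_user, *args, **kwargs):
            # Re-verify on EVERY call; never rely on a page being "hidden".
            if current_user.get("role") not in allowed_roles:
                raise AuthorizationError(f"{current_user.get('name')} denied")
            return handler(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def view_admin_page(current_user):
    return "admin dashboard"

@require_role("developer", "dba")
def deploy_to_pipeline(current_user):
    return "deployed"

angela = {"name": "Angela", "role": "developer"}
print(deploy_to_pipeline(angela))  # allowed: developers may deploy
try:
    view_admin_page(angela)        # blocked: Angela is not an admin
except AuthorizationError as err:
    print(f"blocked: {err}")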

This is something that software developers often get wrong, and it can cause huge issues, so please take care to do thorough testing.

For a deeper dive into this topic, check out the OWASP Cheat Sheet on Authorization Testing Automation, by Dominique Righetto.

Up next in the ‘Pushing Left, Like a Boss’ series: Authentication (AuthN), Identity and Access Control.