We Hack Purple, Acquired by Bright!

Big things are happening! Tanya’s friends over at Bright recently bought We Hack Purple, and we could not be more thrilled. Bright makes a developer-friendly DAST (dynamic application security testing) tool. Tanya has served on their advisory board for a few years and has gotten to know the team. Additionally, they just released a brand new tool for the Lucky framework (written in the Crystal programming language) that creates security-focused unit tests, automagically! Plus, they have more on the way!

Bright Security!

As part of the deal, all We Hack Purple courses are now FREE to We Hack Purple community members! The community has no hidden fees, so you can learn with no strings attached!

In the coming years, Tanya plans on working with Bright to create more content (which means more free courses!), running the community, speaking at conferences, and helping make Bright's products even more amazing!

Additionally, Tanya will start writing her next book, Alice and Bob Learn Secure Coding! Stay tuned for more updates; thank you all for your continued support!

The future is Bright!

Conclusion: Security Champions

In the previous article we talked about Metrics, and in this article, I will conclude this series on Building Security Champions.

A few more tips:

  • Start by defining the focus of your program and what is expected from champions. Be realistic; you can expect a maximum of 1-4 hours of effort from them per week.
  • If someone is taking a security course, but they are not on the security team, they may make a good champion. Reach out and introduce yourself.
  • If the mantra of the security team is “it’s my job to help you do your job, securely”, “you’re my customer” or “I’m here to serve you”, that is very attractive. If your team is known as ‘the ministry of NO!’, you will have difficulty attracting volunteers until you turn over a new leaf.
  • Record every group session and save them. Create an onboarding set of champion videos from these recordings, so you can auto-onboard new champions. Some of the videos can also be used to onboard new software developers or other IT staff.
  • Save all the videos so anyone who missed them can see them later. Offer up the list of videos to everyone at your organization, if appropriate.
  • Include a TTT (train the trainer) package so that your security champions can train their own teams as needed. For instance, if you want your champions to give training or talks to their own teams, have them follow your package. The package should contain 1) your slides, 2) demo information and instructions to set it up, 3) a video of you giving the talk/training, and 4) a video of you explaining what you are trying to get across for each slide and the entire demo, spoken as though you are teaching someone to give the talk on your behalf. For an example of this, see mine!
  • PS… Feel free to give these talks yourself, at your own workplace.

Lastly, don’t stop. Don’t give up. Perseverance is the thing that will make this program work. As your program continues it will grow, and the value that you receive from it will also grow, scaling upwards over time. You and your organization can do this; all it takes is dedication and time.

Please feel free to email me with questions, or even better, tell me about your success with your own security champions program!

Security Champions: Metrics & Data

The previous article in this series is Recognizing and Rewarding Your Security Champions.

If you’ve followed my conference talks, you likely saw my Security Metrics That Matter presentation, and understand that I absolutely love data. Here’s a general list of security metrics that matter, if you don’t want to read the whole article or watch the entire talk.

You may wonder, why are metrics important? The answer is twofold.

  1. We can use data and metrics to report up to our bosses and show them we are succeeding. It’s evidence that what we are doing is working, and how well it is working. You can then use that data again to ask for more resources (staff, tools, budget), a raise, or other changes.
  2. The second reason is so that we, ourselves, can improve. We want to improve our program, ourselves, and our results. When we measure our activities and their impacts, we can see which activities or methods produce better results. We can then use that information to change our approach, for the better.

It is important, however, that we do not become fooled by vanity metrics. Vanity metrics are numbers that make us look good, but don’t necessarily mean anything. My talk on this subject has several stories, but for now let’s just tell one.

I used to work somewhere where we all wrote blog posts, and we were measured on how many “clicks” we got. A colleague of mine got 10X the number of clicks that I did, and I asked him how he did it. He explained he got the most clicks on Reddit. I was unfamiliar with the platform but thought I would give it a try. First though, I asked for extra data: I wanted to know how long people were staying on our articles.

It turned out that people were staying on my articles approximately 1.5 minutes (which means they were reading the whole thing), while on his they were staying an average of 1.5 seconds (which means almost no one was reading the article; they were just clicking the link. This is commonly known as a “bounce”.) The purpose of our jobs was to write articles to help customers know how to use our products, which means a bounce wasn’t valuable.

Armed with this new information, we started comparing different platforms, and it turned out almost all the traffic from Reddit was ‘bounces’. I also noticed that my Twitter followers were significantly more likely to read an article than my LinkedIn followers, and LinkedIn got better results than Reddit. My colleague started focusing on sharing links on Twitter (he had more followers than I did), and I started trying to get more followers on the same platform. It turns out that measuring clicks was a vanity metric. The rest, as they say, is history.
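The lesson from the story can be sketched in a few lines of Python: measure dwell time, not just clicks. All the numbers, platforms, and the bounce threshold below are invented for illustration.

```python
# Hypothetical analytics data: seconds spent on the page, per visit.
visits = {
    "Reddit":   [1.2, 1.5, 1.8, 1.4],
    "Twitter":  [95.0, 110.0, 88.0],
    "LinkedIn": [40.0, 8.0, 65.0, 2.0],
}

BOUNCE_THRESHOLD = 10.0  # seconds; anything shorter counts as a bounce

def platform_stats(seconds_on_page):
    """Return (average dwell time, bounce rate) for one platform."""
    bounces = sum(1 for s in seconds_on_page if s < BOUNCE_THRESHOLD)
    avg = sum(seconds_on_page) / len(seconds_on_page)
    return avg, bounces / len(seconds_on_page)

for platform, times in visits.items():
    avg, bounce_rate = platform_stats(times)
    print(f"{platform}: avg {avg:.1f}s on page, {bounce_rate:.0%} bounces")
```

With data like this, raw click counts would rank Reddit first, while the bounce rate shows almost none of those clicks turned into readers.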

Now for your security champion program metrics! Measure the following things so you can see what’s working and what is not. Don’t forget to report upwards about the ROI (return on investment) your champions program has produced!

  • How many new security champions you have attracted
  • Program engagement: how many people attended an event, how many people reported issues to you, and how many people asked questions
  • Use the bug tracker for metrics on how many security bugs are being reported and fixed, especially if you have targeted a specific bug class. Also count how many new instances of that type of bug appear; hopefully this number will be very low.
  • Instances where champions have told you about a security issue you would not have known about otherwise
  • If the champions report better work satisfaction and/or fewer missed days of work
  • Gather stories of your champs saving the day, providing help to their teammates, or anything else that makes for a good story-telling session for upper management.
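The countable metrics above can be tracked in something as simple as a spreadsheet, but here is a minimal Python sketch of the idea. Every field name and number is hypothetical; adapt them to whatever your own program actually measures.

```python
from dataclasses import dataclass

@dataclass
class MonthlyMetrics:
    month: str
    new_champions: int
    event_attendees: int
    issues_reported: int      # issues champions told you about
    target_bugs_fixed: int    # e.g. your targeted bug class
    target_bugs_new: int      # new instances appearing; ideally near zero

def summarize(history):
    """Return (and print) running totals suitable for reporting upwards."""
    totals = {
        "champions_recruited": sum(m.new_champions for m in history),
        "issues_reported": sum(m.issues_reported for m in history),
        "target_bugs_fixed": sum(m.target_bugs_fixed for m in history),
        "target_bugs_new": sum(m.target_bugs_new for m in history),
    }
    for name, value in totals.items():
        print(f"{name}: {value}")
    return totals

summarize([
    MonthlyMetrics("Jan", 3, 12, 2, 5, 4),
    MonthlyMetrics("Feb", 2, 18, 5, 9, 1),
])
```

Totals like these, tracked month over month, are exactly the kind of trend line that makes an ROI conversation with upper management easy.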

Up next, I will share a few more tips that don’t fit into any of the previous categories and conclude this series. Please feel free to email me with any questions!

(Over)Communication With Your Security Champions

As mentioned in the previous article (Recognizing & Rewarding Your Security Champions), the most common reason for failure of a security champions program is the security team losing steam, and/or the champions losing interest. In this article, we will discuss a few ways to avoid this. The best way? Communication.

https://youtu.be/nm1MpTuSNyI

To start off with, pace yourself. Often when I speak to security teams who have a failed program, they tell me how they started off very strong. “We gave them 2 different trainings, 2 workshops, and 3 lunch and learns, all in the first three months. Then we were exhausted. We haven’t done anything with them in over a year.” This scenario is far too common.

To pace yourself, I suggest meeting with each champion once a month, for 30 minutes. Then hold one lunch & learn and send one email to the champions. This might not sound like much, but you must remember, they are already doing a full-time job for your organization.

In my 1:1 meetings I like to ask the following questions (adapted from Ray Leblanc’s Security Champions article on the Hella Secure blog):

  • What are you working on?
  • What are you going to work on next?
  • Do you need any help?

Each of these questions is open-ended, with the hope that it will prompt a meaningful conversation. I usually take notes during the meeting, then send them afterward to both of us, with any action items for either of us highlighted in bold. (Note: I’ve used this technique to get many of my previous bosses to do things for me. Set a reminder for a week later, then reply-all to that email chain and ask: “Any updates on these action items?” It works like a charm!)

In your lunch and learn (which does not need to be at lunch time, or involve food), teach them something you want them to know. Do not teach them things they do not need to know, unless they asked for that topic specifically. During this session you or a teammate can teach, or you can show them a training video you like, or even a recording of a conference talk that really hit home for you. If you show them something pre-recorded, make sure you have watched it first; you don’t want to waste anyone’s time with death-by-PowerPoint. The more fun you can make these sessions, the better. If you’re up for it, invite all of the developers and let everyone learn something new!

Woman running
Photo by Greg Rosenke on Unsplash

Ideas for lunch and learn topics:

  • The specifics on how to apply policies, standards and guidelines. This could be a secure coding workshop, or a threat modelling session.
  • Talks about the top vulnerabilities that you are seeing in your own products, including the risks they pose to your specific business model.
  • Workshops on how to use the tools that your team wants them to be responsible for. Especially how to configure them, how to validate results, and where to find information on how to fix what they find.
  • If they are responsible for design or architecture, give them secure design training.
  • Tell them about a security incident your team had, and how it could have been prevented (assuming you are allowed to share this information).
  • Hold a consultation on the new policy, standard, or guideline your team is considering publishing. Ask for their feedback, then adjust your documents accordingly.
Remember to take attendance (for metrics), and note any questions so you can follow up afterward.

The monthly email:

Sometimes you just don’t have time to do a lunch and learn event or hold 1:1s, but you still need to send a monthly email. The monthly email lets the security champions know what’s going on, and that they still matter to you. The program is still running, because you sent an email. If you don’t send this email, and you haven’t touched base in any other way, this leaves a space where your program may start to disappear.

The monthly email does not need to be fancy and doesn’t need to say a lot. Generally, the monthly email says:

  • What events are happening this month at your org (lunch and learn, all staff, any other meeting they should know about)
  • Any updates your team has (new policy, new tool, project updates, etc)
  • Anything interesting from the news that they may find valuable
  • Any local security events they may be interested in
  • Any podcasts, videos, blog posts or any other media that is relevant and you feel relates to them, about security (of course)

I live in Canada, and Canada is a country of immigrants. This means we have many, many different religions represented in most workplaces. In December there’s Hanukkah, Christmas, and more (some years Ramadan falls in December too), and people often take time off for these special holidays. This means having a large meeting in December is darn-near impossible. This is the type of situation where you just send the monthly email! It could say something like the following:

Hello Security Champions!

As it is December and many of you will be off celebrating various holidays, we are not going to have any events this month. We also want to wish you happy holidays, and we hope you enjoy all the snow we got this past weekend!

In January we are going to boot the Champions program back up with a lunch and learn on XSS. As some of you are aware, we’ve found it in about 1/3 of our custom apps, and we want to stomp it out in the new year (with your help of course!). An invitation will arrive later this week.

In the meantime, please check out this XSS Deep Dive by Tanya Janca. We’re going to cover this topic a bit differently than she does, but it gives you a good idea of what we are up against.

Have a great December folks!

Sincerely,

The Security Team

My hope from this blog post is that you remember to continue to communicate with your champions. Don’t let your program slip, it will disappear faster than you think. When in doubt, send them an email and check in. Up next, we will discuss Metrics.

Recognizing & Rewarding Security Champions

If you’ve ever read the book The 5 Love Languages, or articles summarizing the 5 love languages, then you are aware that there are predictable patterns of how people respond to various acts of kindness. Someone’s “love language” is the specific type of kindness that they are most affected by. For example, someone for whom their love language is “words of affirmation” would respond very well to receiving a glowing performance review, a compliment on a new article of clothing, or accolades from their colleagues about a project they worked on.

The previous article in this series is Teaching Security Champions.

You may be wondering at this point if you accidentally clicked on an article from a women’s fashion magazine, not a technical article from We Hack Purple. But please have a bit more faith, and read on.

The 5 love languages are:

  1. Gifts
  2. Words of Affirmation
  3. Physical Affection
  4. Spending Quality Time
  5. Acts of Service

Two people sitting using laptops
Security Champions at work!

When we are creating a security champions program, it’s very important that we ensure the champions feel appreciated. We don’t want them to feel squished into doing two jobs for only one paycheck. One of the biggest challenges that security teams face when creating a champions program is having it fall apart after the first few months, either because the security team loses steam or the champions lose interest. We need them to feel very aware of our gratitude, and interested in the program itself, for them to continue to want to serve the security team’s agenda.

As you likely already figured out, not all the love languages listed above are work appropriate. We can’t run around giving hugs or holding hands with other employees. That said, we can adopt most of them for work situations, so that we can show the champions they matter to us, in appropriate ways, that support our security program.

Below is a non-exhaustive list of several ideas to make your champions feel as valuable as you know they are for your program.

  1. (Security Related) Gifts
  • Physical or digital security-related gifts – books, videos, training, CTFs, perhaps a copy of Alice and Bob Learn AppSec?
  • Create a Certificate to put on their wall.
  • Stickers, posters or any other decoration that is security focused.
  • Tickets to a conference or training.

  2. Words of Affirmation
  • Make sure to put a note in their performance review about them being a champion.
  • Tell their boss every time they do something that makes a big difference.
  • Send them an email when they did something big; let them know that YOU saw.
  • Recognize them in front of their peers (a special virtual background, a star by their name in Slack, etc.)
  • Digital badges for their email signature blocks.

  3. Physical Affection
  • High Fives are the only recommended form of physical affection that you should show another employee. High fives signal success, and your approval of whatever they just did.
    • *** And only do this if you are confident that the employee is comfortable. Please be mindful that some religions and cultures do not allow those of the opposite sex to touch each other; be respectful if this applies. Never push physical touching at work.

  4. Spending Quality Time
  • Giving them your time is a reward. When you do, give them your undivided attention (put your phone away), and turn your body towards them.
  • Let them see a new tool first, give them a “sneak preview” ahead of everyone else.
  • Let them help you make decisions. Ask for advice from them and feedback, then take it seriously.
  • Invite them to attend security events with you.
  • Whenever you meet with them, this is quality time. Ask them: What are you working on? What are you going to work on next? Do you need any help?

  5. Acts of Service
  • Help them with more than just security. Are you good at design? Help them with it! Are you great at presentations? Offer to let them practice in front of you. You don’t need to do this very often, just once can make a huge impression.
  • Make introductions, where appropriate. “Oh yeah, Chris from QA uses that tool, I’ll introduce you so you can learn.”
  • Find answers to the security questions and problems they have. Never leave them hanging.

When people feel appreciated and valued at work, they work harder (many studies show this to be true). Your champions already have full-time jobs on other teams; they are going above and beyond for you. Show them that you notice with your actions, not just your words.

In the next article we will discuss communication with your champions!

Announcing a new partnership with OWASP! 

For those of you hiding under a rock, OWASP is an international non-profit foundation which supports over 100 open source projects and over 300 meetup chapters worldwide, and runs regular large international conferences, all with the aim of helping everyone build secure software. Since We Hack Purple’s mission is very similar, we thought a partnership was very necessary!

As part of the OWASP & We Hack Purple partnership, all OWASP members are now provided free access to the Application Security Foundations Level 1 course from WHP! This introductory AppSec course will answer all your burning questions and define all the technical terms right at the start. Then we will set goals for your AppSec program at work as an exercise. After this, we dive deep into every type of application security activity and tool on the market while sprinkling you with quizzes and exercises. As a final project, we make an AppSec program action plan for you to bring back to work with you. This on-demand course is FREE for all OWASP members!

To access the course, read on, sign up with your OWASP.org email address, and start learning. We Hack Purple’s Application Security Foundations Level 1 course consists of 108 short modules. It is virtually trained by Tanya Janca, our head nerd at WHP. All you need to do is be an OWASP member and sign up! If you’re not a member, joining or renewing a lapsed OWASP membership is easy! Just remember to sign up using your owasp.org email so you can complete the We Hack Purple registration process. Join OWASP as a new member, or renew your membership.

Enroll in We Hack Purple’s Application Security Foundations Level 1 course

Pushing Left, Like a Boss – Part 10: Special AppSec Activities and Situations

Special Situations

Not all application security programs are the same, and not all security needs are equal. In this article we will compare security for a small family business, a government and Apple.

Think about this: Not only does Apple make two popular consumer operating systems (macOS for desktop and laptop computers and iOS for phones and iPads), they also make a popular cloud platform (iCloud), a popular programming IDE (Xcode), hardware for several types of laptops, phones, tablets, watches, and so, so much more. They also build physical security features directly into their products. It wasn't until I decided to re-publish this article that I realized just how many things depend on Apple. It's staggering.

What this means is that Apple has very special security needs. Their operating systems, cloud, and other products that we depend on must be secure. They must go far beyond the average company in their efforts to ensure this, and they do.

Tanya Janca teaching

See that computer to my left? It's an Apple. I own 3 laptops that run macOS, and even used to work at an Apple repair shop back in the day. I learned to program on an Apple computer.

But the average company is not Apple, which means they don't need to take the same precautions. As a second example, let's take “Alice's Flowers”.

Alice has a website for her floral shop, which delivers flowers in her small town. It shows basic info, such as where the shop is located, their phone number, and when they are open. It also has a link to her Shopify shop for online orders (meaning she does not need to secure that part; Shopify does. Alice is smart to have outsourced the hard part. This is called Risk Transference.). The rest of Alice's website, in the big scheme of things, is not very important. If her site goes down for a day or two, it would be inconvenient, but it would not be the end of the world.

Most companies fall somewhere between Apple and “Alice's Flowers” in regard to their risk. It has been my experience that many places, when I look at where they spend their security dollars, seem to be very confused as to where they sit on this scale. This is not my attempt to make fun of or insult any company; I think it's a sign of our times that not all companies are receiving good (and unbiased) advice.

The AppSec activities listed below do not apply to all IT shops. I invite you, reader, to try to imagine where your workplace would be on this scale. Please remember your place on the scale as you read the rest of these examples, to help you decide if any of these activities may apply to your place of work.

Special AppSec Activities

Responsible Disclosure

Responsible Disclosure (also known as Coordinated Disclosure) is a process where someone finds a security problem in a product or site and reports it to the company, and the company 1) does not sue them, 2) thanks them, 3) sometimes offers a token of appreciation (but generally does not offer money), and/or 4) sometimes publicly acknowledges the person, or files their bug formally as a CVE with MITRE.

Last week (when I wrote the first version of this article, in 2019) I used a government website and saw a bug. I figured out who to talk to (the Canadian Government doesn't have a disclosure process of any kind), and I emailed it to them. They said thanks, I offered ideas on how to fix it, and they were great. This is one version of responsible disclosure. See how I was responsible?

Some places have a formal program whereby security researchers (or normal users like me) can report issues to them in a secure manner (me sending details over Twitter and then an email to a government employee was not very ‘secure'). If the product they found the issue in is something well-known or used often, they may file a CVE (Common Vulnerabilities and Exposures entry) so that others are aware that that version of the product is known to be insecure. But also for credit; having your name on a CVE is pretty cool.

The industry standard for fixing such things is (theoretically) 90 days, but not every company complies, and not every person who reports such an issue is so patient. When you hear that someone “dropped 0-day”, what they mean is that they released the info about a vulnerability onto the internet while there is no known patch for it (such a vulnerability is also known as a ‘zero day'). This is often done in order to pressure a company to fix the issue. Because if one person found it, that means others might have found it too (and they may be exploiting it in the wild, causing people problems, and that's no good).

Note: “dropping 0-day” is NOT a part of responsible disclosure.

Bug Bounties

Katie Moussouris basically invented bug bounties as we know them today; she speaks on this topic often and is a wealth of knowledge on this and many other security topics. Since then, several large tech companies have started their own programs, including Shopify, Apple, and Netflix.

The invention of bug bounties spawned an entirely new industry: dedicated security researchers or “bug hunters”, as well as large companies that sell these people's services on a pay-per-find basis.

The thing about working as part of a bounty program is that you only get paid if you find something, if no one else has found it before, if your finding is in scope, and if your report actually makes sense. Submitting things that aren't in scope is a great way to get yourself banned (such as taking over the accounts of employees at the company you are supposed to be finding bugs for; don't do that). What this means is that many, many bug hunters make little-to-no money, and a small few do quite well. I've heard people call this “a gig economy”, which means no job security, benefits, or anything to fall back on if you have a bad month.

The economics for the researchers aside, this is an advanced AppSec activity. I've been asked many times “Should we do a bounty?” to which I have responded “How is your disclosure program going? Oh, you don't have one, okay. Ummm, how is your AppSec program going? Oh, you don't really have a formal program you just hire a pentester from time-to-time, okay. Hmmmm, do you have any security-savvy developers that could fix the bugs the bounty finds? No, okay, ummmmm. So you already know that you should DEFINITELY NOT DO A BOUNTY, right? Okay, yeah, thanks.”

I realize that doing a bounty program is “hot” right now, and that the companies that sell bounty programs are happy to tell you that it's good value for your money to start no matter where you are at in your AppSec program. I disagree. I often sugar-coat things in my blog, but for this I can't. If you don't already have an AppSec program and you do a bug bounty program you are setting your money on fire. If you want to hear from an expert on the topic though, you should watch Katie Moussouris explain it much more gracefully than I, here: Bug Bounty Botox Versus Natural Beauty.

Capture The Flag, Cyber Ranges, and other forms of Gamification

Capture the Flag contests, also known as CTFs, are not a bunch of security people running around in a field with flags; they are contests made up of security puzzles. Sometimes it's vulnerable systems you need to exploit, sometimes it's intentional puzzles for you to solve. When you ‘solve' one of the challenges you get a ‘flag', which means points. The person or team with the most points at the end wins.

Cyber ranges and other gamification systems are similar: you play, solve security problems, and learn at the same time.

Why do security professionals sit around playing games and solving puzzles? Because this is a great way to learn. And it's FUN! Also: if you find a vulnerability that is the same mistake you have made before in your own code, you will never, ever make that mistake again. Trust me on that one.

Inviting your developers to participate in security gamification can be a great team building exercise and it can teach them many of the lessons you wish they knew!

Snowflakes

There are many more special situations that demand interesting and exciting AppSec activities, such as chaos engineering, red teaming, and so much more. You can read a lot more about it in my book, Alice and Bob Learn Application Security.

Thank you

Thank you for reading my first blog series; this is the end. When I started my blog I honestly wasn't sure anyone would read it, but I wanted to share all of the things I had learned so I went for it. Thank you for coming on this journey with me, I hope you follow me on many more.

Pushing Left, Like a Boss – Part 9: An AppSec Program

In my talk that this blog series is based on, “Pushing Left, Like a Boss”, I detailed what I felt an AppSec program should and could be. Since then, I've learned a lot and now see that there are quite a few activities that you can do, but it's the goals and the outcomes that actually matter. Our industry has also changed quite a bit since I wrote that talk (written in 2016, first seen in public in 2017; this article was first published in 2019 and republished here in 2021).

My first international talk, at AppSec EU, 2017. It feels so long ago.

My previous thoughts on what a basic AppSec Program should be:

For bonus items I had listed:

And for “extra special situations” I recommended the following (which will be explained in the next blog post):

  • Bug Bounty Programs
  • Capture the Flag Contests (CTFs)
  • Red Team exercises

Anne Gauthier of OWASP Montreal, myself (pre-Microsoft) and Nancy Gariché of Secure That Cert and OWASP DevSlop. In the background is Christian Folini of the OWASP CRS project. I had no idea how important these people would become to me at the time.


I'm going to preface this next part with two thoughts.

You can't do security “right” if you aren't doing IT “right”. If you can't publish fixes for a year+ because your processes are broken, if you are underwater in technical debt, if you have dysfunction within your IT shop already, this is going to be very hard. I suggest starting with modernizing your systems and entire IT team as you modernize your security approaches, hand-in-hand. Don't give up, you can do this! Take one item, aim for it, and continue on until you're doing well.

If you have poor communications between the security team and the rest of IT, this will be another hurdle that you have to work on. Culture plays a big part in ensuring your efforts are successful. I've released a bunch of videos on my YouTube channel on this topic; start with this one.

My new vision for an AppSec program:

  • A complete picture of all of your apps. Bonus: alerting, monitoring and logging of those apps.
  • Capability to find vulnerabilities in written code, running code, and 3rd party code. Bonus: the ability to quickly release fixes for said issues.
  • The knowledge to fix the vulnerabilities that you have found. Bonus: eliminating entire bug classes.
  • Education and reference materials for developers about security. Bonus: an advocacy program, a security champion program, and repeated reinforcement of a positive security culture.
  • Providing developers with security tools to help them do better. Bonus: writing your own tools or libraries.
  • Having one or more security activities during each phase of your SDLC. Bonus: having security sprints and/or using the partnership model (assigning and/or embedding a security person to/within a project team).
  • Implementing useful and effective application security tooling. Bonus: automating as much as possible to avoid errors and toil.
  • Having a trained incident response team that understands AppSec. Bonus: implementing tools to prevent and/or detect application security incidents (can be homemade), providing job-specific security training to all of IT, including what to do during an incident.
  • Continuously improve your program based on metrics, experimentation and feedback from any and all stakeholders. All feedback is important.

I'd love to hear your thoughts on my new application security ‘prescription'. Please comment below.

Up next in this series we will discuss the AppSec “extras” and special AppSec programs; I will discuss all the things in this article that I have not previously defined for you.

Pushing Left, Like a Boss – Part 8: Testing

Testing can happen as soon as you have something to test.

Suggestion: Provide developers with security scanning software (such as OWASP ZAP), teach them to use it, and ask them to fix everything it finds before sending the app to QA.

You can add automated security testing into your pipeline, specifically:
  • VA scanning of infrastructure (missing patches/bad config) - this applies to containers and VMs, though you will often use different tools for each
  • 3rd party components and libraries for known vulnerabilities (SCA)
  • Dynamic Application Security Testing (DAST) - only do a passive scan so that you don't make the pipeline too slow, or use a HAR file to control which parts are tested and which are not.
  • Static Application Security Testing (SAST) - do this carefully, it can be incredibly slow. Usually people scan only the delta (the newly changed code) in the pipeline, and do the rest outside of it.
  • Security Hygiene - verify your encryption settings, that you are using appropriate security headers, your cookie settings are good, that HTTPS is forced, etc.
  • Anything else you can think of, as long as it's fast. If you slow the pipeline down a lot you will lose friends in the Dev team.
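The "security hygiene" check above is the easiest one to automate yourself. Here is a minimal Python sketch; the header list and the shape of the `response_headers` dict are my own illustrative choices, not a standard, and a real check would also inspect the header values:

```python
# Sketch: flag missing security headers in an HTTP response.
# The header list below is illustrative, not exhaustive.
RECOMMENDED_HEADERS = [
    "Strict-Transport-Security",   # force HTTPS on future visits
    "Content-Security-Policy",     # restrict where content loads from
    "X-Content-Type-Options",      # stop MIME-type sniffing
    "X-Frame-Options",             # block clickjacking via framing
]

def missing_security_headers(response_headers: dict) -> list:
    """Return the recommended headers absent from a response."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED_HEADERS if h.lower() not in present]

# Example: an (invented) response that only sets HSTS and a content type.
headers = {
    "Strict-Transport-Security": "max-age=31536000",
    "Content-Type": "text/html",
}
print(missing_security_headers(headers))
```

A check like this is fast, so it will not slow the pipeline down or cost you friends on the Dev team.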

Q&A at #DevSecCon Seattle, 2019

During the testing phase I suggest doing a proper Vulnerability/Security Assessment (VA) or PenTest (if you need management's attention), but early enough that if you find something you can fix it before it's published. More ideas on this:

  • Repurpose unit tests into security regression tests: for each test, create an opposite test that verifies the app can handle poorly formed or malicious input
  • For each finding in the security assessment you performed, create a unit test that ensures the bug does not re-appear
  • Ensure developers run and pass all unit tests before even considering pushing to the pipeline
  • Perform all the same testing that you normally would: stress and performance testing, user acceptance testing (hopefully you started with A/B testing earlier in the process), and anything else you would normally do.
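To illustrate the "opposite test" idea from the first bullet, here is a small Python sketch; the `parse_age` function and its rules are invented for illustration:

```python
def parse_age(value: str) -> int:
    """Parse a user-supplied age; accept only whole numbers from 0 to 130."""
    if not value.isdigit():
        raise ValueError("age must be digits only")
    age = int(value)
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# The normal unit test: well-formed input succeeds.
assert parse_age("42") == 42

# The 'opposite' security regression tests: hostile or malformed
# input must be rejected, never processed.
for bad in ["-1", "999", "42; DROP TABLE users", "<script>", ""]:
    try:
        parse_age(bad)
        raise AssertionError(f"accepted malicious input: {bad!r}")
    except ValueError:
        pass  # rejected, as expected
```

The positive test proves the feature works; the opposite tests prove it keeps refusing bad input on every future build.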
Penetration testing is an authorized simulated attack on a computer system, performed to evaluate the security of the system. The idea is to find out how far an attacker could get. This differs from a security assessment or vulnerability assessment in that penetration testers prioritize exploiting the vulnerabilities they find, rather than just reporting them. If you want to shock management and get some buy-in, a PenTest is the way to go. But if you just want to find the things wrong with your app and lower the risk to your systems, I would recommend a security/vulnerability assessment instead. It depends on your situation.

Up next in this series we will discuss what a formal AppSec program should include, followed by AppSec “extras” and special AppSec programs, which will end this series.

Pushing Left, Like a Boss – Part 7: Code Review and Static Code Analysis

This article is about secure code review and Static Application Security Testing (SAST). Static analysis is a highly valuable activity which can find a lot of security problems, far before you get to the testing or release stages, potentially saving both time and money.

Note: SCA is Software Composition Analysis, verifying that your dependencies are not known to be vulnerable. I have heard many say "static code analysis" when referring to code review/SAST tools, shortening it to SCA for simplicity. We will not do that; SCA will only be used to refer to software composition analysis.

When application security folks say ‘static' analysis, we mean that we will look at written code, as opposed to ‘dynamic', which means when your code is running on a web server.

Since I wrote this article a few years ago, I have had a chance to do more in the code review space and spend some time working with SAST tools. Although my attention span is short, and I can be impatient at times (such as, for example, when I am awake), I can now spot several types of problems fairly easily. If you had asked me a few years ago if I would ever find code review pleasurable, I would have laughed, but now I find validating SAST results rather satisfying. It's funny how much our opinions can change over time.

Code review can happen during both the coding and the testing phases of the system development life cycle.

There are two options for doing code review: manually or with a tool. There are pros and cons to each, and using both will get you the best results.

When reviewing code manually for security you don't read every line of code; you review just the security controls, to ensure they are in all the places they should be and that they are implemented correctly. Although I have not completed hundreds of secure code reviews in my career, I do recall the delight of discovering that there was no input validation on a data field, or that an app was using inline SQL instead of stored procedures, both of which are big no-nos. It was so obvious once I knew where to look and what to look for. However, most code review findings are not so simple, and many bugs are difficult or nearly impossible to spot with only the naked eye.

Note: when I say “only review the security controls” I mean things like a login, input validation, authorization, authentication, integration points, etc. Anything that has to do with the security of the app.
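The inline SQL problem mentioned above is worth seeing concretely. Here is a sketch using Python's built-in sqlite3 module; the table, data, and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

user_input = "nobody' OR '1'='1"  # classic SQL injection payload

# Vulnerable: inline SQL built with string concatenation.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # the OR '1'='1' clause matches every row

# Safer: a parameterized query treats the input as data, not SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # no user is literally named "nobody' OR '1'='1"
```

Spotting the concatenated query in the first half is exactly the kind of control check a manual reviewer performs.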

When using a tool for code review you would use something called a ‘static code analyzer' or a ‘SAST' (Static Application Security Testing) tool. This special kind of software parses your code into areas of concern and attempts to follow every possible outcome. It takes a lot of processing power and can take hours or even days to complete. It then creates a report, typically with 60-80% false positives.

Note: since I first wrote this article, new types of static code analysis tools have been created that allow for very fancy grepping (regex searching) of the code base, with templates to help you find problematic code. How cool is that!

I know what you are thinking right now: 80% false positives!? Why would anyone want to use a tool like that? Let me explain.

The key to looking at the results of a SAST tool is that the items it lists are not answers; they are hints of where to look for problems, which the code reviewer (hopefully a security expert) can investigate. This means that instead of reading 20,000 lines of code, the code reviewer uses the tool, it finds 200 ‘clues', and from those 200 ‘clues' they end up finding 20 real bugs. Many of those bugs they could not have found with just their eyes, because SAST tools can go several layers deep into the code, in a way humans just can't.
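To make the "clues, not answers" idea concrete, here is a toy Python sketch in the spirit of the fancy-grepping tools mentioned earlier. It is nowhere near a real SAST tool (which actually parses code and traces data flow), and the rules are entirely my own:

```python
import re

# Toy 'rules': pattern -> why a human should look closer.
RULES = {
    r"\beval\(": "possible code injection via eval()",
    r"\bexec\(": "possible code injection via exec()",
    r"SELECT .* \+ ": "possible SQL built by string concatenation",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
}

def scan(source: str) -> list:
    """Return (line_number, hint) clues for a human reviewer to triage."""
    clues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, hint in RULES.items():
            if re.search(pattern, line):
                clues.append((lineno, hint))
    return clues

code = 'rows = db.run("SELECT * FROM t WHERE id=" + uid)\nresp = get(url, verify=False)'
for lineno, hint in scan(code):
    print(lineno, hint)
```

Every clue still needs a human to decide whether it is a real bug or one of those false positives, which is exactly the triage work described above.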

When performing code review it is possible to find all sorts of other problems with your application, not just security issues. During one of my projects the code reviewer found several memory leaks. When we fixed them our application became lightning fast, which made our project team look amazing. There is so much more than just security problems that a good code reviewer can find; it is definitely a worthwhile task if you want to build truly resilient and secure software.

My best 'superwoman' pose at #DevSecCon Seattle, Sept 2019

Although we already discussed this in Part 5.2 Using Safe Dependencies, I'm going to bring it up again: everyone needs to verify the security of their 3rd party code/components/frameworks/libraries/whatever-you-want-to-call-the-code-in-your-app-that-you-didn't-write. You must verify that these 3rd party components are not known to be vulnerable. When I say ‘known to be vulnerable', I mean there is information currently available on the internet documenting what the problem is and/or how to exploit it.
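Conceptually, this is all an SCA check does: compare your dependency pins against a list of known-vulnerable versions. A minimal Python sketch follows; the package names and advisory data are entirely made up, and real tools query live advisory databases instead of a hard-coded dict:

```python
# Hypothetical advisory data: package -> versions known to be vulnerable.
KNOWN_VULNERABLE = {
    "example-http-lib": {"1.0.0", "1.0.1"},
    "example-yaml-lib": {"2.3.0"},
}

def audit(dependencies: dict) -> list:
    """Return (package, version) pairs known to be vulnerable."""
    return [
        (pkg, version)
        for pkg, version in dependencies.items()
        if version in KNOWN_VULNERABLE.get(pkg, set())
    ]

# Your app's (invented) dependency pins:
deps = {"example-http-lib": "1.0.1", "example-yaml-lib": "2.4.0"}
print(audit(deps))  # only the http lib pin is on the known-bad list
```

Because the check is a simple lookup, there is no excuse not to run it on every build.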

Many organizations and industry spokespeople create a lot of fear, uncertainty and doubt (FUD) around zero days (vulnerabilities in popular software for which there is no existing patch), advanced persistent threat (APT - someone living on your network for an extended period of time, spying on you) or very advanced attackers, such as nation states. In reality, almost all serious security incidents are a result of our industry not keeping up with the basics; missing patches, people with admin privileges clicking on a phishing email while logged in, and well-known (and therefore preventable) software vulnerabilities, such as using code with known vulnerabilities in it. Essentially; basic security hygiene.

Bonus resource: My friend Paul Ionescu created a code review series.

Up next we will talk about the Testing Phase of the SDLC!

Pushing Left, Like a Boss – Part 6: Threat Modelling

The last security-related part of the design phase of the System Development Life Cycle (SDLC) that we will talk about in this blog is threat modelling, affectionately known as “evil brainstorming”.

Threat modelling happens during the design phase of the system development lifecycle.

The purpose of threat modelling is to discuss the possible threats to your system, then to do your best to mitigate them, and if not, to manage or accept the risks. There are multiple formalized methods for doing this, which I will not discuss here; each one already has its own book, advocate or dedicated blog, likely detailing it better than I ever could. In fact, my friend Adam Shostack wrote an amazing book about it. Check it out!

That said, dear reader, I want you to understand why threat modelling is important, who needs to do it, as well as when and how you can start.

In order to create a threat model, a representative from each project stakeholder group needs to be present: someone from the business or representing the customer, a security rep, and someone from the development team. Yes, someone from the tech team needs to be there; they often have the most frightening threat ideas!

Then you discuss what the risks are to the system. “What keeps you up at night?”, “If you were going to attack your app, how would you do it?”, “What threat actors should we be aware of? Should we prepare for?”, etc. You want to look at the system from the viewpoint of an attacker, what could go wrong? How could the system be misused? How can we ensure we protect the user (including from us)? What is the worst-case scenario? This session can be incredibly formal (creating attack trees, for instance), or quite informal (which is how I would suggest you start, if you have never done one before). You can read about two informal threat modelling sessions I documented on my blog; Serverless with Bryan Hughes and Robots with Jesse Hones.

Once you have a list of concerns, you will need to evaluate which ones are more (or less) likely and which may require security testing of your app (to see if it is vulnerable or not). You also need to evaluate which ones matter more or less; not all risks are created equal. You may be surprised (and frightened) by the justifications for the value of each risk; recently I had to deliver the news that the potential damage was ‘absolutely catastrophic'. Even though the risk itself was ‘fairly unlikely', the project team changed their course of action immediately once they understood the potential long-term ramifications.

When you have your list of risks, and how much each one matters, you need to plan. Will you mitigate (fix/remove) some of the issues? Will you manage some, by keeping an eye on them to see if they get worse? Will you accept some of the risks? Perhaps some are highly unlikely or pose only a tiny threat? Why spend funds on something that is not worrisome?
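The mitigate/manage/accept triage above can be sketched as a tiny likelihood-times-impact calculation. The 1-5 scales, thresholds, and example threats below are my own illustrative choices, not a formal methodology:

```python
def triage(likelihood: int, impact: int) -> str:
    """Score a threat on 1-5 scales and suggest a course of action."""
    score = likelihood * impact
    if score >= 15:
        return "mitigate"   # fix or remove the issue
    if score >= 6:
        return "manage"     # keep an eye on it in case it gets worse
    return "accept"         # too small to spend funds on

# Invented threats from an imaginary session: (likelihood, impact)
threats = {
    "SQL injection on login": (4, 5),
    "Stale copyright year in footer": (5, 1),
    "Insider leaks audit logs": (2, 4),
}
for name, (likelihood, impact) in threats.items():
    print(name, "->", triage(likelihood, impact))
```

Even a crude scoring scheme like this forces the group to rank threats against each other instead of treating every concern as equally urgent.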

The entire process should be documented, especially the decisions you make at the end, with management sign-off. These decisions must be made with management, or someone who has the authority to make large decisions like this. A software developer cannot “accept the risk”, nor likely can you as a security engineer; it is probable that you will need a C-level executive to accept any risk rated above “medium” or “low”.

If you are doing an iterative design, you will need to do several shorter threat modelling sessions, for new features or large changes to existing features. If you are doing a large waterfall-style approach, one thorough session should be enough (assuming no large changes afterward). You will need to decide this for yourself and your org.

I threat model and/or do design reviews all the time, and I really enjoy it. I genuinely feel that the experience I’ve had threat modelling has made me a much better software developer, and likely a slightly more thoughtful person.

If you like what you have read here and want to delve in deeper with threat modelling, I suggest first reading up on STRIDE, and of course read Alice and Bob! After that, using your favorite search engine, look up “OWASP Threat Modelling”, STRIDE, and PASTA, as a start down your new path.

Pushing Left, Like a Boss, Part 5.14 Secure Coding Summary

This article will summarize the previous articles in Part 5 of this series, and is hopefully something that you can use for your organization as a start for a secure coding guideline for your developers.

Secure Coding Guideline

In order to ensure that your developers are following these guidelines, code review is recommended.
Tanya Janca, hugging a giant tree

I'd like to thank all of my professional mentors and the OWASP volunteers who have taught me about application security; that is where and how I have learned the majority of what I know on this topic. Without the OWASP community and its free, vendor-neutral teachings, many of us would not be where we are today. The OWASP community has my unwavering and unending gratitude and support. Thank you.

Special thanks to the following people who have helped me directly in learning these concepts, and so much more: Dominique Righetto, Jim Manico, Sherif Koussa, Adrien de Beaupre, Sonny Wear, Nicole Becher, Chenxi Wang, Zane Lackey and Kim Tremblay. I'd never have gotten this far without them.

If you like this blog series, you will love the OWASP Cheat Sheet project! My favorite OWASP project of all time. Check it out!

Up next in part 6 we will discuss threat modelling, the last security-related part of the design phase of the SDLC, before moving on to the testing phase, the types of security testing we can do, and the approaches we can take.

Do you have any more secure coding principles that you would like to add? Guidance you'd like to share? Please add it to the comments below!