DevSecOps Worst Practices – The Series

Photo by Samuel Sascha on Unsplash

When I started working in DevSecOps, I hadn’t even done DevOps before (gasp!). I had been following the Waterfall software development methodology in the Canadian Public Service (with a tiny bit of agile thrown in) for over a decade, before I joined Microsoft and my entire world changed. I had been working on the OWASP DevSlop open-source project with my friend Nikky Becher at the time, muddling our way through, trying to figure out how to pentest in a DevOps environment, and suddenly I was supposed to be a DevOps expert. I thought “How can I catch up? And how can I do it FAST?”

I decided that I would create an app using the Azure DevOps CI/CD, and I would ‘share’ the pipeline I used to build it. I quickly figured out there wasn’t a way in Azure DevOps (at the time) to open-source or ‘share’ a pipeline, so I decided I would film myself, live, as I built. I made a basic app and a basic dev -> QA -> UAT -> Prod deployment, and I was off to the races. I started coding live on Twitch, adding every security tool to my pipeline that I could get my hands on.

Not every episode of the OWASP DevSlop show went well, and neither did every tool. The DAST I chose threw a false positive while I was presenting at Microsoft in Ottawa, Canada (live on stage, naturally). There’s one episode where Nancy Gariché (one of the project leaders) and I just FAILED the whole time; after 3 hours we never got the tool working at all. There was a 5-hour episode where I updated the .NET Core framework and all of my dependencies that had vulnerabilities in them, and that was exhausting. But I LEARNED. And I learned fast.


In 2018 I joined a company called IANS Research, doing ‘Ask an Expert’ calls, helping clients with Azure and AppSec problems. As I learned more about DevOps and DevSecOps, I started helping clients with those topics as well. I would often have to do research to figure out the solution before a call, but I didn’t mind. All of their questions helped give direction to my learning. Over the years I have helped hundreds of AppSec teams crush their problems, and learned a LOT along the way.

In 2020 I started coaching two different companies with their AppSec programs, for a few hours a week, building out their DevSecOps programs. Compared with the short engagements I had with IANS clients, these long-term relationships helped me see programs evolve over extended periods of time. And I got my hands dirty on a weekly basis, trying many different types of tools (for better or worse).

I also attended numerous conference talks and read lots of articles to see what others were doing, learning best practices along the way.

All of this learning was A LOT of work. One day an IANS client told me they were going to start in DevSecOps, and asked if I could make them a “do not do” list: a list of pitfalls that they should work to avoid. Dear reader, I dove deep down this rabbit hole. It had never occurred to me to start with “what not to do”, rather than “this is the way”. And how many headaches could be avoided…

With this in mind, I wrote a conference talk (video below) on this topic. This blog series is going to explore each of the 15 ‘worst practices’ that I cover in the talk.

Me, keynoting the #RSAC Conference, April 2023

The 15 items I will present in this series are as follows:

  1. Breaking Builds on False Positives
  2. Untested Tools
  3. Artificial Gates
  4. Missing Test Results
  5. Runaway Tests
  6. Impossible SLAs
  7. Untrained Staff
  8. Forgotten Bugs
  9. No Positive Reinforcement
  10. Only Worrying About Your Part
  11. Multiple Bug Trackers
  12. Insecure SDLC
  13. Overly Permissive CI/CD
  14. Automation Only in the CI/CD
  15. Hiding Mistakes and Errors

I hope that by sharing mistakes that I have seen and made, all of us can avoid these issues going forward.

~ The next article, ‘The Boy Who Cried Wolf’, is here. ~

The Difference Between SCA and Supply Chain Security

Right now, the concept of the software supply chain and securing it is quite trendy. After the SolarWinds breach, the crypto wallet attacks, and the Log4j fiasco, the entire world appears to be focused on securing the software supply chain. I’m not complaining. If anything, as an application security nerd, I am quite pleased that I am finally getting buy-in that these things need to be protected, and that vulnerable dependencies need to be avoided. Folks, this is GREAT.

Photo by Mika Baumeister on Unsplash

Software composition analysis, often called SCA, means figuring out which dependencies your software has, and of those, which contain vulnerabilities. When we create software, we include third-party components, often called libraries, plugins, packages, etc. All third-party components are made up of code that you and your team did not write. That said, because you have included them inside of your software, you have added (at least some of) their risk into your product.

A ‘supply chain’ means all of the things that you need to create an end product. If you were creating soup, you would need all of the ingredients of the soup, you would need things like pots and pans in order to cook and prepare the ingredients of the soup, you would need a can or a jar to put it in, and likely a label on top to tell everyone what type of soup it is. All of those things would be considered your supply chain. 

Photo by Miltiadis Fragkidis on Unsplash

Imagine one of the ingredients in your soup is flour. Chances are the wheat was grown in a field, then harvested, then ground down into flour, then possibly processed even further, and only then sent to you so that you could create your soup. Any of the steps along the way could have been contaminated, or the wheat could have rotted or otherwise spoiled. You have to protect the wheat all along the way, both before it gets to you and once you make the soup, to ensure the end product is safe to eat.

Protecting all of the parts along the supply chain, from ensuring that there aren’t terrible chemicals sprayed on the ingredients as they grow, to ensuring that the can or jar that you put the soup into has been properly sterilized, is you securing your supply chain.

When we build software, we need to secure our software supply chain. That means not only ensuring the third-party components that we’re putting into our software are safe to use, but also that the way we are using them is secure [more on this later]. We also have to ensure the way we build the software is safe, and this can mean using version control to store our code, ensuring any CI/CD that we use is protected from people meddling with and changing it, and ensuring every other tool we use and process we follow is also safe.

If you’ve followed my work a long time, I am sure you know that I think this includes a secure system development life cycle (S-SDLC). This means each step of the SDLC (requirements, design, coding, testing and release/deploy/maintain) contains at least one security activity (providing security requirements, threat modelling, design review, secure coding training, static or dynamic analysis, penetration testing, manual code review, logging & monitoring, etc.) A secure SDLC is the only way to be sure that you are releasing secure software, every time. 

Tanya Janca

With this in mind, the difference between the two is that SCA only covers third party dependencies, while supply chain security also covers the CI/CD, your IDE (and all your nifty plugins), version control, and everything else you need in order to make your software. 

It is my hope that our industry learns to secure every single part of the software supply chain, as opposed to only worrying about the dependencies. I want securing these systems to be a habit; I want it to be the norm. I want the default IAM (identity and access management) settings for every CI/CD to be locked down. I want checking your changes into source control to be as natural as breathing. I want all new code check-ins to be scanned for vulnerabilities, including their components. I want us to make software that is SAFE.

If you read my blog, you are likely aware that I recently started working at Semgrep **, a company that creates a static analysis tool and recently released a software supply chain security tool. If you’ve seen their SAST tool, you know they’re pretty different from all the other similar tools on the market, and their new supply chain tool is also pretty unique: it tells you if your app is calling the vulnerable part of your dependencies. They call it ‘reachability’. If your app is calling a vulnerable library, but it’s not calling the function inside of that library where the vulnerability lives, you’re usually safe (meaning it’s not exploitable). If you ARE calling the function inside the library where the vulnerability is located, there’s a strong likelihood that the vulnerability could be exploited from within your application (meaning you are probably not safe). We added this to the product to help teams prioritize which bugs to fix, because although we all want to fix every bug, we know there isn’t always time. In summary, if the vulnerability is reachable in your code, you should run, not walk, back to your desk to fix that bug.
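To make the ‘reachability’ idea concrete, here is a tiny, self-contained Python sketch. It only illustrates the concept, not how Semgrep (or any other tool) implements it, and the function names are invented for the example.

```python
# Toy illustration of "reachability". The two lib_* functions stand in for a
# third-party dependency; imagine the known vulnerability lives in lib_parse_exif().

def lib_resize(data: bytes, width: int) -> bytes:
    """Stands in for a safe function inside the dependency."""
    return data[:width]  # placeholder behaviour

def lib_parse_exif(data: bytes) -> dict:
    """Stands in for the function that contains the known vulnerability."""
    return {"raw": data}

def make_thumbnail(image_bytes: bytes) -> bytes:
    # The application only ever calls the safe function, so the vulnerable
    # code path (lib_parse_exif) is never reached from our code base.
    return lib_resize(image_bytes, width=128)

# A plain SCA scan reports: "this dependency has a known vulnerability."
# A reachability-aware scan can add: "...but the vulnerable function is never
# called from your code", which helps teams prioritize what to fix first.
```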

Me, again
I have worked with more than one company who had programmers who did not check in their code regularly (or at all) to source control. Let me tell you, every single time it was expensive! Losing years of hard work will break your heart, not just your budget. Supply chain security matters.

Join me in this adventure by starting at your own office! Whether you have budget or not, there are paid and free tools that can help you check to see if your supply chain is safe! You can also check some of this stuff manually, easily (the IAM settings on your CI/CD are just a few clicks away). Reviewing the setup for your systems, and ensuring you have everything important backed up, will make your future less stressful, trust me. 

You can literally join me on this adventure, by signing up for the Semgrep newsletter! The Semgrep Community is about to launch live free events, including training on topics like this, and we can learn together. First email goes out next week, don’t miss out!

~ fin ~

Photo by Mika Baumeister on Unsplash

** I work at Semgrep. This means I am positively biased towards our products and my teammates (I think they are awesome!). That said, with 27+ years’ experience in IT, being a best-selling author and world-renowned public speaker, there are a LOT of companies that would be happy to let me work for them. I chose Semgrep for a reason; my choice to work there was intentional. That said, I will try not to be annoying by only talking about work on my blog, promise!

API10:2019 Insufficient Logging & Monitoring

In the previous post we covered API9:2019 Improper Assets Management, which was the 9th post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization. You can see the formal document from the OWASP API Security Top Ten Team, here.

Photo by Ales Krivec on Unsplash

Years ago, I worked at Microsoft as a developer advocate. A few of us were working together, creating demos for ‘Microsoft Ignite The Tour’. I recall two of my colleagues asking me why Azure couldn’t detect an SQL injection attack they had done as part of their demo. I checked their settings and explained that logging and monitoring were turned off. If monitoring is off, that means there is no observation of the application; how would an attack be noticed if they had removed Azure’s ability to watch what was happening? I also explained that since they had also turned off logging, an incident responder would have nothing to investigate. They would not only miss the attack happening, but they would also never be able to find out later what had happened if they investigated. It had been turned off to save money (we didn’t want to spend a small fortune just on demonstrations, we had a budget). They turned both logging and monitoring back on, tried the attack, and immediately Azure went into red alert. All was well for the developer advocates and our demos.

Imagine finding your data on the dark web for sale, and not even knowing how it got there. Obviously, this has never happened to me before at a client site… If it had, I would tell you how incredibly frustrating it is not to be able to explain what happened, and therefore ensure it could never happen again. You can’t do that if you have no logs AND no monitoring. Again, this is totally hypothetical and definitely did not happen to me or any of the clients that I have worked with.

– Not me

Back when I was a full-time developer, I remember asking to turn on logging. I had asked the client during the requirements phase, explaining why I wanted it (so we could provide better reliability, and investigate any outages, there was no security slant for me, at the time). The client had agreed immediately. Then we got into the costing phase, trying to calculate how much the final project would be. When the client saw how much logging was going to cost, it was cut immediately. I had this happen several times as a dev, always being told it was a cost-saving measure. Although I didn’t love this decision at the time, it wasn’t a hill I was going to die on.

Fast forward 8 years to when I got my first AppSec job. I recall us having a security incident, and me being able to search through the logs and find the attack in about 30 minutes (the logs were HUGE, and I didn’t have log viewing software; that’s why it took so long). I learned PowerShell that day (well, the basics of PowerShell) and wrote a script to de-obfuscate the entries so I could see the exact attack commands. It took me quite a bit longer to figure out a perfect timeline, and where our AppSec program had broken down to allow this vulnerability into prod… But that said, I realized that logs are incredibly valuable for investigating a security incident; there’s just no other way you can find out exactly what happened without them!

Over the years I have learned that 1) working in incident response is absolutely fascinating and 2) I become far, far too stimulated to do incident response work on a regular (full-time) basis. I have a lot of respect for people who do that type of work full time.

– Tanya’s Thoughts on Incident Response
Photo by Charlotte Harrison on Unsplash

Around this time, I also learned that sometimes attackers will modify the logs, erasing their tracks as part of the attack. One of the ways that logs can be manipulated is via attacks against user input fields: the attacker bypasses the input validation, and that malicious input ends up written to the logs. This type of attack is referred to as “log injection”. Attacks against the integrity of our logs are the reason I always go on and on about why we need to protect our logs and back them up to a secure location. Ideally logs should be protected because they are *sensitive information*; they are literally evidence that could be used one day in court. We should treat them as the precious resource they are, a living record of all that has happened to our applications.
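Here is a minimal Python sketch of the problem and one common mitigation, under the assumption that your app writes user-supplied values into its logs. The function names are mine, not from any specific framework.

```python
import logging

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
log = logging.getLogger("auth")

def sanitize_for_log(value: str) -> str:
    # Neutralize carriage returns and newlines so an attacker cannot forge
    # what looks like extra log entries, and cap the length to avoid flooding.
    return value.replace("\r", "\\r").replace("\n", "\\n")[:200]

def record_failed_login(username: str, source_ip: str) -> None:
    # Risky: log.info("failed login for " + username) -- a username such as
    # "bob\n2023-01-01 admin login SUCCESS" would write a convincing fake line.
    log.info("failed login user=%s ip=%s",
             sanitize_for_log(username), sanitize_for_log(source_ip))

record_failed_login("bob\nFAKE ENTRY admin login SUCCESS", "203.0.113.7")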

When an API, or any other IT system, has logging and monitoring turned off, or has improperly configured or insufficiently protected logs, this vulnerability applies. It can apply to any IT system, but APIs are the focus of this blog series, and thus we shall concentrate on how to find, avoid, and solve this problem in APIs specifically.

Let’s talk specifics!

  1. Turn on monitoring. Give your monitoring system contact info for the correct people (it should not go to an unmonitored inbox, or phone number that no one answers). Someone needs to receive the alerts, otherwise why bother to pay for monitoring…
  2. You should log every activity that has to do with a security control, even failed ones. Logins, log outs, changes to privilege, account creation or deletion, password changes, authentication and authorization, input validation, system errors (especially if the global exception handler gets called), changing the contact info for the account, etc.
  3. Do not log sensitive info. Examples of sensitive info: complete credit card numbers, name + home address, name + date of birth, SIN/SSN, the text entries for failed password attempts (those are often typos that would allow you to guess the password), anything that could identify the person (PII) from the log data alone, personal health data + name or other identifying info, anything else that qualifies for your specific organization, system or customers.
  4. Do log: user ID, time stamp, what the user was trying to do, whether they succeeded or not, their IP address, and any other identifying information you can get about the user’s computer (a minimal logging sketch follows this list).
  5. Ideally your logs would be formatted so that your SIEM is able to consume them. It’s not very common for organizations to feed their custom app logs into the SIEM, but I hope this changes over time. It’s incredibly helpful.
  6. Your logs should not be stored on the web server (or whatever your app lives on). They should be stored in a different place, inside/behind the firewall/perimeter. That location should not have execution privileges (read only), and only the incident response team should have access to this system.
  7. Monitor where your logs are stored. If an inappropriate account attempts to access this file server, initiate the incident response process immediately. Part of this process should be stopping whoever is accessing it, but then also investigating whether these logs have previously been disturbed or altered in any other way. This might not be the first attempt to mess with your logs.
  8. Back up your logs! Store the backups in a geographically different place than your app server.
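To tie items 2 through 4 together, here is a minimal sketch (in Python, with made-up field names) of what a security event log entry could look like: who did what, when, from where, and whether it worked, with no passwords or other sensitive values included.

```python
import json
import logging
from datetime import datetime, timezone

security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)
security_log.addHandler(logging.StreamHandler())  # in real life: ship to a separate, protected log host

def log_security_event(user_id: str, action: str, success: bool, source_ip: str) -> None:
    """Log the security-relevant facts only.

    Note what is NOT here: no passwords (not even failed attempts),
    no credit card numbers, no other sensitive values.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,      # e.g. "login", "password_change", "role_change"
        "success": success,
        "source_ip": source_ip,
    }
    security_log.info(json.dumps(event))

log_security_event("u-12345", "login", False, "203.0.113.7")
```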

Download a PDF with more specifics for logging, error handling, and monitoring, here.

Advice straight from the OWASP API Security Top Ten Project Team:

  • Log all failed authentication attempts, denied access, and input validation errors.
  • Logs should be written using a format suited to be consumed by a log management solution and should include enough detail to identify the malicious actor.
  • Logs should be handled as sensitive data, and their integrity should be guaranteed at rest and in transit.
  • Configure a monitoring system to continuously monitor the infrastructure, network, and the API functioning.
  • Use a Security Information and Event Management (SIEM) system to aggregate and manage logs from all components of the API stack and hosts.
  • Configure custom dashboards and alerts, enabling suspicious activities to be detected and responded to earlier.

This concludes the We Hack Purple blog series on the OWASP API Security Top Ten! Thank you to the volunteers of that project for all of their hard work to create this list and share this information with the world. Hopefully soon they will release the next version, and we can write more posts about their amazing research!

API9:2019 Improper Assets Management

In the previous post we covered API8:2019 Injection, which was the 8th post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization. You can see the formal document from the OWASP API Security Top Ten Team, here.


Photo by Petrebels on Unsplash

Taking inventory is the first thing I do whenever I start or join an AppSec program. Figuring out all the applications and APIs that an organization has built, bought (COTS), or are using (SaaS), then doing a fast evaluation of the state they are in, is the best way to figure out where an organization is at regarding their security posture. Doing this helps me know just how much work we have to do and sets the stage for future conversations with management and the developer teams on how we can get them on track for securing all of their apps.

This vulnerability is the reason I start with inventory. I would argue that the majority of organizations around the planet do not have a current and accurate inventory of all of their web assets, including APIs. If they aren’t in the inventory, that means these assets aren’t being managed, which usually also means the security team does not have them on their radar. If the security team doesn’t know about an asset, how can they secure it? Not being in the inventory generally also means no testing, monitoring, logging, or documentation, at a minimum.

Taking inventory (regularly or continuously), and ensuring we properly decommission old versions of APIs when we release new versions, is the way we avoid this vulnerability. Then document all of it, or update documentation as you update your inventory. I realize this is easier said than done!

I recall a penetration tester telling me years ago that one of his tricks for finding vulnerabilities during an engagement was to try to call earlier versions of any APIs that were in scope. If there was a version 2.x, he would try to call version 1.x. He told me that at least once every year he would get a response of a phantom API. And that API was always a complete security disaster. He would earn his entire paycheck with that one Postman call.

PenTester Name Redacted

APIs and web applications that are not part of your inventory are generally also unmonitored, meaning no one is watching them to see if something goes wrong. They are often not behind a WAF, API gateway, or any other shielding that might protect them from common threats. If they are not a part of your inventory, there’s also a good chance that there’s no team in charge of maintenance, meaning no bug fixing is happening and technical debt is accruing. Lastly, it’s very unlikely that they are receiving regular security testing, or any type of security scanning, which can lead to all sorts of problems building up, invisibly.

Cheese melting in the hot sun. Image courtesy of https://drawception.com.

Software doesn’t age like wine, getting better over the years. Software ages like cheese in the hot sun; extremely badly! The longer we do not update, test, or patch our software, the more likely it is to have vulnerabilities found within it. Without proper care, software accrues technical debt, which can make it even more difficult to fix security vulnerabilities, because you have to update so many different components (framework, plugins, operating system patches, etc.) in order to fix the real problem at hand (the vulnerability).

The risks of having APIs (or web apps) that are not a part of your inventory and maintenance plans have no bounds. Any type of vulnerability could happen, as no one is watching or paying attention, except perhaps malicious actors. Attacks upon such resources could result in damage to the availability of the system, sensitive data exposure, changes to the data leading to poor integrity, and worse. This makes the risk of this vulnerability very high. On top of no one knowing that the API exists and is live in production, there’s very likely to be little or no documentation about this API.

This situation brings me back to when I was a dev, and the DBA told me I wasn’t allowed to kill an old database server (I wanted to repurpose it), because there were a whole bunch of scripts on there. She said she had no idea which scripts did what, but she had turned the server off once and “everything broke” (including payroll being missed for the entire company, yikes!). She said, “Do not touch, I don’t care why, buy a new server!” The DBA lady meant business, so I got a new server. That said… What if there had been documentation? This situation could easily happen to a company with unknown APIs running wild over their network…

Advice From the OWASP Project Team: ‘How to Prevent’

  • Inventory all API hosts and document important aspects of each one of them, focusing on the API environment (e.g., production, staging, test, development), who should have network access to the host (e.g., public, internal, partners) and the API version.
  • Inventory integrated services and document important aspects such as their role in the system, what data is exchanged (data flow), and its sensitivity.
  • Document all aspects of your API such as authentication, errors, redirects, rate limiting, cross-origin resource sharing (CORS) policy and endpoints, including their parameters, requests, and responses.
  • Generate documentation automatically by adopting open standards. Include the documentation build in your CI/CD pipeline.
  • Make API documentation available to those authorized to use the API.
  • Use external protection measures such as API security firewalls for all exposed versions of your APIs, not just for the current production version.
  • Avoid using production data with non-production API deployments. If this is unavoidable, these endpoints should get the same security treatment as the production ones.
  • When newer versions of APIs include security improvements, perform risk analysis to make the decision of the mitigation actions required for the older version: for example, whether it is possible to backport the improvements without breaking API compatibility or you need to take the older version out quickly and force all clients to move to the latest version.

In the next blog post we will be talking about API10:2019 Insufficient Logging & Monitoring.

API8:2019 Injection

In the previous post we covered API7:2019 Security Misconfiguration, which was the 7th post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.

Injection has been part of the original ‘OWASP Top Ten Risks to Web Apps’ list since the very beginning. Injection happens when an attacker is able to trick an application (or API) into executing malicious code. The attacker does this by adding code to a place in the application where data belongs; the app becomes confused and executes it.

Think of a search field at the top of any website. Imagine if instead of entering in your search term, you added a bunch of code. Then imagine the application executes the code you added. That code would execute with the full authority of that application, and all the same access, behind your firewall. It could result in damage to a database, a web service, your LDAP system, and potentially even worse, assuming there are other vulnerabilities the attacker can combine with this one.

Unfortunately, APIs are subject to this vulnerability, just like a regular web app. Having no front end does not protect us from injection.

Photo by Diana Polekhina on Unsplash

What types of injection exist?

If there’s code involved, someone will try to inject their own code. SQL, LDAP, or NoSQL queries, OS commands, XML parsers, and ORMs are all potentially problematic (list provided by the OWASP API Security project team). Even MongoDB databases, which don’t use the SQL language, are potentially vulnerable to NoSQL injection.

Special note on XSS: Cross Site Scripting (XSS) is also a form of code injection, but it has its own classification for the following reasons:

  • It occurs in the browser, as opposed to back on the server side like every other form of injection.
  • It only works with JavaScript (because that’s all browsers execute).
  • It is incredibly prevalent, so much so that OWASP felt it was necessary to give it its own category.
  • Has several defenses made just for this one vulnerability (cookie settings, and security headers).
  • Does not work on APIs, because they have no GUI front end, meaning no browser.

What do we do?

Hopefully by this point you agree that injection is dangerous and should be remediated as soon as possible if you find it in one of your apps. But how do we find it? How do we fix it? How can we ensure this never happens? Dear reader, secure coding is my favorite topic! Let’s go!

Finding Injection

First off you want to go through your APIs and figure out if you have injection. The most expensive way to do this would be to hire a PenTester to find and then test all the APIs. A cheaper and more sustainable way to do this would be to

  1. Buy a tool that can find all your APIs for you (no, I am not going to recommend one at this time; there are several on the market of various qualities and prices). This function is called “inventory” or “enumeration”.
  2. Run a SAST (static application security testing) tool on all of the APIs, ideally a next-gen one that has low false positives. Fix anything that says injection.
  3. Use a linter on your API, ensure you have completed your API definition file, as per the linter’s instructions. If you can find an API-specific linter, all the better.
  4. Run a DAST tool on the APIs that is made for APIs OR, use an old school DAST but first ensure you’ve linted your API perfectly, so it can hopefully do a good job. It will be easier and faster if you have an API-specific testing tool. Fix anything that says injection.

Fixing Injection

“That’s nice you told me to fix it. Exactly HOW do I do that?”

The first defense for injection is thorough input validation on any input to your app. This means data in the parameters, in a data field, in a hidden field, from an API you called, from the database, any input to your app needs to be validated that it is what you are expecting. What type is it? What size? What’s the content? Is it what we are expecting? If not, reject.

This functionality is best performed using an approved list, on the server side. By ‘approved list’, we mean using a list of stuff you know is good, rather than a list of what you know is bad. It’s easy for malicious actors to get around a block list, using encoding, obfuscation, and other tactics. But if you give a regular expression and say “if it’s not in here, I’m just not having it”, bad things cannot get in.

As an example, imagine you have a username. It likely accepts numbers and letters. You could use a regular expression (regex) like this to say what is okay: [a-zA-Z0-9]. That’s an approved list or ‘accept list’. If instead you try to block bad characters such as <, >, ‘, “ and more, you (and your app) are in for a world of hurt.

The next thing you want to do is ensure you perform this check on the server side. Do not do it on the client side, and by this, I mean in the browser/JavaScript. Anyone with a web proxy can get behind your JavaScript in about 5 seconds, unfortunately. If you want to check in your JavaScript for speed, you can do that, in addition to checking on the server.
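Here is a minimal sketch of server-side, allow-list input validation in Python. The length limits and pattern are examples only; adjust them to whatever your application actually expects.

```python
import re

# Allow-list pattern: only letters and digits, 3 to 30 characters.
# Note there are no commas inside the character class; [a-z,A-Z,0-9]
# would accidentally allow the "," character as well.
USERNAME_PATTERN = re.compile(r"^[a-zA-Z0-9]{3,30}$")

def validate_username(raw: str) -> str:
    """Server-side check: reject anything that is not on the approved list."""
    if not USERNAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid username")  # reject it, don't try to 'clean' it
    return raw

print(validate_username("tanya123"))            # accepted
# validate_username("bob'; DROP TABLE users;--") would raise ValueError
```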

Input validation is defense #1. Other defenses include:

  • Always use parameterized queries when making requests to any database (even non-SQL databases like MongoDB). This takes away the ability for the input to be interpreted as code (a minimal example follows this list).
  • Use output encoding when you put stuff onto the screen. Some frameworks do this for you by default. It takes away any superpowers of the characters before it puts them on the screen, making XSS impossible. Okay, maybe this is only for XSS and doesn’t apply to injection in general, but I would still do it if I were you.
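As a concrete example of the first bullet, here is a short parameterized query in Python using the standard library’s sqlite3 module; the table and column names are invented for the demo.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    cur = conn.cursor()
    # Risky: "SELECT ... WHERE username = '" + username + "'" lets the value
    # be interpreted as SQL. The placeholder below keeps it strictly as data.
    cur.execute("SELECT id, email FROM users WHERE username = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)")
conn.execute("INSERT INTO users (username, email) VALUES (?, ?)", ("tanya", "tanya@example.com"))

print(find_user(conn, "tanya"))        # (1, 'tanya@example.com')
print(find_user(conn, "' OR '1'='1"))  # None: the payload is treated as a weird username
```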

Preventing Injection

If we want to prevent injection (and a myriad of other vulnerabilities), follow this advice:

  • Have your development team take a secure coding course. It can be free or paid, formal or informal, live or recorded, interactive or lecture, the only important part is that they learn. Do the type of training that works best for you and your team.
  • Follow a secure system development life cycle (S-SDLC). Add security steps to each part of your SDLC, such as security requirements, code review, or threat modelling.
  • Ensure your application has thorough testing, which can mean any or several of the following: static analysis, dynamic analysis, manual code review, penetration testing, stress testing, performance testing, unit testing or any other testing you can think of!
  • Whenever possible, use modern and up-to-date frameworks that have security features built in. JavaScript frameworks like Angular and React have so many cool features that help protect your users! They aren’t just nifty dev tools, they can help you build stronger, tougher apps.
  • Never stop learning. Keep reading, studying, learning and hacking.

How To Prevent: OWASP API Security Top Ten Team Advice!

Preventing injection requires keeping data separate from commands and queries.

  • Perform data validation using a single, trustworthy, and actively maintained library.
  • Validate, filter, and sanitize all client-provided data, or other data coming from integrated systems.
  • Special characters should be escaped using the specific syntax for the target interpreter.
  • Prefer a safe API that provides a parameterized interface.
  • Always limit the number of returned records to prevent mass disclosure in case of injection.
  • Validate incoming data using sufficient filters to only allow valid values for each input parameter.
  • Define data types and strict patterns for all string parameters.

In the next blog post we will be talking about API9:2019 Improper Assets Management.

API7:2019 Security Misconfiguration

In the previous post we covered API6:2019 Mass Assignment, which was the 6th post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.

Security misconfiguration has been on the original OWASP Top Ten list (critical web app risks) for many, many years. It basically means lack of hardening, poor implementation, poor maintenance, mistakes, missing patches, and human error. There’s no difference between web apps and APIs for this; if the server and/or network has not been properly secured, your API may be in danger.

What can happen?

Someone has the giggles.

Because this category of vulnerability is so vague, the risk is anywhere from low to critical, depending upon what you misconfigured and how you misconfigured it. It could result in a complete system compromise, damage to the confidentiality, availability, and/or integrity of your system, and a plethora of other issues. It could also result in as little as embarrassing error messages for the attacker, but no actual impact. That said, this vulnerability should not be taken lightly; it’s on this list for a reason.

How do we avoid such a fate?

Prepare for me to sound like a broken record:

  • Follow a secure system development life cycle that includes extensive testing of not only the application layer, but also the network and infrastructure layers.
  • Follow the hardening guides for all infrastructure, middleware, COTS, and SaaS products
  • Scan (apps, network, infrastructure) continuously
  • Create and follow a fast and effective patching process
  • Monitor and log all apps, APIs and any other endpoints you have, for potential danger and/or attacks
  • Ensure access for configuring all of these systems is locked down, using the principle of least privilege
  • Have an up-to-date and effective incident response (IR) process, and a well-trained IR team

I realize that this blog post is probably not only a bit underwhelming, but you may feel that I have greatly simplified how to avoid this problem. If you feel this way… You’re right. Creating and implementing an effective patch management process in an enterprise is HARD. Continuous scanning is HARD. Getting people to fix misconfigurations (or any vulnerability) that you’ve found is REALLY HARD. None of the things on the list above are easy. Let’s see what the Project Team suggests.

How To Prevent


The API life cycle should include:

  • A repeatable hardening process leading to fast and easy deployment of a properly locked down environment.
  • A task to review and update configurations across the entire API stack. The review should include: orchestration files, API components, and cloud services (e.g., S3 bucket permissions).
  • A secure communication channel for all API interactions and access to static assets (e.g., images).
  • An automated process to continuously assess the effectiveness of the configuration and settings in all environments.


Furthermore: (From the project team)

  • To prevent exception traces and other valuable information from being sent back to attackers, if applicable, define and enforce all API response payload schemas including error responses.
  • Ensure the API can only be accessed with the specified HTTP verbs. All other HTTP verbs should be disabled (e.g., HEAD). (A minimal sketch follows this list.)
  • APIs expecting to be accessed from browser-based clients (e.g., WebApp front-end) should implement a proper Cross-Origin Resource Sharing (CORS) policy.
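As a small illustration of the second and third bullets, here is a minimal sketch using Python and Flask (my choice of framework for the example, not something the project team prescribes): only the listed HTTP verbs are allowed on the route, and the CORS policy names one specific origin instead of a wildcard.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Only the verbs listed here are accepted; Flask answers anything else on this
# route (PUT, DELETE, TRACE, ...) with 405 Method Not Allowed.
@app.route("/api/items", methods=["GET", "POST"])
def items():
    return jsonify([])

# A deliberately narrow CORS policy: one expected front end, not "*".
@app.after_request
def add_cors_headers(response):
    response.headers["Access-Control-Allow-Origin"] = "https://app.example.com"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST"
    return response

if __name__ == "__main__":
    app.run()
```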

OWASP References (The best kind of references!)

In the next blog post we will be talking about API8:2019 Injection.

API6:2019 Mass Assignment

In the previous post we covered API5:2019 Broken Function Level Authorization, which was the 5th post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.

Tanya proudly displaying her Sec4Dev T-Shirt

This vulnerability is quite poorly named; it is not at all intuitive. When you first hear it, you might think “uh-oh, the API is called in a way where it mass assigns/creates/deletes records, but it should have only done it to one record”. That is not at all what this vulnerability is about, so please set that idea aside.

Mass assignment refers to linking the incorrect fields within a record in a data model to the parameters in an API, such that when the API is called a user is able to update fields they should not be allowed to. This only applies to one single record. I’m not sure where the word ‘mass’ comes into play here, but ‘incorrect assignment’ might make a bit more sense if you want to think of it that way.

This situation can happen when we bind data that was provided by the user (and is therefore not trustworthy) to data models, without checking if those fields should be accessible to the user. Should the user be allowed to specify ‘role=admin’? Probably not. But if you bind the entire record (first name, last name, username, password, role), rather than just the properties the user is allowed to update, this can happen.

How does this happen?

Malicious actors can send a GET request to an API and see if it returns more parameters than just the ones it accepts on the PUT, UPDATE, DELETE or POST request. If the developer sends everything (the entire record) back to the front end, someone proxying the web app can easily see “Hey, look at all the fields that are here! This is way more than I asked for! What a gold mine!”. Obviously, we do not want this.

Let’s Avoid This, Shall We?

You should never be updating any data or making decisions in your system with values provided by the user until after you have validated that the data is trustworthy/what you are expecting. Validating your inputs is OWASP 101; we all already know this!

You do not need to call default ‘get’ or ‘set’ functions directly from the API. You can write your own functions to be called instead, or have code in between the original call and the data, that only passes along the parameters you need, after you have validated the input you got from the user.
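Here is a minimal Python sketch of that idea: an explicit allow-list of updatable fields, so whatever else the client sends never reaches the data model. The field names are made up for the example.

```python
# Fields a regular user may change about their own account.
# Note what is NOT in this set: "role", "is_admin", "account_balance", etc.
USER_UPDATABLE_FIELDS = {"first_name", "last_name", "display_name", "email"}

def apply_profile_update(user: dict, client_payload: dict) -> dict:
    """Copy only approved fields from the client's payload onto the record.

    Anything else the client sent (e.g. "role": "admin") is ignored instead
    of being bound straight onto the data model.
    """
    for field in USER_UPDATABLE_FIELDS & client_payload.keys():
        user[field] = client_payload[field]
    return user

user = {"id": 42, "email": "old@example.com", "role": "user"}
payload = {"email": "new@example.com", "role": "admin"}  # attempted privilege escalation
print(apply_profile_update(user, payload))  # email changes, role stays "user"
```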
 

Let’s hear more advice from the OWASP Project Team!

  • If possible, avoid using functions that automatically bind a client’s input into code variables or internal objects.
  • Create an approved list of only the properties that should be updated by the client, then add checks against that list when performing data updates.
  • Use built-in features to block properties that should not be accessed by clients.
  • If applicable, explicitly define and enforce schemas for the input data payloads.

External Reference from the OWASP API Top Ten Project Team

CWE-915: Improperly Controlled Modification of Dynamically-Determined Object Attributes

In the next blog post we will be talking about API7:2019 Security Misconfiguration!

API5:2019 Broken Function Level Authorization

In the previous post we covered API4:2019 Lack of Resources & Rate Limiting, which was the fourth post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization. You can read the official OWASP listing for this article here.

Image of Tanya commemorating the infamous “Ottawa Sinkhole Van”

_________________________________

Different users within an application are often given different roles within that system. Roles can include administrator, auditor, approver, editor, etc. Often those roles can be organized in a hierarchy, referred to as levels, with the idea of one role being a level above or below another. For instance, the administrative user is usually at the top of the hierarchy, then a regular user and then at the bottom perhaps a guest user of the system.

Each different system role has different areas of the application’s functionality available to them, such as the ability to create or delete users, look through someone’s financial records, approve a new blog post, or edit a document that belongs to a team member. Each one of those features within an application is often called a function. When the application grants or denies access to features within itself, this is called ‘function level authorization’. It is (or is not) authorizing that user to access a specific function.

When we hear the term escalation or elevation of privilege, we mean that a user has moved up one or more levels in the hierarchy of the system. If this happens in a system, we generally consider this to be a serious (critical) vulnerability, that we would work hard to mitigate.

When function level authorization is broken within an API, as is the case with item #5 on this top ten list, users are able to successfully call functions that their user role should not have access to. Imagine someone who is an unauthenticated user on your system using an API to call a function that only administrators should have access to; they could cause all sorts of damage, such as deleting users, changing other users’ privileges (perhaps locking out the real administrators), or worse. Depending upon what the API is used for, they could negatively affect the confidentiality, availability, and integrity of the system itself and its data. It could even negatively affect other systems that it interacts with. This is dangerous stuff.

What can we do about it?

How do we prevent this you might ask? Let’s go over a mix of our recommendations and those from the OWASP API Top Ten Project Team.

Tanya’s Advice

  • You must test every single HTTP method, for every function, for every user. I know, this doesn’t sound that exciting. But guess what? This is one of the best ways you can ensure you avoid this type of problem. Make a grid with user roles on one axis, and functions they are/are not allowed to access on the other. For each entry in your grid, go through all of the HTTP methods that are enabled on the server. Maybe get a nice beverage or snack, then put on some music, so this task can be at least somewhat enjoyable.
  • Every single function should check in with the authorization mechanism for your app before it does anything, to verify you are still you (authentication) and that you are allowed to use that function (authorization). This is the same as every page within a web app/GUI front end. Make it a habit.
  • Use one authentication and authorization system for all your APIs, if possible. If you use multiple authorization controls, especially within the same API, it’s very easy to make a mistake. Simplify your architecture whenever possible; you will be a much happier developer.
  • Deny access to everything, for every role, by default. Specifically grant permission only to roles that require such access. Apply least privilege (all the time, whenever possible, not just in this situation!). A minimal sketch of deny-by-default checks follows this list.
  • If possible, buy a system that will perform authorization for you, and implement it carefully, following the advice from its makers. Whenever possible, we should follow the order of buy -> borrow -> build. Buy a system that is tried, tested, and trusted. If you can’t buy, then borrow: use open source or a third-party component that implements this functionality. As a last resort, write this complex functionality yourself, with a plan for extensive testing and long-term maintenance, as this is a system that many other systems will depend upon. If you don’t think you can maintain something like this in the long term, then you should be buying or borrowing, not building.
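Here is a minimal, framework-free Python sketch of deny-by-default function level authorization: every protected function declares which roles may call it, and everything else is refused. In a real system this check would sit in (or call out to) your central authorization mechanism; the role names and user dictionaries are invented for the example.

```python
from functools import wraps

class Forbidden(Exception):
    pass

def require_role(*allowed_roles):
    """Deny by default: the function runs only if the caller's role was explicitly granted."""
    def decorator(func):
        @wraps(func)
        def wrapper(current_user, *args, **kwargs):
            if current_user.get("role") not in allowed_roles:
                raise Forbidden(f"{current_user.get('id')} may not call {func.__name__}")
            return func(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(current_user, target_user_id):
    return f"user {target_user_id} deleted"

admin = {"id": "u-1", "role": "admin"}
guest = {"id": "u-2", "role": "guest"}
print(delete_user(admin, "u-9"))  # allowed
try:
    delete_user(guest, "u-9")     # refused
except Forbidden as err:
    print("blocked:", err)
```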

The OWASP Project Team’s advice:

  • The enforcement mechanism(s) should deny all access by default, requiring explicit grants to specific roles for access to every function.
  • Review your API endpoints against function level authorization flaws, while keeping in mind the business logic of the application and groups hierarchy.
  • Make sure that all of your administrative controllers inherit from an administrative abstract controller that implements authorization checks based on the user’s group/role.
  • Make sure that administrative functions inside a regular controller implement authorization checks based on the user’s group and role.

Helpful Links from OWASP!

In the next blog post we will be talking about API6:2019 Mass Assignment!

API4:2019 Lack of Resources & Rate Limiting

In the previous post we covered API3:2019 Excessive Data Exposure, which was the third post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.

You can read the official document from the OWASP Project team here.

Who here has perfectly secure APIs? What? No hands? – OWASP Global AppSec, Ireland, Feb 2023

Before diving into this one, I want to briefly discuss bots online.

An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks over the Internet, usually with the intent to imitate human activity on the Internet, such as messaging, on a large scale. – Wikipedia

While bots can be great (I used to have an automated message for all new Twitter followers, to greet them and make them feel welcome), they can also be quite bad (slowly eating away at our defenses, with automated requests).

Bots are one of the quiet enemies of APIs. They hike up our cloud bills, they make our APIs seem unresponsive or slow, and they can be used to brute force an API that is not properly protected. Because APIs can do so many things, it is possible for them to eat up all sorts of resources on your network, such as CPU, storage, and memory on the host of the API, or on whatever the API is calling.

There are all sorts of ways that APIs can have their limits tested, including: uploading very large files or amounts of data, making several requests at once, requesting huge amounts of data (above what the system or supporting infrastructure can handle), etc.

Setting Boundaries

The OWASP API Security top ten team recommends setting limits on the following settings:

  • Execution timeouts
  • Max allocable memory
  • Number of file descriptors
  • Number of processes
  • Request payload size (e.g., uploads)
  • Number of requests per client/resource (this is also called resource quotas)
  • Number of records per page to return in a single request response

So how do we avoid this happening to our APIs?

  • We can set throttling limits, to slow down requests that all come from the same source.
  • We can add resource quotas: limits on how many requests someone can make before they have to wait a period of time to start making requests again (a rough sketch follows this list).
  • Docker containers have several options built in for adding the limits described earlier in this article, as suggested by the team that maintains this great OWASP project.
  • Send messages back to whoever is calling the API ‘too much’, informing them they’ve reached the limit, and that now they must wait.
  • Design your API to ensure it takes into account if requests are “too big”. This is something threat modelling could help with, but ideally you would start with looking at each function and thinking about this as a problem your API will face at some point. Design with this in mind.
  • Also design into your API the maximum amount of data it can accept and the maximum it can return to the caller. This might mean breaking a large request into multiple responses or blocking it altogether. This is something you should talk to your team about, ideally during the requirements or design phase(s) of your project.
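Here is a rough sketch of the throttling/quota idea: a fixed-window counter per client, written in plain Python. Real deployments usually do this at the API gateway or with a purpose-built library, and the numbers below are arbitrary examples.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100  # the resource quota for one client

# client_id -> [request_count, window_start_time]
_request_counts = defaultdict(lambda: [0, 0.0])

def allow_request(client_id: str) -> bool:
    """Fixed-window counter: crude, but shows throttling and quotas in one place."""
    now = time.time()
    count, window_start = _request_counts[client_id]
    if now - window_start >= WINDOW_SECONDS:
        _request_counts[client_id] = [1, now]  # start a fresh window
        return True
    if count < MAX_REQUESTS_PER_WINDOW:
        _request_counts[client_id][0] += 1
        return True
    return False  # the caller should receive HTTP 429 and a "try again later" message

print(allow_request("client-abc"))  # True until the quota for this window is used up
```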

They provide several very helpful resources, which you can find here:

OWASP Resources!

Even more resources!

In the next blog post we will be talking about API5:2019 Broken Function Level Authorization!

API3:2019 Excessive Data Exposure

In the previous post we covered API2:2019 Broken User Authentication, which was the second post in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.

#RSAC 2022

Excessive data exposure is something that web applications can face, not just APIs. That said, because web-based APIs are basically services on the web, they can be abused even more easily to exfiltrate sensitive data than a regular web app. It’s easy for an attacker to find APIs (just connect to any web app or mobile app using a web proxy and see the API calls for yourself!), to call them, and then look at the responses to see if anything being sent looks potentially sensitive. For instance, if the data in a field passed back is named “password”, “sin” or “secret”, you’re most likely onto something.

Using a web proxy to watch the API calls go back and forth is sometimes called “sniffing”, but no matter what you call it, it’s easy to do! Anyone with the tiniest amount of web-app hacking training can do this on day one. This means this threat is prevalent (happens all the time) and very dangerous (because unsophisticated attackers can easily execute it).

Some APIs are *supposed* to return sensitive data. This vulnerability is when sensitive data is exposed to someone it should not be (for instance, someone who is not a valid user, seeing another user’s sensitive data, or for whom that specific data should not be shown due to their role within the system). Since whether data is sensitive in nature is not obvious to automated testing tools, it can be a bit more difficult to identify than other types of vulnerabilities.

* Note: occasionally the vulnerability rears its head via poorly-generated and/or overly-populated responses. For instance, the API delivers the entire table worth of data, which includes sensitive information, but then the client-side front-end sifts through it and only reveals the non-sensitive/appropriate data to the end user. Unfortunately, if the API call is not encrypted in transit, this means a malicious actor could see all of the data if they were sniffing the API at the time.

How do we avoid this?

Let’s look at some great advice from the project team (I may have added a bit onto their list):

  • Never rely on the client side to filter sensitive data. By this we mean, only return the data you need to return! Don’t send a ton of stuff you do not need to, then let the GUI/front end decide what to show the user. Make these important decisions on the server.
  • Classify and then label all your data. If you know immediately when you look at something that it is sensitive, it’s automatic to treat it in a certain way. Educate your developers and other areas of IT on how to classify data, and to ask the security team if they aren’t sure.
  • In the design of your API, add user stories and/or threat models around this potential vulnerability. Make protecting sensitive data part of your design.
  • Review the responses from the API to make sure they contain only legitimate data, data that the specific user (or users with that role inside your system) are allowed to access.
  • Back-end engineers should always ask themselves “who is the consumer of the data?” before exposing a new API endpoint. Or better yet, perform threat modelling on your data flows, THEN design.
  • Avoid using generic methods such as to_json() and to_string(). Instead, cherry-pick the specific properties you really want to return (a short sketch follows this list). You do not need to return everything. In fact, it’s better for your cloud bills to return only what you need, even if it requires a bit more programming.
  • Classify sensitive and personally identifiable information (PII) that your application stores and works with, reviewing all API calls returning such information to see if these responses pose a security issue.
  • Implement a schema-based response validation mechanism as an extra layer of security. As part of this mechanism define and enforce data returned by all API methods, including errors.
  • Perform strict linting on your API definition file, to ensure you have input validation built-in, by default, for every variable.
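Here is a short Python sketch of ‘cherry-picking’ the fields you return instead of serializing the whole record, with a different view per audience. The record layout and field names are invented for the example.

```python
# Instead of a generic to_json()-style dump of the whole record, build each
# response from an explicit list of fields for that audience.

def public_profile_view(user_record: dict) -> dict:
    """Fields anyone may see."""
    return {"id": user_record["id"], "display_name": user_record["display_name"]}

def account_owner_view(user_record: dict) -> dict:
    """Fields only the account owner may see about themselves."""
    return {
        "id": user_record["id"],
        "display_name": user_record["display_name"],
        "email": user_record["email"],
    }

record = {
    "id": 7,
    "display_name": "Tanya",
    "email": "tanya@example.com",
    "password_hash": "never leaves the server",
    "sin": "never leaves the server",
}
print(public_profile_view(record))  # no email, no password hash, no SIN
```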

In the next blog post we will be talking about API4:2019 Lack of Resources & Rate Limiting!

The OWASP API Security Top Ten: API1:2019 Broken Object Level Authorization

The following blog series will be on the OWASP API Security Top Ten, 2019 version. The new version is coming out shortly, and we will add even more to this series when the time comes.

What is “The OWASP Top Ten”?

OWASP is an international non-profit, with a huge community, over 100 active open source projects, and over 300 in-person chapters around the world. The thing they are most well known for is “The OWASP Top Ten”, an awareness document created to teach the world about the most dangerous risks to web applications. Over the years, various project teams have created more “Top Ten” documents for Serverless, IoT, and APIs!

You can visit the project page here, and see the PDF from the project team here. Thank you very much to the project team who worked so hard on this, especially the leaders: Erez Yalon, Inon Shkedy and Paulo Silva. We Hack Purple applaud your efforts to help make the internet a safer place!

Now down to business! Let’s dive into #1 on this infamous list.

"Shifting Left is not enough." Tanya Janca presenting at OWASP Global AppSec, Dublin, Ireland, Feb 2023
“Shifting Left is not enough.” Tanya Janca, presenting at OWASP Global AppSec, Dublin, Ireland, Feb 2023

API1:2019 Broken Object Level Authorization

Back in the day, PenTesters used to be able to “minus 1” from any user ID located in the URL parameters, and they could quite often see the previous user’s account data. The URL parameters would have “userid=622”, the tester switches it to “userid=621”, and suddenly they were reporting a vulnerability. Fast forward to today, and although most web apps don’t fall for this trick anymore, unfortunately APIs are often in a state where changing the parameters a bit can fetch all sorts of data that the caller should not be able to access. It’s very easy, using an intercepting web proxy, to sniff the API calls, see the parameters, and change the value of one of them. This vulnerability is the most-often exploited and most-damaging of everything listed in the API Security Top Ten document, which is why it’s #1.

But how bad is it really?

It’s pretty bad! It can result in sensitive data exposure (confidentiality broken), changed data (integrity ruined), deleted data (no availability) and even complete account takeover. No one wants this for their users.

How does this happen?

When the IT world moved from monolithic applications into the world of microservice architecture, a few things were lost in translation. Monolithic applications have been performing session management for a very long time, meaning they know what state they are in, and thus they can see when an attacker tries to access things they should not. Session management (keeping track of which user is logged in, and what access they are allowed) isn’t native to RESTful APIs. REST APIs are *supposed* to be stateless. And that creates a problem for us, since we need to keep track of this in order to avoid this vulnerability.

Every time any user asks for access to anything, we should validate that they are 1) still the user we think they are (we validate the session) and 2) that they are allowed access or have permission to see what they want to see or do what they want to do. Every single time we must check. That one time we forget, is when PenTesters and malicious actors alike do a happy dance, because we have left open a hole in our armour.

How do we avoid this?

As you might have guessed, the OWASP API Security Top Ten Project Team has some thoughts on the matter! We’ve added some thoughts to each item below, please see the original document for a more-succinct description.

  • Implement a proper authorization mechanism that relies on the user policies and hierarchy. There are tools/products that you can buy that perform these functions for you. This is one of those programming things that is difficult to get right, and most security folks recommend you buy a well-trusted solution over attempting to build your own, due to complexity and cost.
  • Use an authorization mechanism to check if the logged-in user has access to perform the requested action on the record in every function that uses an input from the client to access a record in the database. Check every single time (see the sketch after this list).
  • Use random and unpredictable values as GUIDs for records’ IDs. Do not use incremental, guessable numbers. If a GUID or other record ID is sent to your system that you never issued, trigger an alert and block that IP address immediately, because your API is under attack.
  • Write tests to evaluate the authorization mechanism. Do not deploy vulnerable changes that fail the tests. Ensure the entire project team understands that failing any of these tests blocks all releases to production until they are fixed. You must pass these tests to get to prod.
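Here is a minimal Python sketch of an object level authorization check, combining two of the points above: record IDs are random UUIDs rather than guessable integers, and ownership is verified on every single access. The data structures are stand-ins for whatever database you actually use.

```python
import uuid

class Forbidden(Exception):
    pass

# Record IDs are random UUIDs, not sequential numbers an attacker can "minus 1".
invoices = {
    str(uuid.uuid4()): {"owner_id": "user-a", "amount": 120},
    str(uuid.uuid4()): {"owner_id": "user-b", "amount": 75},
}

def get_invoice(current_user_id: str, invoice_id: str) -> dict:
    invoice = invoices.get(invoice_id)
    if invoice is None:
        # An ID we never issued is suspicious: alert, and consider blocking the caller.
        raise KeyError("no such invoice")
    # The object level check, performed on EVERY access, not just at login:
    if invoice["owner_id"] != current_user_id:
        raise Forbidden("this record does not belong to you")
    return invoice

some_id = next(iter(invoices))
print(get_invoice(invoices[some_id]["owner_id"], some_id))  # the owner: allowed
```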

Helpful links from the OWASP API Security Top Ten Project Team!

In the next article we are going to discuss API2:2019 Broken User Authentication!

From the OWASP API Security Top Ten!

What’s the difference between Product Security and Application Security?

Recently I have started seeing new job titles in the information security industry, and the one that stuck out the most to me is product security engineer. I started seeing people who were previously called application security engineers having their titles changed to product security, and I was curious. Some of you may remember that I had Ariel Shin on the We Hack Purple podcast; although she does product security and I did ask her a few questions about it, I wasn’t satisfied. I wanted to learn more!

Photo by Daniel Korpai on Unsplash

I also started a Twitter thread, which you can read here.

From what I understand, after speaking to many people about this, product security means a person who is dedicated solely to the security of one or more products. This means that if the product has hardware and software, they must understand how to secure both hardware and software. They also need to be extremely well versed in the threats that it faces, the personalities of the users, and anything else that might affect the reliability, confidentiality, and integrity of that system.

An application security professional is concerned with securing the software of the entire organization. If they happen to only have one product, and the product is software, they could be called an application security professional or a product security professional. However, most of the time an application security engineer is expected to do projects with a broader scope, trying to secure several/all applications, trying to ensure that every project follows the secure system development life cycle, and all the other things you’ve heard me drone on about in this blog, in my talks, in my book, etc.

Whereas a product security professional dives extremely deep into one or more products. For example, imagine a company that does e-commerce. It has one gigantic site, where merchants and purchasers both use the site in different ways, but it’s one big system. It may contain APIs, a beautiful GUI front end, one or more databases, a serverless app, and maybe even an integration with Stripe to run the credit cards for them. This could be called one big product, and if a product security person was assigned to it, they would be expected to understand how the entire system works, and how to keep the system, its data, and all of its users safe.

From Adrian Sanabria we have this definition, which I also agree with:

Looking at it from a business/organizational perspective: AppSec is a sub-branch of infosec. Product security is a sub-branch of product.

Adrian Sanabria

Although you may not have heard of a product security professional who reports directly to the product group only (they often report to the information security team, but are embedded in the product team), this also makes a lot of sense. Embedding the product security person in with the product team helps ensure from the very first meeting that the product is secure. This is a huge #SecurityWin!

Continuing down this line of thought, this would mean that the product security person would also be responsible not only for the software, but the infrastructure it’s hosted on, the entire supply chain that leads up to the building of that software, hardware, deployment, etc. Way more than just the software component.

Product security includes the security features of products.   

Ray LeBlanc, of the Hella Secure Blog

Product security being responsible for the product itself having security features for the end users is also an interesting idea, which I had not thought of before Ray pointed it out. I like this as well.

Facts about Product Security

  • ProdSec professionals are embedded in the product team
  • ProdSec pros need to know:
    ⁃ Architecture and design
    ⁃ Threat modelling
    ⁃ Secure coding principles
    ⁃ Be able to use the basic AppSec toolset: DAST, SAST, and SCA
    ⁃ How and when to hire a pentester
    ⁃ All the steps of the secure SDLC, and how to do them or ensure they get done (even if they hire out)
    ⁃ Any policies you have that apply to your product
    ⁃ Understanding the product inside and out

To echo/add: Product Security (aka Platform Security) could involve more complex external IAM functions, secrets and cryptographic infrastructure, very closely interlinked and overlapping depending on the org.

@vect0rx

Another resource that may interest you, a podcast with Anshuman Bhartiya on this topic: https://tromzo.com/podcasts/anshuman-bhartiya-product-security. He was also previously on the We Hack Purple Podcast, where we spoke about SAST.

I hope that clarifying the difference between #ProdSec and #AppSec has been helpful. Do you agree? Do you disagree? We’d love to hear from you in the comments below!