API4:2019 Lack of Resources & Rate Limiting
In the previous post we covered API3:2019 Excessive Data Exposure, the third entry in this series. If you want to start from the beginning, go to the first post, API1:2019 Broken Object Level Authorization.
You can read the official document from the OWASP Project team here.
Before diving into this one, I want to briefly discuss bots online.
An Internet bot, web robot, robot or simply bot, is a software application that runs automated tasks over the Internet, usually with the intent to imitate human activity on the Internet, such as messaging, on a large scale. – Wikipedia
While bots can be great (I used to have an automated message for all new Twitter followers, to greet them and make them feel welcome), they can also be quite bad (slowly eating away at our defenses with automated requests).
Bots are one of the quiet enemies of APIs. They drive up our cloud bills, they make our APIs seem unresponsive or slow, and they can be used to brute-force an API that is not properly protected. Because APIs can do so many things, they can eat up all sorts of resources on your network, such as CPU, storage and memory on the host of the API, or on whatever the API is calling.
There are all sorts of ways an API's limits can be tested, including uploading very large files or amounts of data, making many requests at once, and requesting huge amounts of data (more than the system or supporting infrastructure can handle).
The OWASP API Security Top Ten team recommends setting limits on the following:
- Execution timeouts
- Max allocable memory
- Number of file descriptors
- Number of processes
- Request payload size (e.g., uploads)
- Number of requests per client/resource (also known as resource quotas)
- Number of records per page to return in a single request response
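As a rough illustration, the last two limits above (request payload size and records per page) can be enforced in plain Python before a request ever reaches your business logic. This is a minimal sketch; the constants and function names are my own, not from the OWASP document:

```python
# Hypothetical limits -- tune these to your own infrastructure.
MAX_BODY_BYTES = 1 * 1024 * 1024   # 1 MiB request payload cap
MAX_PAGE_SIZE = 100                # maximum records returned per page

def check_payload_size(body: bytes) -> None:
    """Reject oversized request bodies before any parsing happens."""
    if len(body) > MAX_BODY_BYTES:
        # In a real API this would map to HTTP 413 Payload Too Large.
        raise ValueError("payload too large")

def clamp_page_size(requested: int) -> int:
    """Never return more records than the configured maximum."""
    if requested < 1:
        return 1
    return min(requested, MAX_PAGE_SIZE)
```

Many frameworks expose built-in settings for this (for example, Flask's `MAX_CONTENT_LENGTH` configuration value caps the request body), so check what your framework already offers before rolling your own.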
So how do we avoid this happening to our APIs?
- We can set throttling limits to slow down requests that all come from the same source.
- We can add resource quotas: limits on how many requests a client can make within a given time period; once the quota is used up, they have to wait before making requests again.
- Docker has several options built in for applying the limits described earlier in this article, as suggested by the team that maintains this great OWASP project.
- Send messages back to whoever is calling the API 'too much', informing them they've reached the limit and must now wait (HTTP status code 429, Too Many Requests, exists for exactly this purpose).
- Design your API to take into account requests that are "too big". Threat modelling can help with this, but ideally you would look at each function and think about this as a problem your API will face at some point. Design with this in mind.
- Also design maximum limits on the amount of data your API will accept and on the amount it will return to the caller. This might mean breaking a large response into multiple pages, or blocking the request altogether. This is something you should discuss with your team, ideally during the requirements or design phase(s) of your project.
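Putting the first few points together: throttling is commonly implemented as a token bucket, where each client has a budget of tokens that refills at a fixed rate, and requests arriving with an empty bucket are rejected with an HTTP 429 and told to wait. The following is a minimal single-process sketch under that assumption; the class and parameter names are my own, not from the OWASP document:

```python
import time

class TokenBucket:
    """Per-client throttle: `rate` tokens are refilled per second,
    up to a maximum burst of `capacity` tokens."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; False means 'send HTTP 429'."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client identifier (API key, IP address, ...).
buckets: dict[str, TokenBucket] = {}

def handle_request(client_id: str) -> tuple[int, str]:
    # Hypothetical numbers: allow bursts of 5, refilling 1 token/second.
    bucket = buckets.setdefault(client_id, TokenBucket(rate=1.0, capacity=5))
    if bucket.allow():
        return 200, "OK"
    return 429, "Too Many Requests - please wait before retrying"
```

In production you would normally keep these counters in shared storage (such as Redis) so that all instances of your API see the same quota, or simply let an API gateway or reverse proxy enforce the limit for you.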
They provide several very helpful resources, which you can find here:
- Blocking Brute Force Attacks
- Docker Cheat Sheet – Limit resources (memory, CPU, file descriptors, processes, restarts)
- REST Assessment Cheat Sheet
Even more resources!
- CWE-307: Improper Restriction of Excessive Authentication Attempts
- CWE-770: Allocation of Resources Without Limits or Throttling
- “Rate Limiting (Throttling)” – Security Strategies for Microservices-based Application Systems, NIST
In the next blog post we will be talking about API5:2019 Broken Function Level Authorization!