Now that encryption is fast and free, and we know the risks of not using it, there is literally no excuse not to use HTTPS only for every application on the Internet. Every single application, even static pages that contain no sensitive information. For everyone (there is no class of user that does not need protection on the internet). Always (there is no time limit, and you can auto-renew your certificates; you barely even need to think about it).
Every public website and web application (including APIs) should force the use of HTTPS (and disallow or redirect connections using HTTP). This can be done using security headers in your code or forced on the server.
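As a rough sketch, forcing HTTPS from within the application could look like the following in Python (Flask); this is purely illustrative, and most web servers and frameworks have an equivalent built-in redirect setting:

from flask import Flask, request, redirect

app = Flask(__name__)

@app.before_request
def force_https():
    # Redirect any plain-HTTP request to the HTTPS version of the same URL
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # Tell browsers to use HTTPS only, for this domain and its subdomains
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response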
There is no reasonable excuse for not using HTTPS only for public-facing applications. Feel free to argue with me in the comments. 😀
Up next we will summarize “Part 5: secure coding” of this series.
Note: much of this comes from the OWASP Cheat Sheet on Access Control, by Shruti Kulkarni, Adinath Raveendra Raj, Mennouchi Islam Azeddine and Jim Manico. And if not, it may come from one of the other offerings from the amazing OWASP Cheat Sheets Project. For more information on almost any AppSec topic, check out the project, it’s definitely worth your time!
Authentication is ensuring that the user who is using the application is the actual person they purport to be. For instance, when I log into my webmail, it verifies that I am the one-and-only Tanya Janca that owns this account. Not a different person who is also named “Tanya Janca”, and not someone pretending to be me. The real, authentic, me; the person who owns the account.
Identity (digitally speaking) is a hardware or software-based solution for proof of identity of citizens, users or organizations. For example, in order to access benefits or services provided by government authorities, banks or other companies in person, you must verify your identity, usually with a driver’s license, passport or another physical document. However, if you are verifying your identity digitally (electronically), you must use a software or hardware based solution to prove your identity.
Access Control is allowing (or not) users to access systems, features, data, etc. based on the permissions that the system has assigned to that user. For instance, perhaps you have access to the main parts of your building, but there is an electrical room to which you do not have access. Your badge will not get you in. This is access control, and it works the same way with software, granting or restricting access based on your role and/or identity within the system.
As usual, I recommend using the features provided in your programming framework for AuthN, Identity and Access Management. I also suggest strenuous testing of your implementation, because if someone breaks these security controls, the consequences will be dire.
General Rules of Authentication (AuthN)
Applications that use password-based authentication should follow the standards put forth in my book, Alice and Bob Learn Application Security, and/or the current NIST Password Standard. For example: do not force users to change their passwords often, allow very long passwords, do not force complexity, allow and encourage the use of password managers, etc.
The principle of least privilege is the practice of limiting access to the minimal level that will allow normal functioning. This principle should be applied not only to the users of web applications, but to the applications themselves as they are given access to databases, web services and other resources. For example, it is rare that an application requires a database user that is the database owner (DBO); generally, read/write or CRUD is enough.
Passwords will be encrypted in transit (HTTPS only).
Re-authentication will be performed for “Sensitive Features”. Sensitive features could include, but are not limited to: changing passwords or security questions, changing bank account information, deleting a user account, and transferring large sums of money.
Measures must be taken to prevent or disrupt brute force attacks. Measures could include a maximum number of login attempts, requiring a CAPTCHA after 5 failed login attempts, or throttling (slowing down) the system to make a brute force attack more difficult (a sketch follows this list).
Passwords must be masked (not echoed to the screen) while the user enters the password.
Validate that a user is authorized to access every new page and to perform every action. Ensure that this is applied using an approved list of authorized users, not a block list of unapproved users. See blog post Pushing Left 5.1 for more information on input validation and approved lists.
Never assume that “hiding” a page or feature means that it is protected or ‘safe’, that is not enough.
Login failures and errors should be logged. Ensure you do not log sensitive information, such as the text of the attempted password, as it is likely only one or two characters off from the real password. Refer to blog post Pushing Left 5.9 for more information on what to log.
Brute force attempts (defined as 10 or more successive failed login attempts in under 1 minute, or 100 or more failed attempts in one 24-hour period) must also be logged. If possible, the IP address of the attacker should be blocked and the account owner notified.
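To illustrate the brute force measures and logging rules above, here is a rough sketch in Python; the in-memory dictionary and thresholds are placeholders, and a real implementation would use your framework’s features or a shared store such as a cache or database:

import logging, time

failed_logins = {}   # placeholder: username -> list of failure timestamps
MAX_ATTEMPTS_PER_MINUTE = 10

def record_failed_login(username, ip_address):
    now = time.time()
    attempts = [t for t in failed_logins.get(username, []) if now - t < 60]
    attempts.append(now)
    failed_logins[username] = attempts
    # Log the failure, but never the attempted password
    logging.warning("Failed login for %s from %s", username, ip_address)
    if len(attempts) >= MAX_ATTEMPTS_PER_MINUTE:
        logging.critical("Possible brute force attack on %s from %s", username, ip_address)
        return True   # caller should lock the account, require a CAPTCHA or block the IP
    return False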
I realize that this list is not exhaustive, as this is a huge topic that could easily fill an entire book. I invite you, my readers, to provide more thoughts, topics and ideas in the comments section below. Thank you for reading.
Up next in the ‘Pushing Left, Like a Boss’ series: HTTPS only.
Authorization (also known as ‘AuthZ’) is verifying that the user who is trying to perform an action within your application is allowed (is authorized/has permissions) to use that functionality. For instance, is the user an admin user? If so, allow them to view the admin page. If not, block access.
There are several different models used within our industry for authorization, with RBAC (Role Based Access Control) being the most popular. RBAC means assigning people different roles in your system(s), just like people play different roles within your organization, and giving them access based on the role they are assigned.
For instance, meet Angela, a hypothetical software developer who is new to my project team (pictured below).
#WOCTechChat: Angela the Software Developer
As a software developer, Angela is going to need access to all sorts of things: source control, perhaps permission to publish to the CI/CD pipeline, and various file systems.
Now look at the second image to see our project team: Sarah, Angela and Jennifer. A project manager, software developer, and a database administrator (DBA). They all play different roles within the project and our organization, so they need different sets of permissions and access. Angela the software developer should not need Database Owner (DBO) permissions, but the DBA definitely will. The project manager is unlikely to need access to the web server.
This is where Role-Based Access Control (RBAC) is extremely helpful: the system administrator can easily assign the proper roles to each of our project members, ensuring they are only authorized to access the things they need to get their jobs done (least privilege).
Project manager, software developer, and DBA, Photo Credit: #WOCTechChat
When writing code for authorization within applications, use the features in your framework, and re-verify access for every feature and/or page of your application. Test your implementation thoroughly, with each role, for best results.
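As an illustration only (your framework almost certainly provides this for you), a simple role check in Python (Flask) might look something like the sketch below; the require_role decorator and the “admin” role are invented for the example:

from functools import wraps
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"   # required for Flask sessions

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            # Re-verify authorization on every request, not just at login
            if session.get("role") != role:
                abort(403)   # fail closed: no matching role, no access
            return view(*args, **kwargs)
        return wrapper
    return decorator

@app.route("/admin")
@require_role("admin")
def admin_page():
    return "Admin-only content"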
Trust data from… no one. Not the database, not APIs, not even your mom.
Any data sent to your application needs to be treated as untrusted, and thus validated before it is used or saved. When I say this, I mean ALL DATA. Whoever saved the data to that database may have made an error while validating that input. The API you are calling may have been compromised. Even a highly intelligent user, such as my mother (degrees in both chemistry and mathematics, an accounting designation, and several certifications, including adult education; she’s very bright), could make a simple error when using an application, such as entering a single quote instead of a double quote, which could potentially send your application into an error state, causing a crash or worse. I realize that generally we assume that we are guarding only against malicious actors, but this is not true: even well-meaning, well-educated and computer-literate users can cause problems if your application is too trusting of the data it receives. If you treat all data as potentially malicious, you will ensure that your application is not only battle-ready, but also far more resistant to errors.
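A tiny sketch of this idea in Python, using an approved list (the field and the allowed values are invented for illustration):

ALLOWED_LANGUAGES = {"en", "fr", "es"}   # approved list, not a block list

def validate_language(value):
    # Treat all incoming data as untrusted, no matter where it came from
    if value not in ALLOWED_LANGUAGES:
        raise ValueError("Invalid language selection")
    return value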
All errors should be caught and handled gracefully; there should never be a stack trace or database error on the screen. Not only so that we look like professionals, but so that attackers are not given extra information to use against us when creating their attacks. When errors happen, an application should never fail into an unknown state; it should always roll back any transaction it was performing and ‘close’ anything it may have opened. Such a situation should also always be logged, so that if an incident were to arise, incident responders would have something to work with when they investigate, and so that auditors can verify that the system is, and has been, working correctly.
All application errors must be ‘caught’ and handled; they can never be left ‘unhandled’.
Having a catch-all mechanism (global exception handling) is highly advisable to ensure unexpected errors are always handled properly.
Internal information, stack traces or other crash information should never be revealed to the user or potential attackers.
Error messages should reveal as little as possible. Ensure they do not “leak” information, such as details about the server version or patching levels.
Do not reveal whether it is the username or the password that is incorrect when there is a login error, as this allows for username enumeration.
Always “fail safe” or “fail closed”, do not “fail open” or to an unknown state. If an error occurs, do not grant access or complete the transaction, always roll back.
Security-related errors (repeated login failures, access control failures, repeated server-side input validation failures) should issue a system alert. Ideally, log files will feed into an intrusion prevention/detection system or a SIEM. This can be tested by running a vulnerability scanner against your application; it should cause events that trigger logging.
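A minimal sketch of a global exception handler in Python (Flask); the user-facing message and logging details are placeholders, and every framework has its own equivalent mechanism:

import logging
from flask import Flask

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_unexpected_error(error):
    # Log the full details for incident responders and auditors...
    logging.error("Unhandled application error", exc_info=error)
    # ...but show the user (and any attacker) nothing useful
    return "Something went wrong. Please try again later.", 500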
What and When to Log:
System logs must not contain sensitive information.
Login failures and other login-related errors should be logged.
Brute force attempts should be logged (defined here as 10 or more successive failed login attempts in under 1 minute, or 100 or more failed attempts in one 24-hour period).
All security-related events should be logged. Examples: a user being authenticated, a user being locked out after several failed login attempts, an unaccounted-for error, anything the global exception handler catches, and input validation errors.
The following information must be contained in your logs:
what type of event occurred (the name of the event and why it is security-related),
when the event occurred (timestamp),
where the event occurred (URL),
the source of the event (IP address), **
the outcome of the event, and
(if possible) the identity of any individuals, users or subjects associated with the event.
** If the IP comes from the X-Forwarded-For header, do not forget to validate it properly, as it could have been tampered with. Special thanks to Dominique Righetto for this point! **
You can, of course, log more than what is listed here.
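As an illustration, a security event logged as structured data might capture the fields above like this; the function and field names are my own choices, not a standard:

import json, logging
from datetime import datetime, timezone

def log_security_event(event_type, url, source_ip, outcome, user=None):
    logging.warning(json.dumps({
        "event": event_type,                                   # what happened
        "timestamp": datetime.now(timezone.utc).isoformat(),   # when it happened
        "url": url,                                            # where it happened
        "source_ip": source_ip,                                # the source (validate X-Forwarded-For first)
        "outcome": outcome,                                    # success, failure, blocked, etc.
        "user": user,                                          # identity, if known
    }))

log_security_event("login_failure", "/login", "203.0.113.7", "failure", user="ajones")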
Using and Protecting Your Logs
Log files and audit trails must be protected, even the backups.
Ideally logs are all saved into the same space, in the same format, to be easily consumable by a SIEM or other security tools.
Only authorized individuals should have access to logs.
Logs should be stored in an encrypted format.
Logs should be accessible by your incident response team.
Logs should be stored on a secure server or other secure area.
Log files must be incorporated into your organization’s overall backup strategy.
Log files and media must be deleted and disposed of in the same way you would dispose of any sensitive information.
The previous article in this series is 5.7 URL Parameters.
Continuing on our long trek through secure coding principles, we have come to the topic of cookies, which are used for sending information back and forth between the client and the server.
In order to secure the decision-making and/or sensitive data that we need to pass between the client and the server, we need to put it in a secure cookie. Secure cookies are encrypted, not encoded, which means someone needs a key in order to decrypt, change or reveal the information that they contain. In the case of secure cookies, that key is stored on the server (a secure location). Anything that is sensitive, used for decision-making within your application, or is otherwise inappropriate to put in a URL parameter or a hidden field, should be passed in a secure cookie. This includes, but is not limited to:
You do not need to secure all your cookies/information. Cookies that deal with user preferences for layout, language or font size are unlikely to need protection. Your business analyst or clients can guide you on what is or is not sensitive.
The following settings should be used, as a bare minimum, to ensure your cookies are secure:
· Use the ‘secure’ cookie attribute when you send cookies.
· Use the HTTP Strict-Transport-Security header or set the web server to force HTTPS traffic only.
· Use the HttpOnly cookie attribute to block client-side scripts from accessing your cookies.
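As a sketch, here is how those attributes can be set on a cookie in Python (Flask); the cookie name and value are made up, and other frameworks have equivalent options:

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/set-cookie")
def set_cookie():
    response = make_response("OK")
    response.set_cookie(
        "session_token", "opaque-random-value",
        secure=True,      # only send the cookie over HTTPS
        httponly=True,    # block client-side scripts from reading it
        samesite="Strict"
    )
    return response

The resulting response header would look something like: Set-Cookie: session_token=opaque-random-value; Secure; HttpOnly; SameSite=Strict; Path=/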
Never put important information in the URL parameters of your application. When I say “important”, I mean something that would potentially be used to make a decision in your application that is not trivial. The same goes for hidden fields; don’t store or pass anything valuable there either. Important information must be transmitted in a secure manner, and hidden fields and URL parameters are not the place for that.
Risks of putting sensitive information in the URL include: sensitive data being cached, sensitive data being exposed in the case of a man-in-the-middle attack, or an attacker potentially injecting their own values.
Examples of things that should not be in URL parameters:
User IDs (when used by a user logging into a system; a user ID used only to bookmark a public page, and nothing more, is not sensitive)
Dates of birth and other combinations of information that could possibly be used to impersonate someone
Examples of things that could be in URL parameters:
Which language the user wants to view the site in, for instance "fr" for French or "en" for English. If an attacker switches it, the user will see the same information, in a different language. No harm, no foul.
The page number of a form, if the user is allowed to see all of the pages and there is no reason they cannot skip ahead or back in the document or form.
Viewing preferences for the form, for instance contrast or brightness settings. Although it would be an inconvenience if an attacker changed the brightness or contrast of a page, the user would not be harmed, nor would the application or its data.
Query terms in a search engine.
Key takeaway: when in doubt, do not pass it in the URL.
Up next in the ‘Pushing Left, Like a Boss’ series: Securing Your Cookies.
Allowing files to be uploaded to your applications (and therefore your network) is the highest-risk activity an application can perform. It is truly a dangerous thing.
If you decide to include file uploads in your applications, you should:
1. Scan all uploaded files with an application that analyzes them for malicious characteristics, such as AssemblyLine (free from the Canadian Government; it can be installed locally so you do not need to share your files with a 3rd party), Cylance, FireEye or Virus Total.
2. Buy a third party tool to do this work for you.
· Ensure the application is receiving the expected file type which is within an acceptable size range. If not, reject it.
· If possible, avoid accepting Zip files. If you must accept zip files, be extremely careful.
· Rename the file, do not use user-supplied information to name the file, even temporarily.
· Do not allow the user to specify a path to save the file, always have the application decide, and do not share this location with the end user.
· Pay special attention to files with double file extensions and ensure the fake extension is removed. For example: myfile.php.txt would become systemcreatedfilename.txt.
· Use image processing libraries to verify the image is valid and to strip away extraneous content.
· Set the extension of the stored image to be a valid image extension based on the detected content type of the image from image processing (do not trust the header from the upload).
· Ensure the detected content type of the image is within a list of defined acceptable image types (jpg, png, etc).
· Ensure that you can attribute the file to the authenticated and authorized user that uploaded it, for auditing purposes. It is not advisable to let unauthenticated users upload files.
· It is preferable to save files to properly secured blob storage rather than to a database or a file system. If you must use a file system, ensure it is on a file server (not the web server), preferably isolated and/or on a different domain/network zone, in a directory that does not have any execute permissions and has had all the script handlers removed. If at all possible, store files in the cloud in blob storage instead.
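A partial sketch of a few of these rules in Python, using the Pillow library to validate images; the size limit, allowed types and route are placeholder choices, not a complete implementation:

import uuid
from PIL import Image
from flask import Flask, request, abort

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 5 * 1024 * 1024   # reject anything over ~5 MB
ALLOWED_FORMATS = {"JPEG", "PNG"}

@app.route("/upload", methods=["POST"])
def upload():
    uploaded = request.files.get("file")
    if uploaded is None:
        abort(400)
    try:
        image = Image.open(uploaded.stream)
        image.verify()                        # confirm it really is an image
    except Exception:
        abort(400)
    if image.format not in ALLOWED_FORMATS:
        abort(400)                            # only accept the expected file types
    # Rename the file ourselves; never trust the user-supplied filename
    new_name = f"{uuid.uuid4()}.{image.format.lower()}"
    # ... save new_name to isolated, non-executable storage chosen by the application
    return "Uploaded", 201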
It is my firm opinion that only the session management features in your framework, or a third party tool/product used for this sole purpose, should be used to manage identity or a user session. The HTTP 1.1 protocol was never designed to manage these concepts, and thus there is no built-in way to do so. When you choose a framework, such as .Net, Ruby or Spring, it has built-in features to handle this, and you should always use those features. Don’t be tempted to think you can do better on your own; let the experts handle this for you.
Below is general guidance on session management. Again: always use the features in your framework or purchase a third-party tool to do this, and only as a last resort write your own by following the advice below.
· Session IDs should be at least 128 bits (16 bytes) long.
· The session ID should be unpredictable (randomized) to prevent guessing attacks. Use a well-recognized random number generator; do not write custom code for this. Users should receive a new session ID each time they log in.
· The session ID content or value should only be used to identify the session; it should not also be used for other functionality, such as the user’s ID within the system, a social insurance or social security number, or any other direct object reference.
· The session ID should never be passed in the URL; it should be passed in a secure cookie instead.
· Use the current built-in session management implementation in your framework. I know I said this twice, it’s important.
· The session ID should have an expiration date and/or time, it cannot last forever, even if the user is still logged in.
· The session ID should only be passed over encrypted channels.
· The session should be destroyed after a user logs out.
· Web applications must never accept a session ID they did not generate. If one is received, it must be detected as suspicious activity; an alert should be generated and the IP blocked.
· Session IDs should be regenerated during authentication, re-authentication, or any other event that changes the level of privilege with the associated user.
Keep your frameworks up to date; there’s no sense in using the session management features if your framework is broken. If your framework hasn’t been updated in 15 years the previous advice to use your framework does not apply, and you likely have larger issues on your hands!
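As an example of leaning on your framework rather than writing your own, Flask (Python) exposes its session cookie behaviour through configuration; .NET, Spring, Rails and others have equivalent settings:

from datetime import timedelta
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,       # session cookie sent over HTTPS only
    SESSION_COOKIE_HTTPONLY=True,     # not readable by client-side scripts
    SESSION_COOKIE_SAMESITE="Strict",
    PERMANENT_SESSION_LIFETIME=timedelta(minutes=30),   # sessions expire
)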
Browser and client-side hardening focuses on enabling and using the security features within a user’s browser to protect them. The following settings protect the users of your web applications from their side (client-side/browser), rather than the server-side (the application itself), as with most application security advice.
Disabling ‘Remember Me’ Features
While I believe that all applications should allow for users to “paste” values into the password field (to allow for the use of password managers), I do not believe that applications should allow browsers to store passwords using the “remember me” feature. I may be forced to eat my words at some point, but until the security of browsers improves, it is my opinion that all passwords should be stored in a password manager.
Do Not Allow Caching of Sensitive Data
Another item for browser and client-side hardening is to disable caching for pages containing sensitive data. The cache is not a safe place to store anything sensitive.
In the past, I would have advised that if a page is delivered over HTTPS, it must contain sensitive information, and therefore should not be cached. But things have changed, and quite frankly, since Troy Hunt opened my eyes, I believe that all pages of every web application should be delivered over HTTPS; following the old rule would mean nothing could ever be cached. Instead, I suggest this: if a page of your application contains anything sensitive, do not allow it to be cached. Pages that do not contain sensitive information may be cached for faster retrieval.
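One common way to prevent caching of a sensitive page is with response headers along these lines:

Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache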
HTTPS should be forced (redirect HTTP to HTTPS) on the server or from within the application, using security headers. The newest version of TLS (Transport Layer Security) should always be supported, which as of this writing/publish date is TLS 1.3. Older versions of encryption that have known vulnerabilities should be disabled unless you have a clear business reason that has been evaluated against the risk of supporting backwards compatibility. As of this writing all versions of SSL (Secure Sockets Layer) and TLS 1.1 & 1.0 should be disabled on all servers.
Use Every Possible Security Header
Security headers are used to tell browsers what to do, in regard to security. They can turn on features, ensure certain data is not accessed, and much more. Below is a brief explanation of many security headers and which settings should be used in general. If you do not have a valid business justification to do otherwise, you should always use these restrictive settings, in order to protect your users.
Content-Security-Policy (CSP): Names the approved sources of content your site is allowed to load, such as scripts or images, including content that comes from outside your site. This is important because the first thing many attackers do is call out to other resources in order to do more damage; CSP can stop or greatly reduce the damage caused by many attacks.
If you have no content from another site (a rare situation), you can use this:
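Content-Security-Policy: default-src 'self'

This restricts all content to your own origin; you would need to adjust the policy if you load scripts, styles or images from anywhere else.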
X-Frame-Options: Used to block your website or web app from being framed (an overlay over your webpage that you do not control). This is not an exhaustive control, but it is enforced by most modern browsers. This should be used in conjunction with CSP above.
X-Frame-Options: SAMEORIGIN (to allow your site to frame itself)
X-Frame-Options: DENY (to block all framing)
X-Content-Type-Options: Tells the browser not to “sniff” the content-type (take an educated guess as to what the content type is), and to rely on the content-type provided by the application. This blocks a specific type of attack using malicious interpretation of the content, and there is rarely a legitimate business reason to not specify the content-type.
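The header takes a single value:

X-Content-Type-Options: nosniff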
Referrer-Policy: when users leave your site to go to another site, your site sends “referrer information”, including the URL of where the user came from. This header can strip the path and query string (which could be sensitive or personal) and send only the main URL: https://DevSlop.co, as opposed to https://DevSlop.co/home/teams.cshtml. This header protects the privacy of your users.
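One value that provides this behaviour (sending only the origin) is, for example:

Referrer-Policy: strict-origin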
Strict-Transport-Security (HSTS): Forces HTTPS, even if the user tries to connect via HTTP. Always include subdomains.
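For example, to enforce HTTPS for one year, including subdomains:

Strict-Transport-Security: max-age=31536000; includeSubDomains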
X-Permitted-Cross-Domain-Policies: This header is only required if you have content that you do not want to be reused as content for other sites. This could, for instance, be used by an artist to block other sites from linking their art within another site. This would not stop other sites from downloading the images and serving them as part of the site content, but it would make it more difficult, and it would also be a clear copyright violation. This security header is optional.
Access-Control-Allow-Origin: when using CORS (cross-origin resource sharing), this security header specifies which origins (other domains) are allowed to access your site’s resources. If you are not using CORS you do not need this security header. Note: this should NEVER be set to “*”.
Expect-CT: This is a brand new security header and is not yet supported by all providers. Expect-CT triggers the user’s browser to verify the certificate of your application’s server. As taken from the OWASP Security Headers Project: “The Expect-CT header is used by a server to indicate that browsers should evaluate connections to the host emitting the header for Certificate Transparency compliance.” Note: if you allow your certificates to expire or become otherwise non-compliant, this will ensure that users’ browsers refuse connections to your site.
Feature-Policy: This new security header tells modern browsers which features we want to allow for our website. Most websites do not require the use of the microphone, webcam, or gyroscope. Disabling these features for our site protects our users if our application is ever compromised, and reassures them that we are not violating their privacy.
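For example, to disable those three features (syntax shown for the original Feature-Policy header; newer browsers use the equivalent Permissions-Policy header):

Feature-Policy: microphone 'none'; camera 'none'; gyroscope 'none'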
enableVersionHeader: We want to remove this header, which gives attackers extra information that they do not require, such as the version of the programming framework we are using. The example below is for ASP.Net. Removal of this header is different for every framework.
<httpRuntime enableVersionHeader="false" />
Server: We also want to remove this informational header which shares the operating system our server is running, including the version. This information can be used by an attacker to plan a more targeted attack, and there is no advantage to sharing this information publicly.
Examples of how to disable this on your server can be found on Scott Helme’s site. It cannot be done programmatically.
A special note on the X-XSS-Protection header: This header is now considered legacy/deprecated. It has vulnerabilities within the header implementation itself. It is only used for backward compatibility, and unfortunately as of 2019 it is being attacked in older browsers. It is no longer advisable that we use this security header.
According to many sources, between 70–90% of application code is contained within libraries and other 3rd party components. When we use libraries, frameworks and other 3rd party components, we are accepting all of the risks that come with them (including vulnerabilities). Luckily for us, when security researchers find security vulnerabilities in products (including libraries, frameworks and other components) they often report them to MITRE, who log them in the Common Vulnerabilities and Exposures (CVE) database, a publicly searchable database containing all publicly-disclosed known vulnerabilities**. Using the CVE database, either manually or (preferably) through an automated tool, to verify whether your application is using known-vulnerable components is a key strategy to improve the security of your custom-built applications. There are *many* free and premium tools on the market (listed below), and I would suggest that you use at least one of them to ensure that the 3rd party code you are using is safe.
Automating this should be part of every CI/CD pipeline. You should also automate scanning of your source code repository on a regular basis. Everyone should do this, for every project, no matter how small. It’s so easy, and it’s such a huge win for the security of your applications, that there is no excuse not to do it.
** The CVE list of vulnerabilities is not exhaustive. Many nation-states (including the one you live in), as well as criminal, terrorist, hacktivist, and other malicious groups, actors or organizations, do not report the zero days that they find (vulnerabilities that are not known to the public and for which there is no known patch), in order to keep them for use as part of their own nefarious activities. Just because you have scanned your third party components for vulnerabilities does not mean they are bulletproof. Sorry folks.
Non-exhaustive list of software that scans 3rd party components for security vulnerabilities, also known as Software Composition Analysis (SCA):