You can read the official document from the OWASP Project team here.
Broken User Authentication is a vulnerability that can affect any web app or API that has user accounts, which is a very large percentage of applications currently on the internet. The app or API asks the user “Who are you?” (authentication) and if the user is able to trick the app into ‘recognizing them’ when they are not a valid user, that is broken authentication.
When someone exploits this vulnerability, they gain access to a system they should not have, and quite likely to that user’s sensitive data, plus the ability to change the settings of whatever account they have broken into. This is a total nightmare for the user whose account has been compromised, not just for the security team.
The most common ways this vulnerability is exploited are credential stuffing (trying thousands of stolen credentials to see if any of them work) and brute force attacks (letting end users guess passwords over and over again without ever locking them out).
Other types of attacks of this nature include:
Allowing users to make very weak passwords, such as “Winter2023”, which is likely the English-speaking-world’s most popular password this very moment. </sigh>
Sending auth tokens in insecure ways, such as in an unencrypted or insecurely configured cookie, or in URL parameters
Not checking that your auth tokens are valid on every single call, page, and action
Using auth tokens incorrectly, with poor configuration, or on very old versions that shouldn’t be used anymore
Following all the right programming steps, but then sending the result with very weak encryption, such that the data can be decrypted
Insecure storage of passwords (passwords should be salted and hashed, possibly peppered, but never plain text or encrypted)
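To make the last point above concrete, here is a minimal sketch of salted password hashing using only Python’s standard library (`hashlib.scrypt`). The function names and cost parameters are illustrative assumptions, not a prescribed implementation; real systems usually reach for a dedicated library such as bcrypt or Argon2, and a “pepper” would be an additional server-side secret mixed in before hashing.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a unique random salt (never store plain text)."""
    salt = os.urandom(16)  # a per-user salt defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Note that only the salt and digest are stored; the password itself is never persisted, and `hmac.compare_digest` avoids leaking information through timing differences.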
How can I ensure this never happens?
Follow auth best practices and *ideally* use an API gateway. API gateways can handle a lot of this for you, and that makes life WAY BETTER. Writing your own is complex, time-consuming, and potentially risky.
Test. Test this manually, and with tools. Test this thoroughly and if you make changes test it again.
Threat model this or make a security user story about this. It’s a user story/potential threat to almost every app and API on the net. Take it very seriously.
Where possible, implement multi-factor authentication.
Implement anti-brute-force mechanisms to mitigate credential stuffing, dictionary attacks, and brute force attacks on your authentication endpoints. This mechanism should be stricter than the regular rate limiting mechanism on your API.
Implement account lockout or CAPTCHA mechanisms to prevent brute force attacks against specific users. Implement weak-password checks.
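As a rough illustration of the lockout idea above, here is an in-memory sketch in Python. The thresholds (5 attempts, 15-minute window) and the use of the username as the key are assumptions for the example; a production system would track attempts in a shared store (such as Redis) so the lockout survives restarts and works across multiple servers.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # lock the account after this many failures
LOCKOUT_SECONDS = 900   # 15-minute sliding window

# username -> timestamps of recent failed logins
_failures: dict[str, list[float]] = defaultdict(list)

def allow_login_attempt(username: str) -> bool:
    """Return False while the account is locked out."""
    now = time.time()
    # keep only failures that are still inside the lockout window
    _failures[username] = [t for t in _failures[username]
                           if now - t < LOCKOUT_SECONDS]
    return len(_failures[username]) < MAX_ATTEMPTS

def record_failed_login(username: str) -> None:
    """Call this whenever a login attempt fails."""
    _failures[username].append(time.time())
```

This per-account counter is deliberately separate from (and stricter than) any general API rate limit, which is exactly the distinction the list above draws.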
DevSecOps is a marketing buzzword now, but it wasn’t always. Just as DevOps is not only a modern way to build and support software but an entire culture shift away from the way we used to build it, DevSecOps was supposed to be a change in the way we (security professionals) work with the rest of IT. It was never meant to be a marketing term, used by vendors to demand higher prices with the promise it would solve all of our software security woes. It was supposed to be a cultural shift, removing silos and barriers, to help us build more secure software, faster.
This blog series is going to be a long string of stories of when things have gone wrong. I’ve chosen this format not only because storytelling is a great way to teach, and tends to give readers more enjoyment, but because in all the talks and lessons I have ever given, people have constantly referred back to one of my stories when they tell me what they have learned. Over and over, it’s a story that I told that helped them remember and then apply the knowledge. With this in mind, I will be telling a lot of stories in this series!
But before I dive too deep into the many, many mistakes I have made and witnessed others make, I want us to have a clear understanding of DevSecOps. It will be brief, I promise. But if you want a less-brief version, please read this series that I wrote a few years ago: Security is Everybody’s Job.
DevOps is a modern way to build software that involves the software developers and operations folks working more closely together, using automation, to create well-tested, reliable, and high-quality end products. Generally DevOps requires adherence to ‘the three ways’ of DevOps.
Focus on the efficiency of the entire system (not just your part).
Provide fast and accurate feedback, to the right people.
Take time to improve your daily work via continuous learning, risk taking, and experimentation.
People who work in a DevOps shop use a CI/CD pipeline to automate testing, building, and deployment of infrastructure and applications. Infrastructure is often built from code configuration files, written in YAML, referred to as infrastructure-as-code (IaC). DevOps folks are obsessed with making reliable systems that delight their customers, and are constantly striving for improvements in performance, efficiency, and quality.
And if you are reading this blog, you likely agree that these goals are very security-friendly. Great performance and reliability means more availability! Automating as much as possible means less human error, and more opportunities for security testing! If we build our infrastructure using code, then we can scan it for security issues, just like application code! All of this spells W-I-N for the AppSec team.
But… It doesn’t always go perfectly. Even with the best intentions.
SAST Gone Wrong
My first story in this series will be about a SAST (Static Application Security Testing) roll out I did in late 2021.
At the time I was switching out some AppSec tools for a client. They were partway through their DevOps journey; they weren’t done yet, but things were getting better every single day. They would push code to version control (also known as ‘the code repository’ or just ‘repo’) regularly, but they would only run the CI/CD pipelines every second Friday. All of the architecture team and development leads would meet once every two weeks, and each team that wanted ‘to go to prod’ would have to present their case. Sometimes this is called a change management board, and thus its meetings are called “CAB meetings”. Of all the change management board meetings I’ve ever been to, these were the friendliest. At the end of every big meeting they would save 5 minutes to do shoutouts; recognizing someone else who worked there for doing something great. “Shout out to Jenny for fixing all the security bugs I sent her last week! You rock Jenny!” for example. (Obviously, Jenny is fictional and not a very real and great human being who I had the chance to work with. <winky face> )
Continuing on… Once you were approved at CAB, you got to push your code to the CI/CD. All the tests would run, and ideally pass, then you would get to go to the promised land: production!
However. If you did not pass all the tests in the CI/CD, it was kinda embarrassing. So pretty much everyone would do their own tests in advance, fix anything that was wrong, and then go to the CAB meetings.
Then I came along and told them I wanted to add scanning for SCA, IaC, and SAST into the CI/CD. Three more tests to pass! Some of their apps were quite new, and they passed pretty much everything I threw at them. But some of the apps were… ancient. They did not pass. In fact, the scanner’s report looked sort of like a Christmas tree: lots of bright colours everywhere. Especially red.
Circling back to the great big error I made: the developers had asked me if I could make the AppSec tools scan whenever they checked in code, instead of scanning only being available in the CI/CD. They wanted this so they could receive the report, fix the bugs, have it test again, and then know *for sure* that they would pass the CI/CD tests. I said sure, sounded great to me: “Developers wanting to fix security bugs? Earlier in the SDLC? Yes please!”
I rolled out the first tool (SAST) such that it scanned the entire code base, then would re-scan each project as it was updated (when new code was checked in). It would just scan the “delta”, the parts that had changed since the previous check-in, and then email a report if it found something. The developer would receive a report approximately ten minutes later.
I have to say, I thought I was pretty cool. Got it all rolled out in about two days, it’s working well, finding lots of bugs. I’m awesome! Go me! I definitely thought I had done a great job.
Except that the tool would email every single software developer who worked at the entire company, telling everyone about the brand-new mistake a developer had just made, for every new code check-in. Over and over again, all day long. Oops! I was spamming the whole company, and embarrassing the developers. What a mess!
The way I had rolled out the tool had made perfect sense TO ME. I could look at our entire organization, and see all the bugs, in every app! I could then create reports about which apps were the worst, which teams made mistakes more often, if we were repeating the same problems, etc. I could see EVERYTHING and I thought it was GREAT!
A few days later, when a software developer told me the tool was embarrassing them, I had no idea what I had done wrong. I didn’t understand, my amazing tool was not helpful? How could this happen! I asked a few others, and they told me everyone was receiving several emails a day, or one gigantic email with all the bugs for the entire company, meaning they had to look a long time to find the bugs that belonged to them. Great, so I’m spamming everyone in the company, embarrassing people, and wasting tons of their time because they can’t figure out which bugs are theirs. AHHHHHH!
You know what I did? I took a few minutes to think about it, and realized that I had, in fact, made a big mistake. So I started fixing it. I ripped that tool out and re-did everything. I made two setups: one with all the results for all of the projects for the entire organization, and another setup that created a separate org within the tool for every team at the company. That way I could still have a bird’s eye view of the entire organization’s security posture (especially once I rolled out additional tools), and each DevOps team could receive only reports on their own projects. No one was being spammed or embarrassed by the tool anymore, and I could still give accurate reports to management at the drop of a hat.
It took 3 work days to rip the tool out and re-install it all. I’m pretty sure we saved exponentially more developer time than I spent re-doing that project. On top of that, I followed the ‘third way of DevOps’: I took time to improve our daily work! Sometimes it can be difficult to see that we have made a mistake, but once we can get over ourselves, we can fix it. Then we’re back to being awesome again.
Read on for more war stories in the next article (coming soon)!