The previous post in this series was The Boy Who Cried Wolf.
Although at first blush this second ‘worst practice’ may seem the same as the first (breaking builds on false positives), it’s slightly different, and almost as problematic. Developers constantly test and retest everything; we should do the same with our security tools. Unfortunately, I find this is not the norm among security folks.
Security tools can cause more problems than just false positives. Some of them run a very, very long time. Some of them don’t generate a report, or it’s in the wrong format, or you can’t seem to find it anywhere… Some tests crash or seem to never complete.
If you’ve ever put a DAST or first-generation SAST tool into a CI/CD pipeline, you’ve likely experienced at least one “test that never seems to finish”. There’s a well-known DAST tool, which will remain unnamed, that has a “kill this scan after 6 hours” checkbox in the config for each scan. I remember thinking “SIX HOURS?!?!?!? It will never run that long!” Ahem. Over the following months of that consulting engagement I learned to always check that box….
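If your scanner doesn’t offer that checkbox, you can enforce a cap yourself in the pipeline. Here’s a minimal sketch, assuming GNU coreutils `timeout` is available; `sleep 10` stands in for a hypothetical scanner command, and the 2-second cap stands in for your real limit:

```shell
# Cap a scan's runtime so a hung tool can't stall the whole pipeline.
# "sleep 10" is a stand-in for a hypothetical long-running scanner CLI.
status=0
timeout 2 sleep 10 || status=$?

# GNU timeout exits with 124 when it had to kill the command for running too long.
if [ "$status" -eq 124 ]; then
  echo "scan exceeded the time cap and was terminated"
fi
```

In real use you’d pass the actual limit (e.g. `6h`) and your scanner’s real command; many CI systems also offer a per-job timeout setting that accomplishes the same thing.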
Sometimes security tools boast that they will work with your developer tools, but… that’s a matter of opinion. I once had a vendor suggest that I create a container in my pipeline, download their product live from the internet, install it, update it, run it against my code, save the report, then destroy the container. ON EVERY BUILD. Is it possible? Yes. Absolutely. Is it also a giant waste of time, resources, and bandwidth? Also yes. I gently said no, as their product was not yet compatible, and we bought a competing product instead, even though theirs was otherwise really great and had been my first choice before I discovered the mismatch with my client’s CI system. It just didn’t make sense to jerry-rig it like that.
At another company, we had a SAST tool that ran on its own server. Because the information on the server was sensitive (all of their bugs), and because the tool only worked on a terribly old and insecure operating system, they segmented it away from the rest of the network. For developers to see their bugs, they had to remote desktop (RDP) to a jump box, sign into it with MFA, then sign into the SAST tool, and only then see their bugs. The SAST tool, for security reasons, locked users out after 30 days of inactivity. Developers were constantly locked out and got really frustrated, so they very quickly stopped logging in at all. The first time I logged into it, I told the rest of the team “this will never work, this is too hard”, and when we saw that only 9% of the developers had logged in that year… I was (unfortunately) proven correct.
All of this is because we didn’t test the tools first. Or “play with them” as I sometimes instruct my peers. It’s extremely important to set up a proof of concept with a new tool, run it a whole lot of times, invite others to play with it with you (especially developers), and get their feedback. By following this less-than-scientific-sounding process, I have ended up NOT getting tools I originally wanted, because I found a better fit. And sometimes the better fit is a surprise, even for me!
Proof of Concept
A proof of concept (POC) means setting something up and trying it out, in order to prove it actually works. For an AppSec tool, this means installing it and using it against one or more of your systems, ensuring it works with your CI/CD, your code repository, your IDE, your ticketing system, an ASOC (application security orchestration and correlation) tool if you have one, and anything else you may want to use it with. You’re checking whether it works the way you need it to (not the way the vendor told you it works). Sometimes I’ve bought tools and had them disappoint, not performing the way I remember the vendor promising they would. Other times I’ve been able to get them to do so much more than the salesperson explained, and been extremely pleased, getting value above and beyond what I had expected.
Whenever I perform a POC for an AppSec tool, I work on it solo at first; I like to fail privately, whenever possible. Once I’ve decided it’s working respectably, I show it to the other AppSec or InfoSec folks on the team and see what they think of it. Assuming they still think it’s good, I then meet with a friendly developer team (a team that is willing to give me their time and attention, and that I’ve had good dealings with before). I give them accounts, show them how to use it, and ask for their opinion after either an hour or a few days, depending.
It’s really important to me that the developers do not hate any tool I try to bring in; I will not get good adoption if even the super-friendly dev team rejects it. That’s not to say they have to LOVE the tool. I mean, it is reporting bugs, and all of us would prefer to be told our work is perfect… But if they actually like the tool, that’s a huge win! I remember doing a feedback session with a bunch of devs, and at the end one of them shyly asked, “Hmmmm, Tanya. Can we keep it?” I was overjoyed! I said “YEAH you can!” And the decision was made.
Avoiding this fate
The advice from this section is likely obvious by now. Perform a proof of concept before purchasing any tool (unless the tool is free or ‘almost free’), and validate that it does everything you need it to. Get feedback from key stakeholders: generally the AppSec team, the CISO or management (for reporting and coverage), and then the software developers and/or security champions. Test that the tool works as expected, that it’s not spitting out false positives all the time, that it’s fast enough, that it’s interoperable with your toolset (works well with your other products), and that people are willing to use it.
Roll out in stages once you’ve made your decision. Start with friendly teams, in alerting mode or commenting on pull requests (PRs) only. If you play your cards right, and you have a great culture where you work, you might never even need to turn on blocking mode. Or perhaps you will block only for certain key vulnerability types, or for specific custom rules that are very important to your organization. If everyone is already fixing bugs before code reaches the CI (because they can test in their IDEs and when their code is checked in), validating again in the CI may be fully automated or minimal. This is an organizational decision, and lots of organizations function just fine without ever enabling blocking on any of their tools. It’s about what’s right for your org, not what some person you saw at a conference proposed.
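In many pipelines, alerting mode is simply a matter of not propagating the scanner’s exit code. A minimal sketch in shell, where `sh -c 'exit 2'` stands in for a hypothetical scanner that exits non-zero when it finds issues:

```shell
# Alerting (non-blocking) mode: record the scanner's result, but never
# fail the build because of it. `sh -c 'exit 2'` is a stand-in for a
# hypothetical scanner that exits non-zero on findings.
scan_status=0
sh -c 'exit 2' || scan_status=$?

if [ "$scan_status" -ne 0 ]; then
  echo "scanner reported findings (exit $scan_status); build continues in alert-only mode"
fi
# Blocking mode would instead end the step with: exit "$scan_status"
```

Many CI platforms expose the same switch natively (for example `allow_failure: true` in GitLab CI, or `continue-on-error: true` in GitHub Actions), so moving from alerting to blocking later is a one-line change.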
The next post in this series is Artificial Gates.