In episode 78 of the We Hack Purple Podcast, host Tanya Janca brings Jason Haddix on to talk about artificial intelligence, and (of course) how to hack it! Jason discussed how to use AI for both defense and offense using plain, conversational language rather than code, and what a red teaming exercise looks like for such a system. We talked about what a large language model looks like, cleaning up data, and how easy it is to get these models to do bad things. Jason invited everyone to the AI Village at DEF CON this year, and so much more! There was also much love for Daniel Miessler, his articles on AI, and his newsletter Unsupervised Learning. Listen to hear the whole thing!
More about Jason:
Jason Haddix, AKA jhaddix, is the CISO and “Hacker in Charge” at BuddoBot, a world-class adversary emulation and red teaming consultancy. He has had a distinguished 18-year career in cybersecurity, previously serving as the CISO of Ubisoft, Head of Trust/Security/Operations at Bugcrowd, Director of Penetration Testing at HP, and Lead Penetration Tester at Redspin. He has also held positions doing mobile penetration testing, network/infrastructure security assessments, and static analysis. Jason is a hacker and bug hunter, currently ranked 51st all-time on Bugcrowd’s bug bounty leaderboards. He specializes in recon, web application analysis, and emerging technologies. Jason has also authored many talks on offensive security methodology and has spoken at cons such as DEF CON, BSides, BlackHat, RSA, OWASP, Nullcon, SANS, IANS, BruCON, Toorcon, and many more.
Jason's Links!
BuddoBot: a red team consultancy offering adversary simulation, emulation, red teaming, and penetration testing. They focus on year-long campaigns and a purple-team mindset of defensive consulting.
https://buddobot.com/
https://twitter.com/BuddoBot
https://www.linkedin.com/company/buddobot/
Find Jason:
https://twitter.com/Jhaddix
https://www.jhaddix.com/
https://www.linkedin.com/in/jhaddix/
Jason’s Newsletter:
https://executiveoffense.beehiiv.com/
Jason’s training happening in July:
Links to learn WAY MORE about AI
https://owasp.org/www-project-top-10-for-large-language-model-applications/descriptions/
https://atlas.mitre.org/
https://danielmiessler.com/p/the-ai-attack-surface-map-v1-0/
https://aivillage.org/large%20language%20models/threat-modeling-llm/
https://gandalf.lakera.ai/
https://incidentdatabase.ai/
https://github.com/Mooler0410/LLMsPracticalGuide
https://github.com/Azure/AI-Security-Risk-Assessment/blob/main/AI_Risk_Assessment_v4.1.4.pdf
https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Attacks
https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml
https://github.com/moohax
https://twitter.com/moo_hax
Very special thanks to our sponsor: Semgrep!
Semgrep Supply Chain’s reachability analysis lets you ignore the 98% of false positives in open source vulnerabilities and quickly find and fix the 2% of issues that are actually reachable.
Semgrep also makes a ludicrously fast static analysis tool. It comes in free and paid versions, uses an open-source engine, and offers a community-created rule set! Check out Semgrep Code HERE
Join We Hack Purple!
Check out our brand new courses in We Hack Purple Academy. Join us in the We Hack Purple Community: a fun and safe place to learn and share your knowledge with other professionals in the field. Subscribe to our newsletter for even more free knowledge! You can find us, in audio format, on Podcast Addict, Apple Podcasts, Overcast, Pod, Amazon Music, Spotify, and more!