BSides Perth 2025
Following on from my previous talk at BSides Perth 2023, I'm back to provide more interesting updates on receiving and decoding satellite transmissions! Topics will include X band weather satellite reception (both LEO and GEO), looking for Chinese spaceplanes and instead finding Chinese spy satellite networks, as well as attempting to receive Deep Space Network (DSN) targets. An update on the wider state of satellite receiving opportunities will also be provided.
AI is no longer a novelty: it's a powerful tool in the modern hacker's arsenal. From automating recon and crafting phishing lures to building scripts, writing exploit code, or even reverse-engineering binaries, generative AI is lowering the barrier to entry for all kinds of creative (and questionable) activity.
ARM Mali GPUs are among the most commonly used graphics processors in mobile SoCs. They can be found in many Android smartphones running Google Tensor, Samsung Exynos, HiSilicon Kirin and MediaTek chips.
This talk discusses a vulnerability in the ARM Mali GPU kernel driver for Android. The bug, which was publicly reported several years ago, acts as an interesting case study of how misunderstandings of Linux kernel APIs can lead to exploitable vulnerabilities.
In this talk, Angus gives an overview of key Linux kernel and GPU driver concepts, discusses the vulnerability and how it works, and walks through a public proof-of-concept exploit.
Public generative AI tools like OpenAI’s models provide significant advantages in processing and generating human-like text, but when it comes to cybersecurity Governance, Risk, and Compliance (GRC), they pose critical security and privacy risks. Transmitting sensitive or proprietary information to external cloud-based AI services can result in data leakage, non-compliance with regulatory requirements, and increased attack surfaces. As a result, many organizations are reluctant or outright prohibited from using these public AI platforms for their cybersecurity operations.
Building a local large language model (LLM) tailored to cybersecurity GRC needs offers a secure and compliant alternative, but this path is fraught with challenges. Many practitioners attempting to set up their own local models face technical frustrations such as model compilation errors, dependency conflicts, and the steep learning curve involved in training or fine-tuning large models on domain-specific data. Furthermore, integrating continuously evolving local context—such as organizational policies, compliance documents, and threat intelligence—into a static AI model is often complicated and resource-intensive.
In this session, I will present a practical, hands-on approach to overcoming these challenges through Retrieval-Augmented Generation (RAG). This approach enables you to augment a pretrained local LLM with dynamically retrieved local data without the need for costly retraining or deep technical expertise. Using this method, you can seamlessly incorporate relevant, up-to-date information into the AI’s responses, ensuring that your generative AI system remains contextually accurate and compliant.
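The retrieval-augmented flow described above can be sketched in a few lines. The snippet below is a stdlib-only illustration, not the presenter's tool: the bag-of-words "embedding", the sample policy text, and the prompt template are all invented for this example, and a production RAG stack would swap in a neural embedding model and a vector store.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts. Real RAG deployments use a
    # neural embedding model here (an assumption this sketch sidesteps).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank document chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    # Inject the retrieved chunks into the prompt sent to the local LLM.
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {query}"

# Hypothetical GRC snippets standing in for real policy documents.
policies = [
    "Access reviews must be completed quarterly for all privileged accounts.",
    "Backups are retained for 90 days in an offsite location.",
    "Incident reports must be filed within 24 hours of detection.",
]
print(build_prompt("How often are access reviews required?", policies))
```

Because the context is fetched at query time, updating the knowledge base is just editing the document list; no retraining or fine-tuning is involved.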
This workshop-style talk will walk attendees through a proven tool and workflow I discovered that simplifies local LLM deployment for cybersecurity GRC use cases. Attendees will learn how to navigate common technical pitfalls, such as compilation problems, and how to easily add their own data to enrich the model’s knowledge base. This practical guide empowers cybersecurity professionals to harness generative AI technology securely and effectively within their own environments—maintaining control over sensitive data and improving GRC workflows with AI-driven insights.
Spending a bit more time on finding weird files in internal pentests, I decided to pivot those techniques into external assets instead. This'll be a bit of a run around with some war stories and multiple vulnerability classes all coming from the same places, so join me in this little hunt in the safari zone.
Email addresses are a common data type whose handling can be highly inconsistent, with various parsers behaving differently depending on their implementation. In some cases, a parser may accept an RFC-compliant email address in a way that leads to high-impact vulnerabilities in applications, because the developers assumed the parser would handle an email address according to their expectations. This concept is not restricted to web applications; it also applies to other types of services that rely on parsing email addresses to establish identities.
In this presentation, I will talk about my experience researching Jakarta Mail (previously known as JavaMail, javax.mail) for email parsing issues, presented in a journey-style manner. This research was inspired by one of our recent engagements, where a client utilised a library that has JavaMail as one of its dependencies. While researching mail-related vulnerabilities, I recalled how Gareth Heyes from PortSwigger published research on the use of encoded strings in email addresses and how email parsers may decode and accept them. After reading such an inspiring write-up, I attempted to extend Gareth's research, this time against Jakarta Mail, and was surprised to find other interesting behaviours.
One of the main highlights of this sharing will be InternetAddress.java, a default class shipped with Jakarta Mail that is used to parse and represent email addresses. It has some inconsistencies that can lead to situations where developers assume that emails are always validated when in fact they are not. As InternetAddress is not typically used directly, I have also looked into how other libraries utilise it, namely Angus Mail and Spring Framework. In addition to the InternetAddress class, I will also go through my observations on other classes such as MimeMessage (from Jakarta Mail), as well as InternetAddressEditor, MimeMessageHelper, MimeMailMessage and SimpleMailMessage (from Spring Framework).
Throughout this research, I have noted down various interesting primitives which I will be sharing, in the hope that they will be useful to other researchers who encounter them in the wild.
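The Jakarta Mail specifics are the subject of the talk itself, but the underlying class of bug (developers assuming a parser validates when it merely parses) is easy to demonstrate with an analogous example from Python's standard library, which is used here purely as an illustration and is not part of the research above:

```python
from email.utils import parseaddr

# parseaddr splits a header into (display name, addr-spec) but performs
# almost no validation of the addr-spec itself.
name, addr = parseaddr("Alice <alice@example.com>")
print(name, addr)  # → Alice alice@example.com

# A syntactically dubious local part sails straight through: nothing here
# guarantees the returned string is a deliverable, RFC-compliant address,
# even though calling code often assumes exactly that.
_, dubious = parseaddr("a..b@example.com")
print(dubious)
```

Any identity check built on the assumption that "it came out of the parser, so it must be valid" inherits whatever leniency the parser has, which is precisely the gap this research explores in the Jakarta Mail ecosystem.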
Since at least 2018, North Korea-based threat actor Black Ara (a.k.a. DPRK IT Workers) has operated under the guise of legitimate remote contractors, subcontractors, and full-time employees. These actors pose as freelance developers and IT professionals, often using fake identities and AI-generated profile pictures to secure employment. Their activities form part of a broader North Korean strategy to generate revenue for the regime and gain access to organisations of strategic interest.
In this presentation, we'll take a deeper look at the tools, techniques, and procedures (TTPs) used by Black Ara, including the creation of fake companies, social media profiles, and resumes to support employment fraud. The talk will highlight how these actors use VPNs, facilitators, and laptop farms to obscure their true locations and identities. We will also explore how Black Ara has successfully embedded IT Workers in companies across Australia, the US, the UK, India, and Kenya, targeting roles such as software engineers, UI/UX designers, and data scientists.
Understanding Black Ara is critical for both technical analysts and executive decision-makers. For analysts, this session provides actionable insights into detection, attribution, and mitigation strategies against a threat actor that blends social engineering with operational stealth. For executives, it highlights the strategic risks of inadvertently hiring sanctioned actors and the broader implications for corporate security, compliance, and reputation. By providing this in-depth analysis of Black Ara, enriched with real-world insights from PwC Global Threat Intelligence's experience, this presentation will equip attendees with the knowledge to identify, respond to, and prevent such intrusions.
Learn how to integrate threat modeling directly into your development workflow using open-source tools and Infrastructure-as-Code principles. This talk demonstrates how to write threat models as configuration files, integrate them into GitHub Actions pipelines, and even use SAST tools to validate security assumptions automatically. Don’t want to write configuration files? Don’t worry - there’s an MCP Server for that too!
We'll explore practical examples including:
- Writing your first threat model in HCL using threatcl
- Setting up CI/CD pipelines to validate threat models on every commit
- Using Semgrep rules to enforce security patterns in your threat models
- Generating visual diagrams and security documentation automatically
- Leveraging Git's version control to track how threats evolve with your codebase
Attendees will leave with working examples, GitHub Action templates, and the knowledge to implement "Threat Modeling as Code" in their projects immediately. We'll also touch on emerging patterns using AI tools to assist with threat identification.
This session dives into real-world vulnerabilities by dissecting CVEs directly in the code where they occurred. Each example showcases not just what went wrong, but why, with a focus on the subtle coding patterns, missed assumptions, and language misunderstandings that led to the bugs.
For every vulnerability, we will extract a few key lessons: principles or warnings that developers and reviewers can apply to prevent similar issues.
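As a flavour of the kind of missed assumption involved, here is a generic, invented Python example (not drawn from any specific CVE covered in the session): code that joins an attacker-controlled filename onto a base directory without realising the path can escape it.

```python
import os.path

UPLOAD_DIR = "/srv/uploads"  # hypothetical storage root for this example

def save_path_naive(filename):
    # Missed assumption: "filename" is attacker-controlled, and os.path.join
    # happily keeps any ".." components the caller supplies.
    return os.path.join(UPLOAD_DIR, filename)

def save_path_checked(filename):
    # One common fix: normalise the joined path, then verify the result
    # still lives inside the intended directory before using it.
    candidate = os.path.normpath(os.path.join(UPLOAD_DIR, filename))
    if not candidate.startswith(UPLOAD_DIR + os.sep):
        raise ValueError(f"path escapes upload dir: {filename!r}")
    return candidate

print(save_path_naive("../../etc/passwd"))  # escapes /srv/uploads entirely
print(save_path_checked("report.pdf"))
```

The lesson generalises: the bug is rarely in the library call itself, but in what the developer assumed the call guaranteed.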
The goal of cybersecurity engineers is to reduce risk and resolve vulnerabilities for the organisations we are entrusted to defend, whether by finding the vulnerabilities or helping to remediate them. However, this is not a task we take on alone: we often have to work with developers, product managers, non-technical co-workers and leadership, which can be frustrating.
Ever wondered how much of your team’s time is wasted on repetitive, low-value tasks instead of actual security work? Most internal security teams are stretched thin, juggling incident response, compliance demands, and endless manual processes. In this talk, I’ll walk through how we experimented with automation and AI to take the boring, repetitious stuff off our plates, while still keeping security tight. We’ll look at how prompt engineering can give workflows the right context so automations can safely make the first call on routine decisions, and where that approach breaks down. I’ll share the good, the bad, and the ugly of what we tried, including lessons learned about trust, oversight, and failure cases. The real goal isn’t AI hype. It’s making defence less exhausting so people have the bandwidth to tackle harder problems, like scaling automation across the business.
Operational Technology (OT) networks are some of the hardest environments to secure. Legacy systems, fragile infrastructure, and limited monitoring often leave defenders blind to attacker movement. But where visibility fails, deception can step in. This talk explores the use of honeypots as active defence tools in OT environments - traps designed not only to detect adversaries but to misdirect, delay, and expose their tactics.
Through real-world case studies, we will examine when and where honeypots make sense in OT, including:
1. Environments where traditional SIEM/NIDS cannot reach.
2. High-risk legacy networks that can't be patched or modified.
3. Situations requiring early threat detection, attacker behaviour mapping, and validation of security controls.
Through this presentation you will gain practical insights into honeypot design and deployment - from low-interaction perimeter sensors, to high-interaction internal systems - alongside lessons learned about maintenance, alerting and avoiding detection by modern adversaries.
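To make the "low-interaction perimeter sensor" idea concrete, here is a minimal sketch of one in Python. Everything about it (the FTP-style banner, the in-memory log, the single-connection lifetime) is an assumption chosen for brevity; a real deployment would run persistently, emulate the site's actual OT protocols, and ship alerts to a SIEM.

```python
import socket
import threading
import time

def serve_once(srv, log, banner=b"220 FTP service ready\r\n"):
    # Low interaction: answer with a plausible banner, record the peer and
    # whatever it sends, then hang up. Nothing real is ever exposed.
    conn, peer = srv.accept()
    conn.sendall(banner)
    conn.settimeout(2.0)
    try:
        data = conn.recv(1024)
    except socket.timeout:
        data = b""
    log.append({"peer": peer[0], "data": data, "ts": time.time()})
    conn.close()
    srv.close()

# Bind in the main thread (port 0 = pick a free ephemeral port) so there is
# no race between listener start-up and the first probe.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

log = []
t = threading.Thread(target=serve_once, args=(srv, log))
t.start()

# Simulated attacker probing the trap: any connection at all is a signal,
# because no legitimate system has a reason to talk to this address.
c = socket.create_connection(("127.0.0.1", port))
banner = c.recv(1024)
c.sendall(b"USER admin\r\n")
c.close()
t.join()
print(banner, log[0]["peer"], log[0]["data"])
```

The key property carries over to real OT deployments: because the honeypot serves no production purpose, every interaction it logs is high-signal, which is exactly what makes deception attractive in networks where SIEM and NIDS coverage is thin.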
Do you find keeping up with infosec content tricky or frustrating? Keen to find useful content faster? Perhaps you grew up on the OG internet and think this 2025 internet feels like a weird Black Mirror episode? Well, maybe this talk is relevant and useful.
In this talk Matt will walk through Talkback.sh, a smart infosec library built to help others be more productive. He'll share the why, how, and what, with demos and tips on using the Talkback app, feeds and API to be more productive, save your time and sanity, and hopefully also be a more efficient hacker day to day.
What started as a harmless attempt to re-enable a disabled feature on my home router led me down a rabbit hole — one that ended in multiple 0-day discoveries: a LAN-side Remote Code Execution vulnerability, and a WAN-side Denial-of-Service bug that can knock out the router’s firmware update service until reboot. And when it's rebooted? You can just do it again.
In this talk, I’ll share the story of how casual tinkering turned into serious vulnerability research, and how the devices we trust to sit quietly at the edge of our networks often hide surprising weaknesses. We’ll explore how these bugs were found, what makes them valuable, and why routers — often ignored — remain highly attractive to APTs seeking stealthy, long-term access to small business, home and corporate networks.
Enterprise application security programs are traditionally measured on their capability to address specific aspects of the technology stack, and then on the coverage achieved rolling that out across the enterprise. While this sounds effective, it's anything but, as can be seen by the proliferation of ASPM and AI-AST based products and the subsequent hand-wringing in the markets.
So what should modern product security functions aim for? In this talk I'll outline a different set of Five I's that defines the baseline for an effective ProdSec function, and how you can take quick steps, as either a scale-up business or an enterprise, to align and move forward.
Dive into the cross-section between common email security controls and how Microsoft's Direct Send feature can bypass a bunch of them - even in some 'fixed' environments.