BSides Perth 2025

Developing Your Own Local LLM (GenAI) for Cybersecurity GRC
2025-10-18, Wesfarmers Theatre

Public generative AI tools like OpenAI’s models provide significant advantages in processing and generating human-like text, but when it comes to cybersecurity Governance, Risk, and Compliance (GRC), they pose critical security and privacy risks. Transmitting sensitive or proprietary information to external cloud-based AI services can result in data leakage, non-compliance with regulatory requirements, and an expanded attack surface. As a result, many organizations are reluctant, or outright prohibited, from using these public AI platforms for their cybersecurity operations.
Building a local large language model (LLM) tailored to cybersecurity GRC needs offers a secure and compliant alternative, but this path is fraught with challenges. Many practitioners attempting to set up their own local models face technical frustrations such as model compilation errors, dependency conflicts, and the steep learning curve involved in training or fine-tuning large models on domain-specific data. Furthermore, integrating continuously evolving local context—such as organizational policies, compliance documents, and threat intelligence—into a static AI model is often complicated and resource-intensive.
In this session, I will present a practical, hands-on approach to overcoming these challenges through Retrieval-Augmented Generation (RAG). This approach enables you to augment a pretrained local LLM with dynamically retrieved local data without the need for costly retraining or deep technical expertise. Using this method, you can seamlessly incorporate relevant, up-to-date information into the AI’s responses, ensuring that your generative AI system remains contextually accurate and compliant.
This workshop-style talk will walk attendees through a proven tool and workflow I discovered that simplifies local LLM deployment for cybersecurity GRC use cases. Attendees will learn how to navigate common technical pitfalls, such as compilation problems, and how to easily add their own data to enrich the model’s knowledge base. This practical guide empowers cybersecurity professionals to harness generative AI technology securely and effectively within their own environments—maintaining control over sensitive data and improving GRC workflows with AI-driven insights.
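The RAG workflow the abstract describes, retrieving relevant local documents and feeding them to a pretrained model as context, can be sketched roughly as follows. This is an illustrative stand-in, not the tool covered in the talk: the keyword-overlap scorer and the sample policy snippets are placeholders, and a real deployment would use an embedding model plus a vector store for retrieval and send the final prompt to a local LLM (for example via Ollama), a step omitted here.

```python
# Minimal RAG sketch (stdlib only). Assumptions: the scorer and policy
# snippets are illustrative; production retrieval uses embeddings and a
# vector store, and the assembled prompt goes to a local LLM (omitted).

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of query terms found in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap and keep the top k."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend the retrieved policy text so the model answers in context."""
    joined = "\n---\n".join(context)
    return (f"Context:\n{joined}\n\n"
            f"Question: {query}\nAnswer using only the context above.")

# Example: pull the most relevant internal policy for a GRC question.
policies = [
    "Access control policy: passwords rotate every 90 days.",
    "Incident response plan: report breaches to the CISO within 72 hours.",
    "Acceptable use policy: no personal devices on the OT network.",
]
question = "How soon must we report a breach?"
prompt = build_prompt(question, retrieve(question, policies, k=1))
print(prompt)
```

Because the context is assembled at query time, updating the knowledge base is just a matter of editing the document set; the model itself is never retrained, which is the property that makes this approach practical for continuously changing GRC material.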

Lee Yang Peng (CISA, CISM, CRISC, OSCP) is a Lead Cybersecurity Consultant at DACTA Global specialising in Governance, Risk, and Compliance (GRC) services. He graduated from the National University of Singapore (ranked 8th in the world) with a Bachelor of Computing (Information Security) with Honours (Distinction).

With a strong technical foundation, including experience in Penetration Testing, Yang Peng is able to bridge the gap between business strategy and cybersecurity compliance, translating complex technical and organizational risks into actionable strategies. Over the past few years, he has delivered enterprise-level cybersecurity solutions across both IT and OT environments, leading multi-domain initiatives that typically require multiple senior roles.

Past Talks:
▪ BSides Montreal (Canada) Cybersecurity Conference – 2025 (https://bsidesmtl.ca/bsides-montreal-en/program-en-2025/)
▪ Python Conference Asia-Pacific (PyCon APAC) – 2015 (https://tw.pycon.org/2015apac/en/lightning_en/index.html)
▪ Python Conference Singapore (PyCon SG) – 2015 (https://pycon.sg/archive/2015/speaker/profile/60/)