Unleashing AI and Large Language Models in Offensive Security: A New Era of Ethical Hacking
- Length: 45 minutes.
- Scheduled: 16:00 (UTC+2)
Offensive security has traditionally been a human-centric field, reliant on manual investigation, intuition, and creative problem-solving. However, the rise of Artificial Intelligence (AI) and Large Language Models (LLMs) such as GPT-4 is rapidly changing the landscape of ethical hacking.
This talk will explore the various ways in which AI and LLMs can be leveraged to bolster offensive security strategies, automate repetitive tasks, and provide deeper insights for professionals in the field. We will discuss real-world case studies and present methods for integrating these technologies into existing workflows. Additionally, we will tackle the ethical implications and potential pitfalls of this evolution.
Martin Ingesen
Martin Ingesen is the founder of the offensive security company Kovert. He has won several hacking-related competitions in Norway and has been featured multiple times in news and media.
Martin's day (and night) job is to ensure that he and his team are up to date and equipped with the latest tactics, techniques, and procedures to deliver realistic, high-quality offensive security services to their clients.