This review explores the growing use of Large Language Models (LLMs) such as ChatGPT by threat actors to develop new and sophisticated forms of malware. With these AI tools openly accessible to the public, there is an increasing risk that they will be exploited to craft polymorphic and evasive malware.
The research highlights an evolving threat landscape in which traditional, signature-based malware detection systems are increasingly challenged by AI-enhanced threats. As these conventional defenses improve, malicious actors are shifting toward GPT-based tooling to bypass security measures and automate the generation of malicious code.
This study emphasizes the importance of proactive security innovation in an era where AI tools can be both a boon and a threat to cybersecurity.