GPT-Based Malware

My research regarding how Large Language Models (LLMs) are being exploited by threat actors to generate malicious code, and how LLMs give rise to polymorphic malware.

This review study explores the growing use of Large Language Models (LLMs), such as ChatGPT, by threat actors to develop new and sophisticated forms of malware. Because tools like ChatGPT are openly accessible to the public, there is a rising risk that they will be exploited to craft polymorphic, evasive malware.

The research highlights the evolving threat landscape, in which traditional malware detection systems are being challenged by AI-enhanced threats. As conventional defenses improve, malicious actors are shifting toward GPT-based techniques to bypass security measures and automate the generation of malicious code.

Objectives

  • Examine how LLMs are enabling the creation of advanced malware.
  • Analyze recent trends in GPT-based attacks, including polymorphic behavior.
  • Discuss the limitations of current detection systems against AI-generated threats.
  • Propose possible mitigation strategies to reduce the risks posed by LLM-driven malware.

Key Concerns

  • LLMs can assist in evading detection by generating functionally equivalent variations of malicious code with minimal effort (see the sketch after this list).
  • The ease of access to tools like ChatGPT lowers the entry barrier for less skilled attackers.
  • There is a pressing need for adaptive security measures capable of identifying LLM-generated threats.
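
To make the first concern concrete, here is a minimal, deliberately benign Python sketch (the snippets are harmless stand-ins, not malware): a single renamed identifier yields a completely different cryptographic hash, defeating exact-signature matching, while a simple similarity measure still flags the pair as near-identical.

    import hashlib
    from difflib import SequenceMatcher

    # Two functionally identical snippets; only one identifier differs.
    # These are harmless stand-ins used purely to illustrate the problem.
    variant_a = "def greet(name):\n    return 'hello ' + name\n"
    variant_b = "def greet(user):\n    return 'hello ' + user\n"

    # An exact-signature engine keyed on file hashes sees two unrelated samples:
    print(hashlib.sha256(variant_a.encode()).hexdigest())
    print(hashlib.sha256(variant_b.encode()).hexdigest())

    # A similarity measure still recognizes near-identical code, which is the
    # direction adaptive, LLM-aware detection has to move in:
    print(SequenceMatcher(None, variant_a, variant_b).ratio())  # roughly 0.9

Real polymorphic engines apply far more aggressive transformations than a rename, but the avalanche effect shown here is why hash- and signature-based defenses alone cannot keep up with cheaply generated variants.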

This study emphasizes the importance of proactive security innovation in an era where AI tools can be both a boon and a threat to cybersecurity.