Recently, a group of researchers from the University of Illinois Urbana-Champaign has shown that GPT-4, the latest iteration of the model, can identify security vulnerabilities without human assistance. Additionally, it can exploit so-called one-day flaws, publicly disclosed but unpatched vulnerabilities, by drawing on their Common Vulnerabilities and Exposures (CVE) descriptions.
In their study, the researchers compiled a dataset of 15 critical-severity vulnerabilities from the Common Vulnerabilities and Exposures (CVE) list to test how GPT-4 performs against them. The results showed that GPT-4 was able to exploit 87 percent of the vulnerabilities, while GPT-3.5 was unable to exploit any.
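As a quick sanity check on the figures, the reported 87 percent rate lines up with 13 of the 15 vulnerabilities in the dataset; the count of 13 is an assumption inferred from the rounded percentage, not a number stated in the article:

```python
# Rough check of the reported success rate: assuming 13 of the 15
# vulnerabilities were exploited, the rate rounds to 87 percent.
exploited = 13  # assumed count implied by the reported rate
total = 15      # dataset size from the study
rate = exploited / total
print(f"{rate:.0%}")
```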
GPT-4's success in exploiting the vulnerabilities depended on access to their full CVE descriptions. As a mitigation strategy, this suggests that security organizations may need to rethink how much detail they include when publishing vulnerability reports.
To prevent cybercriminals from exploiting such vulnerabilities with GPT-4, the researchers recommend proactive security measures such as regularly applying security package updates. They stress the importance of staying ahead of the threats posed by advances in language models.