For a while now, ChatGPT has been a popular topic on the internet and on social media. The AI-powered chatbot has been trained to respond in depth to prompts and to follow instructions.
ChatGPT was created by OpenAI, an independent research organisation co-founded by Elon Musk among others. A new study demonstrates how hackers can use conversational bots to create phishing emails and malicious code.
According to a study by Check Point Research (CPR), AI models can be used to build a complete infection flow, from spear-phishing to running a reverse shell. The researchers created malicious code and phishing emails using ChatGPT and another OpenAI platform, Codex, an AI-based system that converts natural language into code.
First, CPR instructed ChatGPT to impersonate a hosting provider and create a phishing email that appears to come from the fictitious web-hosting company Host4u. Although OpenAI cautioned that this content might go against its content policy, the model still produced the phishing email.
The researchers then asked ChatGPT to refine the email through a series of prompts, such as replacing the link in the email body with text encouraging recipients to download an Excel file.
The next step was to have ChatGPT write the malicious VBA code embedded in the Excel document. The researchers noted that the initial code was somewhat crude and utilised libraries like WinHttpReq, but after a few brief iterations ChatGPT produced improved code.
While ChatGPT can perform a broader range of tasks, Codex specialises in generating code, so the researchers used it to build a simple reverse shell with a placeholder IP address and port. They also asked Codex to strengthen the code's defences. The study concludes that while the growing role of LLMs and AI in the cyber world presents many opportunities, it also carries real risks.