Hackers Use ChatGPT to Create Malware


Cybercriminals are leveraging ChatGPT through Telegram bots that can write malware and code for stealing data.

So far, if a user asks ChatGPT to write a phishing email impersonating a bank or to create malware, it refuses to comply.


Hackers are therefore looking for loopholes to get around these restrictions. Active discussions on underground forums reveal how to use the OpenAI API to bypass ChatGPT's limitations.



"This is mostly done by creating Telegram bots that use the API (OpenAI). These bots are advertised on hacking forums to increase their exposure," according to CheckPoint Research (CPR), quoted from News18.


Previously, the cybersecurity firm found cybercriminals using ChatGPT to improve the code of a basic infostealer malware dating back to 2019.


There has been much discussion and research about how cybercriminals are leveraging the OpenAI platform, specifically ChatGPT, to generate malicious content such as phishing emails and malware.


The current version of the OpenAI API is used by external applications and has very few anti-abuse measures in place.


This allows cybercriminals to use these platforms to create malicious content, such as phishing emails and malware code, without the restrictions the ChatGPT web interface imposes.


On underground forums, CPR found cybercriminals advertising newly created services and Telegram bots that use the OpenAI API without restrictions.


"Cybercriminals create basic scripts that use OpenAI APIs to bypass anti-abuse restrictions," the security researchers wrote.



The cybersecurity firm also noted attempts by Russian cybercriminals to circumvent OpenAI's restrictions and put ChatGPT to malicious use.


"Cybercriminals are increasingly interested in ChatGPT, because the AI technology behind it can make hackers more cost-effective," said the researcher.
