OpenAI Confirms ChatGPT Is Used To Help Write Malware



One of the more interesting features of ChatGPT's AI service is its ability to generate a wide variety of content, including programming code if you know how to ask for it. What's interesting, and perhaps troubling, is that this extends to many kinds of programming, including building malware.


That appears to be what happened recently: OpenAI said it has blocked more than 20 operations by hacking groups that used ChatGPT for purposes including developing and debugging malware and disseminating false information.


The activity was discovered earlier this year, and the artificial intelligence company has worked to ensure that ChatGPT's technology is not turned to the benefit of malicious parties. Hackers from China, Iran, and elsewhere, described as state-sponsored or "hackers working for the government," reportedly used ChatGPT to speed up the development of attack vectors deployed against the IT infrastructure of countries such as the United States.


It has also been reported that malware built with ChatGPT's help can steal details such as contact lists and files stored on the computer, take screenshots of the user's software and desktop, view the user's web browsing history, and even obtain the location of the compromised machine.
