ChatGPT promises to streamline a number of everyday tasks, such as searching for information, writing code, and drafting emails. It can also facilitate the development of malware, scams, and other attacks. Cybercriminals are sidestepping its restrictions and putting the artificial intelligence to do their dirty work.
OpenAI, the company behind ChatGPT, has built safeguards into the model to prevent it from being used to develop malicious programs or information-theft campaigns.
When asked directly to write a phishing email or malware code, for example, the program responds that it cannot perform these tasks because they are illegal.
However, cybercriminals have found ways to make it perform such work anyway.
Telegram bots and scripts bypass ChatGPT's safeguards
Check Point Research, the lab at cybersecurity firm Check Point, discovered two methods used by malicious actors.
One involves the GPT-3 API, which underpins ChatGPT. The API does not carry the same restrictions as the chat application that has become famous in recent months.
Using the API, criminals built a Telegram bot that connects to the model while skipping the guardrails imposed on ChatGPT itself.
Using this bot, Check Point researchers got the algorithm to write a message impersonating a bank, for use in a phishing email, and a script that collects PDF files and sends them to an external server via FTP.
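To illustrate the mechanism at a high level (without the malicious prompts), here is a minimal sketch of such a Telegram-to-GPT-3 bridge. The tokens are hypothetical placeholders, and the era-appropriate text-davinci-003 completions endpoint is assumed; the bot simply relays each chat message to the API and returns the raw completion, which is why none of the chat interface's guardrails apply.

import requests

# Hypothetical placeholder credentials, not real tokens.
TELEGRAM_TOKEN = "YOUR_BOT_TOKEN"
OPENAI_API_KEY = "YOUR_OPENAI_KEY"
TG_API = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}"

def complete(prompt: str) -> str:
    """Send the prompt straight to the GPT-3 completions API."""
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"].strip()

def main() -> None:
    offset = None
    while True:
        # Long-poll Telegram for new messages.
        updates = requests.get(
            f"{TG_API}/getUpdates",
            params={"timeout": 30, "offset": offset},
            timeout=60,
        ).json()["result"]
        for update in updates:
            offset = update["update_id"] + 1
            message = update.get("message", {})
            text = message.get("text")
            if text:
                # Reply with the raw API completion.
                requests.post(
                    f"{TG_API}/sendMessage",
                    json={"chat_id": message["chat"]["id"], "text": complete(text)},
                    timeout=30,
                )

if __name__ == "__main__":
    main()

The point is not the bot itself but the routing: because each request goes directly to the completions endpoint, the response is subject only to whatever server-side policies the API enforced at the time, not to the stricter filtering of the ChatGPT interface.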
The bot's developers even run the tool as a business: the first 20 messages are free, and after that they charge $5.50 per 100 messages.
This isn’t the only way to circumvent the safeguards. Check Point researchers also found, on an online forum, a script that bypasses ChatGPT's restrictions and lets the user direct it however they want, even to develop malware.
If you ask it the right way, it complies
This isn’t the first time Check Point Research has caught ChatGPT creating malware.
In January 2023, researchers found posts on hacker forums reporting the use of the artificial intelligence to improve code, complete scripts, and even create malware from scratch.
In response, OpenAI added restrictions to prevent the tool from being exploited in this way.