
In 2023, Kaspersky's Digital Footprint Intelligence service discovered nearly 3,000 dark web posts discussing the use of ChatGPT for illegal purposes or tools that rely on AI technologies. Although the chatter peaked in March, discussions persist.

“Threat actors are actively exploring various schemes to implement ChatGPT and AI. Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond. The popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents into some cybercriminal forums. In addition, threat actors tend to share jailbreaks via various dark web channels – special sets of prompts that can unlock additional functionality – and devise ways to exploit legitimate tools, such as those for pentesting, based on models for malicious purposes,” explained Alisa Kulishenko, digital footprint analyst at Kaspersky.

Beyond ChatGPT itself, considerable attention is being given to projects such as XXXGPT and FraudGPT. These language models are marketed on the dark web as alternatives to ChatGPT, boasting additional functionality and none of the original's limitations.

Stolen ChatGPT accounts for sale
One more threat to users and companies is the market for accounts for the paid version of ChatGPT. In 2023, another 3,000 posts (in addition to those mentioned above) advertising ChatGPT accounts for sale were identified across the dark web and shadow Telegram channels. These posts either distribute stolen accounts or promote auto-registration services that mass-create accounts on request. Notably, certain posts were published repeatedly across multiple dark web channels.

“While AI tools themselves are not inherently dangerous, cybercriminals are trying to come up with efficient ways of using language models, thereby fueling a trend of lowering the entry barrier into cybercrime and, in some cases, potentially increasing the number of cyberattacks. However, it’s unlikely that generative AI and chatbots will revolutionise the attack landscape – at least in 2024. The automated nature of cyberattacks often means automated defenses. Nonetheless, staying informed about attackers’ activities is crucial to being ahead of adversaries in terms of corporate cybersecurity”, says Alisa Kulishenko, digital footprint analyst at Kaspersky.

Detailed research is presented on the official Kaspersky Digital Footprint Intelligence website. To avoid threats related to cybercriminal activity in the shadow segment of the internet, it is worth implementing the following security measures:

  • Use Kaspersky Digital Footprint Intelligence to help security analysts explore an adversary's view of their company's resources and promptly discover the potential attack vectors available to them. It also raises awareness of existing threats from cybercriminals, so that defenses can be adjusted accordingly or countermeasures taken in time.
  • Choose a reliable endpoint security solution such as Kaspersky Endpoint Security for Business that is equipped with behavior-based detection and anomaly control capabilities for effective protection against known and unknown threats.
  • Dedicated services can help combat high-profile attacks. The Kaspersky Managed Detection and Response service can identify and stop intrusions in their early stages, before the perpetrators achieve their goals. If you encounter an incident, the Kaspersky Incident Response service will help you respond and minimise the consequences – in particular, identify compromised nodes and protect the infrastructure from similar attacks in the future.

