AI fails to impress cybercriminals, study reveals

The Challenges Cybercriminals Face in Adopting AI

A recent analysis has found that cybercriminals are struggling to integrate artificial intelligence (AI) effectively into their operations. The study, conducted by the University of Edinburgh, examined more than 100 million posts from underground forums collected in a database called CrimeBB, reviewing the data both manually and with large language models (LLMs).

Despite growing interest in AI tools among cybercriminals, the research found that these technologies have not significantly altered how they operate. According to the study, many forum discussions describe AI tools as “not particularly useful,” and there is “no significant evidence” that hackers have successfully used AI to improve their hacking activities, whether for learning new techniques or for developing more effective tools.

AI’s Limited Impact on Hacking Activities

The study highlights that AI coding assistants are most beneficial to people who already have a strong foundation in programming. For those without such skills, these tools offer little advantage when attempting to break into devices or find security workarounds.

One post quoted in the study states: “You’ve gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it.” This suggests that the effectiveness of AI in the hands of cybercriminals is limited by their existing knowledge and skills.

AI’s Role in Illegal Online Activities

So far, the impact of AI on illegal online activities has been primarily seen in areas that are easy to automate. These include the creation of social media bots, some romance scams, search engine optimization (SEO) fraud, and the development of fake websites designed to boost search rankings for advertising revenue.

Although some experienced hackers use chatbots to answer coding questions or generate “cheatsheets” to assist with their work, they rely largely on mainstream, legitimate AI products such as Anthropic’s Claude or OpenAI’s Codex. These were not designed for cybercrime, unlike models such as WormGPT, which hackers created specifically to produce malware code and phishing emails.

Bypassing AI Safety Measures

Many of the forum posts analyzed in the study involve cybercriminals seeking techniques to bypass the safety measures of these mainstream AI models. However, they often struggle to get these systems to ignore their built-in safeguards. As a result, they are forced to turn to older, lower-quality open-source models that are easier to manipulate.

These models, however, tend to be less effective and require significant resources to use. The researchers found that while cybercriminals are experimenting with these alternatives, they are not achieving the same level of success as they might with more advanced AI tools.

The Effectiveness of AI Guardrails

The study concludes that the guardrails put in place by AI companies are currently working to prevent cybercriminals from exploiting these technologies. While there is ongoing experimentation and attempts to bypass these safeguards, the overall impact of AI on cybercrime remains limited.

This finding suggests that, at least for now, the barriers set by AI developers are proving effective in limiting the potential misuse of these powerful tools by those with malicious intent.
