Why generative AI is a double-edged sword for the cybersecurity sector

Large language models (LLMs) and generative AI have the power to transform the security sector by increasing productivity, accuracy, and efficiency. By assisting with code authoring, code scanning, and real-time threat analysis, these technologies can make security teams more effective. But because their use is still relatively new, organizations are still working out how to apply them responsibly. The same capabilities that benefit defenders can also be abused by adversaries looking for ways to turn them to nefarious ends. Organizations must therefore manage the difficulties and risks that come with adopting these technologies.

Understanding generative AI’s potential and knowing how to employ it ethically will be crucial as it gets more sophisticated and widely deployed.

Leveraging LLMs and generative AI 

Generative AI models such as ChatGPT could revolutionize programming and coding. While they cannot yet write complete programs from scratch, they can help turn concepts for programs or apps into working code. Gen AI is a good starting point because it makes modifying existing code simple, freeing developers and engineers to focus on work that matches their areas of expertise.
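As an illustration of that workflow, a team might route existing code through an LLM for a constrained edit rather than asking it to author a program from nothing. The following is a minimal sketch using the OpenAI Python SDK; the model name, prompts, and helper name are illustrative assumptions, not a recommendation from this article.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_refactor(source: str, instruction: str) -> str:
    """Ask an LLM to modify existing code rather than write it from scratch."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a careful code reviewer. Return only the revised code."},
            {"role": "user",
             "content": f"{instruction}\n\n```python\n{source}\n```"},
        ],
    )
    return response.choices[0].message.content

legacy = "def add(a, b): return a+b"
print(suggest_refactor(legacy, "Add type hints and a docstring."))
```

The key design point is the narrow instruction: the model edits code the team already owns and understands, which keeps a human in position to review the diff.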

Attackers can benefit from gen AI and LLMs too, because these models generate output based on pre-existing material. They can produce malicious code that is sufficiently different from known samples to avoid detection, letting attackers create customized payloads or attacks that circumvent security measures. One way attackers already employ AI is to produce webshell variants: malicious programs used to maintain persistence on compromised servers. Combined with a remote code execution (RCE) vulnerability on a compromised server, these variants can evade detection.

LLMs and AI enable more advanced attacks and zero-day exploits

Well-funded attackers can use generative AI and LLMs to find weaknesses in source code, work that previously demanded time-consuming, highly skilled techniques. These attackers are also capable of reverse engineering commercial software and examining the source code of open-source projects. They are more likely to employ open-source LLMs, which lack the safety guardrails of commercial services and frequently have tools or plugins for automating this process. The result is a rise in harmful exploits and zero-day attacks. Even so, organizations' code bases already contain a large number of open vulnerabilities.

Introducing AI-generated code without scanning it will lead to a spike in unresolved vulnerabilities born of bad coding practices. Advanced threat groups and nation-state attackers will be ready to take advantage of these weaknesses, and generative AI tooling will make their exploits easier to build.
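One pragmatic guardrail is to refuse AI-generated code that fails a static scan before it ever lands in the code base. Below is a minimal sketch assuming the generated code is Python and that Bandit, an open-source Python static analyzer, is installed; the function and variable names are hypothetical.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def scan_generated_code(code: str) -> bool:
    """Run Bandit over AI-generated code and accept it only if no issues are flagged."""
    with tempfile.TemporaryDirectory() as tmp:
        target = Path(tmp) / "generated.py"
        target.write_text(code)
        # Bandit exits non-zero when it reports issues, so the return
        # code doubles as a pass/fail signal for this gate.
        result = subprocess.run(
            ["bandit", "-q", str(target)],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print(result.stdout, file=sys.stderr)
            return False
    return True

# A deliberately unsafe snippet: shell=True with untrusted input is a classic finding.
generated = "import subprocess\nsubprocess.call(user_input, shell=True)\n"
if not scan_generated_code(generated):
    print("Rejected: AI-generated code failed the security scan.")
```

The same gate can be wired into a CI pipeline so that no AI-assisted contribution merges unscanned.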

Getting ahead of attackers

There are no simple answers to this problem, but organizations should take precautions to ensure these new technologies are used responsibly and safely. One approach is to operate the way attackers do: by employing AI tools to scan their own code bases for vulnerabilities, organizations can find exploitable parts of their code and fix them before attackers strike. This is especially crucial for companies that want to leverage LLMs and other AI technologies to help with code creation. It is equally important to confirm that any open-source code an AI imports from an existing source has no known security flaws; a lightweight check against a public vulnerability database is sketched below.
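As a hedged sketch of that open-source check, the snippet below queries the public OSV.dev vulnerability database for a given package and version. The package name and version in the example are illustrative only; any dependency an AI assistant pulls in could be checked the same way.

```python
import json
import urllib.request

def known_vulnerabilities(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the public OSV.dev database for advisories affecting a package version."""
    query = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

# Illustrative check: flag any published advisories before accepting the dependency.
for vuln in known_vulnerabilities("requests", "2.30.0"):
    print(vuln["id"], "-", vuln.get("summary", ""))
```

Running such a check automatically whenever AI-suggested code introduces a new dependency turns the precaution described above into routine hygiene.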

Conclusion

The use of generative AI and LLMs worries security experts, and anticipated social concerns have even prompted calls for an "AI pause." While these technologies can increase productivity, businesses must carefully examine how they will use them and put the right protections in place before letting AI run amok.