
5 ways CISOs can prepare for generative AI’s security challenges and opportunities

ChatGPT and other generative AI technologies are reshaping cybersecurity, but they also introduce risk. CISOs must weigh performance gains against threats that are still poorly understood: AI is improving defensive precision even as it powers new attack tools such as FraudGPT.

Five ways CISOs and their teams can prepare

1. Securing ChatGPT and generative AI engagements in the browser

Despite the risk of sensitive data leaking into LLMs, organizations are drawn to the productivity gains promised by gen AI and ChatGPT. In interviews, CISOs say they remain divided on how to define AI governance. To be effective, any solution to this challenge must safeguard access at the browser, app, and API levels. Many startups and larger cybersecurity vendors are building solutions in this space; Nightfall AI's recently announced security mechanism is notable. The company's configurable data rules and remediation insights let users self-correct, and its platform gives CISOs the visibility and control to deploy AI without compromising data security.
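Browser- and gateway-level data controls of this kind often begin as simple pattern-based rules. The sketch below is a minimal, hypothetical illustration of configurable detection and redaction rules; the pattern names and rules are invented for illustration and do not reflect Nightfall AI's actual product.

```python
import re

# Hypothetical data rules; production DLP tools ship far larger detector sets.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of rules the prompt violates (empty list if clean)."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def redact_prompt(prompt: str) -> str:
    """Mask matched spans so a flagged prompt can still be sent to the LLM."""
    for name, pat in SENSITIVE_PATTERNS.items():
        prompt = pat.sub(f"[REDACTED:{name}]", prompt)
    return prompt
```

A browser extension or forward proxy could call `scan_prompt` before a request leaves the organization, surfacing the violated rule back to the user as a remediation insight.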

2. Stay on watch for new attack vectors and breach types

SOC teams are seeing a rise in social engineering, phishing, malware, and business email compromise (BEC) attacks that they attribute to generative AI. While attacks against LLMs and AI applications are still in their early stages, CISOs are already extending zero-trust practices to mitigate these risks.

This means continually monitoring and analyzing gen AI traffic patterns for anomalies that might signal emerging threats, and regularly testing and red-teaming systems under development to surface vulnerabilities. Zero trust cannot eliminate every risk, but it makes organizations markedly more resilient to gen AI-driven threats.
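Monitoring gen AI traffic for anomalies can start with something as simple as a statistical baseline. A minimal sketch, assuming per-interval request counts are already being collected (the z-score metric and threshold are illustrative choices, not a prescribed method):

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates more than `threshold` standard
    deviations from the historical baseline (a basic z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold
```

In practice, a SOC would feed this from per-user or per-endpoint counters of LLM API calls and alert when a spike suggests abuse or data exfiltration.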

3. Finding and resolving micro-segmentation gaps and mistakes

Airgap Networks, a micro-segmentation pioneer, has been named one of the top 20 zero-trust startups of 2023. The company's agentless micro-segmentation shrinks the attack surface of every network endpoint and integrates into existing networks without device changes, downtime, or hardware upgrades. Airgap also debuted ThreatGPT, which pairs graph databases with GPT-3 models in its Zero Trust Firewall (ZTFW) to deliver new threat insights: the GPT-3 models interpret plain-language queries to identify security vulnerabilities, while the graph databases supply context on endpoint traffic relationships.
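The endpoint traffic relationships a graph database captures can be pictured with a plain adjacency map. The sketch below is a generic illustration (not Airgap's implementation) of why shrinking those relationships shrinks a compromised host's reach:

```python
from collections import defaultdict

def build_traffic_graph(flows: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Adjacency map of observed endpoint-to-endpoint flows."""
    graph: dict[str, set[str]] = defaultdict(set)
    for src, dst in flows:
        graph[src].add(dst)
        graph[dst].add(src)
    return graph

def blast_radius(graph: dict[str, set[str]], compromised: str) -> set[str]:
    """Every endpoint reachable from a compromised host, i.e. the set
    that micro-segmentation aims to keep as small as possible."""
    seen, stack = set(), [compromised]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen - {compromised}
```

Segmenting the network removes edges from this graph, so a breached laptop can no longer reach databases it had no business talking to.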


4. Preventing generative AI-based supply chain threats

Security is too often checked only just before deployment, at the end of the software development lifecycle (SDLC). In an era of growing generative AI threats, security must pervade the SDLC, with continuous testing and verification. API security must be a priority as well, with API testing and security monitoring automated across all DevOps pipelines.

While these practices are not immune to generative AI attacks, they considerably raise the bar and enable faster threat identification. Integrating security throughout the SDLC and hardening API defences will help organizations withstand AI-powered attacks.
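One inexpensive form of automated API security testing is verifying that unauthenticated requests are rejected. A hedged sketch, assuming a CI step has already probed each endpoint without credentials and recorded the HTTP status codes (the endpoint names are invented):

```python
def audit_auth_enforcement(unauth_results: dict[str, int]) -> list[str]:
    """Given {endpoint: HTTP status} from unauthenticated probes, return
    endpoints that failed to reject the request. Anything other than
    401 (Unauthorized) or 403 (Forbidden) is a potential gap."""
    return [ep for ep, status in unauth_results.items() if status not in (401, 403)]
```

A pipeline could fail the build whenever this list is non-empty, keeping the check continuous rather than a one-off pre-deployment gate.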

5. Approaching every generative AI app, platform, tool, and endpoint with zero-trust

A zero-trust approach to every interaction with AI tools, applications, and platforms, and with the endpoints they rely on, belongs in any CISO's playbook. Continuous monitoring and dynamic access controls provide the granular visibility needed to enforce least-privilege access and always-on verification of people, devices, and the data they use, both at rest and in transit.

CISOs are particularly concerned that generative AI may open new threat vectors they cannot yet defend against. Protecting corporate LLMs against query attacks, prompt injections, model manipulation, and data poisoning is therefore a priority.
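Defending against prompt injection often begins with simple deny-list heuristics before layering on trained classifiers. A minimal sketch; the patterns below are invented for illustration and are in no way comprehensive:

```python
import re

# Illustrative markers only; real injection attempts are far more varied.
INJECTION_MARKERS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the |your )?system prompt",
    r"reveal (the |your )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Heuristic screen for obvious prompt-injection attempts."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_MARKERS)
```

Flagged inputs can be blocked or routed for review; a zero-trust posture treats even "clean" inputs as untrusted and constrains what actions the model's output is allowed to trigger.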


CISOs and CIOs are debating whether gen AI tools like ChatGPT should be left free to boost productivity or must be governed. The rising frequency of AI-based attacks is worrying, yet few boards of directors want to commit new capital expenditure to counter them. Instead, many organizations are instituting zero-trust programs to limit the blast radius of AI attacks and to provide a first line of defence for identities and privileged access credentials.