Is AI killing cybersecurity?
By Nir Zuk, Founder and CEO of Cylake.
To understand the effect of AI on cybersecurity, let’s roll back the clock more than a decade. Back then, machine learning was already being used in cybersecurity to find patterns in data and create prediction models and remediation workflows. The ability to use historical datasets and statistical analysis to make predictions about an attacker’s behavior became a crucial part of cybersecurity. Where humans struggled with the vast amounts of data, machine learning allowed for much faster and more accurate threat mitigation.
Much of today’s discussion about AI is related to its additional and very promising generative capabilities. These generative models can operate on vastly more data to review and understand code and infrastructure, identify anomalies, find threats and vulnerabilities, and prioritize and automate remediation, to name a few areas. Does that mean cybersecurity products as we know them are in trouble?
I think about it differently. It is my belief that GenAI will segment cybersecurity offerings into ones that operate on an organization’s complete data and ones that don’t. The distinction is important because, for AI to be most effective, it needs to have unthrottled access to very large amounts of data – orders of magnitude more than what current cybersecurity products can base their models on.
So the question is really about the data, and less about the AI itself. What’s stopping AI today from having the necessary access to data? The answer, in my opinion, is twofold. First, the cloud has made it prohibitively costly for most companies to ingest, process, and retain the data needed for AI to truly power end-to-end cyber protection. Cybersecurity companies and the operators of their products faced difficult choices, and significant compromises had to be made to take some advantage of AI without placing it out of reach for most organizations. Second, there is the nature of the data itself. Many organizations, for regulatory, privacy, or security reasons, can’t have their data live in public infrastructure or be consumed by AI that runs in public infrastructure.
In other words, we’re dealing with a data access and sovereignty challenge to truly make AI work for cybersecurity. For organizations operating under dense regulation or handling highly sensitive data, this becomes more than an operational inconvenience. The economics and multi-tenant architecture of the public hyperscale cloud can make it structurally difficult to give AI unfettered access to complete security data without compromise. That does not mean the cloud is broken for cybersecurity. It means that, for a specific class of organizations, the assumptions that defined the cloud-era security model do not fully align with what AI-native security now requires.
Ironically, perhaps, the answer to this challenge comes back to where we started: on-premises. What if organizations could use all their data, store it for as long as they want, analyze it as much as they want, and have the AI that powers their cybersecurity protection be based on it? What if they could do so without having to make privacy or security compromises?
That was my thinking when I founded Cylake. Our mission is to create an AI-native platform to support organizations and enterprises that need sovereign, complete, and transparent agentic protection against cyberattacks. A platform without compromises: our customers can enjoy optimal security and governance under their full control.
I’m excited to update you on what we’ll be doing in the coming months.

100%. That’s precisely why we are building our solution to be on-prem based as well.
Great perspective. I’d add what I believe is a foundational architectural principle for AI platforms: AI applications should be designed across two distinct but connected domains: a service/agent domain and a data domain.
They need to communicate constantly, but they should not be governed or protected in the same way.
In this model, AI doesn’t necessarily need broad access to raw enterprise data. It needs governed access to the context layer - metadata, semantic relationships, and retrieval paths - with a built-in security layer at that boundary.
The AI operates on the "map" of the data first; direct access to sensitive resources happens only after a filtered, policy-controlled request is validated.
This is why AI-native security must be a platform design principle, not just an infrastructure choice. The Service Domain handles orchestration and execution control, while the Data Domain handles sovereignty, lineage, and policy enforcement.
This separation is still under-discussed, yet it’s exactly how secure AI platforms move from risky experiments to core, sovereign business capabilities.
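The two-domain pattern described above can be sketched in a few dozen lines. This is a minimal illustration under stated assumptions, not an actual Cylake or vendor API: every name here (ContextLayer, PolicyGate, DataDomain, the "triage-agent" role) is hypothetical. The agent first searches a metadata-only context layer (the "map"), and raw data is released only after a policy check at the domain boundary.

```python
from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    """The data domain's 'map': metadata and semantic relationships, no raw content."""
    catalog: dict = field(default_factory=dict)  # resource_id -> metadata

    def search(self, tag: str):
        # The agent browses metadata only; raw data never crosses this call.
        return [rid for rid, meta in self.catalog.items() if tag in meta["tags"]]

@dataclass
class PolicyGate:
    """Security layer at the boundary: validates each request before data is released."""
    allowed: set = field(default_factory=set)  # (agent_role, classification) pairs

    def check(self, agent_role: str, meta: dict) -> bool:
        return (agent_role, meta["classification"]) in self.allowed

@dataclass
class DataDomain:
    """Owns sovereignty and policy enforcement; the service domain never touches the store directly."""
    context: ContextLayer
    gate: PolicyGate
    store: dict = field(default_factory=dict)  # resource_id -> raw content

    def fetch(self, agent_role: str, resource_id: str):
        meta = self.context.catalog[resource_id]
        if not self.gate.check(agent_role, meta):
            raise PermissionError(f"{agent_role} denied access to {resource_id}")
        return self.store[resource_id]

# Usage: the agent reasons over the map first, then issues filtered, policy-checked requests.
domain = DataDomain(
    context=ContextLayer(catalog={
        "log-001": {"tags": ["auth"], "classification": "internal"},
        "pii-042": {"tags": ["auth"], "classification": "restricted"},
    }),
    gate=PolicyGate(allowed={("triage-agent", "internal")}),
    store={"log-001": "failed login burst from 10.0.0.7", "pii-042": "..."},
)

hits = domain.context.search("auth")  # metadata-only step: both resources are visible on the map
for rid in hits:
    try:
        print(rid, "->", domain.fetch("triage-agent", rid))
    except PermissionError as err:
        print("blocked:", err)
```

The key design choice is that the search step and the fetch step cross different boundaries: the agent can always see that a restricted resource exists and how it relates to others, but the raw content only leaves the data domain after the gate validates the request.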
That is why the direction you’re taking with Cylake is so relevant.