General-purpose Large Language Models have proven their power across industries, transforming how we process and understand information. But in cybersecurity, language alone isn’t enough. Investigations demand a different kind of intelligence, one that combines analytical reasoning with specialized cybersecurity knowledge: the ability to reason over logs, alerts, and digital evidence, and to follow complex trails to uncover meaningful insights.
To meet this challenge, we developed Kindi, a specialized small language model designed to assist with cybersecurity investigations.
In security, mistakes aren’t just inconvenient; they can be costly. We needed something different: a model that assists with real investigations, not one that just talks about them.
In this blog, we’ll take you inside Kindi to show how it helps security teams tackle complex investigations and make sense of overwhelming amounts of security data.
Inside Kindi
Security teams are under constant pressure. Alerts pile up, attacks are increasingly sophisticated, and traditional tools can’t keep up. General-purpose LLMs can summarize or explain, but they don’t follow structured investigative workflows, reason consistently and reliably across signals, or connect the dots the way a human analyst does.
What teams need is intelligence that behaves like a human analyst and delivers actionable insights. That’s exactly what Kindi was built to do.
Our goal was clear: give an LLM the cybersecurity knowledge, reasoning skills, and investigative workflows of a professional analyst, so it could take alerts, logs, and threat intelligence and produce actionable insights.
To achieve this, we leveraged the distil labs platform to fine-tune our custom model on a variety of key security datasets.
The fine-tuning process was critical. It wasn’t just about exposing the model to security data; it was designed to teach the model to simulate the investigative workflows of a human analyst. During fine-tuning, the model learned to follow step-by-step reasoning, analyze attack patterns, and present conclusions in a clear, actionable format. Essentially, fine-tuning transformed a general-purpose language model into a specialized investigative assistant.
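As an illustration, a single fine-tuning record might pair a raw alert and its surrounding logs with the step-by-step reasoning and structured conclusion we want the model to produce. The sketch below is a minimal, hypothetical example; the field names and values are illustrative assumptions, not Kindi’s actual training data.

```python
# A minimal sketch of one supervised fine-tuning record, assuming a
# structured input -> structured target format. All field names and
# values are illustrative, not Kindi's actual training schema.
training_example = {
    "input": {
        "alert": "Multiple failed SSH logins followed by a successful login from 203.0.113.42",
        "context_logs": [
            "sshd: Failed password for root from 203.0.113.42 (x50)",
            "sshd: Accepted password for admin from 203.0.113.42",
        ],
    },
    "target": {
        "reasoning_steps": [
            "Repeated failures followed by a success suggests a brute-force attempt.",
            "Success on a different account (admin) indicates likely credential compromise.",
        ],
        "verdict": "likely_brute_force_compromise",
        "recommended_actions": [
            "Disable the 'admin' account and rotate credentials.",
            "Block 203.0.113.42 and review lateral movement from the host.",
        ],
    },
}
```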
After fine-tuning Kindi to behave like a human analyst, we applied knowledge distillation to optimize its performance and usability in real-world environments while preserving its investigative reasoning and analytical capabilities.
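One common way to implement this kind of distillation is to train the small student model to match a larger teacher’s softened output distribution alongside the usual hard labels. Here is a minimal PyTorch sketch of that standard loss; the temperature and mixing weight are assumed hyperparameters, not Kindi’s actual settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard response-based distillation: blend cross-entropy on the
    ground-truth labels with a KL term pulling the student's softened
    distribution toward the teacher's. T and alpha are tunable assumptions."""
    # Soft targets from the teacher, softened by temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Hard-label cross-entropy on the original targets.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

In this setup, the teacher would be the larger fine-tuned model and the student the smaller one that actually ships.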
By combining fine-tuning and distillation, Kindi gains the best of both worlds: the reasoning and investigative approach of a human analyst, with the speed and efficiency needed for real-world security operations.
Kindi’s Performance
To assess Kindi’s effectiveness in cybersecurity investigations, we compared its performance against several larger LLMs across a variety of attack scenarios.
We used a combination of two evaluation methods:
The table below presents the results of this evaluation:
Our benchmarking shows that Kindi delivers reasoning performance that approaches, and in some cases surpasses, larger models in a smaller, faster package: it achieves perfect schema adherence, low hallucination rates, and stronger explainability, while approaching their performance in relevance.
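Of these metrics, schema adherence is the most mechanical to measure: validate every model response against the expected output schema and report the pass rate. Below is a minimal sketch of such a check, assuming JSON outputs and using the jsonschema library; the schema fields are illustrative, not Kindi’s actual output format.

```python
import json
from jsonschema import validate, ValidationError

# Illustrative output schema for an investigation report (an assumption,
# not Kindi's actual format).
REPORT_SCHEMA = {
    "type": "object",
    "required": ["verdict", "reasoning_steps", "recommended_actions"],
    "properties": {
        "verdict": {"type": "string"},
        "reasoning_steps": {"type": "array", "items": {"type": "string"}},
        "recommended_actions": {"type": "array", "items": {"type": "string"}},
    },
}

def schema_adherence(responses):
    """Fraction of raw model responses that parse as JSON and match the schema."""
    passed = 0
    for raw in responses:
        try:
            validate(instance=json.loads(raw), schema=REPORT_SCHEMA)
            passed += 1
        except (json.JSONDecodeError, ValidationError):
            pass
    return passed / len(responses) if responses else 0.0
```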
What’s next
The first version of Kindi has already shown how a security-specialized model can assist analysts in reasoning through alerts and producing actionable insights. The next version takes this further: it will be fine-tuned on a broader set of data and feedback from experienced analysts.
This expanded training will help Kindi improve its accuracy, reasoning, and reliability; handle complex scenarios better; correlate evidence more effectively; and assist analysts in even the most challenging investigations.
Read more to see Kindi in action and learn how it helps analysts tackle real-world incidents.