Researchers from Abu Dhabi’s Khalifa University have tackled a pressing problem for security teams: how to use AI assistants without sending sensitive data to outside servers. They built RedSage, an open-source AI model that organizations can run on their own computers, keeping every prompt and response in-house.
The research team partnered with the University of Bonn and the University of Milan to create this specialized cybersecurity assistant. Their work earned acceptance to ICLR 2026, one of the world’s top AI research conferences taking place in Rio de Janeiro this April.
How RedSage Changes Security Operations
RedSage packs 8 billion parameters into a model small enough to run on standard office hardware. Companies can deploy it on consumer-grade graphics cards rather than expensive cloud infrastructure.
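To see why 8 billion parameters fits consumer graphics cards, some back-of-envelope memory arithmetic helps. The numbers below are our own illustrative estimates of weight memory at common precisions, not figures from the RedSage paper:

```python
# Back-of-envelope VRAM estimate for an 8B-parameter model's weights.
# Illustrative assumption: memory is dominated by the weights themselves;
# activations and KV cache add extra overhead on top of these figures.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in gigabytes."""
    return params_billion * 1e9 * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(8, 2.0)   # 16-bit weights, a common inference default
int4_gb = weight_memory_gb(8, 0.5)   # 4-bit quantized weights

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {int4_gb:.1f} GB")
```

At 16-bit precision the weights alone need about 16 GB, which fits a single 24 GB consumer card; with 4-bit quantization they drop to roughly 4 GB, within reach of mainstream 12 GB GPUs.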
The model handles diverse security work. It understands frameworks like MITRE ATT&CK, the industry-standard catalog of the tactics and techniques adversaries use to attack systems. It knows the offensive techniques ethical hackers use to test defenses, and it can work with various security tools without constant human guidance.
Testing shows RedSage outperforms competing models by significant margins. It scored 5.59 points higher than baseline models on cybersecurity benchmarks. On general AI tasks measured by the Open LLM Leaderboard, it beat competitors by 5.05 points.
The instruction-tuned version of RedSage even surpassed Qwen3-32B, a model four times its size. This efficiency matters because smaller models cost less to run and respond faster.
Building a Security Expert From Scratch
The research team took a different approach than most AI developers. Instead of training a general model and hoping it learns security knowledge, they fed it cybersecurity information from the start.
They curated 11.8 billion tokens of security-focused training data. This collection spans 28,600 documents covering security frameworks, attack techniques, and defensive tools. The team filtered massive amounts of web content and manually selected high-quality resources.
Training data alone doesn’t create an expert assistant. The researchers designed an automated system that simulates how security professionals actually work. This pipeline generated 266,000 multi-turn conversations showing realistic security workflows.
They released three versions of RedSage for different uses. RedSage-8B-Base gives developers a foundation for further customization. RedSage-8B-Ins handles multi-turn conversations and provides step-by-step explanations. RedSage-8B-DPO serves as the production-ready assistant with refined behavior.
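A minimal sketch of what a multi-turn exchange with an instruction-tuned variant like RedSage-8B-Ins might look like, assuming the role-based chat format most open instruction-tuned models accept. The prompt template below is purely illustrative; the real template would come from the released tokenizer configuration:

```python
# Sketch of a multi-turn security conversation in the role-based chat format
# common to open instruction-tuned models. The "<|role|>" template here is an
# illustrative assumption, not the actual RedSage-8B-Ins prompt format.

messages = [
    {"role": "system", "content": "You are a cybersecurity assistant."},
    {"role": "user", "content": "What is MITRE ATT&CK technique T1059?"},
    {"role": "assistant", "content": "T1059 is Command and Scripting Interpreter..."},
    {"role": "user", "content": "Suggest a detection approach for PowerShell abuse."},
]

def render(messages: list[dict]) -> str:
    """Flatten chat turns into a single prompt string (illustrative template)."""
    return "\n".join(f"<|{m['role']}|>\n{m['content']}" for m in messages)

prompt = render(messages)
print(prompt.count("<|user|>"))  # two user turns in this conversation
```

The point of the multi-turn structure is that the model sees the whole conversation each time, so a follow-up like "suggest a detection approach" can build on the technique discussed in the previous turn.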
Testing Real Security Knowledge
The team created RedSage-Bench to measure how well AI models understand cybersecurity. This benchmark includes 30,000 multiple-choice questions testing technical knowledge and skills. Another 240 open-ended questions assess how models apply their knowledge to realistic scenarios.
Rather than just checking for correct answers, the evaluation uses an AI judge to assess the quality and accuracy of detailed responses. This approach mirrors how human experts would evaluate each other’s work.
Privacy Pressures Drive On-Premises AI

Organizations using closed AI services like ChatGPT or Claude face privacy risks. These services send prompts and data to external servers controlled by other companies. Security teams working with threat intelligence, vulnerability reports, or incident data cannot risk this exposure.
On-premises deployment solves this problem. Companies keep all data within their own infrastructure. They maintain complete control over how the model processes information. They can customize the model for their specific security environment without sharing proprietary knowledge.
UAE Builds AI Research Reputation
Khalifa University has developed multiple specialized AI models beyond RedSage. The university’s 6G Research Centre previously released TelecomGPT for the telecommunications industry. They created both global and Arabic versions of this model.
The team also published Open-Telco LLM Benchmarks in collaboration with GSMA, the global mobile industry association. These projects support UAE goals to implement 6G networks by 2030.
This pattern of building sector-specific AI demonstrates strategic thinking. Rather than competing with giant general-purpose models from major tech companies, UAE researchers target specialized domains where focused expertise delivers better results.
The acceptance of RedSage research to ICLR 2026 validates this approach. The conference attracts top AI researchers worldwide. Getting research accepted there signals that UAE institutions contribute meaningful advances to the field.
Open Access Accelerates Progress
The research team is releasing all models, datasets, and code publicly. This openness allows other researchers to verify results, build on the work, and create their own specialized models.
Many previous cybersecurity AI projects kept their data and methods private. This secrecy prevents the research community from learning what works and what doesn’t. It forces every team to solve the same problems independently.
Open-source release particularly benefits smaller organizations. A startup or mid-sized company can deploy RedSage without paying licensing fees. They can modify it for their specific needs. They can examine exactly how it works rather than trusting a black box.
Organizations can download RedSage models from Hugging Face, access the code on GitHub, and read the full research paper. This transparency builds trust and enables rapid improvement through community contributions.
Implications for Security Teams
Security operations teams spend significant time on repetitive tasks. They analyze logs, investigate alerts, research vulnerabilities, and document incidents. AI assistants can handle much of this work faster than humans.
RedSage gives teams a capable assistant they can trust with sensitive information. A security analyst can ask it to explain an attack technique, suggest defensive measures, or help write detection rules. The conversation stays within the organization’s network.
The model understands context from security frameworks that professionals already use. When someone mentions a MITRE ATT&CK technique identifier, RedSage knows what attack method that represents. It can suggest relevant defensive strategies based on the organization’s environment.
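As a toy illustration of what "knowing a technique identifier" means in practice, the entries below are real IDs and names from the public MITRE ATT&CK catalog; the lookup code itself is just a simplified stand-in for the contextual knowledge baked into the model:

```python
# A few real MITRE ATT&CK technique IDs and names (from the public catalog).
# The lookup itself is a toy stand-in for knowledge the model carries
# internally; it is not part of RedSage.

TECHNIQUES = {
    "T1059": "Command and Scripting Interpreter",
    "T1566": "Phishing",
    "T1078": "Valid Accounts",
}

def describe(technique_id: str) -> str:
    """Resolve an ATT&CK identifier to a human-readable label."""
    name = TECHNIQUES.get(technique_id)
    return f"{technique_id}: {name}" if name else f"{technique_id}: unknown technique"

print(describe("T1566"))  # T1566: Phishing
```

When an analyst drops an identifier like T1566 into a conversation, the model can expand it to the underlying attack method and reason about defenses, rather than treating it as an opaque string.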
Training new security staff takes months or years. RedSage provides instant access to expert-level knowledge. Junior analysts can consult it while learning the field. Experienced professionals can use it to quickly research unfamiliar topics.
The Research Team Behind RedSage
The project brought together researchers from three continents. Naufal Suryanto and Muzammal Naseer from Khalifa University led the work. Pengfei Li joined from the same institution.
Syed Talal Wasim, Jinhui Yi, and Juergen Gall contributed from the University of Bonn. Paolo Ceravolo and Ernesto Damiani took part from the University of Milan.
This international collaboration shows how AI research increasingly depends on global teams. Different institutions bring complementary expertise. Khalifa University provided cybersecurity domain knowledge and AI capabilities. European partners added experience with large-scale model training and evaluation.
The team’s diverse backgrounds helped them avoid blind spots that single institutions might miss. Security needs differ across regions and sectors. A global perspective ensures the model works for various use cases.