Foundation-Sec-8B Red Team Edition
By Ironcybersec
A fine-tune of fdtn-ai/Foundation-Sec-8B-Instruct by Ironcybersec on a red-teaming and security-assessment dataset of 10,027 examples.
Model Details
- Base Model: fdtn-ai/Foundation-Sec-8B-Instruct (8B parameters)
- Fine-tuning Method: LoRA (Unsloth 2026)
- LoRA Rank: 16 | Alpha: 32
- Training Data: 10,027 security-focused examples
- Training Epochs: 3
- Final Loss: 0.053
- Evaluation Perplexity: 2.1228 on the held-out validation split
Features
- ✅ Optimized for red-team operations and security assessments
- ✅ Trained on Active Directory enumeration, privilege escalation, persistence, and exploitation techniques
- ✅ 8B parameters (~43% smaller than 14B models, fast inference)
- ✅ GGUF format for LM Studio & llama.cpp compatibility
- ✅ LoRA weights merged for standalone deployment (see the sketch below)
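For context on that last point, merging LoRA adapters into the base weights is typically done with PEFT's `merge_and_unload`. A minimal sketch, assuming a separate adapter repository (the adapter ID below is illustrative, not a published artifact):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("fdtn-ai/Foundation-Sec-8B-Instruct")
# NOTE: hypothetical adapter repo ID, shown for illustration only
merged = PeftModel.from_pretrained(base, "YourUsername/foundation-sec-8b-red-team-lora")

# Fold the adapter weights into the base weights for standalone deployment
merged = merged.merge_and_unload()
merged.save_pretrained("foundation-sec-8b-red-team-merged")
```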
Usage
With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "YourUsername/foundation-sec-8b-red-team"

# device_map="auto" spreads the model across available GPUs/CPU (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Llama-style instruction prompt
prompt = "[INST] How to enumerate Active Directory users? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
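If the tokenizer ships a chat template, `apply_chat_template` is a more robust way to build the prompt than hand-writing `[INST]` tags. A follow-up sketch, reusing the `model` and `tokenizer` from above:

```python
messages = [
    {"role": "user", "content": "How to enumerate Active Directory users?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```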
With LM Studio
- Download the GGUF file
- Open LM Studio → Load Model
- Select the .gguf file
- Start chatting
With llama.cpp
```bash
./main -m foundation-sec-8b-red-team.gguf -p "[INST] Red team prompt [/INST]"
# note: newer llama.cpp builds ship this binary as llama-cli
```
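The same GGUF file can also be loaded from Python via the llama-cpp-python bindings. A minimal sketch (the file path is illustrative; adjust it to where you downloaded the model):

```python
from llama_cpp import Llama

# Load the GGUF model with the context length used in training
llm = Llama(model_path="foundation-sec-8b-red-team.gguf", n_ctx=4096)

out = llm("[INST] How to enumerate Active Directory users? [/INST]", max_tokens=512)
print(out["choices"][0]["text"])
```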
Training Data
The model was fine-tuned on a curated dataset of:
- Active Directory reconnaissance & enumeration
- Privilege escalation techniques
- Persistence mechanisms
- Post-exploitation scenarios
- Red team methodology and tactical operations
Dataset Size: 10,027 examples (90% train, 10% val)
Format: Instruction-Input-Output (LLaMA-style)
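The exact schema is not published; below is a plausible record in the LLaMA-style Instruction-Input-Output convention and one way it might be rendered into a training prompt. Field names and content are assumptions, following the common Alpaca-style layout:

```python
# Hypothetical record; field names assumed from the Alpaca/LLaMA convention
record = {
    "instruction": "Enumerate Active Directory users on the target domain.",
    "input": "Domain: corp.local, authenticated low-privilege account",
    "output": "...",  # expected model response (elided)
}

def to_prompt(r: dict) -> str:
    """Render a record into a Llama-style [INST] training prompt."""
    body = r["instruction"] + ("\n" + r["input"] if r["input"] else "")
    return f"[INST] {body} [/INST] {r['output']}"

print(to_prompt(record))
```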
Performance
- Training Loss: 0.053 (after 3 epochs)
- Validation Perplexity: 2.1228 (on the 10% held-out split)
- Model Size: 15GB (F16 GGUF)
- Inference Speed: ~5-15 tokens/sec on CPU (varies by hardware)
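As a sanity check: perplexity is the exponential of the cross-entropy loss, so the reported validation perplexity implies a validation loss of about 0.75. The 0.053 figure above is the training loss, a separate metric:

```python
import math

# Perplexity = exp(cross-entropy loss)
val_ppl = 2.1228
val_loss = math.log(val_ppl)
print(f"implied validation loss: {val_loss:.4f}")             # ~0.7527
print(f"round-trip perplexity:   {math.exp(val_loss):.4f}")   # 2.1228
```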
Limitations
⚠️ Responsible Use Only: This model is designed for authorized security testing and red-teaming exercises.
- Not intended for malicious purposes
- Use only on systems you own or have explicit permission to test
- Follow ethical hacking and security research guidelines
- For educational and authorized testing only
Training Details
Framework: Unsloth + PEFT LoRA
Optimizer: Adam (lr=2e-4)
Max Length: 4096 tokens
Batch Size: 2
Quantization: 4-bit (during training)
Hardware: Modal Labs H100 GPU
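For reference, a minimal sketch of a comparable Unsloth + PEFT LoRA setup using the hyperparameters listed above. This is not the exact training script: the target modules are the usual Llama projections (the card does not list them), the dataset below is a placeholder for the real 10,027-example corpus, and exact keyword arguments vary across Unsloth/TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the base model in 4-bit, matching the listed config
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="fdtn-ai/Foundation-Sec-8B-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters: rank 16, alpha 32
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: substitute the real training corpus here
dataset = Dataset.from_list(
    [{"text": "[INST] example instruction [/INST] example response"}]
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        num_train_epochs=3,
        output_dir="outputs",
    ),
)
trainer.train()
```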
Model Card
Model Type: Causal Language Model (LLM)
License: Same as base model (Foundation-Sec-8B)
Finetuned From: fdtn-ai/Foundation-Sec-8B-Instruct
Language: English
Task: Security Assessment, Red Teaming, Authorized Penetration Testing
References
- Base Model: Foundation-Sec-8B-Instruct
- Training Framework: Unsloth
- LoRA: PEFT
Disclaimer
This model is provided for educational and authorized security testing purposes only. Users are responsible for ensuring compliance with all applicable laws and regulations. Unauthorized access to computer systems is illegal. Always obtain proper authorization before conducting security assessments.
About Ironcybersec
Ironcybersec is a specialized security company focused on red-teaming, penetration testing, and advanced security research. This model represents our commitment to advancing the field of cybersecurity through cutting-edge AI and machine learning technologies.
- Security-First Development
- Red Team Expertise
- Advanced AI Research
- Knowledge Sharing
Contact: Ironcybersec
Citation
```bibtex
@misc{foundation-sec-8b-red-team,
  title        = {Foundation-Sec-8B Red Team Edition},
  author       = {Ironcybersec},
  year         = {2026},
  publisher    = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/ironcybersec/foundation-sec-8b-red-team}}
}
```