Purple Squirrel R1

Fine-tuned DeepSeek-R1-Distill-Llama-8B for Purple Squirrel AI Platform

Model Details

  • Base Model: DeepSeek-R1-Distill-Llama-8B
  • Parameters: 8B
  • Context Length: 4096 tokens
  • Quantization: 4-bit NF4 (GGUF f16 available)
  • Specialization: Purple Squirrel AI platform operations
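
For reference, the 4-bit NF4 entry above maps onto a bitsandbytes quantization config along the lines of the sketch below (pass it as quantization_config= to from_pretrained). The double-quantization flag and compute dtype are illustrative assumptions, not the exact deployment settings.

import torch
from transformers import BitsAndBytesConfig

# Quantization config matching the "4-bit NF4" entry above; double quantization
# and the bfloat16 compute dtype are illustrative choices, not the shipped config.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)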

Research Paper

This model is deployed in the AIDP Neural Cloud distributed inference system. Read the full research paper:

📄 AIDP Neural Cloud: Distributed LLM Inference on Decentralized GPU Networks

Key Results:

  • 47% cost reduction vs the OpenAI API
  • 28% lower p50 latency (180 ms vs 250 ms)
  • 50 req/s throughput with fault tolerance

Capabilities

Fine-tuned to excel at:

  • Video Analysis: AI-powered transcription and tagging
  • Blockchain Operations: Multi-chain NFT minting (Solana, Ethereum, Polygon)
  • Cloud Integration: OCI, AWS, IPFS storage operations
  • Video Editing: Professional workflow understanding
  • Platform Operations: Purple Squirrel feature guidance

Quick Start

Using Ollama

ollama pull purplesquirrelnetworks/purple-squirrel-r1
ollama run purplesquirrelnetworks/purple-squirrel-r1

Using Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "purplesquirrelnetworks/purple-squirrel-r1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
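
A minimal generation call on top of the load above, assuming the tokenizer ships the DeepSeek-R1 chat template; the prompt and sampling settings are illustrative only.

messages = [{"role": "user", "content": "How do I tag and publish a video clip?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings chosen for illustration, not tuned for this model.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))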

Via AIDP Neural Cloud API

import openai

client = openai.OpenAI(
    base_url="https://neural-cloud.aidp.store/v1",
    api_key="your-api-key"
)

response = client.chat.completions.create(
    model="purple-squirrel-r1",
    messages=[{"role": "user", "content": "What is Purple Squirrel?"}]
)
print(response.choices[0].message.content)
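
Streaming is sketched below on the assumption that the AIDP endpoint is OpenAI-compatible for streamed responses as well; confirm against the Neural Cloud documentation before relying on it.

# Stream tokens as they arrive (assumes OpenAI-style streaming support).
stream = client.chat.completions.create(
    model="purple-squirrel-r1",
    messages=[{"role": "user", "content": "Summarize the Purple Squirrel video workflow."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)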

Training Details

  • Method: LoRA fine-tuning
  • Dataset: Purple Squirrel platform documentation
  • Framework: TRL (Transformer Reinforcement Learning)
  • Hardware: AIDP decentralized GPU network
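
For orientation, a LoRA run with TRL's SFTTrainer looks roughly like the sketch below. The dataset file, LoRA rank, and target modules are placeholders, not the hyperparameters actually used for this model.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the Purple Squirrel documentation corpus is not published.
dataset = load_dataset("json", data_files="platform_docs.jsonl", split="train")

peft_config = LoraConfig(
    r=16,                          # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="purple-squirrel-r1-lora"),
)
trainer.train()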

Downloads

Available Formats:

  • Safetensors: Hugging Face Transformers (default)
  • GGUF f16: Ollama, llama.cpp (15 GB) - Download
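
Outside of Ollama, the f16 GGUF can also be loaded through the llama-cpp-python bindings; the file name below is a placeholder for whatever the downloaded file is actually called.

from llama_cpp import Llama

# Placeholder path; substitute the downloaded GGUF file name.
llm = Llama(model_path="purple-squirrel-r1-f16.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is Purple Squirrel?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])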

Citation

@misc{purplesquirrel-r1,
  title={Purple Squirrel R1: Fine-tuned DeepSeek-R1-Distill-Llama-8B},
  author={Karsten, Matthew},
  year={2026},
  publisher={Purple Squirrel Networks},
  url={https://huggingface.co/purplesquirrelnetworks/purple-squirrel-r1}
}

License

MIT License - Based on DeepSeek-R1-Distill-Llama-8B


Built by Purple Squirrel Networks
