{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Router Models AWQ Quantization with LLM Compressor (vLLM Native)\n", "\n", "This notebook quantizes the CourseGPT-Pro router models to AWQ (Activation-aware Weight Quantization) format using **LLM Compressor** - vLLM's native quantization tool.\n", "\n", "**Models to quantize:**\n", "- `Alovestocode/router-gemma3-merged` (27B)\n", "- `Alovestocode/router-qwen3-32b-merged` (33B)\n", "\n", "**Output:** AWQ-quantized models ready for vLLM inference with optimal performance.\n", "\n", "**Why LLM Compressor?**\n", "- Native vLLM integration (better compatibility)\n", "- Supports advanced features (pruning, combined modifiers)\n", "- Actively maintained by vLLM team\n", "- Optimized for vLLM inference engine\n", "\n", "**⚠️ IMPORTANT:** If you see errors about `AWQModifier` parameters, **restart the kernel** (Runtime → Restart runtime) and run all cells from the beginning. The notebook uses `AWQModifier()` without parameters (default 4-bit AWQ).\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. Install Dependencies\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Install required packages\n", "# LLM Compressor is vLLM's native quantization tool\n", "# Note: Package name is 'llmcompressor' (no hyphen), may need to install from GitHub\n", "%pip install -q transformers accelerate huggingface_hub\n", "%pip install -q torch --index-url https://download.pytorch.org/whl/cu118\n", "\n", "# Try installing llmcompressor from PyPI first, fallback to GitHub if not available\n", "try:\n", " import llmcompressor\n", " print(\"✅ llmcompressor already installed\")\n", "except ImportError:\n", " print(\"Installing llmcompressor...\")\n", " # Try PyPI first\n", " import subprocess\n", " import sys\n", " result = subprocess.run([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \"llmcompressor\"], \n", " capture_output=True, text=True)\n", " if result.returncode != 0:\n", " # Fallback to GitHub installation\n", " print(\"PyPI installation failed, trying GitHub...\")\n", " subprocess.run([sys.executable, \"-m\", \"pip\", \"install\", \"-q\", \n", " \"git+https://github.com/vllm-project/llm-compressor.git\"], \n", " check=False)\n", " print(\"✅ llmcompressor installed\")\n", "\n", "# Utility function to check disk space\n", "import shutil\n", "def check_disk_space():\n", " \"\"\"Check available disk space.\"\"\"\n", " total, used, free = shutil.disk_usage(\"/\")\n", " print(f\"Disk Space: {free / (1024**3):.2f} GB free out of {total / (1024**3):.2f} GB total\")\n", " return free / (1024**3) # Return free space in GB\n", "\n", "print(\"Initial disk space:\")\n", "check_disk_space()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. Authenticate with Hugging Face\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from huggingface_hub import login\n", "import os\n", "\n", "# Login to Hugging Face (you'll need a token with write access)\n", "# Get your token from: https://huggingface.co/settings/tokens\n", "HF_TOKEN = \"your_hf_token_here\" # Replace with your token\n", "\n", "login(token=HF_TOKEN)\n", "os.environ[\"HF_TOKEN\"] = HF_TOKEN\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3. Configuration\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Model-specific AWQ overrides. 
 "# Model and AWQ quantization configuration.\n", "# Base settings come first; per-model overrides and the derived configs follow,\n", "# so this cell runs cleanly top to bottom.\n", "\n", "# Model configurations\n", "MODELS_TO_QUANTIZE = {\n", "    \"router-gemma3-merged\": {\n", "        \"repo_id\": \"Alovestocode/router-gemma3-merged\",\n", "        \"output_repo\": \"Alovestocode/router-gemma3-merged-awq\",  # Or keep same repo\n", "        \"model_type\": \"gemma\",\n", "    },\n", "    \"router-qwen3-32b-merged\": {\n", "        \"repo_id\": \"Alovestocode/router-qwen3-32b-merged\",\n", "        \"output_repo\": \"Alovestocode/router-qwen3-32b-merged-awq\",  # Or keep same repo\n", "        \"model_type\": \"qwen\",\n", "    }\n", "}\n", "\n", "# Default AWQ quantization config\n", "AWQ_CONFIG = {\n", "    \"num_bits\": 4,               # Weight bit-width\n", "    \"group_size\": 128,           # Group size for weight quantization\n", "    \"zero_point\": True,          # False would force symmetric quant (no zero-point)\n", "    \"strategy\": \"group\",         # Quantize per group for best AWQ accuracy\n", "    \"targets\": [\"Linear\"],       # Modules to quantize (QuantizationMixin default)\n", "    \"ignore\": [\"lm_head\"],       # Skip final LM head\n", "    \"format\": \"pack-quantized\",\n", "    \"observer\": \"minmax\",\n", "    \"dynamic\": False,\n", "    \"version\": \"GEMM\",           # Kept for logging/back-compat\n", "}\n", "\n", "# Model-specific AWQ overrides. Keys match MODELS_TO_QUANTIZE entries.\n", "MODEL_AWQ_OVERRIDES = {\n", "    \"router-gemma3-merged\": {\"group_size\": 16},\n", "}\n", "\n", "# Derived AWQ configs per model (defaults + overrides)\n", "MODEL_AWQ_CONFIGS = {\n", "    model_key: {**AWQ_CONFIG, **MODEL_AWQ_OVERRIDES.get(model_key, {})}\n", "    for model_key in MODELS_TO_QUANTIZE\n", "}\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Note: the build_awq_modifier_config helper defined in Section 4 turns these dictionaries\n", "# into proper QuantizationScheme and QuantizationArgs objects for the AWQModifier.\n", "\n" ] },
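 { "cell_type": "markdown", "metadata": {}, "source": [ "Optional sanity check (assumes the configuration cell above has run): print the effective per-model AWQ settings and confirm the Gemma group-size override took effect before launching a long quantization run.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Show the effective AWQ settings derived for each model\n", "for model_key, cfg in MODEL_AWQ_CONFIGS.items():\n", "    print(\n", "        f\"{model_key}: {cfg['num_bits']}-bit, group_size={cfg['group_size']}, \"\n", "        f\"strategy={cfg['strategy']}, ignore={cfg['ignore']}\"\n", "    )\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4. 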
Quantization Function\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# LLM Compressor (vLLM native quantization tool)\n", "# Import with error handling in case installation failed\n", "try:\n", " from llmcompressor import oneshot\n", " # Correct import path: AWQModifier is in modifiers.awq, not modifiers.quantization\n", " from llmcompressor.modifiers.awq import AWQModifier\n", " from compressed_tensors.quantization import QuantizationScheme, QuantizationArgs\n", " from compressed_tensors.quantization.quant_args import (\n", " QuantizationStrategy,\n", " QuantizationType,\n", " )\n", " LLM_COMPRESSOR_AVAILABLE = True\n", " print(\"✅ LLM Compressor imported successfully\")\n", "except ImportError as e:\n", " print(f\"❌ Failed to import llmcompressor/quantization deps: {e}\")\n", " print(\"Please ensure llmcompressor is installed:\")\n", " print(\" %pip install llmcompressor\")\n", " print(\" OR\")\n", " print(\" %pip install git+https://github.com/vllm-project/llm-compressor.git\")\n", " print(\"\\nNote: If import still fails, try:\")\n", " print(\" %pip install --upgrade llmcompressor\")\n", " LLM_COMPRESSOR_AVAILABLE = False\n", " raise\n", "\n", "from transformers import AutoTokenizer\n", "from huggingface_hub import HfApi, scan_cache_dir, upload_folder\n", "from datasets import Dataset\n", "from llmcompressor.recipe import Recipe\n", "import torch\n", "import shutil\n", "import gc\n", "import os\n", "\n", "# Try to import delete_revisions (may not be available in all versions)\n", "try:\n", " from huggingface_hub import delete_revisions\n", " DELETE_REVISIONS_AVAILABLE = True\n", "except ImportError:\n", " # delete_revisions might not be available, we'll use alternative method\n", " DELETE_REVISIONS_AVAILABLE = False\n", " print(\"Note: delete_revisions not available, will use alternative cache cleanup method\")\n", "\n", "def build_awq_modifier_config(awq_config: dict):\n", " \"\"\"Create config_groups/ignore settings for AWQModifier.\"\"\"\n", " if not isinstance(awq_config, dict):\n", " raise ValueError(\"awq_config must be a dictionary of quantization settings\")\n", "\n", " def _get(key, *aliases, default=None):\n", " for candidate in (key, *aliases):\n", " if candidate in awq_config:\n", " value = awq_config[candidate]\n", " if value is not None:\n", " return value\n", " return default\n", "\n", " num_bits = _get(\"num_bits\", \"w_bit\", default=4)\n", " group_size = _get(\"group_size\", \"q_group_size\", default=128)\n", " zero_point = awq_config.get(\"zero_point\", True)\n", " symmetric = awq_config.get(\"symmetric\")\n", " if symmetric is None:\n", " symmetric = not bool(zero_point)\n", "\n", " strategy = _get(\"strategy\", default=\"group\")\n", " if isinstance(strategy, QuantizationStrategy):\n", " quant_strategy = strategy\n", " else:\n", " quant_strategy = QuantizationStrategy(str(strategy).lower())\n", "\n", " qtype = awq_config.get(\"type\", QuantizationType.INT)\n", " if isinstance(qtype, QuantizationType):\n", " quant_type = qtype\n", " else:\n", " quant_type = QuantizationType(str(qtype).lower())\n", "\n", " weights_args = QuantizationArgs(\n", " num_bits=num_bits,\n", " group_size=group_size,\n", " symmetric=symmetric,\n", " strategy=quant_strategy,\n", " type=quant_type,\n", " dynamic=awq_config.get(\"dynamic\", False),\n", " observer=awq_config.get(\"observer\", \"minmax\"),\n", " )\n", "\n", " quant_scheme = QuantizationScheme(\n", " targets=awq_config.get(\"targets\", [\"Linear\"]),\n", " 
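# Weight-only AWQ: activations stay in fp16/bf16,\n", "        # so the activation arguments below are left as None.\n", "        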
weights=weights_args,\n", " input_activations=None,\n", " output_activations=None,\n", " format=awq_config.get(\"format\", \"pack-quantized\"),\n", " )\n", "\n", " config_groups = {\"group_0\": quant_scheme}\n", " ignore = awq_config.get(\"ignore\", [\"lm_head\"])\n", " return config_groups, ignore\n", "\n", "def quantize_model_to_awq(\n", " model_name: str,\n", " repo_id: str,\n", " output_repo: str,\n", " model_type: str,\n", " awq_config: dict,\n", " calibration_dataset_size: int = 128\n", "):\n", " \"\"\"Quantize a model to AWQ format using LLM Compressor (vLLM native).\n", " \n", " Args:\n", " model_name: Display name for the model\n", " repo_id: Source Hugging Face repo ID\n", " output_repo: Destination Hugging Face repo ID\n", " model_type: Model type (gemma/qwen) for tokenizer selection\n", " awq_config: AWQ quantization configuration\n", " calibration_dataset_size: Number of calibration samples\n", " \"\"\"\n", " print(f\"\\n{'='*60}\")\n", " print(f\"Quantizing {model_name} with LLM Compressor (vLLM native)\")\n", " print(f\"Source: {repo_id}\")\n", " print(f\"Destination: {output_repo}\")\n", " print(f\"{'='*60}\\n\")\n", " \n", " # Check disk space before starting\n", " free_space_before = check_disk_space()\n", " if free_space_before < 30:\n", " print(f\"⚠️ WARNING: Low disk space ({free_space_before:.2f} GB). Quantization may fail.\")\n", " \n", " # Step 1: Create temporary output directory\n", " import tempfile\n", " temp_output_dir = f\"./temp_{model_name.replace('-', '_')}_awq\"\n", " print(f\"[1/4] Creating temporary output directory: {temp_output_dir}\")\n", " os.makedirs(temp_output_dir, exist_ok=True)\n", " \n", " # Step 2: Prepare calibration dataset\n", " print(f\"\\n[2/4] Preparing calibration dataset ({calibration_dataset_size} samples)...\")\n", " \n", " # Create calibration dataset for router agent\n", " calibration_texts = [\n", " \"You are the Router Agent coordinating Math, Code, and General-Search specialists.\",\n", " \"Emit EXACTLY ONE strict JSON object with keys route_plan, route_rationale, expected_artifacts,\",\n", " \"Solve a quadratic equation using Python programming.\",\n", " \"Implement a binary search algorithm with proper error handling.\",\n", " \"Explain the concept of gradient descent in machine learning.\",\n", " \"Write a function to calculate the Fibonacci sequence recursively.\",\n", " \"Design a REST API endpoint for user authentication.\",\n", " \"Analyze the time complexity of merge sort algorithm.\",\n", " ]\n", " \n", " # Repeat to reach desired size\n", " while len(calibration_texts) < calibration_dataset_size:\n", " calibration_texts.extend(calibration_texts[:calibration_dataset_size - len(calibration_texts)])\n", " \n", " calibration_texts = calibration_texts[:calibration_dataset_size]\n", " print(f\"✅ Calibration dataset prepared: {len(calibration_texts)} samples\")\n", " \n", " # Step 3: Quantize model using LLM Compressor\n", " print(f\"\\n[3/4] Quantizing model to AWQ with LLM Compressor (this may take 30-60 minutes)...\")\n", " print(f\"Config: {awq_config}\")\n", " print(\"⚠️ LLM Compressor will load the model, quantize it, and save to local directory\")\n", " \n", " if not LLM_COMPRESSOR_AVAILABLE:\n", " raise ImportError(\"LLM Compressor is not available. 
Please install it first.\")\n", " \n", " try:\n", " # LLM Compressor's oneshot function handles everything:\n", " # - Loading the model\n", " # - Quantization with calibration data\n", " # - Saving quantized model\n", " print(f\" → Starting quantization with LLM Compressor...\")\n", " print(f\" → This may take 30-60 minutes depending on model size...\")\n", " \n", " print(f\" → Creating QuantizationScheme for AWQModifier...\")\n", " config_groups, ignore_modules = build_awq_modifier_config(awq_config)\n", " first_group = next(iter(config_groups.values()))\n", " bits = first_group.weights.num_bits if first_group.weights else \"?\"\n", " group_sz = first_group.weights.group_size if first_group.weights else \"?\"\n", " print(f\" ✅ AWQ config ready ({bits}-bit, group size {group_sz})\")\n", " print(f\" → Creating AWQModifier with structured config...\")\n", " modifiers = [\n", " AWQModifier(\n", " config_groups=config_groups,\n", " ignore=ignore_modules,\n", " )\n", " ]\n", " print(f\" ✅ AWQModifier created successfully\")\n", " \n", " # Call oneshot with the modifier\n", " # oneshot() uses HfArgumentParser which only understands ModelArguments, DatasetArguments, RecipeArguments\n", " # We need to convert modifiers to Recipe and calibration_texts to Dataset\n", " print(f\" → Starting quantization process...\")\n", " \n", " # Prepare calibration dataset (limit to reasonable size)\n", " calibration_texts_limited = calibration_texts[:min(calibration_dataset_size, 128)]\n", " \n", " # Convert calibration texts to Hugging Face Dataset\n", " # DatasetArguments expects a Dataset object, not a list\n", " print(f\" → Creating Hugging Face Dataset from calibration texts...\")\n", " calibration_dataset = Dataset.from_dict({\"text\": calibration_texts_limited})\n", " print(f\" ✅ Created dataset with {len(calibration_dataset)} samples\")\n", " \n", " # Convert modifiers list to Recipe object\n", " # RecipeArguments expects a Recipe object, not a list of modifiers\n", " print(f\" → Converting modifiers to Recipe object...\")\n", " recipe = Recipe.from_modifiers(modifiers)\n", " print(f\" ✅ Recipe created from modifiers\")\n", " \n", " # Load tokenizer for text-only models (required as processor)\n", " # For text-only LLMs, we need to pass tokenizer explicitly to avoid processor initialization errors\n", " print(f\" → Loading tokenizer for text-only model...\")\n", " tokenizer = AutoTokenizer.from_pretrained(\n", " repo_id,\n", " use_fast=True,\n", " trust_remote_code=True,\n", " token=os.environ.get(\"HF_TOKEN\")\n", " )\n", " print(f\" ✅ Tokenizer loaded\")\n", " \n", " # oneshot() API - all kwargs must map to ModelArguments, DatasetArguments, or RecipeArguments\n", " # - model: ModelArguments.model\n", " # - output_dir: ModelArguments.output_dir\n", " # - recipe: RecipeArguments.recipe (Recipe object)\n", " # - dataset: DatasetArguments.dataset (Dataset object)\n", " # - num_calibration_samples: DatasetArguments.num_calibration_samples\n", " # - use_auth_token: ModelArguments.use_auth_token (reads from HF_TOKEN env var)\n", " # - trust_remote_code_model: ModelArguments.trust_remote_code_model\n", " # - stage: RecipeArguments.stage (default: \"default\")\n", " # - tokenizer: ModelArguments.tokenizer (required for text-only models to avoid processor errors)\n", " print(f\" → Calling oneshot() with proper argument structure...\")\n", " oneshot(\n", " model=repo_id,\n", " output_dir=temp_output_dir,\n", " recipe=recipe,\n", " stage=\"default\", # Recipe stage\n", " dataset=calibration_dataset,\n", " 
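# AWQ's activation-scale search needs only a modest calibration set (128 samples here)\n", "            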
num_calibration_samples=min(calibration_dataset_size, len(calibration_dataset)),\n", "            tokenizer=tokenizer,  # Pass tokenizer explicitly for text-only models (processor inferred)\n", "            use_auth_token=True,  # Reads from os.environ[\"HF_TOKEN\"]\n", "            trust_remote_code_model=True\n", "        )\n", "        \n", "        print(f\"✅ Model quantized to AWQ successfully\")\n", "    except Exception as e:\n", "        print(f\"❌ Quantization failed: {e}\")\n", "        print(f\"\\nTroubleshooting:\")\n", "        print(f\"1. Ensure llmcompressor is installed: %pip install llmcompressor\")\n", "        print(f\"2. Or install from GitHub: %pip install git+https://github.com/vllm-project/llm-compressor.git\")\n", "        print(f\"3. Check that you have sufficient GPU memory (40GB+ recommended)\")\n", "        import traceback\n", "        traceback.print_exc()\n", "        raise\n", "    \n", "    # Step 4: Upload to Hugging Face\n", "    print(f\"\\n[4/4] Uploading quantized model to {output_repo}...\")\n", "    \n", "    # Create repo if it doesn't exist\n", "    api = HfApi()\n", "    try:\n", "        api.create_repo(\n", "            repo_id=output_repo,\n", "            repo_type=\"model\",\n", "            exist_ok=True,\n", "            token=os.environ.get(\"HF_TOKEN\")\n", "        )\n", "        print(f\"✅ Repository ready: {output_repo}\")\n", "    except Exception as e:\n", "        print(f\"Note: Repo may already exist: {e}\")\n", "    \n", "    # Upload the quantized model directory\n", "    try:\n", "        upload_folder(\n", "            folder_path=temp_output_dir,\n", "            repo_id=output_repo,\n", "            repo_type=\"model\",\n", "            token=os.environ.get(\"HF_TOKEN\"),\n", "            ignore_patterns=[\"*.pt\", \"*.bin\"]  # Only upload safetensors\n", "        )\n", "        print(f\"✅ Quantized model uploaded to {output_repo}\")\n", "    except Exception as e:\n", "        print(f\"❌ Upload failed: {e}\")\n", "        import traceback\n", "        traceback.print_exc()\n", "        raise\n", "    \n", "    # Step 5: Clean up to free disk space (critical for Colab)\n", "    print(f\"\\n[5/5] Cleaning up local files to free disk space...\")\n", "    \n", "    # Delete temporary output directory\n", "    try:\n", "        shutil.rmtree(temp_output_dir)\n", "        print(f\"  ✅ Deleted temporary directory: {temp_output_dir}\")\n", "    except Exception as e:\n", "        print(f\"  ⚠️ Could not delete temp directory: {e}\")\n", "    \n", "    # Free GPU memory\n", "    torch.cuda.empty_cache()\n", "    gc.collect()\n", "    \n", "    # Clear the Hugging Face cache for the source model (frees ~50-70GB)\n", "    print(f\"  → Clearing Hugging Face cache for {repo_id}...\")\n", "    try:\n", "        cache_info = scan_cache_dir()\n", "        # Collect every cached revision belonging to the source repo\n", "        revision_hashes = [\n", "            revision.commit_hash\n", "            for repo in cache_info.repos\n", "            if repo.repo_id == repo_id\n", "            for revision in repo.revisions\n", "        ]\n", "        if revision_hashes:\n", "            # delete_revisions() returns a strategy; execute() removes snapshots and blobs\n", "            delete_strategy = cache_info.delete_revisions(*revision_hashes)\n", "            print(f\"  → Expected space to be freed: {delete_strategy.expected_freed_size_str}\")\n", "            delete_strategy.execute()\n", "            print(f\"  ✅ Deleted {len(revision_hashes)} cached revision(s) for {repo_id}\")\n", "        else:\n", "            print(f\"  ℹ️ No cached revisions found for {repo_id}\")\n", "    except Exception as e:\n", "        print(f\"  ⚠️ Cache cleanup warning: {e} (continuing...)\")\n", "        print(f\"     You can clean the cache manually with: huggingface-cli delete-cache\")\n", "    \n", "    # Check disk space after cleanup\n", "    free_space_after = check_disk_space()\n", "    print(f\"\\n✅ Cleanup complete! Free space: {free_space_after:.2f} GB\")\n", "    \n", "    print(f\"\\n✅ {model_name} quantization complete!\")\n", "    print(f\"Model available at: https://huggingface.co/{output_repo}\")\n", "    print(f\"💾 Local model files deleted to save disk space\")\n", "    print(f\"🚀 Model is ready for vLLM inference with optimal performance!\")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Quantize Router-Gemma3-27B-Merged\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "quantize_model_to_awq(\n", "    model_name=\"Router-Gemma3-27B\",\n", "    repo_id=MODELS_TO_QUANTIZE[\"router-gemma3-merged\"][\"repo_id\"],\n", "    output_repo=MODELS_TO_QUANTIZE[\"router-gemma3-merged\"][\"output_repo\"],\n", "    model_type=MODELS_TO_QUANTIZE[\"router-gemma3-merged\"][\"model_type\"],\n", "    awq_config=MODEL_AWQ_CONFIGS[\"router-gemma3-merged\"],\n", "    calibration_dataset_size=128\n", ")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 6. Quantize Router-Qwen3-32B-Merged\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "quantize_model_to_awq(\n", "    model_name=\"Router-Qwen3-32B\",\n", "    repo_id=MODELS_TO_QUANTIZE[\"router-qwen3-32b-merged\"][\"repo_id\"],\n", "    output_repo=MODELS_TO_QUANTIZE[\"router-qwen3-32b-merged\"][\"output_repo\"],\n", "    model_type=MODELS_TO_QUANTIZE[\"router-qwen3-32b-merged\"][\"model_type\"],\n", "    awq_config=MODEL_AWQ_CONFIGS[\"router-qwen3-32b-merged\"],\n", "    calibration_dataset_size=128\n", ")\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 7. Verify Quantized Models\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Verify quantized models with vLLM (recommended) or Transformers\n", "from transformers import AutoTokenizer\n", "\n", "def verify_awq_model_vllm(repo_id: str):\n", "    \"\"\"Verify the AWQ model can be loaded with vLLM (recommended).\"\"\"\n", "    print(f\"\\nVerifying {repo_id} with vLLM...\")\n", "    \n", "    try:\n", "        # Try importing vLLM\n", "        try:\n", "            from vllm import LLM, SamplingParams\n", "        except ImportError:\n", "            print(\"⚠️ vLLM not available, skipping vLLM verification\")\n", "            return False\n", "        \n", "        # Load with vLLM (auto-detects AWQ)\n", "        llm = LLM(\n", "            model=repo_id,\n", "            quantization=\"awq\",\n", "            trust_remote_code=True,\n", "            token=os.environ.get(\"HF_TOKEN\"),\n", "            gpu_memory_utilization=0.5  # Lower for verification\n", "        )\n", "        \n", "        # Test generation\n", "        sampling_params = SamplingParams(\n", "            temperature=0.0,\n", "            max_tokens=10\n", "        )\n", "        \n", "        test_prompt = \"You are the Router Agent. 
Test prompt.\"\n", " outputs = llm.generate([test_prompt], sampling_params)\n", " \n", " generated_text = outputs[0].outputs[0].text\n", " print(f\"✅ vLLM loads and generates correctly\")\n", " print(f\"Generated: {generated_text[:100]}...\")\n", " \n", " del llm\n", " torch.cuda.empty_cache()\n", " \n", " return True\n", " except Exception as e:\n", " print(f\"❌ vLLM verification failed: {e}\")\n", " import traceback\n", " traceback.print_exc()\n", " return False\n", "\n", "def verify_awq_model_transformers(repo_id: str):\n", " \"\"\"Verify AWQ model can be loaded with Transformers (fallback).\"\"\"\n", " print(f\"\\nVerifying {repo_id} with Transformers...\")\n", " \n", " try:\n", " # Load tokenizer\n", " tokenizer = AutoTokenizer.from_pretrained(\n", " repo_id,\n", " trust_remote_code=True,\n", " token=os.environ.get(\"HF_TOKEN\")\n", " )\n", " \n", " # Try loading with AutoAWQ (if available)\n", " try:\n", " from awq import AutoAWQForCausalLM\n", " model = AutoAWQForCausalLM.from_quantized(\n", " repo_id,\n", " fuse_layers=True,\n", " trust_remote_code=True,\n", " device_map=\"auto\",\n", " token=os.environ.get(\"HF_TOKEN\")\n", " )\n", " \n", " # Test generation\n", " test_prompt = \"You are the Router Agent. Test prompt.\"\n", " inputs = tokenizer(test_prompt, return_tensors=\"pt\").to(model.device)\n", " \n", " with torch.inference_mode():\n", " outputs = model.generate(\n", " **inputs,\n", " max_new_tokens=10,\n", " do_sample=False\n", " )\n", " \n", " generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)\n", " print(f\"✅ Transformers loads and generates correctly\")\n", " print(f\"Generated: {generated_text[:100]}...\")\n", " \n", " del model\n", " del tokenizer\n", " torch.cuda.empty_cache()\n", " \n", " return True\n", " except ImportError:\n", " print(\"⚠️ AutoAWQ not available, skipping Transformers verification\")\n", " return False\n", " except Exception as e:\n", " print(f\"❌ Transformers verification failed: {e}\")\n", " import traceback\n", " traceback.print_exc()\n", " return False\n", "\n", "# Verify both models (prefer vLLM)\n", "for model_key, model_info in MODELS_TO_QUANTIZE.items():\n", " print(f\"\\n{'='*60}\")\n", " print(f\"Verifying {model_key}\")\n", " print(f\"{'='*60}\")\n", " \n", " # Try vLLM first (recommended)\n", " vllm_ok = verify_awq_model_vllm(model_info[\"output_repo\"])\n", " \n", " # Fallback to Transformers if vLLM not available\n", " if not vllm_ok:\n", " verify_awq_model_transformers(model_info[\"output_repo\"])\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Notes\n", "\n", "- **GPU Required**: This quantization requires a GPU with at least 40GB VRAM (A100/H100 recommended)\n", "- **Time**: Each model takes approximately 30-60 minutes to quantize\n", "- **Disk Space**: \n", " - Colab has limited disk space (~80GB free)\n", " - Each source model is ~50-70GB (BF16)\n", " - Quantized models are ~15-20GB (AWQ 4-bit)\n", " - **The notebook automatically deletes source models after quantization to save space**\n", "- **Cleanup**: After each model is quantized and uploaded:\n", " - GPU memory is freed\n", " - Hugging Face cache for source model is cleared\n", " - Disk space is checked before/after\n", "- **Output Repos**: Models are saved to new repos with `-awq` suffix\n", "- **Usage**: After quantization, update your `app.py` to use the AWQ 
repos:\n", " ```python\n", " MODELS = {\n", " \"Router-Gemma3-27B-AWQ\": {\n", " \"repo_id\": \"Alovestocode/router-gemma3-merged-awq\",\n", " \"quantization\": \"awq\"\n", " },\n", " \"Router-Qwen3-32B-AWQ\": {\n", " \"repo_id\": \"Alovestocode/router-qwen3-32b-merged-awq\",\n", " \"quantization\": \"awq\"\n", " }\n", " }\n", " ```\n" ] } ], "metadata": { "language_info": { "name": "python" } }, "nbformat": 4, "nbformat_minor": 2 }