# InstantSplat API Guide

This guide shows you how to use the InstantSplat API to submit images and get back the Supabase GLB URL.

## Quick Start

### 1. Using Python (Recommended)

Install the Gradio client:

```bash
pip install gradio_client
```

#### Simple Example - Get GLB URL

```python
from gradio_client import Client

# Connect to your Space
client = Client("your-username/InstantSplat")  # or full URL

# Submit images
result = client.predict(
    [
        "path/to/image1.jpg",
        "path/to/image2.jpg",
        "path/to/image3.jpg"
    ],
    api_name="/predict"  # Uses the main process function
)

# Extract URLs
video_path, ply_url, download_path, model_ply, model_glb, glb_url = result

print(f"GLB URL: {glb_url}")
print(f"PLY URL: {ply_url}")
```

#### Complete Example with Error Handling

```python
from gradio_client import Client
import os


def process_images_to_glb(image_paths, space_url="your-username/InstantSplat"):
    """
    Process images through InstantSplat and get the GLB URL.

    Args:
        image_paths: List of local image file paths (3+ images recommended)
        space_url: HuggingFace Space URL or identifier

    Returns:
        dict with URLs and status
    """
    try:
        # Validate inputs
        if len(image_paths) < 2:
            return {"status": "error", "error": "Need at least 2 images"}

        for path in image_paths:
            if not os.path.exists(path):
                return {"status": "error", "error": f"Image not found: {path}"}

        # Connect and process
        client = Client(space_url)

        print(f"Submitting {len(image_paths)} images for processing...")
        result = client.predict(
            image_paths,
            api_name="/predict"
        )

        # Unpack results
        video_path, ply_url, _, _, glb_path, glb_url = result

        # Check success
        if glb_url and not glb_url.startswith("Error"):
            return {
                "status": "success",
                "glb_url": glb_url,
                "ply_url": ply_url,
                "video_available": video_path is not None
            }
        else:
            return {
                "status": "error",
                "error": glb_url or "Upload failed"
            }

    except Exception as e:
        return {
            "status": "error",
            "error": str(e)
        }


# Usage
if __name__ == "__main__":
    images = [
        "image1.jpg",
        "image2.jpg",
        "image3.jpg"
    ]

    result = process_images_to_glb(images)

    if result["status"] == "success":
        print("✅ Success!")
        print(f"GLB URL: {result['glb_url']}")
        print(f"PLY URL: {result['ply_url']}")
    else:
        print(f"❌ Error: {result['error']}")
```

### 2. Using JavaScript/TypeScript

Install the Gradio client:

```bash
npm install --save @gradio/client
```

#### Example Code

```typescript
import { Client } from "@gradio/client";

async function processImages(imagePaths: string[]): Promise<string> {
  const client = await Client.connect("your-username/InstantSplat");

  const result = await client.predict("/predict", {
    inputfiles: imagePaths
  });

  // result.data is an array: [video, ply_url, download, model_ply, model_glb, glb_url]
  const glbUrl = result.data[5] as string;
  return glbUrl;
}

// Usage
const images = [
  "./image1.jpg",
  "./image2.jpg",
  "./image3.jpg"
];

processImages(images)
  .then(glbUrl => console.log("GLB URL:", glbUrl))
  .catch(err => console.error("Error:", err));
```

### 3. Using cURL (Direct HTTP)

First, get your Space's API endpoint:

```bash
# Get API info
curl https://your-username-instantsplat.hf.space/info
```

Then upload files and call the API:

```bash
# Upload files and call prediction
curl -X POST https://your-username-instantsplat.hf.space/api/predict \
  -H "Content-Type: application/json" \
  -d '{
    "data": [
      [
        {"path": "https://url-to-image1.jpg"},
        {"path": "https://url-to-image2.jpg"},
        {"path": "https://url-to-image3.jpg"}
      ]
    ]
  }'
```

**Note**: For cURL, you'll need to either:

1. Provide URLs to publicly accessible images, or
2. Use Gradio's file upload API first to upload local files (the Python and JavaScript clients handle this for you; see the sketch below).
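With the Python client, depending on your `gradio_client` version you may need to wrap local paths in `handle_file()` so they are uploaded to the Space before prediction. Here is a minimal sketch, assuming the same `/predict` endpoint and output order shown above (the Space name is a placeholder):

```python
from gradio_client import Client, handle_file

client = Client("your-username/InstantSplat")

# Wrap local paths so the client uploads them before calling the endpoint
# (plain string paths also work on some gradio_client versions).
files = [handle_file(p) for p in ["image1.jpg", "image2.jpg", "image3.jpg"]]

result = client.predict(files, api_name="/predict")
print("GLB URL:", result[5])
```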
## API Response Format

The API returns a tuple with 6 elements:

```python
[
    video_path,    # (0) Path to the generated video file
    ply_url,       # (1) Supabase URL to the PLY file
    ply_download,  # (2) Local PLY file for download
    ply_model,     # (3) PLY file for the 3D viewer
    glb_model,     # (4) Local GLB file for the 3D viewer
    glb_url        # (5) Supabase URL to the GLB file ← THIS IS WHAT YOU WANT
]
```

**Access the GLB URL:**

- Python: `result[5]`, or unpack as shown above
- JavaScript: `result.data[5]`

## Requirements

### Input Requirements

- **Minimum images**: 2 (though 3+ is recommended for better results)
- **Image resolution**: All images should have the same resolution
- **Supported formats**: JPG, PNG
- **Recommended**: 3-10 images of the same scene from different viewpoints

### Processing Time

- **3 images**: ~30-60 seconds (with GPU)
- **5+ images**: ~60-120 seconds
- Depends on image resolution and GPU availability

### Output Files

- **GLB file**: typically 5-20 MB
- **PLY file**: typically 50-200 MB
- Both files are uploaded to your Supabase Storage bucket

## Error Handling

Common errors and solutions:

### "Supabase credentials not set"

```bash
# Solution: Set environment variables on your Space
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_KEY=your-key
SUPABASE_BUCKET=outputs
```

### "Payload too large"

```python
# Solution: Increase the Supabase bucket file size limit
# Dashboard > Storage > Settings > File size limit
```

### "The number of input images should be greater than 1"

```python
# Solution: Provide at least 2 images
images = ["img1.jpg", "img2.jpg", "img3.jpg"]
```

### "The resolution of the input image should be the same"

```python
# Solution: Resize images to the same resolution before uploading.
# Note: this overwrites the originals and does not preserve aspect ratio.
from PIL import Image

def resize_images(image_paths, size=(512, 512)):
    for path in image_paths:
        img = Image.open(path)
        img = img.resize(size)
        img.save(path)
```

## Advanced Usage

### Batch Processing Multiple Sets

```python
from gradio_client import Client
import time


def batch_process(image_sets, space_url="your-username/InstantSplat"):
    """
    Process multiple sets of images.

    Args:
        image_sets: List of image path lists,
            e.g. [["set1_img1.jpg", "set1_img2.jpg"], ["set2_img1.jpg", ...]]
    """
    client = Client(space_url)
    results = []

    for i, images in enumerate(image_sets):
        print(f"Processing set {i+1}/{len(image_sets)}...")

        try:
            result = client.predict(images, api_name="/predict")
            glb_url = result[5]

            results.append({
                "set_index": i,
                "status": "success",
                "glb_url": glb_url,
                "image_count": len(images)
            })
        except Exception as e:
            results.append({
                "set_index": i,
                "status": "error",
                "error": str(e)
            })

        # Rate limiting - be nice to the server
        time.sleep(2)

    return results


# Usage
image_sets = [
    ["scene1_img1.jpg", "scene1_img2.jpg", "scene1_img3.jpg"],
    ["scene2_img1.jpg", "scene2_img2.jpg", "scene2_img3.jpg"],
]

results = batch_process(image_sets)

for r in results:
    if r["status"] == "success":
        print(f"Set {r['set_index']}: {r['glb_url']}")
    else:
        print(f"Set {r['set_index']} failed: {r['error']}")
```
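### Retrying Failed Requests

Free-tier Spaces can queue or drop requests under load, so it is worth wrapping `client.predict` in a retry loop. This is a minimal sketch with illustrative retry counts and delays, not part of the InstantSplat API itself:

```python
from gradio_client import Client
import time


def predict_with_retry(client, images, retries=3, delay=10):
    """Call /predict, waiting a bit longer after each failed attempt."""
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return client.predict(images, api_name="/predict")
        except Exception as e:
            last_error = e
            print(f"Attempt {attempt} failed: {e}")
            if attempt < retries:
                time.sleep(delay * attempt)  # back off before retrying
    raise last_error


# Usage
client = Client("your-username/InstantSplat")
result = predict_with_retry(client, ["img1.jpg", "img2.jpg", "img3.jpg"])
print("GLB URL:", result[5])
```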
### Async Processing (JavaScript)

```typescript
import { Client } from "@gradio/client";

async function processMultipleSets(imageSets: string[][]) {
  const client = await Client.connect("your-username/InstantSplat");

  // Process all sets in parallel
  const promises = imageSets.map(images =>
    client.predict("/predict", { inputfiles: images })
      .then(result => ({ status: "success", glb_url: result.data[5] }))
      .catch(error => ({ status: "error", error: error.message }))
  );

  return await Promise.all(promises);
}

// Usage
const imageSets = [
  ["set1_img1.jpg", "set1_img2.jpg"],
  ["set2_img1.jpg", "set2_img2.jpg"],
];

processMultipleSets(imageSets)
  .then(results => {
    results.forEach((r, i) => {
      if (r.status === "success") {
        console.log(`Set ${i}: ${r.glb_url}`);
      } else {
        console.error(`Set ${i} failed: ${r.error}`);
      }
    });
  });
```

## API Endpoint Reference

### GET /info

Returns API information and available endpoints.

### GET /docs

Swagger/OpenAPI documentation (when `show_api=True`).

### POST /api/predict

Main prediction endpoint.

**Request:**

```json
{
  "data": [
    [
      {"path": "file1.jpg"},
      {"path": "file2.jpg"},
      {"path": "file3.jpg"}
    ]
  ]
}
```

**Response:**

```json
{
  "data": [
    "video_path.mp4",
    "https://supabase.co/.../file.ply",
    "download_path.ply",
    "model_path.ply",
    "model_path.glb",
    "https://supabase.co/.../file.glb"
  ],
  "duration": 45.2
}
```

## Monitoring and Logs

View real-time logs in your HuggingFace Space:

1. Go to your Space page
2. Click the "Logs" tab
3. Watch processing in real time

## Rate Limits

- HuggingFace Spaces may have rate limits based on your tier
- Free tier: may queue requests during high load
- Pro tier: better availability and no queuing

## Support

For issues or questions:

- Check the logs in your Space
- Review error messages in API responses
- Ensure all environment variables are set
- Verify your Supabase bucket configuration (see the sketch below)
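If uploads keep failing, you can confirm the bucket and credentials directly with the `supabase` Python client. A minimal sketch, assuming the same `SUPABASE_URL`, `SUPABASE_KEY`, and `SUPABASE_BUCKET` values configured on your Space and the `supabase-py` v2 client API:

```python
import os

from supabase import create_client

# Use the same credentials that are set on the Space
url = os.environ["SUPABASE_URL"]
key = os.environ["SUPABASE_KEY"]
bucket = os.environ.get("SUPABASE_BUCKET", "outputs")

client = create_client(url, key)

# List a few objects in the bucket to confirm the credentials and bucket name
objects = client.storage.from_(bucket).list()
for obj in objects[:5]:
    print(obj.get("name"))
```

If this raises an authorization error, re-check the Space's environment variables before debugging anything else.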
## Example: Complete Workflow

```python
#!/usr/bin/env python3
"""
Complete workflow: Upload images → Process → Get GLB → Download
"""
from gradio_client import Client
import requests
import os


def complete_workflow(image_paths, output_dir="./outputs"):
    """Process images and download the resulting GLB file."""

    # 1. Process images
    print("🚀 Processing images...")
    client = Client("your-username/InstantSplat")
    result = client.predict(image_paths, api_name="/predict")

    # 2. Extract URLs
    glb_url = result[5]
    ply_url = result[1]

    if not glb_url or glb_url.startswith("Error"):
        print(f"❌ Processing failed: {glb_url}")
        return None

    print("✅ Processing complete!")
    print(f"   GLB URL: {glb_url}")
    print(f"   PLY URL: {ply_url}")

    # 3. Download the GLB file
    os.makedirs(output_dir, exist_ok=True)
    glb_filename = os.path.join(output_dir, "model.glb")

    print(f"📥 Downloading GLB to {glb_filename}...")
    response = requests.get(glb_url)

    if response.status_code == 200:
        with open(glb_filename, "wb") as f:
            f.write(response.content)
        print(f"✅ Downloaded: {glb_filename}")
        return {
            "glb_url": glb_url,
            "ply_url": ply_url,
            "local_glb": glb_filename
        }
    else:
        print(f"❌ Download failed: {response.status_code}")
        return None


if __name__ == "__main__":
    images = ["img1.jpg", "img2.jpg", "img3.jpg"]
    result = complete_workflow(images)

    if result:
        print(f"\n🎉 Success! Model saved to: {result['local_glb']}")
```

## Next Steps

1. Test with the Python example above
2. Integrate into your application
3. Set up error handling and retries
4. Monitor your Supabase storage usage
5. Consider batch processing for multiple scenes

Happy splatting! 🎨✨