---
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: robotics
tags:
- robotics
- opencv
- tpu
- gpu
- tensorflow
---

# 🧠⚑ Protoge-LG: Scalable Object Detection & Tracking (GPU/TPU-Ready)

[![TensorFlow](https://img.shields.io/badge/framework-TensorFlow-orange)](https://www.tensorflow.org/)
[![Detection & Tracking](https://img.shields.io/badge/task-Detection%20%26%20Tracking-blue)]()
[![Labels](https://img.shields.io/badge/labels-1000%2B-green)]()
[![Accelerated](https://img.shields.io/badge/accelerator-GPU%2FTPU-red)]()

**Protoge-LG** is a large-scale object detection and tracking model built with computer vision techniques and TensorFlow. Optimized for **GPU and TPU inference**, it can detect and track over **1000 object classes** in real time, with the flexibility to restrict tracking to a selected subset of labels.

---

## πŸš€ Highlights

- 🧠 Detects & tracks **1000+ object categories**
- πŸŒ€ Supports both **full-label** and **targeted detection modes**
- ⚑ Accelerated with **GPU or TPU support**
- 🧰 Built on TensorFlow's Object Detection API and integrates with OpenCV for real-time video processing
- πŸ“¦ Exportable to TensorFlow Lite and TF.js, and compatible with Google Cloud TPU infrastructure

---

## πŸ”¬ Applications

- Advanced robotics and autonomous systems
- Industrial visual inspection and surveillance
- Healthcare AI in smart facilities
- Cloud-scale computer vision pipelines

---

## βš™οΈ How It Works

```python
import tensorflow as tf

# Load the SavedModel
model = tf.saved_model.load("path/to/protoge-lg")

# GPU acceleration is picked up automatically when available;
# wrap inference in a tf.distribute.TPUStrategy when deploying on TPU.

# Prepare a batched input image sized to the model's expected resolution
input_tensor = tf.expand_dims(tf.zeros([640, 640, 3], dtype=tf.uint8), axis=0)

# Run full detection, or pass custom labels for targeted tracking
target_labels = ["robot arm", "conveyor belt", "monitor"]
detections = model(input_tensor, labels=target_labels)
```

## πŸ§ͺ Supported Modes

### πŸ” Full Mode

Detect and track all 1000+ categories in a single pass. Best suited for exploratory environments or full-scene awareness.
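Full-mode output is typically post-filtered by confidence before tracking. Below is a minimal, self-contained sketch of that step; the detection layout (parallel lists of class names, scores, and boxes) is an assumption modeled on common TensorFlow Object Detection API conventions, not the model's documented signature.

```python
# Hypothetical post-processing for full-mode output. The parallel-list
# detection layout here is an illustrative assumption, not the model's
# confirmed output format.

def filter_by_score(classes, scores, boxes, min_score=0.5):
    """Keep only detections whose confidence meets the threshold."""
    kept = [(c, s, b) for c, s, b in zip(classes, scores, boxes) if s >= min_score]
    # Return the surviving detections as parallel lists again
    return ([c for c, _, _ in kept],
            [s for _, s, _ in kept],
            [b for _, _, b in kept])

# Synthetic full-mode output for illustration
classes = ["person", "monitor", "robot arm"]
scores = [0.91, 0.42, 0.77]
boxes = [(0.1, 0.1, 0.4, 0.5), (0.5, 0.2, 0.7, 0.6), (0.2, 0.6, 0.9, 0.95)]

classes, scores, boxes = filter_by_score(classes, scores, boxes, min_score=0.5)
print(classes)  # the low-confidence "monitor" detection is dropped
```

A higher `min_score` trades recall for precision, which matters when the tracker downstream is sensitive to spurious detections.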
### 🎯 Selective Mode

Pass in a list of labels to optimize speed and accuracy on known targets. Ideal for constrained detection environments such as industrial automation or healthcare.

---

## πŸ“Š Performance Overview

| Metric            | Value        |
|-------------------|--------------|
| Classes supported | 1000+        |
| Acceleration      | GPU / TPU    |
| Model size        | ~60 MB       |
| Inference speed   | < 40 ms (TPU)|
| Video tracking    | up to 60 FPS |

---

## πŸ“– Citation

If you use **Protoge-LG** in your work, please consider citing:

```
@misc{protogelg2025,
  title={Protoge-LG: Scalable Object Detection \& Tracking},
  author={Lang, John},
  year={2025},
  howpublished={\url{https://huggingface.co/langutang/protege-lg}}
}
```

---

## πŸ“¬ Contact & License

- πŸ“« For questions or collaboration, open an issue or contact the maintainer.
- βš–οΈ License: MIT (see the LICENSE file for details)

---

## 🌠 Hugging Face Model Hub

To load from Hugging Face:

```python
from transformers import AutoFeatureExtractor, TFModelForObjectDetection

model = TFModelForObjectDetection.from_pretrained("langutang/protege-lg")
extractor = AutoFeatureExtractor.from_pretrained("langutang/protege-lg")
```

---

Transform your edge AI projects with the power of **Protoge-LG** 🌍
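As a closing usage note, the selective mode described above can also be approximated client-side by filtering full-mode output down to a target label set. A minimal, self-contained sketch follows; the detection layout (parallel lists of class names and scores) is an illustrative assumption, and the real model instead accepts the label list directly at inference time.

```python
# Client-side approximation of selective mode: keep only detections whose
# class is in the target set. The parallel-list layout is an assumption
# for illustration, not the model's confirmed output format.

def select_labels(classes, scores, target_labels):
    """Return (class, score) pairs whose class is in the target set."""
    targets = set(target_labels)
    return [(c, s) for c, s in zip(classes, scores) if c in targets]

hits = select_labels(
    ["person", "conveyor belt", "monitor", "dog"],
    [0.88, 0.95, 0.61, 0.70],
    target_labels=["conveyor belt", "monitor"],
)
print(hits)  # [('conveyor belt', 0.95), ('monitor', 0.61)]
```

Filtering at inference time (the model's native selective mode) is preferable when supported, since it can skip work for unwanted classes rather than discarding their outputs afterward.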