Instella: Fully Open Language Models with Stellar Performance • Paper • 2511.10628 • Published 23 days ago
Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms • Article • Published 17 days ago
Building a Healthcare Robot from Simulation to Deployment with NVIDIA Isaac • Article • Published Oct 29
Llasa Goes RL: Training LLaSA with GRPO for Improved Prosody and Expressiveness • Article • Published Nov 5
NVIDIA Releases 3 Million Sample Dataset for OCR, Visual Question Answering, and Captioning Tasks • Article • Published Aug 11
EDGE-GRPO: Entropy-Driven GRPO with Guided Error Correction for Advantage Diversity • Paper • 2507.21848 • Published Jul 29
GLiNER2: An Efficient Multi-Task Information Extraction System with Schema-Driven Interface • Paper • 2507.18546 • Published Jul 24
ULD Loss (Universal LLMs Distillation) • Collection • The ULD loss, based on optimal transport, enables distillation across different LLM families without requiring shared tokenizers. • 14 items • Updated Jul 15
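The cross-tokenizer claim in the ULD collection description can be made concrete with a minimal sketch. The idea, under the assumption that the collection's loss follows the usual optimal-transport formulation, is that comparing *sorted* probability vectors removes any dependence on token identity, so teacher and student vocabularies never need to align; the 1-Wasserstein distance between two discrete distributions then reduces to an L1 distance between their sorted values. The function name and padding scheme below are illustrative, not the collection's actual API.

```python
import numpy as np

def uld_loss(teacher_probs: np.ndarray, student_probs: np.ndarray) -> float:
    """Sketch of a ULD-style loss (illustrative, not the collection's API).

    Sorting both probability vectors in descending order makes the
    comparison tokenizer-agnostic; the shorter (smaller-vocabulary)
    vector is zero-padded so both have the same length.
    """
    t = np.sort(teacher_probs)[::-1]  # teacher probs, descending
    s = np.sort(student_probs)[::-1]  # student probs, descending
    n = max(len(t), len(s))
    t = np.pad(t, (0, n - len(t)))    # pad smaller vocabulary with zeros
    s = np.pad(s, (0, n - len(s)))
    # 1-Wasserstein distance between the sorted distributions.
    return float(np.abs(t - s).sum())
```

Note that the two vocabularies can have different sizes: a teacher distribution over 3 tokens and a student distribution over 2 tokens compare directly, which is exactly what makes distillation across LLM families possible.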