---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
inference: true
tags:
- phi
- riley-ai
- zelgodiz
- transformer
- conversational
- code-generation
- invention-engine
- ai-agent
- custom-license
---
## Usage

Instructions for using Riley-01234/Zelgodiz with libraries, inference servers, and local apps.

### Transformers

Use Riley-01234/Zelgodiz with the Transformers library, either through the high-level `pipeline` helper or by loading the model and tokenizer directly:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Riley-01234/Zelgodiz")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Riley-01234/Zelgodiz")
model = AutoModelForCausalLM.from_pretrained("Riley-01234/Zelgodiz")
```
### vLLM

Install vLLM from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Riley-01234/Zelgodiz"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Riley-01234/Zelgodiz",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
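Any OpenAI-compatible client can talk to the server started above. As a minimal sketch, the body of the curl request can be built in Python like this (the `build_chat_request` helper is hypothetical, and the server is assumed to be listening on localhost:8000):

```python
import json

# Hypothetical helper that builds the same OpenAI-compatible payload
# sent by the curl command above.
def build_chat_request(model: str, messages: list) -> str:
    return json.dumps({"model": model, "messages": messages})

payload = build_chat_request(
    "Riley-01234/Zelgodiz",
    [{"role": "user", "content": "What is the capital of France?"}],
)
print(payload)
# POST this body to http://localhost:8000/v1/chat/completions
# with any HTTP client, setting Content-Type: application/json.
```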
### SGLang

Install SGLang from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Riley-01234/Zelgodiz" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Riley-01234/Zelgodiz",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Or use the official Docker image:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Riley-01234/Zelgodiz" \
    --host 0.0.0.0 \
    --port 30000
```
### Docker Model Runner

```shell
docker model run hf.co/Riley-01234/Zelgodiz
```
# Riley-01234
Riley Intelligence Lab prototype for advanced AI development using Phi and Transformers.
Designed to simulate intelligence, memory, and invention capabilities.
## Zelgodiz Model for Riley-AI

Zelgodiz is the official foundational model powering the Riley-AI Genesis Core, a modular intelligence engine engineered to simulate:
- Deep conversational memory
- Scientific and invention-based reasoning
- Dynamic context awareness
- Autonomous evolution and interface control
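As an illustration of the conversational-memory idea above, a minimal rolling chat buffer might look like the following (a sketch only; `ChatMemory` and its window size are hypothetical, not part of the released model):

```python
from collections import deque

class ChatMemory:
    """Keeps the most recent turns of a conversation (illustrative only)."""

    def __init__(self, max_turns: int = 8):
        # deque with maxlen evicts the oldest turn automatically.
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, content: str) -> None:
        self.turns.append({"role": role, "content": content})

    def as_messages(self) -> list:
        # Returns history in the chat-message format used elsewhere in this card.
        return list(self.turns)

memory = ChatMemory(max_turns=2)
memory.add("user", "Hello Riley, what do you remember?")
memory.add("assistant", "Our last conversation about inventions.")
memory.add("user", "Tell me more.")
print(len(memory.as_messages()))  # the oldest turn has been evicted
```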
## Training Overview

- Base Model: (e.g., `phi-1.5`, `mistral`, or `TinyLLaMA`)
- Fine-Tuned On: Custom Riley dataset
- Frameworks: Hugging Face Transformers, PEFT, PyTorch
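Since PEFT is listed among the frameworks, a parameter-efficient fine-tuning setup along these lines is plausible (a sketch only; the actual LoRA hyperparameters, target modules, and training data are not published):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA configuration; r, alpha, and dropout are assumptions.
base_model = AutoModelForCausalLM.from_pretrained("Riley-01234/Zelgodiz")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports adapter vs. base parameter counts
```

The adapter weights produced this way would then be merged or loaded alongside the base model at inference time.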
## License

This model is governed by the Zelgodiz Model License (ZML-1.0).
Redistribution, fine-tuning, or integration into commercial systems requires proper attribution and adherence to the ZML-1.0 terms.

For the full license terms, see the LICENSE file.
## Inference Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Riley-01234/Zelgodiz")
model = AutoModelForCausalLM.from_pretrained("Riley-01234/Zelgodiz")

inputs = tokenizer("Hello Riley, what do you remember?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```