# EthereumGPT-4B-GGUF
Qwen3-4B-Instruct fine-tuned on ~21,000 Ethereum Q&A examples (19,893 synthetic + 1,040 amplified anchor facts + 69 refusal examples) derived from the Eth R&D Discord archive. Specializes in Ethereum protocol development, EIPs, consensus mechanisms, the EVM, and client implementations.
## Quick Start

### Ollama (easiest)

```bash
# One command: includes system prompt, chat template, everything
ollama run satyajitdas/ethereumgpt
```
### LM Studio

- Search for `satyajitdas/EthereumGPT-4B-GGUF` in the model browser and download `EthereumGPT-4B-Q8_0.gguf`.
- **Important:** Set the system prompt to:

  > You are EthereumGPT, an AI assistant specializing in Ethereum protocol development, smart contracts, consensus mechanisms, and the broader Ethereum ecosystem. You draw on deep knowledge of EIPs, client implementations (Geth, Prysm, Lighthouse, Reth), the EVM, Solidity, and Ethereum R&D discussions.

- **Important:** Ensure the chat template / prompt format is set to Qwen3 or ChatML. If responses contain `</tool_call>` tags or the model identifies as "Qwen", the template is wrong. The correct format uses `<|im_start|>` and `<|im_end|>` tokens.
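If you are driving the GGUF file from your own code rather than a chat UI, you can build the Qwen3/ChatML prompt manually. The sketch below shows the expected token layout for a single turn; the system prompt text is the one given above, and `build_chatml_prompt` is an illustrative helper, not part of any library.

```python
# Minimal sketch of the Qwen3/ChatML prompt layout this model expects.
# Each turn is wrapped in <|im_start|>{role} ... <|im_end|> tokens.

SYSTEM_PROMPT = (
    "You are EthereumGPT, an AI assistant specializing in Ethereum protocol "
    "development, smart contracts, consensus mechanisms, and the broader "
    "Ethereum ecosystem."
)

def build_chatml_prompt(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Format a single-turn conversation in ChatML, ending with an open
    assistant turn so the model generates the reply."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("What is Fusaka?")
```

If your frontend already applies a Qwen3/ChatML template, do not stack this formatting on top of it; this is only for raw-completion backends.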
### Ollama from HuggingFace

```bash
# Alternative: run directly from this HuggingFace repo
ollama run hf.co/satyajitdas/EthereumGPT-4B-GGUF:Q8_0
```
### llama.cpp

```bash
./llama-cli -m EthereumGPT-4B-Q8_0.gguf \
  --chat-template chatml \
  -sys "You are EthereumGPT, an AI assistant specializing in Ethereum protocol development, smart contracts, consensus mechanisms, and the broader Ethereum ecosystem." \
  -cnv
```
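Once the model is pulled with Ollama, you can also query it programmatically over Ollama's local REST API (default port 11434). A minimal standard-library sketch, assuming an Ollama server is running locally; the system prompt is already baked into the Modelfile, so only a user message is needed.

```python
import json
import urllib.request

MODEL = "satyajitdas/ethereumgpt"
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_chat_payload(question: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint.
    The Modelfile already carries the system prompt, so only a
    user message is required here."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "stream": False,
    }

def ask_ethereumgpt(question: str) -> str:
    """Send one question to a locally running Ollama server."""
    body = json.dumps(build_chat_payload(question)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama server):
# print(ask_ethereumgpt("What does ePBS stand for?"))
```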
## Available Files

| File | Size | Description |
|---|---|---|
| `EthereumGPT-4B-Q8_0.gguf` | 4.0 GB | 8-bit quantized (recommended) |
| `EthereumGPT-4B-f16.gguf` | 7.5 GB | Full fp16 precision |
## Training Details (v6)

- Base model: Qwen/Qwen3-4B-Instruct-2507 (bf16)
- Method: Full LoRA (rank 64, alpha 64, scale 1.0, dropout 0.05) via MLX on an Apple M4 Max
- Data: ~21,000 examples total:
  - 19,893 synthetic Q&A pairs generated from 18,245 Ethereum R&D Discord conversations
  - 26 unique anchor Q&As × 40 copies = 1,040 amplified anchor facts (Fusaka, ePBS, Pectra, developer identity)
  - 69 refusal/negative examples for unknown entities
- Data generation: each Discord conversation was fed to Qwen3-Coder-30B, which distilled it into expert-quality Q&A pairs with date-aware Ethereum context
- Training: 5 epochs, learning rate 1e-5 with cosine decay, ~53 hours
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj (all layers)
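For intuition on what the rank-64 LoRA above changes: each target matrix W receives a low-rank update W' = W + scale · (alpha/r) · B·A, and with alpha = r = 64 and scale 1.0 the effective multiplier is 1. The sketch below uses an illustrative 2048×2048 matrix (not the model's actual dimensions) to show the trainable-parameter saving.

```python
import numpy as np

d, r = 2048, 64            # d is an illustrative hidden size; r is the LoRA rank from the card
alpha, scale = 64.0, 1.0   # alpha 64, scale 1.0, as in the training config

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen pretrained weight (stays untouched)
A = rng.normal(size=(r, d)) * 0.01   # trainable, small random init
B = np.zeros((d, r))                 # trainable, zero init => no change at step 0

# Effective weight seen at inference time
W_eff = W + scale * (alpha / r) * (B @ A)

# Trainable parameters: two thin matrices instead of one dense one
lora_params = A.size + B.size        # 2 * d * r
full_params = W.size                 # d * d
ratio = lora_params / full_params    # fraction of the dense matrix that is trained
```

Because B starts at zero, the adapted model is exactly the base model before training; here only 6.25% of each target matrix's parameters are trainable, which is what makes a 4B fine-tune feasible on a single M4 Max.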
## What It Knows
The model has been trained on real Ethereum R&D discussions spanning 2019-2026, covering:
- Protocol development: EIP-1559 (fee market), EIP-4844 (proto-danksharding), The Merge (PoW to PoS), Pectra fork, Fusaka
- Consensus: Beacon chain, forkchoice, attestations, validator lifecycle, slashing, ePBS (enshrined Proposer-Builder Separation)
- Execution: EVM opcodes, EOF, DELEGATECALL, state management, gas mechanics
- Infrastructure: Client implementations (Geth, Prysm, Lighthouse, Reth, Nethermind), testnets, devnets
- Research: ePBS, Verkle trees, PeerDAS, data availability sampling, MEV/PBS
- People: Core developers like Potuz, Terrence (Prysm), and others from Ethereum R&D
## Examples

```
Q: What is Fusaka?
A: Fusaka is the name for an upcoming Ethereum hard fork that combines the Fulu
(consensus layer) and Osaka (execution layer) upgrades. Fusaka is the next major
network upgrade planned after Pectra. Key items discussed for inclusion include
PeerDAS (EIP-7594) for improved data availability.

Q: What does ePBS stand for?
A: ePBS stands for enshrined Proposer-Builder Separation. It is a proposed
protocol-level change to Ethereum that would enshrine the separation of block
proposers and block builders directly into the consensus protocol, rather than
relying on external middleware like MEV-Boost.
```
## Limitations
- Trained on Discord conversations up to early 2026; may not reflect the very latest protocol changes
- Best at factual Q&A about Ethereum internals; not designed for code generation or smart contract auditing
- Requires the correct Qwen3/ChatML chat template and system prompt for best results
## License

Apache 2.0 (same as the base Qwen3-4B-Instruct model)