# Logo-Mistral Q4_K_M
Mistral Small 24B model fine-tuned on the LLT (Logic Layer Tool) system, quantized to Q4_K_M.
## Description
Logo-Mistral is a language model based on Mistral Small 24B Instruct, fine-tuned to apply the LLT (Logic Layer Tool) formal reasoning system, a structured axiomatic framework that separates logical-deductive reasoning from heuristic-creative reasoning.
The model is distributed in GGUF format with Q4_K_M quantization, optimized for local execution with tools such as llama.cpp, Ollama, or LM Studio.
## The LLT System
LLT is a structured reasoning framework that operates in two main modes:
| Mode | Description |
|---|---|
| V-LOGIC | Formal-deductive reasoning (propositional logic, modus ponens/tollens, axioms) |
| H-LOGIC | Heuristic-creative reasoning (lateral thinking, problem solving, challenging assumptions) |
### LLT Output Structure
Each response follows a deterministic pipeline format:
```
[MODE: V-LOGIC / H-LOGIC]
[INGEST: <input parsing>]
[PARSE: <problem structure>]
[TRAP-SCAN: <cognitive trap detection>]
[EXECUTE: <axiom or lemma applied>]
[VERIFY: <consistency check>]
[ARTICULATE: <final response with confidence>]
```
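Because the pipeline format is deterministic, a client can post-process responses mechanically. Below is a minimal sketch of such a parser; the regex and the `parse_llt` helper are assumptions about how one *might* consume this output, not part of the model or an official API.

```python
import re

# Stage names taken from the LLT pipeline shown above.
LLT_STAGES = ["MODE", "INGEST", "PARSE", "TRAP-SCAN", "EXECUTE", "VERIFY", "ARTICULATE"]

def parse_llt(text: str) -> dict:
    """Extract each [STAGE: content] line of an LLT response into a dict."""
    result = {}
    for stage in LLT_STAGES:
        # Match "[STAGE: ...]" where the closing bracket ends the line.
        m = re.search(rf"\[{re.escape(stage)}:\s*(.*?)\]\s*$", text, re.MULTILINE)
        if m:
            result[stage] = m.group(1).strip()
    return result

response = """[MODE: V-LOGIC]
[INGEST: P1: Raining->Wet, P2: not Wet]
[ARTICULATE: It is not raining. Confidence: 1.0]"""
print(parse_llt(response)["MODE"])  # V-LOGIC
```

Missing stages are simply absent from the returned dict, so a caller can check coverage of the full pipeline if needed.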
## Training Dataset
The model was fine-tuned on a structured dataset of over 7,000 examples (instruction/output pairs) built around the LLT system. The dataset covers:
- Formal Logic: syllogisms, implications, contradictions, paradoxes
- Heuristic Problem Solving: classic puzzles, lateral thinking, brain teasers
- Probability and Combinatorics: dependent/independent events, complements, geometric distributions
- Philosophy of Mind and Epistemology: knowledge vs. belief, truth, ontological coherence
- Practical Reasoning: cause-and-effect analysis, functions, structural constraints
- Classic Puzzles: river crossings, weighing problems, bridge-and-torch puzzles, rope-burning timing
The dataset is bilingual (Italian / English).
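For supervised instruction tuning, such instruction/output pairs are commonly stored as JSONL. The sketch below shows one plausible record; the field names (`instruction`, `output`, `language`) are an assumption for illustration, since the actual dataset schema is not published here.

```python
import json

# Hypothetical shape of one training record: a common instruction-tuning
# convention, NOT a documented schema of the Logo-Mistral dataset.
record = {
    "instruction": "Use the LLT system: If it rains, the ground is wet. "
                   "The ground is not wet. What do you conclude?",
    "output": (
        "[MODE: V-LOGIC]\n"
        "[INGEST: P1: Raining->Wet, P2: not Wet]\n"
        "[ARTICULATE: It is not raining. Confidence: 1.0]"
    ),
    "language": "en",  # the dataset is bilingual (Italian / English)
}
line = json.dumps(record, ensure_ascii=False)  # one JSONL line
print(json.loads(line)["language"])  # en
```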
## How to use it
### With llama.cpp

```bash
./llama-cli \
  -m logo-mistral-q4_k_m.gguf \
  -p "Use the LLT system: If it rains, the ground is wet. The ground is not wet. What do you conclude?" \
  -n 512
```
### With Ollama

```bash
ollama run CiroN2022/logo-mistral-q4_k_m
```
### With LM Studio
Download the .gguf file and load it directly into the LM Studio interface.
## Usage Examples
### Example 1: Modus Tollens (V-LOGIC)
**Input:**
Use the LLT system: If it rains, the ground is wet. The ground is not wet. What do you conclude?
**Output:**

```
[MODE: V-LOGIC]
[INGEST: P1: Raining→Wet, P2: ¬Wet]
[PARSE: PROP: P→Q, PROP: ¬Q]
[TRAP-SCAN: No trap detected]
[EXECUTE: Apply A21 [P→Q, ¬Q ⊢ ¬P] (MODUS_TOLLENS)]
[VERIFY: A22 (NON_CONTRADICTION) satisfied]
[ARTICULATE: I conclude that it is not raining (¬P). If the implication is true and the consequent is false, the antecedent must be false. Confidence: 1.0]
```
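The A21 step applies modus tollens: from P→Q and ¬Q, infer ¬P. The rule's validity can be confirmed independently of the model with a brute-force truth table in plain Python:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: P -> Q is false only when P is true and Q is false."""
    return (not p) or q

# Modus tollens is valid iff every assignment satisfying both premises
# (P -> Q and not Q) also satisfies the conclusion (not P).
valid = all(
    (not p)                      # conclusion: ¬P
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q   # premises: P→Q and ¬Q
)
print(valid)  # True
```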
### Example 2: Lateral Thinking (H-LOGIC)
**Input:**
Use LLT: An electric train is traveling north at 100 km/h. The wind is blowing from the south at 50 km/h. Which way is the smoke going?
**Output:**

```
[MODE: H-LOGIC]
[INGEST: TRAIN: Electric, DIR: North, WIND: South]
[PARSE: DEFAULT: Trains_produce_smoke]
[TRAP-SCAN: T20 (Literal Overshoot)]
[EXECUTE: A42 [REVERSE→INSIGHT] - Reverse: 'Does an electric train produce smoke?']
[VERIFY: Electric trains = no combustion]
[ARTICULATE: The smoke isn't going anywhere. The train is electric. The hidden structure is the implicit assumption. Confidence: 1.0]
```
### Example 3: Probability
**Input:**
From a standard deck, you draw two cards without replacement. What is the probability the first is a heart and the second is a spade?
**Output:**

```
[MODE: V-LOGIC]
[INGEST: HEARTS: 13, SPADES: 13, TOTAL: 52, DRAWS: 2 without replacement]
[PARSE: DEPENDENT: P(heart first) × P(spade second | heart first)]
[TRAP-SCAN: No trap]
[EXECUTE: L5.45 [DEPENDENT-SUITS] - Multiply conditional probabilities]
[VERIFY: P(heart first) = 13/52 = 1/4. P(spade second | heart first) = 13/51. P = (1/4)×(13/51) = 13/204]
[ARTICULATE: 13/204, or approximately 6.37%. Confidence: 1.0]
```
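The arithmetic in the VERIFY step can be reproduced exactly with Python's `fractions` module:

```python
from fractions import Fraction

# Two dependent draws without replacement from a standard 52-card deck.
p_heart_first = Fraction(13, 52)        # 13 hearts out of 52 cards
p_spade_given_heart = Fraction(13, 51)  # 13 spades among the 51 remaining cards
p = p_heart_first * p_spade_given_heart
print(p)  # 13/204
```

`float(p)` gives about 0.0637, matching the ~6.37% in the ARTICULATE line.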
## Technical Specifications
| Parameter | Value |
|---|---|
| Base model | unsloth/Mistral-Small-24B-Instruct-2501 |
| Format | GGUF |
| Quantization | Q4_K_M |
| Approximate size | ~14 GB |
| Maximum context | 32k tokens |
| Parameters | 24B |
| Training task | Supervised instruction tuning |
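The ~14 GB figure is consistent with the quantization level. Q4_K_M stores weights at roughly 4.8 bits each on average (an approximation, since different tensors in the mixed K-quant format use different block sizes), so a back-of-the-envelope estimate is:

```python
# Rough size estimate for a 24B-parameter model at Q4_K_M.
# 4.8 bits/weight is an approximate average, not an exact spec.
params = 24e9
bits_per_weight = 4.8
size_gb = params * bits_per_weight / 8 / 1e9
print(round(size_gb, 1))  # 14.4
```

This lands close to the ~14 GB listed above; the exact file size also depends on embeddings, output layers, and metadata.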
## Limitations
- The model specializes in LLT formal reasoning; it may perform less well on generic, unstructured tasks.
- The Q4_K_M quantization introduces a slight loss of precision compared to the original float16 model.
- Some ambiguous scenarios may produce completely wrong output.
## License
This model is distributed under the Apache 2.0 license. The Mistral base model is subject to the Mistral AI license terms.
## Author
Created by [CiroN2022](https://huggingface.co/CiroN2022)