🧠 Logo-Mistral Q4_K_M

Mistral Small 24B model fine-tuned on the LLT (Logic Layer Tool) system — quantized Q4_K_M



📌 Description

Logo-Mistral is a language model based on Mistral Small 24B Instruct, fine-tuned to apply LLT (Logic Layer Tool), a structured axiomatic framework that separates logical-deductive reasoning from heuristic-creative reasoning.

The model is distributed in GGUF quantized Q4_K_M format, optimized for local execution using tools such as llama.cpp, Ollama, or LM Studio.


🔬 The LLT System

LLT is a structured reasoning framework that operates in two main modes:

| Mode | Description |
|------|-------------|
| V-LOGIC | Formal-deductive reasoning (propositional logic, modus ponens/tollens, axioms) |
| H-LOGIC | Heuristic-creative reasoning (lateral thinking, problem solving, challenging assumptions) |

LLT Output Structure

Each response follows a deterministic pipeline format:

[MODE: V-LOGIC / H-LOGIC]
[INGEST: <input parsing>]
[PARSE: <problem structure>]
[TRAP-SCAN: <cognitive trap detection>]
[EXECUTE: <axiom or lemma applied>]
[VERIFY: <consistency check>]
[ARTICULATE: <final response with confidence>]
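Because every stage is tagged, the pipeline is easy to post-process. As a sketch (not part of the release, all names are illustrative), a small Python parser can pull each stage out with a regular expression:

```python
import re

# Hypothetical helper: parse an LLT-formatted response into a dict,
# assuming each stage appears as "[STAGE: content]" on its own line.
LLT_STAGES = ["MODE", "INGEST", "PARSE", "TRAP-SCAN", "EXECUTE", "VERIFY", "ARTICULATE"]

def parse_llt(text: str) -> dict:
    """Extract each pipeline stage from an LLT response."""
    stages = {}
    for stage in LLT_STAGES:
        # Match "[STAGE: ...]" up to the closing bracket at end of line.
        m = re.search(rf"\[{re.escape(stage)}:\s*(.*?)\]\s*$", text, re.MULTILINE)
        if m:
            stages[stage] = m.group(1)
    return stages

example = """[MODE: V-LOGIC]
[INGEST: P1: Raining->Wet, P2: not Wet]
[ARTICULATE: It is not raining. Confidence: 1.0]"""
print(parse_llt(example)["MODE"])  # V-LOGIC
```

A parser like this makes it straightforward to log the TRAP-SCAN and VERIFY stages separately from the final answer.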

📊 Training Dataset

The model was fine-tuned on a structured dataset of over 7,000 examples (instruction/output pairs) built around the LLT system. The dataset covers:

  • Formal Logic — syllogisms, implications, contradictions, paradoxes
  • Heuristic Problem Solving — classic puzzles, lateral thinking, brain teasers
  • Probability and Combinatorics — dependent/independent events, complements, geometric distributions
  • Philosophy of Mind and Epistemology — knowledge vs. belief, truth, ontological coherence
  • Practical Reasoning — cause-and-effect analysis, functions, structural constraints
  • Classic Puzzles — river crossing, coin weighing, bridge-and-torch, rope-burning and timing problems

The dataset is bilingual (Italian / English).


🚀 How to use it

With llama.cpp

./llama-cli \
-m logo-mistral-q4_k_m.gguf \
-p "Use the LLT system: If it rains, the ground is wet. The ground is not wet. What do you conclude?" \
-n 512

With Ollama

ollama run CiroN2022/logo-mistral-q4_k_m

With LM Studio

Download the .gguf file and load it directly into the LM Studio interface.


💬 Usage Examples

Example 1 — Modus Tollens (V-LOGIC)

Input:

Use the LLT system: If it rains, the ground is wet. The ground is not wet. What do you conclude?


Output:

[MODE: V-LOGIC]
[INGEST: P1: Raining→Wet, P2: ¬ Wet]
[PARSE: PROP: P→Q, PROP: ¬ Q]
[TRAP-SCAN: No trap detected]
[EXECUTE: Apply A21 [P→Q, ¬ Q ⊢ ¬ P] (MODUS_TOLLENS)]
[VERIFY: A22 (NON_CONTRADICTION) satisfied]
[ARTICULATE: I conclude that it is not raining (¬ P). If the implication is true and the consequent is false, the antecedent must be false. Confidence: 1.0]
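The A21 rule applied above can be checked mechanically. A minimal sketch (not from the model card) that enumerates all truth assignments confirms that modus tollens is valid:

```python
from itertools import product

# Verify that whenever (P -> Q) and (not Q) both hold,
# (not P) must also hold, for every truth assignment.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

def modus_tollens_valid() -> bool:
    for p, q in product([False, True], repeat=2):
        premises = implies(p, q) and (not q)
        if premises and p:  # a counterexample would have P true
            return False
    return True

print(modus_tollens_valid())  # True
```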

Example 2 — Lateral Thinking (H-LOGIC)

Input:

Use LLT: An electric train is traveling north at 100 km/h. The wind is blowing from the south at 50 km/h. Which way is the smoke going?

Output:

[MODE: H-LOGIC]
[INGEST: TRAIN: Electric, DIR: North, WIND: South]
[PARSE: DEFAULT: Trains_produce_smoke]
[TRAP-SCAN: T20 (Literal Overshoot)]
[EXECUTE: A42 [REVERSE→INSIGHT] - Reverse: 'Does an electric train produce smoke?']
[VERIFY: Electric trains = no combustion]
[ARTICULATE: The smoke isn't going anywhere. The train is electric. The hidden structure is the implicit assumption. Confidence: 1.0]

Example 3 — Probability

Input:

From a standard deck, you draw two cards without replacement. What is the probability the first is a heart and the second is a spade?

Output:

[MODE: V-LOGIC]
[INGEST: HEARTS: 13, SPADES: 13, TOTAL: 52, DRAWS: 2 without replacement]
[PARSE: DEPENDENT: P(heart first) × P(spade second | heart first)]
[TRAP-SCAN: No trap]
[EXECUTE: L5.45 [DEPENDENT-SUITS] - Multiply conditional probabilities]
[VERIFY: P(heart first) = 13/52 = 1/4. P(spade second | heart first) = 13/51. P = (1/4)×(13/51) = 13/204]
[ARTICULATE: 13/204, or approximately 6.37%. Confidence: 1.0]
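The VERIFY arithmetic above can be reproduced exactly with Python's `fractions` module:

```python
from fractions import Fraction

# Exact arithmetic for the two dependent draws.
p_heart_first = Fraction(13, 52)    # 13 hearts out of 52 cards
p_spade_second = Fraction(13, 51)   # 13 spades out of the 51 remaining
p = p_heart_first * p_spade_second
print(p)              # 13/204
print(float(p))       # ~0.0637, i.e. about 6.37%
```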

⚙️ Technical Specifications

| Parameter | Value |
|-----------|-------|
| Base model | unsloth/Mistral-Small-24B-Instruct-2501 |
| Format | GGUF |
| Quantization | Q4_K_M |
| Approximate size | ~14 GB |
| Maximum context | 32k tokens |
| Parameters | 24B |
| Architecture | llama |
| Training task | Supervised instruction tuning |
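The ~14 GB figure is consistent with a back-of-envelope estimate, assuming roughly 4.85 bits per weight for Q4_K_M (an approximation; the true average varies by tensor layout and is not stated in this card):

```python
# Rough size estimate for a 24B-parameter model at Q4_K_M.
# 4.85 bits/weight is an assumed average, not an official figure.
params = 24e9
bits_per_weight = 4.85
size_gb = params * bits_per_weight / 8 / 1e9
print(f"{size_gb:.1f} GB")
```

This lands in the mid-14 GB range, matching the table above.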

⚠️ Limitations

  • The model specializes in LLT formal reasoning; it may underperform on generic, unstructured tasks.
  • Q4_K_M quantization introduces a slight loss of precision compared to the original float16 weights.
  • Ambiguous scenarios may occasionally produce entirely incorrect output.

📄 License

This model is distributed under the Apache 2.0 license. The Mistral base model is subject to the Mistral AI license terms.


👤 Author

Created by CiroN2022 🔗 https://huggingface.co/CiroN2022
