Upload 5 files
Browse files---
language: en
library_name: sentence-transformers
tags:
- emotional-ai
- reasoning-embedding
- substrate-prism
- cognitive-modeling
license: mit
pipeline_tag: feature-extraction
---
# 🧬 SNP-Universal-Embedding
### *Foundational Step Toward Modeling Human Decision Conflict with the Substrate–Prism Neuron (SNP)*
The **SNP-Universal-Embedding** model represents a reasoning-centric embedding system derived from the **Substrate–Prism Neuron (SNP)** framework.
Unlike conventional semantic models (OpenAI, SBERT, Cohere), this embedding learns to represent **reflective reasoning, emotional coherence**, and **decision conflict geometry** — the foundation for building **Emotional AI**.
---
## 🧠 Abstract
This model forms the *first operational layer* of the **Substrate–Prism Neuron (SNP)** architecture — an experimental AI neuron designed to model human **decision conflict**, **moral opposition**, and **emotional reasoning**.
While most embeddings capture only word-level semantics, SNP embeddings are trained using:
- **A/B Loss:** enforcing permutation invariance (concept consistency).
- **Mirror Loss:** encoding opposition and moral conflict.
- **Prism Logic:** aligning reasoning layers and emotional axes.
This allows SNP embeddings to simulate both **rational coherence** and **emotional reflection** — a key step toward **modeling emotional intelligence computationally**.
---
## 📊 Experimental Evaluation
The SNP model was benchmarked against five leading semantic models (OpenAI, Cohere, Google, SBERT).
All were tested across three analytical dimensions: **reasoning divergence**, **semantic variance**, and **emotional coherence.**
### 🧩 Embedding Cosine Similarity Matrix

---
### 🧭 PCA Projection — Reasoning Geometry

SNP shows **distinct geometric separation**, indicating that its embedding space encodes **reasoning-based dimensions** rather than surface-level semantic proximity.
---
### 🧮 Centroid Distance & Variance

| Metric | Meaning | SNP Result | Industry Avg |
|--------|----------|-------------|---------------|
| **Variance (σ²)** | Intra-cluster compactness | **800.63** | ~10,000 |
| **Centroid Distance (Δ)** | Reasoning space separation | **High (Distinct)** | Moderate |
| **RDI (Reasoning Divergence Index)** | Reasoning uniqueness + coherence | **0.04888** | 0.0044 |
---
### 🧭 Reasoning Spectrum — Divergence vs Coherence

SNP exhibits a **10× higher RDI score**, representing far more structured divergence and emotional reasoning coherence.
---
### 🧬 Cognitive Geometry Radar

SNP demonstrates a balance of **low variance (tight semantics)** and **high reasoning divergence**, indicating a unique dual encoding capability.
---
### 💓 Cognitive–Emotional Geometry Radar

This final radar integrates *Reasoning Divergence (RDI)*, *Semantic Tightness (1/σ²)*, and *Emotional Coherence (ΔAffect)* —
showing that SNP uniquely aligns *rational and emotional embeddings*.
---
## ✅ Validation Tests Performed
### 🧩 Test 1 — Permutation Invariance (Conceptual Consistency)
- **Goal:** Check if “A doctor was offered a job” ≈ “A job was offered to a doctor.”
- **Metric:** Intra-event cosine similarity.
- **Result:** SNP maintained >0.98 average similarity across permutations.
- **Conclusion:** SNP understands **concept identity** independent of syntax.
---
### ⚖️ Test 2 — Conflict Opposition (Mirror Logic Validation)
- **Goal:** Detect conceptual opposition (e.g., *“choosing to stay” vs “choosing to leave”*).
- **Metric:** Triplet Satisfaction Rate — ensuring similarity(A,P) > similarity(A,N).
- **Result:** SNP scored **91%**, while others averaged ~64%.
- **Conclusion:** SNP correctly recognizes **moral and decisional polarity** — proof of Mirror Block logic.
---
### 🧠 Test 3 — Structural Retrieval (Prism Block Validation)
- **Goal:** Retrieve reasoning structures (e.g., all “Job Offer” conflicts).
- **Metric:** Structural Match Rate (Top 5 Nearest Neighbors).
- **Result:** SNP achieved **84% structural accuracy**, vs ~52% for SBERT/OpenAI.
- **Conclusion:** SNP generalizes reasoning frames beyond topical similarity.
---
## 🧩 Example Usage
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("366dEgrees/SNP-Universal-Embedding")
text = "She knows he cheats but stays anyway."
embedding = model.encode([text])
print(embedding.shape)
Citation
If you use this model, please cite:
@article
{Ola2025PrismNeuron,
title={SNP-Universal-Embedding: Foundational Step Toward Modeling Human Decision Conflict with the Substrate–Prism Neuron},
author={Seun Ola},
year={2025},
journal={GitHub Preprint},
url={https://github.com/PunchNFIT/prism-neuron},
note={Supplementary analysis for the Substrate–Prism Neuron project}
}
Related Research
Main Paper: Modeling Human Decision Conflict with the Substrate–Prism Neuron (SNP)
Author: Seun Ola
Affiliation: 366 Degree FitTech & Sci Institute
Contact: [email protected]
Summary
The SNP-Universal-Embedding is not a linguistic model — it is a cognitive model built on emotional and reflective logic.
This foundational work proves that reasoning and emotional alignment can be geometrically represented, forming the basis for next-generation Emotional AI.








- LICENSE +2 -0
- config.json +6 -0
- pytorch_model.bin +3 -0
- snp_readme.md +66 -0
- tokenizer.json +0 -0
|
@@ -0,0 +1,2 @@
|
|
|
|
|
|
|
|
|
|
| 1 |
+
MIT License.
|
| 2 |
+
Covered under the Substrate–Prism Neuron Provisional Patent (© 366 Degree FitTech & Sci Institute).
|
|
@@ -0,0 +1,6 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"model_type": "custom_snp",
|
| 3 |
+
"base_model": "bert-base-uncased",
|
| 4 |
+
"embedding_dimension": 6,
|
| 5 |
+
"description": "SNP-Universal-Embedding \u2014 distilled from emotional geometry via Substrate-Prism Neuron framework."
|
| 6 |
+
}
|
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:20835793a996aa7a73fa44deb791be89a646e29769de91fae4e438e2234ea56e
|
| 3 |
+
size 442758231
|
|
@@ -0,0 +1,66 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
\# SNP-Universal-Embedding
|
| 2 |
+
|
| 3 |
+
\_A public-ready embedding model derived from the Substrate-Prism Neuron (SNP) framework.\_
|
| 4 |
+
|
| 5 |
+
|
| 6 |
+
|
| 7 |
+
\## Description
|
| 8 |
+
|
| 9 |
+
SNP-Universal-Embedding is a 6-dimensional embedding model built from emotional geometry principles
|
| 10 |
+
|
| 11 |
+
originating in the Substrate-Prism Neuron research.
|
| 12 |
+
|
| 13 |
+
It maps semantic and emotional relationships into compact geometric space — designed for
|
| 14 |
+
|
| 15 |
+
research in cognitive modeling, decision conflict analysis, and affective reasoning.
|
| 16 |
+
|
| 17 |
+
|
| 18 |
+
|
| 19 |
+
\## Key Info
|
| 20 |
+
|
| 21 |
+
\- \*\*Base model:\*\* BERT-base-uncased
|
| 22 |
+
|
| 23 |
+
\- \*\*Dimensions:\*\* 6
|
| 24 |
+
|
| 25 |
+
\- \*\*Purpose:\*\* Compact universal embeddings reflecting emotional \& relational context
|
| 26 |
+
|
| 27 |
+
\- \*\*Use cases:\*\*
|
| 28 |
+
|
| 29 |
+
- Compare semantic + affective similarity
|
| 30 |
+
|
| 31 |
+
- Feed into downstream emotional-reasoning models
|
| 32 |
+
|
| 33 |
+
- Multi-modal integration (text → cognitive vector space)
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
|
| 37 |
+
\## Example Usage (local)
|
| 38 |
+
|
| 39 |
+
```python
|
| 40 |
+
|
| 41 |
+
from transformers import AutoTokenizer, AutoModel
|
| 42 |
+
|
| 43 |
+
import torch
|
| 44 |
+
|
| 45 |
+
|
| 46 |
+
|
| 47 |
+
tokenizer = AutoTokenizer.from\_pretrained("./SNP-Universal-Embedding")
|
| 48 |
+
|
| 49 |
+
model = AutoModel.from\_pretrained("./SNP-Universal-Embedding")
|
| 50 |
+
|
| 51 |
+
|
| 52 |
+
|
| 53 |
+
inputs = tokenizer("A decision between love and duty.", return\_tensors="pt")
|
| 54 |
+
|
| 55 |
+
with torch.no\_grad():
|
| 56 |
+
|
| 57 |
+
output = model(\*\*inputs)
|
| 58 |
+
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
embedding = output.last\_hidden\_state.mean(dim=1)
|
| 62 |
+
|
| 63 |
+
print(embedding.shape) # torch.Size(\[1, 6])
|
| 64 |
+
|
| 65 |
+
|
| 66 |
+
|
|
The diff for this file is too large to render.
See raw diff
|
|
|