Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX
# Run inference directly in the terminal:
llama-cli -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX

Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX
# Run inference directly in the terminal:
./llama-cli -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX
# Run inference directly in the terminal:
./build/bin/llama-cli -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX

Use Docker
docker model run hf.co/BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX

Design
The design intention is to create a pseudo-philosophical, pseudo-spiritual, pseudo-counseling chatbot for sounding ideas off — like a mirror, really. This obviously does not constitute medical advice; if you are in need, seek professional help. The name Apocrypha-7B comes from the fact that it's fake: this isn't a guide, a friend, or a guru. At best, if the model works, it's a sounding board. But I think such things might still be helpful for organising one's own thoughts. The model should still be able to role-play, but given the counseling and theory-of-mind data, it will likely play best in a 'helper' role of some sort if you do use it for role-play.
This Mistral 7B model is a task arithmetic merge of Epiculous/Fett-uccine-7B (theory-of-mind and gnosis datasets), GRMenon/mental-mistral-7b-instruct-autotrain (mental health counseling conversations dataset), and teknium/Hermes-Trismegistus-Mistral-7B (OpenHermes + occult datasets).
I will throw a GGUF or two inside a subfolder here.
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: ./Hermes-Trismegistus-7B
    parameters:
      weight: 0.35
  - model: ./mental-mistral-7b
    parameters:
      weight: 0.39
  - model: ./Fett-uccine-7B
    parameters:
      weight: 0.45
merge_method: task_arithmetic
base_model: ./Mistral-7B-v0.1
dtype: bfloat16
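The task_arithmetic method builds each merged tensor as the base tensor plus a weighted sum of per-model "task vectors" (finetune minus base). A minimal numeric sketch of that combination, using plain Python lists in place of real model tensors — the values are illustrative, not actual weights:

```python
# Stand-in for one base-model weight tensor (./Mistral-7B-v0.1 above).
base = [0.0, 1.0, 2.0]

# (finetuned tensor, merge weight) pairs, mirroring the YAML config.
finetunes = {
    "hermes": ([0.2, 1.1, 2.3], 0.35),
    "mental": ([0.1, 0.8, 2.1], 0.39),
    "fett":   ([0.3, 1.2, 1.9], 0.45),
}

# merged = base + sum_i weight_i * (finetune_i - base)
merged = list(base)
for tensor, weight in finetunes.values():
    for j, (t, b) in enumerate(zip(tensor, base)):
        merged[j] += weight * (t - b)

print(merged)
```

Note that the weights (0.35 + 0.39 + 0.45) sum to more than 1; task arithmetic does not require them to be normalised, unlike a plain linear average.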
Resources used:
https://huggingface.co/teknium/Hermes-Trismegistus-Mistral-7B
https://huggingface.co/GRMenon/mental-mistral-7b-instruct-autotrain
https://huggingface.co/Epiculous/Fett-uccine-7B

Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX
# Run inference directly in the terminal:
llama-cli -hf BlueNipples/Apocrypha-7b:IQ2_XXS_IMATRIX
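Whichever install route you use, llama-server exposes an OpenAI-compatible HTTP API (by default on port 8080; configurable with --port). A sketch of building a chat-completions request in Python — the prompt text and model field are illustrative, and the send step is commented out since it requires the server to be running:

```python
import json

# Illustrative request body for POST /v1/chat/completions.
payload = {
    "model": "Apocrypha-7b",  # hypothetical name; the server reports its own
    "messages": [
        {"role": "system", "content": "You are a reflective sounding board."},
        {"role": "user", "content": "Help me untangle my thoughts on a decision."},
    ],
    "temperature": 0.7,
}

# To actually send it (assumes llama-server is running on localhost:8080):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())

print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client library should also work by pointing its base URL at the local server.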