Datasets:

Columns: `task` (large_string, 40 classes), `prompt` (large_string, 37 to 16k chars), `answer` (large_string, 0 to 10.5k chars), `metadata` (large_string, 247 to 26.9k chars), `cot` (large_string, 0 to 10.5k chars), `level` (int64, 0 to 3), `mode` (large_string, 3 classes).

| task | prompt | answer | metadata | cot | level | mode |
|---|---|---|---|---|---|---|
arithmetics | /trace Evaluate max(min(-4.2, 0.4), 2).
The answer is a number. | <trace>
min(-4.2, 0.4) = -4.2
max(-4.2, 2) = 2
</trace>
2 | {"expr": "max(min(-4.2, 0.4), 2)", "height": 4, "cot": "min(-4.2, 0.4) = -4.2\nmax(-4.2, 2) = 2", "_time": 0.002566099166870117, "_task": "arithmetics", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "min_depth": 4, "max_depth": 6, "float_prob": 0.25, "in_decimals": 1, "out_decimals": 4, "ou... | min(-4.2, 0.4) = -4.2
max(-4.2, 2) = 2 | 1 | cot |
arithmetics | Evaluate -11 - -2 / 4 * -5.4 + 7.
The answer is a number. | -6.7 | {"expr": "-11 - -2 / 4 * -5.4 + 7", "height": 5, "cot": "-2 / 4 = -0.5\n-0.5 * -5.4 = 2.7\n-11 - 2.7 = -13.7\n-13.7 + 7 = -6.7", "_time": 0.002050638198852539, "_task": "arithmetics", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "min_depth": 3, "max_depth": 5, "float_prob": 0.25, "in_decim... | -2 / 4 = -0.5
-0.5 * -5.4 = 2.7
-11 - 2.7 = -13.7
-13.7 + 7 = -6.7 | 0 | instruct |
arithmetics | Evaluate 15 / 3.0 + (15) * abs(-3 + -12 + -6 + -5).
The answer is a number.
Answer:
395
Correct? (Yes/No) | Yes | {"expr": "15 / 3.0 + (15) * abs(-3 + -12 + -6 + -5)", "height": 7, "cot": "15 / 3 = 5\n-3 + -12 = -15\n-15 + -6 = -21\n-21 + -5 = -26\nabs(-26) = 26\n15 * 26 = 390\n5 + 390 = 395", "_time": 0.002685546875, "_task": "arithmetics", "_level": 2, "_config": {"c": 1.0, "level": 2, "seed": null, "size": null, "min_depth": 5,... | 15 / 3 = 5
-3 + -12 = -15
-15 + -6 = -21
-21 + -5 = -26
abs(-26) = 26
15 * 26 = 390
5 + 390 = 395 | 2 | verification |
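The arithmetics rows can be replayed directly, since `min`, `max`, and `abs` in the prompts match Python's builtins; a minimal sketch re-evaluating the three expressions above:

```python
# Re-evaluate the three arithmetics expressions shown above with Python builtins.
e1 = max(min(-4.2, 0.4), 2)                        # traced answer: 2
e2 = -11 - -2 / 4 * -5.4 + 7                       # traced answer: -6.7
e3 = 15 / 3.0 + (15) * abs(-3 + -12 + -6 + -5)     # traced answer: 395
print(e1, round(e2, 4), e3)
```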
bayesian_association | /trace System:
P(X_0) = {'0': 0.89, '1': 0.11}
P(X_1|X_0=0) = {'0': 0.92, '1': 0.08}
P(X_1|X_0=1) = {'0': 0.67, '1': 0.33}
P(X_2|X_0=0, X_1=0) = {'0': 0.6, '1': 0.05, '2': 0.35}
P(X_2|X_0=0, X_1=1) = {'0': 0.28, '1': 0.26, '2': 0.46}
P(X_2|X_0=1, X_1=0) = {'0': 0.16, '1': 0.14, '2': 0.7}
P(X_2|X_0=1, X_1=1) = {'0... | <trace>
Goal: Compute Observational Probability: P(X_2 | X_1=1)
Elim order: ['X_0']
Sum out X_0 -> P(X_1=1, X_2) = {0: 0.03, 1: 0.02, 2: 0.05}
Normalize (sum=0.11) -> P(X_2 | X_1=1) = {0: 0.3, 1: 0.21, 2: 0.49}
</trace>
{0: 0.3, 1: 0.21, 2: 0.49} | {"target_var_values": [0, 1, 2], "bif_description": "// CANONICAL\n// variable: X_0\n// state_names: {'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_1\n// state_names: {'X_1': [0, 1], 'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_2\n// state_names: {'X_2': [0, 1, 2], 'X_0': [0, 1]... | Goal: Compute Observational Probability: P(X_2 | X_1=1)
Elim order: ['X_0']
Sum out X_0 -> P(X_1=1, X_2) = {0: 0.03, 1: 0.02, 2: 0.05}
Normalize (sum=0.11) -> P(X_2 | X_1=1) = {0: 0.3, 1: 0.21, 2: 0.49} | 2 | cot |
bayesian_association | System:
P(X_1) = {'0': 0.16, '1': 0.21, '2': 0.63}
X_3 ~ Noisy-MIN(leak=None, influences={'X_1': {'1': [0.46, 0.54], '2': [0.0, 1.0]}, 'X_2': {'1': [0.57, 0.43], '2': [0.0, 1.0]}})
P(X_2) = {'0': 0.29, '1': 0.18, '2': 0.53}
P(X_0) = {'0': 1.0, '1': 0.0}
Observed conditions:
Observing/Knowing that the state X_2 is eq... | {0: 0.65, 1: 0.35} | {"target_var_values": [0, 1], "bif_description": "// CANONICAL\n// variable: X_1\n// state_names: {'X_1': [0, 1, 2]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_3\n// state_names: {'X_3': [0, 1], 'X_1': [0, 1, 2], 'X_2': [0, 1, 2]}\n// type: MultilevelInfluenceModel\n// mode: MIN\n// leak: None\n// influence_tab... | Goal: Compute Observational Probability: P(X_3 | X_2=0, X_1=2)
Result: P(X_3 | X_1=2, X_2=0) = {0: 0.65, 1: 0.35} | 2 | instruct |
bayesian_association | System:
P(X_0) = {'0': 0.1, '1': 0.9}
P(X_1) = {'0': 0.1, '1': 0.9}
P(X_2) = {'0': 0.4, '1': 0.6}
Observed conditions:
Without further Observation/Knowledge of other variable.
Task: Compute probability distribution for X_1 (possible values: [0, 1]).
The answer is a Python dict mapping each value to its probability, ... | Yes | {"target_var_values": [0, 1], "bif_description": "// CANONICAL\n// variable: X_0\n// state_names: {'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_1\n// state_names: {'X_1': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_2\n// state_names: {'X_2': [0, 1]}\n// type: TabularCPD\n\nnetwork un... | Goal: Compute Observational Probability: P(X_1)
Result: P(X_1) = {0: 0.1, 1: 0.9} | 0 | verification |
bayesian_intervention | /trace System:
P(X_2) = {'0': 0.49, '1': 0.23, '2': 0.28}
P(X_3|X_2=0) = {'0': 0.49, '1': 0.51}
P(X_3|X_2=1) = {'0': 0.61, '1': 0.39}
P(X_3|X_2=2) = {'0': 0.52, '1': 0.48}
P(X_0) = {'0': 0.38, '1': 0.26, '2': 0.36}
P(X_1) = {'0': 0.42, '1': 0.57, '2': 0.01}
Observed conditions:
Doing/Imposing that the state X_0 is... | <trace>
Goal: Compute Causal Effect: P(X_1 | do(X_0=2), X_2=2, X_3=1)
Surgery: P(X_0)= Point Mass at X_0=2.
Result: P(X_1) = {0: 0.42, 1: 0.57, 2: 0.01}
</trace>
{0: 0.42, 1: 0.57, 2: 0.01} | {"target_var_values": [0, 1, 2], "bif_description": "// CANONICAL\n// variable: X_2\n// state_names: {'X_2': [0, 1, 2]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_3\n// state_names: {'X_3': [0, 1], 'X_2': [0, 1, 2]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_0\n// state_names: {'X_0': [0, 1, 2]}\n// typ... | Goal: Compute Causal Effect: P(X_1 | do(X_0=2), X_2=2, X_3=1)
Surgery: P(X_0)= Point Mass at X_0=2.
Result: P(X_1) = {0: 0.42, 1: 0.57, 2: 0.01} | 2 | cot |
bayesian_intervention | System:
P(X_0) = {'0': 0.53, '1': 0.47}
P(X_1) = {'0': 0.28, '1': 0.72}
P(X_2) = {'0': 0.99, '1': 0.01}
Observed conditions:
Doing/Imposing that the state X_0 is equal to 0. Observing/Knowing that the state X_1 is equal to 1
Task: Compute probability distribution for X_2 (possible values: [0, 1]).
The answer is a Py... | {0: 0.99, 1: 0.01} | {"target_var_values": [0, 1], "bif_description": "// CANONICAL\n// variable: X_0\n// state_names: {'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_1\n// state_names: {'X_1': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_2\n// state_names: {'X_2': [0, 1]}\n// type: TabularCPD\n\nnetwork un... | Goal: Compute Causal Effect: P(X_2 | do(X_0=0), X_1=1)
Surgery: P(X_0)= Point Mass at X_0=0.
Result: P(X_2) = {0: 0.99, 1: 0.01} | 1 | instruct |
bayesian_intervention | System:
P(X_0) = {'0': 0.46, '1': 0.54}
P(X_1|X_0=0) = {'0': 0.35, '1': 0.29, '2': 0.36}
P(X_1|X_0=1) = {'0': 0.06, '1': 0.57, '2': 0.37}
X_2 ~ Noisy-MIN(leak=None, influences={'X_0': {'1': [0.0, 0.0, 1.0]}, 'X_1': {'1': [0.01, 0.29, 0.7], '2': [0.0, 0.0, 1.0]}})
P(X_3|X_2=0) = {'0': 0.83, '1': 0.17}
P(X_3|X_2=1) ... | No | {"target_var_values": [0, 1], "bif_description": "// CANONICAL\n// variable: X_0\n// state_names: {'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_1\n// state_names: {'X_1': [0, 1, 2], 'X_0': [0, 1]}\n// type: TabularCPD\n// CANONICAL\n// variable: X_2\n// state_names: {'X_2': [0, 1, 2], 'X_0': [0, 1]... | Goal: Compute Causal Effect: P(X_3 | do(X_1=0), X_2=1, X_0=1)
Surgery: Cut incoming edges to intervened node 'X_1': ['X_0'] -> X_1; P(X_1)= Point Mass at X_1=0.
Result: P(X_3 | X_2=1) = {0: 0.86, 1: 0.14} | 2 | verification |
code_execution | Predict the printed output of the following Python code:
```python
j = 7
z = 7
z = 15 - z
print(len([7, 6, 10]))
```
The answer is the exact printed output string. | 3 | {"code": "j = 7\nz = 7\nz = 15 - z\nprint(len([7, 6, 10]))", "tinypy_level": "1.1", "_time": 0.021509408950805664, "_task": "code_execution", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "difficulty": 0.0, "min_depth": 4, "max_depth": 15, "max_attempts": 100}, "_prompt_tokens": 52, "_answe... | 0 | instruct | |
code_execution | Predict the printed output of the following Python code:
```python
e = 4
p = 14
a = len([11, 0, 10])
print(a)
```
The answer is the exact printed output string.
Answer:
[8, 8, 3]
Correct? (Yes/No) | No | {"code": "e = 4\np = 14\na = len([11, 0, 10])\nprint(a)", "tinypy_level": "1.2", "_time": 0.05066227912902832, "_task": "code_execution", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "difficulty": 1.0, "min_depth": 4, "max_depth": 16, "max_attempts": 100}, "_prompt_tokens": 49, "_answer_to... | 1 | verification | |
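The code_execution rows can be checked by actually running the embedded snippets; a minimal sketch using `exec` with captured stdout:

```python
import io
import contextlib

def run(src):
    """Execute a snippet and return everything it printed, stripped."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src, {})
    return buf.getvalue().strip()

# The two tinypy snippets shown above.
out1 = run("j = 7\nz = 7\nz = 15 - z\nprint(len([7, 6, 10]))")
out2 = run("e = 4\np = 14\na = len([11, 0, 10])\nprint(a)")
print(out1, out2)  # both print "3", so the proposed "[8, 8, 3]" is rejected
```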
conjecture_entailment | Decide if the given premises entail the conjecture (i.e., the conjecture is provable) using Superposition/Resolution/Paramodulation.
Domain: Set Theory
Premises:
- (subset(X1,X2)|~equal_sets(complement(X1),complement(intersection(X3,empty_set))))
Conjecture: `(equal_sets(X1,X2)|~equal_sets(X1,X3)|~equal_sets(X2,X3))... | False | {"hypotheses": ["(subset(X1,X2)|~equal_sets(complement(X1),complement(intersection(X3,empty_set))))"], "conjecture": "(equal_sets(X1,X2)|~equal_sets(X1,X3)|~equal_sets(X2,X3))", "correct_hypotheses": ["(equal_sets(X2,X1)|~equal_sets(X1,X2))"], "proof_depth": 1, "perturbation": 1, "useful_axioms": ["cnf(transitivity_for... | 0 | instruct | |
conjecture_entailment | Decide if the given premises entail the conjecture (i.e., the conjecture is provable) using Superposition/Resolution/Paramodulation.
Domain: Geometry
Premises:
- (incident_c(ax2_sk6(X1),X2)|~part_of(sum(underlying_curve(X1),X3),X2))
- (~meet(X1,sum(sum(X2,sum(X3,sum(X4,X5))),X6),X5))
Conjecture: `(open(X1)|incident_... | Yes | {"hypotheses": ["(incident_c(ax2_sk6(X1),X2)|~part_of(sum(underlying_curve(X1),X3),X2))", "(~meet(X1,sum(sum(X2,sum(X3,sum(X4,X5))),X6),X5))"], "conjecture": "(open(X1)|incident_c(ax2_sk6(X2),X3)|~part_of(X1,X3))", "correct_hypotheses": ["(incident_c(ax2_sk6(X1),X2)|~part_of(sum(underlying_curve(X1),X3),X2))", "(sum(X1... | 0 | verification | |
constrained_continuation | (GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
expr -> '⟨' seq '⟩'
expr -> '⟦' seq '⟧'
expr -> '⟪' seq '⟫'
(PREFIX)
( ⟦ ⟧
(TEMPLATE)
___ ] ___
Fill in the 2 blanks (___) to form a grammatical continuation of PREFIX using exactly 3 tokens.
Fixed tokens must ... | [ ] ) | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'\nexpr -> '\u27e8' seq '\u27e9'\nexpr -> '\u27e6' seq '\u27e7'\nexpr -> '\u27ea' seq '\u27eb'", "k": 3, "prefix": ["(", "\u27e6", "\u27e7"], "hints": {"1": "]"}, "template": "___ ] ___", "blanks": [0, 2], "n_blan... | 1 | instruct | |
constrained_continuation | (GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
expr -> '⟨' seq '⟩'
expr -> '⟦' seq '⟧'
expr -> '⟪' seq '⟫'
(PREFIX)
< ⟨ ⟩ [ ]
(TEMPLATE)
___ ___ ]
Fill in the 2 blanks (___) to form a grammatical continuation of PREFIX using exactly 3 tokens.
Fixed tokens m... | Yes | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'\nexpr -> '\u27e8' seq '\u27e9'\nexpr -> '\u27e6' seq '\u27e7'\nexpr -> '\u27ea' seq '\u27eb'", "k": 3, "prefix": ["<", "\u27e8", "\u27e9", "[", "]"], "hints": {"2": "]"}, "template": "___ ___ ]", "blanks": [0, 1... | 1 | verification | |
constraint_satisfaction | Variables/domains:
- 0 <= x0 <= 2
- 0 <= x1 <= 1
Constraints:
1. 3*x0 <= 0
2. 3*x1 != 3
3. -3*x0 >= -4
Enumerate ALL satisfying assignments in variable order [x0, x1].
The answer is a Python list of lists of ints, sorted lexicographically, or UNSAT if no assignment exists.
| [[0, 0]] | {"domains": [2, 1], "constraints": [{"type": "lin", "idx": [0], "coeffs": [3], "op": "<=", "rhs": 0}, {"type": "lin", "idx": [1], "coeffs": [3], "op": "!=", "rhs": 3}, {"type": "lin", "idx": [0], "coeffs": [-3], "op": ">=", "rhs": -4}], "solution": [[0, 0]], "solve_mode": "all", "structure_mode": "random", "instance": ... | 0 | instruct | |
constraint_satisfaction | Variables/domains:
- 0 <= x0 <= 1
- 0 <= x1 <= 2
- 0 <= x2 <= 2
- 0 <= x3 <= 3
Constraints:
1. -x1 + 2*x2 - 2*x3 >= -6
2. -x3 == -2
3. 2*x1 - x3 <= 3
4. (2*x0 + x1 + 3*x3) % 3 == 0
5. -x2 + 2*x3 == 3
6. (x2) % 3 == 1
Enumerate ALL satisfying assignments in variable order [x0, x1, x2, x3].
The answer is a Python list ... | No | {"domains": [1, 2, 2, 3], "constraints": [{"type": "lin", "idx": [1, 2, 3], "coeffs": [-1, 2, -2], "op": ">=", "rhs": -6}, {"type": "lin", "idx": [3], "coeffs": [-1], "op": "==", "rhs": -2}, {"type": "lin", "idx": [1, 3], "coeffs": [2, -1], "op": "<=", "rhs": 3}, {"type": "mod", "idx": [0, 1, 3], "coeffs": [2, 1, 3], "... | 3 | verification | |
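These instances are small enough to brute-force; a sketch enumerating the first constraint_satisfaction row's CSP (0 <= x0 <= 2, 0 <= x1 <= 1) in lexicographic order:

```python
from itertools import product

# Enumerate all assignments satisfying the first row's three constraints.
sols = [[x0, x1]
        for x0, x1 in product(range(3), range(2))  # lexicographic order
        if 3 * x0 <= 0 and 3 * x1 != 3 and -3 * x0 >= -4]
print(sols)  # matches the listed answer [[0, 0]]
```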
continuation | /trace List all valid next tokens for this prefix. The answer is the list of valid tokens sorted alphabetically and separated by |, with STOP at the end if the prefix forms a complete string.
(GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
expr -> '⟨' seq '⟩'
e... | <trace>
expr→( • seq )⇒{(,),<,[,⟦,⟨,⟪}
</trace>
(|)|<|[|⟦|⟨|⟪ | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'\nexpr -> '\u27e8' seq '\u27e9'\nexpr -> '\u27e6' seq '\u27e7'\nexpr -> '\u27ea' seq '\u27eb'", "prefix": ["(", ")", "(", ")", "("], "depth": 5, "cot": "expr\u2192( \u2022 seq )\u21d2{(,),<,[,\u27e6,\u27e8,\u27ea... | expr→( • seq )⇒{(,),<,[,⟦,⟨,⟪} | 0 | cot |
continuation | List all valid next tokens for this prefix. The answer is the list of valid tokens sorted alphabetically and separated by |, with STOP at the end if the prefix forms a complete string.
(GRAMMAR)
decl -> decl_simple
root -> decl '.'
is -> 'is'
start -> root
det_sg_a -> 'a'
there -> 'there'
n_sg_c -> 'dog'
decl_simple ->... | dog | {"g": "decl -> decl_simple\nroot -> decl '.'\nis -> 'is'\nstart -> root\ndet_sg_a -> 'a'\nthere -> 'there'\nn_sg_c -> 'dog'\ndecl_simple -> there is det_sg_a n_sg_c", "prefix": ["there", "is", "a"], "depth": 3, "cot": "decl_simple\u2192there is det_sg_a \u2022 n_sg_c\u21d2dog", "_time": 0.001918792724609375, "_task": "... | decl_simple→there is det_sg_a • n_sg_c⇒dog | 1 | instruct |
continuation | List all valid next tokens for this prefix. The answer is the list of valid tokens sorted alphabetically and separated by |, with STOP at the end if the prefix forms a complete string.
(GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
(PREFIX)
( ) (
Answer:
(|)|<... | Yes | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'", "prefix": ["(", ")", "("], "depth": 3, "cot": "expr\u2192( \u2022 seq )\u21d2{(,),<,[}", "_time": 0.002197742462158203, "_task": "continuation", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "si... | expr→( • seq )⇒{(,),<,[} | 0 | verification |
count_elements | List: [1, 4, 12, 9, 4, 9, 19, 3, 4, 10]
How many times does 1 appear? The answer is a number. | 1 | {"elements": [1, 4, 12, 9, 4, 9, 19, 3, 4, 10], "target": 1, "_time": 0.0002925395965576172, "_task": "count_elements", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "max_count": 3, "list_size": 10, "domain_size": 20}, "_prompt_tokens": 46, "_answer_tokens": 1} | 0 | instruct | |
count_elements | List: ['two', 'eleven', 'fifteen', 'five', 'eighteen', 'thirteen', 'twelve', 'sixteen', 'one', 'sixteen', 'twenty', 'four']
How many times does 'thirteen' appear? The answer is a number.
Answer:
2
Correct? (Yes/No) | No | {"elements": ["two", "eleven", "fifteen", "five", "eighteen", "thirteen", "twelve", "sixteen", "one", "sixteen", "twenty", "four"], "target": "thirteen", "_time": 0.0005156993865966797, "_task": "count_elements", "_level": 2, "_config": {"c": 1.0, "level": 2, "seed": null, "size": null, "max_count": 5, "list_size": 12,... | 2 | verification | |
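Both count_elements rows reduce to `list.count`:

```python
# Re-check the two count_elements rows above.
nums = [1, 4, 12, 9, 4, 9, 19, 3, 4, 10]
words = ['two', 'eleven', 'fifteen', 'five', 'eighteen', 'thirteen',
         'twelve', 'sixteen', 'one', 'sixteen', 'twenty', 'four']
print(nums.count(1))            # 1, the first row's answer
print(words.count('thirteen'))  # 1, so the proposed answer "2" is rejected
```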
diff_patching | Apply the following Unified Diff to the text.
Original Text (Version 610750f):
1 | Voice woman up tonight picture choice
2 | Lot not cultural perform throughout budget five
3 | Goal behind analysis bill section cause
4 | Respond none somebody group benefit design
Diff (610750f -> cbbf163):
@@ -1,4 +1,6 @@... | Voice woman up tonight picture choice
Lot not cultural perform throughout budget five
Goal behind analysis bill section cause
Scientist commercial line certain decision suggest church
Respond none somebody group benefit design
Congress peace agency artist during feel | {"src_text": "1 | Voice woman up tonight picture choice\n2 | Lot not cultural perform throughout budget five\n3 | Goal behind analysis bill section cause\n4 | Respond none somebody group benefit design", "src_id": "610750f", "tgt_id": "cbbf163", "diff": "@@ -1,4 +1,6 @@\n Voice woman up tonight picture choi... | 1 | instruct | |
diff_patching | Apply the following Unified Diff to the text.
Original Text (Version e7e9d81):
1 | Mean help civil sea health
2 | Thought help authority option positive group wish
3 | Most image let speech
4 | Floor expert mouth figure they
5 | Option local summer or
6 | Could couple statement left
7 | Build left... | Yes | {"src_text": "1 | Mean help civil sea health\n2 | Thought help authority option positive group wish\n3 | Most image let speech\n4 | Floor expert mouth figure they\n5 | Option local summer or\n6 | Could couple statement left\n7 | Build left final next\n8 | Cut walk study art today use", "src_id":... | 3 | verification | |
diff_prediction | Below is the version history of a file.
Version 6b3f8a6:
1 | Institution eat public medical sure business
2 | He whether help ok first economic collection
3 | Smile score way follow style it
4 | Traditional type wear friend
5 | Necessary body condition series now city may
Version cea4529:
1 | He whe... | @@ -1,3 +1,4 @@
+Institution eat public medical sure business
He whether help ok first economic collection
Smile score way follow style it
Traditional type wear friend | {"history": "Version 6b3f8a6:\n1 | Institution eat public medical sure business\n2 | He whether help ok first economic collection\n3 | Smile score way follow style it\n4 | Traditional type wear friend\n5 | Necessary body condition series now city may\n\nVersion cea4529:\n1 | He whether help ok first e... | 0 | instruct | |
diff_prediction | Below is the version history of a file.
Version a407bd8:
1 | For customer threat list where government nation
2 | Life rather type the operation knowledge later
3 | This across theory energy painting
4 | Cover side especially when
5 | Brother understand drive picture measure
Version fb5aa48:
1 | For... | Yes | {"history": "Version a407bd8:\n1 | For customer threat list where government nation\n2 | Life rather type the operation knowledge later\n3 | This across theory energy painting\n4 | Cover side especially when\n5 | Brother understand drive picture measure\n\nVersion fb5aa48:\n1 | For customer threat lis... | 0 | verification | |
equation_system | /trace Solve the following system of equations for the variable 'X1'.
System:
X2 + 23 = 0
The answer is the numerical value for X1, or 'No solution' / 'Multiple solutions' if a unique numerical solution does not exist. | <trace>
1. Forward:
2. Backward:
X2 = -23
</trace>
Multiple solutions | {"equations": ["X2 + 23 = 0"], "query_variable": "X1", "full_solution_map": null, "case": "underdetermined", "cot": "1. Forward:\n\n2. Backward:\nX2 = -23", "_time": 0.009702920913696289, "_task": "equation_system", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "num_vars": 2, "obfuscation_s... | 1. Forward:
2. Backward:
X2 = -23 | 0 | cot |
equation_system | Solve the following system of equations for the variable 'X1'.
System:
5*X1 + 4*X2 - 177 = 0
X1 + X2 - 39 = 0
-X1 - X2 + X3 + 67 = 0
-7*X1 - 6*X2 + X4 + 266 = 0
The answer is the numerical value for X1, or 'No solution' / 'Multiple solutions' if a unique numerical solution does not exist. | 21 | {"equations": ["5*X1 + 4*X2 - 177 = 0", "X1 + X2 - 39 = 0", "-X1 - X2 + X3 + 67 = 0", "-7*X1 - 6*X2 + X4 + 266 = 0"], "query_variable": "X1", "full_solution_map": {"X1": 21, "X2": 18, "X3": -28, "X4": -11}, "case": "unique", "cot": "1. Forward:\nR2 -= 0.2*R1\nR3 -= -0.2*R1\nR4 -= -1.4*R1\nR3 -= -1*R2\nR4 -= -2*R2\n\n2.... | 1. Forward:
R2 -= 0.2*R1
R3 -= -0.2*R1
R4 -= -1.4*R1
R3 -= -1*R2
R4 -= -2*R2
2. Backward:
X4 = -11
X3 = -28
X2 = 18
X1 = 21 | 2 | instruct |
equation_system | Solve the following system of equations for the variable 'X2'.
System:
X1 - 17 = 0
X2 - 20 = 0
The answer is the numerical value for X2, or 'No solution' / 'Multiple solutions' if a unique numerical solution does not exist.
Answer:
20
Correct? (Yes/No) | Yes | {"equations": ["X1 - 17 = 0", "X2 - 20 = 0"], "query_variable": "X2", "full_solution_map": {"X1": 17, "X2": 20}, "case": "unique", "cot": "1. Forward:\n\n2. Backward:\nX2 = 20\nX1 = 17", "_time": 0.01401829719543457, "_task": "equation_system", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, ... | 1. Forward:
2. Backward:
X2 = 20
X1 = 17 | 0 | verification |
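The unique-solution case can be verified by back-substitution; a sketch plugging the reported solution map of the four-equation system above into each left-hand side:

```python
# Back-substitute the reported solution {X1: 21, X2: 18, X3: -28, X4: -11}.
X1, X2, X3, X4 = 21, 18, -28, -11
residuals = [
    5 * X1 + 4 * X2 - 177,
    X1 + X2 - 39,
    -X1 - X2 + X3 + 67,
    -7 * X1 - 6 * X2 + X4 + 266,
]
print(residuals)  # all zero: the solution satisfies every equation
```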
evidence_retrieval | /trace Premise:
[0] Mary is the only person in the room.
[1] if “Mary designs and sews custom cosplay costumes for conventions” then “no old person in the room is quiet”
[2] if someone is not collects rare sneakers then he/she is allergic to anything
[3] Paul writes a travel blog
[4] everyone in the room who is a quiet... | <trace>
0. [input 0] room(mary) & ! [X0] : (room(X0) => X0 = mary)
1. [input 6] ! [X0] : (room(X0) => (is_a_dedicated_advocate_for_digital_privacy_and_encryption(X0) & is_an_expert_in_identifying_and_foraging_wild_edible_plants(X0)))
2. [assumption] is_not_an_expert_in_identifying_and_foraging_wild_edible_plants(mary)
... | {"verbalize_seed": 858628, "proof": {"proof": "% Running in auto input_syntax mode. Trying TPTP\n% Refutation found. Thanks to Tanya!\n% SZS status Unsatisfiable for tmp24wjycmw\n% SZS output start Proof for tmp24wjycmw\n1. room(mary) & ! [X0] : (room(X0) => X0 = mary) [input(axiom) 0]\n7. ! [X0] : (room(X0) => (predh(... | 0. [input 0] room(mary) & ! [X0] : (room(X0) => X0 = mary)
1. [input 6] ! [X0] : (room(X0) => (is_a_dedicated_advocate_for_digital_privacy_and_encryption(X0) & is_an_expert_in_identifying_and_foraging_wild_edible_plants(X0)))
2. [assumption] is_not_an_expert_in_identifying_and_foraging_wild_edible_plants(mary)
3. [pure... | 1 | cot |
evidence_retrieval | Premise:
[0] Mary is the only person in the room.
[1] all quiet people in the room are quiet
[2] Paul is an active member of a local robotics club
[3] “The lighthouse on Cape Sorrow does not glow green.” and “A square cloud is over Silver Lake.”
[4] everyone in the room either is not is dedicated to sustainable living ... | [10] | {"verbalize_seed": 615936, "proof": {"proof": "% Running in auto input_syntax mode. Trying TPTP\n% Refutation found. Thanks to Tanya!\n% SZS status Unsatisfiable for tmp_a305y0t\n% SZS output start Proof for tmp_a305y0t\n11. predf(mary) [input(axiom) 10]\n22. ~predf(mary) [input(axiom) hyp]\n54. predf(mary) [cnf transf... | 0. [input 10] is_an_enthusiastic_bird_watcher_who_travels_for_rare_sightings(mary)
1. [assumption] is_not_an_enthusiastic_bird_watcher_who_travels_for_rare_sightings(mary)
2. [forward 1, 2] $false | 2 | instruct |
evidence_retrieval | Premise:
[0] Mary is the only person in the room.
[1] everyone anywhere who is old is not old
[2] Paul hosts a YouTube channel dedicated to art tutorials
[3] someone who is not owns a significant collection of rare gemstones and minerals likes someone who owns an extensive assortment of vintage comic book memorabilia
[... | Yes | {"verbalize_seed": 859792, "proof": {"proof": "% Running in auto input_syntax mode. Trying TPTP\n% Refutation found. Thanks to Tanya!\n% SZS status Unsatisfiable for tmpb1dorusd\n% SZS output start Proof for tmpb1dorusd\n5. quiet(paul) [input(axiom) 4]\n7. ~quiet(paul) [input(axiom) hyp]\n17. quiet(paul) [cnf transform... | 0. [input 4] quiet(paul)
1. [assumption] ~quiet(paul)
2. [forward 1, 2] $false | 0 | verification |
graph_dependencies | Consider the dependency graph:
Node 0 depends on: 6, 7.
Node 1 has no dependencies.
Node 2 has no dependencies.
Node 3 depends on: 8.
Node 4 has no dependencies.
Node 5 depends on: 1, 4, 6.
Node 6 depends on: 2, 7.
Node 7 has no dependencies.
Node 8 depends on: 7.
List all prerequisites of node 3 (recursively), leave... | [7, 8] | {"graph_description": "Node 0 depends on: 6, 7.\nNode 1 has no dependencies.\nNode 2 has no dependencies.\nNode 3 depends on: 8.\nNode 4 has no dependencies.\nNode 5 depends on: 1, 4, 6.\nNode 6 depends on: 2, 7.\nNode 7 has no dependencies.\nNode 8 depends on: 7.", "query": 3, "nodes": [4, 7, 2, 1, 6, 5, 0, 8, 3], "ed... | 3 | instruct | |
graph_dependencies | Consider the dependency graph:
Dependencies (each key lists its prerequisites): {0: [4], 1: [0, 4], 2: [0], 3: [0, 4], 4: [], 5: [2]}
List all prerequisites of node 3 (recursively), leaves first.
Do not include the query node itself.
If A depends on B and both appear in your answer, B must appear before A.
The answer... | No | {"graph_description": "Dependencies (each key lists its prerequisites): {0: [4], 1: [0, 4], 2: [0], 3: [0, 4], 4: [], 5: [2]}", "query": 3, "nodes": [4, 0, 1, 3, 2, 5], "edges": [[4, 0], [4, 1], [4, 3], [0, 1], [0, 3], [0, 2], [2, 5]], "_time": 0.0006592273712158203, "_task": "graph_dependencies", "_level": 0, "_config... | 0 | verification | |
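The graph_dependencies task is a post-order DFS over the ancestor set; a minimal sketch (the helper name `prerequisites` is illustrative, not from the dataset):

```python
# Recursive prerequisites, leaves first; deps maps node -> direct prerequisites.
def prerequisites(deps, node):
    order, seen = [], set()
    def visit(n):
        for p in deps.get(n, []):
            if p not in seen:
                seen.add(p)
                visit(p)       # emit p's own prerequisites first
                order.append(p)
    visit(node)
    return order

deps = {0: [6, 7], 1: [], 2: [], 3: [8], 4: [],
        5: [1, 4, 6], 6: [2, 7], 7: [], 8: [7]}
print(prerequisites(deps, 3))  # [7, 8], matching the first row
```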
graph_isomorphism | Consider two graphs described below.
Graph A:
graph { 0--10; 0--21; 0--26; 1--10; 2--25; 3--12; 4--8; 4--13; 4--30; 5--12; 6--16; 7--8; 7--31; 7--36; 8--23; 9--23; 10--22; 10--39; 11--17; 11--29; 12--18; 12--32; 14--27; 14--34; 15--26; 16--28; 17--19; 17--39; 18--34; 20--28; 21--27; 21--28; 24--34; 24--37; 25--30; 25-... | True | {"graph1_description": "graph { 0--10; 0--21; 0--26; 1--10; 2--25; 3--12; 4--8; 4--13; 4--30; 5--12; 6--16; 7--8; 7--31; 7--36; 8--23; 9--23; 10--22; 10--39; 11--17; 11--29; 12--18; 12--32; 14--27; 14--34; 15--26; 16--28; 17--19; 17--39; 18--34; 20--28; 21--27; 21--28; 24--34; 24--37; 25--30; 25--39; 31--38; 32--33; 35... | 3 | instruct | |
graph_isomorphism | Consider two graphs described below.
Graph A:
Nodes: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Matrix:
[0, 0, 0, 0, 0, 0, 1, 1, 0, 0]
[0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 1, 0, 1, 0]
[0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
[1, 0, 0, ... | Yes | {"graph1_description": "Nodes: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\nMatrix:\n[0, 0, 0, 0, 0, 0, 1, 1, 0, 0]\n[0, 0, 0, 1, 1, 1, 0, 0, 0, 0]\n[0, 0, 0, 0, 0, 0, 1, 0, 1, 0]\n[0, 1, 0, 0, 0, 0, 0, 0, 1, 0]\n[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]\n[0, 1, 0, 0, 0, 0, 0, 0, 0, 0]\n[1, 0, 1, 0, 0, 0, 0, 0, 0, 0]\n[1, 0, 0, 0, 0, 0, 0, 0,... | 1 | verification | |
graph_node_centrality | Consider the following social network graph:
Node 0 connects to 1, 2, 3. Node 1 connects to 0. Node 2 connects to 0, 3, 4. Node 3 connects to 0, 2, 4. Node 4 connects to 2, 3.
Based on the number of connections, identify all nodes that are the most central (i.e., have the highest degree centrality). There may be more... | [0, 2, 3] | {"graph_description": "Node 0 connects to 1, 2, 3. Node 1 connects to 0. Node 2 connects to 0, 3, 4. Node 3 connects to 0, 2, 4. Node 4 connects to 2, 3.", "_time": 0.00031256675720214844, "_task": "graph_node_centrality", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "num_nodes": 5, "no_so... | 0 | instruct | |
graph_node_centrality | Consider the following social network graph:
{0: [7, 17, 23], 1: [2, 6, 7], 2: [1, 21, 36], 3: [14, 22, 26], 4: [13, 21, 31], 5: [11, 17, 26], 6: [1, 15, 28], 7: [0, 1, 30], 8: [28, 32, 38], 9: [12, 17, 25], 10: [25, 31, 36], 11: [5, 35, 37], 12: [9, 20, 29], 13: [4, 26, 27], 14: [3, 20, 37], 15: [6, 27, 38], 16: [24,... | No | {"graph_description": "{0: [7, 17, 23], 1: [2, 6, 7], 2: [1, 21, 36], 3: [14, 22, 26], 4: [13, 21, 31], 5: [11, 17, 26], 6: [1, 15, 28], 7: [0, 1, 30], 8: [28, 32, 38], 9: [12, 17, 25], 10: [25, 31, 36], 11: [5, 35, 37], 12: [9, 20, 29], 13: [4, 26, 27], 14: [3, 20, 37], 15: [6, 27, 38], 16: [24, 30, 39], 17: [0, 5, 9]... | 3 | verification | |
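Degree centrality here is just the neighbor-list length; a sketch for the first centrality row:

```python
# Nodes with the highest degree in the first graph_node_centrality row.
adj = {0: [1, 2, 3], 1: [0], 2: [0, 3, 4], 3: [0, 2, 4], 4: [2, 3]}
best = max(len(nbrs) for nbrs in adj.values())
central = sorted(n for n, nbrs in adj.items() if len(nbrs) == best)
print(central)  # [0, 2, 3], matching the listed answer
```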
graph_pathfinding | /trace Consider the graph:
Nodes [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] and edges: (0, 1), (0, 2), (0, 3), (0, 15), (0, 17), (1, 5), (1, 7), (1, 10), (1, 11), (1, 13), (1, 14), (1, 15), (1, 16), (1, 18), (2, 5), (2, 7), (2, 8), (2, 9), (2, 10), (2, 11), (2, 14), (2, 16), (2, 18), (2, 19... | <trace>
Goal: Shortest path from 17 to 12 using BFS.
Initialize Queue: [17]
Pop 17. Current Path: [17]
-> Found new neighbors: [0, 3, 14, 15, 18]
-> Add to queue. Visited set updated.
-> Queue is now: [0, 3, 14, 15, 18]
Pop 0. Current Path: [17, 0]
-> Found new neighbors: [1, 2]
-> Add to queue. Visited set... | {"graph_description": "Nodes [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] and edges: (0, 1), (0, 2), (0, 3), (0, 15), (0, 17), (1, 5), (1, 7), (1, 10), (1, 11), (1, 13), (1, 14), (1, 15), (1, 16), (1, 18), (2, 5), (2, 7), (2, 8), (2, 9), (2, 10), (2, 11), (2, 14), (2, 16), (2, 18), (2, 19), (3... | Goal: Shortest path from 17 to 12 using BFS.
Initialize Queue: [17]
Pop 17. Current Path: [17]
-> Found new neighbors: [0, 3, 14, 15, 18]
-> Add to queue. Visited set updated.
-> Queue is now: [0, 3, 14, 15, 18]
Pop 0. Current Path: [17, 0]
-> Found new neighbors: [1, 2]
-> Add to queue. Visited set updated... | 2 | cot |
graph_pathfinding | Consider the graph:
Edges: 0-1, 0-2, 0-4, 1-2, 1-4, 3-4
Find the lexicographically smallest shortest path from Node 4 to Node 0.
If no path exists, answer `None`.
The answer is a Python list of nodes or `None`. | [4, 0] | {"graph_description": "Edges: 0-1, 0-2, 0-4, 1-2, 1-4, 3-4", "start_node": 4, "end_node": 0, "nodes": [0, 1, 2, 3, 4], "edges": [[0, 1], [0, 2], [0, 4], [1, 2], [1, 4], [3, 4]], "optimal_length": 2, "cot": "Goal: Shortest path from 4 to 0 using BFS.\nInitialize Queue: [4]\n\nPop 4. Current Path: [4]\n -> Found new nei... | Goal: Shortest path from 4 to 0 using BFS.
Initialize Queue: [4]
Pop 4. Current Path: [4]
-> Found new neighbors: [0, 1, 3]
-> Add to queue. Visited set updated.
-> Queue is now: [0, 1, 3]
Pop 0. Current Path: [4, 0]
Target 0 found! Search Complete. | 0 | instruct |
graph_pathfinding | Consider the graph:
Nodes: [0, 1, 2, 3, 4]
Matrix:
[0, 0, 1, 1, 1]
[0, 0, 1, 1, 0]
[1, 1, 0, 0, 0]
[1, 1, 0, 0, 1]
[1, 0, 0, 1, 0]
Find the lexicographically smallest shortest path from Node 4 to Node 2.
If no path exists, answer `None`.
The answer is a Python list of nodes or `None`.
Answer:
[9, 1, 2, 0, 16]
Correct... | No | {"graph_description": "Nodes: [0, 1, 2, 3, 4]\nMatrix:\n[0, 0, 1, 1, 1]\n[0, 0, 1, 1, 0]\n[1, 1, 0, 0, 0]\n[1, 1, 0, 0, 1]\n[1, 0, 0, 1, 0]", "start_node": 4, "end_node": 2, "nodes": [0, 1, 2, 3, 4], "edges": [[0, 2], [0, 3], [0, 4], [1, 2], [1, 3], [3, 4]], "optimal_length": 3, "cot": "Goal: Shortest path from 4 to 2 ... | Goal: Shortest path from 4 to 2 using BFS.
Initialize Queue: [4]
Pop 4. Current Path: [4]
-> Found new neighbors: [0, 3]
-> Add to queue. Visited set updated.
-> Queue is now: [0, 3]
Pop 0. Current Path: [4, 0]
-> Found new neighbors: [2]
-> Add to queue. Visited set updated.
-> Queue is now: [3, 2]
Pop ... | 0 | verification |
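The traces above follow plain BFS with neighbors expanded in ascending order, which yields the lexicographically smallest shortest path; a self-contained sketch under that reading:

```python
from collections import deque

def shortest_path(edges, start, goal):
    """BFS with sorted neighbor expansion; returns the lexicographically
    smallest shortest path from start to goal, or None."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for n in sorted(adj.get(path[-1], ())):
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None

# First instruct row: edges 0-1, 0-2, 0-4, 1-2, 1-4, 3-4, from 4 to 0.
print(shortest_path([(0, 1), (0, 2), (0, 4), (1, 2), (1, 4), (3, 4)], 4, 0))
```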
graph_successors | Consider the directed graph:
Edges: 0->3, 6->4, 5->0, 4->5, 2->1, 1->6, 3->2
Queries: [(0, 3), (5, 1)]
Each pair (x, k) asks for the k-th successor of x.
The answer is a Python list of integers in query order. | [1, 0] | {"graph_description": "Edges: 0->3, 6->4, 5->0, 4->5, 2->1, 1->6, 3->2", "queries": [[0, 3], [5, 1]], "nodes": [0, 1, 2, 3, 4, 5, 6], "edges": [[0, 3], [3, 2], [1, 6], [6, 4], [2, 1], [4, 5], [5, 0]], "_time": 0.0008254051208496094, "_task": "graph_successors", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": nul... | 1 | instruct | |
graph_successors | Consider the directed graph:
Edges: 2->0, 5->5, 3->2, 1->1, 0->3, 4->4
Queries: [(3, 2)]
Each pair (x, k) asks for the k-th successor of x.
The answer is a Python list of integers in query order.
Answer:
[0]
Correct? (Yes/No) | Yes | {"graph_description": "Edges: 2->0, 5->5, 3->2, 1->1, 0->3, 4->4", "queries": [[3, 2]], "nodes": [0, 1, 2, 3, 4, 5], "edges": [[0, 3], [3, 2], [1, 1], [2, 0], [4, 4], [5, 5]], "_time": 0.001256704330444336, "_task": "graph_successors", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "num_node... | 0 | verification | |
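The graph_successors queries above reduce to repeated lookup in a successor map. A minimal sketch, assuming (as the rows show) that each node has exactly one outgoing edge; the helper name `kth_successor` is hypothetical.

```python
def kth_successor(edges, x, k):
    """Follow the single outgoing edge k times starting from node x."""
    succ = dict(edges)  # node -> its unique successor
    for _ in range(k):
        x = succ[x]
    return x

# Edges and queries from the first graph_successors row above.
edges = [(0, 3), (6, 4), (5, 0), (4, 5), (2, 1), (1, 6), (3, 2)]
answers = [kth_successor(edges, x, k) for x, k in [(0, 3), (5, 1)]]
```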
lambda_reduction | Reduce the following untyped λ-term to β-normal form.
Syntax: `\x.body` denotes λx.body; application is left-associative juxtaposition; free identifiers are treated as constants.
Term: (\v0.(((\_1.((\_2.c) (d c))) (\v0.d)) ((d v0) ((v0 (\v1.((\_3.(_3 c)) (\_0.c)))) ((v0 v0) b)))))
The answer is the β-normal form (com... | (\v0.(c ((d v0) ((v0 (\v1.c)) ((v0 v0) b))))) | {"term": "(\\v0.(((\\_1.((\\_2.c) (d c))) (\\v0.d)) ((d v0) ((v0 (\\v1.((\\_3.(_3 c)) (\\_0.c)))) ((v0 v0) b)))))", "normal_form": "(\\v0.(c ((d v0) ((v0 (\\v1.c)) ((v0 v0) b)))))", "_time": 0.0012698173522949219, "_task": "lambda_reduction", "_level": 3, "_config": {"c": 1.0, "level": 3, "seed": null, "size": null, "n... | 3 | instruct | |
lambda_reduction | Reduce the following untyped λ-term to β-normal form.
Syntax: `\x.body` denotes λx.body; application is left-associative juxtaposition; free identifiers are treated as constants.
Term: (b (((\_1.b) (a (c a))) ((\_3.((\_0._0) (\v0.((v0 ((((\_2.v0) (c ((d _3) b))) v0) v0)) (\v1.v1))))) a)))
The answer is the β-normal f... | Yes | {"term": "(b (((\\_1.b) (a (c a))) ((\\_3.((\\_0._0) (\\v0.((v0 ((((\\_2.v0) (c ((d _3) b))) v0) v0)) (\\v1.v1))))) a)))", "normal_form": "(b (b (\\v0.((v0 ((v0 v0) v0)) (\\v1.v1)))))", "_time": 0.0009398460388183594, "_task": "lambda_reduction", "_level": 3, "_config": {"c": 1.0, "level": 3, "seed": null, "size": null... | 3 | verification | |
lexical_knowledge | Context: WordNet (relation holds for any valid noun sense).
Select hypernym(detective)
From: [inspector, potter, scot, presbyterian, coordinator, policeman]
The answer is one word. | policeman | {"expr": "hypernym(detective)", "answer_type": "word", "candidates": ["inspector", "potter", "scot", "presbyterian", "coordinator", "policeman"], "gold_synsets": ["policeman.n.01"], "_time": 0.00310516357421875, "_task": "lexical_knowledge", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "n_... | 0 | instruct | |
lexical_knowledge | Context: WordNet (relation holds for any valid noun sense).
Select hypernym(hypernym(accident))
From: [genesis, misfortune, epiphany, outbreak, rattling, flashing]
The answer is one word.
Answer:
misfortune
Correct? (Yes/No) | Yes | {"expr": "hypernym(hypernym(accident))", "answer_type": "word", "candidates": ["genesis", "misfortune", "epiphany", "outbreak", "rattling", "flashing"], "gold_synsets": ["misfortune.n.01"], "_time": 0.0033690929412841797, "_task": "lexical_knowledge", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size":... | 0 | verification | |
locate_error | /trace (GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
(STRING)
( [ ] [ ] ) ( ) )
The answer is the shortest contiguous span from STRING that ends at the first invalid token and occurs only once in STRING.
Mark the invalid token as >>token<<.
If the token alo... | <trace>
( ✓
[ ✓
] ✓
[ ✓
] ✓
) ✓
( ✓
) ✓
) ∉ {(,<,[}
Answer: ) >>)<<
</trace>
) >>)<< | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'", "tokens": ["(", "[", "]", "[", "]", ")", "(", ")", ")"], "error_index": 8, "cot": "( \u2713\n[ \u2713\n] \u2713\n[ \u2713\n] \u2713\n) \u2713\n( \u2713\n) \u2713\n) \u2209 {(,<,[}\nAnswer: ) >>)<<", "_time": 0... | ( ✓
[ ✓
] ✓
[ ✓
] ✓
) ✓
( ✓
) ✓
) ∉ {(,<,[}
Answer: ) >>)<< | 3 | cot |
locate_error | (GRAMMAR)
discourse -> decl ',' conj decl
decl -> decl_simple ',' conj decl_simple
pro_pl_subj -> 'they'
start -> root
opt_adv ->
v_intr_base -> 'arrive'
conj -> 'and'
decl_simple -> np_pl_subj vp_pl
root -> discourse '.'
vp_pl -> v_intr_base opt_adv
np_pl_subj -> pro_pl_subj
(STRING)
they , and they arrive , and the... | they >>,<< | {"g": "discourse -> decl ',' conj decl\ndecl -> decl_simple ',' conj decl_simple\npro_pl_subj -> 'they'\nstart -> root\nopt_adv -> \nv_intr_base -> 'arrive'\nconj -> 'and'\ndecl_simple -> np_pl_subj vp_pl\nroot -> discourse '.'\nvp_pl -> v_intr_base opt_adv\nnp_pl_subj -> pro_pl_subj", "tokens": ["they", ",", "and", "t... | they ✓
, ∉ {arrive}
Answer: they >>,<< | 2 | instruct |
locate_error | (GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
(STRING)
< > [ < > ] [ ] < >
The answer is the shortest contiguous span from STRING that ends at the first invalid token and occurs only once in STRING.
Mark the invalid token as >>token<<.
If the token alone is... | Yes | {"g": "start -> seq\nseq -> \nseq -> expr seq\nexpr -> '(' seq ')'\nexpr -> '[' seq ']'\nexpr -> '<' seq '>'", "tokens": ["<", ">", "[", "<", ">", "]", "[", "]", "<", ">"], "error_index": -1, "cot": "< \u2713\n> \u2713\n[ \u2713\n< \u2713\n> \u2713\n] \u2713\n[ \u2713\n] \u2713\n< \u2713\n> \u2713", "_time": 0.18376064... | < ✓
> ✓
[ ✓
< ✓
> ✓
] ✓
[ ✓
] ✓
< ✓
> ✓ | 1 | verification |
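The locate_error traces above check tokens against the balanced-bracket grammar one at a time. A minimal sketch of that stack discipline, assuming the three bracket pairs used in these rows; note it only flags a mismatch mid-stream (an unclosed suffix is not reported), which matches how these traces mark the first invalid token.

```python
# Bracket alphabet used by the seq/expr grammar in the rows above.
PAIRS = {"(": ")", "[": "]", "<": ">"}

def first_invalid(tokens):
    """Index of the first token that cannot continue a valid parse, or -1."""
    stack = []
    for i, tok in enumerate(tokens):
        if tok in PAIRS:
            stack.append(PAIRS[tok])      # remember the required closer
        elif stack and tok == stack[-1]:
            stack.pop()                   # matching closer found
        else:
            return i                      # closer with no matching opener
    return -1
```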
logic_nli | /trace Premise:
there is a room.
if “Mary is old” then “everyone in the room either is not uses contact lenses or is not old or both” otherwise “Paul and Mary are respectively old and quiet”
Paul is not old
everyone in the room is not quiet if they is dedicated to sustainable living and zero-waste practices and vice ve... | <trace>
0. [input 2] ~old(paul)
1. [assumption] old(paul) & old(mary)
2. [cnf 2] old(paul)
3. [forward 1, 3] $false
</trace>
contradiction | {"verbalize_seed": 168046, "proof": {"proof": "% Running in auto input_syntax mode. Trying TPTP\n% Refutation found. Thanks to Tanya!\n% SZS status Unsatisfiable for tmpu1sb7i_x\n% SZS output start Proof for tmpu1sb7i_x\n3. ~old(paul) [input(axiom) 2]\n7. old(paul) & old(mary) [input(axiom) hyp]\n23. ~old(paul) [cnf tr... | 0. [input 2] ~old(paul)
1. [assumption] old(paul) & old(mary)
2. [cnf 2] old(paul)
3. [forward 1, 3] $false | 0 | cot |
logic_nli | Premise:
Mary is the only person in the room.
Mary participates in long-distance cycling events across the country
Paul plays the drums
neither “A tree in Whispering Woods has golden fruit.” nor “Gravity inverts in Oakhaven on Tuesdays.”
John Smith's car runs on ethanol.
all quiet people in the room are old
everyone in... | neutral | {"verbalize_seed": 243876, "proof": null, "cot": "", "prem": {"tptp": "room(mary)&(![X]:(room(X)=>(X='mary')))&\n(prede(mary))&\n(predc(paul))&\n(~((propositiona)|(propositione)))&\n(propositiond)&\n(![X]:(room(X)=>(quiet(X)=>old(X))))&\n(![X]:(room(X)=>(predb(X))))&\n(((~propositionb)&(propositiond)))&\n(((proposition... | 2 | instruct | |
logic_nli | Premise:
Mary is the only person in the room.
everyone in the room plays the violin if they is a dedicated advocate for digital privacy and encryption
Mary is not not quiet
everyone in the room bakes bread at home
Fred neither enjoys kayaking and exploring remote waterways nor is not not old
if someone is an old person... | No | {"verbalize_seed": 219763, "proof": {"proof": "% Running in auto input_syntax mode. Trying TPTP\n% Refutation found. Thanks to Tanya!\n% SZS status Unsatisfiable for tmp2i1r2q_t\n% SZS output start Proof for tmp2i1r2q_t\n5. ~(predh(fred) | old(fred)) [input(axiom) 4]\n7. predh(fred) [input(axiom) hyp]\n9. ~predh(fred) ... | 0. [input 4] ~(enjoys_kayaking_and_exploring_remote_waterways(fred) | old(fred))
1. [assumption] enjoys_kayaking_and_exploring_remote_waterways(fred)
2. [pure 1] does_not_enjoy_kayaking_and_exploring_remote_waterways(fred)
3. [forward 3, 2] $false | 0 | verification |
navigation | Objects occupy distinct points on the integer grid [0, 4] x [0, 4].
North is +y and East is +x. Any object not mentioned in a step stays fixed.
Initial facts:
- A is right of B.
- B is left of C.
- B is above C.
- A is right of C.
- A is in the same row as C.
- B is above A.
Steps:
1. B jumps to A's position offset b... | (aligned, below) | {"answer_type": "relation", "query_a": "A", "query_b": "B", "grid": 4, "objects": ["A", "B", "C"], "facts": [{"k": "h", "a": "A", "b": "B", "r": "right"}, {"k": "h", "a": "B", "b": "C", "r": "left"}, {"k": "v", "a": "B", "b": "C", "r": "above"}, {"k": "h", "a": "A", "b": "C", "r": "right"}, {"k": "v", "a": "A", "b": "C... | 0 | instruct | |
navigation | Objects occupy distinct points on the integer grid [0, 6] x [0, 6].
North is +y and East is +x. Any object not mentioned in a step stays fixed.
Initial facts:
- E is left of B.
- D is above C.
- A is left of D.
- B is above E.
- C is below B.
- B is in the same row as A.
- E is below D.
- A is in the same column as E.... | No | {"answer_type": "distance", "query_a": "C", "query_b": "D", "grid": 6, "objects": ["A", "B", "C", "D", "E"], "facts": [{"k": "h", "a": "E", "b": "B", "r": "left"}, {"k": "v", "a": "D", "b": "C", "r": "above"}, {"k": "h", "a": "A", "b": "D", "r": "left"}, {"k": "v", "a": "B", "b": "E", "r": "above"}, {"k": "v", "a": "C"... | 2 | verification | |
parsability | /trace (GRAMMAR)
start -> seq
seq ->
seq -> expr seq
expr -> '(' seq ')'
expr -> '[' seq ']'
expr -> '<' seq '>'
expr -> '⟨' seq '⟩'
expr -> '⟦' seq '⟧'
expr -> '⟪' seq '⟫'
(STRING)
( < > ( ) ) ⟦ ⟨ ⟩ ⟧ ⟦ ⟧
(QUESTION)
What is the parsability of this string?
The answer is exactly one word: unambiguous, ambiguous, or u... | <trace>
Parse 1:
'(': start > seq > expr (Depth: 3)
'<': start > seq > expr > seq > expr (Depth: 5)
'>': start > seq > expr > seq > expr (Depth: 5)
'(': start > seq > expr > seq > seq > expr (Depth: 6)
')': start > seq > expr > seq > seq > expr (Depth: 6)
')': start > seq > expr (Depth: 3)
'⟦': start > seq > seq > expr... | {"cot": "Parse 1:\n'(': start > seq > expr (Depth: 3)\n'<': start > seq > expr > seq > expr (Depth: 5)\n'>': start > seq > expr > seq > expr (Depth: 5)\n'(': start > seq > expr > seq > seq > expr (Depth: 6)\n')': start > seq > expr > seq > seq > expr (Depth: 6)\n')': start > seq > expr (Depth: 3)\n'\u27e6': start > seq... | Parse 1:
'(': start > seq > expr (Depth: 3)
'<': start > seq > expr > seq > expr (Depth: 5)
'>': start > seq > expr > seq > expr (Depth: 5)
'(': start > seq > expr > seq > seq > expr (Depth: 6)
')': start > seq > expr > seq > seq > expr (Depth: 6)
')': start > seq > expr (Depth: 3)
'⟦': start > seq > seq > expr (Depth:... | 2 | cot |
parsability | (GRAMMAR)
S -> G
D -> '<' '[' E ']' '>'
A -> F
E -> 'still'
D -> 'firm'
G -> '<' G '>'
G -> 'market'
C -> B E
(STRING)
< < market > > > > < < < >
(QUESTION)
What is the parsability of this string?
The answer is exactly one word: unambiguous, ambiguous, or unparsable. | unparsable | {"cot": "", "label": "unparsable", "tokens": ["<", "<", "market", ">", ">", ">", ">", "<", "<", "<", ">"], "g": "S -> G\nD -> '<' '[' E ']' '>'\nA -> F\nE -> 'still'\nD -> 'firm'\nG -> '<' G '>'\nG -> 'market'\nC -> B E", "_time": 1.367563009262085, "_task": "parsability", "_level": 3, "_config": {"c": 1.0, "level": 3,... | 3 | instruct | |
parsability | (GRAMMAR)
S -> A
B -> 'senior'
A -> B B
B -> A
(STRING)
senior senior senior senior senior
(QUESTION)
What is the parsability of this string?
The answer is exactly one word: unambiguous, ambiguous, or unparsable.
Answer:
unambiguous
Correct? (Yes/No) | No | {"cot": "Parse 1:\n'senior': S > A > B > A > B > A > B > A > B (Depth: 9)\n'senior': S > A > B > A > B > A > B > A > B (Depth: 9)\n'senior': S > A > B > A > B > A > B (Depth: 7)\n'senior': S > A > B > A > B (Depth: 5)\n'senior': S > A > B (Depth: 3)\nParse 2:\n'senior': S > A > B > A > B > A > B (Depth: 7)\n'senior': S... | Parse 1:
'senior': S > A > B > A > B > A > B > A > B (Depth: 9)
'senior': S > A > B > A > B > A > B > A > B (Depth: 9)
'senior': S > A > B > A > B > A > B (Depth: 7)
'senior': S > A > B > A > B (Depth: 5)
'senior': S > A > B (Depth: 3)
Parse 2:
'senior': S > A > B > A > B > A > B (Depth: 7)
'senior': S > A > B > A > B ... | 0 | verification |
parsing | /trace (GRAMMAR)
start -> root
decl_simple -> there is det_sg_a n_sg_c
root -> discourse '.'
det_sg_a -> 'a'
conj -> 'but'
decl -> decl_simple ',' conj decl_simple
is -> 'is'
n_sg_c -> 'cat'
discourse -> decl
there -> 'there'
(STRING)
there is a cat , but there is a cat .
(QUESTION)
The answer is the fully parenthesi... | <trace>
'there': start > root > discourse > decl > decl_simple > there (Depth: 6)
'is': start > root > discourse > decl > decl_simple > is (Depth: 6)
'a': start > root > discourse > decl > decl_simple > det_sg_a (Depth: 6)
'cat': start > root > discourse > decl > decl_simple > n_sg_c (Depth: 6)
',': start > root > disc... | {"cot": "'there': start > root > discourse > decl > decl_simple > there (Depth: 6)\n'is': start > root > discourse > decl > decl_simple > is (Depth: 6)\n'a': start > root > discourse > decl > decl_simple > det_sg_a (Depth: 6)\n'cat': start > root > discourse > decl > decl_simple > n_sg_c (Depth: 6)\n',': start > root >... | 'there': start > root > discourse > decl > decl_simple > there (Depth: 6)
'is': start > root > discourse > decl > decl_simple > is (Depth: 6)
'a': start > root > discourse > decl > decl_simple > det_sg_a (Depth: 6)
'cat': start > root > discourse > decl > decl_simple > n_sg_c (Depth: 6)
',': start > root > discourse > ... | 1 | cot |
parsing | (GRAMMAR)
S -> E
E -> 'sound' E
E -> 'sound'
(STRING)
sound sound sound sound
(QUESTION)
The answer is the fully parenthesized parse tree of STRING in Lisp style.
Given G_ex: S -> NP VP, NP -> 'd' N, N -> 'n', VP -> 'v' and "d n v", correct is (S (NP d (N n)) (VP v)). | (S (E sound (E sound (E sound (E sound))))) | {"cot": "'sound': S > E (Depth: 2)\n'sound': S > E > E (Depth: 3)\n'sound': S > E > E > E (Depth: 4)\n'sound': S > E > E > E > E (Depth: 5)", "parses": [[["sound", ["sound", ["sound", ["sound"]]]]]], "label": "unambiguous", "tokens": ["sound", "sound", "sound", "sound"], "g": "S -> E\nE -> 'sound' E\nE -> 'sound'", "mo... | 'sound': S > E (Depth: 2)
'sound': S > E > E (Depth: 3)
'sound': S > E > E > E (Depth: 4)
'sound': S > E > E > E > E (Depth: 5) | 1 | instruct |
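The Lisp-style parse tree in the row above comes from the two-production grammar `S -> E`, `E -> 'sound' E | 'sound'`. A minimal recursive-descent sketch for just that grammar (function names are my own):

```python
def parse_E(tokens):
    """Parse E -> 'sound' E | 'sound'; return a Lisp-style tree string."""
    assert tokens and tokens[0] == "sound"
    rest = tokens[1:]
    if rest:
        return "(E sound " + parse_E(rest) + ")"
    return "(E sound)"

def parse_S(tokens):
    return "(S " + parse_E(tokens) + ")"
```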
parsing | (GRAMMAR)
np_sg_subj -> name
discourse -> decl
is -> 'is'
adj -> 'open'
name -> 'Bob'
decl -> decl_simple
root -> discourse '.'
start -> root
decl_simple -> np_sg_subj vp_sg
vp_sg -> is adj
(STRING)
Bob is open .
(QUESTION)
Identify the Part-of-Speech (immediate parent) and tree depth for each token.
format per token... | Yes | {"cot": "'Bob': start > root > discourse > decl > decl_simple > np_sg_subj > name (Depth: 7)\n'is': start > root > discourse > decl > decl_simple > vp_sg > is (Depth: 7)\n'open': start > root > discourse > decl > decl_simple > vp_sg > adj (Depth: 7)\n'.': start > root (Depth: 2)", "parses": [[[[[[[["Bob"]], [["is"], ["... | 'Bob': start > root > discourse > decl > decl_simple > np_sg_subj > name (Depth: 7)
'is': start > root > discourse > decl > decl_simple > vp_sg > is (Depth: 7)
'open': start > root > discourse > decl > decl_simple > vp_sg > adj (Depth: 7)
'.': start > root (Depth: 2) | 0 | verification |
planning | /trace [OBJECTS]
object_1, object_2
[ACTIONS]
action_0(x0, x1)
Requires: fluent_0
Effect: not fluent_0
action_1(x0, x1)
Effect: fluent_0
action_2(x0, x1)
Requires: (not fluent_0)
Effect: fluent_0
action_3(x0)
Effect: fluent_0
[STATE]
Default: False
Initial true values: None
[GOAL]
fluent_0
The answer is... | <trace>
Target Goals: fluent_0
Step 1:
Selected Action: (action_1 object_2 object_2)
- Preconditions met. Applying action.
- Added effects: fluent_0
- Goal condition satisfied.
Plan found.
</trace>
action_1(object_2, object_2) | {"domain_seed": "4-357", "fluent_arity": 1, "na": 1, "problem_english": "[OBJECTS]\nobject_1, object_2\n\n[ACTIONS]\naction_0(x0, x1)\n Requires: fluent_0\n Effect: not fluent_0\naction_1(x0, x1)\n Effect: fluent_0\naction_2(x0, x1)\n Requires: (not fluent_0)\n Effect: fluent_0\naction_3(x0)\n Effect: fluent_0\n\... | Target Goals: fluent_0
Step 1:
Selected Action: (action_1 object_2 object_2)
- Preconditions met. Applying action.
- Added effects: fluent_0
- Goal condition satisfied.
Plan found. | 0 | cot |
planning | [OBJECTS]
object_1, object_2, object_3, object_4, object_5, object_6
[ACTIONS]
action_0(x0, x1)
Requires: (not fluent_0(x1, x0)), (not fluent_0(x0, x1))
Effect: fluent_0(x1, x0), fluent_0(x0, x1)
action_1(x0)
action_2(x0)
action_3(x0, x1)
Requires: fluent_0(x0, x1)
Effect: not fluent_0(x0, x1), not fluent_0(x1... | action_0(object_5, object_1)
action_3(object_2, object_4)
action_0(object_4, object_2) | {"domain_seed": "6-89", "fluent_arity": 2, "na": 3, "problem_english": "[OBJECTS]\nobject_1, object_2, object_3, object_4, object_5, object_6\n\n[ACTIONS]\naction_0(x0, x1)\n Requires: (not fluent_0(x1, x0)), (not fluent_0(x0, x1))\n Effect: fluent_0(x1, x0), fluent_0(x0, x1)\naction_1(x0)\naction_2(x0)\naction_3(x0,... | Target Goals: fluent_0 object_4 object_2, fluent_0 object_1 object_5
Step 1:
Selected Action: (action_0 object_5 object_1)
- Preconditions met. Applying action.
- Added effects: fluent_0 object_1 object_5, fluent_0 object_5 object_1
- Removed effects: fluent_0 object_1 object_4, fluent_0 object_1 object_6, fluen... | 1 | instruct |
planning | [OBJECTS]
object_1, object_2, object_3
[ACTIONS]
action_0(x0)
Requires: (not fluent_0(x0))
Effect: fluent_0(x0)
action_1(x0, x1)
Requires: fluent_0(x1)
Effect: not fluent_0(x1)
action_2(x0, x1)
Requires: fluent_0(x0)
Effect: not fluent_0(x0)
[STATE]
Default: False
Initial true values: None
[GOAL]
fluent... | No | {"domain_seed": "4-237", "fluent_arity": 1, "na": 1, "problem_english": "[OBJECTS]\nobject_1, object_2, object_3\n\n[ACTIONS]\naction_0(x0)\n Requires: (not fluent_0(x0))\n Effect: fluent_0(x0)\naction_1(x0, x1)\n Requires: fluent_0(x1)\n Effect: not fluent_0(x1)\naction_2(x0, x1)\n Requires: fluent_0(x0)\n Effec... | Target Goals: fluent_0 object_2
Step 1:
Selected Action: (action_0 object_2)
- Preconditions met. Applying action.
- Added effects: fluent_0 object_2
- Goal condition satisfied.
Plan found. | 0 | verification |
proof_reconstruction | /trace Reconstruct the proof dependency graph.
Domain: Geometry
Theorem: (point_on_plane(X1)=point_2_on_line(X1)|point_1_on_line(X1)=X1|on(X2,X1)|~plane(X1)|~line(X2)|~line(X1)|~point(point_1_on_line(X1))|~point(X1)|~on(point_1_on_line(X1),X2)|~on(X1,X2))
Rules:
- Some clauses are axioms (no parents); do NOT list them... | <trace>
step_0 pm(premise_4, premise_3): [ '(point_2_on_line(X1)=X2|on(X1,X3)|~plane(X3)|~line(X1)|~point(point_2_on_line(X1))|~point(X2)|~on(point_2_on_line(X1),X3)|~on(X2,X3)|~on(X2,X1))' ]
step_1 pm(premise_4, premise_5): [ '(point_1_on_line(X1)=X2|on(X3,X1)|~plane(X1)|~line(X3)|~line(X1)|~point(point_1_on_line(X1))... | {"numbered_clauses": ["(point(point_on_plane(X1))|~plane(X1))", "(point_on_plane(X1)=point_2_on_line(X1)|on(X1,X1)|~plane(X1)|~line(X1)|~point(point_on_plane(X1)))", "(on(point_2_on_line(X1),X1)|~line(X1))", "(X1=X3|on(X2,X4)|~on(X1,X2)|~on(X3,X2)|~on(X1,X4)|~on(X3,X4)|~plane(X4)|~point(X1)|~point(X3)|~line(X2))", "(on... | step_0 pm(premise_4, premise_3): [ '(point_2_on_line(X1)=X2|on(X1,X3)|~plane(X3)|~line(X1)|~point(point_2_on_line(X1))|~point(X2)|~on(point_2_on_line(X1),X3)|~on(X2,X3)|~on(X2,X1))' ]
step_1 pm(premise_4, premise_5): [ '(point_1_on_line(X1)=X2|on(X3,X1)|~plane(X1)|~line(X3)|~line(X1)|~point(point_1_on_line(X1))|~point(... | 1 | cot |
proof_reconstruction | Reconstruct the proof dependency graph.
Domain: Geometry
Theorem: (~meet(X1,X2,X3)|~inner_point(X4,X3)|~inner_point(X4,X5)|~part_of(X5,X2))
Rules:
- Some clauses are axioms (no parents); do NOT list them
- All other clauses derive from exactly 2 parents
- Clauses can be reused as parents
Shuffled clauses:
1. (inciden... | 2 <- 3, 5
3 <- 1, 8
5 <- 4, 7
7 <- 6, 8 | {"numbered_clauses": ["(incident_c(X3,X2)|~part_of(X1,X2)|~incident_c(X3,X1))", "(~meet(X1,X2,X3)|~inner_point(X4,X3)|~inner_point(X4,X5)|~part_of(X5,X2))", "(incident_c(X1,X2)|~inner_point(X1,X3)|~part_of(X3,X2))", "(~inner_point(X1,X2)|~end_point(X1,X2))", "(~meet(X1,X2,X3)|~incident_c(X4,X2)|~inner_point(X4,X3))", "... | step_0 pm(premise_1, premise_8): [ '(incident_c(X1,X2)|~inner_point(X1,X3)|~part_of(X3,X2))' ]
step_1 pm(premise_8, premise_6): [ '(end_point(X1,X2)|~meet(X3,X4,X2)|~incident_c(X1,X4)|~inner_point(X1,X2))' ]
step_2 pm(premise_4, step_1): [ '(~meet(X1,X2,X3)|~incident_c(X4,X2)|~inner_point(X4,X3))' ]
THEOREM pm(step_0, ... | 1 | instruct |
proof_reconstruction | Reconstruct the proof dependency graph.
Domain: Ring Theory
Theorem: (add(additive_inverse(X1),add(X2,add(X3,add(X4,add(X5,X1)))))=add(X2,add(X3,add(X5,X4))))
Rules:
- Some clauses are axioms (no parents); do NOT list them
- All other clauses derive from exactly 2 parents
- Clauses can be reused as parents
Shuffled c... | Yes | {"numbered_clauses": ["(add(X1,add(X2,X3))=add(X2,add(X1,X3)))", "(add(additive_inverse(X1),add(X2,add(X3,add(X4,X1))))=add(X2,add(X4,X3)))", "(add(additive_inverse(X1),add(X2,add(X3,X1)))=add(X3,X2))", "(add(additive_inverse(X1),add(X2,add(X3,add(X4,add(X5,X1)))))=add(X2,add(X3,add(X5,X4))))", "(add(X1,add(X2,X3))=add... | step_0 rw(premise_5, premise_6): [ '(add(additive_inverse(X1),add(X2,add(X3,X1)))=add(X3,X2))' ]
step_1 pm(step_0, premise_5): [ '(add(additive_inverse(X1),add(X2,add(X3,add(X4,X1))))=add(X2,add(X4,X3)))' ]
THEOREM pm(premise_5, step_1): [ '(add(additive_inverse(X1),add(X2,add(X3,add(X4,add(X5,X1)))))=add(X2,add(X3,add... | 0 | verification |
qualitative_reasoning | Qualitative reasoning over vertical extents of 2D boxes.
There are 6 entities labeled 0 through 5.
You are given the following facts (read 'i rel j' as 'entity i is rel to entity j'):
2 meets 3
0 met-by 2
4 after 2
5 starts 0
1 after 2
4 met-by 5
0 starts 3
Question: what is the relation of the vertical ... | started-by | {"calculus": "allen_y", "topic": "vertical extents of 2D boxes", "phrasing": "the relation of the vertical extent of box {i} to that of box {j}", "n_entities": 6, "hops": 3, "n_revealed": 7, "entities": [[-3, -1, -2, 1], [-3, 3, -1, 1], [-3, -2, -3, -2], [-1, 3, -2, 2], [-3, -2, -1, 0], [0, 2, -2, -1]], "revealed": [[2... | 1 | instruct | |
qualitative_reasoning | Qualitative reasoning over time intervals.
There are 6 entities labeled 0 through 5.
You are given the following facts (read 'i rel j' as 'entity i is rel to entity j'):
1 before 2
4 contains 2
5 after 1
3 started-by 1
0 after 1
1 meets 4
2 during 3
2 overlaps 5
0 during 4
0 meets 2
3 finished-by ... | No | {"calculus": "allen_time", "topic": "time intervals", "phrasing": "the temporal relation of interval {i} to interval {j}", "n_entities": 6, "hops": 3, "n_revealed": 13, "entities": [[0, 1], [-3, -1], [1, 3], [-3, 4], [-1, 4], [2, 4]], "revealed": [[1, 2, "before"], [4, 2, "contains"], [5, 1, "after"], [3, 1, "started-b... | 1 | verification | |
reference_tracking | Rules:
- Each ball has a positive integer size.
- Dock(X, Y) succeeds iff size(X) == size(Y).
- If docking fails and the failure sentence says 'it was too large/small',
'it' refers to the larger/smaller of the two docked balls.
Inventory:
- b1: red
- b2: green
- b3: black
- b4: white
- b5: yellow
Initial state:
- b1... | x1 | {"family": "logical_winograd", "balls": ["b1", "b2", "b3", "b4", "b5"], "boxes": ["x1", "x2", "x3", "x4"], "colors": {"b1": "red", "b2": "green", "b3": "black", "b4": "white", "b5": "yellow"}, "initial_placement": {"b1": "x2", "b2": "x1", "b3": "x2", "b4": "x3", "b5": "x1"}, "moves": ["Transfer b5 from x1 into x4.", "T... | 2 | instruct | |
reference_tracking | Inventory:
- b1: green
- b2: blue
- b3: yellow
- b4: green
Initial state:
- b1 is in x1
- b2 is in x2
- b3 is in x1
- b4 is in x1
Moves:
- Transfer everything in x1 into x3.
- Move b3 from x3 to x1.
- Transfer b2 from x2 into x1.
- Move all contents of x1 to x3.
Question: Where is the ball that started in x2 now? Answe... | No | {"family": "track", "balls": ["b1", "b2", "b3", "b4"], "boxes": ["x1", "x2", "x3"], "colors": {"b1": "green", "b2": "blue", "b3": "yellow", "b4": "green"}, "initial_placement": {"b1": "x1", "b2": "x2", "b3": "x1", "b4": "x1"}, "moves": ["Transfer everything in x1 into x3.", "Move b3 from x3 to x1.", "Transfer b2 from x... | 0 | verification | |
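The reference_tracking rows above are state-tracking puzzles: balls move between boxes and the question asks for a final location. A minimal sketch for the second row, with the assumption (my reading of the move semantics) that "transfer everything in x1" expands into one move per ball currently in x1:

```python
def track(initial, moves):
    """Apply (ball, destination) moves to a {ball: box} placement."""
    state = dict(initial)
    for ball, dest in moves:
        state[ball] = dest
    return state

# Second reference_tracking row above, with bulk transfers expanded by hand.
initial = {"b1": "x1", "b2": "x2", "b3": "x1", "b4": "x1"}
moves = [("b1", "x3"), ("b3", "x3"), ("b4", "x3"),  # everything in x1 -> x3
         ("b3", "x1"),                              # b3 from x3 back to x1
         ("b2", "x1"),                              # b2 from x2 -> x1
         ("b3", "x3"), ("b2", "x3")]                # all contents of x1 -> x3
final = track(initial, moves)
```

The ball that started in x2 is b2, so the query reads off `final["b2"]`.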
regex_following | The answer is a 6-character string that fully matches the regular expression: (?:(\++|[V-V]){1,4})|\W | +++VVV | {"regex": "(?:(\\++|[V-V]){1,4})|\\W", "string": "+++VVV", "_time": 0.02289271354675293, "_task": "regex_following", "_level": 3, "_config": {"c": 1.0, "level": 3, "seed": null, "size": null, "n_ex": 11, "max_depth": 8, "min_depth": 6}, "_prompt_tokens": 31, "_answer_tokens": 3} | 3 | instruct | |
regex_following | The answer is a 5-character string that fully matches the regular expression: \s|(?:5)|\d|x{2,4}c
Answer:
xxxxc
Correct? (Yes/No) | Yes | {"regex": "\\s|(?:5)|\\d|x{2,4}c", "string": "xxxxc", "_time": 0.007856130599975586, "_task": "regex_following", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "n_ex": 9, "max_depth": 6, "min_depth": 4}, "_prompt_tokens": 31, "_answer_tokens": 2} | 1 | verification | |
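The regex_following rows above require a *full* match, not a prefix match, so the natural check uses `re.fullmatch`. A minimal sketch using the two patterns and strings from those rows:

```python
import re

def fully_matches(pattern, s):
    """True iff s matches pattern in its entirety."""
    return re.fullmatch(pattern, s) is not None

# Patterns and strings from the two regex_following rows above.
ok_1 = fully_matches(r"(?:(\++|[V-V]){1,4})|\W", "+++VVV")
ok_2 = fully_matches(r"\s|(?:5)|\d|x{2,4}c", "xxxxc")
```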
regex_induction | The answer is the shortest regex that fully matches all POSITIVE strings and none of the NEGATIVE strings.
POSITIVE: '(x', '((q', '((q', '(((((r', '(!', '(( ', '((s', '((((/', '(((3', '(((y', '(((m'
NEGATIVE: ']]', 'x', 'a88', '.', 'e6@', 'rr...6c?', 'k', '5', 'peaceeR', ')whoseQ', 'pasP4b ' | (?:(\(+)(?:[^4-l]))? | {"regex": "(?:(\\(+)(?:[^4-l]))?", "positives": ["(x", "((q", "((q", "(((((r", "(!", "(( ", "((s", "((((/", "(((3", "(((y", "(((m"], "negatives": ["]]", "x", "a88", ".", "e6@", "rr...6c?", "k", "5", "peaceeR", ")whoseQ", "pasP4b "], "_time": 0.10089492797851562, "_task": "regex_induction", "_level": 3, "_config": {"c":... | 3 | instruct | |
regex_induction | The answer is the shortest regex that fully matches all POSITIVE strings and none of the NEGATIVE strings.
POSITIVE: '*"', '*&', '.', '*^', '+', '.', '*<', '.', 'leaderrE'
NEGATIVE: 'F', '#', '?eitherf', ' E', 'leaderrrr]', 'lF9": ', '\', '\', 'c]Y'
Answer:
\+|\.?|\*[^Eca]|leader{2}(?:([JE3]))
Correct? (Yes/No) | Yes | {"regex": "\\+|\\.?|\\*[^Eca]|leader{2}(?:([JE3]))", "positives": ["*\"", "*&", ".", "*^", "+", ".", "*<", ".", "leaderrE"], "negatives": ["F", "#", "?eitherf", " E", "leaderrrr]", "lF9\": ", "\\", "\\", "c]Y"], "_time": 0.06333422660827637, "_task": "regex_induction", "_level": 1, "_config": {"c": 1.0, "level": 1, "se... | 1 | verification | |
regex_reasoning | Consider the regular expressions A = cd(ba)? and B = (c*)|(a|ad)
Find the shortest string that is accepted by exactly one of A or B (but not both).
The answer is the shortest such string. | {"qtype": "distinguishing", "regex_a": "cd(ba)?", "regex_b": "(c*)|(a|ad)", "_time": 0.03389334678649902, "_task": "regex_reasoning", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "max_depth": 5, "min_depth": 3, "n_alpha": 4, "gramforge_algorithm": "sequential"}, "_prompt_tokens": 48, "_ans... | 1 | instruct | ||
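A distinguishing string for two regexes can be found by brute force over short strings, checking full matches in length order. A minimal sketch (helper name and the small alphabet/length bounds are my own choices); note that for the row above, the empty string is accepted by `(c*)|(a|ad)` but not by `cd(ba)?`, which is consistent with the empty answer cell.

```python
import re
from itertools import product

def shortest_distinguisher(a, b, alphabet="abcd", max_len=4):
    """Shortest string accepted by exactly one of regexes a and b, or None."""
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            s = "".join(tup)
            in_a = re.fullmatch(a, s) is not None
            in_b = re.fullmatch(b, s) is not None
            if in_a != in_b:
                return s
    return None
```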
regex_reasoning | Consider the regular expressions A = ((ad?|bb))bb* and B = ((ad?|bb))bb*
Do A and B accept exactly the same set of strings?
The answer is Yes or No.
Answer:
bcbba
Correct? (Yes/No) | No | {"qtype": "equivalence", "regex_a": "((ad?|bb))bb*", "regex_b": "((ad?|bb))bb*", "_time": 0.04436779022216797, "_task": "regex_reasoning", "_level": 3, "_config": {"c": 1.0, "level": 3, "seed": null, "size": null, "max_depth": 7, "min_depth": 5, "n_alpha": 5, "gramforge_algorithm": "sequential"}, "_prompt_tokens": 42, ... | 3 | verification | |
sequential_induction | Infer a recurrence for a sequence indexed from 0: [U0, U1, ..., U9].
Max recurrence degree: 1.
Allowed binary ops: +, -, *, **
- Previous terms must be referenced exactly as: U[n - 1] ... U[n - 1]
- You may use "n" (current index).
- The answer is the right-hand side only (do not write "U[n] =").
- Your recurrence deg... | 3*U[n - 1] | {"first elements": [4, 12, 36, 108, 324, 972, 2916, 8748, 26244, 78732], "degree of recursion": 1, "initial terms": [4], "_time": 0.04924726486206055, "_task": "sequential_induction", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "recurrence_depth": 2, "n_visible_terms": 10, "max_terms_len"... | 1 | instruct | |
sequential_induction | Infer a recurrence for a sequence indexed from 0: [U0, U1, ..., U9].
Max recurrence degree: 0.
Allowed binary ops: +, -, *, **
- Previous terms must be referenced exactly as: U[n - 1] ... U[n - 0]
- You may use "n" (current index).
- The answer is the right-hand side only (do not write "U[n] =").
- Your recurrence deg... | Yes | {"first elements": [-1, -3, -5, -7, -9, -11, -13, -15, -17, -19], "degree of recursion": 0, "initial terms": [], "_time": 0.05078721046447754, "_task": "sequential_induction", "_level": 1, "_config": {"c": 1.0, "level": 1, "seed": null, "size": null, "recurrence_depth": 2, "n_visible_terms": 10, "max_terms_len": 15, "m... | 1 | verification | |
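A proposed recurrence for the sequential_induction rows can be checked by replaying it over the visible terms. A minimal sketch; the closed form `-(2*n + 1)` for the degree-0 row is my inference from its terms, not stated in the dump.

```python
def check_recurrence(terms, rhs, degree):
    """True iff rhs(n, terms) reproduces terms[n] for all n >= degree."""
    return all(rhs(n, terms) == terms[n] for n in range(degree, len(terms)))

# Visible terms of the first sequential_induction row above.
seq = [4, 12, 36, 108, 324, 972, 2916, 8748, 26244, 78732]
```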
set_equality | Set1: ['individual father', 'equal sympathy', 'lonely technology', 'several golf', 'ugly performance', 'aware tennis', 'temporary catch', 'quick resort', 'visible tradition', 'professional poem', 'wise blue', 'yellow task', 'different list', 'classic league', 'classic dealer', 'pleasant bag']
Set2: ['wise blue', 'profe... | True | {"base_subset": ["individual father", "equal sympathy", "lonely technology", "several golf", "ugly performance", "aware tennis", "temporary catch", "quick resort", "visible tradition", "professional poem", "wise blue", "yellow task", "different list", "classic league", "classic dealer", "pleasant bag"], "subset_bis": [... | 1 | instruct | |
set_equality | Set1: [599, 781, 720, 429, 669, 521, 788, 434, 770, 913, 101, 381, 275, 895, 586, 426, 454, 110, 154, 566, 464, 310, 524, 735, 798, 771, 103, 516, 466, 672, 818, 631]
Set2: [895, 586, 524, 426, 735, 818, 788, 434, 720, 599, 110, 672, 466, 566, 101, 631, 521, 798, 454, 618, 770, 913, 103, 310, 669, 275, 781, 381, 464, 4... | Yes | {"base_subset": [599, 781, 720, 429, 669, 521, 788, 434, 770, 913, 101, 381, 275, 895, 586, 426, 454, 110, 154, 566, 464, 310, 524, 735, 798, 771, 103, 516, 466, 672, 818, 631], "subset_bis": [895, 586, 524, 426, 735, 818, 788, 434, 720, 599, 110, 672, 466, 566, 101, 631, 521, 798, 454, 618, 770, 913, 103, 310, 669, 27... | 2 | verification | |
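The set_equality rows above ask whether two listings denote the same set, so order and duplicates are irrelevant. The full lists are truncated in this dump, so the sketch below uses a toy example rather than the rows' data:

```python
def same_set(a, b):
    """True iff the two listings contain exactly the same elements."""
    return set(a) == set(b)
```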
set_intersection | Set1: ['significant group', 'objective bother', 'straight meaning', 'easy insurance', 'pure delivery', 'basic text', 'curious storage', 'responsible assumption']
Set2: ['basic text', 'pure delivery', 'proud mud', 'objective bother', 'primary range', 'lost weekend']
The answer is the intersection of Set1 and Set2 as a P... | {'basic text', 'objective bother', 'pure delivery'} | {"set_1": ["significant group", "objective bother", "straight meaning", "easy insurance", "pure delivery", "basic text", "curious storage", "responsible assumption"], "set_2": ["basic text", "pure delivery", "proud mud", "objective bother", "primary range", "lost weekend"], "_time": 0.0005331039428710938, "_task": "set... | 0 | instruct | |
set_intersection | Set1: ['distinct reputation', 'huge vacation', 'political mouth', 'hot dimension', 'few cable', 'logical concept', 'common age', 'lost weekend']
Set2: ['gross phone', 'common age', 'famous capital', 'hot dimension', 'hungry event', 'lost weekend']
The answer is the intersection of Set1 and Set2 as a Python set: {elem_1... | No | {"set_1": ["distinct reputation", "huge vacation", "political mouth", "hot dimension", "few cable", "logical concept", "common age", "lost weekend"], "set_2": ["gross phone", "common age", "famous capital", "hot dimension", "hungry event", "lost weekend"], "_time": 0.0010385513305664062, "_task": "set_intersection", "_... | 0 | verification | |
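The set_intersection rows reduce directly to Python's `&` operator on sets. A minimal sketch using the two sets from the first row above:

```python
set_1 = {"significant group", "objective bother", "straight meaning",
         "easy insurance", "pure delivery", "basic text",
         "curious storage", "responsible assumption"}
set_2 = {"basic text", "pure delivery", "proud mud",
         "objective bother", "primary range", "lost weekend"}

common = set_1 & set_2  # elements present in both sets
```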
set_missing_element | Set_A: ['seven hundred and eighty-four', 'seven hundred and eighty-three', 'seven hundred and seventy-six', 'seven hundred and eighty-two', 'seven hundred and sixty-seven', 'seven hundred and seventy-seven', 'seven hundred and seventy', 'seven hundred and seventy-nine', 'seven hundred and seventy-four', 'seven hundred ... | {'seven hundred and seventy-eight', 'seven hundred and seventy-three', 'seven hundred and seventy-two'} | {"element_list": ["seven hundred and eighty-four", "seven hundred and eighty-three", "seven hundred and seventy-six", "seven hundred and eighty-two", "seven hundred and sixty-seven", "seven hundred and seventy-seven", "seven hundred and seventy", "seven hundred and seventy-nine", "seven hundred and seventy-four", "seve... | 1 | instruct | |
set_missing_element | Set_A: [558, 562, 544, 531, 540, 565, 551, 559, 549, 527, 532, 530, 561, 533, 548, 529, 526, 538, 541, 560, 539, 546, 535, 553, 537, 542, 528, 550, 547, 563, 536, 552, 543, 556, 564, 545, 534, 557, 554]
The answer is the missing elements from Set_A as a Python set.
Answer:
{'2021-04-02', '2021-04-08', '2021-04-10'}
Cor... | No | {"element_list": [558, 562, 544, 531, 540, 565, 551, 559, 549, 527, 532, 530, 561, 533, 548, 529, 526, 538, 541, 560, 539, 546, 535, 553, 537, 542, 528, 550, 547, 563, 536, 552, 543, 556, 564, 545, 534, 557, 554], "_time": 0.0008249282836914062, "_task": "set_missing_element", "_level": 2, "_config": {"c": 1.0, "level"... | 2 | verification | |
symbolic_arithmetics | /trace Simplify the following algebraic expression:
12 % 8 - y + x + abs(4) * (x) - (y)
The answer is the simplified expression. | <trace>
12 % 8 - y + x + abs(4) * (x) - (y)
= 5*x - 2*y + 4
</trace>
5*x - 2*y + 4 | {"expr": "12 % 8 - y + x + abs(4) * (x) - (y)", "height": 7, "cot": "12 % 8 - y + x + abs(4) * (x) - (y)\n= 5*x - 2*y + 4", "_time": 0.014969825744628906, "_task": "symbolic_arithmetics", "_level": 2, "_config": {"c": 1.0, "level": 2, "seed": null, "size": null, "min_depth": 5, "max_depth": 7, "max_coeff": 15, "variabl... | 12 % 8 - y + x + abs(4) * (x) - (y)
= 5*x - 2*y + 4 | 2 | cot |
symbolic_arithmetics | Simplify the following algebraic expression:
((y))**2
The answer is the simplified expression. | y**2 | {"expr": "((y))**2", "height": 4, "cot": "((y))**2\n= y**2", "_time": 0.720430850982666, "_task": "symbolic_arithmetics", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "min_depth": 3, "max_depth": 5, "max_coeff": 9, "variables": ["x", "y"]}, "_prompt_tokens": 21, "_answer_tokens": 13} | ((y))**2
= y**2 | 0 | instruct |
symbolic_arithmetics | Simplify the following algebraic expression:
y + (x) + x - 3
The answer is the simplified expression.
Answer:
2*x + y - 3
Correct? (Yes/No) | Yes | {"expr": "y + (x) + x - 3", "height": 5, "cot": "y + (x) + x - 3\n= 2*x + y - 3", "_time": 0.02155900001525879, "_task": "symbolic_arithmetics", "_level": 0, "_config": {"c": 1.0, "level": 0, "seed": null, "size": null, "min_depth": 3, "max_depth": 5, "max_coeff": 9, "variables": ["x", "y"]}, "_prompt_tokens": 26, "_an... | y + (x) + x - 3
= 2*x + y - 3 | 0 | verification |
Reasoning-Core: Symbolic Pre-Training Pile (SPT)
SPT is designed for symbolic/formal pre-training, mid-training, and SFT. The data is procedurally generated on CPU and can be scaled to trillions of tokens, and difficulty is adjustable with a single knob.
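As a toy illustration of procedural generation with a single difficulty knob, the sketch below builds random arithmetic expressions whose depth grows with a `level` parameter. The generator and its depth mapping (level maps to depths level+3 through level+5) are illustrative assumptions inferred from sample metadata, not the actual SPT implementation.

```python
import random

OPS = ["+", "-", "*"]

def gen_expr(depth, rng):
    """Build a random arithmetic expression as a string of the given depth."""
    if depth <= 1:
        return str(rng.randint(-9, 9))
    op = rng.choice(OPS)
    return f"({gen_expr(depth - 1, rng)} {op} {gen_expr(depth - 1, rng)})"

def make_instance(level, seed=0):
    """One prompt/answer pair; `level` is the single difficulty knob."""
    rng = random.Random(seed)
    depth = rng.randint(level + 3, level + 5)  # deeper trees at higher levels
    expr = gen_expr(depth, rng)
    return {"prompt": f"Evaluate {expr}.", "answer": str(eval(expr))}
```

Because generation is seeded, the supply of fresh instances is effectively unbounded while any single instance stays reproducible.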
Task Categories
📐 Formal Reasoning: planning • conjecture_entailment • proof_reconstruction
📜 Formal Semantics, Logic: logic_nli • evidence_retrieval
🔢 Mathematical Computation: equation_system • arithmetics • symbolic_arithmetics • sequential_induction
💻 Code & Execution: code_execution • diff_prediction • diff_patching
🕸️ Graph Theory: graph_pathfinding • graph_node_centrality • graph_cycle_detection • graph_isomorphism
🎲 Probabilistic: bayesian_association • bayesian_intervention
📝 Language Parsing, Syntax: regex_following • regex_induction • parsability • parsing • continuation
📋 Table Processing: table_qa • table_conversion
🔎 Set Operations, Retrieval: set_intersection • set_missing_element • set_equality
Task Modes
We provide three modes for most tasks, all in a format suitable for SFT and pretraining:
➡️ Instruct mode: Direct prompt/answer format
🧠 Trace mode: Most tasks include reasoning traces to bake in chain-of-thought reasoning patterns
✅ Verification mode: 10% of the time, tasks are framed as a prompt plus a candidate answer to judge as valid (Yes/No), to strengthen reasoning self-verification capabilities
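The three modes can be pictured as three renderings of the same underlying instance. The sketch below is illustrative only: the field names, templates, and the corruption strategy for verification candidates are assumptions, not the dataset's actual schema.

```python
import random

def render(prompt, answer, trace, mode, rng):
    """Emit one instance in instruct, trace, or verification form."""
    if mode == "instruct":
        return {"prompt": prompt, "answer": answer}
    if mode == "trace":
        # Reasoning steps precede the final answer.
        return {"prompt": prompt, "answer": f"<trace>\n{trace}\n</trace>\n{answer}"}
    # Verification: show a candidate (correct or corrupted) and ask Yes/No.
    candidate = answer if rng.random() < 0.5 else answer + "0"
    return {
        "prompt": f"{prompt}\nAnswer:\n{candidate}\nCorrect? (Yes/No)",
        "answer": "Yes" if candidate == answer else "No",
    }
```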
🧪 Paper: Reasoning Core: A Scalable RL Environment for LLM Symbolic Reasoning
📦 Code: GitHub Repository (an updated paper with pre-training results is coming)
RLVR version
See rc1 for the post-training/RLVR version
Abstract
We introduce Reasoning Core, a new scalable environment for Reinforcement Learning with Verifiable Rewards (RLVR), designed to advance foundational symbolic reasoning in Large Language Models (LLMs). Unlike existing benchmarks that focus on games or isolated puzzles, Reasoning Core procedurally generates problems across core formal domains, including PDDL planning, first-order logic, context-free grammar parsing, causal reasoning, and system equation solving. The environment is built on key design principles of high-generality problem distributions, verification via external tools, and continuous difficulty control, which together provide a virtually infinite supply of novel training instances. Initial zero-shot evaluations with frontier LLMs confirm the difficulty of Reasoning Core's tasks, positioning it as a promising resource to improve the reasoning capabilities of future models.
Usage
from datasets import load_dataset

ds = load_dataset("reasoning-core/symbolic-pretraining-pile")
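Once loaded, rows can be narrowed to a single task or mode. The sketch below uses plain dicts as stand-ins for dataset rows; the `_task` field appears in the sample metadata, while the mode field name (`mode` here) is an assumption to check against the actual schema.

```python
# Stand-in rows; real rows come from load_dataset(...).
rows = [
    {"_task": "arithmetics", "mode": "instruct"},
    {"_task": "arithmetics", "mode": "verification"},
    {"_task": "graph_pathfinding", "mode": "cot"},
]

# Keep only one task, or drop one mode.
arithmetics_only = [r for r in rows if r["_task"] == "arithmetics"]
no_verification = [r for r in rows if r["mode"] != "verification"]
```

With the `datasets` library, the equivalent is `ds["train"].filter(lambda r: r["_task"] == "arithmetics")`, assuming a `train` split.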
Citation
@article{reasoningcore2026,
  title={Reasoning Core: A Scalable Procedural Data Generation Suite for Symbolic Pre-training and Post-Training},
  author={Lacombe, Valentin and Quesnel, Valentin and Sileo, Damien},
  journal={arXiv preprint arXiv:2603.02208},
  year={2026},
  url={https://arxiv.org/abs/2603.02208}
}