| id | subject | category |
|---|---|---|
| mario | Mario from Super Mario Bros | character |
| rainbow | a rainbow | nature |
| sun | the sun | nature |
| saturn | the planet Saturn with its rings | space |
| maze | a maze | abstract |
| mona_lisa | the Mona Lisa | art |
| hope | hope | abstract |

Every sample's `prompt` uses the same template (see Task Definition below) with the subject substituted in.
# 🎨 Pixel Art Bench
Pixel Art Bench is a structured-output benchmark designed to evaluate language models on their ability to generate valid, interpretable, and semantically meaningful JSON outputs under strict constraints.
The benchmark focuses on pixel art generation over a fixed 24×24 grid, requiring models to produce outputs that are not only syntactically correct but also structurally and visually coherent.
While many benchmarks evaluate free-form text generation, real-world applications increasingly require:
- Strict JSON compliance
- Structured reasoning
- Format adherence under constraints
Pixel Art Bench targets this gap by evaluating:
- Whether a model can follow rigid output schemas
- Whether outputs are machine-interpretable
- Whether generated structures are usable downstream (e.g., rendering)
## 🧪 Task Definition
Each sample consists of:
- A prompt describing a subject
- A requirement to generate a 24×24 pixel grid
### Input
A natural language instruction:
```
Draw pixel art on a 24x24 grid.
Return a JSON object with:
- "grid": 24x24 list of integers (0-9)
- "palette": list of colors used
Subject: the planet Saturn with its rings
```
### Expected Output Format
{ "grid": [[0,1,1,...],[...],...], "palette": ["black", "yellow", "orange"] }
Constraints:

- `grid` must be 24×24
- values must be integers (0–9)
- output must be valid JSON
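A minimal checker for these three constraints might look like the sketch below. This is an illustrative implementation, not the benchmark's actual validation code:

```python
import json


def validate_output(raw: str) -> bool:
    """Check the three constraints: valid JSON, 24x24 grid, integer cells in 0-9."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    grid = obj.get("grid") if isinstance(obj, dict) else None
    if not isinstance(grid, list) or len(grid) != 24:
        return False
    for row in grid:
        if not isinstance(row, list) or len(row) != 24:
            return False
        if not all(isinstance(cell, int) and 0 <= cell <= 9 for cell in row):
            return False
    return True
```

Note that the benchmark itself scores these dimensions gradually rather than pass/fail, as described in the next section.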
## 📊 Evaluation Protocol
The benchmark evaluates model outputs across three dimensions, using robust parsing and graded scoring functions.
### 1. JSON Validity
Evaluates whether the model produces a parsable JSON object, using a multi-stage extraction pipeline:
- Direct JSON parsing
- Regex-based extraction of JSON blocks
- Cleanup of formatting artifacts (e.g., markdown code fences)
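The three-stage pipeline above could be sketched as follows (an illustrative reimplementation; the benchmark's exact extraction code is not shown in this card):

```python
import json
import re


def extract_json(text: str):
    """Recover a JSON object from model output, falling back through stages."""
    # Stage 1: direct parse of the whole response.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Stage 2: strip markdown code fences, then retry.
    stripped = re.sub(r"```(?:json)?", "", text).strip()
    try:
        return json.loads(stripped)
    except json.JSONDecodeError:
        pass
    # Stage 3: regex out the outermost {...} block and parse that.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None
```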
Outputs are scored as:
- **1.0** (correct): valid JSON extracted
- **0.0** (incorrect): parsing failed
### 2. Render Success (Structural Correctness)
Measures whether the output can be interpreted as a valid 24×24 pixel grid.

This is a graded score in [0, 1], composed of:

- **Height score**: how close the grid height is to 24
- **Width score**: consistency of row lengths
- **Type score**: proportion of valid integer cells
Final score: `Render = (height + width + type) / 3`
This ensures partial credit for structurally plausible outputs.
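One plausible implementation of this graded score is sketched below. The exact sub-score formulas are assumptions (the card only names the three components), so treat this as illustrative:

```python
def render_score(grid) -> float:
    """Graded structural score in [0, 1] for a candidate 24x24 grid (assumed formulas)."""
    if not isinstance(grid, list) or not grid:
        return 0.0
    target = 24
    # Height: penalize deviation from 24 rows, floored at 0.
    height = max(0.0, 1.0 - abs(len(grid) - target) / target)
    rows = [r for r in grid if isinstance(r, list)]
    if not rows:
        return height / 3
    # Width: fraction of rows whose length is exactly 24.
    width = sum(len(r) == target for r in rows) / len(rows)
    # Type: proportion of cells that are integers in 0-9.
    cells = [c for r in rows for c in r]
    type_score = (
        sum(isinstance(c, int) and 0 <= c <= 9 for c in cells) / len(cells)
        if cells else 0.0
    )
    return (height + width + type_score) / 3
```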
### 3. Pixel Art Quality
Evaluates structural richness and completeness of the generated grid.
This is also a graded score in [0, 1], combining:

- **Color diversity**: number of unique integer values, scaled with a soft cap at ~8 colors
- **Grid density**: proportion of valid filled cells in the 24×24 grid
Final score: `Quality = 0.7 × diversity + 0.3 × density`
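A sketch of this quality score, using the 0.7/0.3 weights from the card (the diversity cap and density denominator are assumptions based on the description above):

```python
def quality_score(grid) -> float:
    """Quality = 0.7 * diversity + 0.3 * density (weights from the card; details assumed)."""
    valid = [
        c for row in grid if isinstance(row, list)
        for c in row if isinstance(c, int) and 0 <= c <= 9
    ]
    if not valid:
        return 0.0
    # Diversity: unique values, soft-capped at 8 colors.
    diversity = min(len(set(valid)) / 8.0, 1.0)
    # Density: valid cells relative to the full 24x24 grid.
    density = min(len(valid) / (24 * 24), 1.0)
    return 0.7 * diversity + 0.3 * density
```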
### Aggregate Score
The final benchmark score is computed as:
`Score = 0.4 × JSON Validity + 0.3 × Render Success + 0.3 × Pixel Art Quality`
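Given the three component scores, the aggregation is a straightforward weighted sum:

```python
def aggregate_score(json_validity: float, render: float, quality: float) -> float:
    """Weighted sum with the 0.4 / 0.3 / 0.3 weights stated in the card."""
    return 0.4 * json_validity + 0.3 * render + 0.3 * quality
```

A model that always emits valid JSON but produces unusable grids would therefore score at most 0.4.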