SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models
12 physical attributes, 5 perceptual dimensions, 2 task types - dissecting the physical perception bottleneck of Large Audio Language Models.
Benchmark • Directory Layout • JSON Format • Probe Splits • Paper
TL;DR. SonicBench is a psychophysically grounded benchmark that probes physical audio perception rather than semantics:
12 core attributes (grouped into 5 perceptual dimensions) × 2 paradigms (recognition vs. comparison) × 100 items = 2,400 question-audio pairs.
Despite strong performance on semantic and paralinguistic tasks, most LALMs perform near chance and fail to show the expected human-like advantage on comparison tasks.
Explicit chain-of-thought reasoning brings only marginal gains, while linear probes on frozen encoders reach ≥60% accuracy, revealing that the main bottleneck lies in alignment and decoding.
1. Benchmark Overview
SonicBench targets physical perception, i.e., the ability to interpret intrinsic properties of audio signals that underlie any higher-level reasoning.
It covers 12 core attributes grouped into 5 perceptual dimensions:
- Spectral & Amplitude: pitch, brightness, loudness, velocity
- Temporal: duration, tempo
- Spatial & Environment: direction, distance, reverberation
- Timbre: timbre, texture
- Scene-Level: counting
For each attribute, SonicBench defines two complementary psychophysical paradigms:
Recognition (absolute judgment)
- Input: a single 4-second audio clip.
- Task: make an absolute decision between two physical categories (e.g., "bright" vs. "dark", "short" vs. "long", "near" vs. "far").
- Output: a binary choice, "A" or "B".
Comparison (relative judgment)
- Input: two 4-second clips concatenated with 0.5 seconds of silence in between (≈ 8.5 seconds in total).
- Task: make a relative judgment of which clip has a larger value along a given attribute (e.g., which is brighter / louder / faster / closer).
- Output: "A" for the first segment, "B" for the second.
This yields in total:
- 12 attributes × 2 task types × 100 items = 2,400 question-audio pairs
This design turns a broad space of non-linguistic, low-level skills into a structured, attribute-wise benchmark, and the comparison paradigm explicitly probes relational reasoning, where human listeners are typically more proficient than in absolute estimation.
2. Directory Layout
On this Hugging Face dataset, the directory structure is:
.
├── brightness/
│ ├── task_recog/
│ │ ├── brightness_single_000.wav
│ │ ├── brightness_single_001.wav
│ │ └── ...
│ └── task_comparison/
│ ├── brightness_pair_000.wav
│ ├── brightness_pair_001.wav
│ └── ...
├── counting/
│ ├── task_recog/
│ └── task_comparison/
├── direction/
│ ├── task_recog/
│ └── task_comparison/
├── ...
├── json/
│ ├── brightness_recog.json
│ ├── brightness_comparison.json
│ ├── counting_recog.json
│ ├── counting_comparison.json
│ ├── ...
└── probe_json/
├── brightness_recog/
│ ├── train.json
│ └── eval.json
├── brightness_comparison/
│ ├── train.json
│ └── eval.json
├── counting_recog/
│ ├── train.json
│ └── eval.json
├── counting_comparison/
│ ├── train.json
│ └── eval.json
├── ...
- Each attribute has its own folder under the root, containing WAV files for task_recog and task_comparison.
- json/ contains the canonical evaluation JSONs (2,400 QA pairs in total).
- probe_json/ exposes train/eval splits for probing experiments (see Section 4 in our paper).
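For convenience, the sketch below shows one way to fetch the dataset so that a local folder mirrors this layout. It is not part of the official tooling, and the repo id is a placeholder for this dataset's actual Hugging Face identifier.
```python
from huggingface_hub import snapshot_download

# Placeholder repo id; substitute this dataset's actual identifier on the Hub.
local_root = snapshot_download(repo_id="<org>/SonicBench", repo_type="dataset")
print(local_root)  # contains brightness/, counting/, ..., json/, probe_json/
```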
3. JSON Format
All main benchmark files in json/ follow a unified conversational format.
Each JSON file is a list of items; each item has at least the following keys:
voice: a list of audio paths (relative to the dataset root). For SonicBench, there is currently one path per item, e.g.:
- Recognition: "brightness/task_recog/brightness_single_000.wav"
- Comparison: "brightness/task_comparison/brightness_pair_000.wav"
conversations: a list of message turns, following a simple chat-style schema:
- from: "human" or "gpt"
- value: the text content
A typical example from brightness_comparison.json:
{
"voice": [
"brightness/task_comparison/brightness_pair_000.wav"
],
"conversations": [
{
"from": "human",
"value": "The audio includes two segments with a 0.5-second silent interval. Which is brighter? Only answer letter 'A' (refers to the first clip) or 'B' (refers to the second clip). Do not add any explanation, punctuation, or extra text. <audio>"
},
{
"from": "gpt",
"value": "B"
}
]
}
- The first turn (from: "human") gives the full instruction and contains an <audio> placeholder.
- The second turn (from: "gpt") contains the ground-truth answer, which is always a single letter "A" or "B" with no explanation or punctuation.
In summary, to evaluate a model, you typically:
- Play or feed the audio in voice[0] into your model.
- Give the model the value of the "human" turn as the textual prompt.
- Compare the model's final answer with the value of the "gpt" turn (letter "A"/"B"), as in the sketch below.
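A minimal evaluation loop following these three steps might look like this; the `model.answer(audio_path, prompt)` call is a hypothetical wrapper around your own model, and everything else follows the JSON schema above.
```python
import json
from pathlib import Path

def evaluate(json_path, dataset_root, model):
    """Accuracy of `model` on one SonicBench JSON file."""
    items = json.loads(Path(json_path).read_text())
    correct = 0
    for item in items:
        audio_path = Path(dataset_root) / item["voice"][0]    # one audio file per item
        prompt = item["conversations"][0]["value"]            # "human" turn: full instruction
        gold = item["conversations"][1]["value"]              # "gpt" turn: "A" or "B"
        pred = model.answer(str(audio_path), prompt).strip().upper()  # hypothetical wrapper
        correct += int(pred == gold)
    return correct / len(items)

# Example:
# acc = evaluate("json/brightness_comparison.json", "/path/to/SonicBench", my_model)
```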
4. Probe JSON (Train/Eval Splits)
The probe_json/ directory contains train/eval splits derived from the same underlying items, designed for:
- Linear probes on frozen audio encoders
- Small classifier training
- Attribute-wise analysis without touching the main test set
For each attribute × task type (e.g., brightness_recog, distance_comparison), there is a folder:
probe_json/brightness_recog/train.json
probe_json/brightness_recog/eval.json
probe_json/brightness_comparison/train.json
probe_json/brightness_comparison/eval.json
probe_json/velocity_recog/train.json
probe_json/velocity_recog/eval.json
probe_json/velocity_comparison/train.json
probe_json/velocity_comparison/eval.json
...
Each train.json / eval.json is again a list of items with the same schema.
These splits correspond to a fixed random partition (seed 42). In our experiments, we train simple linear probes on train.json and evaluate on eval.json, while keeping the encoder frozen.
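As a rough illustration (not the exact training code from the paper), a linear probe over these splits could look like the sketch below; `embed` is a hypothetical helper that returns a fixed-size embedding from whatever frozen audio encoder you are probing.
```python
import json
from pathlib import Path

import numpy as np
from sklearn.linear_model import LogisticRegression

def load_split(json_path, root, embed):
    """Embed every clip in a probe split and collect its 'A'/'B' label."""
    items = json.loads(Path(json_path).read_text())
    X = np.stack([embed(str(Path(root) / it["voice"][0])) for it in items])
    y = np.array([it["conversations"][1]["value"] for it in items])
    return X, y

def run_probe(split_dir, root, embed):
    X_tr, y_tr = load_split(f"{split_dir}/train.json", root, embed)
    X_ev, y_ev = load_split(f"{split_dir}/eval.json", root, embed)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # encoder itself stays frozen
    return probe.score(X_ev, y_ev)

# Example:
# acc = run_probe("probe_json/brightness_recog", "/path/to/SonicBench", embed)
```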
5. What We Found with SonicBench
Using SonicBench, we evaluate 36 systems across three families:
- LALMs – Large Audio(-Language) Models built by aligning pre-trained audio encoders with LLMs
- LARMs – audio-specific reasoning models
- OLMs – omni-modal models that include an audio interface
SonicBench uncovers several consistent patterns:
1. Fundamental physical perception is weak.
Despite strong performance on semantic and paralinguistic benchmarks, most models perform near random guessing (~50%) on many SonicBench tasks. Even the best model in our study (Qwen3-Omni) reaches only about 72% accuracy, far below human performance (~91%). This indicates that current systems often lack reliable physical grounding, even when their high-level behavior appears competent.
2. No human-like advantage on comparison tasks.
In human psychophysics, relative comparison is often easier than absolute judgment. In contrast, LALMs and related systems show no systematic advantage on comparison tasks; for several attributes, comparison accuracy is even lower than recognition accuracy. This suggests that current models struggle with relational reasoning over physical attributes.
3. Inference-time reasoning brings limited gains.
We experiment with explicit reasoning and inference-time scaling (longer chain-of-thought, more deliberation). The improvements on SonicBench are marginal, indicating that simply adding reasoning tokens cannot compensate for missing or poorly used physical representations.
4. Encoders perceive more than the full model can use.
When we freeze audio encoders and train simple linear probes on the probe_json splits, these probes consistently achieve ≥60% accuracy across attributes and, in several cases, outperform the full end-to-end models. This shows that the physical cues are already present in the encoder representations; the primary bottleneck lies in alignment and decoding: the projector and language layers fail to faithfully leverage the sensory information they receive.
6. Intended Uses
SonicBench is designed primarily as an evaluation and analysis benchmark for physical audio perception. Typical use cases include:
- Benchmarking physical grounding: evaluate LALMs, LARMs, and OLMs on their ability to perceive core physical attributes.
- Attribute-wise and dimension-wise diagnostics: use the 12 attributes and 5 perceptual dimensions to pinpoint which aspects (e.g., spectral vs. spatial vs. scene-level) a model handles well or fails on.
- Studying recognition vs. comparison behavior: compare model performance across absolute (recognition) and relative (comparison) paradigms to analyze relational reasoning over acoustic signals.
- Encoder probing and architecture analysis: use the probe_json train/eval splits to attach simple probes to audio encoders, isolating where information is lost along the encoder-projector-LLM pipeline.
We recommend treating all files in json/ as held-out test sets. For training probes or auxiliary models, please use the splits provided under probe_json/.
7. Citation
If you use SonicBench in your work, please cite:
xxx
8. Contact
Email: [email protected], [email protected], [email protected]
Organization: EIT-NLP Lab