# NES Surrogate Dataset

## Overview
This dataset contains trained neural architectures, their predictions, and validation performance, designed for studying:
- surrogate modeling of neural architectures
- diversity estimation between models
- ensemble construction strategies
Each architecture is associated with:
- its structure (DARTS-like cell)
- model weights
- validation predictions
- validation accuracy
## Dataset Structure

```
CIFAR10/
CIFAR100/
FashionMNIST/
```

Each dataset directory contains two subdirectories: `architectures/` and `weights/`.

### `architectures/`
Each JSON file contains:
- architecture definition (DARTS-like DAG)
- validation predictions
- validation accuracy
Predictions are computed on a shared validation split, enabling construction of pairwise similarity matrices.
### `weights/`
Contains trained model weights corresponding to each architecture in safetensors format.
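
A minimal loading sketch using the `safetensors` library; the filename below is hypothetical, and the actual naming scheme under `weights/` may differ:

```python
from safetensors.torch import load_file

# Hypothetical filename; actual names under weights/ may differ.
state_dict = load_file("CIFAR10/weights/arch_0000.safetensors")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```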
## Example Entry

```json
{
  "architecture": {
    "normal/op_2_0": "sep_conv_3x3",
    "normal/input_2_0": [1],
    "normal/op_2_1": "sep_conv_5x5",
    "normal/input_2_1": [0]
  },
  "valid_predictions": [6, 2, 5, 6, 3],
  "valid_accuracy": 0.76
}
```
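
A minimal sketch of reading one such entry; the path is hypothetical, and the field names follow the example above:

```python
import json

# Hypothetical path; actual filenames under architectures/ may differ.
with open("CIFAR10/architectures/arch_0000.json") as f:
    entry = json.load(f)

print(entry["valid_accuracy"])          # e.g. 0.76
print(len(entry["valid_predictions"]))  # one predicted label per validation example
print(entry["architecture"])            # DARTS-like op/input mapping
```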
## Dataset Size
- ~3,000 models per dataset
- Total: architectures, predictions, and weights for three benchmarks
## Data Split

For each dataset, the original training set is split into:
- a 20% training subset
- an 80% validation subset

The split is performed using:
- a fixed random seed (42)
- `torch.utils.data.Subset`
The validation subset is used to:
- compute model accuracy
- generate prediction vectors for diversity estimation
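
A sketch of how such a split could be reproduced. The card only states the seed, the 20/80 ratio, and the use of `Subset`; the seeded-permutation index generation below is an assumption:

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())

# Assumption: indices come from a permutation seeded with 42; the exact
# index-generation procedure is not specified on this card.
g = torch.Generator().manual_seed(42)
perm = torch.randperm(len(full_train), generator=g).tolist()

n_train = int(0.2 * len(full_train))
train_subset = Subset(full_train, perm[:n_train])   # 20% training subset
valid_subset = Subset(full_train, perm[n_train:])   # 80% shared validation subset
```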
## Training Setup
Architectures are trained under a reduced configuration to limit computational cost:
- optimizer: SGD
- learning rate: cosine schedule from 0.025 → 1e-3
- weight decay: 3e-4
- batch size: 96
- auxiliary loss weight: 0.4
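
A minimal PyTorch sketch of this configuration. Here `model` (assumed to return main and auxiliary logits, DARTS-style) and `train_loader` are supplied by the user, and the momentum value is not stated on this card:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
epochs = 200  # 125 for FashionMNIST; see the table below

# momentum is not stated on this card; 0.9 is a common default and assumed here
optimizer = torch.optim.SGD(model.parameters(), lr=0.025,
                            momentum=0.9, weight_decay=3e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=1e-3)

for epoch in range(epochs):
    for x, y in train_loader:  # batches of 96
        optimizer.zero_grad()
        logits, aux_logits = model(x)  # auxiliary head assumed
        loss = criterion(logits, y) + 0.4 * criterion(aux_logits, y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```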
## Training Statistics
| Dataset | Num. Cells | Initial Width | Num. Epochs | Avg. Accuracy (%) | Avg. Top-1 Agreement |
|---|---|---|---|---|---|
| FashionMNIST | 3 | 16 | 125 | 89.6 ± 0.5 | 0.900 ± 0.004 |
| CIFAR-10 | 8 | 16 | 200 | 75.8 ± 0.6 | 0.693 ± 0.006 |
| CIFAR-100 | 8 | 16 | 200 | 37.6 ± 1.1 | 0.324 ± 0.008 |
Note: Models are not trained to full convergence. They are trained for a fixed number of epochs sufficient to obtain reliable relative performance estimates.
## Key Properties
- DARTS-like architecture search space
- Graph-based representation (DAGs)
- Aligned predictions across models
- Supports diversity estimation via prediction similarity
- Suitable for surrogate-based ranking and selection
## Intended Use
This dataset enables:
- training accuracy surrogate models
- learning diversity embeddings (e.g., via triplet loss; see the sketch after this list)
- constructing similarity matrices between models
- analyzing relationships between architecture and predictions
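
As a sketch of the triplet-loss use case above, with a hypothetical embedding network over fixed-size architecture features (the 64-dim input and 32-dim output are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Hypothetical setup: 64-dim architecture features embedded into a 32-dim
# diversity space; positives would be models with similar predictions.
embed = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
triplet = nn.TripletMarginLoss(margin=1.0)

anchor, positive, negative = (torch.randn(8, 64) for _ in range(3))
loss = triplet(embed(anchor), embed(positive), embed(negative))
loss.backward()
```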
Example similarity metric (top-1 agreement between the prediction vectors of models `i` and `j`):

```python
similarity = (y_i == y_j).mean()
```
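
Stacking the prediction vectors of all models gives the full pairwise similarity matrix; a self-contained sketch with toy labels:

```python
import numpy as np

# preds: (num_models, num_valid_examples) predicted labels, stacked from
# each entry's "valid_predictions" field (toy values here).
preds = np.array([
    [6, 2, 5, 6, 3],
    [6, 2, 4, 6, 3],
    [1, 2, 5, 0, 3],
])

# Pairwise top-1 agreement between every pair of models.
similarity = (preds[:, None, :] == preds[None, :, :]).mean(axis=-1)
print(similarity)  # symmetric (3, 3) matrix with ones on the diagonal
```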
## Relation to Paper
This dataset accompanies the paper:
"Surrogate Assisted Diversity Estimation in Neural Ensemble Search"
It is used to:
- train surrogate models for accuracy and diversity
- guide ensemble construction
- study scaling behavior with respect to dataset size
## Notes
- Similarity between models is computed from prediction agreement
- The dataset is designed for relative ranking, not absolute accuracy
- Can be used with alternative diversity metrics (e.g., correlation, divergence)