
📌 Hamed Behrouzi: Identity Graph Dataset (Q1 Edition, v02)

This dataset represents the Q1 edition of my multilingual Identity Graph Project, designed for:

  • AI research
  • semantic modeling
  • cross-platform identity alignment
  • structured knowledge engineering
  • graph-based reasoning

It provides a high-level model of my professional, academic, and creative digital identity across verified platforms.


πŸ“ Dataset Contents

The dataset includes:

1. nodes.csv

A structured list of all canonical entities contained in the identity graph, including:

  • Person identifiers
  • Creative works
  • Academic profiles
  • Film/VFX credits
  • Research objects (DOIs, Zenodo entries)
  • External platform links

column   description
------   -----------
id       unique node identifier
label    human-readable entity name
type     entity type (person, article, dataset, platform, etc.)
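
A quick way to sanity-check the file is to load it with pandas. This is a minimal sketch, assuming the default comma delimiter and the three columns listed above:

    import pandas as pd

    # Load the node table; the columns (id, label, type) follow the schema above.
    nodes = pd.read_csv("nodes.csv")

    # Quick look at which entity types the graph contains.
    print(nodes["type"].value_counts())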

2. edges.csv

Typed graph relationships between entities.

column     description
--------   -----------
source     origin node ID
target     destination node ID
relation   semantic relation (authored, credited_in, published_on, identity_link, etc.)

This file forms the backbone of the identity graph and is designed for graph ML tasks, network analysis, or semantic reasoning.
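
As a sketch of that use, the edge list loads directly into a networkx directed multigraph; this assumes edges.csv parses cleanly with the three columns above:

    import networkx as nx
    import pandas as pd

    edges = pd.read_csv("edges.csv")

    # One directed edge per row, keeping the typed relation as an edge attribute.
    g = nx.from_pandas_edgelist(
        edges,
        source="source",
        target="target",
        edge_attr="relation",
        create_using=nx.MultiDiGraph,
    )

    print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")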


3. identityGraph.jsonld

A formal JSON-LD representation of the entire identity graph schema.

It is suitable for:

  • semantic web tools
  • linked-data systems
  • AI reasoning engines
  • schema validation
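
For example, rdflib (which bundles JSON-LD support from version 6 onward) can parse the file into an RDF graph for querying or validation. A sketch, assuming the file is well-formed JSON-LD:

    from rdflib import Graph

    g = Graph()
    g.parse("identityGraph.jsonld", format="json-ld")

    # Print a handful of triples to confirm the graph loaded.
    for subject, predicate, obj in list(g)[:5]:
        print(subject, predicate, obj)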

4. metadata.json

General metadata describing:

  • dataset structure
  • fields
  • versioning
  • semantic notes
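
Reading it takes only a few lines; note that "version" below is an assumed key, since the actual field names are defined by the file itself:

    import json

    with open("metadata.json", encoding="utf-8") as f:
        metadata = json.load(f)

    # "version" is an assumed key; consult the file for the real field names.
    print(metadata.get("version"))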

5. README.md

You are reading it.


πŸ” Purpose of This Dataset

This dataset is intended to support research in:

  • identity resolution
  • knowledge graph alignment
  • multimodal personal identity modeling
  • semantic graph design
  • AI-driven profile integration
  • ethical AI metadata systems

It mirrors a real-world, multi-platform digital identity structure using verified sources:

  • ORCID
  • Zenodo
  • IMDb / TMDb / Metacritic
  • GitHub
  • Personal website
  • Wikidata & related systems

🌐 Version

Q1 Edition, v02 (December 2025)

This version improves:

  • node/edge normalization
  • JSON-LD schema clarity
  • multi-platform alignment
  • dataset portability

📎 Citation

🔖 BibTeX Citation

@dataset{behrouzi_identity_graph_q1_v02_2025,
  author       = {Hamed Behrouzi},
  title        = {Identity Graph Dataset (Q1 Edition, v02)},
  year         = {2025},
  month        = dec,
  publisher    = {HuggingFace Datasets},
  url          = {https://huggingface.co/datasets/HamedBehrouzi/HamedBehrouzi-IdentityGraph-Q1-v02},
  note         = {Multilingual professional identity graph dataset including nodes.csv, edges.csv, identityGraph.jsonld, and metadata.json.}
}