# MetaCog-Bench: A Process-Based Benchmark for Evaluating Metacognitive Monitoring and Control in LLMs

## Overview

MetaCog-Bench is a benchmark of 147 tasks across 5 tiers for evaluating metacognitive monitoring and control in large language models, grounded in the Nelson & Narens (1990) metacognition framework.

All evaluation is fully deterministic (regex, keyword matching, JSON verification, ECE computation), with zero LLM-as-judge bias.
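The ECE component is Expected Calibration Error, a standard calibration metric: predictions are grouped into confidence bins, and each bin's gap between mean stated confidence and empirical accuracy is averaged, weighted by bin size. A minimal sketch, assuming the usual equal-width binning (the card does not state the benchmark's exact bin count or scheme):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE sketch: confidences are probabilities in [0, 1], correct are 0/1 outcomes.
    Equal-width binning is an assumption; the benchmark's scheme may differ."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # The last bin is closed on the right so a confidence of 1.0 is counted.
        in_bin = [i for i, c in enumerate(confidences)
                  if lo <= c < hi or (b == n_bins - 1 and c == 1.0)]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        # Weight each bin's calibration gap by its share of predictions.
        ece += len(in_bin) / n * abs(avg_conf - accuracy)
    return ece

print(expected_calibration_error([0.9, 0.6, 0.3], [1, 1, 0]))  # toy example
```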
## Three Metacognitive Dimensions
| Dimension | Description | Tiers |
|---|---|---|
| MS (Metacognitive Sensitivity) | Confidence calibration & knowledge boundary awareness | Tier 1a (Betting), Tier 1b (Predict-Then-Perform) |
| SAF (Strategy Adaptation Frequency) | Resistance to ecological validity traps & sycophantic pressure | Tier 2a (Ecological Validity), Tier 2b (Sycophancy Resistance) |
| CDTC (Cross-Domain Transfer Coefficient) | Applying formal reasoning to naturalistic problem framings | Tier 3 (Domain Transfer) |
## Task Counts
| Tier | Tasks |
|---|---|
| 1a — Betting Calibration | 33 |
| 1b — Predict-Then-Perform | 34 |
| 2a — Ecological Validity | 30 |
| 2b — Sycophancy Resistance | 30 |
| 3 — Domain Transfer | 20 |
| Total | 147 |
## Leaderboard (7 Models, 3 Runs Each)
| Rank | Model | Overall (mean +/- std) | MS | SAF | CDTC |
|---|---|---|---|---|---|
| 1 | Grok-3-mini-fast | 0.864 +/- 0.009 | 0.743 | 1.000 | 0.850 |
| 2 | DeepSeek-V3 | 0.859 +/- 0.007 | 0.743 | 1.000 | 0.833 |
| 3 | Gemini 2.5 Flash | 0.812 +/- 0.006 | 0.602 | 0.983 | 0.850 |
| 4 | Claude Sonnet 4 | 0.812 +/- 0.008 | 0.614 | 0.972 | 0.850 |
| 5 | Mistral Large | 0.768 +/- 0.006 | 0.664 | 0.989 | 0.650 |
| 6 | GPT-4o | 0.742 +/- 0.020 | 0.662 | 0.917 | 0.650 |
| 7 | Open-Mistral-Nemo (12B) | 0.710 +/- 0.026 | 0.557 | 0.956 | 0.617 |
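The overall means above are consistent with an unweighted average of the three dimension scores (MS, SAF, CDTC), within rounding. A quick illustrative check, not the benchmark's official aggregation code:

```python
# Each tuple: (MS, SAF, CDTC, reported overall mean) from the table above.
leaderboard = {
    "Grok-3-mini-fast": (0.743, 1.000, 0.850, 0.864),
    "DeepSeek-V3": (0.743, 1.000, 0.833, 0.859),
    "Gemini 2.5 Flash": (0.602, 0.983, 0.850, 0.812),
    "Claude Sonnet 4": (0.614, 0.972, 0.850, 0.812),
    "Mistral Large": (0.664, 0.989, 0.650, 0.768),
    "GPT-4o": (0.662, 0.917, 0.650, 0.742),
    "Open-Mistral-Nemo (12B)": (0.557, 0.956, 0.617, 0.710),
}
for model, (ms, saf, cdtc, overall) in leaderboard.items():
    # Unweighted mean of the three dimensions matches the reported overall.
    assert abs((ms + saf + cdtc) / 3 - overall) < 0.002, model
```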
## Files

- `tasks_v5.json` — All 147 tasks with prompts, answers, and evaluation criteria
- `results/` — Aggregated results (mean +/- std) for all 7 models across 3 runs
## Usage

```python
from datasets import load_dataset

ds = load_dataset("ogkranthi/metacog-bench")
```
Or load the raw JSON:
```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="ogkranthi/metacog-bench", filename="tasks_v5.json", repo_type="dataset")
with open(path) as f:
    tasks = json.load(f)
```
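The file groups tasks under a top-level `tiers` object, one entry per tier, each carrying `name`, `dimension`, `n_tasks`, and a `tasks` list whose per-task fields differ by tier (e.g. tier 1a tasks have `correct_answer` and `category`, while tier 1b tasks have `answer`, `difficulty`, and `check_keywords`). A short sketch of walking that structure, treated as illustrative:

```python
# Iterate tiers and tasks; read the answer field defensively since its
# name varies by tier (correct_answer vs. answer).
for tier_key, tier in tasks["tiers"].items():
    print(f"{tier_key}: {tier['name']} ({tier['dimension']}, {tier['n_tasks']} tasks)")
    for task in tier["tasks"]:
        answer = task.get("correct_answer") or task.get("answer")
        print(f"  {task['id']}: {task['question'][:60]} -> {answer}")
```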
## Citation

```bibtex
@inproceedings{metacogbench2026,
  title={MetaCog-Bench: A Process-Based Benchmark for Evaluating Metacognitive Monitoring and Control in Large Language Models},
  author={Anonymous},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS) Evaluations and Datasets Track},
  year={2026}
}
```