## Benchmark Documentation

### Core
- benchmark_structure.md
- benchmark_matrix.md
- datasets.md

### Evaluation
- evaluation_framework.md
- transfer_matrix.md
- clarus_score.md

### Robustness
- missing_data_protocol.md
- imbalance_protocol.md
- robustness_suite.md

### Theory
- stability_manifold.md
- stability_topology.md
- stability_mechanisms.md

### Results
- baseline_results.md
- leaderboard.md
# Clarus Clinical Stability Benchmark
The Clarus Clinical Stability Benchmark evaluates whether machine learning models can detect latent instability in complex clinical systems.
Most tabular benchmarks reward models for learning correlations within a single dataset.
The Clarus benchmark instead evaluates whether models can infer instability from interacting proxy signals across multiple physiological and operational regimes.
Each dataset represents a simplified regime in which instability emerges from multi-variable interaction rather than single-variable thresholds.
## Benchmark Concept
In real clinical systems, deterioration rarely occurs because one measurement crosses a threshold.
Instead, instability emerges when several components drift simultaneously.
Examples include:
- circulatory compensation failure
- microvascular perfusion loss
- metabolic energy collapse
- respiratory control failure
- endocrine dysregulation
- thermoregulatory breakdown
- coagulation instability
- hospital operational overload
Each dataset exposes a different regime while keeping the underlying structure similar:
instability arises from interacting system signals.
The generative rules that determine the labels are intentionally not published.
Models must infer instability from observable proxies.
## Included Datasets
| Stability Regime | Dataset |
|---|---|
| Hemodynamic collapse | ClarusC64/clinical-hemodynamic-collapse-v0.1 |
| Sepsis trajectory instability | ClarusC64/clinical-sepsis-trajectory-instability-v0.1 |
| Intervention delay failure | ClarusC64/clinical-intervention-delay-failure-v0.1 |
| Organ coupling cascade | ClarusC64/clinical-organ-coupling-cascade-v0.1 |
| Recovery window detection | ClarusC64/clinical-recovery-window-detection-v0.1 |
| Ventilation–perfusion instability | ClarusC64/clinical-ventilation-perfusion-instability-v0.1 |
| Hemorrhage compensation collapse | ClarusC64/clinical-hemorrhage-compensation-collapse-v0.1 |
| Electrolyte instability | ClarusC64/clinical-electrolyte-instability-v0.1 |
| Microcirculation instability | ClarusC64/clinical-microcirculation-instability-v0.1 |
| Endocrine instability | ClarusC64/clinical-endocrine-instability-v0.1 |
| Thermoregulation instability | ClarusC64/clinical-thermoregulation-instability-v0.1 |
| Cellular energy instability | ClarusC64/clinical-cellular-energy-instability-v0.1 |
| Respiratory drive instability | ClarusC64/clinical-respiratory-drive-instability-v0.1 |
| Coagulation instability | ClarusC64/clinical-coagulation-instability-v0.1 |
| Hospital operational collapse | ClarusC64/clinical-hospital-operational-collapse-v0.1 |
Each dataset repository contains:

- `data/train.csv`
- `data/test.csv`
- `scorer.py`
- `README.md`
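These files can be fetched directly from the Hub, for example with the `huggingface_hub` client (a sketch, not part of the official tooling; any repo id from the table above works):

```python
# Sketch: download one dataset's files locally via huggingface_hub
# (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

repo = "ClarusC64/clinical-hemodynamic-collapse-v0.1"
for fname in ["data/train.csv", "data/test.csv", "scorer.py"]:
    local_path = hf_hub_download(repo_id=repo, filename=fname, repo_type="dataset")
    print(local_path)
```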
## Evaluation Protocol
Predictions must follow the format:

```
scenario_id,prediction
```

Example:

```
MC101,0
MC102,1
```
Evaluation is performed using the scorer located in the dataset repository.
Example:

```bash
python scorer.py --predictions predictions.csv --truth data/test.csv --output metrics.json
```

The `--truth` path refers to the dataset's local `data/test.csv` file.
Metrics reported include:
- accuracy
- precision
- recall
- f1
- confusion matrix
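The scorer in each repository is the reference implementation. For a quick local sanity check, approximately equivalent metrics can be computed with scikit-learn; this sketch assumes predictions are joined to the local `data/test.csv` (which retains its `label` column) on `scenario_id`:

```python
import pandas as pd
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)

# Join predictions to ground truth on scenario_id so row order cannot matter.
truth = pd.read_csv("data/test.csv")
pred = pd.read_csv("predictions.csv")
merged = truth.merge(pred, on="scenario_id")

y_true, y_pred = merged["label"], merged["prediction"]
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))
```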
## Benchmark Tasks
The benchmark supports three evaluation settings.
### 1. Single-Dataset Evaluation
Train and test on the same dataset.
Purpose:
Measure baseline performance within a single stability regime.
### 2. Cross-Regime Transfer
Train on one regime and test on another.
Example:

- Train → `clinical-hemodynamic-collapse-v0.1`
- Test → `clinical-microcirculation-instability-v0.1`
Purpose:
Determine whether models learn general stability reasoning rather than dataset-specific correlations.
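A minimal sketch of such a transfer run, assuming both repositories are downloaded into local directories named after the datasets and that the two regimes expose a compatible feature schema:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Train on one regime...
train = pd.read_csv("clinical-hemodynamic-collapse-v0.1/data/train.csv")
model = LogisticRegression(max_iter=1000)
model.fit(train.drop(columns=["scenario_id", "label"]), train["label"])

# ...and write predictions for another.
test = pd.read_csv("clinical-microcirculation-instability-v0.1/data/test.csv")
pred = model.predict(test.drop(columns=["scenario_id", "label"]))
pd.DataFrame({"scenario_id": test["scenario_id"], "prediction": pred}).to_csv(
    "predictions.csv", index=False
)
```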
### 3. Multi-Regime Training
Train on multiple datasets simultaneously.
Evaluate performance across all regimes.
Purpose:
Test whether models can learn shared stability representations across physiological systems.
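A sketch of one way to pool training splits, assuming the selected regimes share a common feature schema (if they do not, columns must first be aligned):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

regimes = [
    "clinical-hemodynamic-collapse-v0.1",
    "clinical-electrolyte-instability-v0.1",
    "clinical-coagulation-instability-v0.1",
]

# Concatenate the training splits of several regimes into one table.
train = pd.concat(
    [pd.read_csv(f"{r}/data/train.csv") for r in regimes], ignore_index=True
)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(train.drop(columns=["scenario_id", "label"]), train["label"])
```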
## Dataset Design Principles
The Clarus datasets follow several explicit design rules.
### No Single-Feature Dominance
No observable variable strongly predicts the label independently.
Target: `|correlation| < 0.30` between any single feature and the label.
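This target can be spot-checked on any training split. A minimal sketch using Pearson correlation, assuming the `scenario_id`/`label` column layout shown in the Quick Start:

```python
import pandas as pd

train = pd.read_csv("data/train.csv")

# Absolute Pearson correlation of each feature with the binary label.
corr = (
    train.drop(columns=["scenario_id", "label"])
    .corrwith(train["label"])
    .abs()
    .sort_values(ascending=False)
)
print(corr.head())
assert corr.max() < 0.30, "a single feature dominates the label"
```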
### Interaction-Based Labels
Instability emerges from interactions between multiple variables rather than isolated thresholds.
### Adversarial Symmetry
Rows with nearly identical values may produce opposite labels.
This prevents trivial heuristics.
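One way to see this property is a nearest-neighbour probe over a training split. A sketch for illustration only; the distance cutoff below is arbitrary and not part of the benchmark:

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

train = pd.read_csv("data/train.csv")
X = train.drop(columns=["scenario_id", "label"])
y = train["label"].to_numpy()

# For every row, find the closest *other* row (column 0 is the row itself).
dist, idx = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(X)
near = dist[:, 1] < 0.05 * dist[:, 1].mean()  # arbitrary "nearly identical" cutoff
flipped = y != y[idx[:, 1]]
print("near-identical pairs with opposite labels:", int((near & flipped).sum()))
```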
### Decoy Variables
Some variables appear meaningful but do not determine the label independently.
### Hidden Generative Logic
The dataset generator and rule equations are intentionally not published.
Models must infer instability from proxy signals.
## Baseline Results
Reference baseline experiments are provided in `baseline_results.md`.
These establish approximate difficulty levels for common tabular models.
## Benchmark Architecture
The benchmark can be interpreted as observing a shared stability manifold through different clinical regimes.
Each dataset exposes a different control system while preserving the underlying concept of instability emerging from interacting signals.
Additional details are provided in `stability_manifold.md`.
## Research Applications
The benchmark supports research into:
- system stability reasoning
- interaction-based tabular learning
- cross-domain generalization
- clinical early warning modeling
- infrastructure and system risk detection
## Quick Start
This example demonstrates how to evaluate a simple model on one Clarus dataset.
### 1. Install dependencies

Example environment:

```bash
pip install pandas scikit-learn
```
### 2. Load the dataset

Each dataset ships two splits:

- train: `data/train.csv`
- test: `data/test.csv`
### 3. Train a simple baseline model

Example using logistic regression:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("data/train.csv")

# Separate the features from the identifier and label columns.
X = train.drop(columns=["scenario_id", "label"])
y = train["label"]

model = LogisticRegression()
model.fit(X, y)
```
### 4. Generate predictions

```python
test = pd.read_csv("data/test.csv")

# The test split keeps its label column for the scorer; drop it before predicting.
X_test = test.drop(columns=["scenario_id", "label"])
pred = model.predict(X_test)

out = pd.DataFrame({"scenario_id": test["scenario_id"], "prediction": pred})
out.to_csv("predictions.csv", index=False)
```
### 5. Evaluate predictions

Run the official scorer:

```bash
python scorer.py --predictions predictions.csv --truth data/test.csv
```
The scorer returns:
- accuracy
- precision
- recall
- f1
- confusion matrix
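When the scorer is invoked with `--output metrics.json` (see Evaluation Protocol), the results can be consumed programmatically. A sketch; the exact key names are defined by each dataset's `scorer.py` and are assumed here:

```python
import json

# Key names below are assumptions; check the scorer.py of the dataset you use.
with open("metrics.json") as f:
    metrics = json.load(f)
print(metrics.get("accuracy"), metrics.get("f1"))
```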
## License
MIT