# SPoRC: the Structured Podcast Open Research Corpus (v1.0)
SPoRC is a large multimodal dataset for studying the podcast ecosystem. It contains metadata, full transcripts, speaker-turn-level diarization, speaker-role labels, and acoustic features for over 1.1 million podcast episodes across 228,000 podcasts.
Paper: Mapping the Podcast Ecosystem with the Structured Podcast Research Corpus (ACL 2025)
## Dataset Summary
| Statistic | Count |
|---|---|
| Podcasts | 228,099 |
| Episodes | 1,124,058 |
| Speaker turns | ~100M |
| Speaker name index entries | 921,287 |
| Apple Podcast categories covered | 20 main + subcategories |
| Languages | 60+ (primarily English) |
## Data Format

The dataset is stored as Parquet files with zstd compression, organized into a partitioned layout for efficient access. A `manifest.json` file at the root describes the layout and record counts.
### Directory Structure

```
├── manifest.json                    # Layout description and record counts
├── metadata/
│   ├── podcast_catalog.parquet      # One row per podcast (228K rows)
│   ├── episode_catalog.parquet      # One row per episode, no transcripts (1.1M rows)
│   ├── category_index.parquet       # category → podcast_id mapping (572K rows)
│   ├── hostname_index.parquet       # hostname → podcast_id mapping (228K rows)
│   ├── speaker_name_index.parquet   # Speaker name lookup index (921K rows)
│   └── episode_metrics.parquet      # Precomputed episode-level metrics (373K rows)
├── episodes/
│   └── podcast_id=<id>/
│       └── data.parquet             # Full episode data including transcript
└── turns/
    └── podcast_id=<id>/
        ├── text.parquet             # Turn text, timing, and speaker info
        ├── audio_features.parquet   # MFCCs, F0, formants per turn
        └── metrics.parquet          # Turn-level computed metrics
```
The episodes/ and turns/ directories are Hive-partitioned by podcast_id, enabling efficient per-podcast lookups without scanning the entire dataset.
## ID Scheme

| ID | Derivation | Length | Unique across |
|---|---|---|---|
| `podcast_id` | `md5(rss_url)[:12]` | 12 hex chars | Globally unique |
| `episode_id` | `md5(mp3_url)[:16]` | 16 hex chars | Globally unique |
Episodes link to podcasts via `podcast_id`. Turns link to episodes via `episode_id` (and `podcast_id` for partition pruning).
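The derivation in the table above can be sketched in a few lines. Note the table does not specify how URLs are encoded before hashing; UTF-8 is assumed here:

```python
import hashlib


def podcast_id(rss_url: str) -> str:
    # First 12 hex chars of the md5 of the RSS URL, per the ID scheme.
    return hashlib.md5(rss_url.encode("utf-8")).hexdigest()[:12]


def episode_id(mp3_url: str) -> str:
    # First 16 hex chars of the md5 of the audio URL.
    return hashlib.md5(mp3_url.encode("utf-8")).hexdigest()[:16]


pid = podcast_id("https://example.com/feed.xml")
eid = episode_id("https://example.com/ep1.mp3")
print(len(pid), len(eid))  # 12 16
```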
## Schema Reference
### metadata/podcast_catalog.parquet

One row per podcast with aggregated statistics. This is the best starting point for browsing or filtering podcasts.

| Column | Type | Description |
|---|---|---|
| `podcast_id` | string | Unique podcast identifier |
| `rss_url` | string | RSS feed URL |
| `pod_title` | string | Podcast title |
| `pod_description` | string | Podcast description |
| `language` | string | Language code (e.g., `en`, `en-au`, `es`) |
| `explicit` | int64 | Explicit content flag (0 or 1) |
| `image_url` | string | Podcast cover image URL |
| `itunes_author` | string | iTunes author field |
| `episode_count` | int64 | Number of episodes in the dataset |
| `total_duration_seconds` | double | Sum of all episode durations |
| `primary_category` | string | Main Apple Podcasts category |
| `all_categories` | list<string> | All assigned categories |
| `host_names` | list<string> | Predicted host names across episodes |
| `earliest_date` | string | Earliest episode date (ISO 8601) |
| `latest_date` | string | Latest episode date (ISO 8601) |
### metadata/episode_catalog.parquet

One row per episode with key metadata (no transcripts). Use this for filtering and discovery before loading full episode data.

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Unique episode identifier |
| `podcast_id` | string | Parent podcast identifier |
| `ep_title` | string | Episode title |
| `mp3_url` | string | Audio file URL |
| `duration_seconds` | double | Episode duration in seconds |
| `category1`–`category10` | string | Apple Podcasts categories (up to 10) |
| `host_predicted_names` | list<string> | NER-predicted host names |
| `guest_predicted_names` | list<string> | NER-predicted guest names |
| `num_main_speakers` | int64 | Number of speakers with >5% speaking time |
| `language` | string | Language code |
| `explicit` | int64 | Explicit content flag (0 or 1) |
| `episode_date` | string | Publication date (millisecond timestamp as string) |
| `overlap_prop_duration` | double | Proportion of episode duration with overlapping speech (diarization quality indicator) |
| `avg_turn_duration` | double | Average speaker turn duration in seconds |
| `total_sp_labels` | int64 | Total number of distinct speaker labels |
### episodes/podcast_id=<id>/data.parquet

Full episode data including transcripts, partitioned by podcast. Each partition contains all episodes for one podcast.

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Unique episode identifier |
| `podcast_id` | string | Parent podcast identifier |
| `ep_title` | string | Episode title |
| `ep_description` | string | Episode description |
| `mp3_url` | string | Audio file URL |
| `duration_seconds` | double | Episode duration in seconds |
| `transcript` | string | Full episode transcript |
| `rss_url` | string | Podcast RSS feed URL |
| `pod_title` | string | Podcast title |
| `pod_description` | string | Podcast description |
| `category1`–`category10` | string | Apple Podcasts categories |
| `host_predicted_names` | list<string> | NER-predicted host names |
| `guest_predicted_names` | list<string> | NER-predicted guest names |
| `neither_predicted_names` | list<string> | Named speakers classified as neither host nor guest |
| `main_ep_speakers` | list<string> | Speaker labels with >5% speaking time |
| `host_speaker_labels` | string | JSON mapping of host names → speaker labels |
| `guest_speaker_labels` | string | JSON mapping of guest names → speaker labels |
| `num_main_speakers` | int64 | Number of main speakers |
| `overlap_prop_duration` | double | Overlap proportion by duration |
| `overlap_prop_turn_count` | double | Overlap proportion by turn count |
| `avg_turn_duration` | double | Average turn duration |
| `total_sp_labels` | int64 | Total distinct speaker labels |
| `language` | string | Language code |
| `explicit` | int64 | Explicit content flag |
| `image_url` | string | Episode/podcast image URL |
| `episode_date_localized` | string | Localized publication date |
| `oldest_episode_date` | string | Oldest episode date for the podcast |
| `last_update` | string | Last update timestamp |
| `created_on` | string | Creation timestamp |
| `itunes_author` | string | iTunes author |
| `itunes_owner_name` | string | iTunes owner name |
| `host` | string | Host field from RSS |
### turns/podcast_id=<id>/text.parquet

Speaker turn text, timing, and speaker information. One row per turn.

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Parent episode identifier |
| `podcast_id` | string | Parent podcast identifier |
| `mp3_url` | string | Audio file URL (for aligning with audio) |
| `speaker` | list<string> | Speaker label(s) for this turn (e.g., `["SPEAKER_03"]`) |
| `turn_text` | string | Text spoken in this turn |
| `start_time` | double | Turn start time in seconds |
| `end_time` | double | Turn end time in seconds |
| `duration` | double | Turn duration in seconds |
| `turn_count` | int64 | Sequential turn index within the episode (0-based) |
| `inferred_speaker_role` | string | Predicted role: `host`, `guest`, or `neither` |
| `inferred_speaker_name` | string | Predicted speaker name (from NER + role model) |
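Since `turn_count` is a 0-based sequential index, a readable transcript for one episode can be rebuilt by sorting on it. A sketch on a toy frame shaped like text.parquet (the rows and labels are synthetic):

```python
import pandas as pd

# Toy rows shaped like text.parquet; real data has one row per diarized turn.
turns = pd.DataFrame({
    "episode_id": ["e1", "e1", "e1"],
    "turn_count": [2, 0, 1],  # rows are not guaranteed to arrive sorted
    "speaker": [["SPEAKER_01"], ["SPEAKER_00"], ["SPEAKER_01"]],
    "turn_text": ["Thanks for having me.", "Welcome back.", "Today's guest..."],
})

# Rebuild a readable transcript for one episode by sorting on turn_count.
ep = turns[turns["episode_id"] == "e1"].sort_values("turn_count")
script = "\n".join(
    f"{'/'.join(labels)}: {text}"
    for labels, text in zip(ep["speaker"], ep["turn_text"])
)
print(script)
```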
### turns/podcast_id=<id>/audio_features.parquet

Acoustic features extracted per turn. Join with text.parquet on (`episode_id`, `turn_count`).

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Parent episode identifier |
| `podcast_id` | string | Parent podcast identifier |
| `mp3_url` | string | Audio file URL |
| `turn_count` | int64 | Turn index (join key with text.parquet) |
| `start_time` | double | Turn start time in seconds |
| `mfcc1_sma3_mean` | double | Mean of 1st MFCC coefficient (smoothed) |
| `mfcc2_sma3_mean` | double | Mean of 2nd MFCC coefficient |
| `mfcc3_sma3_mean` | double | Mean of 3rd MFCC coefficient |
| `mfcc4_sma3_mean` | double | Mean of 4th MFCC coefficient |
| `f0_semitone_from_27_5hz_sma3nz_mean` | double | Mean fundamental frequency in semitones (re 27.5 Hz) |
| `f1_frequency_sma3nz_mean` | double | Mean 1st formant frequency |
### turns/podcast_id=<id>/metrics.parquet

Precomputed turn-level metrics. Join with text.parquet on (`episode_id`, `turn_count`).

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Parent episode identifier |
| `turn_count` | int32 | Turn index (join key) |
| `word_count` | int32 | Number of words in the turn |
| `words_per_second` | float | Speaking rate |
| `gap_from_prev` | float | Silence gap from previous turn (seconds) |
| `overlap_with_prev` | float | Overlap with previous turn (seconds) |
| `discourse_marker_count` | int16 | Count of discourse markers (e.g., "um", "like", "you know") |
| `char_count` | int32 | Character count of turn text |
### metadata/category_index.parquet

Lookup table mapping categories to podcast IDs.

| Column | Type | Description |
|---|---|---|
| `category` | string | Apple Podcasts category (lowercased) |
| `podcast_id` | string | Podcast identifier |
### metadata/hostname_index.parquet

Lookup table mapping RSS feed hostnames to podcast IDs.

| Column | Type | Description |
|---|---|---|
| `hostname` | string | RSS feed hostname |
| `podcast_id` | string | Podcast identifier |
### metadata/speaker_name_index.parquet

Index for searching by speaker name across the corpus.

| Column | Type | Description |
|---|---|---|
| `name_normalized` | string | Lowercased, whitespace-normalized speaker name |
| `name_original` | string | Original speaker name |
| `role` | string | Speaker role (`host`, `guest`, or `neither`) |
| `episode_id` | string | Episode identifier |
| `podcast_id` | string | Podcast identifier |
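A lookup against this index should normalize the query string the same way `name_normalized` is built (lowercased, whitespace-normalized). A sketch on a toy slice of the table (rows are synthetic):

```python
import pandas as pd

# Toy slice of speaker_name_index; lookups should go through name_normalized.
idx = pd.DataFrame({
    "name_normalized": ["ira glass", "ira glass", "terry gross"],
    "name_original": ["Ira Glass", "Ira  Glass", "Terry Gross"],
    "role": ["host", "host", "host"],
    "episode_id": ["e1", "e2", "e3"],
    "podcast_id": ["p1", "p1", "p2"],
})


def lookup(name, role=None):
    # Normalize the query like the index: lowercase, collapse whitespace.
    q = " ".join(name.lower().split())
    hits = idx[idx["name_normalized"] == q]
    return hits if role is None else hits[hits["role"] == role]


print(lookup("Ira  Glass", role="host")["episode_id"].tolist())  # ['e1', 'e2']
```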
### metadata/episode_metrics.parquet

Precomputed episode-level aggregate metrics (available for ~373K episodes).

| Column | Type | Description |
|---|---|---|
| `episode_id` | string | Episode identifier |
| `podcast_id` | string | Podcast identifier |
| `total_word_count` | int32 | Total words in episode |
| `total_turn_count` | int32 | Total speaker turns |
| `unique_speaker_count` | int16 | Number of unique speakers |
| `avg_turn_duration` | float | Mean turn duration (seconds) |
| `median_turn_duration` | float | Median turn duration (seconds) |
| `avg_words_per_second` | float | Mean speaking rate |
| `host_word_count` | int32 | Words spoken by host(s) |
| `guest_word_count` | int32 | Words spoken by guest(s) |
| `host_turn_proportion` | float | Proportion of turns by host |
| `host_word_proportion` | float | Proportion of words by host |
| `avg_gap_duration` | float | Mean silence between turns (seconds) |
| `total_overlap_duration` | float | Total overlapping speech (seconds) |
| `discourse_marker_count` | int32 | Total discourse markers |
| `discourse_marker_rate` | float | Discourse markers per word |
| `speaking_rate_host` | float | Host speaking rate (words/sec) |
| `speaking_rate_guest` | float | Guest speaking rate (words/sec) |
## Using the sporc Python Package

The recommended way to work with SPoRC is through the sporc Python package, which provides a high-level interface for search, filtering, and analysis:

```shell
pip install sporc
```

```python
from sporc import SPORCDataset

# Load the dataset
dataset = SPORCDataset()

# Search for a podcast
podcast = dataset.search_podcast("My Favorite Murder")

# Iterate episodes with lazy-loaded turns
for episode in podcast.episodes:
    print(episode.title, len(episode.turns), "turns")

# Full-text search across turns
results = dataset.search_turns("artificial intelligence", mode="fts")

# Search by speaker name
results = dataset.search_by_speaker_name("Ira Glass", role="host")

# KWIC concordance
results = dataset.concordance("like", context_words=5)
```
See the sporc package documentation for the full API.
## Working with the Parquet Files Directly
The Parquet format makes SPoRC accessible from any language or tool that supports Parquet, without needing the sporc package. Below are examples for common workflows.
### Python (pandas / pyarrow)

```python
import pandas as pd

# Load the podcast catalog
podcasts = pd.read_parquet("metadata/podcast_catalog.parquet")
print(f"{len(podcasts)} podcasts")

# Filter to comedy podcasts with 50+ episodes
comedy = podcasts[
    (podcasts["primary_category"] == "comedy")
    & (podcasts["episode_count"] >= 50)
]

# Load the episode catalog (no transcripts, so fast)
episodes = pd.read_parquet("metadata/episode_catalog.parquet")

# Get episodes for a specific podcast
pod_episodes = episodes[episodes["podcast_id"] == "03b0f2a257fd"]

# Load full episode data (with transcripts) for one podcast
full_episodes = pd.read_parquet("episodes/podcast_id=03b0f2a257fd/data.parquet")

# Load turn-level text for one podcast
turns = pd.read_parquet("turns/podcast_id=03b0f2a257fd/text.parquet")

# Join turns with audio features
audio = pd.read_parquet("turns/podcast_id=03b0f2a257fd/audio_features.parquet")
turns_with_audio = turns.merge(audio, on=["episode_id", "turn_count"])
```
### Python (DuckDB)

Query across partitions without loading everything:

```python
import duckdb

con = duckdb.connect()

# Query across all podcasts using Hive partitioning
result = con.sql("""
    SELECT podcast_id, episode_id, turn_text, inferred_speaker_role, duration
    FROM read_parquet('turns/podcast_id=*/text.parquet', hive_partitioning=true)
    WHERE inferred_speaker_role = 'host'
      AND duration > 30
    LIMIT 100
""").df()

# Search transcripts
matches = con.sql("""
    SELECT podcast_id, episode_id, ep_title, transcript
    FROM read_parquet('episodes/podcast_id=*/data.parquet', hive_partitioning=true)
    WHERE transcript ILIKE '%machine learning%'
    LIMIT 20
""").df()

# Aggregate speaking statistics across the corpus
stats = con.sql("""
    SELECT inferred_speaker_role,
           COUNT(*) AS turn_count,
           AVG(duration) AS avg_duration,
           AVG(words_per_second) AS avg_speaking_rate
    FROM read_parquet('turns/podcast_id=*/text.parquet', hive_partitioning=true) t
    JOIN read_parquet('turns/podcast_id=*/metrics.parquet', hive_partitioning=true) m
      ON t.episode_id = m.episode_id AND t.turn_count = m.turn_count
    GROUP BY inferred_speaker_role
""").df()
```
### R

```r
library(arrow)
library(dplyr)

# Read metadata catalogs directly
podcasts <- read_parquet("metadata/podcast_catalog.parquet")
episodes <- read_parquet("metadata/episode_catalog.parquet")

# Read turn data for a specific podcast
turns <- read_parquet("turns/podcast_id=03b0f2a257fd/text.parquet")

# For filtered reads across partitions, open only the text.parquet files:
# each partition also contains audio_features.parquet and metrics.parquet,
# whose schemas differ, so opening the whole turns/ directory would mix them.
ds <- open_dataset(Sys.glob("turns/podcast_id=*/text.parquet"))
host_turns <- ds |>
  filter(inferred_speaker_role == "host") |>
  select(podcast_id, episode_id, turn_text, duration) |>
  head(1000) |>
  collect()
```
### Command Line (DuckDB CLI)

Launch the CLI (install instructions: https://duckdb.org/docs/installation/):

```shell
duckdb
```

```sql
-- Count episodes per category
SELECT category1, COUNT(*) AS n
FROM read_parquet('metadata/episode_catalog.parquet')
WHERE category1 != ''
GROUP BY category1
ORDER BY n DESC;

-- Find longest episodes
SELECT ep_title, duration_seconds / 3600.0 AS hours
FROM read_parquet('metadata/episode_catalog.parquet')
ORDER BY duration_seconds DESC
LIMIT 10;
```
## Key Concepts

### Speaker Labels and Roles
Each turn has a generic speaker label (e.g., SPEAKER_00, SPEAKER_03) assigned by the diarization model. These labels are consistent within an episode but not across episodes.
A role-inference model assigns each speaker one of three roles:
- `host`: the podcast host
- `guest`: a guest on the episode
- `neither`: other speakers (e.g., advertisers, co-hosts in ambiguous cases)
An NER model predicts speaker names from introductions and context. These appear in `inferred_speaker_name` (per-turn) and `host_predicted_names` / `guest_predicted_names` (per-episode).
The `host_speaker_labels` and `guest_speaker_labels` fields (JSON strings) map predicted names to their generic speaker labels, e.g., `{"John Smith": "SPEAKER_00"}`.
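Since these fields are JSON strings, a turn's generic label can be mapped back to a predicted name by parsing and inverting the mapping:

```python
import json

# host_speaker_labels is stored as a JSON string per episode, e.g.:
host_speaker_labels = '{"John Smith": "SPEAKER_00"}'

# Invert it to go from a turn's generic label back to a predicted name.
label_to_name = {v: k for k, v in json.loads(host_speaker_labels).items()}
print(label_to_name.get("SPEAKER_00"))  # John Smith
```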
### Diarization Quality Indicators

Not all episodes have high-quality diarization. Use these columns to filter:

- `overlap_prop_duration`: proportion of episode duration where multiple speakers are marked as speaking simultaneously. High values (>0.15) may indicate diarization errors.
- `overlap_prop_turn_count`: the same concept measured by turn count.
- `avg_turn_duration`: very high values may indicate under-segmented episodes.
- `total_sp_labels`: number of unique speaker labels. Episodes with very high counts may have noisy diarization.
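A sketch of such a quality filter with pandas, on toy rows shaped like episode_catalog. The thresholds below are illustrative starting points, not official cutoffs:

```python
import pandas as pd

# Toy rows shaped like episode_catalog (synthetic values).
eps = pd.DataFrame({
    "episode_id": ["e1", "e2", "e3"],
    "overlap_prop_duration": [0.02, 0.30, 0.05],
    "avg_turn_duration": [8.0, 6.5, 400.0],
    "total_sp_labels": [3, 4, 2],
})

clean = eps[
    (eps["overlap_prop_duration"] <= 0.15)  # little overlapping speech
    & (eps["avg_turn_duration"] <= 120)     # not under-segmented
    & (eps["total_sp_labels"] <= 10)        # plausible speaker count
]
print(clean["episode_id"].tolist())  # ['e1']
```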
### Audio Features
Turn-level acoustic features are extracted using openSMILE with the eGeMAPSv2 feature set:
- MFCCs 1–4: Mel-frequency cepstral coefficients capturing spectral shape (voice quality, timbre)
- F0 (fundamental frequency): Pitch in semitones relative to 27.5 Hz, useful for intonation and prosody analysis
- F1 (first formant): Related to vowel height and openness, useful for phonetic and sociolinguistic analysis
All features are mean values smoothed over the turn duration (sma3 = 3-frame moving average).
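After joining the audio features to the turn text on the documented keys, per-role prosody summaries fall out directly. A sketch on toy frames shaped like text.parquet and audio_features.parquet (values are synthetic):

```python
import pandas as pd

# Toy frames shaped like text.parquet and audio_features.parquet.
text = pd.DataFrame({
    "episode_id": ["e1", "e1", "e1"],
    "turn_count": [0, 1, 2],
    "inferred_speaker_role": ["host", "guest", "host"],
})
audio = pd.DataFrame({
    "episode_id": ["e1", "e1", "e1"],
    "turn_count": [0, 1, 2],
    "f0_semitone_from_27_5hz_sma3nz_mean": [30.0, 24.0, 32.0],
})

# Join on the documented keys, then average pitch per inferred role.
merged = text.merge(audio, on=["episode_id", "turn_count"])
by_role = merged.groupby("inferred_speaker_role")[
    "f0_semitone_from_27_5hz_sma3nz_mean"
].mean()
print(by_role.to_dict())  # {'guest': 24.0, 'host': 31.0}
```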
### Categories
Categories follow the Apple Podcasts taxonomy: 20 main categories (Arts, Business, Comedy, Education, Fiction, Government, Health & Fitness, History, Kids & Family, Leisure, Music, News, Religion & Spirituality, Science, Society & Culture, Sports, Technology, True Crime, TV & Film) with subcategories. Each episode can have up to 10 categories stored in category1 through category10.
## Citation

If you use SPoRC in your research, please cite:

```bibtex
@inproceedings{litterer-etal-2025-mapping,
    title = "Mapping the Podcast Ecosystem with the Structured Podcast Research Corpus",
    author = "Litterer, Benjamin Roger and
      Jurgens, David and
      Card, Dallas",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1222/",
    doi = "10.18653/v1/2025.acl-long.1222",
    pages = "25132--25154",
}
```
## License and Terms of Use
This dataset is released for research and educational purposes only. By accessing the dataset, you agree to the terms of use. If you are a podcast creator and would like your content removed, please use the removal request form linked on the dataset page.