X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection
Visual Media Lab @ KAIST
CVPR 2026
This repository contains the official implementation of the paper "X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection".
TL;DR: audio-visual cross-attention over diffusion-inversion features yields a detector that is more robust to unseen deepfake generators.
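For readers unfamiliar with the mechanism, here is a minimal PyTorch sketch of audio-visual cross-attention, where visual tokens query audio tokens. The dimensions, module name, and token counts are illustrative assumptions, not the exact X-AVDT architecture; see train/train.py for the real model.

import torch
import torch.nn as nn

class AudioVisualCrossAttention(nn.Module):
    """Illustrative sketch only: visual tokens attend to audio tokens."""
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens, audio_tokens):
        # Queries come from visual features (e.g., diffusion-inversion features);
        # keys/values come from audio features (e.g., wav2vec embeddings).
        fused, _ = self.attn(visual_tokens, audio_tokens, audio_tokens)
        return self.norm(visual_tokens + fused)

# Example: a batch of 2 clips, 16 visual tokens attending to 50 audio tokens.
v = torch.randn(2, 16, 512)
a = torch.randn(2, 50, 512)
out = AudioVisualCrossAttention()(v, a)  # shape: (2, 16, 512)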
Installation
conda create -n x-avdt python=3.10 -y
conda activate x-avdt
pip install -r requirements.txt
ffmpeg is required for video preprocessing.
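A quick way to fail fast if ffmpeg is missing (a small hedged check, not part of this repo):

import shutil

# ffmpeg must be on PATH for the preprocessing scripts below.
if shutil.which("ffmpeg") is None:
    raise RuntimeError("ffmpeg not found on PATH; install it before preprocessing.")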
Download Pretrained Models
Feature extraction relies on the original Hallo repository; the pretrained weights must be placed under hallo/pretrained_models/.
The easiest setup is to clone the official pretrained model bundle from Hugging Face:
cd hallo
git lfs install
git clone https://huggingface.co/fudan-generative-ai/hallo pretrained_models
cd ..
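If git-lfs is not an option, the same bundle can be fetched with the huggingface_hub Python client instead (an alternative sketch, not the repo's documented path):

from huggingface_hub import snapshot_download

# Download the full Hallo pretrained bundle into hallo/pretrained_models/.
snapshot_download(repo_id="fudan-generative-ai/hallo",
                  local_dir="hallo/pretrained_models")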
If downloading files manually, organize them as follows:
hallo/pretrained_models/
  audio_separator/
    download_checks.json
    mdx_model_data.json
    vr_model_data.json
    Kim_Vocal_2.onnx
  face_analysis/
    models/
      face_landmarker_v2_with_blendshapes.task
      1k3d68.onnx
      2d106det.onnx
      genderage.onnx
      glintr100.onnx
      scrfd_10g_bnkps.onnx
  hallo/
    net.pth
  motion_module/
    mm_sd_v15_v2.ckpt
  sd-vae-ft-mse/
    config.json
    diffusion_pytorch_model.safetensors
  stable-diffusion-v1-5/
    unet/
      config.json
      diffusion_pytorch_model.safetensors
  wav2vec/
    wav2vec2-base-960h/
      config.json
      feature_extractor_config.json
      model.safetensors
      preprocessor_config.json
      special_tokens_map.json
      tokenizer_config.json
      vocab.json
These paths match hallo/configs/inference/default.yaml.
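Before running feature extraction, a short script like the one below can verify the layout (a hypothetical helper that only spot-checks the main checkpoints; adjust the list as needed):

from pathlib import Path

ROOT = Path("hallo/pretrained_models")
EXPECTED = [
    "audio_separator/Kim_Vocal_2.onnx",
    "face_analysis/models/scrfd_10g_bnkps.onnx",
    "hallo/net.pth",
    "motion_module/mm_sd_v15_v2.ckpt",
    "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.safetensors",
    "wav2vec/wav2vec2-base-960h/model.safetensors",
]

missing = [p for p in EXPECTED if not (ROOT / p).exists()]
if missing:
    raise FileNotFoundError(f"Missing pretrained files: {missing}")
print("All expected pretrained files found.")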
Dataset
The MMDF dataset can be downloaded from Hugging Face.
After feature extraction, training expects the following layout for both the real and fake roots (a minimal loader sketch follows the tree):
<root>/<split>/<label>/<model_id>/<clip_id>/
  original/*.pt
  inverted/*.pt
  reconstructed/*.pt
  residual/*.pt
  attn_feat/*.pt
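A minimal loader sketch for this layout, assuming each *.pt chunk loads as a tensor and that chunk filenames line up across the five feature directories (an assumption; the actual dataset class lives in the training code):

from pathlib import Path

import torch
from torch.utils.data import Dataset

FEATURE_KINDS = ["original", "inverted", "reconstructed", "residual", "attn_feat"]

class XAVDTChunks(Dataset):
    """One sample per 16-frame chunk, keyed by the files under original/."""
    def __init__(self, root, split):
        # <root>/<split>/<label>/<model_id>/<clip_id>/original/*.pt
        self.items = sorted(Path(root, split).glob("*/*/*/original/*.pt"))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        orig = self.items[idx]
        clip_dir = orig.parent.parent        # .../<clip_id>/
        label = clip_dir.parent.parent.name  # <label> directory name
        feats = {k: torch.load(clip_dir / k / orig.name, map_location="cpu")
                 for k in FEATURE_KINDS}
        return feats, label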
Feature Extraction
If your inputs are raw videos, first convert them into frame folders and wav files:
python hallo/preprocess_videos.py extract-frames \
--video_dir /path/to/videos \
--frames_dir /path/to/frames \
--duration 5 \
--fps 25 \
--size 512 512
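Per video, this step is roughly equivalent to the ffmpeg call below (a hedged sketch; preprocess_videos.py may differ in details such as face cropping and audio extraction):

import subprocess
from pathlib import Path

def extract_frames(video, frames_dir, duration=5, fps=25, size=(512, 512)):
    """Dump fixed-fps, resized frames for the first `duration` seconds."""
    out = Path(frames_dir) / Path(video).stem
    out.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(video), "-t", str(duration),
        "-vf", f"fps={fps},scale={size[0]}:{size[1]}",
        str(out / "%06d.png"),
    ], check=True)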
Then run Hallo feature extraction. This produces whole-clip outputs such as original.mp4, inverted.mp4, reconstructed.mp4, residual.mp4, attn_map.mp4, and attn_feat.pt for each clip:
python hallo/extract_features.py \
--frames_dir /path/to/frames \
--output_dir /path/to/hallo_features
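Before packing, it can be worth sanity-checking one clip's output, e.g. the attention features (this assumes attn_feat.pt deserializes to a plain tensor, which is a guess; clip_0001 is a hypothetical clip id):

import torch

feat = torch.load("/path/to/hallo_features/clip_0001/attn_feat.pt",
                  map_location="cpu")
print(type(feat), getattr(feat, "shape", None))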
Finally, pack the whole-clip feature outputs into the training layout. This step slices each clip into 16-frame .pt chunks:
python hallo/preprocess_videos.py pack-features \
--feature_dir /path/to/hallo_features \
--output_dir /path/to/pt
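Conceptually, packing is a slice-and-save loop like the sketch below, assuming frame-major (T, ...) tensors; the real logic, including how the .mp4 outputs are handled, lives in hallo/preprocess_videos.py:

from pathlib import Path

import torch

CHUNK = 16  # training consumes 16-frame chunks

def pack_clip(feat_path, out_dir):
    """Slice one whole-clip tensor into consecutive CHUNK-frame .pt files."""
    feats = torch.load(feat_path, map_location="cpu")
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, start in enumerate(range(0, feats.shape[0] - CHUNK + 1, CHUNK)):
        torch.save(feats[start:start + CHUNK], out / f"{i:04d}.pt")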
Training
python train/train.py --data_dir /path/to/pt/
Evaluation
python train/evaluate.py --data_dir /path/to/pt/ --ckpt results/x_avdt/model_best.pt
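If you need video-level numbers rather than per-chunk scores, a common aggregation is to average chunk probabilities per video and report AUC (a sketch under that assumption; evaluate.py may already report its own metrics):

import numpy as np
from sklearn.metrics import roc_auc_score

def video_auc(chunk_scores, labels):
    """chunk_scores: {video_id: [fake probs]}; labels: {video_id: 0 or 1}."""
    vids = sorted(chunk_scores)
    y_score = [float(np.mean(chunk_scores[v])) for v in vids]
    y_true = [labels[v] for v in vids]
    return roc_auc_score(y_true, y_score)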
Citation
@article{kim2026x,
  title={X-AVDT: Audio-Visual Cross-Attention for Robust Deepfake Detection},
  author={Kim, Youngseo and Yun, Kwan and Hong, Seokhyeon and Cha, Sihun and Koo, Colette Suhjung and Noh, Junyong},
  journal={arXiv preprint arXiv:2603.08483},
  year={2026}
}