Chuangchuang Tan, Renshuai Tao, Huan Liu, Guanghua Gu, Baoyuan Wu, Yao Zhao, Yunchao Wei
Beijing Jiaotong University, Yanshan University, CUHK
⭐ If our code is helpful to you, please help star this repo. Thanks! 🤗
```shell
conda create -n c2pclip python=3.10.14 -y
conda activate c2pclip
pip install -r requirements.txt
```

Prepare your dataset (e.g., GenImage, UniversalFakeDetect):
- Download Genimage_CNNDetection_CLIP_prefix_caption.tar.gz from the provided Google Drive link.
- Download the CLIP (ViT-L/14) weights from Hugging Face.
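The downloaded archive needs to be unpacked before training. A minimal stdlib sketch, assuming the archive extracts into per-dataset subfolders (the helper name and destination path are illustrative, not part of the repo):

```python
import tarfile
from pathlib import Path

def extract_archive(archive: str, dest: str) -> list[str]:
    """Unpack a .tar.gz archive into dest and return its top-level entries."""
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest_path)  # on Python 3.12+, consider filter="data"
    return sorted(p.name for p in dest_path.iterdir())
```

For example, `extract_archive("Genimage_CNNDetection_CLIP_prefix_caption.tar.gz", "./datasets")` would unpack the caption archive next to the image folders.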
Train C2P-CLIP on GenImage and UniversalFakeDetect:

```shell
conda activate c2pclip
./train_genimage.sh
./train_UniversalFakeDetect.sh
```

Run inference on a test set:

```shell
conda activate c2pclip
python inference.py \
    --dataroot ./datasets/GenImage/test/ \
    --model_path ./checkpoints/c2p_clip_genimage/last_model.pth
```
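Evaluation in this line of work typically reports accuracy at a fixed threshold and average precision per test generator. A plain-NumPy sketch of that computation (`eval_scores` is a hypothetical helper, not the repo's code; it hand-rolls the same AP that `sklearn.metrics.average_precision_score` computes):

```python
import numpy as np

def eval_scores(labels, probs, thresh=0.5):
    """Accuracy at a fixed threshold plus average precision (AP),
    the two numbers usually reported for real/fake detectors."""
    labels = np.asarray(labels, dtype=float)
    probs = np.asarray(probs, dtype=float)
    acc = float(np.mean((probs > thresh) == labels))
    # AP: average the precision at each rank where a true positive is
    # retrieved, scanning predictions from most to least confident.
    order = np.argsort(-probs)
    hits = labels[order]
    precision_at_rank = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    ap = float(np.sum(precision_at_rank * hits) / hits.sum())
    return acc, ap
```

For instance, `eval_scores([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` yields accuracy 0.75 and AP 5/6.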
```shell
conda activate c2pclip
# Decode features to text
python decode_clipfeature_image.py \
    --image_path ./assets/DALLE/DALLE_2_Cowboy_In_Swamp_Close_Up_Outpaint_1.png \
    --cal_detection_feat
```
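The t-SNE visualization that follows fits a 2-D embedding over extracted CLIP features. The core projection step can be sketched with scikit-learn (random vectors stand in for real CLIP features; the function name is illustrative, not the repo's API):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(features: np.ndarray, seed: int = 0) -> np.ndarray:
    """Project high-dimensional features (e.g. 768-d CLIP embeddings) to 2-D."""
    # Perplexity must stay below the number of samples.
    perplexity = min(30, len(features) - 1)
    tsne = TSNE(n_components=2, perplexity=perplexity,
                init="pca", random_state=seed)
    return tsne.fit_transform(features)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(60, 768)).astype(np.float32)  # stand-in features
    xy = project_2d(feats)
    print(xy.shape)  # (60, 2)
```

The resulting 2-D points can then be scattered per class (real vs. fake, per generator), which is what `draw_tsne_kmean.py` plots.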
```shell
# Visualization (t-SNE)
CUDA_VISIBLE_DEVICES=1 python draw_tsne_kmean.py \
    --draw_data_path ./tsne_png \
    --image_path ./stylegan_tsne_data \
    --save_name stylegan_test \
    --legend stylegan-bedroom-real stylegan-bedroom-fake stylegan-car-real stylegan-car-fake stylegan-cat-real stylegan-cat-fake \
    --do_extract --do_fit --draw_text 0
```

If you find this code or paper helpful, please cite:
```bibtex
@inproceedings{tan2025c2p,
  title={C2P-CLIP: Injecting Category Common Prompt in CLIP to Enhance Generalization in Deepfake Detection},
  author={Tan, Chuangchuang and Tao, Renshuai and Liu, Huan and Gu, Guanghua and Wu, Baoyuan and Zhao, Yao and Wei, Yunchao},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={7},
  pages={7184--7192},
  year={2025}
}
```