Siyuan Liu*, Chaoqun Zheng*, Xin Zhou, Tianrui Feng, Dingkang Liang† and Xiang Bai
Huazhong University of Science and Technology, Wuhan, China
(*) equal contribution, (†) corresponding author.
- [06/Apr/2026] ✨ Released the code. 😊😊
- [21/Feb/2026] 🎉 Our paper PointTPA is accepted by CVPR 2026! 🥳🥳
Scene-level point cloud understanding remains challenging due to diverse geometries, imbalanced category distributions, and highly varied spatial layouts. Existing methods improve object-level performance but rely on static network parameters during inference, limiting their adaptability to dynamic scene data. We propose PointTPA, a Test-time Parameter Adaptation framework that generates input-aware network parameters for scene-level point clouds. PointTPA adopts a Serialization-based Neighborhood Grouping (SNG) to form locally coherent patches and a Dynamic Parameter Projector (DPP) to produce patch-wise adaptive weights, enabling the backbone to adjust its behavior according to scene-specific variations while maintaining a low parameter overhead. Integrated into the PTv3 architecture, PointTPA is highly parameter-efficient: its two lightweight modules together account for less than 2% of the backbone's parameters. Despite this minimal overhead, PointTPA achieves 78.4% mIoU on the ScanNet validation set, surpassing existing parameter-efficient fine-tuning (PEFT) methods across multiple benchmarks and highlighting the efficacy of our test-time dynamic network parameter adaptation mechanism for 3D scene understanding.
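To give an intuition for the serialization-based grouping idea, the sketch below sorts points along a space-filling curve so that 3D neighbors stay close in the sorted sequence, then cuts the sequence into fixed-size patches. This is only a minimal illustration, not the paper's implementation: the Morton (Z-order) key, the function names, and the patch size are our own assumptions.

```python
import random

def morton_key(p, bits=10):
    """Interleave quantized x/y/z bits into a Z-order (Morton) key."""
    q = [min(int(c * (2**bits - 1)), 2**bits - 1) for c in p]
    key = 0
    for b in range(bits):
        for axis in range(3):
            key |= ((q[axis] >> b) & 1) << (3 * b + axis)
    return key

def serialize_and_group(points, patch_size=4):
    """Sort point indices along the Z-order curve, then cut into contiguous patches."""
    # Normalize coordinates to [0, 1] before quantization.
    lo = [min(p[d] for p in points) for d in range(3)]
    hi = [max(p[d] for p in points) for d in range(3)]
    norm = [[(p[d] - lo[d]) / max(hi[d] - lo[d], 1e-9) for d in range(3)]
            for p in points]
    order = sorted(range(len(points)), key=lambda i: morton_key(norm[i]))
    n = (len(points) // patch_size) * patch_size  # drop the ragged tail for simplicity
    return [order[i:i + patch_size] for i in range(0, n, patch_size)]

random.seed(0)
pts = [[random.random() for _ in range(3)] for _ in range(16)]
patches = serialize_and_group(pts, patch_size=4)
print(len(patches), len(patches[0]))  # 4 4
```

Each patch is a run of indices that are contiguous along the curve and hence spatially coherent, which is the property the patch-wise adaptive weights rely on.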
We recommend using Anaconda to set up the environment for this project. You may also refer to Pointcept for additional environment configuration details.
$ git clone https://github.com/Ykzzldx2435/PointTPA.git
$ cd PointTPA
# Create virtual env
$ conda env create -f environment.yml
$ conda activate PointTPA

The required datasets should be prepared in the data directory in advance. Detailed preparation instructions are provided in the Pointcept documentation.
$ mkdir data
$ ln -s ${PROCESSED_SCANNET_DIR} ${CODEBASE_DIR}/data/{DATASET NAME} # (e.g. scannet, s3dis, scannetpp)

With the datasets linked properly, the project directory is expected to have the following structure:
PointTPA/
├── data/ # symlink to processed datasets
│ ├── scannet/
│ ├── s3dis/
│ └── scannetpp/
├── configs/
│ ├── _base_/
│ └── ptv3-pointtpa/ # training and testing configuration
├── pointcept/
│ ├── ...
│ ├── datasets/
│ ├── engines/
│ └── models/
│   ├── ...
│   ├── peft/
│   │ ├── __init__.py
│   │ └── pointtpa.py # core modules
│   └── point_transformer_v3/
│     ├── __init__.py
│     ├── point_transformer_v3.py # backbone
│     └── point_transformer_v3_pointtpa.py # PointTPA
├── scripts/
├── tools/
├── libs/
├── environment.yml
├── LICENSE
└── README.md
Use the following commands to train or evaluate the model from the terminal.
# Linear Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet-lin -n semseg-ptv3-scannet-lin -w /path/to/pretrain_weight
# Lin + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet-pointtpa -n semseg-ptv3-scannet-pointtpa -w /path/to/pretrain_weight
# Decoder Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet-dec -n semseg-ptv3-scannet-dec -w /path/to/pretrain_weight
# Dec + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet-pointtpa-dec -n semseg-ptv3-scannet-pointtpa-dec -w /path/to/pretrain_weight
# FFT
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet-ft -n semseg-ptv3-scannet-ft -w /path/to/pretrain_weight

# Linear Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet200-lin -n semseg-ptv3-scannet200-lin -w /path/to/pretrain_weight
# Lin + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet200-pointtpa -n semseg-ptv3-scannet200-pointtpa -w /path/to/pretrain_weight
# Decoder Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet200-dec -n semseg-ptv3-scannet200-dec -w /path/to/pretrain_weight
# Dec + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet200-pointtpa-dec -n semseg-ptv3-scannet200-pointtpa-dec -w /path/to/pretrain_weight
# FFT
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannet200-ft -n semseg-ptv3-scannet200-ft -w /path/to/pretrain_weight

# Linear Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-s3dis-lin -n semseg-ptv3-s3dis-lin -w /path/to/pretrain_weight
# Lin + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-s3dis-pointtpa -n semseg-ptv3-s3dis-pointtpa -w /path/to/pretrain_weight
# Decoder Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-s3dis-dec -n semseg-ptv3-s3dis-dec -w /path/to/pretrain_weight
# Dec + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-s3dis-pointtpa-dec -n semseg-ptv3-s3dis-pointtpa-dec -w /path/to/pretrain_weight
# FFT
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-s3dis-ft -n semseg-ptv3-s3dis-ft -w /path/to/pretrain_weight

# Linear Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannetpp-lin -n semseg-ptv3-scannetpp-lin -w /path/to/pretrain_weight
# Lin + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannetpp-pointtpa -n semseg-ptv3-scannetpp-pointtpa -w /path/to/pretrain_weight
# Decoder Probing
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannetpp-dec -n semseg-ptv3-scannetpp-dec -w /path/to/pretrain_weight
# Dec + PointTPA
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannetpp-pointtpa-dec -n semseg-ptv3-scannetpp-pointtpa-dec -w /path/to/pretrain_weight
# FFT
$ sh scripts/train.sh -m 1 -g 2 -d ptv3-pointtpa -c semseg-ptv3-scannetpp-ft -n semseg-ptv3-scannetpp-ft -w /path/to/pretrain_weight

The pretrained Sonata weights are available for download HERE.
This project is built on Sonata and PTv3, and also references DAPT (paper, code), PointGST (paper, code), IDPT (paper, code), and VeRA (paper, code). Our code organization follows the Pointcept repository. We thank the authors for their excellent work.
If you find this repository useful in your research, please consider giving a star ⭐ and a citation.
@inproceedings{liu2026pointtpa,
title={PointTPA: Dynamic Network Parameter Adaptation for 3D Scene Understanding},
author={Liu, Siyuan and Zheng, Chaoqun and Zhou, Xin and Feng, Tianrui and Liang, Dingkang and Bai, Xiang},
booktitle={CVPR},
year={2026}
}