Hi all,
I am running DeepSomatic on an HPC cluster using a Slurm-based parallel workflow. The pipeline completes successfully for the majority of samples (n > 50), but for a small subset the job never finishes.
There is no error message and no crash; the job simply keeps running indefinitely until it is manually terminated. All samples were preprocessed with the exact same pipeline and tools.
Command used:
singularity exec \
  -B "${TMPDIR_HOST}":"${TMPDIR_HOST}" \
  -B /usr/lib/locale:/usr/lib/locale \
  -B "${REF_DIR}":"${REF_DIR}" \
  -B "${TUMOR_DIR}":"${TUMOR_DIR}" \
  -B "${OUT_DIR}":"${OUT_DIR}" \
  "${IMAGE}" run_deepsomatic \
  --model_type=FFPE_WGS_TUMOR_ONLY \
  --ref="${REF}" \
  --reads_tumor="${BAM}" \
  --output_vcf="${OUT_DIR}/${SAMPLE}.deepsomatic.vcf.gz" \
  --sample_name_tumor="${SAMPLE}" \
  --num_shards=8 \
  --logging_dir="${OUT_DIR}/logs/${SAMPLE}" \
  --use_default_pon_filtering=true \
  --intermediate_results_dir="${OUT_DIR}/logs/${SAMPLE}"
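To narrow down where a sample stalls, the first thing I check is the per-stage logs written under `--logging_dir`. A minimal sketch (the one-log-per-stage layout and names like `make_examples` / `call_variants` / `postprocess_variants` are assumptions based on DeepVariant's conventions; adjust the glob if the names differ):

```shell
# Sketch: identify the stage that stalled by inspecting the per-stage logs
# that run_deepsomatic writes under --logging_dir.
last_stage_activity() {
  local log_dir="$1"
  # Most recently modified log first: the hung stage is usually at the top.
  ls -t "$log_dir" | head -n 1
  # Last lines of each log: a log that stops mid-progress points at the stall.
  for f in "$log_dir"/*.log; do
    [ -e "$f" ] || continue
    printf '== %s ==\n' "$f"
    tail -n 3 "$f"
  done
}

# Usage against the sample's log directory from the command above:
# last_stage_activity "${OUT_DIR}/logs/${SAMPLE}"
```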
Please find some screenshots of the logs attached:
Any guidance on how to diagnose or trace where execution stalls would be greatly appreciated.
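In the meantime, a minimal watchdog sketch that at least turns the silent hang into a loud failure, by wrapping the container call in coreutils `timeout` (the time limit and signal handling are assumptions; tune the limit to the runtime of your slowest successful sample):

```shell
# Watchdog sketch: a stalled sample exits with status 124 instead of
# burning walltime until Slurm kills the job.
run_with_timeout() {
  local limit="$1"; shift
  timeout --signal=TERM --kill-after=5m "$limit" "$@"
  local status=$?
  if [ "$status" -eq 124 ]; then
    echo "command timed out after ${limit}: $*" >&2
  fi
  return "$status"
}

# Usage (short stand-in shown; substitute the full singularity exec command):
run_with_timeout 2s sleep 10 || true
```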
Thank you very much for your help.
Best regards,
Lipika