Potential fix for negative heating from Compton scattering with atomic relaxation#3865

Draft
jtramm wants to merge 1 commit into openmc-dev:develop from jtramm:fix-compton-relaxation-heating2

Conversation


@jtramm jtramm commented Mar 12, 2026

While fixing some CI issues in my shared-secondary PR #3863, I stumbled across what I think may be a small existing issue in the Compton scattering kernel.

I found that test_photon_heating in tests/unit_tests/weightwindows/test.py was failing: it uses weight windows, which in my PR automatically turns on shared secondary mode. Different results would be expected in that case, since the shared secondary feature alters the PRNG stream.

At first I assumed I just needed to add with/without shared-secondary cases, but then I noticed that this test asserts np.all(tally_mean >= 0) on a mesh-based photon heating tally. So it seemed there might be a deeper issue at play with the shared secondaries causing some breakage here. However, I went back and tested the develop branch and found that if I increased the particle count from 100 to 1000, this test also failed, producing two negative tallies.

I haven't looked closely at this area of the code before, but at first glance it doesn't seem right to get negative tally values (though perhaps the physics allows this for this tally?).

The test_photon_heating unit test was originally written for PR #3227 to catch a different bug (weight-window splits contaminating bank_second_E). It passed on develop with the original particle count because with that specific PRNG stream and 100 particles, the negative Compton events happened to cancel out across mesh bins. When increasing to 1000 particles, or when the shared secondary bank changed the PRNG stream, the negative events concentrated in a few bins and the test failed.

I had Claude Code see if it could come up with a solution, and it seems to have, but I'm not up to date on my Compton scattering physics, so it will take some research on my end to verify the fix is correct. Before I do that, I'd like to determine:

  1. Whether the negative tallies are actually a bug or just intended behavior,

  2. Whether the fix Claude Code proposes is physically correct. At first glance the logic seems reasonable and it gives a few references, but maybe they are hallucinated; who knows.

For now I'm leaving this as a draft, as I haven't yet done the research to vouch for the correctness of the fix. @pshriwise -- I know you wrote the test_photon_heating unit test, so perhaps you can confirm whether the intention is that photon heating tallies should always be >= 0? Others may also be more up to speed on the Compton scattering physics than I am and can chime in.

Potential Fix Summary (the text below was written by Claude Code, so take it with a grain of salt)

When Doppler broadening produces a scattered photon energy where the energy
transfer E - E' is less than the shell binding energy E_b, the Compton
electron has negative kinetic energy and is not created as a secondary particle.
However, atomic relaxation still runs and produces secondaries (Auger electrons,
fluorescent photons) totaling ~E_b. Since the electron is missing from
bank_second_E, the heating score E - E' - bank_second_E goes negative.

Two changes in src/physics.cpp:

  1. Use relaxation binding energy for the electron energy calculation when
    atomic relaxation data is available, matching the approach already used by
    the photoelectric effect (lines 401-403). This ensures the electron energy
    and the relaxation cascade use a self-consistent binding energy source.

  2. Correct the energy bookkeeping when E_electron < 0 by adding
    E_electron (a negative value) to bank_second_E before relaxation runs.
    This offsets the relaxation products so that
    bank_second_E = E_electron + sum(relaxation) = photon_transfer, and the
    heating score is zero. Relaxation still runs — no particles are added or
    removed, and the PRNG stream is unchanged.

Root Cause

The Compton electron energy is given by (OpenMC docs, Eq. compton-electron-energy):

E_electron = E - E' - E_b

where E is the incident photon energy, E' is the scattered photon energy,
and E_b is the shell binding energy.

In compton_doppler() (src/photon.cpp:454-569), there are three fallback
paths (lines 476, 483, 541) where an invalid sampling configuration causes the
code to fall back to the free-electron Klein-Nishina formula:

E' = E / (1 + alpha * (1 - mu))

This formula does not account for binding energy. The normal acceptance
criterion at line 564 (if (*E_out < E - E_b)) ensures E - E' > E_b, but
the fallback paths bypass this check. The result is that E - E' < E_b and
E_electron < 0.

In the calling code (sample_photon_reaction), the electron is correctly not
created when E_electron < 0 (it falls below the energy cutoff). But
relaxation still runs unconditionally and adds ~E_b to bank_second_E.
Since the (missing) electron's contribution would have been
E_electron = photon_transfer - E_b (a negative value that would have
partially offset the relaxation energy), bank_second_E ends up larger than
the photon's energy transfer:

// Before fix:
if (settings::atomic_relaxation && i_shell >= 0 &&
    element.subshell_map_[i_shell] >= 0) {
  element.atomic_relaxation(element.subshell_map_[i_shell], p);
  // bank_second_E = sum(relaxation) ≈ E_b > photon_transfer
}
// heating = (E - E') - bank_second_E = photon_transfer - E_b < 0

Approach: Consistent with Geant4 PENELOPE

Geant4's PENELOPE Compton model (G4PenelopeComptonModel.cc) handles this case
by keeping relaxation running and adjusting the energy bookkeeping:

// Geant4 PENELOPE approach:
eKineticEnergy     = diffEnergy - ionEnergy;    // = photon_transfer - E_b
localEnergyDeposit = ionEnergy;                 // = E_b

if (eKineticEnergy < 0) {
  localEnergyDeposit = diffEnergy;              // = photon_transfer (clamped)
  eKineticEnergy = 0.0;
}

// relaxation ALWAYS runs
fAtomDeexcitation->GenerateParticles(fvect, shell, Z, index);
for (auto& product : relaxation_products)
  localEnergyDeposit -= product.energy;         // subtract each product

When eKineticEnergy < 0, Geant4 sets localEnergyDeposit = diffEnergy
(the photon's energy transfer) instead of ionEnergy (the binding energy).
Relaxation products are then subtracted from this value. The key principle:
relaxation still occurs (the vacancy is real), but the local energy deposit
is clamped to reflect that the photon only contributed diffEnergy.

Our fix is the OpenMC equivalent. OpenMC derives local deposit from the formula
E_last - E' - bank_second_E rather than tracking it explicitly. By adding the
negative E_electron to bank_second_E, we ensure:

bank_second_E = E_electron + sum(relaxation)
              = (photon_transfer - E_b) + E_b
              = photon_transfer

heating = photon_transfer - bank_second_E
        = photon_transfer - photon_transfer
        = 0

Other Code Approaches (for context)

Code                  | Approach                                                                                | Reference
----------------------|-----------------------------------------------------------------------------------------|----------
Geant4 (PENELOPE)     | Clamps localEnergyDeposit = diffEnergy when eKineticEnergy < 0; relaxation still runs   | G4PenelopeComptonModel.cc, lines 494-531
Geant4 (Monash/LowEP) | Re-samples until energy transfer > binding energy; prevents negative KE by construction | G4LowEPComptonModel.cc, lines 316-366
Geant4 (KleinNishina) | "If final electron energy becomes negative then sampling is repeated"                   | Geant4 Physics Reference Manual
PENELOPE              | Shell selection uses Heaviside function Theta(E - U_i) to exclude inaccessible shells   | Salvat, PENELOPE-2018, OECD NEA

Energy Conservation

Before fix:

  • Photon transfers E - E' to the atom
  • No electron created (negative KE)
  • Relaxation emits secondaries totaling ~E_b
  • bank_second_E = E_b
  • heating = (E - E') - E_b < 0 (energy deficit)

After fix:

  • Photon transfers E - E' to the atom
  • No electron created (negative KE)
  • bank_second_E offset by E_electron (negative)
  • Relaxation emits secondaries totaling ~E_b
  • bank_second_E = E_electron + E_b = (E - E') - E_b + E_b = E - E'
  • heating = (E - E') - (E - E') = 0

Impact

  • Only heating tally scores are affected. No particles are added or removed,
    so the PRNG stream is identical. Non-heating tallies (flux, current, reaction
    rates, etc.) produce bit-identical results.
  • Three regression test baselines that tally heating are re-recorded:
    electron_heating, photon_production, photon_production_fission.
  • A new unit test test_compton_relaxation_heating verifies no negative heating
    in a lead (Z=82) geometry with a 200 keV photon source.
  • In a test with a 5 MeV photon source in water, approximately 98 negative
    heating events per 1000 source particles (all MT 504 Compton on oxygen) are
    eliminated.

References

  • OpenMC photon physics documentation: docs/source/methods/photon_physics.rst
  • Geant4 G4PenelopeComptonModel.cc source (lines 494-531)
  • Sood et al., LA-UR-04-0487 (Compton Doppler broadening algorithm used by OpenMC)
  • Kaltiaisenaho, "Photon transport physics in Serpent 2 Monte Carlo code," Comp. Phys. Comm. 252 (2020) 107143
  • Salvat, "PENELOPE-2018: A Code System for Monte Carlo Simulation of Electron and Photon Transport," OECD NEA
  • Geant4 Physics Reference Manual, electromagnetic/gamma_incident/compton
  • Brown et al., "A low energy bound atomic electron Compton scattering model for Geant4," Nucl. Instrum. Meth. B 338 (2014) 77-88

Checklist

  • I have performed a self-review of my own code
  • I have run clang-format (version 18) on any C++ source files (if applicable)
  • I have followed the style guidelines for Python source files (if applicable)
  • I have made corresponding changes to the documentation (if applicable)
  • I have added tests that prove my fix is effective or that my feature works (if applicable)

When Doppler broadening causes the Compton-scattered photon to transfer
less energy than the shell binding energy, the recoil electron has
negative kinetic energy and is not created. However, atomic relaxation
still runs and produces secondaries totaling ~E_b. The heating score
(E_last - E' - bank_second_E) went negative because bank_second_E
contained only relaxation products (~E_b), which exceeded the photon's
energy transfer.

Fix: when E_electron < 0, add it (a negative value) to bank_second_E
before relaxation runs. This offsets the relaxation energy so
bank_second_E = E_electron + sum(relaxation) = photon_transfer, giving
a heating score of zero. This is equivalent to Geant4 PENELOPE's
approach of setting localEnergyDeposit = diffEnergy when
eKineticEnergy < 0 (G4PenelopeComptonModel.cc).

Also use relaxation binding energy (ENDF/B) instead of Compton profile
binding energy (Biggs et al.) for the electron energy calculation when
atomic relaxation is enabled, matching the approach already used by the
photoelectric effect. This ensures the electron energy and relaxation
cascade use a self-consistent binding energy source.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@jtramm jtramm requested a review from pshriwise March 12, 2026 02:01

jtramm commented Mar 12, 2026

So in debugging the new CI failure for this PR, Claude Code is now saying that the negative heating tallies are essentially inevitable given our physics model and not really a bug:

Compton scattering with Doppler broadening uses the impulse approximation — the photon-electron interaction happens so fast that the electron is treated as quasi-free during the collision. The
electron has some initial momentum (sampled from the Compton profile for that shell), and the scattering kinematics are computed as if it were a free electron with that momentum. This determines the
scattered photon energy E'.

After the collision, we subtract the shell's mean binding energy to get the electron's kinetic energy: E_electron = (E - E') - E_b. This is a post-hoc correction — the binding energy is a mean value
for the shell, not the exact binding for the specific quantum state of that electron.

When E_electron < 0, it doesn't mean the interaction didn't happen. The photon really did scatter, and the electron really was knocked out of its shell. The scattered photon energy E' is physical.
It's just that the mean binding energy E_b is a coarse approximation that overshoots for some electron momentum states. The electron was ejected — we just can't accurately assign its kinetic energy.

So the vacancy is real, and relaxation is physically justified. That's why both codes run it unconditionally. It's not variance reduction — it's that the model says the electron left the atom, even
though our energy arithmetic gives a nonsensical KE for it.

This means the negative per-bin heating is an inherent artifact of using mean binding energies in the impulse approximation. It's globally energy-conserving and not really a "bug" — it's just an
uncomfortable consequence of the approximation that becomes visible in fine mesh tallies.

Anyway, my worry is still that we have an enforcement of positivity on the heating tallies in the test_photon_heating test in tests/unit_tests/weightwindows/test.py, suggesting that these tallies should always be positive, so I want to square that.


GuySten commented Mar 12, 2026

I think we should set the Compton scattering cross section to zero when the photon does not have enough energy to eject the most loosely bound electron.
I also think we should add rejection sampling when the resulting electron energy is negative.

@pshriwise commented

> Anyway, my worry is still that we have an enforcement of positivity on the heating tallies in the test_photon_heating test in tests/unit_tests/weightwindows/test.py, suggesting that these tallies should always be positive, so I want to square that.

Thanks for digging into this @jtramm! I think your notes, and in particular the latest comment, summarize it well. The test from #3863 was meant to check for a systematic negative tally due to a bug involving weight windows, and I agree it would be good to determine a better test metric, given that negative tally values are possible due to the way we sample photons with our electron treatment. Tricky, since the symptom of that bug is also negative tally values, but I'll put some thought into it.

@paulromano left a comment

I think the possible negative heating may have more to do with thick-target bremsstrahlung because the sampling of photons from the TTB data doesn't necessarily preserve the original energy of the electron/positron. That could be easily tested out by turning it off (settings.electron_treatment = 'led').

Would love to hear @amandalund's thoughts on this one.


GuySten commented Mar 12, 2026

I've opened PR #3874 to reduce how often negative electron energies occur.

