
Experiment: force debug_assertions in consteval context #97467

Closed
clarfonthey wants to merge 1 commit into rust-lang:master from clarfonthey:cursed_eval_debug_assertions

Conversation

@clarfonthey (Contributor) commented May 27, 2022

This is probably very cursed, but, I saw this as a potential avenue to problems like #95332 and decided to just try and implement it since it felt comically simple.

A lot of unsafe code uses debug assertions to verify safety constraints in debug mode, even though they're compiled out in release mode. Since compile time is expected to be longer in release mode anyway due to optimisations, why not enable these assertions during consteval too, if it means we'll catch more problems? While a fully featured UB checker would be helpful, it's far easier to just reuse all of the checks that unsafe code already carries to make sure it's working correctly.
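To illustrate the pattern this PR wants to exploit, here is a minimal sketch (the function name is hypothetical, not from the PR): an `unsafe fn` guards its safety precondition with a `debug_assert!`, which catches violations in debug builds but compiles away entirely in release builds.

```rust
/// Hypothetical helper illustrating the pattern.
///
/// # Safety
/// `index` must be less than `slice.len()`.
unsafe fn get_unchecked_checked(slice: &[u8], index: usize) -> u8 {
    // In debug builds this assertion catches a violated precondition;
    // in release builds it is compiled out, and a violation is UB.
    debug_assert!(index < slice.len(), "index out of bounds");
    *slice.get_unchecked(index)
}

fn main() {
    let data = [1u8, 2, 3];
    // Safe call: the precondition holds.
    let v = unsafe { get_unchecked_checked(&data, 1) };
    println!("{v}");
}
```

The idea is that if such a function is called during const evaluation, forcing the assertion on would surface the bug at compile time instead of silently evaluating UB.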

Of course, I have no idea how well this will work, or if it will cause a serious dip in performance, but the code is so small I figured I'd open a PR anyway and see what people think about this idea.

I did try testing and noticed a few UI tests that explicitly look for the UB-checker behaviour and disable the debug-assert behaviour, and I'm not sure how to deal with those. But other than that, everything appears to work with this turned on.


Note: I think a full implementation of this would involve modifying conditional compilation in constants so that cfg!(debug_assertions) always returns true in consteval, but that's far more complicated, since it would mean actually compiling things twice for consteval. Plus, even if we modify the cfg! macro, #[cfg] would remain unchanged.

This is basically a hack to just apply this to debug_assert! for now, and we can figure out the finer details later.
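A small sketch of why the two forms behave differently (this is standard Rust behaviour, not anything the PR changes): `cfg!` expands to a boolean constant, so both branches are still compiled and a CTFE-time override could in principle flip its value, whereas `#[cfg]` removes code before const evaluation ever runs.

```rust
fn main() {
    // cfg! expands to `true` or `false`; both arms are type-checked and
    // present in the crate, so a consteval-only override is conceivable.
    let via_macro = cfg!(debug_assertions);
    println!("cfg! sees debug_assertions = {via_macro}");

    // #[cfg] strips the item during early compilation, before const
    // evaluation happens, so a CTFE-only override cannot reach it.
    #[cfg(debug_assertions)]
    {
        println!("this block only exists in debug builds");
    }
}
```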


Okay, I'm genuinely unsure how CI tests managed to pass for this when ./x.py test --stage 1 failed locally. But 🤷🏻, cool.


Labels

- perf-regression: Performance regression.
- S-waiting-on-review: Status: Awaiting review from the assignee but also interested parties.
- T-libs: Relevant to the library team, which will review and decide on the PR/issue.
