Do not panic in `Itertools::get` for large ranges #1089

stepancheg wants to merge 1 commit into rust-itertools:master
Conversation
Codecov Report

❌ Patch coverage is

```
@@            Coverage Diff             @@
##           master    #1089      +/-   ##
==========================================
- Coverage   94.38%   93.90%   -0.49%
==========================================
  Files          48       51       +3
  Lines        6665     6513     -152
==========================================
- Hits         6291     6116     -175
- Misses        374      397      +23
```
Note that arguably the correct fix, if that's the problem you're seeing, is to have validation on your handler that returns HTTP 400 for excessively large inputs. Especially for a paging API, even walking through a million pages is impractical and probably worth discouraging, let alone 16 billion billion.
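To make the suggestion concrete, here is a minimal sketch of that kind of up-front validation. The names `MAX_PAGE` and `parse_page` are hypothetical, not part of itertools or any web framework; the idea is just to reject absurd paging parameters before any iterator code ever sees them.

```rust
// Hypothetical upper bound for a paging API; anything above this gets an
// HTTP 400 instead of reaching iterator code that might panic.
const MAX_PAGE: u64 = 10_000;

/// Parses and validates a raw `page` query parameter. The error string is
/// what a handler would put in a 400 response body.
fn parse_page(raw: &str) -> Result<u64, String> {
    let page: u64 = raw
        .parse()
        .map_err(|_| format!("invalid page parameter: {raw:?}"))?;
    if page > MAX_PAGE {
        return Err(format!("page {page} exceeds maximum of {MAX_PAGE}"));
    }
    Ok(page)
}

fn main() {
    assert_eq!(parse_page("3"), Ok(3));
    // u64::MAX parses fine, but the bound check rejects it.
    assert!(parse_page("18446744073709551615").is_err());
    assert!(parse_page("not a number").is_err());
}
```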
There could be ten pages total, but the server would still crash because of an assertion inside itertools, and it would work perfectly fine with this fix. As for validation, it could happen after the iterator is constructed, if it is even needed. My point is: panicking is bad, especially on the server side, and innocent-looking code should not panic. It is hard to debug, and it leads to DoS and outages (remember the recent Cloudflare incident). |
I don't care how many pages there are; the request handler should be rejecting large numbers well before it even talks to the storage that holds the pages, let alone uses itertools on it, the same way it rejects

Panicking in a server is fine; arguably even good. It should be turned into an HTTP 500 to avoid impacting other requests, and you should use your telemetry to investigate those 500s and fix them, by doing things like having better input validation on the incoming requests. That said, I do agree that |
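The "turn a panic into an HTTP 500" pattern above can be sketched with nothing but the standard library's `std::panic::catch_unwind`. The handler, the stand-in assertion, and the numeric status codes here are all illustrative, not a real server API; note also that this only works when the binary is compiled with panic=unwind, which is relevant to the panic=abort point made below.

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

// Hypothetical request handler; the assert stands in for an assertion
// buried inside a library such as itertools.
fn handle_request(max_page: u64) -> Vec<u64> {
    assert!(max_page <= 1_000, "max_page too large");
    (0..max_page).collect()
}

// Isolate the handler so one bad request becomes a 500-style error
// instead of taking down the whole process.
fn serve(max_page: u64) -> u16 {
    match catch_unwind(AssertUnwindSafe(|| handle_request(max_page))) {
        Ok(_) => 200,
        Err(_) => 500, // a real server would log this and feed telemetry
    }
}

fn main() {
    assert_eq!(serve(10), 200);
    assert_eq!(serve(u64::MAX), 500);
}
```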
There could be plenty of places where large numbers may slip in; I just gave the first illustration that came to my mind. Your argument about validation is akin to "just write code without bugs and you don't need a safe language". We love Rust for many reasons, and one of the bigger reasons is that it allows writing safer code, with fewer bugs and with less effort, than say C++. Panicking is not OK; much software is compiled with panic=abort because of binary size, speed, and mostly because writing panic-safe code is extremely hard; I actually doubt it is practically possible. In fact, in both companies I have worked at, everything in production is compiled with panic=abort; I believe Google C++ does not use exceptions for similar reasons, and Google also seems to use panic=abort. But what are we arguing about? What is the harm of making code safer? |
Panic is bad, because the server may crash on user input like `?max_page=18446744073709551615`.

This is a backwards-incompatible change because the return type of `get` changes.
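As a sketch of the panic-free style being argued for, here is how pagination arithmetic can swallow even `max_page=18446744073709551615` by using the standard library's checked arithmetic and returning `Option` instead of asserting. `page_bounds` is a hypothetical helper, not itertools' actual `get`, but it illustrates the same "return an absent value rather than panic" design.

```rust
// Compute the [start, end) element range for a page, or None when the
// request is out of range or the multiplication would overflow.
fn page_bounds(page: u64, page_size: u64, total: u64) -> Option<(u64, u64)> {
    let start = page.checked_mul(page_size)?; // overflow -> None, no panic
    if start >= total {
        return None; // page past the end: absent, not a crash
    }
    let end = start.checked_add(page_size)?.min(total);
    Some((start, end))
}

fn main() {
    assert_eq!(page_bounds(0, 10, 25), Some((0, 10)));
    assert_eq!(page_bounds(2, 10, 25), Some((20, 25)));
    // u64::MAX * 10 overflows; the caller gets None instead of a panic.
    assert_eq!(page_bounds(u64::MAX, 10, 25), None);
}
```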