Conversation
```python
    rate_limit_pool[req.model] += 1
except RateLimitError as e:
    # reduce rate limit by half and put to the back of the queue
    rate_limit_pool[req.model] /= 2
```
What about refactoring this into a backoff function so the user can control the strategy? Halving (`/2`) may be too aggressive.
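One possible shape for that, as a minimal sketch: a strategy object with separate failure/success hooks. All names here are illustrative, not from this PR:

```python
from typing import Callable

# Illustrative only: a strategy maps the current per-model limit to a new one.
AdjustFn = Callable[[float], float]

def halve(limit: float) -> float:
    """Current behavior: multiplicative decrease."""
    return limit / 2

def decrement(limit: float) -> float:
    """Gentler additive decrease, never dropping below 1."""
    return max(limit - 1.0, 1.0)

class RateLimitPolicy:
    """Lets callers swap the failure/success adjustments independently."""
    def __init__(self,
                 on_failure: AdjustFn = halve,
                 on_success: AdjustFn = lambda limit: limit + 1):
        self.on_failure = on_failure
        self.on_success = on_success
```

The worker would then call `policy.on_failure(rate_limit_pool[req.model])` in the `except RateLimitError` branch instead of hard-coding `/= 2`.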
```python
    semaphore.release()


# --- Worker Thread Function ---
def worker(semaphore, task_queue):
```
`worker` is single-process, multi-threaded. I'm not sure whether it will become a performance bottleneck. Can we write a short piece of code to test its performance limit?
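A rough micro-benchmark along those lines, measuring how many no-op tasks per second a single-process thread pool can drain. This isolates queue/GIL overhead; real throughput will be lower once network calls are added, but it gives the ceiling:

```python
import threading
import time
from queue import Empty, Queue

def bench_worker_throughput(n_threads: int, n_tasks: int = 200_000) -> float:
    """Return tasks/sec drained by n_threads consumers of a shared queue."""
    q: Queue = Queue()
    for i in range(n_tasks):
        q.put(i)

    def drain() -> None:
        while True:
            try:
                q.get_nowait()
            except Empty:
                return

    start = time.perf_counter()
    threads = [threading.Thread(target=drain) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return n_tasks / (time.perf_counter() - start)

if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(f"{n:>3} threads: {bench_worker_throughput(n):,.0f} tasks/sec")
```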
| result = {"result": {**response.model_dump(), "_hidden_params": response._hidden_params, "_response_headers": response._response_headers}} | ||
| job_results[job_id] = result | ||
| # increase rate limit by 1 | ||
| rate_limit_pool[req.model] += 1 |
This should also be refactored into a separate function so the user can decide the strategy. In some cases we don't need to increase the rate limit.
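Building on the hypothetical `RateLimitPolicy` sketch above, the success path would delegate instead of hard-coding `+= 1`; a no-op strategy then covers the cases where the limit should stay put:

```python
# Delegate the success-side adjustment to the pluggable policy.
rate_limit_pool[req.model] = policy.on_success(rate_limit_pool[req.model])

# e.g., a policy that never raises the limit after a success:
# policy = RateLimitPolicy(on_success=lambda limit: limit)
```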
```python
except RateLimitError as e:
    # reduce rate limit by half and put to the back of the queue
    rate_limit_pool[req.model] /= 2
    task_queue.put((job_id, req))
```
This assumes the requester won't time out. That is probably fine for cognify, but not necessarily for a generic rate limiter. Can you add a comment?
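Beyond a comment, one way to make the assumption explicit is a deadline check before requeueing. `req.deadline` here is a hypothetical field, not part of this PR:

```python
import time

# NOTE: requeueing assumes the requester keeps waiting indefinitely. That is
# acceptable for cognify's batch-style callers, but a generic rate limiter
# should bound the wait, e.g. via a client-supplied deadline (hypothetical):
if getattr(req, "deadline", None) is not None and time.time() > req.deadline:
    job_results[job_id] = {"error": "deadline exceeded while rate limited"}
else:
    task_queue.put((job_id, req))
```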
```python
        **model_kwargs
    )


    # response = completion(
```
If the client does not want the rate limiter (e.g., to simplify debugging by avoiding another HTTP endpoint, or in replay mode), can they revert to the non-rate-limited code path?
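A sketch of one possible opt-out, assuming the litellm-style `completion` call visible in the commented-out diff line; the flag name and helper are hypothetical:

```python
import os

from litellm import completion  # the direct, non-rate-limited call

def call_model(req, model_kwargs, use_rate_limiter: bool = True):
    """Bypass the limiter service for debugging or replay mode."""
    if not use_rate_limiter or os.getenv("DISABLE_RATE_LIMITER") == "1":
        # Hypothetical escape hatch: call the provider directly, with no
        # extra HTTP hop through the rate-limiter endpoint.
        return completion(model=req.model, **model_kwargs)
    return submit_to_rate_limiter(req, model_kwargs)  # hypothetical helper
```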
To work around API provider rate limits, we currently queue all outgoing requests by model type and dispatch them with rate control and retry.
The limiter's port is configurable via `-p` or `--rate_limit_port`.
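Pulling the diff fragments above together, the dispatch loop roughly looks like this. This is a condensed sketch with names simplified from the diff, assuming `completion` and `RateLimitError` as exposed by litellm:

```python
import threading
from queue import Queue

from litellm import RateLimitError, completion

rate_limit_pool: dict = {}   # model name -> current request budget
job_results: dict = {}       # job_id -> result payload or error
task_queue: Queue = Queue()  # holds (job_id, req) tuples

def worker(semaphore: threading.Semaphore, task_queue: Queue) -> None:
    while True:
        job_id, req = task_queue.get()
        semaphore.acquire()
        try:
            response = completion(model=req.model, **req.model_kwargs)
            job_results[job_id] = {"result": response.model_dump()}
            rate_limit_pool[req.model] += 1   # widen the budget on success
        except RateLimitError:
            rate_limit_pool[req.model] /= 2   # back off on a 429
            task_queue.put((job_id, req))     # retry from the back of the queue
        finally:
            semaphore.release()
```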