
Add v1/orders POST endpoint to get orders in batches #4048

Open
m-sz wants to merge 35 commits into main from get-orders-by-uid

Conversation


@m-sz m-sz commented Jan 13, 2026

Description

Aave wants to track specific orders in bulk, given their IDs.

Changes

Adds a POST handler for the v1/orders/lookup endpoint that accepts a list of order UIDs and responds with a vector of their data. The endpoint has a hardcoded limit of 128 orders per request (to fit within the MAX_JSON_BODY_PAYLOAD size).

How to test

Test on staging, query multiple orders using this API.

@m-sz m-sz marked this pull request as ready for review January 23, 2026 14:10
@m-sz m-sz requested a review from a team as a code owner January 23, 2026 14:10

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new endpoint v1/get_orders to retrieve orders in batches using a POST request with a limit of 5000 orders per request. The code adds a new module get_orders_by_uid.rs, modifies api.rs to include the new route, and updates database/orders.rs and orderbook.rs to support fetching multiple orders. I found a missing test case for the MAX_ORDERS_LIMIT validation.

@m-sz

m-sz commented Jan 23, 2026

Tested on sepolia staging:

curl -d "[\"0x4f30bff027049ef2d2f8c8ba17be5e2e452c8fc9ad48273ef44d48c0e210bbc16406ce57254d45d503c7cb3e729aab9ddbbb3ec269738a51\", \"0x4f30bff027049ef2d2f8c8ba17be5e2e452c8fc9ad48273ef44d48c0e210bbc16406ce57254d45d503c7cb3e729aab9ddbbb3ec269738a51\"]" "https://barn.api.cow.fi/sepolia/api/v1/orders/lookup" -H "Content-Type: application/json"

(queries twice for the same order)

Response:

[
    {
        "creationDate": "2026-01-23T14:18:53.912159Z",
        "owner": "0x6406ce57254d45d503c7cb3e729aab9ddbbb3ec2",
        "uid": "0x4f30bff027049ef2d2f8c8ba17be5e2e452c8fc9ad48273ef44d48c0e210bbc16406ce57254d45d503c7cb3e729aab9ddbbb3ec269738a51",
        "availableBalance": null,
        "executedBuyAmount": "3931710370705679403",
        "executedSellAmount": "4000000000000000000",
        "executedSellAmountBeforeFees": "4000000000000000000",
        "executedFeeAmount": "0",
        "executedFee": "68289629294320597",
        "executedFeeToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "invalidated": false,
        "status": "fulfilled",
        "class": "limit",
        "settlementContract": "0x9008d19f58aabd9ed0d60971565aa8510560ab41",
        "isLiquidityOrder": false,
        "fullAppData": "{\"appCode\":\"CoW Swap\",\"environment\":\"barn\",\"metadata\":{\"orderClass\":{\"orderClass\":\"market\"},\"quote\":{\"slippageBips\":101,\"smartSlippage\":true}},\"version\":\"1.12.0\"}",
        "quote": {
            "gasAmount": "98318.00000000000",
            "gasPrice": "3575917434.000000",
            "sellTokenPrice": "0.008587696541803472",
            "sellAmount": "4000000000000000000",
            "buyAmount": "4000000000000000000",
            "feeAmount": "40939622000450720",
            "solver": "0x99b4136666ca1d13020830350ca8d01a0e5e466b",
            "verified": true,
            "metadata": {
                "interactions": [],
                "jitOrders": [],
                "preInteractions": [],
                "version": "1.0"
            }
        },
        "sellToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "buyToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "receiver": "0x6406ce57254d45d503c7cb3e729aab9ddbbb3ec2",
        "sellAmount": "4000000000000000000",
        "buyAmount": "3919073868181753833",
        "validTo": 1769179729,
        "appData": "0x8a8e8bdd353e10a8fb1aabb9594537fba2f1cce8fc655fdeaaa568286b3b53af",
        "feeAmount": "0",
        "kind": "sell",
        "partiallyFillable": false,
        "sellTokenBalance": "erc20",
        "buyTokenBalance": "erc20",
        "signingScheme": "eip712",
        "signature": "0x6cc9e9e2e8826cffb092a8531b103bbc3e70472e6455185af20245d1be14748e13a440f8e19e88db1a47320cff5d7b4a5ac32f07c76f708f94531616f597e8411b",
        "interactions": {
            "pre": [],
            "post": []
        }
    },
    {
        "creationDate": "2026-01-23T14:18:53.912159Z",
        "owner": "0x6406ce57254d45d503c7cb3e729aab9ddbbb3ec2",
        "uid": "0x4f30bff027049ef2d2f8c8ba17be5e2e452c8fc9ad48273ef44d48c0e210bbc16406ce57254d45d503c7cb3e729aab9ddbbb3ec269738a51",
        "availableBalance": null,
        "executedBuyAmount": "3931710370705679403",
        "executedSellAmount": "4000000000000000000",
        "executedSellAmountBeforeFees": "4000000000000000000",
        "executedFeeAmount": "0",
        "executedFee": "68289629294320597",
        "executedFeeToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "invalidated": false,
        "status": "fulfilled",
        "class": "limit",
        "settlementContract": "0x9008d19f58aabd9ed0d60971565aa8510560ab41",
        "isLiquidityOrder": false,
        "fullAppData": "{\"appCode\":\"CoW Swap\",\"environment\":\"barn\",\"metadata\":{\"orderClass\":{\"orderClass\":\"market\"},\"quote\":{\"slippageBips\":101,\"smartSlippage\":true}},\"version\":\"1.12.0\"}",
        "quote": {
            "gasAmount": "98318.00000000000",
            "gasPrice": "3575917434.000000",
            "sellTokenPrice": "0.008587696541803472",
            "sellAmount": "4000000000000000000",
            "buyAmount": "4000000000000000000",
            "feeAmount": "40939622000450720",
            "solver": "0x99b4136666ca1d13020830350ca8d01a0e5e466b",
            "verified": true,
            "metadata": {
                "interactions": [],
                "jitOrders": [],
                "preInteractions": [],
                "version": "1.0"
            }
        },
        "sellToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "buyToken": "0xbe72e441bf55620febc26715db68d3494213d8cb",
        "receiver": "0x6406ce57254d45d503c7cb3e729aab9ddbbb3ec2",
        "sellAmount": "4000000000000000000",
        "buyAmount": "3919073868181753833",
        "validTo": 1769179729,
        "appData": "0x8a8e8bdd353e10a8fb1aabb9594537fba2f1cce8fc655fdeaaa568286b3b53af",
        "feeAmount": "0",
        "kind": "sell",
        "partiallyFillable": false,
        "sellTokenBalance": "erc20",
        "buyTokenBalance": "erc20",
        "signingScheme": "eip712",
        "signature": "0x6cc9e9e2e8826cffb092a8531b103bbc3e70472e6455185af20245d1be14748e13a440f8e19e88db1a47320cff5d7b4a5ac32f07c76f708f94531616f597e8411b",
        "interactions": {
            "pre": [],
            "post": []
        }
    }
]


m-sz commented Jan 23, 2026

By the way, we can have a discussion on what the endpoint itself should be.
This has to be a POST request, since we need to provide a JSON array of order IDs, and /v1/orders obviously creates an order, so there has to be a different name.

I got a couple of suggestions from Claude:

  1. Nested under a "batch" or "bulk" path
    POST /v1/batch/orders
    POST /v1/bulk/orders
    Groups batch operations separately from the main resource routes.
  2. Different verb-like suffix
    POST /v1/orders/fetch
    POST /v1/orders/get
    POST /v1/orders/list
    Less common but unambiguous.
  3. Underscore-prefixed action
    POST /v1/orders/_search
    POST /v1/orders/_mget
    Elasticsearch uses this convention. The underscore signals "this is a special action, not a resource."
  4. Hyphenated compound resource
    POST /v1/order-lookups
    POST /v1/order-queries
    Treats the bulk lookup as its own resource type.
  5. RPC-style separate namespace
    POST /v1/rpc/get-orders
    POST /v1/actions/get-orders

Based on the above suggestions I have opted for a custom name that cannot be mistaken for an order ID, /v1/orders/lookup (to look up many orders), but I am open to discussion and don't have a strong opinion.

@m-sz m-sz marked this pull request as draft January 26, 2026 12:25
@m-sz m-sz marked this pull request as ready for review February 5, 2026 13:58

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

The implementation adds a new endpoint to fetch orders in batches. A critical issue was found in crates/orderbook/src/database/orders.rs where await is used on a stream, which will cause a compilation error. The implementation also inefficiently collects two full vectors into memory; a suggestion is provided to fix the bug and improve performance by chaining the streams.

@m-sz m-sz marked this pull request as draft February 5, 2026 14:05

@fafk fafk left a comment


LGTM. The endpoint should be documented in openapi.yml too I think.

@github-actions

This pull request has been marked as stale because it has been inactive for a while. Please update this pull request or it will be automatically closed.

@github-actions github-actions bot added the stale label Feb 13, 2026
@github-actions

Reminder: Please consider backward compatibility when modifying the API specification.
If breaking changes are unavoidable, ensure:

  • You explicitly pointed out breaking changes.
  • You communicate the changes to affected teams (at least Frontend team and SAFE team).
  • You provide proper versioning and migration mechanisms.


@m-sz m-sz marked this pull request as ready for review February 17, 2026 14:24

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new endpoint for fetching multiple orders by their UIDs. It has a critical potential Denial of Service vulnerability due to database connection pool exhaustion when streaming results. Additionally, the API route is incorrectly defined as GET instead of POST, the database logic for fetching multiple orders has a lifetime issue and silently ignores errors, and the new end-to-end test contains a compilation error.

Comment on lines 321 to 346
async fn many_orders(&self, uids: &[OrderUid]) -> Result<BoxStream<'static, Result<Order>>> {
    let _timer = super::Metrics::get()
        .database_queries
        .with_label_values(&["many_orders"])
        .start_timer();
    let uids = uids.iter().map(|uid| ByteArray(uid.0)).collect::<Vec<_>>();
    let mut ex = self.pool.acquire().await?;

    Ok(async_stream::try_stream! {
        {
            let mut stream = orders::many_full_orders_with_quotes(&mut ex, uids.as_slice()).await;
            while let Some(order) = stream.next().await {
                let Ok(order) = order else { continue };
                let (order, quote) = order.into_order_and_quote();
                yield full_order_with_quote_into_model_order(order, quote.as_ref())?;
            }
        }
        {
            let mut stream = database::jit_orders::get_many_by_id(&mut ex, uids.as_slice()).await;
            while let Some(order) = stream.next().await {
                let Ok(order) = order else { continue };
                yield full_order_into_model_order(order)?;
            }
        }
    }.boxed())
}
Contributor


security-high high

The many_orders function acquires a database connection and captures it within an async_stream. This can lead to a Denial of Service vulnerability due to database connection pool exhaustion if the stream is used as a response body and clients hold connections open. Additionally, the database connection ex is a local variable that is dropped, but the returned 'static stream captures ex by reference, which will lead to a use-after-free error and likely a compilation error due to lifetime mismatch. It is recommended to collect the results into a Vec<Order> and return the collection instead of a stream to ensure the database connection is released promptly.

Contributor Author


This has been chosen as a trade-off against collecting all order data in memory. It allows for a streaming return of the response.

Contributor


Additionally, the database connection ex is a local variable that is dropped, but the returned 'static stream captures ex by reference, which will lead to a use-after-free error and likely a compilation error due to lifetime mismatch.

Isn't that an important concern?


@m-sz m-sz Feb 19, 2026


This is not an actual concern, since the async-stream macro expands into an async move block into which the value is moved.

Excerpt from cargo expand on the function:

let mut ex = __self.pool.acquire().await?;
Ok({
    let (mut __yield_tx, __yield_rx) =
        unsafe { ::async_stream::__private::yielder::pair() };
    ::async_stream::__private::AsyncStream::new(__yield_rx,
        async move
            {
            '__async_stream_private_check_scope:
                {
                {
                    let mut stream =
                        orders::many_full_orders_with_quotes(&mut ex,
                                uids.as_slice()).await;
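The ownership principle at work in the expansion can be seen in miniature with a plain `move` closure: a local moved into the returned value outlives the function that created it, so there is no use-after-free. This is a standalone sketch, not code from the PR.

```rust
// Standalone sketch (not from the PR): a `move` closure takes ownership of
// its captures, just as the expanded `async move` block takes ownership of
// `ex`, so the local outlives the function body that created it.
fn make_counter() -> impl FnMut() -> u32 {
    let mut n = 0; // moved into the closure below, not dropped on return
    move || {
        n += 1;
        n
    }
}

fn main() {
    let mut counter = make_counter();
    assert_eq!(counter(), 1);
    assert_eq!(counter(), 2); // state persists inside the moved capture
}
```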

{
    let mut stream = orders::many_full_orders_with_quotes(&mut ex, uids.as_slice()).await;
    while let Some(order) = stream.next().await {
        let Ok(order) = order else { continue };
Contributor


high

Database errors from the stream are silently ignored. If an error occurs while fetching from the database, let Ok(order) = order else { continue }; will skip the erroneous item and continue, potentially returning an incomplete list of orders without any error indication. The error should be propagated using the ? operator. This also applies to the loop on line 341.

Suggested change
let Ok(order) = order else { continue };
let order = order?;

Contributor Author


There is no way to propagate errors as we are returning an array of orders that were fetched. The end-user of the API can cross-reference which orders were not returned.

Contributor


Wdym? full_order_with_quote_into_model_order returns a result. If stream.next() returned an Err, we should tell the client that this order is missing in the response, not because it doesn't exist, but because we encountered an error during the process. Silencing the error doesn't fit here at all.

Contributor Author


Okay, I will re-work the response format to include which orders had an error at this stage.
It is not feasible to figure out which orders were missing, though, as this would require internally mapping the requested vs returned orders; I believe that's best done on the request side, not as part of handling this request.

m-sz added 2 commits February 17, 2026 16:38
Adjusts ORDER_UID_LIMIT to 128 so that it can actually take effect,
instead of the MAX_JSON_BODY_PAYLOAD limit always triggering first.

The MAX_JSON_BODY_PAYLOAD is set to 16384.
OrderUid is 56 bytes; when encoded in a JSON array of 0x-prefixed hex strings,
each takes around 116 bytes. The new value for ORDER_UID_LIMIT is 128.

(128 * 116 = 14848)
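The arithmetic in the commit message can be sanity-checked with a standalone snippet (not part of the PR; `lookup_body` is a hypothetical helper). The 116-byte figure is the two quotes, the `0x` prefix, and 112 hex characters for a 56-byte UID:

```rust
// Standalone sketch (names hypothetical): checks the back-of-envelope
// payload arithmetic for the 128-order request limit.
fn lookup_body(n: usize) -> String {
    // A 56-byte order UID hex-encodes to 112 chars; as a JSON string element
    // it gains the 0x prefix and two quotes: 2 + 2 + 112 = 116 bytes.
    let element = format!("\"0x{}\"", "ab".repeat(56));
    format!("[{}]", vec![element; n].join(","))
}

fn main() {
    let body = lookup_body(128);
    // 128 elements * 116 bytes + 127 commas + the two brackets = 14977 bytes,
    // comfortably under the 16384-byte MAX_JSON_BODY_PAYLOAD.
    assert_eq!(body.len(), 128 * 116 + 127 + 2);
    assert!(body.len() < 16384);
}
```

The commit's `128 * 116 = 14848` figure ignores the separators; including the commas and brackets the body is still under the 16 kB cap.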
@m-sz m-sz requested a review from squadgazzz February 17, 2026 16:48
@github-actions github-actions bot removed the stale label Feb 18, 2026
pub const ORDER_UID_LIMIT: usize = 1024;
/// OrderUid is 56 bytes. When hex-encoded as a 0x-prefixed JSON string it is 116.
/// Chosen to be under the MAX_JSON_BODY_PAYLOAD size of 1024 * 16.
pub const ORDER_UID_LIMIT: usize = 128;
Contributor


I am not sure I am following. The PR description mentions 5000 orders. Here we have a limit of 128 orders.

Contributor Author


I updated the PR description now.
I have been doing some tests on the allowable payload size, and if we don't want to bump up the MAX_JSON_BODY_PAYLOAD size of 16 kB, then this is the limit of how many orders we can put into a single request.

Maybe we want to remove the limiting altogether since the body size limit would be hit first?


@squadgazzz squadgazzz Feb 19, 2026


I don't understand the idea here. AAVE requested an api to receive batches of up to 5k orders. Now you are saying we shouldn't exceed 16kb for a request. That doesn't match. Also, if we end up with the 16kb request limit, it doesn't make any sense to introduce that complex streaming solution.

Contributor


Maybe we want to remove the limiting altogether since the body size limit would be hit first?

Removing limiting is not an option for untrusted endpoints. The simple solution here would be to bump the limit.

A more complicated solution would be to have a filter that decides whether a request gets the small limit or the big limit, would of course require coordination, etc.

My point being: don't remove the limits, bump them

Contributor Author


I have confirmed with Aave on the call that 128 orders per request is fine. We can always revisit the limit at a later point in time.


@squadgazzz squadgazzz Feb 19, 2026


I have confirmed with Aave on the call that 128 orders per request is fine. We can always revisit the limit at a later point in time.

Ok, then this stream solution doesn't make sense here. IMO, it is overkill. I originally suggested it for the 5k orders request.

Contributor Author


I would say let's keep this approach anyway, since it's lighter on our services' resources, and if Aave would ever like us to expand the max request body limit/requested order count, it will just be a change of the constants.


@squadgazzz squadgazzz Feb 19, 2026


I would say let's keep this approach anyway, since it's lighter on our services' resources, and if Aave would ever like us to expand the max request body limit/requested order count, it will just be a change of the constants.

I want to strongly avoid overcomplicating an already non-trivial codebase. This stream communication seems absolutely unnecessary for a simple response that can contain up to 128 orders. The get_orders API has a page limit of 1000 with no streaming!

Comment on lines 66 to 76
fn streaming_response(orders: impl Stream<Item = Order> + Send + 'static) -> Response {
    let json_stream = streaming_json_array(
        orders.filter_map(async move |order| serde_json::to_string(&order).ok()),
    )
    .map(|s| Ok::<_, std::convert::Infallible>(s.into_bytes()));
    (
        [("content-type", "application/json")],
        body::Body::from_stream(json_stream),
    )
        .into_response()
}
Contributor


Is AAVE fine with this approach?

{
    let mut stream = database::jit_orders::get_many_by_id(&mut ex, uids.as_slice()).await;
    while let Some(order) = stream.next().await {
        let Ok(order) = order else { continue };

@squadgazzz squadgazzz Feb 19, 2026


Ditto (the error needs to be propagated to the client the same way as on the line below)

Contributor Author


I will rework the error handling.

Contributor


Before wasting time on polishing this streaming solution, let's agree on the overall design.


@jmg-duarte jmg-duarte left a comment


The PR looks almost there, but considering you're in touch with AAVE, I'd suggest reviewing the shape of the response with them, as JSON is not a stream-friendly format at all.


Comment on lines 53 to 64
fn streaming_json_array(
    elements: impl Stream<Item = String> + Send + 'static,
) -> impl Stream<Item = String> + Send + 'static {
    let mut first = true;
    stream::once(async { "[".to_string() })
        .chain(elements.map(move |element| {
            let prefix = if first { "" } else { "," };
            first = false;
            format!("{prefix}{element}")
        }))
        .chain(stream::once(async { "]".to_string() }))
}
Contributor


To me this looks like a complicated way of implementing JSONL; would AAVE be ok with that instead? It's much more stream-friendly, and it would also be easier for them to parse since each line is:

  • delimited with a newline
  • a valid JSON object that they can parse (as opposed to waiting for the whole array)
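A minimal sketch of the JSONL framing suggested here, with synchronous iterators standing in for the streams (not code from the PR; `jsonl` is a hypothetical helper):

```rust
// Standalone sketch: JSONL emits one self-contained JSON document per line,
// so a consumer can parse each line as it arrives instead of waiting for
// the closing `]` of a streamed JSON array.
fn jsonl<I: IntoIterator<Item = String>>(elements: I) -> String {
    elements
        .into_iter()
        .map(|e| format!("{e}\n")) // newline-delimited, no brackets or commas
        .collect()
}

fn main() {
    let orders = vec![
        r#"{"uid":"0x01"}"#.to_string(),
        r#"{"uid":"0x02"}"#.to_string(),
    ];
    let body = jsonl(orders);
    assert_eq!(body, "{\"uid\":\"0x01\"}\n{\"uid\":\"0x02\"}\n");
    // Each line is an independently parseable JSON object.
    assert_eq!(body.lines().count(), 2);
}
```

Compared with the array framing above, there is no shared state between elements (no `first` flag) and a partially received body is still usable line by line.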

Contributor Author


I will go ahead and make the change while waiting for Aave's response on the matter, since I presume they have no preference and JSONL is indeed more fitting here.

