
Core: Upgrade Jetty to 12.1.5#15232

Open
manuzhang wants to merge 1 commit into apache:main from manuzhang:upgrade-jetty

Conversation

@manuzhang (Member) commented on Feb 4, 2026

Pick up @nastra's work in #10837

@manuzhang force-pushed the upgrade-jetty branch 6 times, most recently from 760afee to 76a23f0, on February 12, 2026 at 12:29
@manuzhang marked this pull request as ready for review on February 12, 2026 at 14:39
@manuzhang force-pushed the upgrade-jetty branch 3 times, most recently from b58e186 to 070a318, on February 13, 2026 at 01:51
@manuzhang requested a review from Copilot on February 13, 2026 at 05:44

Copilot AI left a comment


Pull request overview

This PR upgrades Jetty from version 11.0.26 to 12.1.5 and Jakarta Servlet API to 6.1.0, continuing work from PR #10837 after the JDK 17 migration was completed. The upgrade addresses dependency conflicts and modernizes the HTTP server infrastructure used in REST catalog implementations and tests.

Changes:

  • Upgrade Jetty from 11.0.26 to 12.1.5 and Servlet API to 6.1.0 (Jakarta EE 10)
  • Migrate from GzipHandler to CompressionHandler for compression handling
  • Add DummyMetricsServlet to avoid Jetty conflicts in Spark tests with DISABLE_UI_CONFIGS
  • Handle Jetty 12 ETag compression suffixes with stripETagSuffix utility
  • Remove JDK version checks for Spark 4.0/4.1 builds and transitive dependency flags
  • Override supportsNamesWithSlashes() to return false due to Jakarta Servlet 6.0 security changes

Reviewed changes

Copilot reviewed 108 out of 109 changed files in this pull request and generated 12 comments.

Summary per file:

  • gradle/libs.versions.toml: Upgrades the Jetty version to 12.1.5, adds compression module dependencies
  • build.gradle: Adds Jetty compression dependencies, excludes Jetty from Hadoop
  • open-api/src/testFixtures/java/org/apache/iceberg/rest/RESTCatalogServer.java: Migrates to CompressionHandler and ee10 servlet packages
  • core/src/test/java/org/apache/iceberg/rest/*.java: Updates imports to ee10 servlet packages
  • core/src/test/java/org/apache/iceberg/rest/RESTCatalogAdapter.java: Adds stripETagSuffix to handle Jetty 12 ETag compression suffixes
  • spark/v*/spark/src/test/java/org/apache/iceberg/spark/TestBase.java: Adds DISABLE_UI_CONFIGS and a baseConfigs method for consistent test setup
  • spark/v*/spark/src/test/java/org/apache/iceberg/spark/DummyMetricsServlet.java: New dummy servlet to avoid Jetty conflicts in Spark tests
  • spark/v*/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java: Removes the custom @BeforeAll, relies on the parent TestBase setup
  • spark/v*/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java: Uses baseConfigs from TestBase for consolidated configuration
  • spark/v*/build.gradle: Removes JDK version checks, removes transitive=false flags and testRuntimeOnly jetty.servlet
Comments suppressed due to low confidence (4)

The same comment applies to ExtensionsTestBase.java:56 in the spark/v3.4, v3.5, v4.0, and v4.1 spark-extensions tests:

  • Missing shuffle.partitions configuration. The previous implementation explicitly set "spark.sql.shuffle.partitions" to "4", but this configuration is not included in the baseConfigs() method. This could affect test performance or behavior if shuffle partitions defaults to a higher value. Consider adding this configuration to baseConfigs(), or explicitly configuring it here if it is important for consistent behavior across the extensions tests.
    TestBase.spark =
        SparkSession.builder()
            .master("local[2]")
            .config(baseConfigs(hiveConf))
            .config("spark.testing", "true")
            .config("spark.sql.extensions", IcebergSparkSessionExtensions.class.getName())
            .config("spark.sql.hive.metastorePartitionPruningFallbackOnException", "true")
            .config(
                SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))
            .enableHiveSupport()
            .getOrCreate();
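If preserving the old shuffle setting matters, one way to address the four flagged comments would be to fold it into the shared configuration. The sketch below is a hypothetical stand-in: the PR's real baseConfigs(hiveConf) takes a Hive configuration and its body is not shown in this excerpt; only "spark.sql.shuffle.partitions" and "spark.ui.enabled" are standard Spark settings.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a shared base-config map, NOT the PR's actual
// baseConfigs(hiveConf) implementation. It only illustrates where the
// shuffle-partitions setting flagged by the review could live.
public class BaseConfigsSketch {

  public static Map<String, String> baseConfigs() {
    Map<String, String> configs = new HashMap<>();
    // Disable the Spark UI so Spark's bundled Jetty server is never started.
    configs.put("spark.ui.enabled", "false");
    // Previously set explicitly in each test class; keeping it here would
    // preserve the old behavior across all extensions tests.
    configs.put("spark.sql.shuffle.partitions", "4");
    return configs;
  }
}
```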


/**
 * Strips the compression suffix from an ETag value.
 *
 * <p>Jetty 12 appends "--gzip" to ETags when compression is enabled. This method removes such
 * suffixes.
 */

Contributor

was that causing test failures or why is this needed exactly?

Member Author

There's an explanation here: jetty/jetty.project#13047 (comment)

Contributor

thanks for the pointer. Was this causing test failures? I don't see etags with --gzip when running with the changes from #10837, so I wonder whether you ran into issues here? I'm wondering whether that started to happen after switching to the new Compression API

Member Author

Yes, I believe that's due to the new compression API.
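For readers following along, the utility under discussion can be sketched roughly as follows. This is a minimal sketch, not the PR's actual implementation: the method name stripETagSuffix comes from the diff, while the exact suffix handling below is an assumption based on Jetty 12's documented convention of appending "--gzip" (and similar "--<coding>" markers) inside the quoted ETag value.

```java
// Minimal sketch of an ETag suffix stripper. Assumes Jetty 12's convention
// of embedding a "--gzip"-style compression marker before the closing quote,
// e.g. "abc123--gzip" for the original ETag "abc123".
public class ETags {

  public static String stripETagSuffix(String etag) {
    if (etag == null) {
      return null;
    }
    // Look for the compression marker inside the ETag value.
    int idx = etag.indexOf("--");
    if (idx < 0) {
      return etag; // no compression suffix present
    }
    // Keep everything before the suffix, restoring the closing quote if any.
    boolean quoted = etag.endsWith("\"");
    return etag.substring(0, idx) + (quoted ? "\"" : "");
  }
}
```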

public class TestAggregatePushDown extends CatalogTestBase {

@BeforeAll
public static void startMetastoreAndSpark() {
Contributor

why is this being removed? I think it would be helpful if you could point that out for reviewers, since it's not always obvious why a certain change is being made

Member Author

Sorry for not pointing this out earlier. While disabling the UI configs everywhere a SparkSession is created, I found no difference in test results when removing it. Hence, I think it's redundant and we can use the SparkSession from TestBase.

protected static JavaSparkContext sparkContext = null;
protected static HiveCatalog catalog = null;
// Disable Spark UI and MetricsServlet to avoid dependency conflicts with Spark's Jetty 11
public static final Map<String, Object> DISABLE_UI_CONFIGS =
Contributor

I believe the PR should really only disable UI configs and not refactor configs in general

Member Author

Otherwise, I'd have to copy and paste these configs everywhere a SparkSession is created.
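The shared map being debated could look roughly like the sketch below. Only "spark.ui.enabled" and "spark.ui.showConsoleProgress" are standard Spark settings; the exact keys and the DummyMetricsServlet wiring used by the PR are not visible in this excerpt, so treat this as an illustration rather than the actual DISABLE_UI_CONFIGS contents.

```java
import java.util.Map;

// Illustrative sketch of a shared "disable UI" config map for test
// SparkSessions. The real DISABLE_UI_CONFIGS in the PR may differ.
public class DisableUiConfigs {

  public static final Map<String, Object> DISABLE_UI_CONFIGS =
      Map.of(
          // Turn off the Spark web UI so Spark's bundled Jetty is never
          // started, avoiding conflicts with the Jetty 12 used in tests.
          "spark.ui.enabled", "false",
          // Also a standard Spark setting, commonly disabled in tests.
          "spark.ui.showConsoleProgress", "false");
}
```

Defining the map once and spreading it into each SparkSession builder is what avoids the copy-paste the author mentions.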

Co-authored-by: Eduard Tudenhoefner <etudenhoefner@gmail.com>
