Mill is built using Mill. The `./mill` script will be sufficient to bootstrap the project.
When starting out, please run:
1. Run `./mill --help` for a quick primer on how to use `./mill` as a build tool.
2. Run `./mill visualizeMillModules` to see the dependency structure of the codebase.
3. Import the code into IntelliJ IDEA.
4. Make code changes.
5. Run `./mill __.compile` to make sure you didn't screw up anything simple (you can use `-w`/`--watch` for convenience).
6. Run specific unit tests (if any), e.g. `./mill core.api.test` or `./mill core.api.test mill.api.PathRefTests`.
7. Test manually via `./mill dist.run scratch foo.scala`.
8. Run specific integration/example tests, e.g. `./mill integration.feature.full-run-logs` or `./mill example.javalib.basic.1-script`.
9. Run broader integration/unit tests, e.g. `./mill core._.test` or `./mill example.javalib.basic.`
10. Push to a PR and run all tests in CI.
11. If anything fails, go back to (2.).
You can run manual testing or automated tests using `./mill` in three configurations, described in the sections below: in-process tests, sub-process tests, and bootstrapping (building Mill with your current checkout of Mill). Each configuration supports some combination of automated testing, manual testing, and manual testing in CI.
In general, `println` or `pprint.log` should work in most places and be sufficient for
instrumenting and debugging the Mill codebase. In the occasional spot where `println`
doesn't work, you can use `mill.constants.DebugLog.println(...)`/`mill.api.Debug(...)`, which
write to a file `~/mill-debug-log.txt` in your home folder. `DebugLog`/`Debug` are useful for
scenarios like debugging Mill's terminal UI (where `println` would mess things up) or
subprocesses (where stdout/stderr may get captured or used, and cannot be used to display
your own debug statements).
In-process tests live in the `.test` sub-modules of the various Mill modules.
These range from tiny unit tests to larger integration tests that instantiate a
`mill.testkit.BaseModule` in-process and use a `UnitTester` to evaluate tasks on it.
Most "core" tests live in `core.__.test`; these should run almost instantly, and cover
most of Mill's functionality that is not specific to Scala/Scala.js/etc.
Tests specific to Scala/Scala.js/Scala Native live in
`scalalib.test`/`scalajslib.test`/`scalanativelib.test` respectively, and take a lot longer
to run because running them involves actually compiling Scala code.
The various `contrib` modules also have tests in this style, e.g.
`contrib.buildinfo.test`.
Note that the in-memory tests compile the `BaseModule` together with the test suite,
and do not exercise the Mill script-file bootstrapping, transformation, and compilation process.
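The general shape of such an in-process test is sketched below (the module, task, and the `UnitTester` arguments are illustrative, and the exact `mill.testkit` signatures may differ between Mill versions):

```scala
import mill.*
import mill.testkit.{BaseModule, UnitTester}
import utest.*

object GreetingTests extends TestSuite {
  // A small build module instantiated in-process, compiled together
  // with the test suite itself
  object testBuild extends BaseModule {
    def greeting = Task { "hello" }
  }

  def tests = Tests {
    test("evaluateTask") {
      // UnitTester evaluates tasks on the module without spawning a
      // Mill subprocess; the second argument is a resource folder used
      // as the workspace (null here as a stand-in for this sketch)
      val tester = UnitTester(testBuild, null)
      val Right(result) = tester(testBuild.greeting)
      assert(result.value == "hello")
    }
  }
}
```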
These are tests that involve bootstrapping an entire Mill subprocess to exercise its functionality: typically end-to-end tests that verify user workflows, or integration tests that assert on behavior which is difficult to cover via unit tests.
Automated Test

> ./mill example.javalib.basic.3-simple
> ./mill integration.feature.inspect

For automated test runs, you can see the state of the workspace and `out/` folder contents in
the respective test sandboxes, e.g. `out/integration/feature/inspect/testForked.dest/sandbox/run-1/`.
This is useful for analyzing why the test passed or failed.
Manual Test

> ./mill dist.run example/javalib/basic/3-simple -i foo.run --text hello

Manual Test using Launcher Script

> ./mill dist.launcher && (cd example/javalib/basic/3-simple && ../../../../out/dist/launcher.dest/run foo.run --text hello)

`example` tests are written in a single `build.mill` file, with the test commands written
in a comment with a bash-like syntax, together with the build code and comments that explain
the example. These serve three purposes:
- Basic smoke-tests to make sure the functionality works at all, without covering every edge case
- User-facing documentation, with the test cases, test commands, and explanatory comments included in the Mill documentation site
- Example repositories that Mill users can download to bootstrap their own projects
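A sketch of what such an `example` test's `build.mill` might look like (the module and expected output are illustrative, not taken from an actual example in the repository):

```scala
// build.mill
package build
import mill.*, javalib.*

// A minimal Java module the example demonstrates
object foo extends JavaModule

/** Usage

> ./mill foo.run --text hello
hello

*/
```

The `Usage` comment holds the bash-like test commands and their expected output; the test infrastructure runs the commands and checks the output, while the same file doubles as documentation and as a downloadable example project.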
The `integration` tests are similar to `example` tests and share most of their test
infrastructure, but with some differences:

- `integration` tests are meant to test features more thoroughly than `example` tests, covering more and deeper edge cases even at the expense of readability
- `integration` tests are written as Scala test suites extending `IntegrationTestSuite`, giving more flexibility, again at the expense of readability
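The rough shape of such a suite is sketched below (the suite name, task selector, and helper method names are assumptions for illustration; check `IntegrationTestSuite` and its subclasses in the codebase for the actual API):

```scala
import mill.testkit.IntegrationTestSuite  // assumed import path
import utest.*

// Sketch: an integration test spawns a full Mill subprocess against a
// test workspace and asserts on the outcome of running a task
object HelloWorldIntegrationTests extends IntegrationTestSuite {
  val tests = Tests {
    test("run") {
      // `eval` here stands in for whatever helper the suite provides
      // to invoke `./mill <task>` in the test workspace
      val result = eval("foo.run")
      assert(result.isSuccess)
    }
  }
}
```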
There are eight variants of these tests, `.{packaged,proguard,fast,native}.{daemon,nodaemon}`.
When you run e.g. `./mill integration.feature.full-run-logs`, it picks the default configuration
for the test that is exercised in Mill's CI pipeline, but you can run the other variants manually,
e.g. `./mill integration.feature.full-run-logs.proguard.nodaemon`, which can be useful for manual
testing.
The first label specifies how the Mill code is packaged before testing:

- `.packaged` runs using the compiled Mill code, either packaged into an assembly or into jars.
- `.proguard` is similar to `.packaged`, but uses a Proguard-optimized assembly jar. This is typically not used manually, since it adds ~15s of overhead optimizing the jar, but some CI jobs use it to make sure the Proguarded launcher assemblies work, since that is what we publish.
- `.native` runs using the compiled Mill code packaged into an assembly with a Graal native binary launcher. This is the slowest mode, and is only really necessary for debugging Mill's Graal native binary configuration.
- `.fast` runs without packaging code into jars, and re-uses the `out/` folder of the Mill daemon between runs. This isn't intended for manual usage, but is used in CI to speed up running large numbers of tests.
The second label specifies how the Mill process is managed during the test:

- `.daemon` tests run the test cases with the default configuration, so consecutive commands run in the same long-lived background daemon process
- `.nodaemon` tests run the test cases with `--no-daemon`, meaning each command runs in a newly spawned Mill process
You can reproduce these tests manually using `dist.raw.installLocal`:

> ./mill dist.raw.installLocal && (cd example/javalib/basic/3-simple && ../../../../mill-assembly.jar run --text hello)

You can also use `dist.native.installLocal` for a Graal Native Image executable
(which is slower to create but faster to start than the default executable assembly),
or `dist.installLocal`, which is the Proguarded launcher assembly we publish (also
slower to create and faster to start than the raw jar).
To test bootstrapping of Mill's own Mill build using a version of Mill built from your checkout, you can run:

> ./mill dist.installLocal
> ci/patch-mill-bootstrap.sh

This creates a standalone assembly at `mill-assembly.jar` you can use, which references jars
published locally in your `~/.ivy2/local` cache, and applies any necessary patches to
`build.mill` to deal with changes in Mill between the version specified in `.config/mill-version`
(that is normally used to build Mill) and the `HEAD` version your assembly was created from.
You can then use this standalone assembly to build & re-build your current Mill checkout without
worrying about stomping over compiled code that the assembly is using.
You can also use `./mill dist.installLocalCache` to provide a "stable" version of Mill that
can be used locally in bootstrap scripts.

This assembly is designed to work on bash, bash-like shells, and Windows Cmd.
If you have another default shell like zsh or fish, you probably need to invoke it with
`sh ~/mill-release` or prepend the file with a proper shebang.
If you want to install into a different location or a different Ivy repository, you can set its optional parameters:

/tmp$ ./mill dist.installLocal --binFile /tmp/mill --ivyRepo /tmp/millRepo
...
Published 44 modules and installed /tmp/mill

For testing documentation changes locally, you can generate documentation for the current checkout via:

$ ./mill website.fastPages

To generate documentation for both the current checkout and earlier versions, you can use:

$ ./mill website.localPages

In case of trouble with caching and/or incremental compilation, you can always start from scratch by removing the `out` directory:

rm -rf out/

To run all autofixes and autoformatters:

> ./mill __.fix + mill.javalib.palantirformat/ + mill.scalalib.scalafmt/ + mill.kotlinlib.ktlint/

These are run automatically on pull requests, so feel free to pull down the changes if you want
to continue developing after your PR has been autofixed for you.
- Mill's pull-request validation runs with Selective Test Execution enabled; this automatically selects the tests to run based on the code or build configuration that changed in that PR.
- CI uses `selectiveInputs` to customize the selective testing behavior for the following test jobs, so they only run if the corresponding `libs/` module has changed, ignoring changes in the core Mill codebase. This is a heuristic optimization that assumes that bugs in the core Mill code paths will get caught in one of the other jobs.
  - `example.kotlinlib` only runs if `libs/kotlinlib/` changes
  - `example.javascriptlib` only runs if `libs/javascriptlib/` changes
  - `example.pythonlib` only runs if `libs/pythonlib/` changes
  - `example.groovylib` only runs if `libs/groovylib/` changes
  - `example.androidlib` only runs if `libs/androidlib/` changes
  - `example.migrating` only runs if `libs/init/` changes
- To disable selective test execution on a PR, you can label it with `run-all-tests`, which will run all tests on that PR regardless of what code was changed.
- Mill tests draft PRs on contributor forks of the repository, so please make sure GitHub Actions are enabled on your fork. Once you are happy with your draft, mark it `ready_for_review` and it will run CI on Mill's repository before merging.
- If you need to debug things in CI, you can comment/uncomment the two sections of `.github/workflows/run-tests.yml` in order to skip the main CI jobs and only run the command(s) you need, on the OS you want to test on. This can greatly speed up the debugging process compared to running the full suite every time you make a change.
The Mill project is organized roughly as follows:

- `runner`, `main.*`, `scalalib`, `scalajslib`, `scalanativelib`.
  These are generally lightweight and dependency-free: mostly configuration & wiring of a Mill build, without the heavy lifting.
  Heavy lifting is delegated to the worker modules (described below), which the core modules resolve from Maven Central (or from the local filesystem in dev) and load into isolated classloaders.
- `scalalib.worker`, `scalajslib.worker[0.6]`, `scalajslib.worker[1.0]`.
  These modules are where the heavy lifting happens, and include heavy dependencies like the Scala compiler, Scala.js optimizer, etc. Rather than being bundled in the main assembly & classpath, these are resolved separately from Maven Central (or from the local filesystem in dev) and kept in isolated classloaders.
  This allows a single Mill build to use multiple versions of e.g. the Scala.js optimizer without classpath conflicts.
- `contrib/bloop/`, `contrib/flyway/`, `contrib/scoverage/`, etc.
  These are modules that help integrate Mill with the wide variety of different tools and utilities available in the JVM ecosystem.
  These modules are not as stringently reviewed as the main Mill core/worker codebase, and are primarily maintained by their individual contributors. They are kept in the primary Mill GitHub repo for easy testing/updating as the core Mill APIs evolve, ensuring that they are always tested and passing against the corresponding version of Mill.
Mill maintains backward binary compatibility for each major version (`major.minor.point`),
enforced with MiMa, for the following packages:

- `mill.api`
- `mill.util`
- `mill.eval`
- `mill.resolve`
- `mill.scalalib`
- `mill.scalajslib`
- `mill.scalanativelib`

Other packages like `mill.runner`, `mill.bsp`, etc. are on the classpath but offer no
compatibility guarantees.

Currently, Mill does not offer compatibility guarantees for `mill.contrib`
packages, although they tend to evolve slowly.
This may change as these packages mature.
- Changes to the main branch need a pull request. Exceptions are preparation commits for releases, which are meant to be pushed with tags in one go.
- Merged pull requests (and closed issues) need to be assigned to a milestone.
- Pull requests are typically merged via "Squash and merge", so we get a linear and useful history.
- Larger pull requests, where it makes sense to keep individual commits, or those with multiple authors, may be committed via merge commits.
- The title should be meaningful and may contain the pull request number in parentheses (typically automatically generated on GitHub).
- The description should contain additional required details, which typically reflect the content of the first PR comment.
- A full link to the pull request should be added via a line: `Pull request: <link>`
- If the PR has multiple authors but is merged as a merge commit, those authors should be included via a line for each co-author: `Co-authored-by: <author>`
- If the message contains links to other issues or pull requests, you should use full URLs to reference them.
Mill is profiled using the JProfiler Java Profiler, by courtesy of EJ Technologies.