
fix: fix runtime detection priority, resolve binaries from APM runtimes dir, add llm support#608

Open
edenfunf wants to merge 1 commit into microsoft:main from edenfunf:fix/runtime-detection-and-llm-support

Conversation

Contributor

@edenfunf edenfunf commented Apr 7, 2026

Summary

Fixes apm run start selecting the wrong runtime or failing to execute one on Windows, and adds llm as a supported runtime for users who cannot use codex with GitHub Models.

Closes #605.

Background — codex v0.116+ and GitHub Models

codex removed wire_api = "chat" (Chat Completions) in v0.116 and now only supports wire_api = "responses" (OpenAI Responses API). GitHub Models exposes only the Chat Completions endpoint — the /responses path returns 404. This is a fundamental upstream incompatibility; there is no config workaround.

apm runtime setup codex always installs the latest release, which is now in the incompatible range. The setup scripts are updated to document this so users are not silently directed to a broken binary.
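
For context, a sketch of the kind of config stanza this rules out (the provider id and endpoint here are illustrative; the key names follow codex's documented `model_providers` layout):

```toml
# ~/.codex/config.toml — pointing codex at GitHub Models no longer works
[model_providers.github-models]
name = "GitHub Models"
base_url = "https://models.github.ai/inference"  # Chat Completions only; /responses returns 404
env_key = "GITHUB_TOKEN"
wire_api = "chat"  # removed in codex v0.116+; "responses" is the only remaining value
```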

Changes

src/apm_cli/core/script_runner.py

Runtime detection (_detect_installed_runtime)

  • Check ~/.apm/runtimes/ before PATH. A system-level copilot shim (e.g. from gh extension install github/gh-copilot) was winning over an APM-managed binary because shutil.which() searched PATH first.
  • Exclude APM-managed codex from auto-detection. Any codex installed via apm runtime setup will be v0.116+ and will not work with GitHub Models. PATH codex is still tried as a last resort (covers older or differently-configured binaries).
  • Prefer llm over other PATH runtimes. llm uses Chat Completions and is fully compatible with GitHub Models via the GITHUB_TOKEN env var.

Executable resolution (_execute_runtime_command)

  • On Windows, resolve the binary path from ~/.apm/runtimes/ before falling back to shutil.which(). Without this, the correct runtime name was detected but the executable could not be found if ~/.apm/runtimes was not in the session PATH.

llm support (_generate_runtime_command)

  • Add llm -m github/gpt-4o <prompt_file> as the generated command for the llm runtime.
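
The new branch reduces to something like this (the function signature is an assumption; only the llm branch is shown):

```python
def generate_runtime_command(runtime: str, prompt_file: str) -> list[str]:
    """Build the argv for a runtime (other runtime branches elided)."""
    if runtime == "llm":
        # github/gpt-4o authenticates against GitHub Models via GITHUB_TOKEN.
        return ["llm", "-m", "github/gpt-4o", prompt_file]
    raise ValueError(f"Unsupported runtime: {runtime}")
```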

Additional cleanup

  • Drop the ^ anchor from the regexes in _get_runtime_name and _transform_command; runtime keywords can appear mid-string when an absolute path is prepended on Windows.
  • Remove the blanket symlink rejection in _discover_prompt_file and _resolve_prompt_file; it was too broad and blocked legitimate use cases (e.g. symlinked prompt packages).
  • Improve the "prompt not found" error message with actionable steps.
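
The regex change can be seen in isolation (the command string is an illustrative value, not taken from the codebase):

```python
import re

# Path-prefixed command as produced on Windows once the absolute path from
# ~/.apm/runtimes is prepended — the runtime keyword is no longer at position 0:
cmd = r"C:\Users\dev\.apm\runtimes\copilot.exe --prompt start.prompt.md"

anchored = re.search(r"^copilot\b", cmd)     # old anchored pattern: misses
unanchored = re.search(r"\bcopilot\b", cmd)  # new pattern: matches mid-string

print(anchored)             # None
print(unanchored.group())   # copilot
```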

scripts/runtime/setup-codex.ps1 / setup-codex.sh

  • Document the codex/GitHub Models incompatibility in a comment so operators know why GitHub Models is not listed as a working provider for current codex builds.

Test Plan

  • uv run pytest tests/unit/test_script_runner.py -x -v — all tests pass
  • apm runtime setup codex followed by apm run start — selects llm (or copilot) instead of the broken codex binary
  • With only llm available in PATH and no APM-managed runtimes: apm run start executes llm -m github/gpt-4o
  • With a system PATH copilot stub and a working APM-managed copilot binary: apm run start — APM-managed binary wins
  • Absolute path to runtime binary works on Windows when ~/.apm/runtimes is not in PATH

fix: fix runtime detection priority, resolve binaries from APM runtimes dir, add llm support

Three related issues caused apm run start to fail or pick the wrong runtime:

1. _detect_installed_runtime checked shutil.which() for all candidates,
   so a broken copilot stub in the system PATH (e.g. from gh extension)
   could be selected over a working APM-managed binary. Changed priority
   to check ~/.apm/runtimes/ first; only fall back to PATH when nothing
   is found there.

2. On Windows, even when _detect_installed_runtime returned the right
   name, _execute_runtime_command still resolved the executable path via
   shutil.which(). If ~/.apm/runtimes is not in the session PATH the
   binary is not found. Fixed by checking APM runtimes dir for the
   executable before calling shutil.which().

3. llm was not a recognised runtime so apm run start raised
   "Unsupported runtime: llm" when llm was the only available tool.
   Added llm -m github/gpt-4o as the generated command.

   codex v0.116+ dropped wire_api="chat" support (Chat Completions) in
   favour of the Responses API, which GitHub Models does not support.
   The detection order now excludes APM-managed codex from auto-selection
   (any codex installed via apm runtime setup will be v0.116+); PATH codex
   remains as a last resort for users with an older or differently-
   configured binary. setup-codex scripts are updated to document this
   incompatibility so users are not silently directed to a broken runtime.

Additional cleanup in script_runner.py:
- Remove anchored regex (^) in _get_runtime_name and _transform_command;
  runtime keywords can appear mid-command when paths are prepended
- Remove symlink rejection in _discover_prompt_file and _resolve_prompt_file;
  the blanket block was too broad and broke legitimate use cases
- Improve the "prompt not found" error message with actionable next steps
@edenfunf edenfunf requested a review from danielmeppiel as a code owner April 7, 2026 06:04

Successfully merging this pull request may close these issues.

bug: apm run start picks wrong runtime — system PATH stub takes priority over APM-managed binary, codex v0.116+ incompatible with GitHub Models
