Description
Problem (one or two sentences)
Hi,
I am using Roo Code in VSCodium with Ollama as the inference provider, on limited hardware that results in long processing times.
Tasks usually work fine at the beginning (or with very small models), but as processing times climb with growing context size, the task consistently ends with a 'fetch failed' error in Roo Code.
Context (who is affected and when)
Happens to users running Ollama as the inference provider (and probably other providers too), whenever their configuration leads to response times of 5 minutes or more.
Reproduction steps
Roo Code 3.51.1
VSCodium:
Version: 1.110.11607
Commit: 8c11aee2be8d4bcafcd9812191f979df54d2c007
Date: 2026-03-07T23:26:50.681Z
Electron: 39.6.0
ElectronBuildId: undefined
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Linux x64 6.9.8-amd64
This is on Debian testing.
Can be reproduced by running a slow Ollama server (large dense model with long context length, limited hardware such as CPU-only inference).
Expected result
The task keeps progressing
Actual result
The task shows progress at first, but at some point requests start failing with 'fetch failed'; recovery without switching to a faster model is rare.
Variations tried (optional)
5m0s is 300s, which apparently matches the default timeout of Undici, the fetch implementation used by Node.js (its `headersTimeout` and `bodyTimeout` both default to 300,000 ms).
Luckily, VSCodium has a "Use Electron Fetch" setting (`http.electronFetch` in the JSON config); with it enabled, I no longer get 'fetch failed' errors.
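For anyone hitting the same error, the workaround is a one-line change in VSCodium's user settings (settings.json, which accepts comments). This is a sketch of the setting mentioned above; the comment describes the assumed mechanism, namely that Electron's networking stack is not subject to Undici's 300 s default timeouts:

```jsonc
{
  // Route fetch() through Electron's networking stack instead of
  // Node.js/Undici, so long-running Ollama requests (> 5 min) are
  // not cut off by Undici's default 300 s headers/body timeouts.
  "http.electronFetch": true
}
```

After changing the setting, restart the editor (or reload the window) so the extension host picks it up.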
App Version
3.51.1
API Provider (optional)
Ollama
Model Used (optional)
N/A
Roo Code Task Links (optional)
No response
Relevant logs or errors (optional)
My Ollama container docker logs show the following:
[GIN] 2026/03/16 - 00:50:26 | 500 | 5m0s | 172.22.0.1 | POST "/api/chat"
time=2026-03-16T00:50:26.111Z level=INFO source=runner.go:682 msg="aborting completion request due to client closing the connection"