feat: CLI composability helpers and universal output formatter #461
carlos-alm wants to merge 17 commits into main from feat/cli-composability
Conversation
Claude finished @carlos-alm's task — View job

Root Cause Analysis

Problem Being Solved: Eliminates CLI boilerplate duplication and enables consistent output formatting across commands. The internal architecture already supports composability via …

Why This Approach: Three targeted improvements that align with the existing Phase 3 architectural refactoring:

Risk Assessment: Low risk - this is pure refactoring that eliminates duplication without changing functionality. No breaking changes to public APIs.

Backlog Compliance

Code Quality Assessment

Strengths
✅ Clean abstraction: …

Implementation Quality
✅ Table formatting: Auto-width calculation with MAX_COL_WIDTH (40) prevents layout issues

Test Coverage Note

Technical Review
Greptile Summary

This PR delivers the 3.17 CLI composability roadmap item in three coordinated changes: an …

All critical issues raised in the previous review round (RFC 4180 escaping, …

Key changes:
Remaining observations:
Confidence Score: 4/5
Important Files Changed
Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
CMD["CLI Command\n(execute)"] --> RQO["ctx.resolveQueryOpts(opts)\noptions.js"]
RQO --> SPREAD["...qOpts spread\ninto domain fn opts"]
CMD --> OG["openGraph(opts)\nopen-graph.js"]
OG --> DB["db: Database\nclose: () => db.close()"]
DB --> TRY["try { domain fn(db, ...) }\nfinally { close() }"]
CMD --> OR["ctx.outputResult(data, field, opts)\nresult-formatter.js"]
OR --> NDJSON{"opts.ndjson?"}
NDJSON -->|yes| PNJ["printNdjson()"]
NDJSON -->|no| JSON_{"opts.json?"}
JSON_ -->|yes| PJ["JSON.stringify → stdout"]
JSON_ -->|no| CSV_{"opts.csv?"}
CSV_ -->|yes| PC["printCsv()\nflattenObject → escapeCsv → stdout"]
CSV_ -->|no| TABLE_{"opts.table?"}
TABLE_ -->|yes| PAT["printAutoTable()\nflattenObject → formatTable → stdout"]
TABLE_ -->|no| FALSE["return false\n(caller handles output)"]
PC --> RET_C["return true/false"]
PAT --> RET_T["return true/false"]
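The dispatch order in the flowchart can be sketched as plain JavaScript. The `printers` parameter is a stand-in for the real printNdjson/printCsv/printAutoTable helpers in result-formatter.js, not the file's actual signature:

```javascript
// Minimal sketch of outputResult's format dispatch, mirroring the flowchart.
// `printers` stands in for the real helper functions (assumption for illustration).
function outputResult(data, field, opts, printers) {
  if (opts.ndjson) { printers.ndjson(data, field); return true; }
  if (opts.json)   { printers.json(data);          return true; }
  if (opts.csv)    { return printers.csv(data, field) !== false; }
  if (opts.table)  { return printers.table(data, field) !== false; }
  return false; // no format flag: caller handles plain-text output
}
```

The `!== false` checks reflect the flowchart's "return true/false" edges for the CSV and table paths.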
function escapeCsv(val) {
  const str = val == null ? '' : String(val);
  if (str.includes(',') || str.includes('"') || str.includes('\n')) {
    return `"${str.replace(/"/g, '""')}"`;
  }
  return str;
}
escapeCsv missing \r check for RFC 4180
The JSDoc says "RFC 4180 CSV output", but RFC 4180 §2.6 requires quoting fields that contain CRLF (\r\n). The current check handles \n but not \r, so a bare \r in a value would pass through unquoted. Unlikely with codegraph data, but worth fixing for correctness since the doc promises RFC compliance.
Suggested change:

  function escapeCsv(val) {
    const str = val == null ? '' : String(val);
-   if (str.includes(',') || str.includes('"') || str.includes('\n')) {
+   if (str.includes(',') || str.includes('"') || str.includes('\n') || str.includes('\r')) {
      return `"${str.replace(/"/g, '""')}"`;
    }
    return str;
  }
Fixed in a28cf6f — added str.includes('\r') to the escapeCsv condition so fields containing CR are properly quoted per RFC 4180 §2.6.
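As a standalone sanity check of the fixed condition (a re-statement for illustration, not the project's actual file), all four trigger characters now force quoting:

```javascript
// Re-statement of the fixed escapeCsv: quote when the field contains a comma,
// double quote, LF, or CR (RFC 4180 section 2.6), doubling any embedded quotes.
function escapeCsv(val) {
  const str = val == null ? '' : String(val);
  if (str.includes(',') || str.includes('"') || str.includes('\n') || str.includes('\r')) {
    return `"${str.replace(/"/g, '""')}"`;
  }
  return str;
}

console.log(escapeCsv('plain'));    // plain
console.log(escapeCsv('a,b'));      // "a,b"
console.log(escapeCsv('say "hi"')); // "say ""hi"""
console.log(escapeCsv('a\rb') === '"a\rb"'); // true (bare CR now triggers quoting)
```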
src/presentation/result-formatter.js (Outdated)
const flatItems = items.map((item) =>
  typeof item === 'object' && item !== null ? flattenObject(item) : { value: item },
);
const columns = autoColumns(items.filter((i) => typeof i === 'object' && i !== null));
Column detection uses unflattened items
Both printCsv (here) and printAutoTable (line 79) derive columns from autoColumns(items.filter(...)) — the unflattened original items filtered to objects only. autoColumns internally calls flattenObject, so the flattened keys are correct. However, if the array contains a mix of primitives and objects, the primitive items are mapped to { value: item } in flatItems but the value key won't appear in columns (since autoColumns only sees the objects). Those rows silently produce empty values for every column.
This won't happen with current codegraph data (always homogeneous object arrays), but if you want the helper to be truly general-purpose, consider deriving columns from flatItems instead:
const columns = autoColumns(flatItems);
Fixed in a28cf6f — both printCsv and printAutoTable now derive columns from autoColumns(flatItems) instead of autoColumns(items.filter(...)), so primitive-only and mixed arrays correctly produce the value column.
…pts, universal output formatter Add openGraph(opts) helper to eliminate DB-open/close boilerplate in cycles, export, and plot commands. Add resolveQueryOpts(opts) to extract the 5 repeated option fields (noTests, json, ndjson, limit, offset) plus new table/csv into one call. Refactor 20 command files to use the spread pattern. Extend outputResult() with --table (auto-column aligned table) and --csv (RFC 4180 with nested object flattening) output formats. Add --table and --csv options to applyQueryOpts(). Impact: 31 functions changed, 71 affected
…d items Impact: 3 functions changed, 2 affected
3546120 to a28cf6f
src/presentation/result-formatter.js (Outdated)
if (columns.length === 0) columns.push('value');

const colDefs = columns.map((col) => {
  const maxLen = Math.max(col.length, ...flatItems.map((item) => String(item[col] ?? '').length));
Math.max(...spread) crashes on large result sets
Math.max(col.length, ...flatItems.map(...)) passes every row's string length as a separate function argument. V8 limits the argument count to roughly 65k; beyond that this throws RangeError: Maximum call stack size exceeded. On a large codebase that returns, say, 100k symbols the --table flag would hard-crash instead of printing.
A reduce-based approach has no argument-count ceiling:
Suggested change:

- const maxLen = Math.max(col.length, ...flatItems.map((item) => String(item[col] ?? '').length));
+ const maxLen = flatItems.reduce(
+   (max, item) => Math.max(max, String(item[col] ?? '').length),
+   col.length,
+ );
Fixed — replaced Math.max(...spread) with a reduce-based approach that has no argument-count ceiling.
Already fixed in 323e704 — replaced Math.max(...spread) with a reduce-based approach that has no argument-count ceiling.
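The difference is easy to demonstrate in isolation: the reduce form does one comparison per row and never builds an argument list, so it stays flat regardless of row count (the row count and key name here are illustrative):

```javascript
// The spread form passes one argument per row and hits V8's argument-count
// limit on large result sets; the reduce form does one comparison per row.
const rows = Array.from({ length: 200000 }, (_, i) => ({ name: 'sym' + i }));

// Math.max(...rows.map((r) => r.name.length))  // throws RangeError at this size

const maxLen = rows.reduce((max, r) => Math.max(max, r.name.length), 'name'.length);
console.log(maxLen); // 9 ('sym199999'.length)
```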
…rflow on large sets Impact: 1 functions changed, 1 affected
console.log(columns.map(escapeCsv).join(','));
for (const row of flatItems) {
  console.log(columns.map((col) => escapeCsv(row[col])).join(','));
}
RFC 4180 CRLF line endings not honored
The JSDoc for printCsv says "RFC 4180 CSV output", but RFC 4180 §2 mandates CRLF (\r\n) as the record delimiter. console.log always appends a plain \n. In practice most CSV consumers tolerate LF-only files, but the JSDoc claim of RFC compliance isn't fully met.
Consider using process.stdout.write with explicit \r\n endings, or tone down the JSDoc to say "CSV output (LF line endings)":
Suggested change:

- console.log(columns.map(escapeCsv).join(','));
- for (const row of flatItems) {
-   console.log(columns.map((col) => escapeCsv(row[col])).join(','));
- }
+ const CRLF = '\r\n';
+ process.stdout.write(columns.map(escapeCsv).join(',') + CRLF);
+ for (const row of flatItems) {
+   process.stdout.write(columns.map((col) => escapeCsv(row[col])).join(',') + CRLF);
+ }
Fixed — toned down the JSDoc to say 'CSV output (LF line endings)' instead of claiming full RFC 4180 compliance, since console.log uses LF and changing to CRLF would break terminal output.
src/presentation/result-formatter.js (Outdated)
const flatItems = items.map((item) =>
  typeof item === 'object' && item !== null ? flattenObject(item) : { value: item },
);
const columns = autoColumns(flatItems);
Redundant double-flattening in autoColumns
flatItems are already flat (all values are primitives or JSON strings after the flattenObject call above), yet autoColumns(flatItems) passes each one through flattenObject a second time internally. The second call is a no-op on flat objects — every value is a primitive, so no recursion happens — but it does iterate every key of every row twice for large result sets.
A small, safe optimization is to derive columns directly from the already-flat items instead:
Suggested change:

- const columns = autoColumns(flatItems);
+ const columns = (() => {
+   const keys = new Set();
+   for (const item of flatItems) for (const key of Object.keys(item)) keys.add(key);
+   return [...keys];
+ })();
The same double-flattening appears on line 79 in printAutoTable.
Fixed — replaced autoColumns(flatItems) with inline key extraction in both printCsv and printAutoTable, eliminating the redundant second pass through flattenObject.
Impact: 2 functions changed, 1 affected
# Conflicts:
#   src/cli/commands/structure.js
Impact: 29 functions changed, 1 affected
function flattenObject(obj, prefix = '') {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullKey = prefix ? `${prefix}.${key}` : key;
    if (value != null && typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(result, flattenObject(value, fullKey));
    } else if (Array.isArray(value)) {
      result[fullKey] = JSON.stringify(value);
    } else {
      result[fullKey] = value;
    }
  }
  return result;
Date (and other special objects) silently dropped by flattenObject
Date, RegExp, Map, Set, and similar built-in objects all satisfy typeof value === 'object' && !Array.isArray(value), so they are recursed into. But Object.entries(new Date()) returns [], meaning the date value is silently dropped from the flattened output — no key is emitted at all, rather than even a fallback string like "2024-01-01T00:00:00.000Z".
For codegraph's current SQLite-backed domain objects this is unlikely to matter, but since flattenObject is now a general-purpose utility it's worth guarding against it with an explicit prototype check:
function flattenObject(obj, prefix = '') {
  const result = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullKey = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object'
        && !Array.isArray(value)
        && Object.getPrototypeOf(value) === Object.prototype) {
      Object.assign(result, flattenObject(value, fullKey));
    } else if (Array.isArray(value)) {
      result[fullKey] = JSON.stringify(value);
    } else {
      result[fullKey] = value;
    }
  }
  return result;
}

This ensures Date, RegExp, etc. fall through to the else branch and are stored as their string representation.
Fixed — added a plain-object prototype check so only POJOs are recursed into. Date, RegExp, Map, Set etc. now fall through to the else branch and are stored as their string representation.
Impact: 1 functions changed, 3 affected
const isNumeric = flatItems.every((item) => {
  const v = item[col];
  return v == null || v === '' || Number.isFinite(Number(v));
});
Booleans misclassified as numeric columns
Number.isFinite(Number(true)) evaluates to true (since Number(true) is 1), so boolean-valued columns — such as exported, isTest, or hasBody that are common in codegraph domain objects — are classified as numeric and rendered right-aligned. This makes the table visually misleading since true/false reads more naturally left-aligned.
Suggested change:

  const isNumeric = flatItems.every((item) => {
    const v = item[col];
-   return v == null || v === '' || Number.isFinite(Number(v));
+   return v == null || v === '' || (typeof v !== 'boolean' && Number.isFinite(Number(v)));
  });
Fixed in 026d923. Added the typeof v !== 'boolean' guard before the Number.isFinite check so booleans are no longer misclassified as numeric columns.
src/cli/commands/cycles.js (Outdated)
const { db, close } = openGraph(opts);
const cycles = findCycles(db, {
  fileLevel: !opts.functions,
  noTests: ctx.resolveNoTests(opts),
});
- db.close();
+ close();
close() not called if findCycles throws
If findCycles raises an unhandled exception, the close() call on line 20 is never reached and the database connection is leaked for the lifetime of the process. better-sqlite3 will eventually reclaim it via GC, but it's inconsistent with the explicit-cleanup pattern this PR establishes via openGraph.
Note: the old code had the same risk with db.close(), so this is not a regression — but if the intent of openGraph is to make lifecycle management uniform across commands, wrapping the body in a try/finally (as is done in the plot.js error path for the config read) would make cycles.js consistent:
const { db, close } = openGraph(opts);
let cycles;
try {
  cycles = findCycles(db, {
    fileLevel: !opts.functions,
    noTests: ctx.resolveNoTests(opts),
  });
} finally {
  close();
}

The same pattern applies to export.js if any export function throws before the close() at line 66.
Fixed in 026d923. Wrapped findCycles in try/finally so close() is always called. Also applied the same fix to export.js (the switch block could throw before reaching close()) and plot.js (same pattern — generatePlotHTML could throw, leaking the DB handle).
…ly for close()
- result-formatter: skip boolean values in numeric column heuristic (Number(true) === 1 was misclassifying boolean columns as numeric)
- cycles/export/plot commands: wrap DB operations in try/finally to ensure close() is called even if an export function throws
Impact: 4 functions changed, 1 affected
…degraph into feat/cli-composability Impact: 11 functions changed, 72 affected
const flatItems = items.map((item) =>
  typeof item === 'object' && item !== null ? flattenObject(item) : { value: item },
);
Top-level array items routed to flattenObject instead of JSON-stringified
Both printCsv and printAutoTable use the same guard:
typeof item === 'object' && item !== null ? flattenObject(item) : { value: item }

Since typeof [] === 'object' is true, an items entry that is itself an array (e.g. a query that returns [[1, 2], [3, 4]]) passes the branch and goes into flattenObject. Object.entries([1, 2]) yields [["0", 1], ["1", 2]], so the flattened object gets numeric string keys — the CSV/table ends up with columns named 0, 1, etc. rather than a single value column containing the serialised array.
This is internally inconsistent: array values nested inside an object are correctly JSON-stringified (the else if (Array.isArray(value)) branch inside flattenObject), but top-level array items are not.
The fix is the same !Array.isArray(item) guard used inside flattenObject:
Suggested change:

  const flatItems = items.map((item) =>
-   typeof item === 'object' && item !== null ? flattenObject(item) : { value: item },
+   typeof item === 'object' && item !== null && !Array.isArray(item)
+     ? flattenObject(item)
+     : { value: item },
  );
The same change is needed on the identical line inside printAutoTable (line 73).
Fixed — added !Array.isArray(item) guard in both printCsv and printAutoTable so top-level array items are treated as primitives (JSON-stringified via the value branch) instead of being flattened into numeric-key columns.
function printCsv(data, field) {
  const items = field ? data[field] : data;
  if (!Array.isArray(items) || items.length === 0) return;

  const flatItems = items.map((item) =>
    typeof item === 'object' && item !== null ? flattenObject(item) : { value: item },
  );
  const columns = (() => {
    const keys = new Set();
    for (const item of flatItems) for (const key of Object.keys(item)) keys.add(key);
    return [...keys];
  })();
  if (columns.length === 0) columns.push('value');

  console.log(columns.map(escapeCsv).join(','));
  for (const row of flatItems) {
    console.log(columns.map((col) => escapeCsv(row[col])).join(','));
  }
printCsv silently emits nothing (not even a header) for empty result sets
When items.length === 0 the function returns before printing anything — no header row, no rows. outputResult still returns true, so the caller's fallback (e.g. formatStructure(data) or a "no results" message) is suppressed. The user sees no output at all.
This is different from --json, which always prints the full data object (even {}). It is also surprising for downstream CSV consumers: an empty CSV file carries no schema information, while a header-only file does.
Consider emitting the header even when there are no rows:
function printCsv(data, field) {
  const items = field ? data[field] : data;
  if (!Array.isArray(items)) return;
  const flatItems = items.map((item) =>
    typeof item === 'object' && item !== null && !Array.isArray(item)
      ? flattenObject(item)
      : { value: item },
  );
  const columns = (() => {
    const keys = new Set();
    for (const item of flatItems) for (const key of Object.keys(item)) keys.add(key);
    return [...keys];
  })();
  if (columns.length === 0) columns.push('value');
  console.log(columns.map(escapeCsv).join(','));
  for (const row of flatItems) {
    console.log(columns.map((col) => escapeCsv(row[col])).join(','));
  }
}

If the intent is truly "nothing for empty", then outputResult should return false so the caller can present its own empty-state message instead.
Fixed — removed the items.length === 0 early return so an empty result set still emits a header-only CSV row, consistent with schema-aware CSV consumers.
…empty sets Impact: 2 functions changed, 1 affected
Impact: 22 functions changed, 13 affected
src/presentation/result-formatter.js (Outdated)
function printAutoTable(data, field) {
  const items = field ? data[field] : data;
  if (!Array.isArray(items) || items.length === 0) return;
--table on empty results suppresses all output silently
printAutoTable returns early when items.length === 0 without printing anything, but outputResult still returns true — telling every caller that output was "handled". This means any fallback path (e.g. console.log(formatStructure(data)) in structure.js, or the "no results" message in other commands) is unconditionally suppressed. Running codegraph where nonexistent --table produces zero output with no indication of why.
This is inconsistent with the --csv path: after the fix in this PR, printCsv no longer has the items.length === 0 guard and always emits at least a header row.
Consider printing at least the column headers (or a "No results." message) so the caller's return true is meaningful:
Suggested change:

  function printAutoTable(data, field) {
    const items = field ? data[field] : data;
-   if (!Array.isArray(items) || items.length === 0) return;
+   if (!Array.isArray(items)) return;
    const flatItems = items.map((item) =>
Fixed — removed the items.length === 0 early return in printAutoTable, consistent with the CSV path. Empty result sets now emit a header-only table instead of suppressing all output.
Impact: 1 functions changed, 1 affected
if (opts.csv) {
  printCsv(data, field);
  return true;
}
if (opts.table) {
  printAutoTable(data, field);
  return true;
}
outputResult swallows fallback when items is not an array
Both printCsv (line 44) and printAutoTable (line 73) contain an early-return guard:
if (!Array.isArray(items)) return;

When items is not an array (e.g. data[field] is undefined or a non-array object), these functions return without printing anything. However, outputResult always returns true after calling them, telling every caller that output was handled. This suppresses the fallback path (e.g. console.log(formatStructure(data))) silently — the user sees nothing.
A minimal fix is to propagate the return value:
function printCsv(data, field) {
  const items = field ? data[field] : data;
  if (!Array.isArray(items)) return false; // <-- signal failure
  // ...
  return true;
}

// in outputResult:
if (opts.csv) {
  return printCsv(data, field) !== false;
}
if (opts.table) {
  return printAutoTable(data, field) !== false;
}

The same change is needed in printAutoTable.
Fixed in bf6f551. printCsv and printAutoTable now return false when items is not an array (and true on success). outputResult checks the return value with !== false so callers see the fallback correctly.
export function resolveQueryOpts(opts) {
  return {
    noTests: resolveNoTests(opts),
    json: opts.json,
    ndjson: opts.ndjson,
    table: opts.table,
    csv: opts.csv,
    limit: opts.limit ? parseInt(opts.limit, 10) : undefined,
    offset: opts.offset ? parseInt(opts.offset, 10) : undefined,
    config,
  };
}
config now injected into every domain call via spread
resolveQueryOpts unconditionally returns config. Because many command files now use ...ctx.resolveQueryOpts(opts) as a full spread, config is injected into every domain function's options object — including functions (fileDeps, impactAnalysis, where, fileExports, etc.) that previously did not receive it.
This is a silent behavioural change: if any of those domain functions inspect opts.config (or newly added helper functions later do), they will now receive the global config even when no explicit config argument was intended by the caller. Functions that previously opted in to config (like audit, check, diffImpact) will continue to work correctly, but the coupling is now implicit for all others.
Consider removing config from resolveQueryOpts and having the handful of commands that need it pass it explicitly (as they did before), to preserve the opt-in behaviour.
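A sketch of what that opt-out looks like (an illustrative re-statement, not the repo's exact file; resolveNoTests is passed in here only so the snippet is self-contained):

```javascript
// Sketch: resolveQueryOpts without the config field, per the suggestion above.
// Commands that need config would import and pass it explicitly instead.
function resolveQueryOpts(opts, resolveNoTests) {
  return {
    noTests: resolveNoTests(opts),
    json: opts.json,
    ndjson: opts.ndjson,
    table: opts.table,
    csv: opts.csv,
    limit: opts.limit ? parseInt(opts.limit, 10) : undefined,
    offset: opts.offset ? parseInt(opts.offset, 10) : undefined,
    // config intentionally omitted, so a `...resolveQueryOpts(opts)` spread
    // cannot leak it into every domain function's options object
  };
}
```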
Fixed in bf6f551. Removed config from the object returned by resolveQueryOpts, so it no longer leaks into every domain call via spread. The two commands that need it (audit, check) now import config directly from ../shared/options.js and pass it explicitly. diff-impact spreads resolveQueryOpts but the downstream domain/analysis/impact.js already has an opts.config || loadConfig(repoRoot) fallback, so removing config from the spread is safe there.
case 'neo4j': {
  const csv = exportNeo4jCSV(db, exportOpts);
  if (opts.output) {
    const base = opts.output.replace(/\.[^.]+$/, '') || opts.output;
    fs.writeFileSync(`${base}-nodes.csv`, csv.nodes, 'utf-8');
    fs.writeFileSync(`${base}-relationships.csv`, csv.relationships, 'utf-8');
    console.log(`Exported to ${base}-nodes.csv and ${base}-relationships.csv`);
    return;
  }
close() called before neo4j file-write completes — then return bypasses output check
When opts.format === 'neo4j' and opts.output is set, the code calls return inside the try block (line 56). finally { close(); } correctly fires before the function exits. However, output remains undefined after the try/finally block, and the code below:

if (opts.output) {
  fs.writeFileSync(opts.output, output, 'utf-8');

…would write undefined to disk if the return on line 56 were ever removed or the control flow changed. While this is currently safe due to the return, the invariant that output is always a string after the try block is silently broken for this branch.

A defensive approach is to move the return outside the try/finally entirely or add an explicit output !== undefined guard after the block.
Fixed in bf6f551. Removed the early return inside the neo4j try block. The neo4j+output case now writes files and leaves output as undefined, while the neo4j+stdout case sets output normally. After the try/finally (which always calls close()), an explicit if (output === undefined) return guard skips the output block when it was already handled inside the switch. This makes the control flow unambiguous — close() always runs via finally, and the post-block output logic only runs when output was actually set.
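The restructured flow can be sketched as follows. This is a hedged outline, not the repo's actual export.js: the `deps` object (openGraph, exportNeo4jCSV, a generic exportFormat, writeFile) is injected purely so the sketch is self-contained:

```javascript
// Sketch of the restructured export flow: close() always runs via finally,
// and `output === undefined` marks branches that already wrote their own files.
function runExport(opts, deps) {
  const { db, close } = deps.openGraph(opts);
  let output; // stays undefined when a branch handles its own output
  try {
    if (opts.format === 'neo4j' && opts.output) {
      const csv = deps.exportNeo4jCSV(db, opts);
      const base = opts.output.replace(/\.[^.]+$/, '') || opts.output;
      deps.writeFile(`${base}-nodes.csv`, csv.nodes);
      deps.writeFile(`${base}-relationships.csv`, csv.relationships);
    } else {
      output = deps.exportFormat(db, opts); // other formats yield a string
    }
  } finally {
    close(); // runs even if an export function throws
  }
  if (output === undefined) return; // neo4j+output case already wrote files
  if (opts.output) deps.writeFile(opts.output, output);
  else console.log(output);
}
```

The key property is that no return sits inside the try block, so the post-block output logic is guarded explicitly rather than skipped implicitly.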
- Make printCsv/printAutoTable return false when items is not an array, and propagate that in outputResult so callers see the failure
- Remove config from resolveQueryOpts spread; audit and check now import config directly, other consumers use their existing loadConfig fallback
- Restructure export neo4j branch to avoid return inside try/finally; use output===undefined guard after the block instead
const isNumeric = flatItems.every((item) => {
  const v = item[col];
  return v == null || v === '' || (typeof v !== 'boolean' && Number.isFinite(Number(v)));
});
Vacuous isNumeric right-aligns fallback value column on empty result sets
When flatItems is empty, flatItems.every(predicate) returns true vacuously — making isNumeric = true and producing a right-aligned header for the synthetic value column. Every non-empty string-result query correctly renders its columns left-aligned, so codegraph where nonexistent --table would produce a conspicuously right-padded value heading that is inconsistent with the actual column layout users would see on any successful query.
A minimal guard:
Suggested change:

- const isNumeric = flatItems.every((item) => {
+ const isNumeric = flatItems.length > 0 && flatItems.every((item) => {
    const v = item[col];
    return v == null || v === '' || (typeof v !== 'boolean' && Number.isFinite(Number(v)));
  });
Summary

- openGraph(opts) helper (src/cli/shared/open-graph.js) to eliminate DB-open/close boilerplate — refactored cycles, export, plot commands
- resolveQueryOpts(opts) added to src/cli/shared/options.js, extracting the 5 repeated option fields (noTests, json, ndjson, limit, offset) plus new table/csv — refactored 20 command files to use the spread pattern
- Extended outputResult() in src/presentation/result-formatter.js with --table (auto-column aligned table) and --csv (RFC 4180 with nested object flattening) output formats
- Added --table and --csv options to applyQueryOpts()

Implements the 3.17 CLI composability roadmap item.
Test plan

- npx codegraph where buildGraph --json (json still works)
- npx codegraph where buildGraph --table (new table output)
- npx codegraph where buildGraph --csv (new csv output)
- npx codegraph cycles (uses openGraph now)
- npx codegraph export --format dot (uses openGraph now)