Session Reader

The headline feature of 0.11.0 — a full transcript viewer built directly into DevChain. Open any active agent session to see every tool call, thinking block, and response in a structured, scrollable timeline. Token usage, cost, and compaction events are tracked per message, giving you complete visibility into what your agents are doing and how much they cost.

Session Reader showing a full agent transcript with tool calls, thinking blocks, and inline metrics
Session Reader — full transcript view with tool calls, cost tracking, and real-time updates

Sessions are discovered automatically from the agent's terminal working directory. The reader supports Claude and Codex transcripts out of the box, with content-based discovery that works regardless of provider-specific file layouts.
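Content-based discovery can be sketched as sniffing the first line of each candidate transcript rather than trusting provider-specific paths. This is an illustrative sketch, not DevChain's actual implementation; the field names (`sessionId`, `cwd`, `type`) are assumptions about each provider's JSONL format.

```typescript
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

type Provider = "claude" | "codex" | "unknown";

// Guess the provider from transcript content, not the file's location.
// The fields checked here are illustrative, not a documented contract.
function detectProvider(firstLine: string): Provider {
  try {
    const entry = JSON.parse(firstLine);
    if (entry && typeof entry === "object") {
      if ("sessionId" in entry && "cwd" in entry) return "claude";
      if (entry.type === "session_meta") return "codex";
    }
  } catch {
    // Not valid JSON: not a transcript we recognize.
  }
  return "unknown";
}

// Scan a working directory for JSONL transcripts we can identify.
function discoverSessions(dir: string): { file: string; provider: Provider }[] {
  return readdirSync(dir)
    .filter((f) => f.endsWith(".jsonl"))
    .map((f) => {
      const firstLine = readFileSync(join(dir, f), "utf8").split("\n")[0] ?? "";
      return { file: join(dir, f), provider: detectProvider(firstLine) };
    })
    .filter((s) => s.provider !== "unknown");
}
```

Because detection looks at content, a provider can move or rename its session files without breaking discovery.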

AI Turn Grouping

Related messages are grouped into collapsible AI turn cards. Each card shows aggregated token usage and cost, and expands to reveal the individual tool calls, thinking blocks, and responses within. This keeps long sessions navigable without losing detail.
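The grouping logic can be sketched as a single fold over the transcript: each user message opens a new turn, and everything the agent does before the next user message is aggregated into it. A minimal sketch with assumed message fields (`role`, `tokens`, `costUsd`), not DevChain's internal types:

```typescript
interface Message {
  role: "user" | "assistant" | "tool";
  tokens: number;
  costUsd: number;
}

interface Turn {
  messages: Message[];
  totalTokens: number;
  totalCostUsd: number;
}

// Fold a flat message list into turns: a user message starts a new turn,
// and all subsequent tool calls and responses accumulate into it.
function groupIntoTurns(messages: Message[]): Turn[] {
  const turns: Turn[] = [];
  for (const msg of messages) {
    if (msg.role === "user" || turns.length === 0) {
      turns.push({ messages: [], totalTokens: 0, totalCostUsd: 0 });
    }
    const turn = turns[turns.length - 1];
    turn.messages.push(msg);
    turn.totalTokens += msg.tokens;
    turn.totalCostUsd += msg.costUsd;
  }
  return turns;
}
```

The collapsed card shows only the aggregates; expanding it reveals the `messages` array in order.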

Token Hotspot Detection

An IQR-based algorithm automatically identifies the most token-intensive steps in a session. Hotspots are highlighted with a visual indicator, and you can filter to show only hotspots or navigate between them with keyboard shortcuts — useful for finding the expensive parts of a long-running session.
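The standard IQR outlier rule makes this concrete: a step is a hotspot when its token count exceeds Q3 + 1.5 × IQR. A sketch of that rule (the exact quantile method and multiplier DevChain uses are assumptions):

```typescript
// Linear-interpolation quantile over a pre-sorted array.
function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// Return the indices of steps whose token count exceeds Q3 + 1.5 * IQR.
function findHotspots(tokenCounts: number[]): number[] {
  if (tokenCounts.length < 4) return []; // too few points for quartiles
  const sorted = [...tokenCounts].sort((a, b) => a - b);
  const q1 = quantile(sorted, 0.25);
  const q3 = quantile(sorted, 0.75);
  const threshold = q3 + 1.5 * (q3 - q1);
  return tokenCounts
    .map((t, i) => (t > threshold ? i : -1))
    .filter((i) => i !== -1);
}
```

Because the threshold is relative to the session's own distribution, a "normal" step count in a heavy session won't be flagged, while one runaway tool call in a light session will.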

Inline Session Summary

A compact summary bar appears at the top of each agent's terminal, showing the active model, running cost, context usage, and compaction count at a glance — without opening the full session view.

Terminal session bar showing model, cost, context percentage, and compaction count
Quick session summary in the terminal bar — model, cost, context %, and compactions

Multi-Provider Support

The Session Reader works with Claude Code and OpenAI Codex transcripts. Each provider's transcript format is parsed natively with incremental updates — the viewer stays current as the agent works.
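The incremental-update pattern can be sketched as a pure function over the growing transcript text: consume only complete new lines past a saved offset, and leave a trailing partial line (still being written by the agent) for the next poll. A sketch under those assumptions, not DevChain's actual parser:

```typescript
interface TailResult {
  lines: string[]; // complete JSONL lines appended since `offset`
  offset: number;  // new offset to pass on the next poll
}

// Extract complete new lines from a transcript that only grows.
// A trailing partial line is ignored until its newline arrives.
function extractNewLines(text: string, offset: number): TailResult {
  const fresh = text.slice(offset);
  const lastNewline = fresh.lastIndexOf("\n");
  if (lastNewline === -1) return { lines: [], offset };
  const lines = fresh.slice(0, lastNewline).split("\n").filter(Boolean);
  return { lines, offset: offset + lastNewline + 1 };
}
```

Each parsed line is then dispatched to the Claude or Codex parser, so the viewer only ever re-processes the tail of the file, not the whole transcript.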


Context Tracking

Every agent in the sidebar now shows a visual progress bar representing how much of its context window has been consumed. The bar fills in real time as the agent works, changing color as it approaches capacity. Hover to see exact token counts.

Agent sidebar showing context window progress bars for each agent
Per-agent context progress bars in the Chat sidebar

Context data is extracted from session transcripts in real time. For Claude and Codex, token counts come directly from the provider's usage metadata. The progress bar accounts for each provider's actual context window size.
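The percentage shown on the bar reduces to tokens used over the provider's window size. A minimal sketch; the window sizes below are illustrative placeholders (the Claude value matches the 200k in the tooltip above, the Codex value is an assumption):

```typescript
// Illustrative per-provider context window sizes, in tokens.
const CONTEXT_WINDOWS: Record<string, number> = {
  claude: 200_000,
  codex: 272_000,
};

const DEFAULT_WINDOW = 200_000; // fallback for unknown providers

// Compute the fill percentage for the sidebar progress bar.
function contextUsage(
  provider: string,
  tokensUsed: number,
): { percent: number; window: number } {
  const window = CONTEXT_WINDOWS[provider] ?? DEFAULT_WINDOW;
  const percent = Math.min(100, Math.round((tokensUsed / window) * 100));
  return { percent, window };
}
```

With a 200k Claude window, 98k tokens used yields 49%, matching the tooltip example; the percentage is clamped at 100 so a post-compaction overshoot can't overflow the bar.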

Tooltip showing context usage: 49% used (98k of 200k)
Hover for exact context usage — tokens used and total window size

Provider Model Override

You can now change any agent's model on the fly. Right-click an agent in the sidebar to open the Provider Config menu, select a provider, and pick a model override. The change takes effect on the next session start — no template editing required.

Context menu showing Provider Config with provider list and model override options
Right-click any agent to switch providers and override the model

Model overrides are per-agent and persist across restarts. They layer on top of template defaults, so you can experiment with different models without modifying the template itself.
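The layering behaves like a shallow merge: the persisted per-agent override wins wherever it is set, and the template default fills the rest, without mutating the template. A sketch with hypothetical config fields and model names:

```typescript
interface ProviderConfig {
  provider: string;
  model: string;
}

// Resolve the effective per-agent config: override fields (when present)
// take precedence over the template default; the template is never mutated.
function resolveConfig(
  templateDefault: ProviderConfig,
  override?: Partial<ProviderConfig>,
): ProviderConfig {
  return { ...templateDefault, ...override };
}
```

Clearing the override simply removes the stored partial, and the agent falls back to whatever the template specifies.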

OpenCode Provider

OpenCode is now available as a provider option, bringing GLM models into your agent team. Assign any agent to use OpenCode from the provider config menu and it runs natively alongside Claude and Codex agents.

OpenCode agent running GLM-5 as the Coder agent in a DevChain terminal
OpenCode running GLM-5 as the Coder agent

Provider Models in Templates

Templates can now define available models per provider. When you switch providers, the model list updates to show only the models that provider supports. Model labels are shown in the UI for clarity (e.g. "sonnet" instead of the full model ID).
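A template's model declarations might look like the following sketch: each provider maps to a list of entries pairing the full model ID with the short label the UI displays. The shape and the model IDs here are illustrative assumptions, not DevChain's documented template schema:

```typescript
interface ModelEntry {
  id: string;    // full model identifier sent to the provider
  label: string; // short name shown in the UI, e.g. "sonnet"
}

type ProviderModels = Record<string, ModelEntry[]>;

// Illustrative template fragment: per-provider model lists.
const templateModels: ProviderModels = {
  claude: [
    { id: "claude-sonnet-4-5", label: "sonnet" },
    { id: "claude-opus-4-1", label: "opus" },
  ],
};

// Switching providers narrows the picker to that provider's models.
function modelsFor(provider: string, all: ProviderModels): ModelEntry[] {
  return all[provider] ?? [];
}
```

Keeping the label separate from the ID lets the picker stay readable while the full identifier is still what gets passed to the provider at session start.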


Internal Improvements

Significant refactoring across both the backend and UI to improve maintainability and reduce file complexity.

Storage delegates
Decomposed local-storage.service into focused domain delegate classes.
MCP handlers
Decomposed mcp.service into domain-specific handler modules.
Project helpers
Decomposed projects.service into smaller helper modules.
UI composition
Decomposed BoardPage, ProjectsPage, SettingsPage, and AgentsPage into composition roots.

Full Changelog


Update

$ npm i -g devchain-cli@latest

Context tracking and the Session Reader work automatically — no template upgrade needed. To use provider model overrides, update your template to the latest version.