
Conversation

Contributor

Copilot AI commented Jan 20, 2026

The auto model router adds ~200ms of latency to inline chat due to the classification API call. This change optimizes inline chat by skipping the router API call when a cached model selection already exists, while still keeping inline chat in the automodel routing flow.

Changes

Modified AutomodeService._resolveWithRouterModel() to skip the router API fetch for inline chat when a cache entry exists:

// For inline chat (ChatLocation.Editor), skip router fetching if there's already a cached entry
// to avoid the 200ms latency penalty.
const prompt = chatRequest?.prompt?.trim();
const isInlineChat = location === ChatLocation.Editor;
const shouldRoute = prompt?.length && (!entry || entry.lastRoutedPrompt !== prompt) && !(isInlineChat && entry);

Behavior:

  • First inline chat request: Routes normally using the router API to select the best model
  • Subsequent inline chat requests: Reuses the cached model selection, avoiding the 200ms API call
  • Other locations (Panel, Terminal, etc.): Continue to use the router on every request, as before

This approach lets inline chat benefit from smart model routing while avoiding the latency penalty on repeated requests, as the sketch below illustrates.
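
To make the behavior above concrete, here is a minimal, self-contained TypeScript sketch of the decision. ChatLocation, the cache entry shape, and this shouldRoute signature are simplified stand-ins for the actual AutomodeService internals, not the real code:

// Minimal sketch, assuming a simplified cache shape; the real ChatLocation
// and entry types differ. This only illustrates the behavior listed above.
enum ChatLocation { Panel, Editor, Terminal }

interface CacheEntry {
  model: string;
  lastRoutedPrompt: string;
}

function shouldRoute(location: ChatLocation, prompt: string | undefined, entry: CacheEntry | undefined): boolean {
  const trimmed = prompt?.trim();
  const isInlineChat = location === ChatLocation.Editor;
  // Route when a prompt exists, it has not already been routed, and
  // (for inline chat only) no cached selection exists yet.
  return !!trimmed?.length
    && (!entry || entry.lastRoutedPrompt !== trimmed)
    && !(isInlineChat && entry);
}

// First inline request: no cache entry, so the router API is called.
console.log(shouldRoute(ChatLocation.Editor, 'fix this loop', undefined)); // true
// Subsequent inline request: the cached entry short-circuits the fetch.
console.log(shouldRoute(ChatLocation.Editor, 'now rename it', { model: 'fast-model', lastRoutedPrompt: 'fix this loop' })); // false
// Panel request with a new prompt: routes on every new prompt, cache or not.
console.log(shouldRoute(ChatLocation.Panel, 'new prompt', { model: 'smart-model', lastRoutedPrompt: 'old prompt' })); // true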

Original prompt

This section details the original issue you should resolve

<issue_title>Auto model router should not be used in inline chat</issue_title>
<issue_description>I believe that the auto model router is being used for all Auto usage today (when enabled).
We should make sure to not use the router for inline chat.
Reason:
a) Inline chat needs to be fast, and we do not want to pay the 200ms penalty
b) We always resolve to fast models in Inline Chat ourselves, thus the classification might not be needed at all

cc @jrieken @eleanorjboyd </issue_description>

Comments on the Issue (you are @copilot in this section)



Co-authored-by: TylerLeonhardt <2644648+TylerLeonhardt@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Remove auto model router for inline chat usage" to "Skip auto model router for inline chat" on Jan 20, 2026
Copilot AI requested a review from TylerLeonhardt January 20, 2026 16:58
Comment on lines 156 to 160
// Skip router for inline chat (ChatLocation.Editor) to avoid 200ms latency penalty.
// Inline chat already resolves to fast models, so classification is not needed.
const location = chatRequest?.location ?? ChatLocation.Panel;
const isInlineChat = location === ChatLocation.Editor;
const usingRouterModel = !isInlineChat && this._configurationService.getExperimentBasedConfig(ConfigKey.TeamInternal.AutoModeRouterUrl, this._expService) !== undefined;
Member

@copilot I still want InlineChat to use the automodel routing, but I want it to skip over the fetching if there's already something cached

so it should be included in the shouldRoute logic

Contributor Author

Updated the logic to include the inline chat check in the shouldRoute condition. Inline chat now uses automodel routing but skips router fetching when a cache entry exists. Fixed in commit 1b34074.

- Inline chat still uses automodel routing but skips router API call when cache exists
- Addresses feedback to include logic in shouldRoute instead of blocking router path entirely
- Inline chat will route on the first call, then reuse the cached model on subsequent calls (see the condensed before/after sketch below)
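
A condensed before/after of the gating change, with simplified stand-in variables; the real fields and config keys appear in the snippets above:

// Example values; names are simplified stand-ins for the real service fields.
const isInlineChat = true;        // ChatLocation.Editor
const routerUrlConfigured = true; // AutoModeRouterUrl experiment value is set
const hasNewPrompt = true;        // prompt exists and differs from the cached one
const hasCachedEntry = true;      // a prior routing result is cached

// Before (reviewed diff): inline chat was excluded from the router entirely.
const usingRouterModelBefore = !isInlineChat && routerUrlConfigured; // false

// After (commit 1b34074): inline chat routes once, then the cache short-circuits the fetch.
const shouldRouteAfter = hasNewPrompt && !(isInlineChat && hasCachedEntry); // false; true on the first request, when no entry is cached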

Co-authored-by: TylerLeonhardt <2644648+TylerLeonhardt@users.noreply.github.com>
Copilot AI changed the title from "Skip auto model router for inline chat" to "Skip auto model router API call for cached inline chat" on Jan 20, 2026
Copilot AI requested a review from TylerLeonhardt January 20, 2026 17:47