product update

GitHub Copilot CLI adds Rubber Duck for second-opinion AI suggestions

TL;DR

GitHub has added Rubber Duck, a feature that provides alternative suggestions by consulting different AI model families, to Copilot CLI. It lets developers get a second opinion on code suggestions directly from the command line.

GitHub has expanded Copilot CLI with Rubber Duck, a new feature that generates alternative code suggestions by consulting different AI model families within the tool.

What is Rubber Duck?

Rubber Duck operates as a built-in second-opinion system for Copilot CLI. When developers request code suggestions or explanations in the terminal, Rubber Duck can provide an alternative perspective by leveraging a different model family than the primary suggestion engine. This gives users multiple approaches to the same problem without switching tools or contexts.

The feature is accessible directly from the CLI, maintaining the integrated workflow developers expect from GitHub's AI assistant.

Model Family Approach

The implementation reflects a deliberate architectural choice: rather than relying on a single model, GitHub's approach uses multiple model families to generate suggestions. This strategy acknowledges that different AI models have different strengths—some may excel at certain coding patterns, optimization techniques, or explanatory approaches.
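The general pattern described above can be sketched in a few lines. GitHub has not disclosed its implementation, so the function names and model "families" below are purely hypothetical stand-ins for whatever suggestion engines Copilot CLI actually routes to; the point is only the fan-out-and-compare shape of a second-opinion system.

```python
# Hypothetical sketch of a second-opinion system: the same prompt is sent to
# two different model families and both answers are returned side by side.
# The model functions here are stand-ins, not GitHub's actual engines.

def primary_model(prompt: str) -> str:
    """Stand-in for the primary suggestion engine (model family A)."""
    return f"[family-A suggestion for: {prompt}]"

def second_opinion_model(prompt: str) -> str:
    """Stand-in for a model from a different family (model family B)."""
    return f"[family-B suggestion for: {prompt}]"

def suggest_with_second_opinion(prompt: str) -> dict:
    """Return the primary suggestion plus an alternative from another family."""
    return {
        "primary": primary_model(prompt),
        "second_opinion": second_opinion_model(prompt),
    }

result = suggest_with_second_opinion("reverse a linked list in Go")
print(result["primary"])
print(result["second_opinion"])
```

Because the two answers come from different model families rather than two samples of the same model, disagreements between them are more likely to reflect genuinely different approaches, which is the stated rationale for the feature.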

GitHub has not disclosed which specific models or vendors power each model family in this implementation.

Developer Experience

The Rubber Duck feature integrates into Copilot CLI's existing workflow. When users request code suggestions, they can access the alternative perspective without manual intervention or additional configuration.

This addresses a common developer need: when the first suggestion doesn't feel right, having an immediate alternative saves context switching and maintains flow state. Rather than copying code to a separate chat interface or switching to the web version of Copilot, developers can iterate within their terminal.

What this means

GitHub is positioning Copilot CLI as a multi-model platform rather than a single-model tool. This approach distributes the technical risk of relying on one model while providing users with practical optionality. The Rubber Duck feature signals that GitHub views second opinions as core to code generation—not a luxury feature, but a workflow necessity. For developers already using Copilot CLI, this represents expanded capability at no apparent additional cost.

Related Articles

product update

OpenAI launches ChatGPT app integrations with DoorDash, Spotify, Uber, and 10+ others

OpenAI has expanded ChatGPT with native app integrations allowing users to connect accounts from Spotify, DoorDash, Uber, Booking.com, Canva, Figma, Coursera, Expedia, Target, and others. Users can request actions like meal planning with DoorDash grocery delivery, playlist creation with Spotify, and hotel bookings through Booking.com directly within ChatGPT. The feature requires account authentication and data sharing; users can disconnect any integrated app from Settings.

product update

Anthropic attributes Claude Code usage drain to peak-hour caps and large context windows

Anthropic has identified two primary causes for Claude Code users hitting usage limits faster than expected: stricter rate limiting during peak hours and sessions with context windows exceeding 1 million tokens. The company also recommends switching from Opus to Sonnet 4.6, since Opus consumes usage limits roughly twice as fast.

product update

OpenAI shifts Codex to usage-based pricing, offers $500 credits to enterprise customers

OpenAI is replacing per-seat licensing with usage-based pricing for Codex in ChatGPT Business and Enterprise plans, eliminating upfront license costs. Eligible Business customers can claim up to $500 in promotional credit per workspace. The shift targets enterprises where coding tools typically expand from individual developers to full teams, positioning OpenAI against GitHub Copilot and Cursor.
