The VS Code Insiders podcast hosts Julia Kasper for a rich discussion on AI models inside VS Code, revealing how tools like GitHub Copilot’s Raptor Mini are shaping developer workflows and coding best practices.

Exploring AI Models and GitHub Copilot’s Raptor Mini in VS Code

In this episode of the VS Code Insiders podcast, James sits down with Julia Kasper to unpack the evolving landscape of AI integration within Visual Studio Code. Their discussion centers on:

  • Model Evaluation: How developers compare and choose between different AI models in VS Code, weighing speed against complexity.
  • Fine-Tuning and Checkpoints: Julia provides a primer on fine-tuning AI models and the use of checkpoints to improve performance for coding applications.
  • Raptor Mini Spotlight: An inside look at GitHub Copilot’s new Raptor Mini model, covering its public preview rollout, its strengths relative to larger models, and when developers might choose a mini model over a more complex one for their coding tasks.
  • VS Code’s Evaluation Suite: An introduction to how VS Code benchmarks model performance, runs evaluations (evals), and lets developers provide direct feedback that shapes model development.
  • Developer Workflows: Analysis of how context, task complexity, and switching between AI models shape the day-to-day experience for VS Code users. The episode also covers trends in AI-powered coding assistance and the importance of continuous benchmarking for maintaining model quality.

Chapters Recap

  • Welcome & Introductions
  • Diving into Models and Evaluations
  • Choosing Models: Gut Feel vs Science
  • Fine-Tuning Explained & Checkpoints
  • Raptor Mini Story: Behind the Scenes
  • Mini vs Larger Model Decisions
  • VS Code’s Own Eval Suite
  • Developer Feedback Channels
  • Closing Thoughts

For developers invested in AI-powered tools, this episode is packed with actionable insights and practical tips on maximizing the value of coding assistants in the ever-changing VS Code ecosystem.