"- cursor position marked as ${CURSOR_TAG}: Indicates where the developer's cursor is currently located, which can be crucial for understanding what part of the code they are focusing on."
I was not aware that was a thing; useful to know. Thanks!
I use the in-line prompt when I'm talking about a specific area. In the chat I've always explained in words which part of the code I'm talking about. This tidbit of information will change how I use chat.
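For anyone wondering what that looks like mechanically: marking the cursor is usually just splicing a sentinel string into the code snippet that gets sent along with the prompt. A rough sketch in TypeScript (the tag value and helper below are made up for illustration, not the extension's actual code):

    // Hypothetical sketch: splice a cursor marker into the code context so the
    // model can see where the user's caret is. Tag value and helper are made up.
    const CURSOR_TAG = '<<CURSOR>>';

    function markCursor(source: string, line: number, column: number): string {
      const lines = source.split('\n');
      const target = lines[line] ?? '';
      lines[line] = target.slice(0, column) + CURSOR_TAG + target.slice(column);
      return lines.join('\n');
    }

    // The marked snippet is then embedded in the prompt, e.g.
    // "The user's cursor is at <<CURSOR>> in the following code:\n" + markCursor(docText, 12, 4)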
I very much need to know this also. First, tools [0] and prompts [1]. I'll get back to you in a minute while I back-trace the calling path. One thing to note is that they use .tsx for rendering the prompts and tool responses.
1. User selects ask or edit and AskAgentIntent.handleRequest or EditAgentIntent.handleRequest is called on character return.

2. DefaultIntentRequestHandler.getResult() -> createInstance(AskAgentIntentInvocation) -> getResult -> intent.invoke -> runWithToolCalling(intentInvocation) -> createInstance(DefaultToolCallingLoop) -> loop.onDidReceiveResponse -> emit _onDidReceiveResponse -> loop.run(this.stream, pauseCtrl) -> runOne() -> getAvailableTools -> createPromptContext -> buildPrompt2 -> buildPrompt -> [somewhere in here the correct tool gets called] -> responseProcessor.processResponse -> doProcessResponse -> applyDelta ->

[0] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...

[1] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...

[2] src/extension/intents/node/toolCallingLoop.ts
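If you don't want to chase that chain yourself, the shape of such a tool-calling loop is roughly the sketch below. Everything here (types, names, signatures) is made up for illustration; presumably the real version is DefaultToolCallingLoop in [2].

    // Heavily simplified sketch of a tool-calling loop like the one traced above.
    // Types and names are illustrative only, not the extension's real API.
    interface ToolCall { name: string; args: unknown; }
    interface ModelTurn { text: string; toolCalls: ToolCall[]; }

    async function runToolCallingLoop(
      buildPrompt: () => Promise<string>,           // re-renders the prompt each turn
      callModel: (prompt: string) => Promise<ModelTurn>,
      invokeTool: (call: ToolCall) => Promise<string>,
      maxIterations = 10,
    ): Promise<string> {
      let toolTranscript = '';
      for (let i = 0; i < maxIterations; i++) {
        const prompt = (await buildPrompt()) + toolTranscript;
        const turn = await callModel(prompt);
        if (turn.toolCalls.length === 0) {
          return turn.text;                         // no more tools requested: final answer
        }
        for (const call of turn.toolCalls) {
          // "somewhere in here the correct tool gets called"
          const result = await invokeTool(call);
          toolTranscript += `\n[tool ${call.name}]: ${result}`;
        }
      }
      return toolTranscript;
    }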
Something I’ve wanted to hack together for a while is a custom react-renderer and react-reconciler for prompt templating so that you can write prompts with JSX.
I haven’t really thought about it beyond “JSX is a templating language and templating helps with prompt building and declarative is better than spaghetti code like LangChain.” But there’s probably some kernel of coolness there.
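The kernel is pretty small: a prompt "component" is just a function from props and children to a string, and a custom jsxFactory would compile the JSX sugar down to nested calls like the ones below. A toy sketch only, not how the extension's .tsx prompts actually work:

    // Toy sketch of JSX-as-prompt-template: components render to strings, not DOM.
    type Child = string;
    type PromptComponent<P> = (props: P, ...children: Child[]) => string;

    const SystemMessage: PromptComponent<object> = (_props, ...children) =>
      `[system]\n${children.join('')}\n\n`;
    const UserMessage: PromptComponent<object> = (_props, ...children) =>
      `[user]\n${children.join('')}\n\n`;
    const Conversation: PromptComponent<object> = (_props, ...children) => children.join('');

    // <Conversation><SystemMessage>...</SystemMessage><UserMessage>...</UserMessage></Conversation>
    // would compile (with a custom jsxFactory) down to nested calls like this:
    const promptText = Conversation(
      {},
      SystemMessage({}, 'You are a careful coding assistant.\n'),
      UserMessage({}, 'Explain what buildPrompt() does in this repo.\n'),
    );

A real renderer would presumably also want per-element priorities and token budgets, which is where a reconciler would start to earn its keep.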
You're asking if they break the user prompt into multiple chunks?
All I can find is counting the number of tokens and trimming to make sure the current turn's conversation fits. I cannot find any chunking logic that would make multiple requests. This logic lives in the classes that extend IIntentInvocation, which has a buildPrompt() method.

Will update when I find more info.
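For reference, that kind of trimming is usually the standard pattern sketched below (countTokens and the message shape are placeholders, not the extension's actual API):

    // Sketch of "count tokens and trim until the current turn fits"; no chunking
    // into multiple requests, just dropping old turns. Placeholder types/names.
    interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }

    function trimToBudget(
      messages: ChatMessage[],
      countTokens: (text: string) => number,
      maxTokens: number,
    ): ChatMessage[] {
      const total = (msgs: ChatMessage[]) => msgs.reduce((n, m) => n + countTokens(m.content), 0);
      const trimmed = [...messages];
      // Drop the oldest non-system turns first until the conversation fits.
      while (trimmed.length > 1 && total(trimmed) > maxTokens) {
        const idx = trimmed.findIndex(m => m.role !== 'system');
        if (idx === -1) break;
        trimmed.splice(idx, 1);
      }
      return trimmed;
    }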
What is Copilot Chat but a front end to some Microsoft SaaS offering? There's nothing materially "open source" about that. All the important stuff is locked up behind the GitHub Copilot API. No one can customize the LLM design or training material. It certainly can't be self-hosted. This is just in-app advertising for yet another subscription service that sends your personal data to an amoral third party. There's no community, no public benefit, no commonwealth.
I beg to differ. All commercial SOTA models emit roughly the same quality of code, have roughly the same limitations, and have roughly the same ability to remain coherent over the amount of context passed to them.
As has always been the case, it's the mechanisms used to feed relevant contextual information and process the results that set one tool apart from another. Anyone can code up a small agent that calls an LLM in a loop and passes in file contents. As I'm sure you've noticed, this alone does not make for a good coding agent.
I don't follow the criticism; it's built on very weak foundations.
Open source is just that - open source. Whether it is useful to you or anyone at all is another matter.

Yet here we are: it is out there, and some are already poking at how they render responses from their API, trying to understand some of the technical choices they had to make. Someone has probably cloned this and started plugging in their own API, or reverse engineering the various API calls.
In the end, the fact that it exists makes a difference. It won't be useful to everyone, especially non-technical people who've never seen the nuts and bolts of a VS Code extension.
That is why people are comfortable open sourcing things like this. It is good publicity and they don't lose anything. On the other hand, curious devs get to poke around and see how their Copilot prompts were processed by the plugin, or how it handles attaching files to context, and even what it sends in its payloads.
Of course most of the value is on the API service side. That holds true for most applications these days.
No, that's source available. See the OSI definition for what 'open source' means. And this is precisely the issue with 'open source' vs 'free software'. Once you rewire your brain for the latter, it's very obvious why a project like this is simply open-washing for PR points.
I mean you're right it's just a front end. And front ends can be open sourced? Obviously this has some public value: other people don't have to build a frontend starting from zero.
I don't think it's well-aimed criticism to say that the LLM design/training material itself should have been made open source. Pretty much no one in the open source community would have the computational resources to actually do anything with this...
I have a hard time getting excited about this when they have such an atrocious record of handling pull requests in VS Code already: https://github.com/microsoft/vscode/pulls
I hate this argument. Just because something is open source doesn't mean the maintainers are obliged to commit to or comment on every pull request; that takes development time. If that notion really bothers you, you are free to fork VSCode and close all 600 pull requests on your fork.
It's a common theme across most (all?) Microsoft "Open Source" repos. They publish the codebase on GitHub (which implies a certain thing on its own), but accept very little community input or contributions - if any.

These repos will usually have half a dozen or more Microsoft employees with "Project Manager" titles and the like - extremely "top heavy". All development, decision making, roadmap planning and more are done behind closed doors. PRs go dormant for months or years... Issues get some sort of cursory "thanks for the input" response from a PM... then crickets.

I'm not arguing that all open source needs to be a community and accept contributions. But let's be honest - this is deliberate on Microsoft's part. They want the "good vibes" of being open source friendly, but corporate Microsoft still isn't ready to embrace open source. i.e., it's fake open source.
I've looked at a bunch of the popular JS libraries I depend on and they are all the same story, hundreds of open PRs. I think it's just difficult to review work from random people who may not be implementing changes the right way at all. Same with the project direction/roadmap, I'd say the majority of open source repos are like that. People will suggest ideas/direction all day and you can't listen to everyone.
Not sure for VSCode, but for .NET 9 they claim: "There were over 26,000 contributions from over 9,000 community members! "
Have you even used any of their products lately? Where "lately" = the last 15 years...
> in a minute
Honestly. Why the hurry?
"Copilot chat" isn't open source. It's the service.
I don't understand this criticism.
The criticism is that most of the value is (presumably) on the API service side.
https://gwern.net/complement
I'm no fan of Microsoft but that's a massive maintenance burden. They must have multiple people working on this full time.
All the good FOSS vibes, without any of the hard FOSS work...
https://github.com/microsoft/vscode/pulls?q=is%3Apr+is%3Aclo...