"- cursor position marked as ${CURSOR_TAG}: Indicates where the developer's cursor is currently located, which can be crucial for understanding what part of the code they are focusing on."
I was not aware that was a thing and useful to know. Thanks!
I very much need to know this also. First, tools [0] and prompts [1]. I'll get back to you in a minute while I back trace the calling path. One thing to note is that they use .tsx for rendering the prompts and tool responses.
1. User selects ask or edit and AskAgentIntent.handleRequest or EditAgentIntent.handleRequest is called on character return.
Something I’ve wanted to hack together for a while is a custom react-renderer and react-reconciler for prompt templating so that you can write prompts with JSX.
I haven’t really thought about it beyond “JSX is a templating language and templating helps with prompt building and declarative is better than spaghetti code like LangChain.” But there’s probably some kernel of coolness there.
You're asking if they break the user prompt into multiple chunks?
All I can find is counting number of tokens and trimming to make sure the current turn conversation fits. I can not find any chunking logic to make multiple requests. This logic exists in the classes that extend IIntentInvocation which as buildPrompt() method.
What is Copilot Chat but a front end to some Microsoft SaaS offering? There's nothing materially "open source" about that. All the important stuff is locked up behind the GitHub Copilot API. No one can customize the LLM design or training material. It certainly can't be self-hosted. This is just in-app advertising for yet another subscription service that sends your personal data to an amoral third party. There's no community, no public benefit, no commonwealth.
I beg to differ. All commercial SOTA models emit roughly the same quality of code and have roughly the same limitations and ability to remain coherent in the size of context passed to them.
As has always been the case, it's the mechanisms used to feed relevant contextual information and process results that sets one tool apart from another. Everyone can code up a small agent that calls in LLM in a loop and passes in file contents. As I'm sure you've noticed, this alone does not make for a good coding agent.
I don't follow the criticism. It is built on very weak foundations.
Open source is just that - open source. Whether it is useful to you
or anyone at all is another matter.
Yet here we are, it is out there, some are already poking at how they render
responses from their api. Trying to understand some of the technical choices
they had to make. Someone has probably cloned this and started pluggin in their own api - or reverse engineering the various api calls.
In the end, the fact that it exists makes a difference. It won't be useful to all especially non-technical people who've never seen the nuts and bolts of a vscode extension.
That is why people are comfortable open sourcing things like this. It is good publicity and they don't loose anything. On the other hand curious devs get to poke around and wonder how their copilot prompts were processed by the plugin. Or how it handles attaching files to context. And even what it sends in its payloads.
Of course most of the value is on the API service side. That holds true for most applications these days.
No, that's source available. See the OSI definition for what 'open source' means. And this is precisely the issue with 'open source' vs 'free software'. Once you rewire your brain for the latter, it's very obvious why a project like this is simply open-washing for PR points.
I mean you're right it's just a front end. And front ends can be open sourced? Obviously this has some public value: other people don't have to build a frontend starting from zero.
I don't think it's well-aimed criticism to say that the LLM design/training material itself should have been made open source. Pretty much no one in the open source community would have the computational resources to actually do anything with this...
I have a hard time getting excited about this when they have such an atrocious record of handling pull requests in VS Code already: https://github.com/microsoft/vscode/pulls
I hate this analogy. Just because something is open source, doesn’t mean it is forced to commit or comment on every pull request which takes development time. If that notion really bothers you, you are free to fork VSCode and close all 600 pull requests on your fork.
It's a common theme across most (all?) Microsoft "Open Source" repos. They publish the codebase on Github (which implies a certain thing on it's own), but accept very little community input/contributions - if any.
These repo's will usually have half a dozen or more Microsoft Employees with "Project Manager" titles and the like - extremely "top heavy". All development, decision making, roadmap and more are done behind closed doors. PR's go dormant for months or years... Issues get some sort of cursory "thanks for the input" response from a PM... then crickets.
I'm not arguing all open source needs to be a community and accept contributions. But let's be honest - this is deliberate on Microsoft's part. They want the "good vibes" of being open source friendly - but corporate Microsoft still isn't ready to embrace open source. ie, it's fake open source.
I've looked at a bunch of the popular JS libraries I depend on and they are all the same story, hundreds of open PRs. I think it's just difficult to review work from random people who may not be implementing changes the right way at all. Same with the project direction/roadmap, I'd say the majority of open source repos are like that. People will suggest ideas/direction all day and you can't listen to everyone.
Not sure for VSCode, but for .NET 9 they claim: "There were over 26,000 contributions from over 9,000 community members! "
Here's the system prompt template they use: https://github.com/microsoft/vscode-copilot-chat/blob/4c72d6...
"- cursor position marked as ${CURSOR_TAG}: Indicates where the developer's cursor is currently located, which can be crucial for understanding what part of the code they are focusing on."
I was not aware that was a thing; useful to know. Thanks!
Interesting to hear how others use these tools. I often phrase things as “this line/method”, which assumes the tool knows where my cursor is.
Isn't that needed for the tab completion?
Copilot in VS Code is kind of lackluster and really missing the sort of polish you’d expect from a company like Microsoft.
USED to expect from Microsoft.
Have you even used any of their products lately? Where "lately" = the last 15 years...
Quick, someone use AI to scan the codebase and explain Copilot Chat's decision tree for how it handles prompts and responses.
I very much need to know this also. First, tools [0] and prompts [1]. I'll get back to you in a minute while I trace back the calling path. One thing to note is that they use .tsx for rendering the prompts and tool responses.
1. User selects Ask or Edit, and AskAgentIntent.handleRequest or EditAgentIntent.handleRequest is called when the user hits return.
2. DefaultIntentRequestHandler.getResult() -> createInstance(AskAgentIntentInvocation) -> getResult -> intent.invoke -> runWithToolCalling(intentInvocation) -> createInstance(DefaultToolCallingLoop) -> loop.onDidReceiveResponse -> emit _onDidReceiveResponse -> loop.run(this.stream, pauseCtrl) -> runOne() -> getAvailableTools -> createPromptContext -> buildPrompt2 -> buildPrompt -> [somewhere in here the correct tool gets called] -> responseProcessor.processResponse -> doProcessResponse -> applyDelta -> (a rough sketch of this loop shape follows the links below)
[0] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...
[1] https://github.com/microsoft/vscode-copilot-chat/blob/main/s...
[2] src/extension/intents/node/toolCallingLoop.ts
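To make that chain in step 2 a bit more concrete, here is a rough sketch of the general shape of such a tool-calling loop. Every name here (Tool, ModelResponse, runToolCallingLoop, etc.) is my own illustrative assumption, not the actual types in toolCallingLoop.ts; treat it as a reading aid, not the real implementation.

    interface ToolCall { name: string; args: unknown; }
    interface ModelResponse { text: string; toolCalls: ToolCall[]; }
    interface Tool { name: string; invoke(args: unknown): Promise<string>; }

    // Stand-in for the model endpoint; the real extension streams deltas
    // through a response processor (processResponse -> applyDelta).
    type ChatEndpoint = (prompt: string) => Promise<ModelResponse>;

    async function runToolCallingLoop(
      endpoint: ChatEndpoint,
      tools: Tool[],
      buildPrompt: (history: string[]) => string,
      maxTurns = 5,
    ): Promise<string> {
      const history: string[] = [];

      for (let turn = 0; turn < maxTurns; turn++) {
        // Roughly where buildPrompt()/buildPrompt2 would assemble the context.
        const response = await endpoint(buildPrompt(history));
        history.push(response.text);

        // No tool calls means the model is done; return its final text.
        if (response.toolCalls.length === 0) {
          return response.text;
        }

        // Otherwise run each requested tool and feed the result back in
        // on the next iteration.
        for (const call of response.toolCalls) {
          const tool = tools.find((t) => t.name === call.name);
          const result = tool
            ? await tool.invoke(call.args)
            : `unknown tool: ${call.name}`;
          history.push(`[tool:${call.name}] ${result}`);
        }
      }

      return history[history.length - 1] ?? "";
    }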
Something I’ve wanted to hack together for a while is a custom react-renderer and react-reconciler for prompt templating so that you can write prompts with JSX.
I haven’t really thought about it beyond “JSX is a templating language and templating helps with prompt building and declarative is better than spaghetti code like LangChain.” But there’s probably some kernel of coolness there.
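For what it's worth, the bones of that idea can be sketched without a full reconciler: treat components as plain functions that return prompt fragments and "render" the tree to a string. Everything below (h, renderPrompt, SystemMessage, UserMessage) is an invented illustration, not @vscode/prompt-tsx or anything the extension ships.

    type PromptNode = string | number | PromptElement | PromptNode[];

    interface PromptElement {
      tag: (props: Record<string, unknown> & { children: PromptNode[] }) => PromptNode;
      props: Record<string, unknown>;
      children: PromptNode[];
    }

    // Minimal JSX factory (what a /** @jsx h */ pragma would call): components
    // are plain functions that return prompt nodes instead of DOM nodes.
    function h(
      tag: PromptElement["tag"],
      props: Record<string, unknown> | null,
      ...children: PromptNode[]
    ): PromptElement {
      return { tag, props: props ?? {}, children };
    }

    // "Render" a prompt tree to a flat string.
    function renderPrompt(node: PromptNode): string {
      if (Array.isArray(node)) return node.map(renderPrompt).join("");
      if (typeof node === "string" || typeof node === "number") return String(node);
      return renderPrompt(node.tag({ ...node.props, children: node.children }));
    }

    // Example components: role-tagged messages built from their children.
    const SystemMessage = (p: { children: PromptNode[] }): PromptNode =>
      ["System:\n", p.children, "\n\n"];
    const UserMessage = (p: { children: PromptNode[] }): PromptNode =>
      ["User:\n", p.children, "\n"];

    // With a JSX pragma this would read <SystemMessage>...</SystemMessage>.
    const prompt =
      renderPrompt(h(SystemMessage, null, "You are a careful coding assistant.")) +
      renderPrompt(h(UserMessage, null, "Explain what this function does."));

    console.log(prompt);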
Care to also check if they do prompt decomposition into multiple prompts?
You're asking if they break the user prompt into multiple chunks?
All I can find is token counting and trimming to make sure the current turn's conversation fits. I cannot find any chunking logic that makes multiple requests. This logic lives in the classes that extend IIntentInvocation, which have a buildPrompt() method.
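Concretely, the trimming described above probably looks something like this sketch; TurnMessage, countTokens, and trimToBudget are names I made up to illustrate the idea, and the 4-characters-per-token estimate is a stand-in for a real tokenizer.

    interface TurnMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    // Crude token estimate (~4 characters per token); the real extension
    // would use the tokenizer for the target model.
    function countTokens(text: string): number {
      return Math.ceil(text.length / 4);
    }

    // Keep the system message plus the most recent turns that fit the budget,
    // dropping the oldest history first. Note: this trims a single request;
    // nothing here splits the prompt into multiple requests.
    function trimToBudget(messages: TurnMessage[], budget: number): TurnMessage[] {
      const [system, ...rest] = messages;
      const kept: TurnMessage[] = [];
      let used = countTokens(system.content);

      for (const msg of rest.slice().reverse()) {
        const cost = countTokens(msg.content);
        if (used + cost > budget) break;
        kept.unshift(msg);
        used += cost;
      }
      return [system, ...kept];
    }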
I believe it's this paper, but... not certain: https://arxiv.org/abs/2210.02406
Will update when I find more info.
What is Copilot Chat but a front end to some Microsoft SaaS offering? There's nothing materially "open source" about that. All the important stuff is locked up behind the GitHub Copilot API. No one can customize the LLM design or training material. It certainly can't be self-hosted. This is just in-app advertising for yet another subscription service that sends your personal data to an amoral third party. There's no community, no public benefit, no commonwealth.
I beg to differ. All commercial SOTA models emit roughly the same quality of code, and have roughly the same limitations and the same ability to remain coherent over the context passed to them.
As has always been the case, it's the mechanisms used to feed in relevant contextual information and process the results that set one tool apart from another. Anyone can code up a small agent that calls an LLM in a loop and passes in file contents. As I'm sure you've noticed, that alone does not make for a good coding agent.
I don't follow the criticism; it rests on very weak foundations. Open source is just that - open source. Whether it is useful to you or anyone at all is another matter.
It's an open source... API connector to a closed source product.
"Copilot chat" isn't open source. It's the service.
It's white-washing through "Open Source". No one will benefit from this
Yet here we are: it's out there, and some are already poking at how they render responses from their API, trying to understand some of the technical choices they had to make. Someone has probably cloned this and started plugging in their own API, or reverse engineering the various API calls.
In the end, the fact that it exists makes a difference. It won't be useful to everyone, especially non-technical people who've never seen the nuts and bolts of a VS Code extension.
Doesn't open source mean users get the source code?
I don't understand this criticism.
They get the source code to a client.
The criticism is that most of the value is (presumably) on the API service side.
https://gwern.net/complement
That is why people are comfortable open sourcing things like this. It is good publicity and they don't lose anything. On the other hand, curious devs get to poke around and see how their Copilot prompts were processed by the plugin, how it handles attaching files to context, and even what it sends in its payloads.
Of course most of the value is on the API service side. That holds true for most applications these days.
No, that's source available. See the OSI definition for what 'open source' means. And this is precisely the issue with 'open source' vs 'free software'. Once you rewire your brain for the latter, it's very obvious why a project like this is simply open-washing for PR points.
I mean, you're right, it's just a front end. And front ends can be open sourced? Obviously this has some public value: other people don't have to build a front end starting from zero.
I don't think it's well-aimed criticism to say that the LLM design/training material itself should have been made open source. Pretty much no one in the open source community would have the computational resources to actually do anything with this...
But they might have the computational resources to demonstrate how these companies are breaking the copyright law they loved until recently.
They are not obligated to provide it even if people have the computational resources to operationalise it.
I won't trust Microsoft or Google until the end of the universe.
Tell me the competition is winning without telling me the competition is winning...
I have a hard time getting excited about this when they have such an atrocious record of handling pull requests in VS Code already: https://github.com/microsoft/vscode/pulls
It looks to me like they close nearly 30 PRs every day. That's kind of amazing.
I'm no fan of Microsoft but that's a massive maintenance burden. They must have multiple people working on this full time.
If you examine the merged PRs, the overwhelming majority are from Microsoft employees. Meanwhile, community contributions sit and rot.
I thought they just open sourced this? Was there enough time to start reviewing community contributions?
Yet the Settings UI is still a nonsensical mess.
That's because it's Microsoft's Trademarked version of Open Source.
All the good FOSS vibes, without any of the hard FOSS work...
I hate this analogy. Just because something is open source doesn’t mean its maintainers are forced to merge or comment on every pull request, which takes development time. If that notion really bothers you, you are free to fork VSCode and close all 600 pull requests on your fork.
Agree. OSS is hard work and not obligatory.
It's a common theme across most (all?) Microsoft "Open Source" repos. They publish the codebase on GitHub (which implies a certain thing on its own), but accept very little community input or contributions, if any.
These repos will usually have half a dozen or more Microsoft employees with "Project Manager" titles and the like, making them extremely top-heavy. All development, decision making, roadmapping and more are done behind closed doors. PRs go dormant for months or years... Issues get some sort of cursory "thanks for the input" response from a PM... then crickets.
I'm not arguing all open source needs to be a community and accept contributions. But let's be honest: this is deliberate on Microsoft's part. They want the "good vibes" of being open source friendly, but corporate Microsoft still isn't ready to embrace open source. I.e., it's fake open source.
I've looked at a bunch of the popular JS libraries I depend on, and they all tell the same story: hundreds of open PRs. I think it's just difficult to review work from random people who may not be implementing changes the right way at all. Same with project direction and roadmap; I'd say the majority of open source repos are like that. People will suggest ideas and directions all day, and you can't listen to everyone.
Not sure about VS Code, but for .NET 9 they claim: "There were over 26,000 contributions from over 9,000 community members!"
F. o. r. k. Everything costs money, waaaay more than a $5 "buy me a coffee". Every PR MS closes costs them thousands of dollars.
I'm not sure I see the problem. The number of merged PR's looks on the high side for a FOSS project.
https://github.com/microsoft/vscode/pulls?q=is%3Apr+is%3Aclo...
There are just two forms of code - public domain and private. It's just that some people don't see it yet.