I took a look at a project I maintain[0], and wow. It's so wrong in every section I saw. The generated diagrams make no sense. The text sections take implementation details that don't matter and present them to the user like they need to know them. It's also outdated.
I hope actual users never see this. I dread the thought of having to go around to various LLM-generated sites, correcting documentation I never approved, just to stop confusing users who are tricked into reading it.
[0]: https://deepwiki.com/blopker/codebook
I just tried it on several of my repos and I was rather impressed.
This is another one of those bizarre situations that keeps happening in AI coding related matters where people can look at the same thing and reach diametrically opposed conclusions. It's very peculiar and I've never experienced anything like it in my career until recently.
> at the same thing
But you’re not looking at the same thing — you’re looking at two completely different sets of output.
Perhaps their project uses a more obscure language, has a more complex architecture, or resembles another project that's tripping up the interpretation of it. You can have excellent results without it being perfect for everything. Nothing is perfect, and it's important for the people making these things to know how it fails, right?
In my career I’ve never seen such aggressive dismissal of people’s negative experiences without even knowing if their use case is significantly different.
Which repos worked well? I've had the same experience as op- unhelpful diagrams and bad information hierarchy. But I'm curious to see examples of where it's produced good output!
You could link your docs so we can compare them to OP's docs.
No need to guess.
> people can look at the same thing and reach diametrically opposed conclusions. It's very peculiar and I've never experienced anything like it in my career until recently
React vs other frameworks (or no framework). Object oriented vs functional. There's loads of examples of this that predate AI.
I don't think it's quite the same. The cases you mention are more like two alternative but roughly functionally equivalent things. People still argue and use both, but the argument is different. Even if people don't explicitly acknowledge it, at some level they understand it's a difference in taste.
This feels to me more like the horses vs cars thing, computers vs... something (no computers?), crypto vs "dollar-pegged" money, etc. It's deeper. I'm not saying the AI people are the "car" people, just that...there will be one opinion that will exist in 5-20 years, and the other will be gone. Which one... we'll see.
Democrats and republicans?
I went to the lodash docs and asked about how I'd use the 'pipeline' operator (which doesn't exist) and it correctly pointed out that pipeline isn't a thing, and suggested chain() for normal code and flow() for lodash fp instead. That's pretty much spot on. If I was guessing I'd suggest that the base model has a lot more lodash code examples in the training data, which probably makes a big difference to the quality of the output.
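For anyone who hasn't used those, here's roughly what the two suggestions amount to in practice. A minimal TypeScript sketch with made-up data (this is just standard lodash usage, not DeepWiki output):

    import _ from "lodash";
    import { flow, filter, map } from "lodash/fp";

    const users = [
      { name: "ada", active: true },
      { name: "bob", active: false },
    ];

    // "Normal" lodash: wrap with chain(), call methods, unwrap with value()
    const activeNames = _.chain(users)
      .filter((u) => u.active)
      .map((u) => u.name)
      .value();

    // lodash/fp: flow() composes data-last, auto-curried functions into a pipeline
    const activeNamesFp = flow(
      filter((u: { name: string; active: boolean }) => u.active),
      map((u: { name: string; active: boolean }) => u.name),
    )(users);

    console.log(activeNames, activeNamesFp); // ["ada"] ["ada"]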
> The text sections take implementation details that don't matter and present them to the user like they need to know them. It's also outdated.
The point of the wiki is to help people learn the codebase so they can possibly contribute to the project, not for end users. It absolutely should explain implementation details. I do agree that it goes overboard with the diagrams. I’m curious, I’ve seen other moderately sized repo owners rave about how DeepWiki did very well in explaining implementation details. What specifically was it getting wrong about your code in your case? Is it just that it’s outdated?
I dunno, it seems to be real excited about a VS Code extension that doesn't exist and isn't mentioned in the actual documentation. There's just too many factual errors to list.
>I dunno, it seems to be real excited about a VS Code extension that doesn't exist and isn't mentioned in the actual documentation. There's just too many factual errors to list.
There is a folder for a VS Code extension here[0]. It seems to have a README with installation instructions. There is also an extension.ts file, which seems to me to be at least the initial prototype for the extension. Did you forget that you started implementing this?
[0] https://github.com/blopker/codebook/blob/c141f349a10ba170424...
In that folder is CHANGELOG.md[0] that indicates that this is unreleased. I'd say that including installation instructions for an unreleased version of the extension is exactly the issue that is being flagged.
[0] https://github.com/blopker/codebook/blob/main/vscode-extensi...
This thread should end up in the hall of fame, right next to the Dropbox one.
From a fellow LLM-powered app builder, I wish you best of luck!
Plot twist, OP has a doc mentioning it as unreleased.
What a plot twist
It’s funny, I accidentally put a link to the commit instead of the current repo file because I was investigating whether or not he committed it versus he recently took over the project and didn’t realize the previous owner had started one. But he is the one who actually committed the code. I guess LLMs are so good now that they’re stopping developers from hallucinating about code they themselves wrote.
here the maintainer says it doesn't exist. there's basically no way another interpretation is "more correct". the mere presence of files doesn't mean they're intended for use: they can be deprecated, internal, WIP, etc. this is why we need maintainers.
Wow. Better advertisement for LLM in three comments than anything OpenAI could come up with.
It might be internal, unfinished, a prototype, in testing and not yet for public use. It might exist but do something else.
This is not an ad for LLMs. If you think this is good, you should probably not ever touch code that humans interact with.
I fear the consequences will be even darker:
- Users are confused by autogenerated docs and don’t even want to try using a project because of it
- Real curated project documentation is no longer corrected by users feedback (because they never reach it)
- LLMs are trained on wrong autogenerated documentation: a downward spiral for hallucinations! (Maybe this one could then force users go look for the official docs? But not sure at this point…)
> LLMs are trained on wrong autogenerated documentation: a downward spiral for hallucinations! (Maybe this one could then force users go look for the official docs? But not sure at this point…)
On this, I think, we should have some kind of AI-generated meta-tag, like this: https://github.com/whatwg/html/issues/9479
I wonder what incentives there would be to actually adhere to a meta-tag like this? For example, imagine I send you my digital resume and it has an AI-generated footer tag on display. Maybe a bad example - I like the idea of this in general, but my mind wanders to the fact that large entities completely ignored the wishes of robots.txt when collecting the internet's text for their training corpora.
Large entities aside, I would use this to mark my own generated content. Would be even more helpful if you could get the LLM to recognise it which would allow you to prevent ouroboros situations.
Also, no one is reading your resume anymore and big corps cannot be trusted with any rule as half of them think the next-word-machine is going to create God.
> It's so wrong in every section I saw.
Not talking about this tool, but in general: incorrect LLM-generated documentation can have some value. A developer knows they should write some docs, but they're staring at a blank screen, not sure what to write, so they don't. Then the developer runs an LLM, gets a screenful of LLM-generated docs, notices it is full of mistakes, and starts correcting them. Suddenly, a screenful of half-decent docs.
For this to actually work, you need to keep the quantity of generated docs a trickle rather than a flood. Too many and the developer's eyes glaze over; they miss stuff or just can't be bothered. But a small trickle of errors to correct could actually be a decent motivator to build up better documentation over time.
At some point it will be less wrong (TM) and it'll be helpful. Feels generally like a good bet.
What model did you use?
> The text sections take implementation details that don't matter and present them to the user like they need to know them.
Yeah this seems to be a recurring issue on each of the repos I've tried. Some occasionally useful tables or diagrams buried in pages of distracting irrelevant slop.
This is made by “Devin” I believe.
I tried it with my repo and it is actually really nice. I kind of want to link to this so that anyone wanting to make contributions to my repo can learn about the code structure.
My repo has a plugin structure (https://github.com/ytreister/gibr), and I love how it added a section about adding a new plugin: https://deepwiki.com/ytreister/gibr/7.4-adding-a-new-issue-t...
As always, these kinds of things are good for "simple" stuff (e.g. stuff you don't really need AI for) but totally suck for "complicated" or "weird" things. For example, out of curiosity I ran it on one of my OSS projects: https://github.com/dvx/lofi
It's a cute little Electron-based mini Spotify player that gets maybe like 200 users a day and has 1.3k stars on GitHub. Code quality is pretty high and it's more or less "feature-complete." There's a lot of simple/typical React stuff in there, but there's also some weird stuff I had to do. For example, native volume capture is weird. But even weirder is having to mess with the Electron internal window boundaries (so people can move their Lofi window wherever they want to).
We're essentially suppressing window rect constraints using some funky Objective-C black magic[1]. The code isn't complicated[1], but it's weird and probably very specific to this use case. When I ask what "constraints" does, DeepWiki totally breaks, telling me it doesn't even have access to those source files[2] (which it does).
Visualizations were also actually disabled on macOS a few versions ago (because of the janky way you need to hook into the audio driver), but, again, DeepWiki doesn't really notice[3]. There have been issues/patch notes about this, so I feel those should be getting crawled.
[1] https://github.com/dvx/lofi/blob/master/src/native/black-mag...
[2] https://deepwiki.com/search/what-is-constraints_cc5c0478-e45...
[3] https://deepwiki.com/search/how-do-macos-visualizations-wo_d...
This gets posted pretty frequently.
231 points | 77 days ago | 53 comments
https://news.ycombinator.com/item?id=45002092
YMMV, my experience with DeepWiki is that it’s decent but the DX of the documentation is horrible and the diagrams are often just incorrect.
Worth mentioning this is a Cognition / Devin on-ramp and has been posted on HN a few times in just a couple months, feels a little sales-y to me.
From the title I assumed it would generate docs to put in the repo.
But it's docs outside the dev's purview on a deepwiki url, used to shepherd people into Devin. Wow. Talk about slimy.
How many errors does that contain? Does anyone know stats for that?
I see "AI summaries" on github all the time. It's like a wall of text and seems to be designed to be super-verbose but without seemingly being very informative.
I tried a few different repositories (both my own and various other people’s projects). They all yield the same:
Probably broken/down right now? I've looked at mine and it takes 10 to 15 minutes to process.
Maybe they only support github?
Yeah, I wanted to try it on my (GitLab) repo as well, but it also said "No repositories found". Clicking "Index any public repo" pops up a dialog that says "Search for a GitHub repository" and "or Enter the URL of a public GitHub repository".
So it looks like it's not actually any repository.
Do we need this, when we have tools like Claude Code, Codex etc that you can talk to about the codebase they are started in?
Agreed, nice idea in theory. But as a codebase owner I’d rather build tailored markdown files with a CLI agent to publish as my docs. And as a codebase consumer I probably only care about a codebase if I’m modifying or running it, which means a CLI agent makes the most sense and I can ask questions/generate .md files as we go.
This is a nice idea in theory. But you need excellent docs in the first place for it to work.
And if a human spent painstaking effort writing excellent docs, the least bit of respect I can give them is to read them.
> But you need excellent docs in the first place for it to work.
Are you sure? I just tried it on projects of mine that have almost zero documentation and it did a fairly good job.
Looks like it's impossible for me to use this service - when I try to submit the form, I get a reCAPTCHA challenge. By the time I complete it (Google requires me to make several attempts, each one being several pages), the page errors out in the background with "reCAPTCHA execution timeout".
Try solving it slowly, some captchas love that.
You need great pre-existing docs for something like this to work properly.
AI must RTFM. https://passo.uno/from-tech-writers-to-ai-context-curators/
It certainly helps, but in my experience you get 60-80% of the benefit just with code (except in legacy or otherwise terrible code, for example with misleading/outdated comments everywhere, bad variable/function names, etc - in that case more like 40%).
I don’t want to talk to my documentation. I just want the facts searchable and easily readable.
I agree wholeheartedly, at best I want a "smarter" search bar where I don't have to guess the exact wording of what I'm looking for, but the reply should still be a verbatim quote from the docs, not something regurgitated to be less accurate.
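To make that concrete, here's a rough sketch of what I mean, assuming some embedding API you plug in yourself (the embed function is a hypothetical stand-in, not any particular product's API): rank doc passages by similarity to the query, then return them verbatim instead of regenerating them.

    type Passage = { source: string; text: string; vector: number[] };
    type Embed = (text: string) => Promise<number[]>;

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        na += a[i] * a[i];
        nb += b[i] * b[i];
      }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Index once: embed each passage but keep the original text untouched.
    async function indexDocs(passages: { source: string; text: string }[], embed: Embed): Promise<Passage[]> {
      return Promise.all(passages.map(async (p) => ({ ...p, vector: await embed(p.text) })));
    }

    // Search: rank by similarity and return the top passages verbatim (no rewriting step).
    async function search(query: string, index: Passage[], embed: Embed, k = 3): Promise<Passage[]> {
      const q = await embed(query);
      return [...index]
        .sort((a, b) => cosine(b.vector, q) - cosine(a.vector, q))
        .slice(0, k);
    }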
This is an interesting thread. There are many instances of "this is bad, doesn't work, don't like it", and many instances of "it works reasonably well here, look: <url>".
Seems like a consistent pattern.
There was some article here on how LLMs are like gambling, in that sometimes you get great payouts and oftentimes not, and as psych 101 taught us, that kind of intermittent reward is addictive.
Interesting point, never thought of it like that, and I think there is some truth to that view. On the other hand, IIRC, this works best in instances where it's pure chance (you have no control over the likelihood of reward) and the probability is within some range (optimal is not 50%, I think, could be wrong).
I don't think either of these is true of LLMs. You obviously can improve their results with the right prompt + context + model choice, to a pretty large degree. The probability is hard to quantify, so I won't try. Let's just say that you wouldn't call yourself addicted to your car because you have a 1% chance of being stuck in the middle of nowhere if it breaks down and a 99% chance of a reward. Where the threshold lies, I'm not sure.
I wanted to try the tool with a repo I know. After a few attempts at selecting cars, buses, and crosswalks, I got a "captcha timeout error".
This doesn't work. It's better to prompt an agent with specific questions per subject. Having this general AI interpretation of a doc can be amazingly misleading. Nice idea, but unfortunately absolutely useless and even time wasting at the moment.
I've seen this idea from before Claude Code, Gemini CLI, etc. were a thing. It's not relevant anymore (unless you surpass these tools).
Cool idea, bad timing
I don't know the specifics of this particular tool; I assume it's at most using a couple of passes of some frontier model with a specific system prompt + custom tools (for example, code-specific RAG + some form of "summarize"). By "at most" I mean it probably isn't doing anything crazier than that.
But it seems to be producing docs that are better than I tend to see with basic "summarize this repo for me"-style prompts, which is what I usually use on a first pass.
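To be concrete about the assumption, here's the shape of the pipeline I have in mind, sketched in TypeScript. Every name here is hypothetical; it's not DeepWiki's actual architecture, just "code-specific RAG + summarize" spelled out:

    type Chunk = { path: string; text: string };

    interface Retriever {
      // returns the repo chunks most relevant to a topic, e.g. "storage layer"
      retrieve(topic: string, limit: number): Promise<Chunk[]>;
    }

    interface Llm {
      complete(systemPrompt: string, userPrompt: string): Promise<string>;
    }

    async function generateWikiPage(topic: string, retriever: Retriever, llm: Llm): Promise<string> {
      // Pass 1: pull in the code/doc chunks that look relevant to this topic.
      const chunks = await retriever.retrieve(topic, 20);
      const context = chunks.map((c) => `// ${c.path}\n${c.text}`).join("\n\n");

      // Pass 2: ask the model to summarize that context into a wiki section.
      return llm.complete(
        "You are generating repository documentation. Only state what the provided code supports.",
        `Write a wiki page about "${topic}" based on these excerpts:\n\n${context}`
      );
    }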
Works pretty well for gdzig
https://deepwiki.com/gdzig/gdzig/1-overview
Hi! Cool to see you commenting here, great work on gdzig btw :)
Thanks! Who knew I would be known for that. Not me!
So I gave it a spin on two of my repos.
One is the extremely sprawling MarginaliaSearch repo[M1].
Here it did a decent job of capturing the architecture, though to be fair the architecture is well documented in the repo itself. It successfully identifies the most important components, which is also good.
But when describing the components, it only really succeeds where the components themselves are very self-contained and easy to grok. It did a decent job with e.g. the buffer pool[M2], but even then fails to define some concepts that would have made it easier to follow, e.g. what is a pin count in buffer management? This is standard terminology and something the model should know.
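(For anyone unfamiliar with the term: a pin count is the standard buffer-pool bookkeeping for how many callers are currently using a page; a frame can only be evicted once its pin count drops to zero. A generic illustration in TypeScript, not MarginaliaSearch's actual code:)

    class BufferFrame {
      private pinCount = 0;
      constructor(public pageId: number, public data: Uint8Array) {}

      pin(): void {
        this.pinCount++; // a caller is reading/writing this page; keep it resident
      }

      unpin(): void {
        if (this.pinCount === 0) throw new Error("unpin without matching pin");
        this.pinCount--;
      }

      get evictable(): boolean {
        return this.pinCount === 0; // only unpinned frames may be replaced or evicted
      }
    }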
I get the impression it lifts a lot of its facts from the comments and documentation that already exist, which may lead it to propagate outdated falsehoods about the code.
[M1] https://deepwiki.com/MarginaliaSearch/MarginaliaSearch
[M2] https://deepwiki.com/MarginaliaSearch/MarginaliaSearch/5.2-b...
The other is the SlopData[S1] repo, which contains a small library for columnar data serialization.
This one I wasn't very impressed with. It produced more documentation than was necessary, mostly amending what was already there with incorrect statements it seems to have pulled out of its posterior[S2][S3].
The library is very low-abstraction, and there simply isn't a lot of architecture to diagram, but the model seems to insist that there must be a lot of architecture and then produces excessive diagrams as a result.
[S1] https://deepwiki.com/MarginaliaSearch/SlopData
[S2] https://deepwiki.com/MarginaliaSearch/SlopData#storage-types (performance numbers are completely invented, in practice reading compressed data is typically faster than plain data)
[S3] https://deepwiki.com/MarginaliaSearch/SlopData/6.3-zip-packa... (the overview section is false, all these tables are immutable).
So overall it gives me a bit of a broken clock vibe. When it's right, it's great. When it isn't, it's not very useful. Good at the stuff that is already easy, borderline useless for the stuff that isn't.
Is the documentation generated using LLMs? Anyway, this would only work if the documentation is truly top-notch and completely accurate.
This worked well for me for some things I've recently been learning/working on. One improvement I'd suggest: the citations showing where information has come from aren't hyperlinks; it would be very useful if they were!
I use this heavily to navigate the neondatabase/neon repo and it has been invaluable
I'm very curious how this will turn out, and especially when :)
https://github.com/cameyo42/newLISP-Code
Looks really promising:
https://deepwiki.com/cameyo42/newLISP-Code/3.1-newlisp-99-pr...
I find it's better than context7, but that's not saying much
Context7 uses the real documentation, if I'm not mistaken, and just provides you a RAG MCP.
The diagrams generated are arbitrary and make no sense. This needs improvement.
I insta-banned this site in Kagi. The trigger for me: utter disrespect for the user with unhideable glassy floating chatbox at the bottom of the page.
And WTF is with these floating boxes popping up everywhere?!? They are tailor-made to trigger anxiety in people with OCD. They look like a notification that keeps grabbing your attention as you scroll the text. Example: https://aws.amazon.com/blogs/aws/secure-eks-clusters-with-th...
Help yourself with https://secure.fanboy.co.nz/fanboy-annoyance.txt or one of its variants.
> floating boxes
Will need boxblock.
It works! I love using it for open source repos.