Don't think so; I expect they just mean replying to comments to mention it, or that they posted another article and people commented about having seen this, "isn't it from another article of yours", etc.
> take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.
I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.
Waiting for the LLM evangelists to tell us that their box of weights of choice did that on purpose to create engagement as a sentient entity understanding the nature of tech marketing, or that OP should try again with quatuor 4.9-extended (that really ships AGI with the $5k monthly subscription addon) because it refactored their pet project last week into a compilable state, after only boiling 3 oceans.
Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a draw.io SVG (or a Miro diagram through their MCP).
I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.
How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.
So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.
AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.
It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.
If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?
You are assuming: A) That everyone who saw this would go as far as to post publicly about it (and not just chuckle / send it to their peers privately) and B) Any post about this would reach you/HN and not potentially be lost in the sea of new content.
If you work in a medium to large company, you know most of the documentation is there for compliance reasons or for showing others that you did something at one point. You could probably put slop at the end of documents, so long as you keep the headlines relevant, and no one would ever read or notice it.
> So we end up shipping documentation that nobody really reads
I'd note that the documentation may have been read and noticed as flawed, but some random person noticing that it's flawed is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it, because I'm busy trying to figure out how to actually accomplish the goal for which I was reading the documentation rather than stopping what I'm doing to complain about how bad it is.
This says nothing to absolve everyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.
I usually would post it in our dev Slack chat and rant for a message or two about how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it on here/BlueSky.
> the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will just be more and more content like this that isn't so well-known or soon will get mutated or disguised in more advanced ways that this plagiarism no longer will be recognizable as such.
Most content will be less known and the ensloppified version more obfuscated... the author is lucky to have such an obvious association. Curious to see if MSFT will react in any meaningful way to this.
>What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.
"Don't attribute to malice what can be adequately explained by stupidity". I bet someone just typed into ChatGPT/Copilot, "generate a Git flow diagram," and it searched the web, found your image, and decided to recreate it by using as a reference (there's probably something in the reasoning traces like, "I found a relevant image, but the user specifically asked me to generate one, so I'll create my own version now.") The person creating the documentation didn't bother to check...
In this case, we can chalk it up to malicious stupidity. Someone posting a reference aimed at learners, especially with Microsoft's reach and name recognition, has a responsibility to check the quality and accuracy of the materials. Using an AI tool doesn't absolve that responsibility one bit.
That old beautiful git branching model got printed into the minds of many. Any other visual is not going to replace it. The flood of 'plastic' incarnations of everything is abominable. Escape to jungles!!
Maybe you're missing the reference to the Morbius movie joke, which sounds surprisingly fitting. It's not like older HNers never made funny references.
The commenter you're responding to a) independently made the exact same reference; b) has a username like that of Jared Leto's other Disney tentpole flop role...
I propose to adopt the word "morge", a verb meaning "use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work".
A noun describing such a piece of slop could be "morgery".
On the one hand, I feel for people who have their creations ripped off.
On the other hand, it makes sense for Microsoft to rip this off, as part of the continuing enshittification of, well, everything.
Having been subjected to GitFlow at a previous employer, after having already done git for years and version control for decades, I can say that GitFlow is... not good.
It seems to me rather less likely that someone at Microsoft knowingly and deliberately took his specific diagram and "ran it through an AI image generator" than that someone asked an AI image generator to produce a diagram with a similar concept, and it responded with a chunk of mostly-memorized data, which the operator believed to be a novel creation. How many such diagrams were there likely to have been, in the training set? Is overfitting really so unlikely?
The author of the Microsoft article most likely failed to credit or link back to his original diagram because they had no idea it existed.
I don't think "LLM" and "hallucinated" are accurate; different kinds of AI create images, and I get the impression that they generally don't ascribe semantics to words in the same way that LLMs do, and thus when they draw letter shapes they typically aren't actually modelling the fact that the letters are supposed to spell a particular word that has a particular meaning.
A somewhat contrarian perspective is that this diagram is so simple and widely used and has been reproduced (i.e. redrawn) so many times that it is very easy to assume it does not have a single origin and that it's public domain.
> In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it.
If you mean that the Microsoft publisher shouldn't be faulted for assuming it would be okay to reproduce the diagram... then said publisher should have actually reproduced the diagram instead of morging it.
Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?
Or is it about using some graphs without attributing their authors?
If it's the latter (even if partially), then I have to disagree with that angle. A very widespread model isn't owned by anyone, surely; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.
(Sidenote: if it was meant in a lighthearted way then I can see it making sense)
Did you read the article? This is explicitly explained! At length!
It's not at all about the reuse; that's been done over and over with this diagram. It's about the careless copying that destroyed the quality. Nothing was wrong with the original diagram! Why run it through the AI at all?
Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?
I mean come on – the point literally could not be more clearly expressed.
LinkedIn is also a great example of this stuff at the moment. Every day I see posts where someone clearly took a slide or a diagram from somewhere, then had ChatGPT "make it better" and write text for them to post along with it. Words get mangled, charts no longer make sense, but these people clearly aren't reading anything they're posting.
It's not like LinkedIn was great before, but the business-influencer incentives there seem to have really juiced nonsense content that all feels gratingly similar. Probably doesn't help that I work in energy which in this moment has attracted a tremendous number of hangers-on looking for a hit from the data center money funnel.
LinkedIn is a masquerade ball dressed up as a business oriented forum. Nobody is showing their true selves, everyone is either grinding at their latest unicorn potential with their LLM BFF or posting a "thoughtful" story that is 100% totally real about a life changing event that somehow turns into a sales pitch at the end...
LinkedIn is a fucking asylum populated by the most unhinged “people” and bots. I don’t know a single serious technical person active on LinkedIn.
There are people who write genuinely interesting stuff there as well.
I use the block option there quite a lot. That cleans up my experience rather well.
Care to share some of them? Genuinely curious.
Of course they aren't. The text to go with those diagrams is also machine generated.
The comment you’re replying to already stated that.
This is so out of hand.
There's this. There's that video from Los Alamos discussed yesterday on HN, the one with a fake shot of some AI generated machinery. The image was purchased from Alamy Stock Photo. I recently saw a fake documentary about the famous GG-1 locomotive; the video had AI-generated images that looked wrong, despite GG-1 pictures being widely available. YouTube is creating fake images as thumbnails for videos now, and for industrial subjects they're not even close to the right thing. There's a glut of how-to videos with AI-generated voice giving totally wrong advice.
Then newer LLM training sets will pick up this stuff.
"The memes will continue" - White House press secretary after posting an altered shot of someone crying.
The war on facts continues. Facts are hard, they require a careful chain of provenance. It's much cheaper to just make up whatever people want to hear, safe in the knowledge that there will never be any negative consequences for you. Only other people, who aren't real anyway.
> recently saw a fake documentary about the famous GG-1 locomotive
It wouldn’t happen to be a certain podcast about engineering disasters, now, would it?
Similar story. I'm American but work and live outside the US, so I don't know how likely this would be if I had ordered from Amazon. But I ordered a rug for my sons' room from this country's equivalent to Amazon (that is, the most popular order-online-and-we-ship-to-you storefront in this country), and instead of what I ordered (a rug with an image showing the planets, with labels in English) I got an obviously AI-generated copy of the image, whose letters were often mangled (MARS looked like MɅPS, for example). Thankfully the storefront allowed me to return it for a refund, I ordered from a different seller on the second try, and this time I received a rug that precisely matched the image on the storefront. But yes, there are unscrupulous merchants who are using AI to sloppily copy other people's work.
Another similar story: My aunt passed away last year, and an acquaintance of my cousin sent her one of those "hug in a box" care packages you can buy off Amazon.
Except when it was delivered, this one said "hug in a boy" and "with heaetfelt equqikathy" (whatever the hell that means). When we looked up the listing on Amazon it was clear it was actually wrong in the pictures, just well hidden with well placed objects in front of the mistakes. It seems like they ripped off another popular listing that had a similar font/contents/etc.
Luckily my cousin found it hilarious.
This is hilarious actually. I am starting to lean into the "AI-dangerous" camp, but not because the chatbot will ever become sentient. It's precisely because of the increasingly widespread adoption of unreliable tools by the incompetent but self-confident Office Worker (R).
Automatic Soldier Švejk.
Microsoft employee (VP of something or other, for whatever Microsoft uses "VP" to mean) doing damage control on Bluesky: https://bsky.app/profile/scott.hanselman.com/post/3mez4yxty2...
> looks like a vendor, and we have a group now doing a post-mortem trying to figure out how it happened. It'll be removed ASAFP
> Understood. Not trying to sweep under rugs, but I also want to point out that everything is moving very fast right now and there’s 300,000 people that work here, so there’s probably a bunch of dumb stuff happening. There’s also probably a bunch of dumb stuff happening at other companies
> Sometimes it’s a big systemic problem and sometimes it’s just one person who screwed up
This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should. In either case -- no review process, or a failed review process -- the failure is definitionally systemic. If a single person can on their own whim publish not only plagiarised material, but material that is so obviously defective at a single glance that it should never see the light of day, that is in itself a failure of the system.
> "everything is moving very fast"
Then slow down.
With this objective lack of control, sooner or later your LLM experiments in production will drive into a wall instead of hitting a little pothole like this diagram.
Joke’s on you, I’ll cash out by then and move on to the next gig.
And at the same time, they have time to quickly brush it off with "looks like a vendor" even though people are still investigating. Yes, we can see it's moving really fast; "move fast and break things" seems to have infected Microsoft. Users are leaving Microsoft behind because everything is breaking, and then clueless VPs blame it on moving too fast?
> This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should.
Completely with you on this, plus I would add the following thoughts:
I don't think the size of the company should automatically be a proxy measure for a certain level of quality. Surely you can have slobs prevailing in a company of any size.
However, this kind of mistake should not be happening in a valuable company. Microsoft is currently still priced as a very valuable company, even with the significant corrections post Satya's crazy CapEx commitments from two weeks ago.
Yet recently the mistakes, errors, and "vendors without guidelines" seem to pile up a bit too much for a company supposedly worth 3-4T USD, culminating in this weird, random, but very educational case. If anything, it's an indicator that Microsoft may not really be as valuable as it is currently still perceived to be.
You’re incorrect on how the publishing process works. If a vendor wrote the document, it has a single repo owner (all those docs are on GitHub) who would need to sign off on a PR. There aren’t multiple layers or really any friction to get content on learn.msft.
I suggested that if there is no review process, it is a systemic issue, and that if there is a review process that failed to catch something this egregious, it is a systemic issue. My supposition is that regardless of how the publishing process works, there is a systemic failure here, and I made no claims as to how it actually works, so I'm not sure where the "you're incorrect on how it works" is coming from.
You said it takes multiple people screwing up, implying that publishing content had multiple gates/reviewers.
It doesn’t.
But if there are no gates, doesn't that mean the people who should have put the gates in there screwed up?
There is no singular publishing org at MSFT. Each product publishes its own docs, generally following a style guide. But the doc process is up to the doc owner(s).
How hard is this to understand.
Person A, possibly a vendor, pushed the content. Person B, working for MSFT, approved this process where the vendor could just push content, and trusted that this vendor/process would represent the standards of the MSFT brand even amid the temptation of new tooling. Thus, at least 2 people screwed up, and probably more, because MSFT is a large corp and the vendor might be, too.
A common word for saying "2 or more" is "multiple". Multiple people screwed up. Learn to fucking count.
I've seen better review processes in hobby projects
Neither deadlines nor cheap work-for-hire help any sort of review process, while a hobby project is normally done by someone who cares.
This is correct. It just takes one person to review it and you’re good to go.
There’s also a service that rates your grammar/clarity and you have to be above a certain score.
I'll quote the relevant part of the parent post in reply
> that is in itself a failure of the system
... and add some Beer flavor: POSIWID (the purpose of a system is what it does)
> In either case -- no review process, or a failed review process -- the failure is definitionally systemic.
Orthographic and grammar errors should have been corrected, but do you really expect a review process to identify that a diagram is a copy of one some rando published on the internet years ago?
It’s not just a copy. It’s a caricature of a copy, with plenty of nonsense in it: typos and weird “text”, broken arrows, etc. Even a cursory look gives a feeling that something’s fishy.
Weird text was already deemed acceptable by Microsoft in their documentation: they machine-translated most screenshots instead of recreating them in different locales, leading to the same problems as this image.
"Legal reviewed it and did not flag any issues!"
This is the same Microsoft that promised to indemnify any of its customers facing copyright lawsuits as a result of using its AIs. [0] So I'm sure legal reviewed it the same way, saying "Yep, our war chest is still ample".
[0]: https://www.reuters.com/technology/microsoft-defend-customer...
Shouldn't "where are we sourcing our content" be part of any publication review process?
No. I'd expect that "continvouclous morging" gets caught.
Plenty of people on the internet recognised it immediately. So sure, he may have been a rando when he created it, but not so much 15 years later...
Did the one MSFT employee that “reviewed” it know of this image? If not, it doesn’t matter how many people “on the Internet” recognized this image.
I’ll never understand the implied projection.
(I don’t think this was reviewed closely if at all)
Just that tiny image on his blog was enough for me to go "oh yeah, I used his diagram to explain this type of git workflow to colleagues a decade ago". Someone should have spotted that right away.
Yes. This is expected at any serious company as intellectual property violations can have serious consequences.
Here is the original: https://nvie.com/posts/a-successful-git-branching-model/
Here is the slop copy: https://web.archive.org/web/20251205141857/https://learn.mic...
The 'Time' axis points the wrong way, and is misspelled, using a non-existent letter - 'Tim' where the m has an extra hump.
It's pretty clear this wasn't reviewed at all.
A postmortem for that but not Copilot in notepad.exe? Priorities…
An entire post mortem for a morged diagram is wild
post morgem
It's post morgem time. [0]
[0] https://knowyourmeme.com/memes/its-morbin-time
Morgem? I barely know 'em!
Right to morgue
Oldest trick in the book... Shoot the vendor.
> everything is moving very fast right now
Now that's an interesting comment for him to include. The cynic in me could find / can think of lots of reasons from my YouTube feed as to why that might be so. What else is going on at Microsoft that could cause this sense of urgency?
My guess is there is some communication going out to every "manager", even the M1, that says this is your priority.
For example, I know of an unrelated mandate Microsoft has for its management. Anything the security team's analysis flags in code that you or your team owns must be fixed or somehow acceptably mitigated within the specified deadline. It doesn't matter if it is Newtonsoft.Json being "vulnerable" and the entire system is only built for use by MSFT employees. If you let this deadline slip, you have to explain yourself and might lose your bonus.
OK, so the remediation for the Newtonsoft.Json case is easy enough that it is worth doing, but the point is I have a conspiracy theory that internally MSFT has such a memo (yes, beyond what is publicly disclosed) going to all managers saying they must adopt Copilot, whatever "Copilot" means.
Yeah, isn't this why we're told everything "moves so much slower at a bigco" than at a startup?
> This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should.
Only if this is considered a failure.
Native English speakers may not know this, but for a very long time (since before automatic translation tools became adequate) pretty much all MSFT docs were machine translated to the user-agent language by default. Initially they were as useless as they were hilarious: true slop before the term was invented.
Seems like this is going to be the year of AI slop being released everywhere by Microsoft. Just wish they'd put as much effort into a post mortem for this one as they're doing for a diagram on a blog post https://github.com/microsoft/onnxruntime/issues/27263#issuec...
Microsoft seems to have thrown quality assurance overboard completely. Vibe generate everything, throw it at a wall, see what sticks. Tech bros are so afraid of regulation they even drop regulation inside their own companies. (just kidding)
It's not just throwing QA out, they are actively striving for lower quality because it saves money.
They're chasing that sweet cost reduction by making cheap steel without regard for what it'll be used for in the future.
Just a thought: the timeline of the vibe techs rolling out and the timeline of increasing product rot, sloppiness, and user-hostile “has anyone ever actually used this shit!?!” coming out of MS overlap.
Vibing won’t help out at all, and years from now we’re gonna have project math on why 10x-LLM-ing mediocre devs on a busted project that’s behind schedule isn’t the play (like how adding more devs to a late project generally makes it more late). But it takes years for those failures to aggregate and spread up the stack.
I believe the vibing is highlighting the missteps from the wave right before which has been cloud-first, cloud-integrated, cloud-upselling that cannibalized MS’s core products, multiplied by the massive MS layoff waves. MS used to have a lot of devs that made a lot of culture who are simply gone. The weakened offerings, breakdown of vision, and platform enshittification have been obvious for a while. And then ChatGPT came.
Stock price reflects how attractive stocks are for stock purchasers on the stock market, not how good something is. MS has been doing great things for their stock price.
LLMs make getting into emacs and Linux and OSS and OCaml easier than ever. SteamOS is maturing. Windows Subsystem for Linux is a mature bridge. It’s a bold time for MS to be betting on brand loyalty and product love, even if their shit worked.
Any excuse that tries to play down its own fault by pointing out other companies also have faults, is dishonest.
And that's exactly what happened here.
They've taken it down now and replaced with an arguably even less helpful diagram, but the original is archived: https://archive.is/twft6
Wow, it’s even worse than I thought. I assumed that the “convictungly morhing” would be the only problem. But no: the nonsense and inconsistent arrowheads, the missing annotations, the missing bubbles. The “tirm” axis…
That this was ever published shows a supreme lack of care.
And that's what they dared to show to the public. I shudder thinking about the state of their code...
The “turn” axis is great! Not only have they invented their own letter (it's not r, or n, or m, but one more than m!), it also points the wrong way.
Lots of the AI-isms with letters remind me of tom7's SIGBOVIK video, Uppestcase and Lowestcase Letters [advances in derp learning]:
https://www.youtube.com/watch?v=HLRdruqQfRk
Is it truly possible to make GitFlow look worse than reality?
It looks like typical "memorization" in image generation models. The author likely just prompted the image.
The model makers attempt to add guardrails to prevent this but it's not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications
Remember, mass copyright infringement is prosecuted if you're Aaron Swartz but legal if you're an AI megacorp.
"continvoucly morged" is such a perfect phrase to describe what happened, it's poetic
It's the sound of speaking when someone is stuffing AI down your throat.
I am waiting for Raymond Chen to post a "Microspeak: Morged" blog post.
Was reading the word morged thinking it was some new slang I hadn't heard of. Incredible.
If it wasn't before, it will be now.
I propose:
Morge: when an AI agent is attempting to merge slop into your repo.
Lifehack: you can prevent many morges by banning user claude on GitHub. Then GitHub will also tell you when a repo was morged up.
Do your part to keep GitHub from mutating into SourceMorge.
Same! I was about to go duck-searching for meaning, but thanks to jezzamon for pointing it out.
brb, printing a t-shirt that says "continvoucly morged"
You could add one of those Microslop memes that are going around.
Part of the VC/CM pipeline.
"Babe, wake up. New verb for slop just dropped."
It's a perfectly cromulent word.
Regarding the original git-flow model: I've never had anyone able to explain to me why it's worth the hassle to do all the integration work on the "develop" branch, while relegating the master/main branch to just being a place to park the tag from the latest release. Why not just use the master/main branch for integration instead of the develop branch - like the git gods intended - and then not have the develop branch at all? If your goal is to have an easy answer to "what's the latest release?", you have the tags for that in any case. Or if you really want to have a whole branch just to double-solve that one use-case, why not make a "release-tags" branch for that, instead of demoting the master/main branch to that role, when it already has a widely used, different meaning?
It's a pity that such a weird artifact/choice has made its way into a branching model that has become so widely implemented. Especially when the rest of it is so sensible - the whole "feature-branch, release-branch, hotfix" flow is IMO exactly right for versioned software where you must support multiple released versions of it in the wild (and probably the reason why it's become so popular). I just wish it didn't have that one weirdness marring it.
It can be beneficial if there is no mechanism that ensures that develop is always in a working state, but there is one that ensures that master is. The immediate benefit is that a new feature branch can always be started off master from a known-good state.
Of course, there are ways to enforce a known-good state on master without a dedicated develop branch, but it can be easier when having the two branches.
(I just dislike the name “develop”, because branch names should be nouns.)
I have been working with main/master for years now, and there's one problem you wouldn't have with develop: whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done. If your changes are somewhat independent, you can cherry-pick them from develop into master in an arbitrary order and call that a release whenever you want to.
I worked at a place that had GitLab review apps set up, where the QA people could just click a button and it would create an instance of the app with just that PR on it. Then they could test, approve, and kill the instance.
Then you can merge to master and it's immediately ready to go.
> Whenever you merge something into master, it kind of blocks the next release until its (non-continuous) QA is done.
That's what tags are for, QA tests the tagged release, then that gets released. Master can continue changing up until the next tag, then QA has another thing to test.
It's useful if your integration work takes some time - easy to run into with open source.
Imagine you have multiple contributors with multiple new features, and you want to do a big release with all of them. You sit down one weekend and merge in your own feature branch, and then tell everyone else to do so too. But it's a hobby project; the other guys aren't consistently available, maybe they need two weekends to integrate and test when they're merging their work with everyone else's, and they don't have time during the weekdays.
So, the dev branch sits there for 2-3 weeks gradually acquiring features (and people testing integration too, hopefully, with any fixes that emerge from that). But then you discover a bug in the currently live version, either from people using it or even from the integration work, and you want that fix live during the week (specific example: there's a rare but consistent CTD in a game mod, you do not want to leave that in for several weeks). Well, if you have a branch reflecting the live status you can put your hotfix there, do a release, and merge the hotfix into dev right away.
Speaking of game mods, that also gives you a situation where you have a hard dependency on another project: if they do a release in between your mod's releases, you might need to drop a compat hotfix ASAP, and you want a reflection of the live code where you can do that, knowing you will always have a branch that works with the latest version of the game. If your main branch has multiple people's work on it, in progress, that differs from what's actually released, you're going to get a mess.
And sure you could do just feature branches and merge feature branches one by one into each other, and then into main so you never have code-under-integration in a centralized place but... why not just designate a branch to be the place to do integration work?
You could also merge features one by one into the main branch, but again, imagine the mod case: if the main code needs some update X for compatibility with a game update, why do that update on every feature branch and expect every contributor to do that work? Much better to merge a feature in when the feature is done, and if you're waiting on other features, centralize the work of keeping in step with main (and the dependency) in one place. Especially relevant if your feature contributors are volunteers who probably wouldn't have the time to keep up with changes if it takes a few weeks before they can merge in their code.
If this pattern is so pervasive, and so many people care enough to attempt to explain it to you, yet you remain unconvinced, I’m not sure how you reach the conclusion that you are right, and correct, and that it’s such a shame that the world does not conform to how you believe that things should be.
Besides a bit of a puritan argument about “git gods”, you haven’t really justified why this matters at all, let alone why you care so much about it.
On the other hand, the model that you are so strongly against has a very easy to understand mental model that is analogous to real-world things. What do you think that the flow in git flow is referring to?
I’m sorry that you find git flow so disgusting but I think your self-righteousness is completely unjustified.
It's funny how big an impact individual developers can have with seemingly simple publications. Around the time the article with that diagram was released, I was changing jobs, and I distinctly remember that the diagram was extensively discussed and compared to company standards at both the old and the new place.
Is this not a good example of how generative AI does copyright laundering? Suppose the image was AI generated and it did a bad copy of the source image that was in the training data, which seems likely with such a widely disseminated image. When using generative AI to produce anything else, how do you know it's not just doing a bad-quality copy-paste of someone else's work? Are you going to scour the internet for the source? Will the AI tell you? What if code generation is copy-pasting GPL-licensed code into your proprietary codebase? The likelihood of this, the lack of a way to easily know it's happening, and the risks it causes seem to me to be overlooked amidst all the AI hype. And generative AI is a lot less impressive if it often works as a bad-quality copy-paste tool rather than the galaxy-brain intelligence some like to portray it as.
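For images, at least, there's one crude check you could imagine: compare a perceptual hash of the generated image against a suspected original. A rough sketch using the third-party Pillow and imagehash libraries (the filenames here are hypothetical):

    from PIL import Image
    import imagehash

    # Perceptual hashes tolerate rescaling and mild distortion,
    # so a small Hamming distance suggests a near-duplicate.
    original = imagehash.phash(Image.open("nvie_branching_model.png"))  # hypothetical file
    suspect = imagehash.phash(Image.open("learn_msft_diagram.png"))     # hypothetical file

    print(original - suspect)  # Hamming distance between the hashes; near 0 = likely the same image

But an AI redraw like this one can easily land outside any sensible threshold, and you'd need a candidate original in hand to begin with, which is exactly the problem: there's no general way to know.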
There are countless examples. I often think about the fact that the Google search AI is just rewording news articles from the search results; when you look at the source articles, they make exactly the same points as the AI answers.
So these services depend on journalists to continuously feed them articles, while stealing all of the viewers by automatically copying every article.
Of course Google has a history of copying articles in whole (cf. Google Cache, eventually abandoned).
I actually often have the opposite problem. The AI overview will assert something and give me dozens of links, and then I'm forced to check them one by one to try to figure out where the assertion came from, and, in some cases, none of the articles even say what the AI overview claimed they said.
I honestly don't get it. All I want is for it to quote verbatim and link to the source. This isn't hard, and there is no way the engineers at Google don't know how to write a thesis with citations. How did things end up this way?
ChatGPT was a research prototype thrown at end users as a "product".
It is not a carefully designed product; ask yourself "What is it FOR?".
But the identification of reliable sources isn't as easy as you may think, either. A chat-based interaction really only makes sense if you can rely on every answer; otherwise the user is misled and the conversation may go in a wrong direction. The previous search paradigm ("ten snippets + links") did not project the confidence that the chat paradigm does, confidence that, it turns out, is not grounded in truth.
If you actually care about having that sort of discussion, I'd suggest a framing that doesn't paint anyone who doesn't agree with you as succumbing to AI hype and believing it has "galaxy brain intelligence". Please ditch this false dichotomy. At this point, in 2026, it's tiring.
Is there a single thing that Microsoft doesn't half-ass? Even if you wanted to AI-generate a graph, how hard is it to go into Paint or something and fix the text?
I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It's annoying because exFAT is basically the only filesystem used on SD cards, since it's about the only one that's compatible with everything.
It feels like everything Microsoft does is like that though; superficially fine until you get into the details of it and it’s actually broken, but you have to put up with it because it’s used everywhere.
Here's the page with the diagram on it:
https://web.archive.org/web/20250908220945/https://learn.mic...
Please let morged become a thing.
A mix between merged, morphed, and morgue. I love it. Should be nominated as word of 2026.
Satya yelled "it's morgin' time" and then morged all over the place.
If you've got the tiന്ന, we've got the morge.
When I read the title, I thought "morg" was one of those goofy tech words that I had missed but whose meaning was still pretty clear in context (like a portmanteau of "Microsoft" and "borged," the latter of which I've never heard as a verb but still works). I guess it's a goofy tech word now.
> Till next 'tim'
It took me a few times to see the morged version actually says tiന്ന
For the curious:
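One way to see what those trailing glyphs actually are, in plain Python:

    import unicodedata

    for ch in "tiന്ന":
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

    # U+0074 LATIN SMALL LETTER T
    # U+0069 LATIN SMALL LETTER I
    # U+0D28 MALAYALAM LETTER NA
    # U+0D4D MALAYALAM SIGN VIRAMA
    # U+0D28 MALAYALAM LETTER NA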
(The "pypyp" package, by Python core dev and mypy maintainer Shantanu Jain, makes this easier:)This is why we don't use diffusion style models for diagrams or anything containing detailed typography.
An LLM driving mermaid with text tokens will produce infinitely more accurate diagrams than something operating in raster space.
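For example, Mermaid's gitGraph takes plain text in and renders exact labels out; a hand-written sketch of the kind of output meant here:

    %% labels are text tokens, so they can't get "morged" in raster space
    gitGraph
        commit id: "init"
        branch develop
        commit id: "feature work"
        checkout main
        commit id: "hotfix"
        checkout develop
        merge main
        checkout main
        merge develop tag: "v1.1"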
A lot of the hate being generated seems to be due to really poor application of the technology. Not evil intent or incapable technology; bad engineering. Not understanding when to use PNG vs JPEG, that kind of thing.
Good example of the fact that LLMs, at their core, are lossy compression algorithms that are able to fill in the gaps very cleverly.
Something tangential...
> people started tagging me on Bluesky and Hacker News
Never knew tagging was a thing on Hacker News. Is it a special feature for crème de crème users?
Don't think so; I expect they just mean replying to comments to mention it, or they posted another article and people commented about seeing this and isn't it from another article of yours, etc.
People still try to use @user all the time, even though it doesn't work.
> take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.
I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.
Waiting for the LLM evangelists to tell us that their box of weights of choice did that on purpose to create engagement as a sentient entity understanding the nature of tech marketing, or that OP should try again with quatuor 4.9-extended (that really ships AGI with the $5k monthly subscription addon) because it refactored their pet project last week into a compilable state, after only boiling 3 oceans.
Glorp 5.3 Fast Thinking actually steals this diagram correctly for me locally so I think everyone here is wrong
I may have a new favorite HN comment.
https://news.ycombinator.com/favorites?id=Balinares says you don't
Using an LLM to generate an image of a diagram is not a good idea, but you can get really good results if you ask it to generate a draw.io SVG (or a Miro diagram through their MCP).
I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
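In the same spirit, a minimal way to try that with the Claude Code CLI (the file name here is hypothetical):

    # -p / --print: run one prompt non-interactively and print the result
    claude -p "Read src/pipeline.py and output a Mermaid flowchart of its process"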
It's Microsoft's AI though; not even the totally crazed evangelists like that one.
You're holding the LLM wrong.
I'm glad I actually checked TFA before asking here if "morging" referred to some actual technical concept I hadn't previously heard of.
If we are here, lets at least coin it for something relevant!
Billions must morge
Developors, developors, developors, developors!
Archive.org shows this went live last September: https://web.archive.org/web/20250108142456/https://learn.mic...
It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.
How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.
So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.
AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.
It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.
If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?
You are assuming: A) that everyone who saw this would go as far as to post publicly about it (and not just chuckle / send it to their peers privately), and B) that any post about this would reach you/HN and not potentially be lost in the sea of new content.
If you work in a medium-to-large company, you know most of the documentation is there for compliance reasons, or for showing others that you did something at one point. You could probably put slop at the end of documents, while keeping the headlines relevant, and no one would ever read or notice it.
> readers don’t have time to read properly
> So we end up shipping documentation that nobody really reads
I'd note that the documentation may have been read and noticed as flawed, but some random person noticing that it's flawed is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it, because I'm busy trying to figure out how to actually accomplish the goal for which I was reading the documentation, rather than stopping what I'm doing to complain about how bad the documentation is.
This says nothing to absolve everyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.
I usually would post it in our dev Slack channel and rant for a message or two about how many hours were lost "reverse-engineering" bad documentation. But I probably wouldn't post about it here or on Bluesky.
From TFA:
> the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will just be more and more content like this that isn't so well-known or soon will get mutated or disguised in more advanced ways that this plagiarism no longer will be recognizable as such.
Most content will be less known and the ensloppified version more obfuscated... the author is lucky to have such an obvious association. Curious to see if MSFT will react in any meaningful way to this.
Edit: typo
> Most content will be less known and the enslopified version more obfuscated...
Please everyone: spell 'enslopified', with two 'p's - ensloppiified.
Signed, Minority Report Pedant
And 3 'i's?
>What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own.
"Don't attribute to malice what can be adequately explained by stupidity". I bet someone just typed into ChatGPT/Copilot, "generate a Git flow diagram," and it searched the web, found your image, and decided to recreate it by using as a reference (there's probably something in the reasoning traces like, "I found a relevant image, but the user specifically asked me to generate one, so I'll create my own version now.") The person creating the documentation didn't bother to check...
Or maybe the image was already in the weights.
In this case, we can chalk it up to malicious stupidity. Someone posting a reference aimed at learners, especially with Microsoft's reach and name recognition, has a responsibility to check the quality and accuracy of the materials. Using an AI tool doesn't absolve that responsibility one bit.
“It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.”
Seems to be perfectly on brand for Microsoft, I don’t see the issue.
LLM infested crap, directly pushed to customers without any pushback
so standard Microslop
Developer BRUTALLY FRAME-MORGED by Microsoft AI
Morged > Oneshotted
That old beautiful git branching model got printed into the minds of many. No other visual is going to replace it. The flood of 'plastic' incarnations of everything is abominable. Escape to jungles!!
Indeed. I don't remember all the details of the flow but the aesthetics of the diagram are still stuck in my head.
Sorry, but isn't this textbook Microsoft? Aside from being more blatant, careless, and on the nose, what's different from past Microsoft?
These people distilled the knowledge of AppGet's developer to create the same thing from scratch, and then "thanked(!)" him for being that naive.
Edit: Yes, after experiencing Microsoft for 20+ odd years, I don't trust them.
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently.
That pretty much describes Microsoft and all they do. Money can't buy taste.
He was right:
https://www.youtube.com/watch?v=3KdlJlHAAbQ
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.
LOL, I disagree. It's very on brand for Microslop.
I guess this image generation feature should never have been continvoucly morged back into their slop machine
I can already tell this is probably some AI Microslop fuck up without even clicking on the article.
EDIT: Worse than I thought! Who in their right mind uses AI to generate technical diagrams? SMDH!
The new Head of Quality at Microsoft has not started working there yet, so it's business as usual at MS... and now with AI slop on top.
Ref: https://www.reddit.com/r/technology/comments/1r1tphx/microso...
So they will just get better at publicly dismantling such cases and doing much better damage control, in PR only. The "Q" in "Microsoft" stands for "quality".
Hey, it's just like the Gas Town diagrams.
https://news.ycombinator.com/item?id=46746045
I love it when the LLM said "it's morgin' time" and proceeded to morg all over the place.
One step closer to the Redditification of HN. And it is entirely because of the content out there nowadays.
Ha, I think a user since 2007’s earned the right to do that once in a while.
Maybe you're missing the reference to the Morbius movie joke, which sounds surprisingly fitting. It's not like older HNers never made funny references.
Edit: Apparently you didn't.
The commenter you're responding to a) independently made the exact same reference; b) has a username like that of Jared Leto's other Disney tentpole flop role...
Well spotted, I guess they're pushing for HN's redditification then.
HN is a Serious Place. We're here to make money. Please leave your jokes at home.
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy.
lmao where has the author been?! this has been the quintessential Microsoft experience since windows 7, or maybe even XP...
I propose to adopt the word "morge", a verb meaning "use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work".
A noun for such a piece of slop could be "morgery".
I read through all the proposals in this discussion and I like yours the best out of them.
Seconded!
Everything you publish from now on will be stolen and reused one way or another.
On the one hand, I feel for people who have their creations ripped off.
On the other hand, it makes sense for Microsoft to rip this off, as part of the continuing enshittification of, well, everything.
Having been subjected to GitFlow at a previous employer, after having already done git for years and version control for decades, I can say that GitFlow is... not good.
And, I'm not the only one who feels this way.
https://news.ycombinator.com/item?id=9744059
It seems to me rather less likely that someone at Microsoft knowingly and deliberately took his specific diagram and "ran it through an AI image generator" than that someone asked an AI image generator to produce a diagram with a similar concept, and it responded with a chunk of mostly-memorized data, which the operator believed to be a novel creation. How many such diagrams were there likely to have been, in the training set? Is overfitting really so unlikely?
The author of the Microsoft article most likely failed to credit or link back to his original diagram because they had no idea it existed.
Yes, but from OP's perspective this is a distinction without a difference.
Looks like a vendor, and we have a group doing a post-mortem now, trying to figure out how it happened. It'll be removed ASAFP.
https://www.urbandictionary.com/define.php?term=Morged I got nothing...
Check the article: the AI reinterpreted the phrase "continuously merged" as "continvoucly morged".
I too was confused until I looked at the included screenshot.
This is just another reminder that powerful global entities are composed of lazy, bored individuals. It’s a wonder we get anything done.
we are also stressed, scared for our jobs and bombarded by constant distraction
You apparently did not read the article. "Morged" is a word the LLM that ripped off the article author's diagram hallucinated.
> You apparently did not read the article.
Please don't say things like this in comments (see https://news.ycombinator.com/newsguidelines.html).
I don't think "LLM" and "hallucinated" are accurate; different kinds of AI create images, and I get the impression that they generally don't ascribe semantics to words in the same way that LLMs do, and thus when they draw letter shapes they typically aren't actually modelling the fact that the letters are supposed to spell a particular word that has a particular meaning.
A somewhat contrarian perspective: this diagram is so simple, so widely used, and has been reproduced (i.e. redrawn) so many times that it is very easy to assume it does not have a single origin and is public domain.
That's pretty hard to reconcile with OP's claim:
> In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it.
If you mean that the Microsoft publisher shouldn't be faulted for assuming it would be okay to reproduce the diagram... then said publisher should have actually reproduced the diagram instead of morging it.
it's not public domain, it's copyrighted
what's the bet that the intention here was explicitly to attempt to strip the copyright
so it could be shoved on the corporate website without paying anyone
(the only actual real use of LLMs)
I'm failing to understand the criticism here
Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?
Or is it about using some graphs without attributing their authors?
If it's the latter (even if partially), then I have to disagree with that angle. A very widespread model surely isn't owned by anyone; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.
(Sidenote: if it was meant in a lighthearted way then I can see it making sense.)
did you read the article? this is explicitly explained! at length!
not at all about the reuse. it's been done over and over with this diagram. it's about the careless copying that destroyed the quality. nothing was wrong with the original diagram! why run it through the AI at all?
> Other than that, I find this whole thing mostly very saddening. Not because some company used my diagram. As I said, it's been everywhere for 15 years and I've always been fine with that. What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond "generating content"?
I mean come on – the point literally could not be more clearly expressed.