It's nice that people are taking this up, and one of the main benefits of open source in the first place. I have my doubts that this will succeed if it's just one guy, but maybe it takes on new life this way and I would never discourage people from trying to add value to this world.
That said I increasingly have a very strong distaste of these AI generated articles. They are long and tedious to read and it really makes me doubt that what is written there is actually true at all. I much prefer a worse written but to the point article.
I agree completely. I know everyone is tired of AI accusations but this article has all of the telltale signs of LLM writing over and over again.
It’s not encouraging for the future of a project when the maintainer can’t even announce it without having AI do the work.
It would be great if this turns into a high effort, carefully maintained fork. At the moment I’m highly skeptical of new forks from maintainers who are keen on using a lot of AI.
An app that basically reimplements a well-documented and well-tested API is the best possible use case for AI development.
I have nothing against a skilled maintainer with attention to detail using AI tools for assistance.
The important part is the human who will do more than just try to get the LLM to do the hard work for them, though. Once software matures the bugs and edge cases become more obscure and require more thoughtful input. AI is great at getting things to some high percentage of completeness, but it takes a skilled human to keep it all moving in the right direction.
I would cite this blog post as an example of lazy LLM use: It's over-dramatic, long, retains all of the poor LLM output styling that most human editors remove, and suggests that the maintainer isn't afraid to outsource everything to the LLM.
I'll plug that Chainguard has been maintaining a fork for awhile and seems to have a history with supporting forks like this: https://github.com/chainguard-forks/minio
For a web GUI, I had been using this project: https://github.com/huncrys/minio-console
I switched to rustfs this week though and am not looking back. I'd recommend it to others as well for small scale usage. It's maturing rapidly and seems promising.
Do note rustfs has had a...questionable...security posture. See https://github.com/rustfs/rustfs/security/advisories/GHSA-h9... as a good example (hardcoded static token).
The README doesn't seem to have been updated; is there an updated Docker image to use now?
Edit:
https://hub.docker.com/r/pgsty/minio
From the OP's link
> MinIO as an S3-compatible object store is already feature-complete. It’s finished software.
I don't see how these two lines can be written together.
The goal is either to remain S3-compatible or to freeze the current interface of the service forever.
As it stands this fork's compatibility with S3, and with the official MinIO itself, will break as soon as one of them pushes an API update. Which works fine for existing users, maybe, but over time as the projects drift further apart no new ones will be able to onboard.
The S3 API is quite stable and most new features are opt-in (e.g. ApplyIfModified) or auxiliary (e.g. S3Tables). It’s highly unlikely that S3 proper will break backwards compatibility for clients with any future API change. So if all you need is basic object storage that works with existing S3 clients, then MinIO is enough. The fork just needs to keep CVEs patched and maintain community hygiene (accept new PRs for small bug fixes, etc.). And as the author points out, this is much easier in the age of AI than it might have been previously.
Maybe the author isn't aware that Chainguard is going to keep patching MinIO for CVEs:
https://www.chainguard.dev/unchained/secure-and-free-minio-c...
You wouldn't get the other changes in this post (e.g., restoring the admin console) but that's a bit orthogonal.
Also see Garage S3: https://garagehq.deuxfleurs.fr/
Moved to Garage, it's actually pretty easy to run and use.
Would be even nicer if the official Docker image would support initializing a default bucket and access key from env variables instead of having to exec into the container and follow https://garagehq.deuxfleurs.fr/documentation/quick-start/ but that's not a dealbreaker.
Note: I only needed the single-node install, it was either this or SeaweedFS. Also used MinIO and Zenko in the past, but even the latter seems pretty much dead.
> A company that raised $126M at a billion-dollar valuation spent five years methodically dismantling the open-source ecosystem it built.
Sounds like Puppet's story. $180M raised, ~$1B valuation ca. 2019, sold to Perforce in 2022, public repo taken private and builds commercialized by Perforce in 2024, community fork shipped early 2025.
I never understood why one would use MinIO over Ceph for serious (multi-node) use. Sure, it might be easier to set up initially, but Ceph would be more likely to work.
For the single node use-case, I'm working on https://github.com/uroni/hs5 . The S3 API surface is large, but at this point it covers the basics. By limiting the goals I hope it will be maintainable.
Seems like a very balanced take on forking Minio. I don't have high hopes for the future of Minio, but as mentioned it is more or less feature-complete, good enough for most use cases.
I was searching for a fairly simple replacement for S3 for testing. I'd been using Minio for a while now, and simply ended up implementing my own on top of Postgres. Fun intersection given the post. (Note, I know it isn't optimal, but as I always have Postgres available it fits well, and I don't have high storage needs, just the API compatibility.)
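For a sense of how small such a shim can be, here is a minimal sketch of an SQL-backed S3-style store, using sqlite3 as a stand-in for Postgres. All class and method names here are illustrative, not the commenter's actual implementation:

```python
import hashlib
import sqlite3

class SqlObjectStore:
    """Toy S3-style object store backed by a single SQL table."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS objects ("
            "  bucket TEXT NOT NULL,"
            "  key    TEXT NOT NULL,"
            "  data   BLOB NOT NULL,"
            "  etag   TEXT NOT NULL,"
            "  PRIMARY KEY (bucket, key))"
        )

    def put_object(self, bucket, key, data: bytes) -> str:
        # S3 uses the MD5 hex digest as the ETag for simple (non-multipart) uploads.
        etag = hashlib.md5(data).hexdigest()
        self.db.execute(
            "INSERT OR REPLACE INTO objects VALUES (?, ?, ?, ?)",
            (bucket, key, data, etag),
        )
        self.db.commit()
        return etag

    def get_object(self, bucket, key) -> bytes:
        row = self.db.execute(
            "SELECT data FROM objects WHERE bucket = ? AND key = ?",
            (bucket, key),
        ).fetchone()
        if row is None:
            raise KeyError(f"NoSuchKey: {key}")
        return row[0]

    def list_objects(self, bucket, prefix=""):
        # S3's ListObjectsV2 returns keys in lexicographic order.
        # (LIKE-escaping of % and _ in prefixes is skipped in this sketch.)
        rows = self.db.execute(
            "SELECT key FROM objects WHERE bucket = ? AND key LIKE ? ORDER BY key",
            (bucket, prefix + "%"),
        ).fetchall()
        return [r[0] for r in rows]
```

A real version would of course sit behind an HTTP handler implementing the S3 request/response XML and SigV4 auth, but the storage layer really is about this simple when a database is already on hand.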
For our needs at work (~100TB), buying Pure Storage flash arrays (hardware, software, onsite support) worked out cheaper than MinIO licensing alone.
It is an interesting time we're in right now where buying physical hardware and support is cheaper than a license.
Same goes for AWS markup on rented hardware. ;)
Man I sometimes miss having physical servers.
I've been using garage without issue
I considered it a while ago, but I wasn't totally clear on Read-After-Write. Which was the primary reason why I choose to just implement my own for testing.
I'll probably give GarageHQ a more serious look again.
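Read-after-write is at least easy to probe empirically: write an object, immediately read it back, and compare. A hedged sketch of such a probe; `client` is anything exposing a put/get pair (a thin wrapper over boto3 or another S3 SDK in practice), and the in-memory `FakeClient` is purely for illustration:

```python
import os

def check_read_after_write(client, bucket, key="raw-probe"):
    """PUT a random payload, then immediately GET it back.

    Returns True if the read observes the just-written bytes,
    which is what strong read-after-write consistency guarantees.
    A single pass can't prove consistency, only catch violations,
    so a real probe would loop this many times under concurrency.
    """
    payload = os.urandom(16)
    client.put(bucket, key, payload)
    return client.get(bucket, key) == payload

class FakeClient:
    """Stand-in S3 client; trivially read-after-write consistent."""

    def __init__(self):
        self.objects = {}

    def put(self, bucket, key, data):
        self.objects[(bucket, key)] = data

    def get(self, bucket, key):
        return self.objects[(bucket, key)]
```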
Wish the effort well. I had plans to self-host S3 with MinIO that took some time to actually get around to, and when I did they had done the enterprise rug pull. I do think one maintainer may be able to pull it off with AI assistance if the scope is limited to security bug fixes. MinIO is one of the nastiest rug pulls I can think of.
There are 3 new commits, and the only actual fixes are a Go update and a revert to an earlier version of the console.
But there are a bunch of changes to docs, CI workflows and issue templates. That is the easy part of managing a fork, and I've seen a bunch of forks that ended up only updating READMEs, CI, etc.
I'll have more faith in the fork when the maintainers do actual fixes.
Although, to be fair, getting too aggressive off the bat would be concerning. A clean fork that is bit-for-bit compatible with the last open source version is definitely an attractive proposition from a software supply chain perspective.
I am wondering if Minio Inc has rewritten the software in a clean room. Otherwise wouldn't they need to publish the source anyways? Since it is AGPL anyone might potentially be interacting with the software. Do they do that?
The copyright for Minio consists of:
- Code written by the Minio team, which they have full ownership of and can relicense as they wish
- Code written by third party contributors, where Minio required the contributors to grant Minio a BSD license to use the contributions, while publishing them to everyone else only under the AGPL.
So the AGPL doesn't bind Minio themselves because of their licensing policy. (Which is why while pure AGPL might be the open source maximalist license, AGPL + CLA is almost at the opposite end of the scale)
The FSF position is that the GPL is unenforceable without a single copyright owner, which is why almost all GNU, Linux, Canonical/Red Hat/etc. projects have a CLA or something functionally similar.
That would seem a bizarre position from the FSF, since it would make the license on combined GPL works unenforceable. Do you have a source for that?
Question: can MinIO the company assert AGPL copyright against the fork? I see in the writeup they mentioned trademarks as far as the fork is concerned.
What's the situation for an AGPL fork? Were one to use it, can the company assert rights like they did against Nutanix?
As long as the fork complies with the terms of the AGPL, Minio can't stop them from using the code. As the article acknowledges, they could potentially rely on trademarks to make them rename it.
Doesn't that depend on the CLA?
Could you not have a CLA that only allows the project to use a specific license?
You could, but the reason that companies ask for CLAs is to free themselves from that restriction.
If Minio just wanted to use the changes under AGPL, the contributor could just license them under AGPL, no CLA needed.
Was going to mention the CLA. Each time you sign a CLA you're doing free work. Never do that. Keep and maintain your patches locally instead.
Sometimes that’s far more work than it’ll ever be worth.
If I get my patches upstream, then I don’t have to waste time reintegrating patches and rebuilding packages when I could instead be doing productive things.
Only if they'd taken contributions without authors signing over their rights.
Surely MinIO dual-licenses its software so paying customers get commercial license?
Edit: Never mind, I comment before reading the whole post.
That's the subject of this blog post.
AGPL is a plague. So many possible partners or future stewards won't touch anything MinIO with a ten-foot pole, unfortunately.
And I say this because MinIO started to actively lean on the ugly parts of the license.
AGPL is "a plague" by design (viral). It has the explicit goal that any improvements flow back to the community project and the virality is a necessary building block for this. It is an elegant solution to a tragedy of the commons problem.
Companies like MinIO, by extending the virality beyond the single software/work even though the license does not intend that, give it a bad reputation. They have fixed https://min.io/compliance now, but I guess it does not matter anymore.
It’s nice to see people taking this on, but for a project like this I’d prefer to wait and see if the maintenance continues.
This blog post is extremely heavy on LLM written content, which isn’t a promising early sign
> Normally this is where the story ends — a collective sigh, and everyone moves on.
> But I want to tell a different story. Not an obituary — a resurrection.
I’ve seen several announcements of forked open source projects from people who thought that maintaining a fork is easy now that they can have an LLM do all the work. Then their interest trails off when they encounter problems the AI can’t handle for them or the community tires of doing all of the testing and code review for a maintainer who just wants to prompt the LLM and put their name on the project. When someone can’t even write their own announcement without an LLM it’s not an encouraging sign.
How could the company behind minio not see this coming?
Seeing what coming? They pivoted into storage for AI; a lone maintainer is no threat to their business model.
Still, I would probably abandon the name for trademark enforcement reasons. It's low hanging fruit for them if they want to kill you.
(This is also why the Pentium was called the Pentium instead of the numbers that processors used to be called, and why the Nintendo logo was embedded into Game Boy ROMs.)
I very much appreciate the sentiment, and wish him well. However, one guy maintaining a fork as a side project from his core work is not very promising.
He seems to believe AI will help lessen the burden. I hope he's able to find other maintainers.
Best of luck!
To be fair, most open source is like that.
The most famous one I can think of right now is xz.
A vastly less complex project whose maintainer burned out. You're not wrong, but this only underlines how unsustainable this is.
Yeah, I agree.
But we have to rally around something.
This had a ton of LLM-ese in it, so here's an LLM explaining it. I read it, agreed, then read it again for LLM-ese, then shared it. I recommend this pattern when using LLMs. Especially when claiming you'll replicate the role of a nine-figure company with an LLM.
LLM generated TL;DR: The factual sections read like a real person who knows what they're doing. The rhetorical flourishes read like someone pasted their draft into Claude and said "make it more compelling." The work deserves better than the prose it got.
LLM output given "<DOC>X</DOC> Identify parts written by an LLM"
Here are the passages that read as LLM-generated rather than naturally written:
*Overwrought dramatic pivots (LLMs love the "Not X — Y" antithesis):*
- "Not an obituary — a resurrection."
- "Not 'unmaintained' — officially, irreversibly, done."
- "That demand doesn't disappear — it just finds its way out."
*Explicitly labeling rhetoric that should speak for itself:*
- "The ironic part:" — just show the irony, don't announce it.
- "The consensus in the international community is clear:" — "international community" is overbearing. "is clear" is LLM throat-clearing.
- "That's the beauty of open-source licensing by design" — "That's the beauty of" is a hallmark LLM filler phrase.
*Grandiose one-liners that try too hard:*
- "git clone is the most powerful spell in open source."
- "a digital tombstone"
- "If December was the clinical death, this February commit was the death certificate." — the metaphor was already established in the heading; extending it here is overworked.
*LLM vagueness / filler:*
- "Things are different now." — says nothing.
- "Consider:" as a standalone transition into the Elon/Twitter example.
- "I believe the maintenance workload is manageable." — the hedging "I believe" adds nothing; just say it's manageable.
*Cliché deployment:*
- "the dragon-slayer has become the dragon" (in the related-article blurb)
- "Eating your own dog food is the best QA." — explaining the idiom ("dogfooding") one sentence before, then restating it as a maxim, is the LLM pattern of using a phrase and then making sure you understood it.
*The AI-hype paragraph is the worst offender:*
> "With tools like Claude Code, the cost of locating and fixing bugs in a complex Go project has dropped by *more than an order of magnitude*. What used to require a dedicated team to maintain a complex infrastructure project can now be handled by *one experienced engineer with an AI copilot*."
This reads like an LLM writing about itself — vague quantification ("order of magnitude"), the buzzword "copilot," and the utopian framing are all telltale. The Elon/Twitter analogy that follows ("Consider:") makes it worse, not better.
*Overall pattern:* The technical/factual sections (the timeline table, the build instructions, the console revert explanation) read like a real person. The editorializing and rhetorical flourishes — especially the intro, the "But Open Source Endures" section, and the "AI Changed the Game" section — are where the LLM voice creeps in most heavily.