BTW https://deuxfleurs.fr/ is one of the most beautiful websites I have ever seen
Looks interesting for something like local development. I don't intend to run production object storage myself, but some of the stuff in the guide to the production setup (https://garagehq.deuxfleurs.fr/documentation/cookbook/real-w...) would scare me a bit:
> For the metadata storage, Garage does not do checksumming and integrity verification on its own, so it is better to use a robust filesystem such as BTRFS or ZFS. Users have reported that when using the LMDB database engine (the default), database files have a tendency of becoming corrupted after an unclean shutdown (e.g. a power outage), so you should take regular snapshots to be able to recover from such a situation.
It seems like you can also use SQLite, but a default database that isn't robust against power failure or crashes seems surprising to me.
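If it helps anyone: the engine switch looks like a one-line change in garage.toml. A minimal sketch, assuming the option names in the current config reference are right (db_engine, plus a metadata_fsync flag that trades write throughput for crash safety) -- paths here are placeholders:

    # garage.toml (sketch -- paths are placeholders)
    metadata_dir = "/var/lib/garage/meta"
    data_dir = "/var/lib/garage/data"

    # "sqlite" instead of the default "lmdb" if unclean-shutdown
    # corruption worries you; slower, but SQLite is famously crash-tested.
    db_engine = "sqlite"

    # Fsync metadata on write. Off by default for performance; if I'm
    # reading the docs right, turning it on is the suggested mitigation
    # for the corruption described above.
    metadata_fsync = true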
I've been using Minio for local dev, but that version is unmaintained now. However, I was put off by the minimum requirements for Garage listed on the page -- does it really need a gig of RAM?
The latest Minio release that works for us for local development is now almost a year old, and soon enough we will have to upgrade. Curious what others have replaced it with that is as easy to set up and has a management UI.
That's not something you can do reliably in software. Datacenter-grade NVMe drives come with power-loss protection and additional capacitors to handle it gracefully; without that, if power is cut at the wrong moment, the partition may not be mountable afterwards.
If you really live somewhere with frequent outages, buy an industrial drive that has a PLP rating. Or get a UPS; they tend to be cheaper.
Isn't that the entire point of write-ahead logs, journaling file systems, and fsync in general? A roll-back or roll-forward due to a power loss causing a partial write is completely expected, but surely consumer SSDs wouldn't just completely ignore fsync and blatantly lie that the data has been persisted?
As I understood it, the capacitors on datacenter-grade drives are there to give the drive more flexibility: the drive can acknowledge a write while the data is still in its cache, because the capacitor guarantees the write will finish even on power loss. For all intents and purposes the data has been persisted, so an fsync can return without waiting on the actual flash, which greatly improves performance. Have I just completely misunderstood this?
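To make the contract concrete, here's the classic WAL-style pattern in Python (filename and payload invented); the whole argument above is about whether the drive honors the flush that fsync() triggers:

    import os

    # Append a record to a write-ahead log, then force it to stable
    # storage. fsync() flushes the OS page cache and asks the drive to
    # flush its write cache too; if a consumer drive acknowledges that
    # flush while the data is still only in volatile cache (no PLP), a
    # power cut loses a write the application was told is durable.
    fd = os.open("wal.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, b"BEGIN 42\npayload\nCOMMIT 42\n")
        os.fsync(fd)  # only now may we report the transaction as committed
    finally:
        os.close(fd)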
> ignore fsync and blatantly lie that the data has been persisted
Unfortunately they do: https://news.ycombinator.com/item?id=38371307
If the drives continue to have power, but the OS has crashed, will the drives persist the data once a certain amount of time has passed? Are datacenters set up to take advantage of this?
Is it the same consistency model as S3? I couldn't see anything about it in their docs.
Seeing a ton of adoption of this after the Minio debacle
https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.
RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.
Anyone have any advice for swapping this in for Minio?
From what I have seen in previous discussions here (before and since the Minio debacle) and at work, Garage is a solid replacement.
I have not tried either myself, but I wanted to mention that Versity S3 Gateway looks good too.
https://github.com/versity/versitygw
I am also curious how Ceph S3 gateway compares to all of these.
> but for entirely non-technical reasons we had to exclude it
Able/willing to expand on this at all? Just curious.
Not the same person you asked, but my guess would be that it is seen as a Chinese product.
RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884
I'm not sure it even has any sort of cluster consensus algorithm. I can't imagine it not eating committed writes in a multi-node deployment.
Garage and Ceph (well, radosgw) are the only open source S3-compatible object storage which have undergone serious durability/correctness testing. Anything else will most likely eat your data.
What is this based on? Honest question, as I don't get that impression from the landing page. Are many committers China-based?
https://rustfs.com.cn/
> Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing
> Beijing ICP Registration No. 2024061305-1
Oh, I misread the initial comment and thought they had to exclude Garage. Thanks!
I love garage. I think it has applications beyond the standard self host s3 alternative.
It's a really cool system for hyper converged architecture where storage requests can pull data from the local machine and only hit the network when needed.
One really useful usecase for Garage for me has been data engineering scripts. I can just use the S3 integration that every tool has to dump to garage and then I can more easily scale up to cloud later.
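Concretely, the whole "dump locally, scale to cloud later" move is just pointing the client at a different endpoint. A sketch with boto3 (the endpoint is Garage's default S3 API port; keys and bucket names are placeholders for whatever your node is actually configured with):

    import boto3

    # Point a stock S3 client at a local Garage node. Moving the same
    # script to a cloud provider later is just swapping endpoint_url
    # and credentials.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:3900",   # Garage's default S3 API address
        aws_access_key_id="GK_PLACEHOLDER",     # placeholder key from the garage CLI
        aws_secret_access_key="SECRET_PLACEHOLDER",
        region_name="garage",                   # Garage's default region name
    )
    s3.put_object(Bucket="etl-scratch", Key="runs/2024-01-01.parquet", Body=b"...")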
No erasure coding seems like a pretty big loss in terms of how many resources you need to get good resiliency & efficiency.
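For rough numbers (generic replication-vs-coding arithmetic, not Garage-specific figures): 3-way replication stores 3 bytes per byte of data and tolerates losing two copies, while an 8+4 Reed-Solomon layout stores (8+4)/8 = 1.5 bytes per byte and tolerates losing any four shards. The coded layout is twice as space-efficient for comparable durability; the costs are repair traffic and read complexity, which is presumably why Garage sticks to plain replication.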
Wasn't expecting to see it hosted on Forgejo. Kind of a breath of fresh air, to be honest.
Does this support conditional PUT (If-Match / If-None-Match)?
Unfortunately, this doesn't support conditional writes through If-Match and If-None-Match [0] and thus is not compatible with ZeroFS [1].
[0] https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1052
[1] https://github.com/Barre/ZeroFS
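For reference, this is the kind of call that currently fails against Garage. A sketch with boto3 (recent versions expose S3's conditional-write parameters; endpoint, bucket, and key are placeholders):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3", endpoint_url="http://localhost:3900")  # placeholder endpoint

    try:
        # Create-only-if-absent: the PUT succeeds only if no object
        # exists under this key -- the primitive that ZeroFS-style
        # systems use to build locks/leases on top of S3.
        s3.put_object(Bucket="locks", Key="leader", Body=b"node-a", IfNoneMatch="*")
    except ClientError as e:
        if e.response["Error"]["Code"] == "PreconditionFailed":  # HTTP 412
            print("lost the race: another writer created the object first")
        else:
            raise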
https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main-...
This is the reliability question, no?