Nice work, although this model is not very good. I tried a lot of different image-to-3D models, and the one from meshy.ai is the best; TRELLIS is in the useless tier. I really hope some good open-source models appear in this domain.
Hey, thanks for sharing this. TRELLIS.2 definitely has room to improve, especially on texturing.
From what I've seen personally, and from community benchmarks, it fares well on geometry and visual fidelity among open-source options, but I agree it's not perfect for every use case.
Meshy is solid; I used it to print my girlfriend a mini 3D model of her for her birthday last year!
Though it's worth noting it's a paid service whose free tier has usage limits, while TRELLIS.2 is MIT-licensed with unlimited local generation. Different tradeoffs for different workflows. Hopefully the open-source side keeps improving.
So much effort, but no examples in the landing page.
You're right; thanks for flagging this. Let me run something and push images.
Well done
rad. how long does output take? trellis is a fun model.
That’s always been possible with the MPS backend; the reason people choose to omit it in HF Spaces/demos is that HF doesn’t offer an MPS backend. People would rather have the thing work at best speed than 10x slower just for compatibility.
IMO TRELLIS.2 is a slightly different case from the HF-models scenario. It depends on five compiled CUDA-only extensions -- flex_gemm for sparse convolution, flash_attn, o_voxel for CUDA hashmap ops, cumesh for mesh processing, and nvdiffrast for differentiable rasterization. These aren't PyTorch ops that fall back to MPS -- they're custom C++/CUDA kernels. The upstream setup.sh literally exits with "No supported GPU found" if nvidia-smi isn't present. The only reason I picked this up is that I thought it was cool and no one was working on the open issue for Silicon back then (github.com/microsoft/TRELLIS.2/issues/74) requesting non-CUDA support.
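To illustrate the distinction above: built-in PyTorch ops register a kernel per backend, so the dispatcher can route them to MPS, while a custom extension that only ships CUDA kernels has nothing to fall back to. This is a hypothetical, simplified sketch of that dispatch idea; the tables and names here are illustrative, not real PyTorch internals.

```python
# Hypothetical sketch: per-backend dispatch for built-in ops vs. a
# CUDA-only custom extension. Kernel names are made up for illustration.

# Built-in ops like matmul register one kernel per backend, so the
# dispatcher can pick whichever one the current device supports:
matmul_kernels = {"cuda": "matmul_cuda", "mps": "matmul_mps", "cpu": "matmul_cpu"}

# A custom extension like flex_gemm ships only a CUDA kernel:
sparse_conv_kernels = {"cuda": "sparse_conv_cuda"}

def dispatch(kernels, device):
    """Return the kernel registered for `device`, or fail loudly."""
    try:
        return kernels[device]
    except KeyError:
        raise RuntimeError(f"no kernel registered for backend {device!r}")

print(dispatch(matmul_kernels, "mps"))   # built-in op: an MPS kernel exists
# dispatch(sparse_conv_kernels, "mps")   # would raise: no MPS kernel at all
```

So "falling back to MPS" isn't something the framework can do for these extensions; someone has to write (or port) non-CUDA kernels for each one.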
Are you saying the original one worked with MPS? Or are you just saying it was always theoretically possible to build what OP posted?
It’s always been possible, but it’s not possible because there’s no backend, and no one wants it to be possible because everyone needs it at 10x the speed of running on a Mac? I’m missing something, I think.
Nothing much here. WTF is this near number 1 on the front page of HN?
Good question.