My project, Lokalized (from 2017, in Java), has the same goal but took a different approach to the "little language" design. I'm guessing I had the same inspiration as the Fluent authors - existing solutions were just not expressive enough for the real world. Mentioning here because I'm always super interested in seeing how others approach the problem of effective i18n (it's a bit complex). Making Fluent more of a spec was the right call imo; I did not do that with my work.
https://lokalized.com
Have a look at my solution (also Java): https://github.com/resource4j/resource4j
I like the fluent API - this looks like a much better way to work with ResourceBundle types than Java's out-of-the-box support. Thanks for sharing.
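For comparison, here's roughly what the two styles look like side by side. The `Messages` facade below is a hypothetical sketch written purely for illustration, not resource4j's actual API (check its README for the real thing); the ResourceBundle/MessageFormat half is standard JDK.

```java
import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;

public class BundleComparison {
    public static void main(String[] args) {
        Locale locale = Locale.GERMANY;

        // Out-of-the-box JDK: look up the pattern, then format it separately.
        // Assumes a messages_de.properties file containing: greeting=Hallo, {0}!
        ResourceBundle bundle = ResourceBundle.getBundle("messages", locale);
        String raw = MessageFormat.format(bundle.getString("greeting"), "Welt");

        // A fluent wrapper (hypothetical, illustration only) collapses
        // lookup + locale + formatting into one readable chain.
        String fluent = Messages.in(locale).get("greeting").with("Welt");

        System.out.println(raw);
        System.out.println(fluent);
    }
}

// Minimal hypothetical fluent facade over ResourceBundle; resource4j's real
// API differs. This just shows why the chained style reads better.
class Messages {
    private final Locale locale;

    private Messages(Locale locale) { this.locale = locale; }

    static Messages in(Locale locale) { return new Messages(locale); }

    Formatter get(String key) {
        ResourceBundle bundle = ResourceBundle.getBundle("messages", locale);
        return new Formatter(bundle.getString(key), locale);
    }

    static class Formatter {
        private final String pattern;
        private final Locale locale;

        Formatter(String pattern, Locale locale) {
            this.pattern = pattern;
            this.locale = locale;
        }

        String with(Object... args) {
            return new MessageFormat(pattern, locale).format(args);
        }
    }
}
```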
I use this in one of my products. The biggest problem is the lack of tooling around it: there's nothing to verify that a file isn't broken or malformed, and tools like Poedit don't support it for translations.
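Even shallow validation would help a lot here. Below is a minimal sketch of the kind of lint I mean, assuming only Fluent's basic shapes (unindented `key = value` lines, a `-` prefix for terms, `{ }` placeables that must balance). A real tool would use an actual Fluent parser; this only flags gross breakage like a truncated file or a missing `=`.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Naive sanity check for an .ftl file. It knows two things about the
// format: unindented lines start a new "key = value" entry (terms begin
// with "-"), and "{" / "}" must balance across the whole file, since
// select expressions span multiple lines.
public class FtlLint {
    public static void main(String[] args) throws Exception {
        List<String> lines = Files.readAllLines(Path.of(args[0]));
        int depth = 0;
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i);
            if (line.isBlank() || line.startsWith("#")) continue;
            boolean indented = Character.isWhitespace(line.charAt(0));
            if (!indented && !line.matches("^-?[a-zA-Z][a-zA-Z0-9_-]*\\s*=.*")) {
                System.out.printf("line %d: expected 'key = value': %s%n", i + 1, line);
            }
            for (char c : line.toCharArray()) {
                if (c == '{') depth++;
                if (c == '}') depth--;
                if (depth < 0) {
                    System.out.printf("line %d: unexpected '}'%n", i + 1);
                    depth = 0;
                }
            }
        }
        if (depth != 0) System.out.println("unbalanced '{' somewhere in file");
    }
}
```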
This problem seems well suited to a tiny LLM.
In my experience, LLMs are terrific for most translation tasks, but you still need a way to encode the data (rules for genders, cardinalities, ordinalities, ...) for storage on disk or in a database, both for performance and for consistency/durability. So LLMs are a big part of the solution, but not the whole picture.
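To make that concrete, here's a minimal sketch of what "encode the data for storage" means in practice: the per-category strings (which an LLM could draft) live as plain data keyed by CLDR plural category, and rendering is a fast, deterministic lookup. The selection rule here is hand-rolled for English only; a real system would take its categories from CLDR data (e.g., ICU4J's PluralRules) rather than hard-coding them.

```java
import java.text.MessageFormat;
import java.util.Map;

// Sketch: an LLM can draft the per-category strings, but the result is
// stored as plain data (category -> pattern), so every render is fast and
// deterministic, with no model call in the hot path.
public class StoredPlurals {
    // What would live on disk / in the database.
    static final Map<String, String> READ_BOOKS = Map.of(
            "one", "You read {0} book.",
            "other", "You read {0} books.");

    // English cardinal rule: 1 is "one", everything else is "other".
    // Hand-rolled for illustration; use CLDR rules in a real system.
    static String cardinalCategory(long n) {
        return n == 1 ? "one" : "other";
    }

    static String format(Map<String, String> message, long n) {
        String pattern = message.getOrDefault(cardinalCategory(n),
                message.get("other"));
        return MessageFormat.format(pattern, n);
    }

    public static void main(String[] args) {
        System.out.println(format(READ_BOOKS, 1)); // You read 1 book.
        System.out.println(format(READ_BOOKS, 3)); // You read 3 books.
    }
}
```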