Do you handle JSON numbers safely by default, or do you require that people write their own deserializers for numbers that would lose precision when coerced into Python's float type? The most common mistake I see JSON libraries make is using a fixed-precision floating-point type somewhere in the pipeline, even though JSON's number type specifies no such limitation. That causes silent precision loss unless people catch the problem and do their own pre-serialization.
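For reference, this is the failure mode in question with the standard library's json module, and the Decimal-based opt-out it offers (a minimal stdlib-only sketch):

    import json
    from decimal import Decimal

    raw = '{"price": 0.12345678901234567890123}'

    # Default behaviour: the number is coerced to a 64-bit float,
    # silently dropping digits beyond float's precision.
    print(json.loads(raw)["price"])  # 0.12345678901234568

    # parse_float=Decimal preserves the exact textual value instead.
    print(json.loads(raw, parse_float=Decimal)["price"])
    # Decimal('0.12345678901234567890123')

    # Note the asymmetry: json.dumps() cannot serialize Decimal back
    # without a custom default= handler, which is the "pre-serialization"
    # work referred to above.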
The degree of LLM writing here makes it hard to determine which parts of this are novel and which parts are derivations of existing popular libraries like Pydantic and msgspec.
I also don't think either Pydantic or msgspec struggles with any of the "gotcha" cases in the post. Both can understand enums, type tagging, literals, etc.
Were there any particular challenges when implementing your library? I have implemented my own serialization library [1] (with a focus on not allowing arbitrary code execution), but had skipped dataclasses for now, since they seemed difficult to get right. What was your experience?
[1] https://github.com/99991/safeserialize
Side note: I think a warning in the README about arbitrary code execution when deserializing untrusted inputs would be nice.
Hi HN - I’m the author of Jsonic.
I built it after repeatedly running into friction with Python’s built-in json module when working with classes, dataclasses, nested objects, and type hints.
Jsonic focuses on:

- Zero-boilerplate serialization and deserialization
- Strict type validation with clear errors
- Natural support for dataclasses, enums, tuples, sets, nested objects, etc.
- Optional field exclusion (e.g. hiding sensitive data)
- Extra features such as transient field definitions, support for __slots__ classes, etc.
- Clean interop with Pydantic models
The goal is to make JSON round-tripping feel Pythonic and predictable without writing to_dict() / from_dict() everywhere.
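As a concrete illustration of that friction, here is the kind of hand-written glue the stdlib requires for even a small dataclass with an enum field (stdlib-only sketch; the class names are just for illustration):

    import json
    from dataclasses import dataclass, asdict
    from enum import Enum

    class Role(Enum):
        ADMIN = "admin"

    @dataclass
    class User:
        name: str
        role: Role

    user = User("Ada", Role.ADMIN)

    # Serializing: asdict() keeps the Enum member, so json.dumps raises
    # TypeError unless you supply a custom default= handler.
    data = json.dumps(asdict(user), default=lambda o: o.value)

    # Deserializing: json.loads returns a plain dict; rebuilding the typed
    # object is exactly the from_dict() boilerplate mentioned above.
    raw = json.loads(data)
    restored = User(name=raw["name"], role=Role(raw["role"]))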
I’d really appreciate feedback on the API design and tradeoffs.
All the quoted Python code in the Medium post has broken formatting. Your comment above has the same broken formatting. It does not inspire confidence if you can't spot such obvious breakage.
> after repeatedly running into friction
Could you be more specific?
Did you even glance at your article before publishing it? The Python code blocks are completely scrambled and haphazardly indented; it gives me a headache to read.
It's much more effort for me to scroll horizontally through long, syntactically incorrect, unbroken clipped lines than it would have been for you to break the lines up so they're readable (and syntactically correct) in the first place.
Remember: one writer, many readers. And "Pythonic" strongly implies "readable" by humans, not just "parseable" by machines.
Is that on purpose? It's just as bad if it isn't, because it shows you did zero proofreading. I would hate to use a library or read an article that is so carelessly written. In an article or library about JSON and Python code, syntax and readability matter.
If you want to write write-only one-liners on purpose, stick to Perl.
The article would benefit from a very clear and explicit section on Pydantic's model_dump_json() vs. your tool, as that's the primary thing your tool is likely competing against.
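For readers unfamiliar with it, this is the Pydantic v2 baseline being referred to; any comparison section would be measuring against roughly this round trip:

    from pydantic import BaseModel

    class User(BaseModel):
        name: str
        age: int

    # Model -> JSON string
    payload = User(name="Ada", age=36).model_dump_json()
    # '{"name":"Ada","age":36}'

    # JSON string -> validated model
    restored = User.model_validate_json(payload)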
Thanks for sharing. Could you please comment on the performance aspect vis-à-vis the JSON reader/writer provided by Pydantic?
Sorry to be a hater, but wouldn’t using Pydantic be better in almost every circumstance here?
Pydantic is a great lib and has many advantages over Jsonic.
I think the main use cases for Jsonic over Pydantic are:

- You already have plain Python classes or dataclasses and don’t want to convert them to BaseModel
- You prefer minimal intrusion: no inheritance, no decorators, no schema definitions
- You need to serialize and deserialize Pydantic models alongside non-Pydantic classes
Having said that, Pydantic is the better choice in most cases.
This is also why Jsonic integrates natively with Pydantic, so you can serialize Pydantic models using Jsonic out of the box.
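A rough sketch of what that mixed usage might look like. Note that the jsonic.serialize entry point here is an assumption based on the description above, not confirmed API; check the README for the actual names:

    from pydantic import BaseModel

    # Assumed entry point -- adjust if Jsonic's actual API differs.
    from jsonic import serialize

    class Account(BaseModel):          # a Pydantic model
        owner: str
        balance: float

    class AuditEntry:                  # a plain class: no BaseModel, no decorators
        def __init__(self, actor: str, action: str):
            self.actor = actor
            self.action = action

    # Both kinds of object go through the same entry point.
    acct_json = serialize(Account(owner="Ada", balance=10.0))
    audit_json = serialize(AuditEntry("Ada", "create"))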
I can see that. Pydantic is great but relatively slow (which matters on edge devices) and can be bloated.
The fact that all your projects use Pydantic makes it an easy starting point and creates standardisation, of course.
Nevertheless, I can definitely see some use cases for lightweight JSON serialisation without bringing in Pydantic. Dataclasses are great, but lack proper JSON handling.
Looks useful. Will try it out. Thanks for making it.