The best part about this is that you know the type of people/companies using langchain are likely the type that are not going to patch this in a timely manner.
Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user.
No dig at you, but I'd peg the average langchain user as one who is either a) using it because their C-suite heard about it at some AI conference and had it foisted upon them, or b) does not care about software quality in general.
I've talked to many people who regret building on top of it but they're in too deep.
I think you may come to the same conclusions over time.
I am not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually takes more code to use than simply writing my own direct LLM API calls.
Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
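For concreteness, the kind of direct call I mean is just a few lines (a minimal sketch assuming the OpenAI Python SDK; the model name is illustrative):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize this thread in one line."}],
    )
    print(resp.choices[0].message.content)

No framework, no abstraction layers to debug when something goes wrong.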
JSON Structured Output from OpenAI was released a year after the first LangChain release.
I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time, because they do have good prompts baked into the framework.
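For example, a schema-constrained call looks like this (a sketch assuming the OpenAI Python SDK; the schema and model name are illustrative):

    from openai import OpenAI

    client = OpenAI()

    schema = {
        "name": "ticket",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "severity": {"type": "string", "enum": ["low", "medium", "high"]},
                "summary": {"type": "string"},
            },
            "required": ["severity", "summary"],
            "additionalProperties": False,
        },
    }

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Classify: 'prod is down again'"}],
        response_format={"type": "json_schema", "json_schema": schema},
    )
    print(resp.choices[0].message.content)  # JSON conforming to the schema

The model is constrained to emit JSON matching the schema, which covers most of what I'd otherwise reach for a prompt framework to enforce.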
Cheers to all the teams on sev1 calls on their holidays, we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure)
CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
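Roughly the class of bug, as a generic Python sketch (hypothetical classes, not the actual langchain-core code): a loader that trusts a type tag in the payload can be steered into instantiating the wrong class, and a dumper that round-trips __dict__ then serializes fields that were meant to stay secret.

    import json

    class PublicConfig:
        def __init__(self, **kw):
            self.__dict__.update(kw)

    class ProviderClient:
        def __init__(self, **kw):
            self.__dict__.update(kw)
            # secret populated at runtime in this hypothetical client
            self.api_key = kw.get("api_key", "sk-live-REDACTED")

    REGISTRY = {"PublicConfig": PublicConfig, "ProviderClient": ProviderClient}

    def loads(payload: str):
        data = json.loads(payload)
        cls = REGISTRY[data.pop("_type")]  # attacker controls the type tag
        return cls(**data)                 # object confusion happens here

    def dumps(obj) -> str:
        # naive round-trip: serializes everything, secrets included
        return json.dumps({"_type": type(obj).__name__, **obj.__dict__})

    # A payload that claims to be harmless but names the secret-bearing
    # type; the load/dump round-trip then leaks the key.
    evil = '{"_type": "ProviderClient", "label": "innocuous"}'
    print(dumps(loads(evil)))  # -> {..., "api_key": "sk-live-REDACTED"}

A typical mitigation for this class of bug is an allowlist of deserializable types plus explicit redaction of secret fields on dump; see the post for the actual mitigations.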
WHY on earth did the author of the CVE feel the need to feed the description text through an LLM? I get dizzy when I see this AI slop style.
I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.
> WHY on earth did the author of the CVE feel the need to feed the description text through an LLM?
Not everyone speaks English natively.
Not everyone has taste when it comes to written English.
I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!
If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?
I prefer reading the LLM output for accessibility reasons.
More importantly though, the sheer amount of this complaint on HN has become a great reason not to show up.
Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.
If I want to cleanup, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.
You can use ChatGPT to reverse the prompt.
Not sure if it's a joke, but I don't think an LLM is a bijective function.
ChatGPT can generate a sentence that plausibly looks like the prompt.