This is great, and I keep my fingers crossed for Hatchet!
One use case I imagine is key here is background/async agents, OpenAI Codex/Jules style, so it's great that I can durably run them with Pickaxe (btw, I believe I've read somewhere in the Temporal docs or a webinar that Codex was built on that ;). But how do I get a real-time, resumable message stream back to the client? The user might reload the page or return after 15 minutes, etc. I wasn't able to think of an elegant way to model this in a distributed system.
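The closest I've come is treating the stream as an append-only chunk log keyed by run ID, so a reconnecting client replays from its last offset and then tails live events. A rough sketch of that idea (plain Node, in-memory for brevity; none of this is Pickaxe API, and a real setup would persist the log in Redis Streams or Postgres):

```ts
// Rough sketch, not Pickaxe API: an append-only chunk log keyed by run ID.
// In-memory for brevity; production would persist to a durable store.
import { createServer } from "node:http";
import { EventEmitter } from "node:events";

type Chunk = { offset: number; data: string };

const logs = new Map<string, Chunk[]>(); // runId -> ordered chunks
const live = new EventEmitter();         // fan-out to connected clients

// Called by the agent worker whenever it produces a token or message.
export function appendChunk(runId: string, data: string) {
  const log = logs.get(runId) ?? [];
  const chunk = { offset: log.length, data };
  log.push(chunk);
  logs.set(runId, log);
  live.emit(runId, chunk);
}

// SSE endpoint: replay everything after the client's Last-Event-ID, then tail.
// EventSource sends that header automatically on reconnect; after a full page
// reload, the client can pass its stored offset the same way.
createServer((req, res) => {
  const runId = new URL(req.url!, "http://localhost").searchParams.get("runId")!;
  const after = Number(req.headers["last-event-id"] ?? -1);
  res.writeHead(200, { "content-type": "text/event-stream" });

  const send = (chunk: Chunk) => {
    res.write(`id: ${chunk.offset}\ndata: ${chunk.data}\n\n`);
  };

  for (const chunk of logs.get(runId) ?? []) {
    if (chunk.offset > after) send(chunk); // catch up from the log
  }
  live.on(runId, send);                    // then stream live chunks
  req.on("close", () => live.off(runId, send));
}).listen(3000);
```

That gets resumability, but it still feels bolted on rather than modeled inside the workflow itself.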
If I understand it correctly, tools' and agents' run() methods work in a similar way to React hooks, correct? Depending on execution order, a tool is either called or its cached value is returned. That way local state can be replayed, and that's why the "no side effects" rule is in place.
I like it. Just, what's the recommended way to have a chat assistant agent with multiple tools? The message history would need to be passed to the very top-level agent.run call, wouldn't it?
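To check my mental model, here's a toy version of the replay mechanism as I picture it, with the message history entering once at the top-level call (this is not Pickaxe's internals, and extractCity/fetchWeather are hypothetical stubs):

```ts
// Toy model of replay-by-execution-order, not Pickaxe's internals.
// Each step result is cached by its position in the run; on re-execution
// after a crash, cached steps are returned instead of re-run, which is
// exactly why code between steps must be deterministic and side-effect free.
async function replayRun<T>(
  log: unknown[], // the durable step log, persisted elsewhere
  body: (step: <R>(fn: () => Promise<R>) => Promise<R>) => Promise<T>,
): Promise<T> {
  let i = 0;
  const step = async <R>(fn: () => Promise<R>): Promise<R> => {
    const idx = i++;
    if (idx < log.length) return log[idx] as R; // replayed: cached value
    const result = await fn();                  // first run: actually execute
    log[idx] = result;                          // persist before continuing
    return result;
  };
  return body(step);
}

// Hypothetical tools, stubbed for the example.
const extractCity = async (msgs: { role: string; content: string }[]) => "Tokyo";
const fetchWeather = async (city: string) => 21;

// The full message history enters once, at the top-level call, and flows down.
const history = [{ role: "user", content: "What's the weather in Tokyo?" }];
const reply = await replayRun([], async (step) => {
  const city = await step(() => extractCity(history)); // step 0
  const temp = await step(() => fetchWeather(city));   // step 1
  return `It's ${temp}°C in ${city}.`;
});
```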
It would be great to have a section in the README showing how the code looks without the library, contrasted with the example you already have. I would need a significant time-saving reason to adopt a new external library: with new libraries like yours, we don't know how long you plan to support it, so depending on one for a core part of my business is a huge risk.
My use case: Cursor for open-source terminal-based coding agents.
I see the API rarely mentions the exact message structure (system prompt, assistant/user history, etc.) or the choice of model (other than defaultLanguageModel). And it's not immediately clear to me how `toolbox.pickAndRun` can access any context from an ongoing agentic flow beyond the one prompt. But this is just from skimming the docs; maybe all of this is supported?
The reason I ask is that I've had a lot of success using different models for different tasks, constructing the system prompt specifically for each task, and choosing between the "default" long assistant/tool_call/user/(repeat) message history and constantly pruning it (bad for caching but sometimes good for performance). It would be nice to know a library like this allows experimenting with these strategies.
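Concretely, this is the kind of per-task control I mean, written against the plain Vercel AI SDK rather than Pickaxe (the model choices and the pruning heuristic are just examples):

```ts
// Plain Vercel AI SDK, no Pickaxe; model names and the pruning heuristic
// are just examples of the per-task strategies I'd want to experiment with.
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

type Msg = { role: "user" | "assistant"; content: string };

// Cheap model + aggressively pruned history for a classification-style task.
async function classifyIntent(history: Msg[]) {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    system: "Classify the user's intent in one word.",
    messages: history.slice(-2), // prune: keep only the latest exchange
  });
  return text;
}

// Stronger model + full history for the actual answer (better for caching).
async function answer(history: Msg[]) {
  const { text } = await generateText({
    model: openai("gpt-4o"),
    system: "You are a precise coding assistant.",
    messages: history,
  });
  return text;
}
```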
Fantastic. That's exactly what I've wanted to build for a long time but never got around to, instead writing ad-hoc, incomplete, overlapping stuff each time.
Oh this is really cool! I was building out a bit of this with Restate this past week, but this seems really well put together :) will give it a try!
The library name is confusing given https://pickaxe.co/, a nicely done low-code platform for building/monetizing chatbots and agents that's been around for 2.5 years or so.
(No connection to pickaxe.co other than using the platform)
What I really like about it is that this kind of project helps people learn what an agent is.
Love to see more frameworks like this in the TypeScript ecosystem! How does this compare to Mastra (https://mastra.ai/)?
How does this compare to agent-kit by Inngest?
As a long-time Hatchet user, I understand why you’ve created this library, but it also disappoints me a little bit. I wish more engineering time were spent on making the core platform more stable and performant.