DAG Workflow Engine
A production-ready DAG (Directed Acyclic Graph) workflow engine driven by a YAML DSL. Validates, executes, and visualizes workflows with support for parallel execution, retries, conditional branching, batch iteration, and pluggable actions.
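To make the description concrete, here is a minimal sketch of what a workflow in a YAML DSL of this kind might look like. The field names (`id`, `depends_on`, `retries`, `when`, `action`) are illustrative assumptions, not the engine's actual schema:

```yaml
# Hypothetical workflow sketch -- field names are invented for illustration.
workflow: nightly-etl
steps:
  - id: extract
    action: http_fetch        # pluggable action
    retries: 3                # retry on failure
  - id: transform
    depends_on: [extract]     # edges define the DAG
    action: python_script
  - id: report
    depends_on: [extract]     # no edge to transform, so these two
    action: send_email        # branches can run in parallel
  - id: load
    depends_on: [transform]
    when: "{{ transform.row_count > 0 }}"   # conditional branching
    action: db_insert
```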
I've always been of the view that for a workflow language, you should use a proper, Turing-complete functional language which gives you all the usual flexibility for transformations on intermediate data, while also supporting automatic parallelisation of external, compute-intensive tasks.
I recommend checking out https://github.com/peterkelly/rex and also my PhD thesis on the topic https://www.pmkelly.net/publications/thesis.pdf.
The gap in flexibility between DAG-only and a full language designed for the task is a significant one.
Here's a different kind of workflow engine with a proper DSL. It turns out config management is the same problem as workflow engines, if you use my modern definition of config management.
https://github.com/purpleidea/mgmt/
I was expecting to see some verbose LLM output, but actually the code has a distinctly hand-crafted feel. Nice to see! I'm not sure "production ready" is a safe claim seven commits into a project ;)
https://github.com/vivekg13186/Daisy-DAG/blob/main/backend/s...
I've seen LLMs include that exact "production-ready" claim on code they generate. But of course it gets that from its training data.
What makes it production ready? What's the code coverage on your tests? There are only seven commits in this repo as of this comment.
How does it compare to Airflow?
Was going to ask the same thing. The orchestration space already has some very well established frameworks like Airflow and Dagster. Would be curious to see the pros and cons.
I think the future of replacements for well-established frameworks written in Python and the like is zero-dependency binaries (from Rust or Go) that require so little configuration and tuning that they "just work".
That being said, that's not this project.
I don't see any references to existing orchestrators, which are way more complete, so I presume you did this as an exercise?
Just seeing YAML used for workflows in this age makes me automatically nope out.
Curious, what format would you prefer to use to represent a workflow instead of YAML?
Type-safe code. Workflows are not configuration! If I wanted YAML hell I could stick to GitHub Actions.
But that's only the start. There are a lot of other things I would expect of a new workflow orchestrator in 2026, so if you are not comparing yourself to the competition you probably don't know what you're getting yourself into.
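For what it's worth, a minimal sketch of what "type-safe code" buys you over YAML. The `Step`/`chain` API below is invented purely for illustration, not from any real workflow SDK:

```typescript
// Hypothetical type-safe workflow definition. Each step declares its
// input and output types, so mismatched steps fail at compile time.
type Step<In, Out> = {
  name: string;
  run: (input: In) => Out;
};

// Chain two steps; the output type of the first must match the
// input type of the second, or the compiler rejects the call.
function chain<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return {
    name: `${first.name} -> ${second.name}`,
    run: (input) => second.run(first.run(input)),
  };
}

const fetchUsers: Step<void, string[]> = {
  name: "fetch-users",
  run: () => ["alice", "bob"],
};

const countUsers: Step<string[], number> = {
  name: "count-users",
  run: (users) => users.length,
};

// chain(countUsers, fetchUsers) would be a compile error, not a
// runtime failure halfway through a production run.
const pipeline = chain(fetchUsers, countUsers);
console.log(pipeline.run(undefined)); // 2
```

In a YAML file, the equivalent wiring mistake only surfaces when the workflow actually executes.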
Yeah, that makes sense. I looked at a few workflow orchestrators and I'm building something that I will release soon, but my thinking is that the "workflow engine" should be an abstraction that takes the input and executes the steps. "What" you use to define that workflow is probably the SDK layer, though I can certainly see the value in using type-safe code to define a workflow as opposed to a YAML file.
I'm mainly focusing on the portability aspect of it (e.g. use TS/Python/etc. to define the workflow/steps, or just a simple YAML file).
Are you planning to map those varied definitions onto varied orchestrators?
Sort of. My thinking is that the input to define the workflow should be anything you prefer to use (TS, Go, YAML, etc.) and the orchestrator's job is to model that and execute the job, given your deployment model.