I work at Ramp and have always been on the “luddite” side of AI code tools. I use them, but usually I’m not that impressed, and I’m a curmudgeon when I see folks ask Claude to debug something instead of just reading the code. I’m just an old(er) neckbeard at heart.
But. This tool is scarily good. I’m seeing it “1-shot” features in a fairly sizable code base, and its fixes have better code and accuracy than mine.
This is a great writeup! Could you share more about the sandbox <-> client communication architecture? e.g., is the agent emitting events to a queue/topic, writing artifacts to object storage, and the client subscribes; or is it more direct (websocket/gRPC) from the sandbox? I’ve mostly leaned on sandbox.exec() patterns in Modal, and I’m curious what you found works best at scale.
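For concreteness, here is roughly what I mean by the sandbox.exec() pattern: a minimal sketch against Modal's public Sandbox API, where the app name, image, and commands are made up for illustration and aren't meant to reflect how Ramp actually wired this up.

    import modal

    # Hypothetical app name; in practice this would be whatever app owns the sandboxes.
    app = modal.App.lookup("agent-sandboxes", create_if_missing=True)

    # Start a long-lived sandbox and keep it alive between exec() calls.
    sb = modal.Sandbox.create(
        "sleep", "infinity",
        app=app,
        image=modal.Image.debian_slim(),
        timeout=30 * 60,
    )

    # The client drives the agent by exec()ing commands and reading their output
    # directly from the sandbox, rather than subscribing to a queue or topic.
    proc = sb.exec("python", "-c", "print('hello from the sandbox')")
    print(proc.stdout.read())

    sb.terminate()

This works fine for request/response-style steps, which is why I'm curious whether a streaming or event-driven setup turned out to be necessary at your scale.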
I guess we all know and “love” how every five minutes, some breathless hipster influencer posts “This changes everything!!!” about every new x.y.1 AI bubble increment.
But honestly? This here really is something.
I can vividly imagine that in the not-too-distant future there will be only two types of product companies: those that work like this, and those that don’t, and the latter will vanish.
Edit: To provide a less breathless take myself:
What I can very realistically imagine is this: just as sane, level-headed startups today go “let’s first set up some decent infrastructure-as-code, a continuous delivery pipeline, and a solid testing framework, and then start building the product for good”, in the future sane, level-headed startups will go “let’s first set up some decent infrastructure-as-code, a continuous delivery pipeline, a solid testing framework, and a Ramp-style background agent, and then start building the product for good”.
This basically sums up where we're at: undeniably useful, but to be approached with care.
The Chrome extension bit is super interesting and well thought out.
I wonder what percentage of PRs etc. now comes from non-engineers?
Probably the best internal AI platform I've seen to date. Incredible work.
The commitment to reducing friction is really incredible. Are they implying that any developer could recreate the system with AI from the description?