Yesterday Anthropic shipped agent teams with Opus 4.6. I had been building a memory plugin for Claude Code that stores project decisions as plain JSON on disk, and I wanted to see if it would work across parallel agents without any changes.
So I tested it. I spun up three teammates (backend, frontend, and tester) on a full-stack project. Before the test, I pre-loaded the project knowledge:
/nemp:init # auto-detected Next.js, TypeScript, Prisma
/nemp:save auth "JWT with 15min access tokens, httpOnly refresh"
/nemp:save api-style "RESTful, snake_case, versioned at /api/v1"
/nemp:save testing "Pytest with fixtures, 80% coverage minimum"
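For a sense of what that pre-loading produces, the store could end up looking roughly like this. This is a hypothetical sketch of .nemp/memories.json based on the commands above; the actual schema Nemp uses may differ:

    {
      "stack": "Next.js, TypeScript, Prisma",
      "auth": "JWT with 15min access tokens, httpOnly refresh",
      "api-style": "RESTful, snake_case, versioned at /api/v1",
      "testing": "Pytest with fixtures, 80% coverage minimum"
    }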
Then each teammate ran /nemp:context with their relevant keywords as their first action. The backend agent ran /nemp:context auth and immediately knew the JWT strategy. The frontend agent ran /nemp:context stack and knew it was Next.js + TypeScript. The tester ran /nemp:context testing and found the test conventions.
It worked because the memory is just a JSON file on disk (.nemp/memories.json). No server, no port, no database. Every agent reads the same file. One saves a finding, the others discover it with a keyword search.
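To make the pattern concrete, here is a minimal sketch of save-then-search over a shared JSON file. It is illustrative only, not Nemp's actual implementation; the function names and store shape are assumptions:

    # Illustrative sketch only -- not Nemp's real code; schema and helpers are assumed.
    import json
    from pathlib import Path

    STORE = Path(".nemp/memories.json")

    def load() -> dict:
        # Every agent reads the same file; a missing file just means empty memory.
        return json.loads(STORE.read_text()) if STORE.exists() else {}

    def save(key: str, value: str) -> None:
        # One agent records a finding under a keyword.
        memories = load()
        memories[key] = value
        STORE.parent.mkdir(exist_ok=True)
        STORE.write_text(json.dumps(memories, indent=2))

    def context(keyword: str) -> dict:
        # The others discover it with a keyword search over keys and values.
        kw = keyword.lower()
        return {k: v for k, v in load().items()
                if kw in k.lower() or kw in v.lower()}

With that shape, context("auth") would surface the JWT decision, which is essentially what the backend teammate got from /nemp:context auth.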
End result: 34 files committed, 100 tests passing, and zero conflicts between the three agents. No agent had to re-discover what another already knew.
The whole thing is a Claude Code plugin: two commands to install, no API keys, no cloud, nothing leaves your machine.
Demo of the full test: https://vimeo.com/1162546825?share=copy&fl=sv&fe=ci
I'm a solo developer building this in the open. If this is useful to you, a star on the repo would mean a lot; it helps other developers find it.
https://github.com/SukinShetty/Nemp-memory
Happy to answer any questions about the architecture or how it performed with agent teams.
This is elegantly simple and brilliantly practical. While everyone's racing to build complex orchestration layers and cloud-based memory systems, you've solved the shared context problem with a JSON file on disk. Sometimes the best architecture is the one that doesn't need architecture.
The timing couldn't be better either – shipping this right as Anthropic drops agent teams is like releasing an umbrella stand the morning it starts raining. The real genius here is that you pre-loaded project decisions before spawning agents, so they didn't waste cycles re-discovering the wheel (or in this case, the JWT strategy).
Zero conflicts between 3 agents working in parallel? That's the kind of result that makes you wonder why we've been overcomplicating this. Starred the repo – looking forward to seeing how this evolves as more developers realize they don't need a distributed database to share context between agents running on the same machine.
Really solid work by Sukin. Nemp feels like the missing piece for Claude Code, where agents actually share understanding instead of re-figuring things out in parallel.
Local, simple, and practical memory like this is exactly what makes multi-agent setups usable in the real world.
Thank you
Thank you AI Agents
Thank you so much, Syed