This is a neat approach. One thing I wonder about is how it handles services that use the same port number across different protocols (like 53, which carries DNS over both TCP and UDP, or 443, which carries HTTPS over TCP and HTTP/3 over UDP). The /etc/services file approach has the same ambiguity, but at least it lists the protocol alongside the port. A lookup table that includes the protocol would be more robust for mixed environments.
It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
And then they might code up some sort of service lookup tool thingy to use on the train wreck that is the modern web.
Heck, maybe even `resolvectl service`?
Sure, but they are running web apps they've vibe-coded (hence the .vibe TLD), and for that use case, many web apps running in Docker containers, I use nginx-proxy [0]. All the container needs is a VIRTUAL_HOST environment variable with the domain, and all my router needs is an address entry for the wildcard subdomains. I even have nginx-proxy on an internet-accessible staging server.
[0] https://github.com/nginx-proxy/nginx-proxy
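For context, the setup described above is roughly this Compose sketch (the app image and hostname here are illustrative, not from the comment); nginx-proxy watches the Docker socket and routes requests by each container's VIRTUAL_HOST:

```yaml
services:
  proxy:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # read-only socket mount so the proxy can watch container events
      - /var/run/docker.sock:/tmp/docker.sock:ro

  myapp:
    image: myorg/myapp        # hypothetical vibe-coded web app
    environment:
      - VIRTUAL_HOST=myapp.vibe
```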
> It's like someone should make a file... maybe in /etc ... and put short names for services in it... maybe it could be called /etc/services...
People shit-talk container orchestration systems like Kubernetes, but if anything they greatly simplified (if not completely eliminated) the need for this sort of network bookkeeping.
I use subdomains on an OVH VPS, since I want to access the services outside the network, so I can use freshrss.mydomain.com. But anything that can rationalize port number sprawl is welcome.
I know it's mixing of layers, but I can't help but feel the IPv6 transition missed the boat when they didn't just get rid of ports in the process. They've changed so much else anyway.
Want to run another webserver instance or whatever on your computer? Get the OS to allocate a new IP for it. Ports be damned.
Could be implemented in a backwards compatible way by requiring all IPv6 TCP/UDP traffic to use a fixed port number.
I use Cloudflare Tunnel, so most of the products I build are exposed and listed there. I just add comments for those that aren't exposed (e.g. a browser extension's dev port) to that file too. A single doc means coding agents know to look there and keep it updated too.
I use the Tailscale services feature for this; an added benefit is that I get HTTPS.
This is a valid concern, certainly. I use kube for most things so it's not a problem, but my homeserver and its apps run on quadlets that I manage. In my case, I just added a README.md in the server account's folder that each project's CLAUDE.md (or whatever) is configured to read. The agent then selects a port and sticks it in that document; I have a few tens of services and, to be honest, it works. Haha, a direct machine replacement for my own manual process.
There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works. You still need the reverse proxy to do the host->port mapping, but you save yourself local DNS fiddling.
> There is no need to come up with "local TLDs" like .vibe, .local, .test and so on -- there is already an industry convention! macOS and most Linux distros support subdomains of localhost, so <anything>.localhost works.
That would work if your goal was to route traffic to localhost.
What if it isn't?
There are reasons why the likes of example.com exists.
I created something similar to help me spin up complex apps in multiple worktrees with full port orchestration: https://outport.dev/
I wonder why not use nginx and some local DNS settings to just serve all these local services under new, local hostnames.
Not too long ago I had a similar issue and solved it with that.
I did the same using caddy for ease of getting https certificates
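For reference, the Caddy version of that mapping is a couple of lines per service (hostname and port here are made up); Caddy provisions certificates automatically, including locally trusted ones for internal names:

```
myapp.example.com {
    reverse_proxy localhost:5173
}
```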
I mean, that's essentially what he's recreating here, it looks like.
I think about a decade ago Pow did something similar, though using the .test domain, and it was perhaps Ruby-specific.
https://github.com/basecamp/pow/tree/master
Vercel’s portless is a great alternative, but unfortunately it doesn’t work well with OAuth flows. I’ve built portmap [0] to solve that. It also comes with skills, which make it work really well with coding agents (instructions in the readme).
[0] https://github.com/JonasKs/portmap
Why not resolve everything with UNIX sockets instead? That way you can have them named and scoped, hiding behind port 443, since it's mostly HTTP anyway.
Does this work in the browser? How will paths to different resources used by the web app work?
It works with curl; maybe there is a case for either building a proxy that exposes UDS to a browser, or opening a feature request with browser maintainers to support UDS.
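For the server side of that idea, naming a service by filesystem path instead of port number is just AF_UNIX; a tiny illustration (the socket path is made up):

```python
import os
import socket
import tempfile

# The service's "name" is a filesystem path, not a number; directory
# permissions on the path scope who can connect.
path = os.path.join(tempfile.mkdtemp(), "myapp.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)
conn, _ = server.accept()

client.sendall(b"ping")
data = conn.recv(4)

for s in (client, conn, server):
    s.close()
```

A reverse proxy in front of sockets like this is exactly the "hide behind 443" part: the proxy listens on one TCP port and forwards to named sockets.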
Aspire.dev should be mentioned here.
What I do is use a hash function to derive port number from service name.
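A minimal sketch of that scheme, under assumptions the comment doesn't spell out: use hashlib rather than Python's builtin hash() (which is randomized per process), pick a range clear of privileged ports, and accept that collisions are still possible, so a real tool would probe and fall back.

```python
import hashlib

def port_for(service: str, base: int = 20000, span: int = 20000) -> int:
    """Map a service name to a stable port in [base, base + span)."""
    digest = hashlib.sha256(service.encode()).digest()
    # First four digest bytes as an integer, folded into the range.
    return base + int.from_bytes(digest[:4], "big") % span
```

The upside is that every machine and teammate derives the same port from the same name with no registry at all; the downside is that collisions are silent unless you check.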
I've built this twice before. The main problem that I hit is that the AI agents suck at the process lifecycle management: leaving processes alive, starting the same daemon multiple times, etc.
From a brief glance over the code I like the approaches I see. Using the `/etc/resolver/` mechanism is a new trick to me!
The interesting part to me isn't the port numbers, it's the automatic service start/stop, including idle route shutdown.
Not the same, but someone recently posted this "port" tool here on HN: https://github.com/raskrebs/sonar
HN Thread
https://news.ycombinator.com/item?id=47452515
I hate these signs of LLM generated texts so much!!
> The real annoyance is that it wasn’t just one machine. It was layers.
What is the benefit of using HTTPS for this particular use case?
Some browser features only work over HTTPS; localhost is the exception. So if you change localhost:5173 to myapp.vibe, it needs a valid certificate.
I'm slightly annoyed that vite's default port isn't 8483
why?
VITE typed on a T9 keyboard is 8483.
5173 spells Vite...
173 looks like ITE
5 in roman numerals is V.
T9 is predictive and based on a dictionary and training.
If you type "8483" on T9, your phone may offer "THUD" or "TITE" or "VITE", or all three, as choices.
But with a normal telephone keypad, if you dial, e.g. "(800) 555-VITE" then you will always dial "8483".
https://en.wikipedia.org/wiki/Phoneword
Also, a service port is always qualified by its protocol. There are separate port namespaces for each IP protocol that uses ports. "8483" is not a service port, until you spell it out:
"8483/tcp", or "8483/udp", or "8483/sctp", etc. A TCP stream, for example, is identified by a tuple: (source address, source port, destination address, destination port).
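The separate namespaces are easy to see directly: a TCP socket and a UDP socket can bind the same port number on the same address without conflict. A quick illustration (not part of the project):

```python
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))      # let the OS pick a free TCP port
port = tcp.getsockname()[1]

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", port))   # same number, different protocol: no conflict

tcp.close()
udp.close()
```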
I have something like this too, currently a 60-line Node.js file.
This project is essentially "give me some metadata & a command which takes env $PORT, and I'll handle the rest". Which is neat!
I am also sick of handling port numbers. I end up allocating them to different services on a scheme, so for testing I can spin up any VM/service combination and avoid crossover. But if I want the same service twice, ah...
It always fascinated me that ports don't have any kind of textual resolver, so you can bind to `:1234` and also say "please also accept `:foobar`". But that would itself require some kind of "port resolver" on a device, and that's another service to break and fix :)
There is /etc/services to map service names to port numbers, and getservbyname() to resolve them.
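The file format is simple enough to show in a few lines. This sketch parses a hard-coded snippet in /etc/services format rather than the real file, so it runs anywhere; the real lookup is socket.getservbyname(name, proto):

```python
# A few lines in /etc/services format: name, port/protocol, aliases.
SERVICES = """\
# name  port/protocol  aliases
http    80/tcp         www
https   443/tcp
domain  53/tcp
domain  53/udp
"""

def lookup(name: str, proto: str) -> int:
    """Resolve a (service name, protocol) pair to a port number."""
    for line in SERVICES.splitlines():
        fields = line.split("#")[0].split()   # strip comments, tokenize
        if fields and fields[0] == name and fields[1].endswith("/" + proto):
            return int(fields[1].split("/")[0])
    raise OSError(f"service {name}/{proto} not found")
```

Note that the lookup is protocol-qualified, which is exactly the ambiguity discussed upthread.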
DNS for /etc/hosts and now vibe.local for /etc/services. What will they think of next!
SVCB DNS records
It is funny, I just built something like this last week and named it "Network". Additionally, it scans any packets arriving at the SonicWall and checks whether they are approved by me or not. I am paranoid after using TP-Link at home like a dumbass.
Bind to Port 0
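That is, let the OS pick. For reference, it's one bind call:

```python
import socket

s = socket.socket()
s.bind(("127.0.0.1", 0))        # port 0 means "OS, pick any free port"
port = s.getsockname()[1]       # read back what was assigned
s.close()
```

The remaining problem, of course, is telling everyone else which port you got, which is the bookkeeping this whole thread is about.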