> We cannot issue an IPv4 address to each machine without blowing out the cost of the subscription. We cannot use IPv6-only as that means some of the internet cannot reach the VM over the web. That means we have to share IPv4 addresses between VMs.
Give users the option to use IPv6 only, and if a user needs legacy IP, add it as an additional cost and move on.
Keeping v4 at the same cost level as v6 is not a problem we can solve. If it were, we wouldn't need v6.
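>legacy IP
lol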
This is great if you have IPv6 support from your ISP. Not so great if you don't.
Before someone mentions tunnels: last time I tried to set up a tunnel, Happy Eyeballs didn't work for me at all; almost everything went through the tunnel anyway, and I had to deal with non-residential IP space issues and way too much traffic.
I complained, as a yearly tradition, for a couple of years to get v6 enabled at my ISP. They had the core network enabled on World IPv6 Launch in 2012, but it was never deployed to end customers.
One simple way to check whether your ISP has some kind of IPv6 network is to see if the CDN domains used by YouTube and Facebook have AAAA records.
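From a shell that check looks something like this (the hostnames below are only examples; the more telling ones are the ISP-embedded cache hostnames YouTube hands your browser, which you can copy from the network tab while a video plays):
  dig AAAA redirector.googlevideo.com +short
  dig AAAA static.xx.fbcdn.net +short
If the cache hostnames your ISP serves come back with AAAA records, there is some v6 in its network even if it is not offered to subscribers yet.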
We shouldn't have to ask for ISPs to add IPv6 support but here we are.
It's a nice solution for sure, but a problem by choice. You could just have an AAAA record for the domain in addition to the A record, and as GP pointed out, resolve SSH sessions via the IPv6. If the user wants SSH to work with IPv4 for whatever reason—I see the point that there may be some web visitors without IPv6 still, but devs?—they could pay a small extra for a dedicated IPv4 address.
They could buy a dedicated IPv4 address, but that address still has to be tunneled through [EDIT:] IPv6 networks if that dev has no access to [EDIT:] IPv4 networks. Thus DX still suffers. [ADDENDUM: I mistakenly swapped "IPv4" and "IPv6" there. See comments.]
I'm not sure I understand your point; if exe.dev operates a dedicated IP solely so a specific mythical IPv6-less developer can connect to a specific server, then there's no tunnelling involved at all.
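> they could pay a small extra for a dedicated IPv4 address.
Did you mean that the dedicated IPv4 address is to connect via SSH? Then my objection doesn't apply.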
SSH is an incredibly versatile and useful tool, but many things about the protocol are poorly designed, including its essentially made-up-as-you-go-along wire formats for authentication negotiation, key exchange, etc.
In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys, in large part because they're stored "bare" rather than encapsulated in a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:
https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
That's the point, though. An SSH key gives authentication, not authorization. Generally a certificate is a key signed by some other mutually trusted authority, which SSH explicitly tried to avoid.
What a great case of "you're holding it wrong!" I need to add individual configuration to every host I ever want to connect to before connecting to avoid exposing all public keys on my device? What if I mistype and contact a server not my own by accident?
The server matches the public key you offer with one in the authorized_keys file. If you don't want to expose your raw public key to the server, you'll need to generate and put a hashed key format into the authorized_keys file, which at that point is the same as just generating a new purpose-built key, no? Am I missing something?
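> where the affected users might be surprised or alarmed to learn that it is possible to link these real-world identities.
I feel like it's obvious that SSH public keys publicly identify me, and if I don't want that, I can make different keys for different sites.
You can try it yourself: [0] returns all the keys you send and even shows your GitHub username if one of the keys is used there.
[0] ssh whoami.filippo.io
This is just an awfully designed feature, is all.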
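If the worry is offering every key in your agent to every host you touch, OpenSSH can already be pinned per host; a small sketch (hostname and key path are placeholders):
  # ~/.ssh/config
  Host github.com
      IdentityFile ~/.ssh/id_ed25519_github
      IdentitiesOnly yes
With IdentitiesOnly, the client offers only the listed key rather than everything in the agent, which is exactly the "individual configuration per host" being complained about.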
They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either. With only one routable IP for the host, all traffic on a port shared by VMs has to go to a server on the host first (unless you route based on port or source IP with iptables, but that is not hostname based).
The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header, only because nginx is on the host.
What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:
  ssh user@vm1.box1.tld
becomes:
  ssh -J jumpusr@box1.tld user@vm1
And just make jumpusr have no host permissions and a shell set to only allow ssh.
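A minimal client-side version of the same idea fits in ~/.ssh/config (names here are illustrative; vm1/vm2 must be resolvable from the jump host):
  Host vm1 vm2
      ProxyJump jumpusr@box1.tld
After that, a plain "ssh user@vm1" hops through box1.tld automatically.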
> The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM.
That's one implementation. Another implementation is for the proxy to look at the SNI information in the ClientHello and choose the correct backend using that information _without_ decrypting anything.
Encrypted SNI and ECH require some coordination, but still don't require decryption or trust by the proxy/jumpbox, which might be really important if you have a large number of otherwise independent services behind the single address.
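For HTTPS, that route-by-SNI-without-decrypting approach is what, for example, nginx's stream module does with ssl_preread; a minimal sketch, with made-up backend names and addresses:
  stream {
      map $ssl_preread_server_name $backend {
          vm1.box1.tld  10.0.0.11:443;
          vm2.box1.tld  10.0.0.12:443;
      }
      server {
          listen 443;
          ssl_preread on;       # peek at the ClientHello, no keys needed
          proxy_pass $backend;
      }
  }
TLS still terminates inside the VM; the host only ever reads the unencrypted ClientHello.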
The point is that they want the simple UX of "ssh vm1.box1.tld" takes you to the same machine that browsing to vm1.box1.tld takes you to, without requiring their users to set any additional configuration.
>They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either.
> Proceeds to explain how the HTTP traffic flows based on the hostname.
If you wanted to flex on your knowledge of the subject you could have just led the whole explanation with
>"I know all about this, here's how it works."
Also
>"What they want is a reverse proxy for SSH"
They already did this, I'm much more impressed by the original article that actually implemented it than by your comment "correcting them" and suggesting a solution.
I agree SRV records would have helped with a tremendous number of unnecessary proxies and wasted heat energy from unnecessary computing, but in this day and age, I think ECH/ESNI-type functions should be considered for _every_ new protocol.
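So far it feels like only LDAP really makes use of them, at least with the tech I interact with.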
It’s also similar with mDNS on local networks. It’s actually nice!
Overall, DNS features are not always well implemented in most software stacks.
A basic example is the fact that DNS resolution actually returns a list of IPs, and the client should try them sequentially or in parallel, so that one can be down without impact or annoying TTL propagation issues. Yet many languages have a std lib that gives you back a single IP, or an HTTP client that assumes only one, the first.
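Easy to see from a shell: getent goes through the same resolver path most programs use and prints every address, not just the first:
  getent ahosts example.com
A client that only ever takes the first entry of that list is the failure mode described above.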
SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?
You are correct, but I expect they instruct their users to run with host key validation disabled (StrictHostKeyChecking=no, UserKnownHostsFile=/dev/null), as they expect these to be ephemeral instances.
I mean, anytime you use the cloud for anything, you are giving MITM capabilities to the hosting provider. It is their hardware, their hypervisors... they can access anything inside the VMs
Yeah, I ran into this problem too. I tried a few different hacky solutions and then settled on using port knocking to sort inbound ssh connections into their intended destinations. Works great.
I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would for any other environment. There's an option in sshd that allows you to run a script during a connection request so you can almost juggle connections according to the username -- if I remember right, it's been several years since I tried that -- but it's terribly fragile and tends to not pass TTYs properly and basically everything hates it.
But set up knockd, generate a random knock sequence for each individual user, automatically update your knockd config with it, and each knock sequence then (temporarily) adds a NAT rule that connects the user to their destination container.
When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.
Been using this for a few years and no problems so far.
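The client side of a setup like that can be a single ProxyCommand; a rough sketch with placeholder ports and hostnames, not the actual config described above:
  # ~/.ssh/config snippet handed to the user
  Host mycontainer
      HostName containers.example.net
      User me
      ProxyCommand sh -c 'knock %h 7000 8001 9002; sleep 1; exec nc %h %p'
The knock opens the temporary NAT rule, then nc carries the actual SSH stream to whatever the rule forwards to the container.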
In kinda the same situation, I was using the username for host routing, and the real user was determined by the principal in the SSH certificate - so the proxy didn't even need to know the concrete certificates for users; it was even easier than keeping track of user SSH keys.
Certificate signing was done by a separate SSH service: you connected to it with SSH agent forwarding enabled, passed a 2FA challenge, and got a signed cert injected into your agent.
Can you expand on your solution a little bit? AFAIK principals don't impact the user that is logged in at all. A principal in the cert and in the authorized list just allows the user to log in as any user they want, which is why you have to write a script that validates the username before listing principals to accept.
I'd love to learn more about how you solved it and what I may be mistaken about.
This is a clever trick, but I can’t help but wonder where it breaks. There seems to be an invariant that the number of backends a public key is mapped to cannot exceed the number of proxy IPs available. The scheme probably works fine if most people are only using a small number of instances, though. I assume this is in fact the case.
Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
I also wonder what happens if you want to grant access to your VM to additional public keys and one of those public keys happens to already be routed to a different VM on the same IP.
They just need to set the limit on the number of VMs per user to be less than or equal to the number of public IPs they have available. As long as two users don't try to share a key, you are good... which should be easy, just don't let them upload a key that another user has already uploaded.
Wouldn't a much simpler approach be to have everyone log in to a common server which sits on a VPN with all the VMs? It introduces an extra hop, but this is a pretty minor inconvenience and can be scripted away.
I wonder if it's something like https://github.com/cea-hpc/sshproxy that sits in the middle (with decryption and everything) or if they could do this without setting up a session directly with the client.
Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.
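EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper. Will try to remember to look later.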
Using nonstandard ports would break the `ssh foo.exe.dev` pattern.
This could also have been solved by requiring users to customize their SSH config (coder does this once per machine, and it applies to all workspaces), but I guess the exe.dev guys are going for a "zero-config, works anywhere" experience.
Zero-config usually means the complexity got shoved somewhere less visible.
An SSH config is fine for one box, but with a pile of ephemeral workspaces it turns into stale cruft fast, and half the entries are for hosts you forgot existed.
The port issue is also boringly practical.
A lot of corp envs treat 22 as blessed and anything else as a ticket, so baking the routing into the name is ugly, but I can see why they picked it, even if the protocol should have had a target name from day one.
Not needing a different port. Middleboxes sometimes block ssh on nonstandard ports. Also, to preserve the alignment between the SSH hostname and the web service hostname, as though the user was accessing a single host at a single public address. Usability is key for them.
Like, I understand the really restrictive ones that only allow web browsing. But why allow outgoing ssh to port 22 but not other ports? Especially when port 22 is arguably the least secure option. At that point let people connect to any port except for a small blacklist.
Asking back, when I limit the outgoing connections from a network, why would I account for any nonstandard port and make the ruleset unwieldy, just in case someone wanted to do something clever?
A simple ruleset would only block a couple dangerous ports and leave everything else connectable. Whitelisting outgoing destination ports is more complicated and more annoying to deal with for no benefit. The only place you should be whitelisting destination ports is when you're looking at incoming connections.
It's hard to think of a clearer example for the concept of Developer Experience.
One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
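The classic way to build that (gitolite does it this way; GitHub's production setup presumably differs, but the idea is the same) is a single git user whose authorized_keys entries carry a forced command, so the offered key itself selects the identity; a sketch with a hypothetical wrapper name:
  # /home/git/.ssh/authorized_keys
  command="serve-repos alice",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... alice
  command="serve-repos bob",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... bob
So git@github.com:author/repo needs no per-user accounts at all; the key is the whole login.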
You don't need SSH. Installing an SSH server on such a VM is a holdover from how UNIX servers worked. It puts you in the mindset of treating your server as a pet and doing things for a single VM instead of having proper server management in place. I would reconsider whether offering SSH is an actual requirement here, or whether it could be better served by offering users a proper control panel to manage and monitor the VMs.
I have not worked in server management in many years, but with how cheap code is with AI, rolling your own dashboard may not be such a bad idea.
>with SSH server
My comment was about how you do not need an ssh server. The idea of a server exposing a command line that allows potentially anything to be done is not necessary in order to manage and monitor a server.
You can front a TLS server on port 443 and then, based on the SNI name, redirect the connection to your final destination host without decrypting it.
Provided your users are willing to configure a little something - or you provide a wrapping command - you can set up the tunneling for them.