And yet, in the real world we actually use distribution centers and loading docks; we don’t send delivery boys point to point. At the receiving company’s loading docks, we can have staff specialise in internal delivery, and also figure out whether the package should go to someone’s office or to a temporary warehouse. The recipient might be on vacation, and internal logistics will know how to sort that out.
Meanwhile, the point-to-point delivery boy will fail to enter the building, then fail to find the correct office, then get rerouted to the private residence of someone on vacation (they need to sign personally, of course), and finally we need another delivery boy to move the package to the loading dock where it should have gone in the first place.
I get the “let’s slaughter NAT” arguments, but this is an argument in favour of NAT. And in reality, we still need routing and firewalls. The exact same distribution network is still in use, just with fewer allowances for the recipient to manage internal delivery.
Personal opinion: IPv6 should have been almost exactly the same as IPv4, just with more addresses and a clear path to carry IPv6-to-IPv4 traffic transparently without running dual stack (maybe a NAT?). IPv6 is too complex, error-prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after its introduction.
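For what it’s worth, the “maybe a NAT?” part does exist in roughly that spirit: NAT64 translation, where RFC 6052 defines a well-known 64:ff9b::/96 prefix that simply embeds the 32-bit IPv4 address in the low bits of an IPv6 address. A rough Python sketch of just the address embedding (192.0.2.33 is a documentation address, and this is only the mapping, not a translator):

    import ipaddress

    WKP = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

    def embed_ipv4(v4_str):
        # Put the 32-bit IPv4 address into the low 32 bits of the /96 prefix.
        v4 = ipaddress.IPv4Address(v4_str)
        return ipaddress.IPv6Address(int(WKP.network_address) | int(v4))

    def extract_ipv4(v6):
        # Recover the embedded IPv4 address (assumes the well-known /96 prefix).
        return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

    print(embed_ipv4("192.0.2.33"))                # 64:ff9b::c000:221
    print(extract_ipv4(embed_ipv4("192.0.2.33")))  # 192.0.2.33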
> in the real world we actually use distribution centers and loading docks
Because we can pass packages in bulk across large distances. In routing, it’s always delivery boys: a single packet is a single packet; there’s no bulk delivery, except where you have e.g. a VPN packing multiple packets into a jumbo frame or something.
The comment you’re replying to is only providing an analogy, used to explain a single property by abstraction, not to capture the entire thing.
> we can have staff specialise in internal delivery
But that’s not at all how NAT works: it’s not specialising in delivery to private hosts and making it more efficient. It’s a layer of bureaucracy (like TURN servers and paperwork: the lookup tables and mappings) that adds complexity, not because it’s ideally necessary but because of limitations in the data format.
Routers still route pretty much exactly the same whether it’s plain IPv6 or NAT; it’s just that at the NAT layer the public IP and port get remapped to internal addresses and ports. The routing itself is unchanged, but now your router has to do extra paperwork that’s only necessary because of the addressing scheme.
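To make the “extra paperwork” concrete, here’s a toy Python sketch of the kind of connection-tracking table a NAT keeps; the class name, port range and allocation policy are made up for illustration, not how any real router implements it:

    import itertools

    class ToyNat:
        """Toy NAT bookkeeping: remap (internal ip, port) <-> public port."""

        def __init__(self, public_ip):
            self.public_ip = public_ip
            self._ports = itertools.count(40000)  # arbitrary ephemeral range
            self.out = {}    # (internal_ip, internal_port) -> public_port
            self.back = {}   # public_port -> (internal_ip, internal_port)

        def outbound(self, internal_ip, internal_port):
            # Rewrite an outgoing packet's source, allocating a mapping if new.
            key = (internal_ip, internal_port)
            if key not in self.out:
                port = next(self._ports)
                self.out[key] = port
                self.back[port] = key
            return self.public_ip, self.out[key]

        def inbound(self, public_port):
            # Where should a reply to this public port be forwarded?
            # None means no mapping, so the packet gets dropped.
            return self.back.get(public_port)

    nat = ToyNat("203.0.113.7")
    print(nat.outbound("192.168.1.20", 51515))  # ('203.0.113.7', 40000)
    print(nat.inbound(40000))                   # ('192.168.1.20', 51515)

The forwarding decision is untouched either way; the table is pure bookkeeping that a globally addressed host simply wouldn’t need.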
In the real world, addresses are an abstraction that provides the knowledge needed to move something from point A to point B. We could use coordinates, or refer to the exact office the recipient sits in, but we don’t. Actually, we usually try to keep it at a fairly high level of abstraction.
The analogy is broken, because in the real world, we don’t want extremely exact addressing and transport without middlemen. We want abstract addresses, with transport routing partially to fully decoupled from the addressing scheme. GP provides a nice argument for IPv4.
I know how NAT works, but we are working within the constraints of a very broken analogy here. Also yes, internal logistics can and will be the harbinger of unnecessary bureaucracy, especially when implemented correctly.
> IPv6 is too complex, error-prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after its introduction.
Which is purely down to people not testing things before releasing them; the support is there, but there are layers of unnecessary stuff in the way. For example, I had an old ISP-provided router that ran Linux, but the management UI was only ever tested against v4 networks, so none of the v6 stuff was actually hooked up correctly.
Support in desktops and mobile devices is effectively 100%, and even in embedded hardware there’s often full support, just not enabled correctly or tested.
Lustre 2.16 was released recently, so in a year or so you may actually be able to run commercially supported Lustre with IPv6 support. Yay!
After that, it’s only a matter of time before it’s finally possible to start testing supercomputers with IPv6! (And finally building a production system with IPv6 a few more years after that, when all the bugs have been squashed)
Look at the Top500 list. Fucking everyone runs Lustre somewhere, and usually old versions. The US strategic nuclear weapons research is practically all on Lustre. My guess is most weather forecasting globally runs on Lustre. (Oh, and a shitton of AI of course.)
Up until now, you were stuck with mounting your filesystem over IPv4 (well, kinda IPv4 over RDMA, ish). If you want commercial support for your hundreds of petabytes (you do), you still can’t migrate. And this isn’t a small indie project without testers; it’s commercially supported, with billions in revenue, underpinning compute hardware worth even more.
My point with this rambling is that an open source project this widely deployed, this depended upon and this well funded still failed to roll out IPv6 support until now. The long tail of migrating the world to IPv6 hasn’t even begun; we are still in the early days. Soon someone will start looking at the widely deployed, depended-upon and badly funded stuff.
And maybe, if IPv6 didn’t try to change a bunch of extra stuff, we’d be further along. (Though, in the specific case of Lustre, I’ll gladly accuse DDN and Whamcloud of being incompetent…)
I mean, yeah, there’s extra stuff layered on top of the underlying protocols that is badly designed. Docker was built with a hard dependency on IPv4, and so was the Dat protocol. If these things had been designed properly from the start, we wouldn’t be having these issues.
Apple was smart here: they mandate that iOS apps must work on single-stack, IPv6-only networks, and they perform functional testing of that as part of the App Store review process. Devs can’t get away with pretending it’s not necessary and not wiring up support for it.
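The practical pattern that requirement pushes you towards is staying address-family agnostic: resolve a hostname and use whatever the network hands back (AAAA records on an IPv6-only/NAT64 network, A records elsewhere) instead of hardcoding IPv4 literals. A minimal sketch of that idea in Python, with a placeholder host; the same shape applies in whatever networking API an iOS app actually uses:

    import socket

    def connect(host, port):
        # Try every address getaddrinfo returns, in order, regardless of family.
        last_err = None
        for family, socktype, proto, _canon, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            try:
                s.connect(addr)
                return s
            except OSError as err:
                s.close()
                last_err = err
        raise last_err or OSError("no usable address")

    # conn = connect("example.com", 443)  # placeholder host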