It would be a good thing, if it caused anything to change. It obviously won't. As if a single person reading this post wasn't aware that the Internet is centralized, and couldn't specifically name a few sources of centralization (Cloudflare, AWS, Gmail, GitHub). As if this were the first time it happened. As if after the last time AWS failed (or the one before that, or the one before…) anybody stopped using AWS. As if anybody could viably stop using them.
> It would be a good thing, if it caused anything to change. It obviously won't.
I agree wholeheartedly. The only change will be internal to these organizations (e.g. Cloudflare, AWS): improvements will be made to the relevant systems, and some internal teams will also audit for similar behavior, add tests, and fix some bugs.
However, nothing external will change. The cycle of pretending you are going to implement multi-region fades after a week, and each company goes on leveraging all these services to the Nth degree, waiting for the next outage.
I'm not advocating that organizations should or could do much; it's all pros and cons. But the collective blast radius is still impressive.
The root cause is customers refusing to punish this downtime.
Check out how hard customers punish blackouts from the grid, both via their wallets and via voting and government. It's why grids are now more reliable.
So unless the backbone infrastructure gets the same flak, nothing is going to change. After all, any change is expensive, and the cost of that change needs to be worth it.
Downtimes happen one way or another. The upside of using Cloudflare is that bringing things back online is their problem and not mine like when I self-host. :]
Their infrastructure went down for a pretty good reason (let the one who has never caused that kind of error cast the first stone) and was brought back within a reasonable time.
Same with the big Crowdstrike fail of 2024. Especially when everyone kept repeating the laughable statement that these guys have their shit in order, so it couldn't possibly be a simple fuckup on their end. Guess what, they don't, and it was. And nobody has realized the importance of diversity for resilience, so all the major stuff is still running on Windows and using Crowdstrike.
The problem is far more nuanced than the internet simply becoming too centralised.
I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
FWIW I love Cloudflare’s products and make use of a large amount of them, but I can’t advocate for using them in my professional job since we actually require distributed infrastructure that won’t fail globally in random ways we can’t control.
> and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
Is anyone else as confused as I am about how common anti-openness and anti-freedom comments are becoming on HN? I don’t even understand what this comment wants: Banning VPNs? Walling off the rest of the world from US internet? Strict government identity and citizenship verification of people allowed to use the internet?
It’s weird to see these comments get traction after growing up in an internet where tech comments were relentlessly pro freedom and openness on the web. Now it seems like every day I open HN and there are calls to lock things down, shut down websites, institute age (and therefore identity) verification requirements. It’s all so foreign and it feels like the vibe shift happened overnight.
> Is anyone else as confused as I am about how common anti-openness and anti-freedom comments are becoming on HN?
In this specific case I don't think it's about being anti-open? It's that a business with a physical presence in only one country, selling a service that is only physically accessible inside that country... doesn't... have any need for selling compressed air to someone who isn't like 15 minutes away from one of their gas stations?
If we're being charitable to GP, that's my read at least.
If it were a digital services company, sure. Meatspace in only one region, though, is a different thing?
> I want to host my gas station network’s air machine infrastructure, and I only want people in the US to be able to access it. That simple task is literally impossible with what we have allowed the internet to become.
That task was never simple and is unrelated to Cloudflare or AWS. The internet at a fundamental level only knows where the next hop is, not where the source or destination is. And even if it did, it would only know where the machine is, not where the person writing the code that runs on the machine is.
Client-side SSL certificates with embedded user-account identification are trivial, and work well for publicly exposed systems where IPsec or dynamic frame sizes are problematic (corporate networks often mangle traffic).
Accordingly, connections from unauthorized users are effectively restricted, but are also not necessarily pigeonholed to a single point of failure.
https://www.rabbitmq.com/docs/ssl
Best of luck =3
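A minimal sketch of the client-certificate idea using Python's standard `ssl` module; the helper name and file-path parameters are placeholders for your own PKI, and a real deployment would pass actual certificate files:

```python
import ssl

def mtls_server_context(client_ca_file=None, cert_file=None, key_file=None):
    """Build a TLS server context that rejects any client that cannot
    present a certificate signed by our private CA (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)          # server's own identity
    if client_ca_file:
        ctx.load_verify_locations(cafile=client_ca_file)  # CA that signs client certs
    # The handshake itself now fails for clients without a valid certificate,
    # so unauthorized users never reach the application layer.
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Issuing a certificate per site or per office then amounts to running a small internal CA, and revoking a certificate replaces IP-allowlist maintenance.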
Literally impossible? On the contrary: geofencing is easy. I block all kinds of nefarious countries on my firewall, and I don't miss them (no loss not being able to connect to/from a mafia state like Russia). Now, if I were to block FAMAG... or Cloudflare...
Yes, literally impossible. The barrier to entry for anyone on the internet to create a proxy or VPN to bypass your geofencing is significantly lower than your cost to prevent them.
I don’t even understand where this line of reasoning is going. Did you want a separate network blocked off from the world? A ban on VPNs? What are we supposed to believe could have been disallowed to make this happen?
not a sysadmin here. why wouldn't this be behind a VPN or some kind of whitelist where only confirmed IPs from the offices / gas stations have access to the infrastructure?
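The allowlist idea in that question can be sketched with Python's stdlib `ipaddress` module; the CIDR blocks below are documentation ranges standing in for real office/gas-station addresses:

```python
import ipaddress

# Placeholder CIDR blocks for the confirmed offices / gas stations.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(addr: str) -> bool:
    """Admit a connection only if its source IP sits in a known network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ALLOWED_NETWORKS)
```

Unlike geofencing, this fails closed: an unknown proxy or VPN exit node is simply not on the list, so there is nothing to bypass.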
Spot on article, but without a call to action. What can we do to combat the migration of society to a centralized corpro-government intertwined entity with no regard for unprofitable privacy or individualism?
Individuals are unlikely to be able to do something about the centralization problem except vote for politicians that want to implement countermeasures. I don’t know of any politicians (with a chance to win anything) that have that on their agenda.
That’s called antitrust, and is absolutely a cause you can vote for. Some of the Biden administration’s biggest achievements were in antitrust, and the head of the FTC for Biden has joined Mamdani’s transition team.
Even if you learn to host, many of the other services you rely on are going to depend on those centralised platforms. If you are thinking of hosting every single thing on your own, it is going to be more work than you can even imagine, and definitely super hard to organise as well.
If you host, you are running on my cPanel software. 70% of the internet is doing that. Also a kind of centralized point of failure, but I haven't heard of any bugs in the last 14 years.
Have you tried that? I gave up on hosting my own email server seven or eight years ago, after it became clear that there would be an endless fight with various entities to accept my mail. Hosting a webserver without the expectation that you'll need some high-powered DDoS defense seems naive in the current day, and good luck doing that with a server or two.
I had never hosted my own email before. It took me roughly a day to set it up on a vanilla FreeBSD install running on Vultr’s free tier plan, and it has been running flawlessly for nearly a year. I did not use AI at all, just the FreeBSD, Postfix, and Dovecot handbooks. I do have a fair bit of Linux admin and development experience, but all in all this has been a weirdly painless experience.
If you don’t love this approach, Mail-in-a-box works incredibly well even if the author of all the Python code behind it insists on using tabs instead of spaces :)
And you can always grab a really good deal from a small hosting company, likely with decades of experience in what they do, via LowEndBox/LowEndTalk. The deal would likely blow AWS/DO/Vultr/Google Cloud out of the water in terms of value. I have been snagging deals from there for ages, and I have lost a virtual host twice: once with a new company that turned out to be shady, and once when I rented a VPS in Cairo and a revolution broke out. They brought everything back up after a couple of months.
For example, I just bought lifetime email hosting with 250GB of storage, email, video, a full office suite, calendar, contacts, and file storage for $75. Configuration here is down to setting the DNS records they give you and adding users. The company behind it has been around for ages and is one of the best regarded in the LET community.
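For self-hosted or externally hosted mail alike, "setting the DNS records" usually comes down to three TXT records for deliverability; a rough sketch with a placeholder domain and DKIM selector (your provider or your own DKIM setup supplies the actual public key):

```dns
example.com.                  TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

Getting SPF, DKIM, and DMARC right is most of the battle against the endless fight to get your mail accepted.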
It's not insurmountable to set up initially. And when you get email denied from whatever org (your lawyer, your mom, some random business, whatever), each individual one isn't insurmountable to fix. It does get old after a while.
It also depends on how much you are emailing, and whom. If it's always the same set of known entities, you might be totally fine with self-hosting. Someone who's regularly emailing a lot of new people or businesses might incur a lot of overhead; at that point their time is worth more than a Fastmail or Protonmail subscription or whatever.
I wonder what life without Cloudflare would look like? What practices would fill the gaps if a company didn't, or wasn't allowed to, satisfy the concerns that Cloudflare addresses?
"Embrace outages, and build redundancy." — It feels like back in the day this was championed pretty hard especially by places like Netflix (Chaos Monkey) but as downtime has become more expected it seems we are sliding backwards. I have a tendency to rely too much on feelings so I'm sure someone could point me to some data that proves otherwise but for now that's my read on things. Personally, I've been going a lot more in on self-hosting lots of things I used to just mindlessly leave on the cloud.
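The Chaos Monkey idea above is easy to reproduce at small scale; here is a hedged Python sketch, with made-up function names, that injects faults into a dependency call to prove the fallback path actually works:

```python
import random

def chaotic(failure_rate, exc=ConnectionError):
    """Decorator that makes a dependency call fail at random,
    forcing callers to exercise their redundancy in testing."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise exc("injected fault")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@chaotic(failure_rate=1.0)  # always fail, to prove the fallback is real
def fetch_config():
    return {"source": "remote"}

def load_config():
    try:
        return fetch_config()
    except ConnectionError:
        return {"source": "cached"}  # redundancy: a local fallback copy
```

Running this in CI with a nonzero failure rate is a cheap way to keep the "embrace outages" discipline honest.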
Centralization has absolutely nothing whatsoever to do with the problems of society and technology. It's a particular obsession of software nerds, who, for unknown reasons, seem to attribute some kind of magical thinking to the word. They know it's "bad", and that decentralization is "good". But they never can seem to be able to describe any more detail than this.
You know what else is centralized? A federal government. A car's ECU. Your smartphone. Your friggin' toaster. And who cares, right? Well, nobody does, because it doesn't matter. Centralization in and of itself is not bad. It's just a generic word that is virtually meaningless on its own. It only has value when used in specific ways to distinguish a specific type of thing. It's like saying "left" vs "right". Is "left" bad and "right" good?
But for some bizarre reason (that should really be studied), nerds have latched onto these generic concepts when it comes to anything related to software. They seem obsessed with it, while simultaneously not being able to describe in detail the specific harms they are perceiving due to these terms. Because they don't really know what it means.
All they really are trying to say is "Big company bad!" But they don't know how to say that without sounding like cavemen, so they say "Centralization bad!" because it sounds more scientific.
Yes, big company sometimes bad. But many thing in world from big company; not all bad. Many big company things do good for nerd. Mountain dew, antibiotics, orthopetic shoes. Not all centralization bad. Phrase "it depends" not make feel good blog title, but it right.
Now just wait til every country on earth really does replace most of its employees with ChatGPT... and then OpenAI's data center goes offline with a fiber cut or something. All work everywhere stops. Cloudflare outage is nothing compared to that.
> It puzzles me why the hell they build such a function in the first place.
One reason is similar to why most programming languages don't return an Option<T> when indexing into an array/vector/list/etc. There are always tradeoffs to make, especially when your strangeness budget is going to other things.
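The tradeoff in that comparison can be made concrete; in Python terms (an analogy, not the Rust API), plain indexing is the "unwrap" of sequence access:

```python
def first_or_none(xs):
    """Option-style access: return None instead of raising,
    roughly what an Option<T>-returning index would feel like."""
    return xs[0] if xs else None

nums = [1, 2, 3]

# Plain indexing asserts the element exists; it raises IndexError
# (the moral equivalent of an unwrap panic) when that assumption fails.
assert nums[0] == 1
try:
    _ = [][0]
except IndexError:
    pass  # most languages choose raising here over returning an Option

# The Option-style alternative pushes an explicit check onto every caller.
assert first_or_none([]) is None
```

Making every access return an Option is safer but noisier; that's the "strangeness budget" being spent elsewhere.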
My old employer used Azure. It irritated me to no end when they said we must rename all our resources to match the convention of naming everything in US East with an "eu-" prefix (for Eastern United States, I guess).
I don't like this argument, since you can apply it to Google, Microsoft, AWS, Facebook, etc.
The tech world is dominated by US companies, and what are the alternatives to most of these services? There are a lot fewer than you might think, and even then you must make compromises in certain areas.
> They [outages] can force redundancy and resilience into systems.
They won’t until either the monetary pain of outages becomes greater than the inefficiency of holding on to more systems to support that redundancy, or government steps in with clear regulation forcing their hand. And I’m not sure about the latter. So I’m not holding my breath about anything changing. It will continue to be a circus of doing everything on a shoestring, because the line must go up every quarter or a shareholder doesn’t get their wings.
>It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war
This is not true. The internet was never designed to withstand nuclear war.
Perhaps. Perhaps not. But it will survive it. It will survive a complete nuclear winter. It's too useful to die, and will be one of the first things to be fixed after global annihilation.
But the Internet is not hosting companies or cloud providers. The Internet does not care if they don't build their systems resiliently enough and let the SPOFs creep in. The Internet does its thing and the packets keep flowing. Maybe BGP and DNS could use some additional armoring, but there are ways around both of them in case of actual emergency.
ARPANET was literally invented during the Cold War for the specific and explicit purpose of resilient networked communications for government and military in the event that major networking hubs went offline due to one or more successful nuclear attacks against the United States.
Your link is talking about work Baran did before ARPANET was created. The timeline doesn't back your point. And when ARPANET was created after Baran's work at RAND:
>Wired: The myth of the Arpanet – which still persists – is that it was developed to withstand nuclear strikes. That's wrong, isn't it?
>Paul Baran: Yes. Bob Taylor had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.
"A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage by a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability."
The stated research goals are not necessarily the same as the strategic funding motivations. The DoD clearly recognized packet-switching's survivability and dynamic routing potential when the US Air Force funded the invention of networked packet switching by Paul Baran six years earlier, in 1960, for which the explicit purpose was "nuclear-survivable military communications".
There is zero reason to believe ARPA would've funded the work were it not for internal military recognition of the utility of the underlying technology.
To assume that the project lead was told EVERY motivation of the top secret military intelligence committee that was responsible for 100% of the funding of the project takes either a special kind of naïveté or complete ignorance of compartmentalization practices within military R&D and procurement practices.
ARPANET would never have been were it not for ARPA funding, and ARPA never would've funded it were it not for the existence of packet-switched networking, which itself was invented and funded, again, six years before Bob Taylor even entered the picture, for the SOLE purpose of "nuclear-survivable military communications".
Consider the following sequence of events:
1. US Air Force desires nuclear-survivable military communications, funds Paul Baran's research at RAND
2. Baran proves packet-switching is conceptually viable for nuclear-survivable communications
3. His specific implementation doesn't meet rigorous Air Force deployment standards (their implementation partner, AT&T, refuses, which is entirely predictable for what was then a complex new technology that not a single AT&T engineer understood or had ever interacted with during the course of their education), but the concept is now proven and documented
4. ARPA sees the strategic potential of packet-switched networks for the explicit and sole purpose of nuclear-survivable communications, and decides to fund a more robust development effort
5. They use academic resource-sharing as the development/testing environment (lower stakes, work out the kinks, get future engineers conceptually familiar with the underlying technology paradigms)
6. Researchers, including Bob Taylor, genuinely focus on resource sharing because that's what they're told their actual job is, even though that's not actually the true purpose of their work
7. Once mature, the technology gets deployed for its originally intended strategic purposes (MILNET split-off in 1983)
Under this timeline, the sole true reason for ARPA's funding of ARPANET is nuclear-survivable military communication. Bob Taylor, being the military's R&D pawn, is never told that (standard compartmentalization practice). He can credibly and honestly state that he was tasked with implementing resource sharing across academic networks, which is true, but that was never the actual underlying motivation for funding his research.
...and the myth of "ARPANET wasn't created for nuclear survivability" is born.
Is a little downtime such a bad thing? Trying to avoid some bumps and bruises in your business has diminishing returns.
What's "a little downtime" to you might be work ruined and day wasted for someone else.
Depends on the business.
With the rise in unfriendly bots on the internet as well as DDoS botnets reaching 15 Tbps, I don’t think many people have much of a choice.
Learn how to host anything, today.
We could quibble about the premise.
My friend wasn't able to get an RTG during the outage. They had to use an ultrasound machine on his broken arm to see inside.
> My friend wasn't able to get an RTG during the outage.
What is RTG?
X-ray, in some languages (like Polish) the acronym comes from https://en.wikipedia.org/wiki/Roentgen_(unit)
X-ray
It's a tragedy of the commons. Even if you don't use Cloudflare, what does it matter if no one can pay for your products?
The thing I learned from the incident is that Rust offers an `unwrap` function. It puzzles me why the hell they build such a function in the first place.
How many people are still on us-east-1?
A total clown show
The outage wasn’t a good thing, since nothing is changing as a result. (How many outages has Cloudflare had?)
I don't like this argument since you can applied this argument to google,microsot,aws,facebook etc
The tech world is dominated by US companies, and what are the alternatives to most of these services? There are a lot fewer than you might think, and even then you must make compromises in certain areas.
> They [outages] can force redundancy and resilience into systems.
They won’t until either the monetary pain of outages becomes greater than the inefficiency of maintaining the extra systems that redundancy requires, or government steps in with clear regulation forcing their hand. And I’m not sure about the latter. So I’m not holding my breath about anything changing. It will continue to be a circus of doing everything on a shoestring, because the line must go up every quarter or a shareholder doesn’t get their wings.
That's ok though, not every website needs 5 9s
>It's ironic because the internet was actually designed for decentralisation, a system that governments could use to coordinate their response in the event of nuclear war
This is not true. The internet was never designed to withstand nuclear war.
Arpanet absolutely was designed to be a physically resilient network which could survive the loss of multiple physical switch locations.
Perhaps. Perhaps not. But it will survive it. It will survive a complete nuclear winter. It's too useful to die, and will be one of the first things to be fixed after global annihilation.
But the Internet is not hosting companies or cloud providers. The Internet does not care if they don't build their systems resiliently enough and let the SPOFs creep in. The Internet does its thing and the packets keep flowing. Maybe BGP and DNS could use some additional armoring, but there are ways around both of them in case of actual emergency.
ARPANET was literally invented during the cold war for the specific and explicit purpose of networked communications resilience for government and military in the event major networking hubs went offline due to one or more successful nuclear attacks against the United States
It literally wasn't. It's an urban myth.
>Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers.
>The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim.
https://en.wikipedia.org/wiki/ARPANET
Per interviews, the initial impetus wasn't to withstand a nuclear attack - but after it was first set up, that was most certainly a major part of the thought process in design. https://web.archive.org/web/20151104224529/https://www.wired...
>but after it was first set up
Your link is talking about work Baran did before ARPANET was created. The timeline doesn't back your point. And when ARPANET was created after Baran's work with Rand:
>Wired: The myth of the Arpanet – which still persists – is that it was developed to withstand nuclear strikes. That's wrong, isn't it?
>Paul Baran: Yes. Bob Taylor1 had a couple of computer terminals speaking to different machines, and his idea was to have some way of having a terminal speak to any of them and have a network. That's really the origin of the Arpanet. The method used to connect things together was an open issue for a time.
Read the whole article. And peruse the oral history here: https://ethw.org/Oral-History:Paul_Baran - the genesis was most definitely related to the cold war.
"A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage by a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability."
The stated research goals are not necessarily the same as the strategic funding motivations. The DoD clearly recognized packet-switching's survivability and dynamic routing potential when the US Air Force funded the invention of networked packet switching by Paul Baran six years earlier, in 1960, for which the explicit purpose was "nuclear-survivable military communications".
There is zero reason to believe ARPA would've funded the work were it not for internal military recognition of the utility of the underlying technology.
To assume that the project lead was told EVERY motivation of the top secret military intelligence committee that was responsible for 100% of the funding of the project takes either a special kind of naïveté or complete ignorance of compartmentalization practices within military R&D and procurement practices.
ARPANET would never have been were it not for ARPA funding, and ARPA never would've funded it were it not for the existence of packet-switched networking, which itself was invented and funded, again, six years before Bob Taylor even entered the picture, for the SOLE purpose of "nuclear-survivable military communications".
Consider the following sequence of events:
1. US Air Force desires nuclear-survivable military communications, funds Paul Baran's research at RAND
2. Baran proves packet-switching is conceptually viable for nuclear-survivable communications
3. His specific implementation doesn't meet rigorous Air Force deployment standards (their implementation partner, AT&T, refuses - which is entirely understandable for what was then a complex new technology that not a single AT&T engineer understood or had ever interacted with during the course of their education), but the concept is now proven and documented
4. ARPA sees the strategic potential of packet-switched networks for the explicit and sole purpose of nuclear-survivable communications, and decides to fund a more robust development effort
5. They use academic resource-sharing as the development/testing environment (lower stakes, work out the kinks, get future engineers conceptually familiar with the underlying technology paradigms)
6. Researchers, including Bob Taylor, genuinely focus on resource sharing because that's what they're told their actual job is, even though that's not actually the true purpose of their work
7. Once mature, the technology gets deployed for its originally intended strategic purposes (MILNET split-off in 1983)
Under this timeline, the sole true reason for ARPA's funding of ARPANET is nuclear-survivable military communication; Bob Taylor, being the military's R&D pawn, is never told that (standard compartmentalization practice). Bob Taylor can credibly and honestly state that he was tasked with implementing resource sharing across academic networks, which is true, but that was never the actual underlying motivation for funding his research.
...and the myth of "ARPANET wasn't created for nuclear survivability" is born.