Daniel Karrenberg, co-author of RFC1918, said this 2017-10-06 on the NANOG mailing list:
> On 05/10/2017 07:40, Jay R. Ashworth wrote:
> > Does anyone have a pointer to an *authoritative* source on why
> >
> > 10/8
> > 172.16/12 and
> > 192.168/16
> >
> > were the ranges chosen to enshrine in the RFC? ...
>
> The RFC explains the reason why we chose three ranges from "Class A,B &
> C" respectively: CIDR had been specified but had not been widely
> implemented. There was a significant amount of equipment out there that
> still was "classful".
>
> As far as I recall the choice of the particular ranges were as follows:
>
> 10/8: the ARPANET had just been turned off. One of us suggested it and
> Jon considered this a good re-use of this "historical" address block. We
> also suspected that "net 10" might have been hard coded in some places,
> so re-using it for private address space rather than in inter-AS routing
> might have the slight advantage of keeping such silliness local.
>
> 172.16/12: the lowest unallocated /12 in class B space.
>
> 192.168/16: the lowest unallocated /16 in class C block 192/8.
>
> In summary: IANA allocated this space just as it would have for any
> other purpose. As the IANA, Jon was very consistent unless there was a
> really good reason to be creative.
>
> Daniel (co-author of RFC1918)
It wasn't until the company Network Translation came along with the PIX that anybody even considered using private IP addresses as a general firewall strategy, with NAT translating the private IPs. And then it took years and years to become popular; by that point the company had long since been bought by Cisco.
I don't think Cisco IOS even had NAT until something like 10.2, when it was a premium license package.
This is probably apocryphal, and I'm probably getting the details wrong anyway, but tangentially related to this, when I worked for a small network security firm (later purchased by Cisco, as most were), we had a customer that used, I'm told, the IP ranges typically seen in North Korea as their internal network. They TOLD us they did it because the addresses wouldn't conflict with anything they cared about, and no one had told them about 1918 + NAT, which I find dubious.
I don't think this does anything to explain why 192.168/16 was chosen specifically. Three netblocks (10/8, 172.16/12, and 192.168/16) were selected from the class A, B, and C address spaces to accommodate private networks of various sizes. Class C addresses by definition have the two most significant bits of their first octet set and the third bit clear (i.e., first octets 192-223).
192 in the first octet starts the class C space, but 10 and 172 do not have the same relationship in classes A and B.
Yes, you are right. I researched a bit and there are other reserved blocks next to 168 that obviously don't have a nice pattern, so the 101010 bit pattern (168 is 10101000 in binary) is just a coincidence.
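The classful split discussed above can be checked with Python's standard `ipaddress` module; a small sketch (the class boundaries follow from the leading bits of the first octet: 0xxx = A, 10xx = B, 110x = C):

```python
import ipaddress

def address_class(first_octet: int) -> str:
    """Return the classful class implied by the leading bits of the first octet."""
    if first_octet < 128:   # 0xxxxxxx -> 0-127
        return "A"
    if first_octet < 192:   # 10xxxxxx -> 128-191
        return "B"
    if first_octet < 224:   # 110xxxxx -> 192-223
        return "C"
    return "D/E"            # multicast / reserved

# The three RFC 1918 blocks sit in class A, B, and C space respectively,
# and the stdlib agrees they are private.
for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    first_octet = int(net.split(".")[0])
    block = ipaddress.ip_network(net)
    print(net, address_class(first_octet), block.is_private)

# And 168 really is 0b10101000 -- the "101010" pattern is a coincidence.
print(bin(168))
```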
Is it? What section do you mean? I don't see anything in there about private networks or 192.168.0.0/16 (in CIDR notation, which didn't exist at the time).
User bmacho cites this Superuser question [1] in a reply to a downvoted comment at the bottom of this thread. It’s much more illuminating than the OP emails; Michael Hampton’s answer in particular is amazing. I had never heard of Jon Postel before.
Mm. I’m an older millennial, so solidly in the Web 1.0 generation, but never had the chance to use the internet before the web took off. I missed BBSs too, which were big where I’m from (probably bigger than the pre-Web internet, outside universities at least). I was fourteen when Postel died in 1998. My earliest memories of internet use are probably from ’96 or so, using library or school computers after classes.
While I've got some eyeballs on the subject, I'm tiring of mistyping this across my local network devices. How many of you folks alias this, and in what way? /etc/hosts works for my *nix machines, but not my phones, I think?
I'm also tired of remembering ports, if there's a way of mapping those. Should I run a local proxy?
> I'm also tired of remembering ports, if there's a way of mapping those. Should I run a local proxy?
If we're talking web-services - absolutely. I put Caddy in front of everything just to be able to simply use domains. You can also use it to map ports to either standard or more convenient ones if that suffices. Configuring reverse-proxy with Caddy [0] takes just a few lines:
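For illustration, a minimal sketch of such a Caddyfile (the hostname and upstream port here are made-up examples, not anything from the original setup):

```Caddyfile
# Map a friendly local name to a service running on another port.
# grafana.home.lan and 127.0.0.1:3000 are hypothetical; the http://
# prefix keeps Caddy from attempting automatic HTTPS for this site.
http://grafana.home.lan {
	reverse_proxy 127.0.0.1:3000
}
```

With a local DNS entry pointing grafana.home.lan at the proxy host, the service is then reachable by name on the standard port.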
After setting up a reverse-proxy or two you might want to expand your infrastructure with the following to neaten things up even more:
- DNS-server: most routers can be that; another easy option would be PiHole.
- DHCP-server: same as above (PiHole does DHCP too).
- Reverse proxy(ies): you can have either just one for the entire network, or a number closer to the number of services if you choose to have HTTPS between everything. I wouldn't bother with Nginx for that unless there is a strong incentive.
- ACME-server: provides the certs for the local reverse-proxies if you choose to have HTTPS between everything. Caddy can also act as a very easy to set up ACME-server [1].
If you have all that set up, you can access all the local services securely and via readable URLs. Given all the services get their certs from the ACME-server, the consumers only need to trust (install) one root cert in order to consider all the local connections secure.
Might seem like a lot at first, but the configuration is fairly straightforward and I found it's worth the effort.
Theoretically, SRV records can be set in DNS to solve the port issue; realistically, nothing uses them, so you are probably out of luck there. The way SRV records work is that you ask the network "where is the foo service?" (SRV _foo._tcp.my.network.) and DNS says "it's at these machines and ports" (SRV 1 (priority) 1 (weight) 9980 (port) misc.my.network.).
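In zone-file syntax, the record described above would look something like this (using the same illustrative names and port from the comment):

```
; "Where is the foo TCP service on my.network?"
;                            pri weight port  target
_foo._tcp.my.network.  3600  IN  SRV  1  1    9980  misc.my.network.
```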
My personal low-priority project is to put MAC addresses in DNS; I am about as far as "I could fit them in an AAAA record."
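A 48-bit MAC does indeed fit in a 128-bit AAAA record. One hypothetical encoding (just a sketch of the idea, not an established convention) is to pack it into the low bits of an IPv6 address:

```python
import ipaddress

def mac_to_aaaa(mac: str) -> str:
    """Pack a 48-bit MAC address into the low bits of a 128-bit IPv6 address."""
    raw = int(mac.replace(":", ""), 16)      # MAC as a 48-bit integer
    return str(ipaddress.IPv6Address(raw))   # zero-extended to 128 bits

def aaaa_to_mac(addr: str) -> str:
    """Recover the MAC from such an address."""
    raw = int(ipaddress.IPv6Address(addr))
    # Walk the low six bytes from most to least significant.
    return ":".join(f"{(raw >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

print(mac_to_aaaa("00:1a:2b:3c:4d:5e"))  # the MAC here is an arbitrary example
```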
As for specific software recommendations, I am probably not a good source. I run a couple of small OpenBSD machines (APU2s) that serve most of my home networking needs. But I am a sysadmin by trade; while I like the setup, I am not sure how enjoyable others would find it.
Local proxies are nice for these kinds of things, but most phones are running some kind of mDNS service so try setting up avahi/openmDNS to advertise services.
I just stick all my DNS records in a normal DNS server. In my case I'm terraforming some Route53 zones, so I have a subdomain off a real domain I own that I use for LAN gear, and it all has real DNS.
For ports, anything that can just be run on 443 on its own VM, I do that. For things that either can’t be made to run on 443, or can’t do their own TLS, etc, I have a VM running nginx that handles certificates and reverse proxying.
mDNS works well for names on your local network, you can integrate it with your dhcp server, works on hosts and phones. I don't have a good answer for ports.
mDNS is like the LLM of DNS: sometimes, for some audiences, it works well, but when it doesn't work you're SoL trying to fix it other than "have you tried $(sudo killall -INT mDNSResponderHelper)?"
I'm not aware of any DHCP change needed for that, since to the very best of my knowledge mDNS is a broadcast protocol. Involving DHCP would mean pointing it at the copy of dnsmasq running on your router, so that the hostnames devices present over DHCP are then resolved by dnsmasq, no mDNS required.
That whole /8 is reserved for loopback, but sometimes (usually?) only 127.0.0.1 is implemented as a loopback. If you know that's true of your equipment, you could use the rest of that space for local addresses instead of 192.168/16, 172.16/12 and/or 10/8.
On my (fedora) system I can ping 127.anything and the host responds. I think in practice it is indeed implemented. I haven't used windows/macos in a very long time but I think the same applies. (Also in fedora by default systemd-resolved binds to 127.0.0.53)
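A quick way to check whether the rest of 127/8 is usable on a given machine is to try binding a socket to one of those addresses. This sketch succeeds on Linux, where the whole /8 is routed to the loopback interface; other systems may refuse:

```python
import socket

# Try to bind a TCP socket to 127.0.0.2 -- not just 127.0.0.1.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    try:
        s.bind(("127.0.0.2", 0))   # port 0 = let the OS pick one
        print("127.0.0.2 is usable:", s.getsockname())
    except OSError as e:
        print("127.0.0.2 not available on this host:", e)
```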
That's how I think it should be, but Paul Graham disagrees (or at least he did in 2008 and I haven't seen anything later about him changing his mind).
In [1] he wrote:
> I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness
The problem with this idea would be all the existing software, hardware and infrastructure out there. You would either need to make it an alias, which wouldn't really change anything, or you would need to update everything everyone everywhere has, which is essentially the IPv6 migration and we all know how that is going.
How would you express that in an IPv4 header? These address ranges serve a real purpose.
edit: OP: just like the downvote button is not for disagreement, the delete button is not for karma management. Not sure why you would respond to my post here and then immediately delete it.
Isn’t there a max of -4 per comment anyway? I’ll admit I get upset when people downvote me for my opinion, but I don’t think deleting the comment is ever really worthwhile.
> It also disruptive to anyone who comes here and sees replies to a deleted comment and can't see the context.
Amen. I try to quote what I'm referring to for just this reason. I have been searching for some sort of browser plugin that would do it for me like many mail clients that were "newsgroup aware" of old, but alas none yet.
They needed private IP ranges that wouldn't conflict with the real internet. 192.168 was just sitting there unused, so they grabbed it along with 10.x.x.x and 172.16-31.x.x.
It isn't an article, but a mailing list post, and the post starts out with:
This is a fuzzy recollection of something I believe I read, which might well be inaccurate, and for which I can find no corroboration. I mention it solely because it might spark memories from someone who actually knows:
Spoiler: it sparks one memory from one person, who winds up being mistaken.
Offering an alternative hypothesis seems reasonable given the content of the post.
Daniel Karrenberg, co-author of RFC1918, said this 2017-10-06 on the NANOG mailing list:
* https://web.archive.org/web/20190308152212/https://mailman.n...
And he said the same on Superuser the day after:
* https://superuser.com/a/1257080/38062
And I suppose 127/8 because it's the highest /7 or highest /8 without the MSB on?
The entire thread:
>>> This is a fuzzy recollection of something I believe I read, which might well be inaccurate, and for which I can find no corroboration. I mention it solely because it might spark memories from someone who actually knows:
>>> A company used 192.168.x.x example addresses in some early documentation. A number of people followed the manual literally when setting up their internal networks. As a result, it was already being used on a rather large number of private networks anyway, so it was selected when the RFC 1597 was adopted.
>> sun
> Wasn't 192.9.200.x Sun's example network?
of course you are correct. sorry. jet lag and not enough coffee.
---
So no answers.
I worked in the early 90s getting UK companies connected. The number of people who had copied Sun's (and HP's and others') addresses out of the docs was enormous. One of them was a very well known token ring network card vendor.
You should read https://datatracker.ietf.org/doc/html/rfc1627 for a path not travelled.
Not everyone thought this was a good idea, and I still maintain the alternative path would have led to a better internet than the one we have today.
As the authors themselves note, RFC 1597 was merely formalizing already widespread common practice. If the private ranges were not standardized, people would still have created private networks, just using some random squatted blocks. I cannot see that being a better outcome.
The optimist in me wants to claim that not assigning any range for local networks would have led to us running out of IPv4 addresses in the late 90s, leading to rapid adoption of IPv6, along with some minor benefits (merging two private networks would be trivial; far fewer NATs in the world, leading to better IP-based security and P2P connectivity).
The realist in me expects that everyone would have just used one of the ~13 /8 blocks assigned to the DoD.
The realist in me thinks that we'd probably have had earlier adoption of V6 but the net good from that is nil compared to the headaches.
V6 is only good when V4 is exhausted, so it's tautological to call it a benefit of earlier exhaustion of V4, or am I missing something? I'm probably missing something.
I'm guessing the reason they think it would have been better is that right now the headaches come from us being in a weird limbo state, where we're kinda out of IPv4 addresses but also not really at the point where everything supports IPv6 out of necessity. If the "kinda" were more definitive, there would potentially have been enough of a forcing factor that everyone would have made sure to support IPv6, and the headaches would have been figured out.
Agreed.
Also, fun fact: the Google IPv6 tracker says we're about to reach 50%. Time to throw a party!
Can you please elaborate? How would such a minute change lead to "a better internet"?
I'm not the OP or author, but the argument against private network addresses is that such addresses break the Internet in some fundamental ways. Before I elaborate on the argument, I want to say that I have mixed feelings on the topic myself.
Let's start with a simple assertion: Every computer on the Internet has an Internet address.
If it has an Internet Address, it should be able to send packets to any computer on the Internet, and any other computer on the Internet should be able to send packets to it.
Private networks break this assumption. Now we have machines which can send packets out, but can't receive packets, not without either making firewall rule exceptions or else doing other firewall tricks to try to make it work. Even then, about 10-25% of the time, it doesn't work.
But it goes beyond firewall rules... with IP addresses being tied to a device, every ISP would be giving every customer a block of addresses, both commercial and residential customers.
We'd also have seen fast adoption of IPv6 when IPv4 ran out. Instead we seem to be stuck in perpetual limbo.
On team anti-private networking addresses:
- Worse service from ISPs
- IPv4 still in use past when it should have been replaced
- Complex workarounds for overcoming firewalls
I'm sure we all know the benefits of private networks, so I don't need to reiterate it.
> But it goes beyond firewall rules
Honestly though... does it, all that much? Even in a world where NAT didn't exist and we all switched to IPv6, we'd still all be behind firewalls, as everyone on an IPv6 home network is today. Port forwarding would just be replaced by firewall exemptions.
Like on a philosophical level, I do wish we had a world where the end-to-end principle still held and all that, but I'm not actually sure what difference it would make, practically speaking. "Every device is reachable" didn't die because of IPv4 exhaustion or NAT, it died because of security, in reality most people don't actually want their devices to be reachable (by anyone).
> Every computer on the Internet has an Internet address
By "every computer", did you include every MCU that can run a TCP/IP stack?
> I'm sure we all know the benefits of private networks, so I don't need to reiterate it
That is I think the key. Private networks have sufficient benefit that most places will need one.
The computers and devices on our private network will fall into 3 groups: (1) those that should only communicate within our private network, (2) those that sometimes need to initiate communication with something outside our network but should otherwise have no outside contact, and (3) those that need to respond to communication initiated from something outside our network.
We could run our private network on something other than IP, but then dealing with cases #2 and #3 is likely going to be at least as complicated as the current private IP range approach.
We could use IP but not have private ranges. If we have actual assigned addresses that work from the outside for each device we are then going to have to do something at the router/firewall to keep unwanted outside traffic from reaching the #1 and #2 types of devices.
If we use IP but do not have assigned addresses for each device and did not have the private ranges I'd expect most places would just use someone else's assigned addresses, and use router/firewall rules to block them off from the outside. Most places can probably find someone else's IP range that they are sure contains nothing they will ever need to reach so should be safe to use (e.g., North Korea's ranges would probably work for most US companies). That covers #1, but for #2 and #3 we are going to need NAT.
I think nearly everyone would go for IP over using something other than IP. Nobody misses the days when the printer you wanted to buy only spoke AppleTalk and you were using DECnet.
At some point, when we are in the world where IP is what we have on both the internet and our private networks but we do not have IP ranges reserved for private networks, someone will notice that this would be a lot simpler if we did have such ranges. Routers can then default to blocking those ranges and using NAT to allow outgoing connections. Upstream routers can drop those ranges so even if we misconfigure ours it won't cause problems outside. Home routers can default to one of the private ranges so non-tech people trying to set up a simple home network don't have to deal with all this.
If for some reason IANA didn't step in and assign such ranges my guess is that ISPs would. They would take some range within their allocation, configure their routers to drop traffic using those address, and tell customers to use those on their private networks.
> every ISP would be giving every customer a block of addresses, both commercial and residential customers.
Or more likely, you would still receive only a handful of addresses and would have needed to be far more considerate about what you connect to your network, restricting the use of IP significantly. Stuff like IPX and AppleTalk would probably then have been more popular. The situation might have been more like what we had with POTS phones: residential houses generally had only one phone number for the whole house and you just had to share the line between all the family members.
The phone company would have been happy to sell you more phone lines. I knew people who had some.
But you're right that as dumb as it is, it's likely that ISPs would have charged per "device" (ie per IP address).
Before 1983 in the US, you could only rent a phone, not own one (at least not officially) and the phone company would charge a rental fee based on how many phones you had rented from them. Then, when people could buy their own phones, they still charged you per phone that you had connected! You could lie, but they charged you.
Like I said, I have mixed feelings about NATs, but you're right that the companies would have taken advantage of customers.
They worked around this with IPv6 by the fact that SLAAC exists and some devices insist on always using it. Your ISP has to give you at least 64 bits of address space or else some phones won't work on your network. And even if they only give you the bare minimum of 64 bits, you can subdivide it further without SLAAC if you know what you're doing.
Furthermore, the use of privacy addresses obfuscates how many devices you have.
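Subdividing a delegated /64 without SLAAC is straightforward prefix arithmetic; a sketch with the stdlib `ipaddress` module (the prefix is from the 2001:db8::/32 documentation range, not a real delegation):

```python
import ipaddress

# A hypothetical /64 delegated by the ISP.
delegated = ipaddress.ip_network("2001:db8:0:1::/64")

# Carve it into 256 /72s for separate segments. SLAAC won't work on
# these (it assumes a 64-bit interface identifier), but static or
# DHCPv6 addressing will.
subnets = list(delegated.subnets(prefixlen_diff=8))
print(len(subnets), subnets[0], subnets[-1])
```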
Related. Others?
What's the history behind 192.168.1.1? - https://news.ycombinator.com/item?id=17467203 - July 2018 (48 comments)
Weirdly enough, there are a few systems at my workplace which are in the 192.9.200.x subnet! They're only about 20 years old, though. We are actively looking to replace the entire system.
From another post on here:
> > Wasn't 192.9.200.x Sun's example network?
> of course you are correct. sorry. jet lag and not enough coffee.
I've done work for several municipalities and police departments in western Ohio and found 192.9.200.0/24 in several. They all had a common vendor who did work back in the 90s and was the source.
Most SMBs did not have IP addresses in 1994 when RFC 1597 was published, although the range was known. The well known companies did, however, and some of those have the older full class B assignments. It was common for those companies to use those public IP addresses internally, and some still do to this day, although RFC 1918 addresses were also in use.
Since NetWare was very popular in businesses and it was possible (and common) to use only the IPX protocol for endpoints, you could configure endpoints to use a host that had both an IPX and an IP address as the proxy, and not use an IP address on most endpoints. That was common because NetWare actually charged for DHCP and DNS add-ons. When Windows became more popular, endpoints likely started using RFC 1918 IP addresses around 1996.
> It was common for those companies to use those public IP addresses internally to this day
Yep, a desktop PC with its own IPv4 address. Back in the day, no firewall afaik.
Well, I'll try to summarize the answers and my experience.
In the beginning, the Internet used network classes because of hardware limitations (it later switched to CIDR address blocks). Even in the 1990s there was still very old hardware that could only handle classful addresses.
Because of how classes worked, some early very large organizations got far more addresses than they could ever use, and in a few cases such organizations even lost the rights to those addresses.
Some of those unlucky organizations were big whales, like IBM, AT&T/Bell, or Sun.
Eventually a solution was devised: designate a few sufficiently large networks as never-allocated, for use behind NAT (or on networks not connected to the Internet at all). Departments of big organizations could then run the TCP/IP stack on their internal networks, even with old hardware, without contacting the Internet registries for real Internet addresses.
192.168 was simply the first class C prefix that was unassigned at the time (or had just been released).
Later, the 172.16/12 network was added to the list of unassigned ranges.
Note that the CIDR RFC didn't come out until September 1993, so even brand-new network equipment in the mid-1990s was still very classful. And even then, knowledge of how to properly use /etc/netmasks in SunOS 4.x (or the equivalent, if some other network stack even had one) was very scarce.
In the mid 90's, SMBs connecting to the Internet would very typically have obtained a /24 from their ISP and put machines directly online: no firewalls, barely any proxy servers (although proxies were popular with some mid-sized customers that would otherwise have needed multiple /24s, or even a /16, to get all their workstations online).
It wasn't until the company Network Translation came along with the PIX that anybody even considered using private IP addresses as a general firewall strategy, with NAT translating the private IPs. And then it took years and years to become popular; by that point the product had long since been bought by Cisco.
I don't think Cisco IOS even had NAT until something like 10.2, when it was a premium license package.
This is probably apocryphal, and I'm probably getting the details wrong anyway, but tangentially related to this, when I worked for a small network security firm (later purchased by Cisco, as most were), we had a customer that used, I'm told, the IP ranges typically seen in North Korea as their internal network. They TOLD us they did it because the addresses wouldn't conflict with anything they cared about, and no one had told them about 1918 + NAT, which I find dubious.
This was across tens of thousands of devices.
Apparently this is an example of paving the cowpath.
https://en.m.wikipedia.org/wiki/Desire_path
Since the posting does not give a real answer.
192 is 11000000 in binary.
So it is simply the block with the first two bits set in the netmask.
168 is a bit more difficult. It is 10101000, a nice pattern but I don't know why this specific pattern.
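Those bit patterns are easy to check, e.g. in Python:

```python
# Print each octet of 192.168 in 8-bit binary
for octet in (192, 168):
    print(f"{octet} = {octet:08b}")
# 192 = 11000000
# 168 = 10101000
```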
I don't think this does anything to explain why 192.168/16 was chosen specifically. Three netblocks (10/8, 172.16/12, and 192.168/16) were selected from the class A, B, and C address spaces to accommodate private networks of various sizes. Class C addresses by definition have the two most significant bits set in their first octet and the third set to 0 (i.e., 192 - 223.)
192 in the first octet starts the class C space, but 10 and 172 do not have the same relationship in classes A and B.
Yes, you are right. I researched a bit, and there are other reserved blocks next to 168 that obviously don't have a nice pattern. So the 10101000 is just a coincidence.
101010 in decimal is 42.
That is the answer..!
192 is the first class C octet; 168 was likely the next available when RFC 1918 was written.
This is the most likely thing that happened.
This is a bit of history in https://www.rfc-editor.org/rfc/rfc1466
Is it? What section do you mean? I don't see anything in there about private networks or 192.168.0.0/16 (in CIDR notation, which didn't exist at the time).
User bmacho cites this Superuser question [1] in a reply to a downvoted comment at the bottom of this thread. It’s much more illuminating than the OP emails; Michael Hampton’s answer in particular is amazing. I had never heard of Jon Postel before.
[1] https://superuser.com/questions/784978/why-did-the-ietf-spec...
> I had never heard of Jon Postel before.
Reading this makes me a bit sad and reminds me that I'm older now and lucky to have grown up during the golden age of the Internet.
Mm. I’m an older millennial, so solidly in the Web 1.0 generation, but never had the chance to use the internet before the web took off. I missed BBSs too, which were big where I’m from (probably bigger than the pre-Web internet, outside universities at least). I was fourteen when Postel died in 1998. My earliest memories of internet use are probably from ’96 or so, using library or school computers after classes.
While I've got some eyeballs on the subject, I'm tiring of mistyping this across my local network devices. How many of you folks alias this, and in what way? /etc/hosts works for my *nix machines, but not my phones, I think?
I'm also tired of remembering ports, if there's a way of mapping those. Should I run a local proxy?
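(For reference, the /etc/hosts approach mentioned above is one line per host; the name here is made up:)

```
# /etc/hosts — map a memorable name to the router (hostname is hypothetical)
192.168.1.1   router.lan
```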
> I'm also tired of remembering ports, if there's a way of mapping those. Should I run a local proxy?
If we're talking web-services - absolutely. I put Caddy in front of everything just to be able to simply use domains. You can also use it to map ports to either standard or more convenient ones if that suffices. Configuring reverse-proxy with Caddy [0] takes just a few lines:
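A minimal sketch of such a Caddyfile (the hostname and upstream port are made up):

```
# Route https://service.internal to a local app; Caddy handles TLS itself
service.internal {
    reverse_proxy localhost:8080
}
```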
After setting up a reverse proxy or two, you might want to expand your infrastructure with the following to neaten things up even more:
- DNS server: most routers can be that; another easy option would be PiHole.
- DHCP server: same as above (PiHole does DHCP too).
- Reverse proxy(ies): you can have either just one for the entire network, or a number closer to the number of services if you choose to have HTTPS between everything. Wouldn't bother with Nginx for that unless there is a strong incentive.
- ACME-server: provides the certs for the local reverse-proxies if you choose to have HTTPS between everything. Caddy can also act as a very easy to set up ACME-server [1].
If you have all that set up, you can access all the local services securely and via readable URLs. Given all the services get their certs from the ACME-server, the consumers only need to trust (install) one root cert in order to consider all the local connections secure.
Might seem like a lot at first, but the configuration is fairly straightforward and I found it's worth the effort.
[0]: https://caddyserver.com/docs/caddyfile/directives/reverse_pr...
[1]: https://caddyserver.com/docs/caddyfile/directives/acme_serve...
DNS obviously. It’s easy, don’t let memes put you off.
For port mapping depends what specifically you’re aiming for. SVCB/HTTPS records are nice for having many https servers on a single system.
DNS (cue the "now you have two problems" meme)
Theoretically, SRV records can be set in DNS to solve the port issue; realistically, nothing uses them, so you are probably out of luck there. The way SRV records work is that you ask the network "where is the foo service?" (SRV _foo._tcp.my.network.) and DNS says "it's at these machines and ports" (SRV 1 (priority) 1 (weight) 9980 (port) misc.my.network. (target)).
https://www.rfc-editor.org/rfc/rfc2782
My personal low priority project is to put mac address in DNS, I am about as far as "I could fit them in an AAAA record"
As for specific software recommendations, I am probably not a good source. I run a couple of small OpenBSD machines (APU2) that serve most of my home networking needs. But I am a sysadmin by trade; while I like the setup, I am not sure how enjoyable others would find it.
> realistically Nothing uses them
Depending on how one defines "nothing," they are honored by XMPP clients.
CoreDNS in Kubernetes also publishes SRV records, for any client in-cluster who wishes to look up the port number used by a named port on a v1.Service
> My personal low priority project is to put mac address in DNS
There's the EUI48 rr type, but I don't know how widely supported it is
https://www.rfc-editor.org/rfc/rfc7043.html
10.0.0.1 or 10.1.1.1 would be a bit easier to type. You could migrate there.
Local proxies are nice for these kinds of things, but most phones are running some kind of mDNS service so try setting up avahi/openmDNS to advertise services.
I just stick all my DNS records in a normal DNS server. In my case I’m terraforming some Route53 zones. So I havd a subdomain off a real domain I own that I use for LAN gear and they all have real DNS.
For ports, anything that can just be run on 443 on its own VM, I do that. For things that either can’t be made to run on 443, or can’t do their own TLS, etc, I have a VM running nginx that handles certificates and reverse proxying.
mDNS works well for names on your local network, you can integrate it with your dhcp server, works on hosts and phones. I don't have a good answer for ports.
mDNS is like the LLM of DNS: sometimes, for some audiences, it works well, but when it doesn't work you're SoL trying to fix it other than "have you tried $(sudo killall -INT mDNSResponderHelper)?"
I'm not aware of any DHCP change needed for that, since to the very best of my knowledge mDNS is a broadcast protocol. Involving DHCP would be pointing it at the copy of dnsmasq running on your router, such that the hostname that the devices present to DHCP are then resolved by dnsmasq, no mDNS required
Working at a large company that was allocated a massive block of IPs in the early days, being one off from a reserved subnet has resulted in so many typos.
Weirdly enough, the network I grew up with during the 90s used 127.26.0.x instead of the widespread 192.168.
It created a big trauma when I joined uni and hit the wall. I suppose this is how Americans feel about the metric system :p
Does that even work? I thought all 127/8 was loopback
That whole /8 is reserved for loopback, but sometimes (usually?) only 127.0.0.1 is implemented as a loopback. If you know that's true of your equipment, you could use the rest of that space for local addresses instead of 192.168/16, 172.16/12, and/or 10/8.
Linux has a few sysctls that allow you to treat 127/8 as normal addresses. Probably a Bad Idea to enable them, though.
On my (fedora) system I can ping 127.anything and the host responds. I think in practice it is indeed implemented. I haven't used windows/macos in a very long time but I think the same applies. (Also in fedora by default systemd-resolved binds to 127.0.0.53)
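For what it's worth, Python's stdlib ipaddress module also classifies the entire block, not just 127.0.0.1, as loopback:

```python
import ipaddress

# Any address in 127.0.0.0/8 is classified as loopback
for addr in ("127.0.0.1", "127.26.0.1", "127.255.255.255"):
    print(addr, ipaddress.ip_address(addr).is_loopback)  # all True
```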
For real? I thought it somehow relates to bits and bytes...
(2009)
Added above. Thanks!
[flagged]
> the downvote button is not for disagreeing
That's how I think it should be, but Paul Graham disagrees (or at least he did in 2008 and I haven't seen anything later about him changing his mind).
In [1] he wrote:
> I think it's ok to use the up and down arrows to express agreement. Obviously the uparrows aren't only for applauding politeness, so it seems reasonable that the downarrows aren't only for booing rudeness
[1] https://news.ycombinator.com/item?id=117171
Yup. More here:
https://news.ycombinator.com/item?id=16131314
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
The problem with this idea would be all the existing software, hardware and infrastructure out there. You would either need to make it an alias, which wouldn't really change anything, or you would need to update everything everyone everywhere has, which is essentially the IPv6 migration and we all know how that is going.
flagged for removing useful content for the discussion thread
You already have .internal reserved, you know?
How would you express that in an IPv4 header? These address ranges serve a real purpose.
edit: OP: just like the downvote button is not for disagreement, the delete button is not for karma management. Not sure why you would respond to my post here and then immediately delete it.
Isn’t there a max of -4 per comment anyway? I’ll admit I get upset when people downvote me for my opinion, but I don’t think deleting the comment is ever really worthwhile.
I think so.
It's also disruptive to anyone who comes here, sees replies to a deleted comment, and can't see the context.
> It's also disruptive to anyone who comes here, sees replies to a deleted comment, and can't see the context.
Amen. I try to quote what I'm referring to for just this reason. I have been searching for some sort of browser plugin that would do it for me like many mail clients that were "newsgroup aware" of old, but alas none yet.
no
They needed private IP ranges that wouldn't conflict with the real internet. 192.168 was just sitting there unused, so they grabbed it along with 10.x.x.x and 172.16-31.x.x.
Read the article rather than making something up.
It isn't an article, but a mailing list post, and the post starts out with:
Spoiler: it sparks one memory from one person, who winds up being mistaken. Offering an alternative hypothesis seems reasonable given the content of the post.
Which article? The posted emails? The Superuser audience disagrees: https://superuser.com/questions/784978/why-did-the-ietf-spec...
That’s such an awesome answer by Michael Hampton. I had never heard of Jon Postel before now.
Narrator: no one came up with an answer. Someone claimed the origin was Sun, but it turned out Sun used a different address in its examples.
And what did you learn from what article, actually?