I found the multicast registry here.
https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I already knew that addresses between 224.0.0.0 and 239.255.255.255 are reserved for multicast.
Obviously multicast could be immensely useful if the general public could use it: it would obsolete much of facebook and youtube, nearly all CDNs (content delivery networks), kill the business model of cloudflare and company, and just re-arrange the internet with far-reaching social implications.
So why haven't all these multicast addresses been converted into usable private IPv4 unicast address space?
Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of a free internet and the democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had the ability to use multicast, I could send a single video stream up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all my 1000 subscribers. My 100 viewers in Japan are served by a single stream in the trans-Pacific backbone that gets split once it touches land. Is that all correct?
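A quick sanity check on my own numbers (the 3 Mbps stream bitrate below is just an assumption for illustration):

```python
# Rough uplink math for the scenario above; the 3 Mbps stream bitrate is an
# assumption, not a measurement.
stream_mbps = 3
viewers = 1000

unicast_uplink = stream_mbps * viewers   # I must push a copy to every viewer myself
multicast_uplink = stream_mbps           # routers duplicate the single copy downstream

print(f"unicast from my phone:   {unicast_uplink} Mbps of uplink needed")
print(f"multicast from my phone: {multicast_uplink} Mbps, copies made inside the network")
```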
In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency, all 1000 of my viewers can see my stream. Within 2 seconds, a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth resource usage, we are already within a factor of two of the ideal case of working multicast!
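Here is the fan-out arithmetic behind that claim, as a rough sketch (the 100 ms per relay hop is an assumed figure, not a measurement):

```python
# Geometric fan-out when every viewer re-uploads the stream to two new peers.
# The 0.1 s per relay hop is an assumption used only to make the latency point.
hop_latency_s = 0.1
reached = 1          # just me, the broadcaster
for hop in range(1, 21):
    reached *= 2     # each current holder of the stream feeds two new viewers
    print(f"hop {hop:2d}: ~{reached:>9,} viewers, ~{hop * hop_latency_s:.1f} s behind live")
# hop 10: ~1,024 viewers at ~1.0 s; hop 20: ~1,048,576 viewers at ~2.0 s
```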
It is true that my 100 peertube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but this is only so because the peertube protocol is not yet geographic-aware! (Or maybe it already is?) Have you considered adding geographic awareness to peertube instead? Then only one viewer in Japan would receive my stream directly and then pyramid-share it with all the other Japanese viewers.
P2P, IPv6, and geographic awareness are all things you can pursue right now, and they get you to within better than a factor of 2 of the ideal multicast dream! Is a factor of 2 an acceptable rate of wasted resources? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of peertube, like being able to watch a video that is NOT a livestream. Or being able to read a comment that was posted while your device was powered off.
Also, I am intrigued by the great concern you show for intercontinental bandwidth usage, considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs you find so distasteful. From the other end, the reason geographic awareness has not already been implemented in bittorrent and most other P2P protocols is precisely because bandwidth has been so plentiful. I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans, without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based server-less livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.
Yep, that is exactly it
I am also excited about peer-to-peer technology, however P2P unicast remains a store-and-forward technology: under the best of conditions we’re looking at at least 10 milliseconds of latency per hop, and of course a doubling of the total network bandwidth used per node, since each one both receives and sends at least once. Still very exciting stuff that I wish were further along than it is, but this isn’t the “multicast dream” as such, which does not use “Zuck’s computer”, by which I mean it does not use the cloud, which is “someone else’s computer”. We can imagine a glorious benevolent P2P swarm that understands that its own participation is both a personal and a public good, that warm and fuzzy feeling of a torrent with a 10 to 1 seeding ratio. But we’re still using “someone else’s computer” … or at best “we’re” using “our” computer, and that’s the royal “we”. Multicast is all switch, no server; all juice, no seed.
Yes, well, each node is a server and a middleman, but it’s “our” guys, I guess. Of course, in the real world we’ve now got NAT, firewalls, STUN/TURN/ICE, blocked ports, port forwarding, you know, all that jazz that used to put a serious strain on my router and might end up killing “our” phones’ batteries. Plus, with P2P, if you’re on cellular your bandwidth is rationed, and some scummy ISPs do not treat traffic the same way up and down. We’re starting to accumulate quite a lot of asterisks here.
Ah no, in this case the total bandwidth use has massively increased. Those users aren’t communicating with multicast efficiency, they are point-to-point, and those point-to-point streams run through the same backbones hundreds of times, coming AND going. The sender does not have to carry that load, but the internet is now a MUCH more congested place because of the lack of multicast.
I don’t know enough about peertube to answer that; I suspect it’s best-effort, but I’m sure the focus there is on “unstoppable delivery” BEFORE “efficient delivery”.
I’m not sure this math is mathing. We’re doubling the total network bandwidth per host, and it’s not geography awareness we need, it’s network topology awareness, and that topology is often obscured by the ISPs for a variety of benign and malevolent reasons. The worst part is that the peers will cross the backbone many times; I think we’re looking at “network effect scale” wasted bandwidth compared with multicast. n²? I’m not sure, probably n² is the worst case Ontario.
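To put a number on the “crossing the backbone many times” worry, here is a throwaway toy model (the assumptions are made up: 1000 viewers split between two regions, each new viewer pulling the stream from a randomly chosen earlier peer):

```python
import random

# Toy model of backbone waste, not a real protocol: viewers join one at a time
# and each pulls the stream from a randomly chosen earlier peer. Every transfer
# between different regions counts as one crossing of the inter-region backbone.
random.seed(1)
regions = ["US", "JP"]
peers = ["US"]                       # the source lives in the US
crossings = 0
for _ in range(1000):
    viewer = random.choice(regions)
    uploader = random.choice(peers)  # naive, topology-blind peer selection
    if viewer != uploader:
        crossings += 1
    peers.append(viewer)

print(f"random peering: ~{crossings} trans-Pacific copies of the stream")
print("topology-aware peering or multicast: 1 copy")
```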
Yes, this is essential; for multicast to “work” it would have to be like that. Even with plain unicast and IPv4, the internet would be useless if you had to negotiate each packet between you and your peers.
Yes, I believe they do stand in the way. I believe most long-range communication runs over dark fiber, which they bought on the cheap and made it their business model to exploit; they therefore NEED to keep the utility of the public internet as low as possible, and that includes never allowing “actually existing multicast” to flourish.
You can because you’re a drop in the consumer bucket; you exist in the cracks of the system. If everyone suddenly used the internet to its full potential, then we would get the screws turned on us. The internet is largely built like a cable-distribution network and we’re supposed to just be passive consumers: we purchase product, we receive, we are not meant to send.
Yes, I think so too, and they wouldn’t wait for their complaints to be heard; we have been here before: throttling, QoS deprioritization (all the way to drops), dropped packets, broken connections, port blocking, transient IP bans. We are sitting ducks on the big pipes if we start really using them properly. Multicast would essentially fly under the radar.
Yes, I’m using “geographic awareness” here as shorthand for the same algorithm that BGP uses to calculate the shortest route. As far as I know, BGP has no knowledge of “countries” or “continents”; it makes decisions purely on local policy and the connectivity info available to it. However, the resulting topology map does greatly resemble the corresponding geographic map, a natural consequence of the internet being a physical engineering structure. I’m not sure how publicly available the global BGP data is. If you were designing a backbone-bandwidth-preserving P2P app, you would either give it BGP data directly, or, if that’s not available, give it the world map to get most of the same benefit.
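As a sketch of what I mean by handing the app “the world map” (the peer names and coordinates are invented, and this is not any real PeerTube or BitTorrent API, just the heuristic in isolation):

```python
import math

# Hypothetical peer-selection heuristic: rank candidate peers by great-circle
# distance as a cheap stand-in for BGP path length.
def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

me = (35.7, 139.7)  # a viewer in Tokyo (made-up coordinates)
candidates = {
    "tokyo-peer":   (35.6, 139.8),
    "osaka-peer":   (34.7, 135.5),
    "london-peer":  (51.5, -0.1),
    "seattle-peer": (47.6, -122.3),
}
ranked = sorted(candidates, key=lambda name: haversine_km(me, candidates[name]))
print(ranked)  # nearby Japanese peers first, transoceanic peers last
```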
The multicast proposal would need to be routed through the very same ISP-obscured topology, so there is no advantage over topology-aware P2P.
As a graph problem, it does look to me like a factor of 2 is practical.
First consider a hypothetical topology-aware “daisy chain” scheme, where every swarm user has an upload ratio of exactly one. Then every backbone and last-mile connection gets used at most twice. This is why I say a factor of 2 is the upper limit. It’s like a maze problem where you can navigate the entire maze while traversing each corridor only twice. Then look at the more practical “pyramid” scheme, where half the users have an upload ratio of about 2. Some links get used twice but many get used only once! The UK-UK1 link is the only one used 3 times. Notably, the US-JP and US-UK transcontinental links only get used once, as you wanted! Overall this pyramid scheme looks to me to be within 20% of the efficiency of the optimal multicast scheme.
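Here is the kind of counting I am doing, on a tiny invented topology (this is not the exact figure I am describing above, just the same idea in code):

```python
from collections import Counter

# Tiny made-up topology: a US source, two UK viewers, two JP viewers, each
# behind a regional hub. Each transfer is listed as the links its path crosses.
path_links = {
    ("SRC", "UK1"): ["SRC-US", "US-UK", "UK-UK1"],
    ("UK1", "UK2"): ["UK-UK1", "UK-UK2"],
    ("UK2", "JP1"): ["UK-UK2", "US-UK", "US-JP", "JP-JP1"],
    ("JP1", "JP2"): ["JP-JP1", "JP-JP2"],
}
daisy_chain = [("SRC", "UK1"), ("UK1", "UK2"), ("UK2", "JP1"), ("JP1", "JP2")]

usage = Counter(link for hop in daisy_chain for link in path_links[hop])
print(usage)  # no link carries the stream more than twice in the daisy chain
# An ideal multicast tree would use each of these links exactly once.
```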
What do you think backbone routers are? They are computers! Specialized for a particular task, but computers nonetheless. Owned by someone other than you. Your whole lament is that you can’t force those owners to implement multicast on their routers. I think using the royal “our” computer, something we can do right now without forcing anyone else, is much better by comparison. If you insist that P2P swarm members, the people who actually want to see your livestream, are not good enough, that you only want to use “your” computer to broadcast and no one else’s, then you are left with no options other than bouncing ham radio video signals off the ionosphere. And even the radio spectrum is claimed by governments.
I think you underestimate the size. Imagine if multicast were ubiquitous: billions of internet-connected users, each with dozens? hundreds? of multicast subscriptions. Each video content creator is a multicast group, each blog you follow, each twitter handle, each lemmy community you subscribe to. Hundreds easily. That’s many gigabytes, possibly hundreds of gigabytes, of state to fit into every router. BGP is simple because you care only about the physical links you actually have. You can stuff entire IP ranges into a single routing table entry. Your entire table could be a dozen entries. Fits inside the silicon. With multicast I don’t think you can fold it in; you must keep the entire many-to-many table on every single router[1]. And consult that 100GB table to route every single packet, in case it needs to get split. As you said, impossible with 1990s technology, probably possible but contrary to business goals in 2020.
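The back-of-envelope behind those numbers, using the guesses above (they are guesses, not measurements):

```python
# Naive per-router state if every (group, subscriber) pair must be tracked.
users          = 1_000_000_000   # internet-connected users (guess from above)
subs_per_user  = 100             # multicast subscriptions each (guess from above)

entries = users * subs_per_user  # (group, subscriber) pairs
print(f"{entries:.1e} entries")                                   # 1.0e+11
print(f"~{entries * 4 / 1e9:,.0f} GB even at only 4 bytes per entry")  # ~400 GB
```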
You are concerned about the battery life of your phone when you use the bandwidth of 2 video streams compared to watching just 1? Yet you expect every single router owner to plug in hundreds of gigabytes of extra RAM and spend extra CPU power and electricity on routing-table lookups to handle your multicast traffic for you. You are just offloading the resource usage onto other people’s computers! Not “our” computers - “theirs”. Remember how much criticism Bitcoin got for wasting resources? Not the proof of work, but having to store a duplicate copy of 100GB of transaction blockchain on every single node. All that hard drive space wasted! When “Mastercard” and “Visa” can do it with only a single database on a mainframe. Yet now you want “them” to do the same and “waste” 100GB of RAM on every single router just so your battery life is a little better.
This does not follow. Didn’t you say that multicast was already sabotaged by the very same cable-distribution networks to maintain their send-monopoly? You expect to force the ISPs to turn multicast back on and somehow have it fly under the radar, but P2P would get the screws turned on it? It can’t be one and not the other! If you plan to have governments force the ISPs to fall in line and implement multicast standards, then why couldn’t you have the same governments (driven by the democratic pressure of billions of internet users demanding freedom, presumably) enshrine P2P rights? Again, remember that P2P is something we already have, something that already works and can be expanded with no additional cooperation from other players. Multicast is something that would need to be forced on others, on everyone, and would require physical hardware updates. If there are future restrictions on P2P, they would be easier to defend against politically and technologically. If you cannot defend P2P, then you for sure do not have enough political power to force multicast.
[1]: Thinking about this, maybe you could roll it in a little. Given N internet users (~a billion), each with S subscriptions (say a hundred), C content feeds (a hundred million? 10% of users are also creators, 90% are pure consumers), and P physical links per router (say ten), then instead of N*S amount of state (100s of GB), each router could fold it down into C*P amount of state (~1 GB). As in: “If I receive a multicast packet from [source ip=US.5.6.7] to [destination ip=anyone], route copies of it out through phy04, phy07, and phy12”. You would still need a mechanism to propagate table changes pretty rapidly (a full refresh about once every minute?). Your phone can be switching cells or powering on and off. You don’t want to multicast packets to a powered-off IP - that would be a waste of resources!
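Roughly, with the same guessed figures:

```python
# The folded version of the same back-of-envelope, using the figures above.
creators  = 100_000_000   # C: content feeds (guess)
phy_ports = 10            # P: physical links on one router (guess)

# One table row per source feed, e.g. the hypothetical rule described above:
fold_rule = {"US.5.6.7": {"phy04", "phy07", "phy12"}}

folded_entries = creators * phy_ports   # C*P units of state per router
print(f"{folded_entries:.0e} port-mask bits ≈ {folded_entries / 8 / 1e9:.2f} GB,")
print("plus a 4-16 byte source address per row: on the order of a GB per router")
```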
And how do you detect oversubscription? If a million watchers subscribe to 1 multicast livestream, it’s fine, but what happens when 1 troll subscribes to a million livestreams? If I subscribe to 1 million video streams, obviously my last-mile connection cannot fit them all. With TCP unicast, the senders would stop receiving TCP ACK replies from me and throttle down. But with multicast, the routers in between do not know about my last mile, or even whether my phone has been powered on within the last minute. All they know is “if you receive multicast from IP1, send to phy04; if you receive multicast from IP2, send to phy04”, etc. Would my upstream routers not get saturated trying to send a million video streams to a dead IP? Would we need to implement some sort of reverse-multicast version of “TCP ACK”?
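One possible answer is “soft state”, which is roughly the idea behind existing IGMP membership reports and PIM join/prune timers: forwarding state expires unless the downstream side keeps re-announcing its interest. A minimal sketch of that idea (not any real wire protocol):

```python
import time

# Sketch of soft-state forwarding: a router only keeps copying a stream out of a
# port while that port keeps refreshing its interest. A powered-off phone (or a
# troll who subscribed to a million streams and walked away) simply stops
# refreshing and falls out of the tree within LEASE_SECONDS.
LEASE_SECONDS = 60

class GroupForwardingState:
    def __init__(self):
        self.last_refresh = {}   # out-port -> time of last "still interested" report

    def refresh(self, port):
        self.last_refresh[port] = time.monotonic()

    def live_ports(self):
        now = time.monotonic()
        return [p for p, t in self.last_refresh.items() if now - t < LEASE_SECONDS]

state = GroupForwardingState()
state.refresh("phy04")
print(state.live_ports())       # ['phy04'] until the lease runs out
```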