I found the multicast registry here.
https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I already knew that addresses between 224.0.0.1 and 239.255.255.255 are reserved for multicast.
Obviously multicast could be immensely useful if used by the general public: it would obsolete much of Facebook and YouTube, nearly all CDNs (content delivery networks), would kill the business model of Cloudflare and company, and would just re-arrange the internet, with far-reaching social implications.
So why hasn’t all this multicast address space been converted into usable private IPv4 unicast address space?
I don’t see how that would work. So all my friends’ video streams, for instance, would be streaming data to all my devices as they are broadcast.
But my laptop is currently asleep. It wouldn’t receive anything.
How do you solve that without storing the video on a server that I can pull from on demand?
Even for my devices that are on, they’d have to store everything as it was broadcast.
And the streams (including every other broadcast) would constantly be eating up my bandwidth.
How would I not receive streams that I’m not interested in? What would decide which broadcast packets do or don’t get sent to my router?
You subscribe to multicast groups. When the cast happens, you either have something listening to receive it or you don’t; that’s up to you.
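What “having something to receive it” looks like in practice can be sketched with Python’s standard socket API. The group address and port below are purely illustrative (anything in the administratively scoped 239.0.0.0/8 range would do), and the helper names are my own:

```python
import socket
import struct

def make_membership_request(group: str) -> bytes:
    """Pack a `struct ip_mreq`: the group address to join, plus
    0.0.0.0 meaning "let the kernel pick the interface"."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))

def open_listener(group: str, port: int) -> socket.socket:
    """Bind a UDP socket and join `group`, so multicast packets
    sent to (group, port) get delivered to this socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # The IGMP join: the kernel announces membership on the local link,
    # and the nearest multicast-aware router starts forwarding the group.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership_request(group))
    return sock

# Usage (blocks until someone multicasts to the group):
#   listener = open_listener("239.1.2.3", 5004)
#   data, sender = listener.recvfrom(65535)
```

That setsockopt call is the entire “subscription”; nothing upstream needs to know who you are, only that someone on your link wants the group.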
In the old days we had these glass tubes; you turned them on and the stream appeared on the front, and when they were off you couldn’t see the stream.
We had a little black box underneath the tube, and if you pushed the right incantation of buttons, it would store the stream and you’d watch it later.
I know that might sound a little far-fetched, a little magical, but the black square in your pocket that receives emails can receive those casts as well.
Depending on how much spam the manufacturer has injected into it, it can probably store around a couple hundred hours of video, and a couple hundred million tweets, or “short text messages” as they used to be called.
From the research I’ve done since posting that, it has become evident that all the little internet fiefdoms that make up the net each want a slice of the pie, and CDNs, a parallel pseudo-network to the internet, are the bridge where they get to collect that toll.
If multicast worked as designed, this toll would be in danger, because anyone could just use it instead of a CDN, or instead of unicast and local caches.
No, the streams wouldn’t be “constantly eating up your bandwidth”. That’s a broadcast; a broadcast you always receive. With a multicast you need to join the multicast group, or else you don’t receive it.
That’s the same as above: you just don’t subscribe to the multicast group, or you unsubscribe from it.
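Unsubscribing is the mirror image of joining; a sketch using the same standard socket API (group address illustrative, function name my own):

```python
import socket
import struct

def leave_group(sock: socket.socket, group: str) -> None:
    """The IGMP leave: after this, the kernel stops delivering the
    group's packets, and once no host on the link is subscribed the
    router stops forwarding them down this link at all."""
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
```

So unwanted streams never reach your router’s downstream link in the first place; there is nothing to ignore or filter locally.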
There would be multicast groups just for knowing what streams you could join if you wanted to.
Your street could have a multicast group of just your neighbours, just for text.
This might feel foreign to you, but that’s because the software to browse it is as non-existent as the capability of multicast on the internet itself.
Nobody is going to make these browsers and pseudo-internet VHS recorders when we all know that ISPs will never allow multicast through unless forced to by the statists. Finally, a legitimate use of force besides building roads and powerlines!
Subscription to multicast groups is a trivial affair. It was trivial in the 1980s, and it is still trivial now. The ISPs will tell you this is a complex scaling impossibility; they are lying.
I see I misunderstood how you mean this to work: that routing would handle sending data only to subscribers. I was imagining that it meant a simple LAN broadcast using a packet with the subnet bits all set (e.g. 192.168.255.255). I think it’s more analogous to a mailing list distribution, but for general data/streams?
But your earlier example of downloading the cat video still fails unless many people request the video at the same time (otherwise you’re multicasting to one). What happens if I watch the video on my phone while out, then watch it again on my laptop at home? It will still need sending twice.
Wouldn’t a more efficient approach just be to have something like IPFS with lots of local caching?
It’s not “video-on-demand”: you don’t subscribe to a file but to an address. The multicaster sends the file or stream or message when they’re ready; you receive them if you’re listening, and everyone subscribed gets the same series of packets. That’s the only real benefit multicast has over unicast: the sender sends each packet just once. There’s no server, no caching, no repeats. Direct from you to them, and it can work for everything.
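The “send once” half of that can be sketched the same way. The group, port, and TTL value here are illustrative, and the function name is my own:

```python
import socket

def multicast_send(group: str, port: int, payload: bytes) -> int:
    """One sendto(), addressed to the group, no matter how many
    receivers have subscribed: that is multicast's whole advantage."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # TTL 1 keeps the datagram on the local link; a larger TTL lets
    # multicast-aware routers carry it further.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    return sock.sendto(payload, (group, port))

# Usage:
#   multicast_send("239.1.2.3", 5004, b"same packet for every subscriber")
```

Notice the sender never enumerates receivers; the destination is the group address itself, and the network does the fan-out.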
So when a video is created it is immediately sent to subscribers?
In that case, for things to be sent once, it relies on the receivers always being online. That doesn’t work if my laptop is closed at the time.
That’s why I’m thinking that it needs online caching to work. Or everyone has a cloud server that handles sending and receiving while they’re not online.
In fact, that starts to sound like everyone running their own personal lemmy-like instance, to which their friends subscribe.
And in that case it wouldn’t matter if messages were sent more than once, each person’s server would handle it.
The information could be live streamed from the camera or from a recording, that doesn’t make a difference. It could also be ANY data, not just video.
Also, yes, if you are not listening for the packets, then you will not receive them later. There are no servers between the sender and receiver, which means no gatekeeper and no middleman; it’s a democratization of broadcast, without intermediaries.
The only reason it is more efficient is because of how direct it is.
Before the internet we had TVs which, if they were not turned on, could not store and receive any of the video stream being broadcast, it’s a lot like that. You didn’t ask the TV station to send you a video file, they sent it out regardless and you listened to it or you didn’t.
The problem with caching or storing anything is that now you’re back to needing one connection per receiver. You’re no longer sending out a single copy; you have to send 500 copies if 500 people want it, and that takes far too many resources.
Realistically that single sent packet is going to get copied multiple times in order to re-route it to just the subscribers. We’re not all on one big LAN.
What mechanism causes a single sent packet to get to all the subscribers (and only them)?
Assuming that we all have a static IP for simplicity, a sent packet needs to be routed to the subscriber IPs (via their ISPs). Where is that table stored? Is it sent with each packet so that it can be routed on the way? That would be a huge bloat of the packet size.
BTW, I do remember life before VCRs. Pre internet, I downloaded QWK packets from BBSs.
I get the appeal of removing communication from the hands of FB etc., but I don’t see how switching to a broadcast system that increases unreliability would help. And I don’t see how the broadcast would work on the internet that we have.
Multicast groups, where the listener subscribes dynamically to the sender. When a packet in the multicast IP range arrives at a router, it gets routed according to the subscription list: instead of the packet being sent down one link, it might get sent down both of them. The routing rules are the same as for unicast, except the destination is every host in the subscription list, and only the rules for the packet’s specific multicast destination address are consulted, not all of them. (There would be a lot of rules, megabytes of them: too much for 80s computers, but trivial to route in under 2 microseconds with our hardware.)
I don’t know the exact mechanism or if that’s how it works, but it’s how it SHOULD work, and I think that’s roughly how IGMP actually works.
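As a toy model of that per-group routing (every name here is invented for illustration; real routers learn this state via IGMP from hosts and protocols like PIM between routers):

```python
# Toy model of multicast forwarding: a router keeps, per group address,
# the set of links with at least one subscriber downstream, and copies
# an incoming packet onto exactly those links and no others.

class MulticastRouter:
    def __init__(self):
        self.subscriptions = {}  # group address -> set of link names

    def join(self, group, link):
        self.subscriptions.setdefault(group, set()).add(link)

    def leave(self, group, link):
        links = self.subscriptions.get(group, set())
        links.discard(link)
        if not links:
            # Prune: no subscribers left, stop forwarding this group at all.
            self.subscriptions.pop(group, None)

    def forward(self, group, packet):
        # One incoming packet fans out only to subscribed links;
        # the original sender still transmitted it just once.
        return {link: packet for link in self.subscriptions.get(group, ())}

r = MulticastRouter()
r.join("239.1.2.3", "link-A")
r.join("239.1.2.3", "link-B")
out = r.forward("239.1.2.3", b"frame-1")   # copied to link-A and link-B
r.leave("239.1.2.3", "link-B")
out2 = r.forward("239.1.2.3", b"frame-2")  # now only link-A
```

The lookup is keyed on the destination group address, so each packet only touches its own group’s entry, never the whole table.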
I don’t think the hurdle is technical; it is political: stopping all these monopolistic ISPs in between the people from trying to extract a toll for every passing packet, which of course just kills the entire concept. If it’s not dynamic, fully automatic, and “as free as” unicast packets, then it quickly becomes not worth doing at small scale.
I think only government action can ram this through.
I think I get more of what you mean, now. I’m sure that there are technical issues to solve, like you said from the start, but that doesn’t mean they can’t be solved.