Hi again, this is your host Habib Korea. We will start the course by first talking about modes of communication. Unicast, as you know, is the most commonly used mode of communication. In unicast, if you have one sender and one receiver, the sender and the receiver talk to one another, creating a form of one-to-one communication. In a campus network design you will see, for example, one server talking to one receiver; but if you add a second receiver, the same server will also talk to that second receiver, generating twice as many packets as it does for a single receiver. So it adds a lot of bandwidth, and it also uses a lot of CPU processing power on the server.
So that's unicast. Multicast, in general, is a one-to-many type of topology and design. With multicast, the same packet is received simultaneously by many receivers, and that is the technology we will be covering in this course: all about multicast. Broadcast is similar to multicast, but it's a little bit different. Broadcast doesn't care whether the end nodes are interested receivers or other clients; it replicates the packet and generates a lot of traffic to each and every port on the network, even to those nodes that have no interest in receiving those packets. Broadcast can be used, for example, for generating siren alerts. We recently implemented a siren solution for our company, using a single siren source on a server: when it is triggered, it is loud and clear and it reaches every end node of the network.
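To make the difference concrete, here is a minimal Python sketch; the addresses, port, and the group 239.1.1.1 are illustrative assumptions of mine, not values from the course. A unicast server has to send one copy of the payload per receiver, while a multicast server sends a single packet to a group address and lets the network do the replication.

```python
# Minimal sketch contrasting unicast fan-out with a single multicast send.
# The receiver addresses, port, and group 239.1.1.1 are illustrative assumptions.
import socket

PAYLOAD = b"stream chunk"
PORT = 5000

# Unicast: the server transmits one copy of the same payload per receiver,
# so bandwidth and CPU grow with the number of receivers.
unicast_receivers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical clients
usock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for receiver in unicast_receivers:
    usock.sendto(PAYLOAD, (receiver, PORT))                   # N receivers -> N packets

# Multicast: the server sends the payload once to a group address; the network
# replicates it toward every interested receiver.
MCAST_GROUP = "239.1.1.1"                                     # assumed locally scoped group
msock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
msock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)  # allow a few router hops
msock.sendto(PAYLOAD, (MCAST_GROUP, PORT))                    # 1 packet, many receivers
```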
So that is broadcast for you. Let's move on. In this slide I'll be talking about IP multicast fundamentals. The first and most important is the transfer of packets from one source to many receivers simultaneously. Number two, the receivers must be ready to accept these packets; otherwise multicast will not work. Number three, it is used to conserve bandwidth, as we saw in the previous slide.
It is scalable, and it's used a lot. Its utilization is independent of the number of receivers, meaning that if you have more than one receiver, it doesn't matter: the network will carry the same amount of traffic. It also provides stable performance, and all receivers will have the same experience. Multicast uses the Class D range, which is 224.0.0.0 through 239.255.255.255, the block reserved for all multicast services.
There are, of course, some reservations in certain blocks. 224.0.0.0/24 is known as the local control block, and it's mainly used by interior routing protocols. 232.0.0.0/8 is used by source-specific multicast. And 239.0.0.0/8 is the local scope that we can use inside our own networks. Now, the multicast components: similar to any other network implementation, there will be a source, which is typically a streaming server for audio or video; there will be multicast routers that handle routing of the multicast traffic; there will be multicast switches that provide end-node connectivity to clients; and there will be multicast clients, known as receivers. Then there is the routing protocol, which is PIM, and there is the group management protocol, known as IGMP.
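As a quick illustration of those ranges, here is a small Python sketch using the standard ipaddress module to check which reserved block an address falls into; the sample addresses at the bottom are my own picks, not values from the slides.

```python
# Classify addresses into the multicast blocks mentioned above.
# The sample addresses are illustrative assumptions.
import ipaddress

LOCAL_CONTROL = ipaddress.ip_network("224.0.0.0/24")   # local control block (routing protocols)
SSM_BLOCK     = ipaddress.ip_network("232.0.0.0/8")    # source-specific multicast
LOCAL_SCOPE   = ipaddress.ip_network("239.0.0.0/8")    # locally scoped, usable inside our networks

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:                            # True for the whole Class D range 224/4
        return f"{addr}: not a multicast (Class D) address"
    for net, name in [(LOCAL_CONTROL, "local control block"),
                      (SSM_BLOCK, "SSM block"),
                      (LOCAL_SCOPE, "local scope block")]:
        if ip in net:
            return f"{addr}: {name} ({net})"
    return f"{addr}: general multicast address"

for sample in ["224.0.0.5", "232.1.1.1", "239.1.1.1", "10.1.1.1"]:
    print(classify(sample))
```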
IGMP is, of course, an IPv4 protocol; MLD is the IPv6 equivalent, and it's likewise used by clients to send these signals. MLD stands for Multicast Listener Discovery, and, as I said, it's used with IPv6. Let's talk about IGMP.
First, IGMP stands for Internet Group Management Protocol. It's used by a host to notify its local router that it wishes to receive multicast traffic; that's why it's important. It informs the local router that it is interested in joining a group, a destination group. There are two versions of IGMP. One is IGMP version 2, which is the most widely deployed and supported nowadays.
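To see what that join looks like from the host side, here is a minimal Python receiver sketch, assuming an arbitrary group 239.1.1.1 and port 5000: when the socket joins the group with IP_ADD_MEMBERSHIP, the host's IP stack sends an IGMP membership report to the local router on our behalf (an IPv6 host would do the equivalent with MLD via IPV6_JOIN_GROUP).

```python
# Minimal multicast receiver sketch. Joining the group below makes the operating
# system send an IGMP membership report to the local router; the group 239.1.1.1
# and port 5000 are illustrative assumptions.
import socket
import struct

GROUP = "239.1.1.1"
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))                      # listen on all interfaces for this port

# struct ip_mreq: 4-byte group address + 4-byte local interface (any interface here)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)   # triggers the IGMP join

data, sender = sock.recvfrom(2048)
print(f"received {len(data)} bytes from {sender}")
```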
It is used for any-source multicast (ASM) as well, and for more information you can review RFC 2236. The other version is IGMP version 3, which is used for SSM, source-specific multicast. IGMP version 3 is also used nowadays and has good network support. Now, when is multicast implemented? It's implemented when you have an application that delivers to many receivers, and also for server and website replication. It can be used if you need to provide distributed interactive simulations or virtual-reality-type networks, and when you have periodic data delivery, content similar to stock quotes or sports scores.
Video streaming services, of course, and collaboration and groupware services. Now let's talk about PIM. PIM is the protocol used for router-to-router communication when it comes to multicast. So if you have two routers connected to each other in a campus network design, the interfaces that face each other need to have PIM enabled. There are two types of PIM: one is PIM dense mode, and the other is PIM sparse mode. Let's talk about PIM dense mode first. It operates by flooding the multicast packets in all directions of the network, which is not a good thing. It assumes that the network is densely populated with receivers or clients.
So it's basically a protocol that doesn't care much about the network: it floods the multicast traffic in every direction and assumes that there are clients on every port that will receive the multicast traffic. It has scaling problems and it is not popular in the industry, although, believe it or not, I have actually seen PIM dense mode still in use in some of the networks that I went to check and troubleshoot. PIM sparse mode is more efficient for transmitting to multicast groups: the receivers request membership in a multicast group, and multicast packets are sent only to receivers that show interest. This point is important. It means that the traffic will flow only in the direction where the receivers are, and that helps conserve bandwidth, streamlines communication, and so on. It is scalable, as I mentioned, because you can add many receivers to the network and the same packet will be received at the end nodes simultaneously, so the network doesn't care how many receivers you have placed. Sparse mode is the one that's popular in the industry.
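To make the contrast concrete, here is a toy Python sketch of the forwarding decision only. This is a conceptual model I'm adding for illustration, not the actual PIM protocol machinery (no hellos, prunes, or rendezvous points), and the interface names and join state are made up.

```python
# Toy model of the forwarding decision, not real PIM: dense mode floods a packet
# out every interface except the one it arrived on, while sparse mode forwards
# only out interfaces where an explicit join was received for the group.
# Interface names and join state are illustrative assumptions.

ALL_INTERFACES = ["Gi0/0", "Gi0/1", "Gi0/2", "Gi0/3"]

# Sparse-mode-style state: which interfaces have interested receivers per group.
JOINS = {"239.1.1.1": {"Gi0/2"}}           # only one downstream port asked for the group

def dense_mode_out_interfaces(incoming: str) -> list[str]:
    # Flood everywhere except the incoming interface (pruning not modeled).
    return [ifc for ifc in ALL_INTERFACES if ifc != incoming]

def sparse_mode_out_interfaces(incoming: str, group: str) -> list[str]:
    # Forward only where a join was received for this group.
    return [ifc for ifc in JOINS.get(group, set()) if ifc != incoming]

print("dense :", dense_mode_out_interfaces("Gi0/0"))                 # floods 3 interfaces
print("sparse:", sparse_mode_out_interfaces("Gi0/0", "239.1.1.1"))   # only Gi0/2
```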
Now, the multicast service model: this service model is one of the most common models used with multicast. You have the members, and the members are basically the receivers. Then you have the switch; the switch talks to the receivers, and those switches sit at layer two of the network. Then you have layer three, the network layer, which is where the routers operate, and then you have the source. This topology gives you the full picture of where multicast will be enabled, right? So you have a switch connected to an end node, and IGMP will be enabled on the switch.
Usually, most switches have IGMP enabled by default. The switch uses something known as IGMP snooping: it makes a note of who is interested in receiving what type of traffic, classifies the Class D network traffic, and passes it through to the router. The router will then signal the other neighboring routers using the PIM protocol, and depending on the design, the source could be behind the router or in front of the switch, and so on. We will be doing a lot of labs: I'll show you how to configure IGMP, how to enable the interfaces, and we will apply PIM dense mode and sparse mode and go through each case step by step.
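Before the labs, here is a matching source-side sketch you could pair with the receiver from the IGMP section to try the whole model end to end on a small test network. The group, port, and TTL value are assumptions of mine, and in the labs we will of course use the routers and switches themselves rather than a Python script.

```python
# Minimal "source" sketch to pair with the earlier receiver: run the receiver on
# a client, then run this on the source host, using the same (assumed) group
# 239.1.1.1 and port 5000. The TTL must be high enough to cross the layer-3
# routers between source and receivers.
import socket
import time

GROUP, PORT = "239.1.1.1", 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)   # allow several router hops
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)   # don't loop back locally

for i in range(5):
    sock.sendto(f"frame {i}".encode(), (GROUP, PORT))   # one send, delivered to all joined receivers
    time.sleep(1)
```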
Thank you very much.