Troubleshooting Guide for IP Multicast Routing on Omada Pro Switches
Contents
Case 1. L3 multicast service on-demand is unavailable when configuring PIM-DM
Case 2. L3 multicast service on-demand is unavailable when configuring PIM-SM
Objective
This article provides troubleshooting guidance for L3 multicast on-demand errors based on the PIM-DM and PIM-SM modes.
Requirements
Introduction
PIM (Protocol Independent Multicast) is independent of any specific unicast routing protocol. It does not maintain separate unicast routing information; instead, it uses the routing information in the existing unicast routing table to perform a reverse path forwarding (RPF) check on multicast packets. If the check passes, it creates a multicast routing entry to forward the multicast packets. Currently, L3 switches support both PIM Dense Mode (PIM-DM) and PIM Sparse Mode (PIM-SM). PIM-DM is suitable for small networks with dense multicast receivers in each network segment; PIM-SM is more complex to configure and suitable for large-scale networks.
IGMP (Internet Group Management Protocol) is a part of the TCP/IP protocol suite and is used for IPv4 multicast membership management. IGMP is used to establish and maintain multicast group membership between the receiver host and its directly connected multicast routers by exchanging IGMP messages.
L3 multicast routing requires the deployment of both PIM and IGMP in the network. PIM, in either dense mode or sparse mode, needs to be deployed throughout the entire L3 multicast domain, while IGMP needs to be deployed on the multicast routers directly connected to multicast receivers.
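As a concrete illustration of the RPF check described above, the following minimal Python sketch (hypothetical, not Omada code) models the decision a PIM router makes: a multicast packet is accepted only if it arrives on the interface that the unicast routing table would use to reach the packet's source; otherwise it is discarded. The prefixes and interface names are illustrative only.

import ipaddress

# Simplified unicast routing table: (destination prefix, outgoing interface).
# These entries are illustrative, not taken from a real device.
UNICAST_ROUTES = [
    (ipaddress.ip_network("172.19.1.0/24"), "vlan102"),
    (ipaddress.ip_network("10.10.10.0/30"), "vlan101"),
    (ipaddress.ip_network("0.0.0.0/0"),     "vlan101"),  # default route
]

def rpf_interface(source_ip: str) -> str:
    """Return the interface used to reach source_ip (longest-prefix match)."""
    src = ipaddress.ip_address(source_ip)
    matches = [(net, iface) for net, iface in UNICAST_ROUTES if src in net]
    # Prefer the most specific prefix, as a real routing lookup does.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def rpf_check(source_ip: str, arrival_iface: str) -> bool:
    """A multicast packet passes the RPF check only if it arrived on the
    interface that leads back to its source."""
    return rpf_interface(source_ip) == arrival_iface

# Example: a stream from 172.19.1.100 arriving on vlan102 passes the check,
# while the same stream arriving on vlan101 would be dropped.
print(rpf_check("172.19.1.100", "vlan102"))  # True
print(rpf_check("172.19.1.100", "vlan101"))  # False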
Troubleshooting steps
Case 1. L3 multicast service on-demand is unavailable when configuring PIM-DM
To help understand the troubleshooting process, the following briefly introduces the principle of PIM-DM first. The general process of the PIM-DM data stream forwarding can be summarized as Flooding - Prune - Graft:
- Flooding: After receiving the multicast data stream, the PIM router directly connected to the multicast source floods the data stream to the entire PIM-DM multicast domain. In this way, all PIM routers in the PIM-DM multicast domain can learn the information of the multicast source.
- Prune: After the data stream is flooded, if no member of the multicast group is connected to a PIM router, that router sends a Prune message to notify its upstream router to stop forwarding the multicast stream.
- Graft: When a client connects to a PIM router, the router first checks whether the multicast group information learned during flooding contains the multicast group requested by the client. If it does, a Graft message is sent toward the multicast source of that group. Through the RPF check mechanism, the PIM Graft message is forwarded hop by hop along the shortest path tree between the PIM router directly connected to the client and the source. After receiving the Graft message, the PIM router directly connected to the source releases the corresponding multicast data stream to the downstream router that sent the Graft, and the multicast data stream is then forwarded to the client along the multicast shortest path tree generated by the graft process.
As shown in the figure below, four Omada switches form a typical L3 multicast topology with a loop. The multicast server is located in the 172.19.1.0/24 network segment, and the client is located in the 172.16.1.0/24 network segment. Because the server and the client are in different network segments, L3 multicast routing is required for the client to play the on-demand multicast stream normally. Refer to the following table for the configuration of each DUT in the topology.
Device | Interface | IP Address | VLAN
DUT#1 | 1/0/1 | 172.16.1.1/24 | 100
DUT#1 | 1/0/2 | 10.10.10.1/30 | 101
DUT#2 | 1/0/1 | 10.10.10.2/30 | 101
DUT#2 | 1/0/2 | 10.10.10.5/30 | 102
DUT#2 | 1/0/3 | 10.10.10.9/30 | 103
DUT#3 | 1/0/1 | 10.10.10.10/30 | 103
DUT#3 | 1/0/2 | 10.10.10.13/30 | 104
DUT#4 | 1/0/1 | 10.10.10.6/30 | 102
DUT#4 | 1/0/2 | 10.10.10.14/30 | 104
DUT#4 | 1/0/3 | 172.19.1.1/24 | 200
The following is the troubleshooting guidance for common PIM-DM issues based on the above topology:
Step 1. Check the connectivity between the multicast server and its directly connected PIM router
For L3 multicast services, the IP of the multicast server and the IP of its directly connected PIM router interface must be in the same network segment. You can use the command show ip route to check the unicast routing table on the PIM router directly connected to the multicast server; directly connected routing entries are marked with the code C. In this example, the PIM router directly connected to the source is DUT#4. The following is the result of show ip route on DUT#4, where the entry in the red box is the directly connected routing entry toward the source.
Furthermore, you can directly ping the IP address of the multicast server on the PIM router. If the ping succeeds, it means that the multicast server and the IP address of the directly connected PIM router interface are in the same network segment and there is no connectivity problem.
In addition, in PIM-DM mode, multicast routing entries are created by default for valid data streams. Therefore, you can also use the show ip mroute command to check the multicast routing table of the PIM router directly connected to the source. If the multicast routing entries corresponding to the multicast server have been established, there is no problem with the connectivity between the multicast server and its directly connected PIM router. The figure below shows the result of show ip mroute on DUT#4, indicating that a total of 20 channels have been created for the multicast server.
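If you need to generate a test stream from the multicast server while checking this step, a minimal sender such as the following Python sketch can be run on the server host. The group address 235.0.0.11 and UDP port 5001 are assumptions for illustration; adjust them to your actual on-demand service, and make sure the TTL is large enough to cross every L3 hop in the topology.

import socket
import time

GROUP = "235.0.0.11"   # assumed test group; use your real on-demand group
PORT = 5001            # assumed UDP port
TTL = 16               # must exceed the number of L3 hops to the client

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Set the multicast TTL so the stream is not dropped at the first router.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)

# Send one small packet per second; the first-hop PIM router (DUT#4 here)
# should create an entry for this stream, visible with show ip mroute.
while True:
    sock.sendto(b"multicast test payload", (GROUP, PORT))
    time.sleep(1)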
Step 2. Make sure that all network segments in the entire multicast domain are unicast reachable.
The PIM protocol uses the routing information of the unicast routing table to perform RPF checks on multicast packets. If a PIM router cannot reach a certain network segment in the multicast domain, the multicast data stream RPF check may fail, and eventually the on-demand broadcast may not be available. You can use the command show ip route to check the routing table. In this example, the multicast domain includes six network segments: vlan100, vlan101-104, and vlan200. Therefore, all PIM routers in the multicast domain should contain routing entries to reach these six segments. For details on how to configure dynamic routing, refer to the corresponding dynamic routing protocol configuration guide.
In particular, check the route entries on each PIM router that reach the network segment where the source is located. If the source IP of the multicast data stream is unreachable, the corresponding data stream will be directly discarded. You can use the command show ip route specify <multicast server ip> on the PIM router to check the entries.
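As a complementary check from an attached host (for example, the client PC), the short Python sketch below pings each interface IP from the address table above and reports which segments are unreachable. It assumes a Linux-style ping command and is only a convenience; the authoritative check is still show ip route on each PIM router.

import subprocess

# Interface IPs taken from the address table above.
TARGETS = [
    "172.16.1.1", "10.10.10.1", "10.10.10.2", "10.10.10.5", "10.10.10.9",
    "10.10.10.10", "10.10.10.13", "10.10.10.6", "10.10.10.14", "172.19.1.1",
]

for ip in TARGETS:
    # "-c 1 -W 2": one echo request with a 2-second timeout (Linux ping syntax).
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    status = "reachable" if result.returncode == 0 else "UNREACHABLE"
    print(f"{ip}: {status}")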
Step 3. Ensure that the PIM neighbor relationship between each PIM router in the multicast domain is established normally
Multicast data streams can only be forwarded hop by hop between PIM neighbors, so it is necessary to ensure that the PIM neighbor relationship is established among all PIM routers in the multicast domain. To perform this check, use the show ip pim neighbor command. In this example, only the interface vlan101 of DUT#1 has PIM neighbors, and the interfaces vlan101-103 of DUT#2 have PIM neighbors. If the PIM neighbor relationship cannot be established normally, check whether PIM is enabled on each interface and whether the IP of each interface is configured correctly.
Step 4. Ensure that the interface directly connected to the client has both IGMP and PIM enabled and that the L3 IGMP multicast table is correctly created
An interface with IGMP enabled can serve as an IGMP querier. The L3 IGMP querier performs strict checks on the source IP of report/leave messages, and only processes IGMP protocol messages whose source IP is in the same network segment as the querier. By default, IGMP is enabled as IGMPv3, which is compatible with IGMPv1 and IGMPv2 messages. In this example, an IGMPv2 multicast group is established on the IGMPv3 querier. You can use the command show ip igmp interface statistic to check the sending and receiving of IGMP messages. In addition, you can use the command show ip igmp group interface <type><id>{detail} to check the group establishment of L3 IGMP.
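To confirm that the client side actually issues an IGMP join and that the querier creates the corresponding group entry, a minimal receiver such as the Python sketch below can be run on the client host. The group 235.0.0.11 and port 5001 are the same assumed values as in the sender sketch above; whether the host sends IGMPv2 or IGMPv3 reports depends on its operating system settings.

import socket
import struct

GROUP = "235.0.0.11"   # assumed test group
PORT = 5001            # assumed UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group causes the host to send an IGMP membership report,
# which the querier on the directly connected interface should record.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

print(f"Joined {GROUP}, waiting for data on UDP port {PORT} ...")
data, addr = sock.recvfrom(2048)
print(f"Received {len(data)} bytes from {addr[0]}; L3 multicast delivery works.")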
Step 5. If the L3 multicast still fails, check the multicast routing hop by hop
In PIM-DM mode, multicast data streams are forwarded hop by hop along the shortest path tree (SPT) between the source and the client. In the example topology above, the SPT is Source → DUT#4 → DUT#2 → DUT#1 → Client. Since we have already checked the hops Source → DUT#4 and DUT#1 → Client, we now need to check the part DUT#4 → DUT#2 → DUT#1. It is recommended to check the multicast routing table from the client toward the source using the command show ip mroute, and verify whether the incoming interface and outgoing interface in the multicast routing table meet expectations. The figure below shows the complete multicast routing tables along DUT#4 → DUT#2 → DUT#1.
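When checking hop by hop, it can also help to confirm on the data plane whether the stream actually reaches a given link. The Python sketch below uses the third-party scapy library on a host attached to a mirror (SPAN) port of the link under test; the group address 235.0.0.11 and the interface name eth0 are assumptions, and the capture requires root privileges.

from scapy.all import sniff  # requires scapy to be installed (pip install scapy)

GROUP = "235.0.0.11"   # assumed group under test
IFACE = "eth0"         # host interface attached to the mirror/SPAN port

# Capture up to 5 packets destined to the group within 30 seconds.
packets = sniff(filter=f"udp and dst host {GROUP}", iface=IFACE,
                count=5, timeout=30)

if packets:
    print(f"Stream for {GROUP} is present on this link ({len(packets)} packets seen).")
else:
    print(f"No traffic for {GROUP} seen on this link; the fault is at or before this hop.")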
Case 2. L3 multicast service on-demand is unavailable when configuring PIM-SM
The flooding of PIM-DM occupies considerable bandwidth and puts potential pressure on all PIM routers in the multicast domain. Therefore, in large-scale networks, it is recommended to use the PIM-SM mode. By designating a Bootstrap Router (BSR) and a Rendezvous Point (RP), PIM-SM has all multicast data streams in the network registered with the RP via unicast, ensuring that the RP records all multicast information in the multicast domain, while other PIM routers obtain multicast group information through the RP. The operation of PIM-SM can be divided into two major phases:
- Phase 1: Client receives data stream through the RPT (Rendezvous Point Tree)
After receiving the IGMP group join request from the client, the PIM router directly connected to the client requests the multicast group data from the RP, and then the RP requests the multicast data from the corresponding multicast source based on the registered multicast group information. The multicast data stream reaches the client via the RP. In this phase, the multicast forwarding path through the RP is called the Rendezvous Point Tree (RPT).
- Phase 2: Client receives data stream through the SPT (Shortest Path Tree)
In phase 1, the PIM router directly connected to the client has received the multicast data stream, and it can learn the multicast source information from the source IP of that stream. Afterwards, this PIM router sends a Join message carrying the multicast source IP toward the multicast source based on the RPF check. Each intermediate PIM router receiving the Join message continues to perform RPF checks and sends Join messages toward the multicast source, until the PIM router connected to the source receives the Join message and forwards the corresponding data stream to its downstream devices. Now the multicast data stream is forwarded to the client along the SPT generated by the Join messages, and the data stream forwarded via the RPT in phase 1 is pruned, reducing the bandwidth pressure on the RP and other non-shortest paths.
As shown in the figure below, four Omada switches form a typical topology with a loop, deploying L3 multicast services in PIM-SM mode. The IP addresses of the multicast server, the client, and each interface are the same as in the PIM-DM case. In PIM-SM, DUT#2 is set as the BSR and DUT#3 is set as the RP.
Compared with PIM-DM, PIM-SM is more complex and requires more checks for troubleshooting. Steps 1-4 of PIM-DM troubleshooting are still applicable to PIM-SM, and it is recommended to check these steps first when troubleshooting PIM-SM. In addition, when checking network connectivity in the PIM-SM mode, special attention should be paid to the connectivity between source/RP/client. The following introduces the troubleshooting steps specific to PIM-SM.
Step 1. Ensure that the BSR and RP information of each PIM router in the multicast domain is the same, and the multicast group used by the multicast server has the corresponding RP information.
Use the command show ip pim bsr-router to check the BSR information. The following are the results of using this command on DUT#1 and DUT#2 (BSR). For the device configured as a BSR candidate, the Candidate BSR information is also displayed. To configure the BSR, use the command ip pim bsr-candidate interface <type><id>. It is recommended to run show ip pim bsr-router on all PIM routers in the multicast domain to check whether the Elected BSR information is consistent.
Use the command show ip pim rp mapping {candidate} to check the RP information. The following are the results of using this command on DUT#2 and DUT#3 (RP). For the device configured as an RP candidate, use the command show ip pim rp mapping candidate to check the RP-candidate configuration. It is recommended to run show ip pim rp mapping on all PIM routers in the multicast domain to check whether the RP mapping information is consistent.
To configure the RP, use the command ip pim rp-candidate interface <type><id><group addr><group mask>.
Step 2. Use the command show ip pim rp hash <group addr> to confirm the RP address of the multicast group with on-demand failure.
In a large-scale network, different multicast groups may correspond to different RP addresses. For a multicast group with on-demand failure, find the corresponding RP address first. The following shows the query of the RP address of multicast group 235.0.0.11 on DUT#4.
Step 3. Check the establishment and maintenance of RPT/SPT on the RP and the PIM router directly connected to the client
The RP should contain the multicast group registration information of the entire multicast domain. As shown in the figure below, the result of show ip mroute on DUT#3 (RP) shows the information of all multicast groups 235.0.0.11-235.0.0.30. Here the 10 multicast groups 235.0.0.11-235.0.0.20 are displayed in two formats, and the multicast routing entries whose source IP is * are RPT entries. Because the connected devices use IGMPv2 to join groups, and IGMPv2 membership is not limited to specific multicast source IPs, * is displayed as the source.
The following figure shows the multicast routing table information on the PIM router directly connected to the client. This table only contains information for 235.0.0.11-235.0.0.20 because the connected clients only initiate IGMP join requests for these multicast groups.
Step 4. If the faulty device is still not located, perform a hop-by-hop check along the multicast forwarding path.
The red boxes and arrows in the figure below highlight the SPT forwarding paths checked hop by hop, which indicate the location of the fault. Depending on the stage of the fault, it may be necessary to analyze the RPT and SPT forwarding paths separately using the same method.
Conclusion
This article briefly describes the characteristics and principles of L3 multicast, introduces the deployment of PIM-DM and PIM-SM, and presents their troubleshooting steps. If your problem persists, contact TP-Link Support for technical assistance.
To learn more details about each function and configuration, please go to the Download Center to download the manual of your product.