Checking Faulty Cables

I recently had to work with a 3rd party to diagnose a link between our devices and came across this handy command. The link in question was a pretty hefty (75m-ish) UTP cable run between a Cisco and an HP switch. I have visibility of the Cisco switch, the structured cabling into the patch panel, and the 3rd party’s cable. Unfortunately I didn’t have a DC Operations tech with access to a Fluke, or the ability to interpret the output of one, but they did have a laptop with a 100Mbps NIC (this becomes important later on).

So I started by running the diagnostic on the production connection. It’s not working, so I don’t have to worry about taking stuff down. This gives me the following:

switchA#test cable-diagnostics tdr interface gi7/21
TDR test started on interface Gi7/21
A TDR test can take a few seconds to run on an interface
Use 'show cable-diagnostics tdr' to read the TDR results.

switchA#show cable-diagnostics tdr interface gi7/21

TDR test last run on: July 09 10:30:20
Interface Speed Pair Cable length        Distance to fault   Channel Pair status
--------- ----- ---- ------------------- ------------------- ------- ------------
Gi7/21    auto  1-2  77   +/- 6 m        N/A                 Invalid Terminated
                3-6  75   +/- 6 m        N/A                 Invalid Terminated
                4-5  75   +/- 6 m        N/A                 Invalid Terminated
                7-8  N/A                 8    +/- 6 m        Invalid Open

It doesn’t really tell me much, apart from the fact that there’s most likely an infrastructure fault. This could be a faulty cable, a bad patch panel termination, or a faulty switchport.

I now start my troubleshooting. I need to prove out the part of the path I have visibility of. The tech tells me there is structured cabling to the line card on the switch, so I get him to connect his laptop to the point the B end connects to, and I run the diagnostic again.

switchA#show cable-diagnostics tdr interface gi7/21

TDR test last run on: July 09 10:31:55
Interface Speed Pair Cable length        Distance to fault   Channel Pair status
--------- ----- ---- ------------------- ------------------- ------- ------------
Gi7/21    100   1-2  16   +/- 6 m        N/A                 Pair A  Terminated
                3-6  16   +/- 6 m        N/A                 Pair B  Terminated
                4-5  N/A                 12   +/- 6 m        Invalid Short
                7-8  N/A                 11   +/- 6 m        Invalid Short

Interestingly, this works, but only at 100Mbps. I can also see that a couple of pairs in the cable are not terminated correctly. A quick Google search turned up that pair D is needed for 1Gbps and pair C is used for PoE, while 100Mbps only needs pairs A and B. I also see that the fault is located within a few metres of the switchport (give or take the +/- 6 m margin), suggesting the problem is at my end.

I double check this against a working port:
Another_switch#show cable-diagnostics tdr interface gigabitEthernet 7/16

TDR test last run on: July 09 10:18:43
Interface Speed Pair Cable length        Distance to fault   Channel Pair status
--------- ----- ---- ------------------- ------------------- ------- ------------
Gi7/16    1000  1-2  0    +/- 6 m        N/A                 Pair B  Terminated
                3-6  0    +/- 6 m        N/A                 Pair A  Terminated
                4-5  309  +/- 6 m        N/A                 Pair D  Terminated
                7-8  99   +/- 6 m        N/A                 Pair C  Terminated

I now reconnect the customer end and set the port speed to 100Mbps manually:

interface Gi7/21
speed 100

Unsurprisingly, the port comes up. The cable diagnostic shows the same output as it did above. I suspect the problem is either a faulty punch down on pair D, an incorrectly crimped cable, or the customer manually setting their speed to 100Mbps for some reason.

Now usually, if this were structured cabling to a new switch, I would ask for the patch panel terminations to be punched down again. However, this time, I know the cable is the newest part in the puzzle and ask the tech to re-crimp it. It works, and is now up at 1Gbps.
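
One thing glossed over above: the manually configured speed has to come back off before the port will negotiate up to 1Gbps. A quick sketch of that step and the check afterwards, assuming the port is still Gi7/21:

interface Gi7/21
speed auto

switchA#show interfaces gi7/21 status

The status output should then show the speed and duplex as auto-negotiated (a-1000, a-full).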

I have now learnt something new about cabling pairs. There are a few things to be aware of:
– The “Distance to fault” field can throw up false positives.
– I do not know what the output looks like for a correctly terminated cable connected to a 100Mbps NIC. Would it see the pair D termination even though it is not being used for communication?
– The connection drops momentarily when the diagnostic is run, so be aware of this if troubleshooting a working connection (see the sketch below).
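
If you do end up running the diagnostic against a live link, it’s worth checking afterwards that the port has come back cleanly. A sketch, with the interface name purely as an example:

switchA#test cable-diagnostics tdr interface gi7/21
switchA#show interfaces gi7/21 | include line protocol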

Check 10Gb Interfaces On An ASA

I recently had to deploy an ASA pair. One of the prerequisites is to make sure there’s an optic in the interface we’re going to use. On a switch, you have the following options:

#show int te5/4 transceiver
Transceiver monitoring is disabled for all interfaces.

ITU Channel not available (Wavelength not available),
Transceiver is internally calibrated.
If device is externally calibrated, only calibrated values are printed.
++ : high alarm, + : high warning, - : low warning, -- : low alarm.
NA or N/A: not applicable, Tx: transmit, Rx: receive.
mA: milliamperes, dBm: decibels (milliwatts).

                                        Optical  Optical
           Temperature Voltage Current  Tx Power Rx Power
Port       (Celsius)   (Volts) (mA)     (dBm)    (dBm)
---------- ----------- ------- -------- -------- --------
Te5/4      27.0        0.00    7.6 --   -2.2     -2.7


Or

#show int tenGigabitEthernet 5/4 capabilities
TenGigabitEthernet5/4
Model: VS-S720-10G
Type: 10Gbase-SR
Speed: 10000
Duplex: full
Trunk encap. type: 802.1Q,ISL
Trunk mode: on,off,desirable,nonegotiate
Channel: yes
Broadcast suppression: percentage(0-100)
Flowcontrol: rx-(off,on),tx-(off,on)
Membership: static
Fast Start: yes
QOS scheduling: rx-(8q4t), tx-(1p7q4t)
QOS queueing mode: rx-(cos,dscp), tx-(cos,dscp)
CoS rewrite: yes
ToS rewrite: yes
Inline power: no
Inline power policing: no
SPAN: source/destination
UDLD yes
Link Debounce: yes
Link Debounce Time: yes
Ports-in-ASIC (Sub-port ASIC) : 1-5 (3-4)
Remote switch uplink: no
Dot1x: yes
Port-Security: yes

Or

#show inventory "Transceiver Te6/4"
NAME: "Transceiver Te6/4", DESCR: "X2 Transceiver 10Gbase-SR Te6/4"
PID: X2-10GB-SR , VID: V05 , SN: FNS14501PP3

Unfortunately, on an ASA, there is no way to check if a transceiver is present. You can use your CCO account to submit a feature request though.

Kill An SSH Connection

Check what’s connected to the switch first:

#show ssh
%No SSHv1 server connections running.
Connection Version Mode Encryption Hmac     State           Username
0          2.0     IN   aes128-cbc hmac-md5 Session started user1
0          2.0     OUT  aes128-cbc hmac-md5 Session started user1
1          2.0     IN   aes128-cbc hmac-md5 Session started user1
1          2.0     OUT  aes128-cbc hmac-md5 Session started user1

Kill the session using the “disconnect” command:

#disconnect ssh ?
The number of the active SSH connection
vty Virtual terminal

#disconnect ssh 0
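
If the “disconnect” command isn’t available on your platform, you can get the same result at the line level with “show users” and “clear line”. A sketch, assuming the session you want to kill is sitting on vty 1:

#show users
#clear line vty 1

“show users” marks your own session with an asterisk, which is handy for making sure you don’t clear the line you’re actually working on.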

Fun With Route-Maps And BGP

I’ve always been a little bit hazy on the circumstances under which a BGP neighbour needs to be cleared. This extremely informative page from Cisco casts a bit of light on the situation, especially the section on when to clear a BGP neighbourship.

The official line is that any inbound or outbound policy update requires the BGP session to be cleared before it takes effect. The direction in which you clear the neighbourship (inbound or outbound) depends on the direction in which the policy is applied.
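
For reference, the soft-clear variants look something like this (a sketch, using the 1.1.1.2 neighbour from the lab below; soft inbound needs either soft-reconfiguration inbound configured or route-refresh support on both sides):

R1#clear ip bgp 1.1.1.2 soft out
R1#clear ip bgp 1.1.1.2 soft in

The outbound version re-sends our advertisements through whatever outbound policy is now configured; the inbound version re-runs what we’ve received from the neighbour through the new inbound policy.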

So my question is whether applying a new route-map constitutes a policy update. Now this may sound like a stupid question (remember the title of the blog, please, dear reader), but someone legitimately asked me whether applying a new policy constituted an update. So let’s find out.

This is my topology:

Test Topology

This is what I’m doing:
– Loopback0 (10.1.1.1/32) is advertised into OSPF on R1 along with the 1.1.1.0/30 network.
– The 1.1.1.0/30 network is advertised into OSPF on R2.
– BGP is used to advertise the 3.3.3.0/24 network using a peer-group TEST.
– R1 and R2 have an iBGP peering in AS 65000 using the physical addresses of the /30.
– The route-map TEST on R1 sets the next hop of the routes it advertises (matched by access-list 1) to R1’s Loopback0 address, 10.1.1.1.

BGP configuration on R1 and R2 looks like this:


R1#show run | section router bgp
router bgp 65000
 bgp router-id 1.1.1.1
 bgp log-neighbor-changes
 neighbor TEST peer-group
 neighbor TEST remote-as 65000
 neighbor 1.1.1.2 peer-group TEST
 !
 address-family ipv4
  neighbor TEST soft-reconfiguration inbound
  neighbor TEST route-map TEST out
  neighbor 1.1.1.2 activate
  no auto-summary
  no synchronization
  network 2.2.2.2 mask 255.255.255.255
  network 3.3.3.0 mask 255.255.255.0
 exit-address-family

R1#show run | section access-lis
access-list 1 permit 2.2.2.2
access-list 1 permit 10.1.1.1
access-list 1 permit 3.3.3.0 0.0.0.255
R1#show run | section route-map
neighbor TEST route-map TEST out
route-map TEST permit 5
match ip address 1
set ip next-hop 10.1.1.1

R2#show run | section router bgp
router bgp 65000
no synchronization
bgp log-neighbor-changes
neighbor 1.1.1.1 remote-as 65000
neighbor 1.1.1.1 soft-reconfiguration inbound
no auto-summary

Pretty vanilla as you can see.

You can see the routes being learnt on R2 via BGP:

R2#sh ip bgp
BGP table version is 17, local router ID is 10.1.1.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
r>i2.2.2.2/32       10.1.1.1                 0    100      0 i
*>i3.3.3.0/24       10.1.1.1                 0    100      0 i

This shows the 3.3.3.0/24 network with its next hop set to 10.1.1.1 by the route-map on R1.

I now remove the route-map on R1 so it’s no longer applied to the peer-group, then clear the session to see what BGP comes up with.
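
For reference, the removal looks something like this (a sketch; I’m showing a soft outbound clear here, though a full “clear ip bgp 1.1.1.2” would also do the job):

R1(config)#router bgp 65000
R1(config-router)#address-family ipv4
R1(config-router-af)#no neighbor TEST route-map TEST out
R1(config-router-af)#end
R1#clear ip bgp 1.1.1.2 soft out

After the clear, R2 shows the following: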


R2#sh ip bgp
BGP table version is 19, local router ID is 10.1.1.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
r>i2.2.2.2/32       1.1.1.1                  0    100      0 i
*>i3.3.3.0/24       1.1.1.1                  0    100      0 i

You can see that, without the route-map, BGP naturally uses the 1.1.1.1 peering address on R1 as the next hop.

I then reapplied the route-map on R1 so the next hop would again be set to 10.1.1.1, and checked the route on R2. I observed no next-hop change.
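
Reapplying it is just the reverse (again, a sketch):

R1(config)#router bgp 65000
R1(config-router)#address-family ipv4
R1(config-router-af)#neighbor TEST route-map TEST out
R1(config-router-af)#end

At this point R2 still shows 1.1.1.1 as the next hop, because no update has been sent to it yet.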

I then cleared the BGP neighbourship outbound using “clear ip bgp 1.1.1.2 soft out” to see what would happen:

R2#sh ip bgp
BGP table version is 21, local router ID is 10.1.1.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
r>i2.2.2.2/32       10.1.1.1                 0    100      0 i
*>i3.3.3.0/24       10.1.1.1                 0    100      0 i

As you can see, the next hop set by R1 has now taken effect. Pretty basic stuff, but it’s something you find yourself questioning in the absence of explicit information. I blame the fact that a lot of technical documentation needs to be interpreted very carefully.

This has also cleared up the confusion about which direction the neighbourship needs to be cleared in, depending on the direction the policy is applied. Here, we advertise a route out to a neighbour, but we influence the next hop of traffic coming back towards us (we’re R1, by the way). Because the route-map is applied outbound, clearing the neighbourship outbound from R1 effects the change we’re trying to achieve.