Hi folks,<div>I'm new, first post, yadda yadda..</div><div><br></div><div>Narender,</div><div>In my experience with the bnx2 driver on Red Hat (and thus CentOS) you need to disable MSI: in my testing, Dell R710s would randomly drop off the network until MSI was disabled. Add the following and reload the bnx2 module. Some say you need to reboot as well, but see what works; I can't test it right now, sorry.</div>
<div> </div><div># cat /etc/modprobe.d/bnx2 </div><div><div> options bnx2 disable_msi=1</div><div><br></div><div>Secondly, I also like to check the HWADDR of each interface to make sure they follow on from one another. If they do not count up nicely (on Dells you'll usually see them 2 apart in hex) then the enumeration might be borked; in that case consider adding HWADDR lines to the ifcfg files. Anyway, I digress. Do: </div>
<div><br></div><div>grep -i hwaddr /etc/sysconfig/network-scripts/ifcfg-eth[0-9]</div><div><br></div><div>Failing that, it could always be switch port related. Has anyone been working around the area? Could someone have bumped a cable and it's now a bit unhappy? You never know. The issue could well be something other than what I've mentioned here; I'm just letting you know what I've seen, so please let me know if it turns out to be something different.</div>
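<div><br></div><div>As an aside, that "2 apart in hex" spacing is easy to sanity-check in a few lines of shell. Here's a rough sketch (bash, untested on your box) using the four MACs from the dmesg output quoted below:</div>

```shell
# Sketch: check that consecutive NIC MACs count up by 2, as I've seen
# on Dells. The four MACs below are taken from the quoted dmesg output.
macs="00219b8fd3bc 00219b8fd3be 00219b8fd3c0 00219b8fd3c2"
prev=""
for m in $macs; do
  # take the last octet (final two hex chars) and convert to decimal
  last=$((16#${m#??????????}))
  if [ -n "$prev" ] && [ $((last - prev)) -ne 2 ]; then
    echo "unexpected gap before $m"
  fi
  prev=$last
done
echo "MAC spacing check done"
```

<div>If that prints an "unexpected gap" line, the device enumeration may not match the physical port order, and pinning HWADDR in the ifcfg files is worth a try.</div>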
<div><br></div><div>Cheers</div><div><br></div><div>-Gus</div><div><br></div><div><div class="gmail_quote">On 24 November 2010 11:52, Narender <span dir="ltr"><<a href="mailto:narender.hooda@gmail.com">narender.hooda@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Hi<br>
<br>
We are facing a strange problem from past few days. Below are the logs<br>
attached for ref.<br>
<br>
We are using nic bonding to our dell server. It has centos 5.4<br>
installed with 4 nic cards. This machine was working good from past<br>
few months. But from previous 2-3 days it went out of network by<br>
itself.<br>
<br>
Any pointer or help would be much appreciated.<br>
<br>
<br>
<br>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++<br>
cat /proc/cpuinfo<br>
<br>
processor : 15<br>
vendor_id : GenuineIntel<br>
cpu family : 6<br>
model : 26<br>
model name : Intel(R) Xeon(R) CPU E5530 @ 2.40GHz<br>
stepping : 5<br>
==============================================================================================<br>
<br>
[root@S log]# uname -a<br>
Linux ABC.NETXXXXXX 2.6.18-164.6.1.el5 #1 SMP Tue Nov 3 16:12:36 EST<br>
2009 x86_64 x86_64 x86_64 GNU/Linux<br>
============================================================================================<br>
<br>
[root@S log]# cat /etc/redhat-release<br>
CentOS release 5.4 (Final)<br>
<br>
[root@SJC-SRCH-03-R ~]# dmesg |grep eth | more<br>
eth0: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found<br>
at mem d6000000, IRQ 90, node addr 00219b8fd3bc<br>
eth1: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found<br>
at mem d8000000, IRQ 98, node addr 00219b8fd3be<br>
eth2: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found<br>
at mem da000000, IRQ 106, node addr 00219b8fd3c0<br>
eth3: Broadcom NetXtreme II BCM5709 1000Base-T (C0) PCI Express found<br>
at mem dc000000, IRQ 114, node addr 00219b8fd3c2<br>
cnic: Added CNIC device: eth0<br>
cnic: Added CNIC device: eth1<br>
cnic: Added CNIC device: eth2<br>
cnic: Added CNIC device: eth3<br>
bonding: bond0: Adding slave eth0.<br>
bnx2: eth0: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: enslaving eth0 as a backup interface with a down link.<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: Adding slave eth1.<br>
bnx2: eth1: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: enslaving eth1 as a backup interface with a down link.<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bonding: bond0: link status definitely up for interface eth0.<br>
bonding: bond0: making interface eth0 the new active one.<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bonding: bond0: link status definitely down for interface eth1, disabling it<br>
bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bonding: bond0: making interface eth1 the new active one.<br>
bnx2: eth2: using MSIX<br>
ADDRCONF(NETDEV_UP): eth2: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2: eth2: using MSIX<br>
ADDRCONF(NETDEV_UP): eth2: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2: eth3: using MSIX<br>
ADDRCONF(NETDEV_UP): eth3: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2: eth3: using MSIX<br>
ADDRCONF(NETDEV_UP): eth3: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2: eth2: using MSIX<br>
ADDRCONF(NETDEV_UP): eth2: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2: eth2: using MSIX<br>
ADDRCONF(NETDEV_UP): eth2: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2i: iSCSI not supported, dev=eth2<br>
bnx2: eth3: using MSIX<br>
ADDRCONF(NETDEV_UP): eth3: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2: eth3: using MSIX<br>
ADDRCONF(NETDEV_UP): eth3: link is not ready<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bnx2i: iSCSI not supported, dev=eth3<br>
bonding: bond0: Removing slave eth1<br>
bonding: bond0: releasing active interface eth1<br>
bonding: bond0: making interface eth0 the new active one.<br>
bonding: bond0: Removing slave eth0<br>
bonding: bond0: releasing active interface eth0<br>
bonding: unable to remove non-existent slave eth1 for bond bond0.<br>
bonding: bond0: Adding slave eth0.<br>
bnx2: eth0: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: enslaving eth0 as a backup interface with a down link.<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: Adding slave eth1.<br>
bnx2: eth1: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: enslaving eth1 as a backup interface with a down link.<br>
bonding: bond0: link status definitely up for interface eth0.<br>
bonding: bond0: making interface eth0 the new active one.<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bonding: bond0: link status definitely down for interface eth1, disabling it<br>
bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bonding: bond0: making interface eth1 the new active one.<br>
bonding: bond0: Removing slave eth0<br>
bonding: bond0: Warning: the permanent HWaddr of eth0 -<br>
00:21:9B:8F:D3:BC - is still in use by bond0. Set the HWaddr of eth0<br>
to a different address to avoid<br>
conflicts.<br>
bonding: bond0: releasing backup interface eth0<br>
bonding: bond0: Removing slave eth1<br>
bonding: bond0: releasing active interface eth1<br>
bonding: bond0: Adding slave eth0.<br>
bnx2: eth0: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: enslaving eth0 as a backup interface with a down link.<br>
bnx2i: iSCSI not supported, dev=eth0<br>
bonding: bond0: Adding slave eth1.<br>
bnx2: eth1: using MSIX<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: enslaving eth1 as a backup interface with a down link.<br>
bonding: bond0: link status definitely up for interface eth0.<br>
bonding: bond0: making interface eth0 the new active one.<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bnx2i: iSCSI not supported, dev=eth1<br>
bonding: bond0: link status definitely down for interface eth1, disabling it<br>
bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive &<br>
transmit flow control ON<br>
bonding: bond0: link status definitely up for interface eth1.<br>
bonding: bond0: making interface eth1 the new active one.<br>
===========================================================================================================================================<br>
<br>
============================================================================================<br>
[root@S ~]# modinfo e1000 | more<br>
filename:<br>
/lib/modules/2.6.18-164.6.1.el5/kernel/drivers/net/e1000/e1000.ko<br>
version: 7.3.20-k2-NAPI<br>
license: GPL<br>
description: Intel(R) PRO/1000 Network Driver<br>
author: Intel Corporation, <<a href="mailto:linux.nics@intel.com">linux.nics@intel.com</a>><br>
srcversion: 26DD82C709EB760C93D4103<br>
alias: pci:v00008086d00001000sv*sd*bc*sc*i*<br>
depends:<br>
vermagic: 2.6.18-164.6.1.el5 SMP mod_unload gcc-4.1<br>
parm: TxDescriptors:Number of transmit descriptors (array of int)<br>
parm: TxDescPower:Binary exponential size (2^X) of each<br>
transmit descriptor (array of int)<br>
parm: RxDescriptors:Number of receive descriptors (array of int)<br>
parm: Speed:Speed setting (array of int)<br>
parm: Duplex:Duplex setting (array of int)<br>
parm: AutoNeg:Advertised auto-negotiation setting (array of int)<br>
parm: FlowControl:Flow Control setting (array of int)<br>
parm: XsumRX:Disable or enable Receive Checksum offload (array of int)<br>
parm: TxIntDelay:Transmit Interrupt Delay (array of int)<br>
parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)<br>
parm: RxIntDelay:Receive Interrupt Delay (array of int)<br>
parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)<br>
parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int)<br>
parm: SmartPowerDownEnable:Enable PHY smart power down (array of int)<br>
parm: KumeranLockLoss:Enable Kumeran lock loss workaround<br>
(array of int)<br>
parm: copybreak:Maximum size of packet that is copied to a<br>
new buffer on receive (uint)<br>
parm: debug:Debug level (0=none,...,16=all) (int)<br>
=================================================================================================<br>
<br>
[root@ ~]# ethtool eth0<br>
Settings for eth0:<br>
Supported ports: [ TP ]<br>
Supported link modes: 10baseT/Half 10baseT/Full<br>
100baseT/Half 100baseT/Full<br>
1000baseT/Full<br>
Supports auto-negotiation: Yes<br>
Advertised link modes: 10baseT/Half 10baseT/Full<br>
100baseT/Half 100baseT/Full<br>
1000baseT/Full<br>
Advertised auto-negotiation: Yes<br>
Speed: 1000Mb/s<br>
Duplex: Full<br>
Port: Twisted Pair<br>
PHYAD: 1<br>
Transceiver: internal<br>
Auto-negotiation: on<br>
Supports Wake-on: g<br>
Wake-on: d<br>
<br>
<br>
<br>
=============================================<br>
root@S ~]# ethtool -i eth0<br>
driver: bnx2<br>
version: 1.9.3<br>
firmware-version: 4.6.4 NCSI 1.0.6<br>
bus-info: 0000:01:00.0<br>
[=========================<br>
[root@ ~]# ethtool -i eth1<br>
driver: bnx2<br>
version: 1.9.3<br>
firmware-version: 4.6.4 NCSI 1.0.6<br>
bus-info: 0000:01:00.1<br>
================================<br>
[root@S ~]# ethtool -i bond0<br>
driver: bonding<br>
version: 3.4.0<br>
firmware-version: 2<br>
bus-info:<br>
===================================<br>
<font color="#888888">--<br>
Gllug mailing list - <a href="mailto:Gllug@gllug.org.uk">Gllug@gllug.org.uk</a><br>
<a href="http://lists.gllug.org.uk/mailman/listinfo/gllug" target="_blank">http://lists.gllug.org.uk/mailman/listinfo/gllug</a><br>
</font></blockquote></div><br></div></div>