Friday, 11 July 2014

Playing around with NIC settings.

Setting NIC speed and duplex

Solaris is often unable to correctly auto-negotiate duplex settings with a link partner (e.g. switch), especially when the switch is set to 100Mbit full-duplex. You can force the NIC into 100Mbit full-duplex by disabling auto-negotiation and 100Mbit half-duplex capability.

Example with hme0:

1. Make the changes to the running system.
# ndd -set /dev/hme adv_100hdx_cap 0
# ndd -set /dev/hme adv_100fdx_cap 1
# ndd -set /dev/hme adv_autoneg_cap 0

2. Make kernel parameter changes to preserve the speed and duplex settings after a reboot.
# vi /etc/system
Add:
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1

Note: the /etc/system change affects all hme interfaces if multiple NICs are present (e.g. hme0, hme1).
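The /etc/system edit above can be scripted instead of done by hand in vi. A minimal sketch, run here against a scratch copy of the file so it is safe to try anywhere (point ETC_SYSTEM at the real /etc/system on a live Solaris box):

```shell
# Append the forced 100Mbit full-duplex settings to a copy of /etc/system,
# then verify they landed. The tunables mirror the hme example above.
ETC_SYSTEM=$(mktemp)            # stand-in for /etc/system
cat >> "$ETC_SYSTEM" <<'EOF'
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1
EOF
# Confirm all three tunables are present
grep -c '^set hme:' "$ETC_SYSTEM"
```

Running the grep check after the edit catches typos before the next reboot picks the file up.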



Procedure to add a new disk to a Solaris zone

By default you cannot add a raw device to a zone without rebooting the zone, but there is a trick to avoid the downtime.

I found a little hack that makes the raw device available to a running zone without a reboot. Here is how:

1) Add the device to the zonecfg 

# zonecfg -z barvozone1
zonecfg:barvozone1> add device
zonecfg:barvozone1:device> set match=/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
zonecfg:barvozone1:device> end
zonecfg:barvozone1> exit

2) Use the mknod command to create the device node in the zone's dev directory

# ls -l /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
lrwxrwxrwx   1 root     root          67 Feb 18 15:34 /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw

# ls -l /devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw
crw-r-----   1 root     sys      118, 128 Mar  5 23:55 /devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw

# cd /barvozone1/zonepath/dev

# mknod c3t60050768018A8023B8000000000000F0d0s0 c 118 128

That's it. The raw device is now visible within the zone, and you can get on with your work without any downtime. Isn't it cool?
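The major and minor numbers for mknod come from the "118, 128" fields in the ls -l output above, and pulling them out by hand is error-prone. A minimal sketch of extracting them with awk; DEVLINE here is the sample listing from above (on a live system you would capture `ls -lL` output for the device instead):

```shell
# Derive the mknod arguments from a character-device ls -l line.
# Field 5 is the major number (with a trailing comma), field 6 the minor.
DEVLINE='crw-r-----   1 root     sys      118, 128 Mar  5 23:55 /devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw'
MAJOR=$(echo "$DEVLINE" | awk '{sub(",", "", $5); print $5}')
MINOR=$(echo "$DEVLINE" | awk '{print $6}')
# Print the mknod command rather than running it (dry run)
echo "mknod c3t60050768018A8023B8000000000000F0d0s0 c $MAJOR $MINOR"
```

Echoing the command first lets you eyeball the numbers before creating the node in the zone's dev directory.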

Sunday, 14 April 2013

NIC Bonding in RHEL


Steps for configuring bonding
In this document we configure bond0 with interfaces eth0 and eth1.

Step 1 - Load the kernel module
For a channel bonding interface to be valid, the bonding kernel module must be loaded. To ensure the module is loaded when the channel bonding interface is brought up, create a new file as root named <bonding>.conf in the /etc/modprobe.d/ directory. We can name this file anything, but it must end with a .conf extension. Insert the following line in the new file:
alias bond<N> bonding
Replace <N> with the interface number, such as 0. If we want to configure more than one bonding interface, there must be a corresponding entry in the /etc/modprobe.d/<bonding>.conf file for each one.
In this example we are configuring bond0 and the file name is bonding.conf
  
[root@praji2 modprobe.d]# cat /etc/modprobe.d/bonding.conf
  alias bond0 bonding

Step 2 - Create the channel bonding interface
We need to create a channel bonding interface configuration file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond<N>, replacing <N> with the number for the interface, such as 0, and specify the bonding parameters in the file. Here we create the ifcfg-bond0 file with the following contents:
[root@praji2 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
IPADDR=172.16.1.207
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=1000"

Step 3 - Configure the network interfaces
After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER= and SLAVE= directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical. For example, if two Ethernet interfaces are being channel bonded, both eth0 and eth1 may look like the following example.
Interface eth0 configuration:
[root@praji2 network-scripts]# cat ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet
Interface eth1 configuration
[root@praji2 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
TYPE=Ethernet
USERCTL=no
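The three configuration files above can be generated in one go, since the slave configs differ only in DEVICE=. A minimal sketch that writes them into a scratch directory (point SCRIPTS at /etc/sysconfig/network-scripts on a real system; the IP and bonding options are the ones used in this example):

```shell
# Generate ifcfg-bond0 plus identical slave configs for eth0 and eth1.
SCRIPTS=$(mktemp -d)            # stand-in for /etc/sysconfig/network-scripts
cat > "$SCRIPTS/ifcfg-bond0" <<'EOF'
DEVICE=bond0
IPADDR=172.16.1.207
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=1000"
EOF
for IF in eth0 eth1; do         # the slave files differ only in DEVICE=
  cat > "$SCRIPTS/ifcfg-$IF" <<EOF
DEVICE=$IF
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
USERCTL=no
TYPE=Ethernet
EOF
done
ls "$SCRIPTS"
```

Looping over the slave names keeps the MASTER/SLAVE directives consistent across all bonded interfaces.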
After configuring the interfaces, bring up the bond with the following command:
[root@praji2 network-scripts]# ifconfig bond0 up
If the bonding is configured correctly, we can view the configuration using the ifconfig command:
[root@praji2 network-scripts]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          inet addr:172.16.1.207  Bcast:172.16.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe69:31c4/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:19676 errors:0 dropped:0 overruns:0 frame:0
          TX packets:342 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1623240 (1.5 MiB)  TX bytes:42250 (41.2 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:10057 errors:0 dropped:0 overruns:0 frame:0
          TX packets:171 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:832257 (812.7 KiB)  TX bytes:22751 (22.2 KiB)
          Interrupt:19 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:9620 errors:0 dropped:0 overruns:0 frame:0
          TX packets:173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:791043 (772.5 KiB)  TX bytes:20207 (19.7 KiB)
          Interrupt:19 Base address:0x2080


lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:104 (104.0 b)  TX bytes:104 (104.0 b)

To view all existing bonds, run the following command; it will list bond0:
[root@praji2 network-scripts]# cat /sys/class/net/bonding_masters
bond0
To view the bonding mode in use, run the following command:
[root@praji2 network-scripts]# cat /sys/class/net/bond0/bonding/mode
balance-rr 0
To verify the bonding, we can use the following command; it lists the bonding details:
[root@praji2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1000
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:c4

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:ce
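The slave state shown in /proc/net/bonding/bond0 can also be checked with a short awk script, which is handy in monitoring jobs. A minimal sketch, parsing the sample output above (BOND_STATUS is a stand-in for reading the /proc file directly):

```shell
# List each slave interface of bond0 together with its MII link status.
BOND_STATUS='Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up'
echo "$BOND_STATUS" | awk -F': ' '
  /^Slave Interface/ { slave=$2 }               # remember the slave name
  /^MII Status/ && slave { print slave, $2; slave="" }'
```

On a live system, replace the echo with `cat /proc/net/bonding/bond0`; the guard on `slave` skips the bond-level MII Status line that precedes the slave sections.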
  

Saturday, 30 March 2013

Cheat Sheet for VXDMP

List controllers:
  vxdmpadm listctlr all
List enclosures:
  vxdmpadm listenclosure all
  vxdmpadm listenclosure <enclosure>
Display paths:
  vxdmpadm getsubpaths ctlr=fscsi2
  vxdmpadm getsubpaths dmpnodename=<dmp-device-name>
Who controls the path:
  vxdmpadm getdmpnode nodename=c3t2d0
  vxdmpadm getdmpnode enclosure=<enclosure>
Enable/disable a controller:
  vxdmpadm disable ctlr=<controller>
  vxdmpadm enable ctlr=<controller>
Statistics:
  vxdmpadm iostat start
  vxdmpadm iostat reset
  vxdmpadm iostat show all
I/O policy and load balancing:
  vxdmpadm getattr enclosure <enclosure> iopolicy
  vxdmpadm setattr enclosure <enclosure> iopolicy=<policy>
  Policies: adaptive, priority, balanced (default), round-robin, minimumq, single-active
Path type:
  vxdmpadm setattr path <path-name> pathtype=<path-type>
  Path types: active, nomanual, nopreferred, primary, secondary, standby
Display the restore/error daemons:
  vxdmpadm stat restored
  vxdmpadm stat errord
Start the restore daemon:
  vxdmpadm start restored
Stop the restore daemon:
  vxdmpadm stop restored

Friday, 22 March 2013

Scsi-Initiator-ID

Why Change scsi-initiator-id
============================ 

Shared storage in clusters uses the Multi-Initiator capability of the SCSI
specification.  PCI systems use PCI host adapter cards for the SCSI interface.
Otherwise, the operation is the same as in SBus systems.  The scsi-initiator-id
must be changed because you cannot have two controllers (or SCSI devices) with
the same SCSI ID on the same SCSI chain.  This is true for all shared storage.

However, the scsi-initiator-id of controllers that link private storage should
be returned to 7 or it will conflict with the SCSI ID 6 of the CD-ROM.  Also,
for 6-slot MultiPacks you must change the SCSI IDs to the 9-14 range.  Use the
switch on the back of the MultiPack to change this.

The normal documented procedure for setting up dual-hosted MultiPacks is to set
the scsi-initiator-id for one node to 6 and then reset specific SCSI adapters 
(the ones that are not attached to the dual-hosted disks) to 7.  The 
scsi-initiator-id for the other node attached to the disk must be left at the
default setting (7). This approach may be somewhat ERROR-PRONE because you can
get SCSI errors if you do not reset ALL of the adapters that are NOT attached to
the dual-hosted disks to 7.

A method that works well is to leave the scsi-initiator-id for both nodes at 7.
Then, set the scsi-initiator-id for ONLY the adapters that are connected to the
dual-hosted disk to 6 on one of the nodes (normally the second node). 

Which scsi-initiator-id to Change and What to Set It To
=======================================================

Leave the scsi-initiator-id of one node set to the default (7) and change the
scsi-initiator-id to 6 for the other node. Do NOT change jumper settings on
any SCSI device (CD-ROM).

CAUTION:  Do NOT change the scsi-initiator-id to 8 because it WILL cause a
conflict with some other storage devices (D1000).

When to Change scsi-initiator-id
================================

You must change the scsi-initiator-id BEFORE connecting the shared storage.
If the storage has already been connected, disconnect it first.

How to Change the scsi-initiator-id
===================================

Only change the scsi-initiator-id on one node in the chain of the dual-hosted
SCSI device. At the ok (OBP) prompt, use the probe-scsi-all command to
identify the controllers connected to shared storage and those connected to
private storage. You have to first set auto-boot to false then reset-all before
the probe-scsi-all command will work. Depending on your configuration, there are
two methods of doing this.


NOTE:  Use Method 1 if your system is an E450.


First, identify the SCSI adapters by entering the following from the boot PROM:


      ok  setenv auto-boot? false
      ok  reset-all
      ok  probe-scsi-all
      /pci@6,4000/scsi@3
      Target 2
          Unit 0        Disk    SEAGATE ST32171W SUN2.1G7462
      Target 3
          Unit 0        Disk    SEAGATE ST32171W SUN2.1G7462
        
      /pci@6,4000/scsi@2,1
      Target 2
          Unit 0        Disk    SEAGATE ST32171W SUN2.1G7462
      Target 3
          Unit 0        Disk    SEAGATE ST32171W SUN2.1G7462


Method 1
========

If more controllers are connected to private storage than to shared storage
(that is, more scsi-initiator-id 7s than scsi-initiator-id 6s):

NOTE:  Steps 1 through 4 should be done on one of the nodes attached to the disk.
       Steps 5 and 6 need to be done on both nodes (assuming you are not using Ultra SCSI).

1.  Edit or create the nvramrc to set the scsi-initiator-id to 6 for these devices.
    From the OBP enter:
    
      ok  nvedit
      0:  probe-all install-console banner
      1:  cd /pci@6,4000/scsi@3
      2:  6 " scsi-initiator-id" integer-property
      3:  device-end
      4:  cd /pci@6,4000/scsi@2,1
      5:  6 " scsi-initiator-id" integer-property
      6:  device-end
      7:  banner (Control C)
      
2.  Do a ctrl-c, and store the nvramrc:

      ok  nvstore
      
3.  Set the system to use the nvramrc and reset auto-boot:

      ok  setenv use-nvramrc? true
      ok  setenv auto-boot? true
      
4.  Do a reset:

      ok  reset-all

5.  Edit the /etc/system file (on both nodes) and add the following line to set
    fast/wide SCSI (disable Ultra SCSI):
    
      set scsi_options=0x3f8
      
6.  Boot both systems and verify that you can see the multi-hosted disks from
    both nodes.
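The /etc/system edit in step 5 can be done from the shell rather than an editor. A minimal sketch, run here against a scratch copy so it is safe to try anywhere (point ETC_SYSTEM at the real /etc/system on both nodes):

```shell
# Append the fast/wide SCSI setting (Ultra SCSI disabled) and verify it.
ETC_SYSTEM=$(mktemp)            # stand-in for /etc/system
echo 'set scsi_options=0x3f8' >> "$ETC_SYSTEM"
grep scsi_options "$ETC_SYSTEM"
```

Verifying with grep on both nodes before rebooting avoids a mismatch where only one node has Ultra SCSI disabled.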
    

Method 2
========

If your system is not an E450, and more controllers are connected to shared
storage than to private storage (that is, more scsi-initiator-id 6s than
scsi-initiator-id 7s), then:

1.  Set the global scsi-initiator-id to 6:

      ok  setenv scsi-initiator-id 6
      scsi-initiator-id = 6
      
2.  Edit or create the nvramrc script and set the scsi-initiator-id of the 
    controllers connected to private storage to 7.  The line numbers (0:, 1:,
    and so on) are printed by the OBP, for example:
    
CAUTION!  Insert EXACTLY one space after the double quote and before
scsi-initiator-id.

      ok  nvedit
      0:  probe-all
      1:  cd /sbus@70,0/SUNW,fas@1,8800000
      2:  7 encode-int " scsi-initiator-id" property
      3:  device-end
      4:  cd /sbus@70,0/QLGC,isp@0,10000
      5:  7 encode-int " scsi-initiator-id" property
      6:  device-end
      7:  cd /sbus@50,0/SUNW,fas@1,8800000
      8:  7 encode-int " scsi-initiator-id" property
      9:  device-end
      10:  install-console
      11:  banner (Control C)
      ok
      
In this example you have set three controller scsi-initiator-ids to 7. Your script
may be different because you will be resetting controllers that were listed by the probe-scsi-all command.

The following is an example of the internal/external controllers in an E250/E450:

      ok  setenv auto-boot? false
      ok  reset-all
      ok  probe-scsi-all
      /pci@1f,4000/scsi@3 (internal controller)
      /pci@1f,4000/pci@4/SUNW,isptwo@4 (external controller)
      /pci@1f,4000/scsi@5 (external controller)
      
3.  Store or discard the changes.

    The changes you make through the nvedit command are done on a temporary
    copy of the nvramrc script. You can continue to edit this copy without risk.
    Once you have completed your edits, save the changes. If you are not sure about
    the changes, discard them.
    
    To store the changes, enter:
    
      ok  nvstore
      ok
      
    To discard the changes, enter:
    
      ok  nvquit
      ok
      
4.  Set the system to use the nvramrc script, and reset auto-boot back to its
    default of true:

      ok  setenv use-nvramrc? true
      ok  setenv auto-boot? true
      
5.  Connect the shared storage devices and then enter:

      ok  boot -r                                            


Monday, 11 March 2013

Cluster outlook

Hi Guys,
The image gives you a complete overview of any cluster technology.

Thanks

Tuesday, 12 February 2013

Renaming a disk group of VXVM

Hi Guys,
       Here I post the steps to rename a disk group in VxVM.

Note: Steps remain same for any flavor

STEPS

Step 1: Perform the pre-checks using below commands.

  1. vxprint -g <dg name> -hvpst
  2. vxdg list
  3. vxconfigd status
Step 2: Check whether any other node recognizes the disk group. Renaming a DG is
             usually needed in a clustered environment.

Step 3: Deport the DG with the command below.
  1. vxdg deport <diskgroup>
Step 4: Import the disk group under the new name.
  1. vxdg [-t] -n newdg import diskgroup
Here newdg is the desired new name, diskgroup is the old name, and -t makes the
import temporary (optional in this case).

Step 5: Start the volumes using the vxvol command.

  1. vxvol -g <newdg name> startall
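The deport/import/start sequence above can be wrapped in a single helper. A minimal sketch that only echoes the vxdg/vxvol commands (a dry run), so it is safe to try on any box; drop the echoes to run it for real (rename_dg, olddg, and newdg are illustrative names, not VxVM commands):

```shell
# Dry-run wrapper for renaming a VxVM disk group: prints the commands
# from the steps above in order instead of executing them.
rename_dg() {
  olddg=$1
  newdg=$2
  echo vxprint -g "$olddg" -hvpst        # step 1: pre-check
  echo vxdg deport "$olddg"              # step 3: deport the DG
  echo vxdg -n "$newdg" import "$olddg"  # step 4: import under the new name
  echo vxvol -g "$newdg" startall        # step 5: start the volumes
}
rename_dg mydg newdg
```

Reviewing the printed sequence first is a cheap safeguard before touching a clustered disk group.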