Friday, 19 June 2015

Elastic Virtual Switch


 Packets coming from two different machines on different networks (a 200.x, 192.x, or 10.x series address, for example) are transferred between the hosts using the concepts of tunneling and encapsulation.

digest -v -a md5 /export/share/mys11.uar

On server S11-server1:

#pkg install evs
#pkg install rad-evs-controller (RAD: Remote Administration Daemon)
#svcadm refresh rad:local
#svcadm disable rad:local
#svcadm enable rad:local

Configure passwordless SSH authentication for the evsuser:

#ssh-keygen -t rsa
(the public key is generated under /root/.ssh/)
#cat /root/.ssh/
#cat /root/.ssh/> /var/user/evsuser/.ssh/authorized_keys
#cd /var/tmp/
#cat >>/var/user/evsuser/.ssh/authorized_keys
#evsadm show-controlprop
#evsadm set-controlprop -p l2-type=vxlan
#evsadm set-controlprop -p vxlan-range=200-300

****** The vxlan-addr parameter is the tunneling network address ******

#evsadm set-controlprop -p vxlan-addr=
#evsadm create-evs App_Evs
#evsadm show-evs
#evsadm show-evsprop
#evsadm add-ipnet -p subnet= App_Evs/ipnet1
#evsadm show-ipnet
#evsadm help
#evsadm add-vport App_Evs/vport0
#evsadm add-vport App_Evs/vport1
#evsadm show-vport
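
For reference, a hedged end-to-end variant with hypothetical values filled in (the 10.10.10.0/24 tunnel network and the 192.168.50.0/24 tenant subnet are assumptions, not the values from the original setup):

#evsadm set-controlprop -p vxlan-addr=10.10.10.0/24    (hypothetical tunnel network)
#evsadm add-ipnet -p subnet=192.168.50.0/24 App_Evs/ipnet1    (hypothetical tenant subnet)
#evsadm add-vport App_Evs/vport0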

On S11-desktop:

#pkg install evs
#which evsadm 
#grep evsuser /etc/passwd
#grep evsuser /etc/shadow
#scp /root/.ssh/ oracle@s11-server1:/var/tmp/
#evsadm set-prop -p controller=ssh://evsuser@s11-server1

Go to any zone and change its network (anet) configuration to use the EVS vport, as sketched below.
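
A minimal zonecfg sketch, assuming a zone named zone1 (the zone name and linkname are hypothetical; the EVS switch and vport are the ones created above):

#zonecfg -z zone1
zonecfg:zone1> add anet
zonecfg:zone1:anet> set linkname=net1
zonecfg:zone1:anet> set evs=App_Evs
zonecfg:zone1:anet> set vport=vport0
zonecfg:zone1:anet> end
zonecfg:zone1> commit
zonecfg:zone1> exit

Reboot the zone for the new anet to take effect.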


Thursday, 18 June 2015

Network High Availability in SOLARIS 11

Network High Availability

1. Trunking
2. DLMP (Dynamic Link Multipathing) only 11.2
3. IPMP (Internet protocol Multipathing)

Trunking + DLMP = Aggregation

Aggregation is at the Link layer.
IPMP is at the IP/Network layer.


Aggregation can be configured in two modes:
         a. Trunking
         b. DLMP


#dladm create-aggr -l net1 -l net2 aggr0
#ipadm create-ip aggr0
#ipadm create-addr -T static -a aggr0/v4

If net1 and net2 are 1 Gb each, the trunk gives an aggregate of 2 Gb, but both NIC ports must be connected to the same switch; if that switch fails, the trunking fails. It still works fine if the switches are configured as a cluster in a cascaded manner.

So trunking gives NIC availability only until the switch fails.
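
For reference, the same trunk setup with a hypothetical address filled in (192.168.1.10/24 is an assumption):

#dladm create-aggr -l net1 -l net2 aggr0
#ipadm create-ip aggr0
#ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4
#dladm show-aggr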


#dladm create-aggr -m dlmp -l net1 -l net2 aggr1
#ipadm create-ip aggr1
#ipadm create-addr -T static -a aggr1/v4

To modify a trunk aggregation to DLMP:

#dladm modify-aggr -m dlmp aggr1
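
To confirm the change, a quick check:

#dladm show-aggr    (the MODE column should now show dlmp)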


1. Create the IPMP group
2. Add the network ports to the group
3. Assign an IP address to the group

#ipadm create-ip net1
#ipadm create-ip net2
#ipadm create-ipmp -i net1 -i net2 ipmp0
#ipadm create-addr -T static -a ipmp0/db1
#ipadm show-addr

By default, the health check (failure detection) is link-based.

Monitor the IPMP status and gather information:

#ipmpstat -i (interface information)
#ipmpstat -g (group information)
#ipmpstat -p (to check probe activity)
#ipmpstat -t (to check the targets)

To configure probe-based failure detection, just assign a test IP address to both of the NIC cards.

"test" is the keyword used as the address-object name:

#ipadm create-addr -T static -a net1/test
#ipadm create-addr -T static -a net2/test
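
Filled in with hypothetical test addresses (192.168.1.11/24 and 192.168.1.12/24 are assumptions), the probe-based setup would look like:

#ipadm create-addr -T static -a 192.168.1.11/24 net1/test
#ipadm create-addr -T static -a 192.168.1.12/24 net2/test
#ipmpstat -p    (probe activity should now be visible)
#ipmpstat -t    (probe targets)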

To delete the IPMP configuration:

#ipadm delete-addr ipmp0/db1
#ipadm delete-addr net1/test
#ipadm delete-addr net2/test
#ipadm remove-ipmp -i net1 ipmp0
#ipadm remove-ipmp -i net2 ipmp0
#ipadm delete-ip net1
#ipadm delete-ip net2
#ipadm delete-ipmp ipmp0
#ipadm show-addr

Integrated Load Balancer

1. Health check
2. Server group
3. Rule

#pkg install ilb
#svcs ilb
#svcadm enable ilb (this will go to maintenance until IPv4 forwarding is enabled)
#ipadm set-prop -p forwarding=on ipv4
#svcadm clear ilb
#ilbadm create-hc -h hc-test=PING,hc-timeout=3,hc-count=3,hc-interval=10 hc1
#ilbadm create-sg -s server=, sg1
#ilbadm create-rule -e -p -i vip=,port=80,protocol=tcp -m lbalg=rr,type=HALF-NAT -h hc-name=hc1 -o servergroup=sg1 rule1
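
The same three objects with hypothetical values filled in (back-end servers 10.0.0.20 and 10.0.0.21 and VIP 192.168.1.100 are assumptions; note the keyword is servers= in create-sg):

#ilbadm create-hc -h hc-test=PING,hc-timeout=3,hc-count=3,hc-interval=10 hc1
#ilbadm create-sg -s servers=10.0.0.20,10.0.0.21 sg1
#ilbadm create-rule -e -p -i vip=192.168.1.100,port=80,protocol=tcp -m lbalg=rr,type=HALF-NAT -h hc-name=hc1 -o servergroup=sg1 rule1
#ilbadm show-rule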

Wednesday, 17 June 2015

Networking in Oracle Solaris 11

The default networking configuration profile (NCP) in Solaris 11 can be checked with:

# svcprop network/physical:default|grep -i active_ncp

This returns the active NCP (e.g. DefaultFixed), which reflects the type of installation.

The dladm command is used at the physical and data-link layers.
The ipadm command is used at the network layer.

Commands to find the network ports:

         #dladm show-phys

By default, the network cards are renamed using the generic naming convention net0, net1, net2, net3.

         # dladm show-phys -m 

This shows the MAC addresses.

        # dladm show-link

This lists the configured data links (network ports).

Plumbing the network card

#ipadm create-ip net1

Assigning IP Address

#ipadm show-addr (lists the currently configured IPs)
#ipadm create-addr -T static -a net1
        -T address type (static, dhcp, or addrconf)
        -a address
        -t assigns a temporary (non-persistent) IP address
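
A complete invocation with a hypothetical address (192.168.1.10/24 and the v4 address-object name are assumptions):

#ipadm create-addr -T static -a 192.168.1.10/24 net1/v4
#ipadm show-addr net1/v4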

To add a tag (address object name) that identifies which IP is assigned for which purpose, as below:

#ipadm create-addr -T static -a net1/apache1

/etc/ipadm/ipadm.conf (this is the file which holds the persistent configuration of all the IPs)

Command to change the hostname

#svcs identity:node
#svcprop identity:node | grep nodename
#svccfg -s identity:node setprop config/nodename=<desired hostname>
#svcadm refresh identity:node

To change the DNS client entries..

#svcs dns/client
#svcprop dns/client |grep nameserver
#svccfg -s dns/client setprop config/nameserver=<desired DNS server address>
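
After changing the property, refresh the service so the running configuration (and /etc/resolv.conf) picks up the new value:

#svcadm refresh dns/client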

To see the IP properties

#ipadm show-ifprop net1

To mark an interface as a standby interface for IPMP:

#ipadm set-ifprop -p standby=on -m ip net1

To change the MTU value

#ipadm set-ifprop -p mtu=1400 -m ipv4 net1

To enable IP forwarding for all the NICs:

#ipadm set-prop -p forwarding=on ipv4 

To set the MTU to jumbo frames (an MTU value of 9000), which is possible only at the link layer:

#dladm show-linkprop -p mtu net1

Note: to do the above operation you have to unplumb the interface first (see the sketch below).
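
A sketch of the full jumbo-frame procedure on net1 (the address has to be re-created afterwards, which depends on your setup):

#ipadm delete-ip net1    (unplumb the interface first)
#dladm set-linkprop -p mtu=9000 net1
#dladm show-linkprop -p mtu net1
#ipadm create-ip net1    (re-plumb, then re-create the address)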

To create a Virtual NIC:

#dladm create-vnic -l net1 vnic0
#dladm set-linkprop -p maxbw=100m vnic0
#dladm show-vnic (Gives the details of all VNICS)

Firewall rules can be applied to VNICs, which was not possible in Solaris 10.

Network Virtualization

dladm show-vnic
dladm create-etherstub stub0 
dladm show-etherstub

dladm create-vnic -l stub0 vnic0
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 vnic2

dladm show-vnic
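
The VNICs on stub0 form an isolated internal switch; they are typically handed to zones, but as a minimal sketch they can also be plumbed directly with hypothetical private addresses (the 192.168.100.x addresses are assumptions):

ipadm create-ip vnic1
ipadm create-addr -T static -a 192.168.100.1/24 vnic1/v4
ipadm create-ip vnic2
ipadm create-addr -T static -a 192.168.100.2/24 vnic2/v4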

Tuesday, 16 June 2015

AI Configuration for solaris 11

The JumpStart concept has been removed from Solaris 11; it is replaced by the Automated Installer (AI).
Basically 5 steps:

Create DHCP server
Create Service
configure client
create manifest
Create profile

Update netmasks file
AI  Manifests
1. Default (/export/ai/install/auto_install/manifest/default.xml)
name="default"hard disk partitioning<IPS><IPS>}
2. Custom
#cp /export/ai/install/auto_install/manifest/default.xml /var/tmp/
# cd /var/tmp
#mv default.xml mymanifest.xml
#vi mymanifest.xml

3. Criteria Manifest
Components involved: DHCP server, Install server, IPS repository

DHCP configuration
#installadm set-server -i -c 5 -m
if installadm command is not found then install it from IPS 
#pkg install installadm
-c  number of IP addresses in the DHCP range
-i  starting IP address of the DHCP range
-m  DHCP is managed by the AI server

#installadm create-service -n basic_ai -s /var/tmp/ai_X86.iso -d /export/ai/install

#installadm create-client -e 00:4F:F8:00:00:00 -n basic_ai
-e  MAC address of the client (in this case 00:4F:F8:00:00:00)
-n  name of the service
For booting from the OK prompt:
ok> boot net:dhcp - install
#installadm create-manifest -f /var/tmp/mymanifest.xml -c mac=<mac address> -n basic_ai -s /opt/ora/iso/<ISO image> -d /export/ai/install

#installadm list -c
#installadm delete-service default_i386
#sysconfig create-profile -o /var/tmp
# installadm create-profile -p client1 -f /var/tmp/sc-profile.xml -c mac="MAC Address" -n basic_ai
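
To double-check what got attached to the service (assuming the basic_ai service name used above):

#installadm list -n basic_ai -m    (manifests)
#installadm list -n basic_ai -p    (profiles)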

Friday, 11 July 2014

Playing around with NIC card settings.

Setting NIC speed and duplex

Solaris is often unable to correctly auto-negotiate duplex settings with a link partner (e.g. switch), especially when the switch is set to 100Mbit full-duplex. You can force the NIC into 100Mbit full-duplex by disabling auto-negotiation and 100Mbit half-duplex capability.

Example with hme0:

1. Make the changes to the running system.
# ndd -set /dev/hme adv_100hdx_cap 0
# ndd -set /dev/hme adv_100fdx_cap 1
# ndd -set /dev/hme adv_autoneg_cap 0

2. Make kernel parameter changes to preserve the speed and duplex settings after a reboot.
# vi /etc/system
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1

Note: the /etc/system change affects all hme interfaces if multiple NICs are present (e.g. hme0, hme1).
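
To confirm the negotiated settings afterwards, the classic ndd checks for hme are:

# ndd -get /dev/hme link_speed     (1 = 100Mbit, 0 = 10Mbit)
# ndd -get /dev/hme link_mode      (1 = full duplex, 0 = half duplex)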

Procedure to add a new disk to a Solaris zone.

By default it is not possible to add a raw device to a zone without rebooting the zone, but here is some cool stuff to avoid the downtime.

I found a little hack to accomplish the objective of adding a raw device to a zone without rebooting it. Here is the way out -

1) Add the device to the zonecfg 

#zonecfg -z barvozone1
zonecfg:barvozone1> add device
zonecfg:barvozone1:device> set match=/dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
zonecfg:barvozone1:device> end
zonecfg:barvozone1> commit
zonecfg:barvozone1> exit

2) Use the mknod command to create the device node in the zone's dev directory, using the major/minor numbers of the underlying device.

#ls -l /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0
lrwxrwxrwx   1 root     root          67 Feb 18 15:34 /dev/rdsk/c3t60050768018A8023B8000000000000F0d0s0 -> ../../devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw

#ls -l /devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw
crw-r-----   1 root     sys      118, 128 Mar  5 23:55 /devices/scsi_vhci/ssd@g60050768018a8023b8000000000000f0:a,raw

# cd /barvozone1/zonepath/dev

# mknod c3t60050768018A8023B8000000000000F0d0s0 c 118 128
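
To confirm the device is now visible from inside the zone, a quick zlogin check:

# zlogin barvozone1 ls -l /dev/c3t60050768018A8023B8000000000000F0d0s0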

That's it. The raw device is now visible within the zone and you can start your work without any downtime. Isn't it cool?

Sunday, 14 April 2013

NIC Bonding in RHEL

Steps for configuring bonding
In this document we are configuring bond0 with interfaces eth0 and eth1

Step 1 - Load the kernel module
For a channel bonding interface to be valid, the kernel module must be loaded. To ensure that the module is loaded when the channel bonding interface is brought up, create a new file as root named <bonding>.conf in the /etc/modprobe.d/ directory. Note that we can name this file anything, but it must end with a .conf extension. Insert the following line in this new file: alias bond<N> bonding
Replace <N> with the interface number, such as 0. If we want to configure more than one bonding interface, there must be a corresponding entry in the /etc/modprobe.d/<bonding>.conf file for each configured channel bonding interface.
In this example we are configuring bond0 and the file name is bonding.conf
[root@praji2 modprobe.d]# cat /etc/modprobe.d/bonding.conf
  alias bond0 bonding

Step 2 - Create the channel bonding interface
We need to create a channel bonding interface configuration file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond<N>, replacing <N> with the number of the interface, such as 0, and specify the bonding parameters in the file. Here we are creating the ifcfg-bond0 file with the following contents:
[root@praji2 network-scripts]# cat ifcfg-bond0
BONDING_OPTS="mode=0 miimon=1000"
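
The output above shows only the bonding options; a typical complete ifcfg-bond0 would also carry the device name and IP settings (the 192.168.1.10/255.255.255.0 address is an assumption):

DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=0 miimon=1000"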

Step 3 - Configure the network interfaces
After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER= and SLAVE= directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical. For example, if two Ethernet interfaces are being channel bonded, both eth0 and eth1 may look like the following example
Interface eth0 configuration
 [root@praji2 network-scripts]# cat ifcfg-eth0
Interface eth1 configuration
[root@praji2 network-scripts]# cat ifcfg-eth1
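
A typical slave configuration for each interface, with the MASTER= and SLAVE= directives described above (a sketch; adjust to your hardware):

DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

and identically for eth1 with DEVICE=eth1.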
After configuring the interfaces, we have to bring up the bond using the command:
[root@praji2 network-scripts]# ifconfig bond0 up
If the bonding is correctly configured, we can view the configuration using the ifconfig command:
[root@praji2 network-scripts]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::20c:29ff:fe69:31c4/64 Scope:Link
          RX packets:19676 errors:0 dropped:0 overruns:0 frame:0
          TX packets:342 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1623240 (1.5 MiB)  TX bytes:42250 (41.2 KiB)

eth0      Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          RX packets:10057 errors:0 dropped:0 overruns:0 frame:0
          TX packets:171 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:832257 (812.7 KiB)  TX bytes:22751 (22.2 KiB)
          Interrupt:19 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:69:31:C4
          RX packets:9620 errors:0 dropped:0 overruns:0 frame:0
          TX packets:173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:791043 (772.5 KiB)  TX bytes:20207 (19.7 KiB)
          Interrupt:19 Base address:0x2080

lo        Link encap:Local Loopback
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:104 (104.0 b)  TX bytes:104 (104.0 b)

To view all existing bonds we can run the following command; it will list bond0:
[root@praji2 network-scripts]# cat /sys/class/net/bonding_masters
To view the current bonding mode we can use the following command:
[root@praji2 network-scripts]# cat /sys/class/net/bond0/bonding/mode
balance-rr 0
For verifying the bonding, we can use the following command. It will list the bonding details:
[root@praji2 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 1000
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:c4

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:69:31:ce