VMware vSphere Design Matrix Workflow

Converged Infrastructure Workflow

Data Transfer Bandwidth Calculation based on a 10G link with 3 ms latency and 50% of the TCP window size, i.e. 32 KB

TCP window size in bits = 32 KB * 1024 (bytes) * 8 (bits) = 262,144 bits

Network latency = 3 ms / 1000 = 0.003 seconds

Throughput = TCP window size in bits / network latency in seconds

Bits/second  = 262,144 / 0.003
             = 87,381,333.33 bits/second

Bytes/second = 87,381,333.33 bits/second / 8
             = 10,922,666.67 bytes/second

KB/second    = 10,922,666.67 bytes/second / 1024
             = 10,666.67 KB/second

MB/second    = 10,666.67 KB/second / 1024
             = 10.4166 MB/second

MB/hour      = 10.4166 MB/second * 3600 seconds
             = 37,500 MB/hour

GB/hour      = 37,500 MB/hour / 1024
             = 36.62 GB/hour
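The arithmetic above can be reproduced with a short awk calculation wrapped in shell (a sketch; the 32 KB window and 3 ms latency are the values assumed in this post):

```shell
# Latency-bound TCP throughput: window size / round-trip latency.
# 32 KB window and 3 ms latency are the values assumed in the post.
window_kb=32
latency_ms=3

bps=$(awk -v w="$window_kb" -v l="$latency_ms" \
    'BEGIN { printf "%.0f", (w * 1024 * 8) / (l / 1000) }')
gb_per_hour=$(awk -v b="$bps" \
    'BEGIN { printf "%.2f", b / 8 / 1024 / 1024 * 3600 / 1024 }')

echo "Throughput: $bps bits/s = $gb_per_hour GB/hour"
```

Plugging in a different observed latency shows immediately how strongly it dominates the transfer rate.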

Note – There could be more instances running at the same time, so I have reduced the TCP window size for the following reasons:

1-  To avoid packet buffering during data transfer

2-  To allow multiple instances to run at the same time

I have observed that the latency you are getting is the only factor deciding how much data will be transferred in a given period.

Let me know if you have any comments.

 

Chassis Comparison of (Cisco, Dell, HP and IBM)

I have tried to consolidate the differences between the HP c7000, IBM, Dell, and Cisco UCS chassis.
The details provided are based on publicly available information and may contain errors.


Linux – RHEL unattended installation step by step with PXE boot

***Prerequisite***
########################################

  • DHCPD install & running
  • TFTP install & running
  • HTTPD install & running (NFS & FTP will also work, but I prefer HTTPD over these)

Note – I am not covering how to install DHCPD, TFTP (xinetd) & HTTPD.

#######################################

***Configuration***

The following is the dhcpd.conf file, based on these requirements:

  • I will use the 192.168.10.0/24 subnet, assigned to auto-kickstart deployment on multiple servers.
  • DHCPD/TFTPD/HTTPD are placed on the same system for easier synchronization.
  • next-server is the TFTP server's IP address; in our case this is the same host as DHCP/HTTPD.
  • You can also configure machines one by one by mapping the hardware (MAC) address of each machine you want to deploy, like:

#host esxi5first {
#        hardware ethernet 00:0C:29:90:B1:B2;
#        fixed-address 192.168.10.221;
#        option host-name "esxifirst";
#        filename "pxelinux.0";
#        next-server 192.168.10.200;
#}

but in this case you need to remove the following lines from the subnet section (i.e. subnet 192.168.10.0 netmask 255.255.255.0 {):

filename "pxelinux.0";
next-server 192.168.10.200;
#########################################################
DHCPD.CONF
#########################################################
[root@rhel1 pxelinux.cfg]# cat /etc/dhcp/dhcpd.conf
ddns-update-style ad-hoc;
allow booting;
allow bootp;
#gPXE options
option space gpxe;
option gpxe-encap-opts code 175 = encapsulate gpxe;
option gpxe.bus-id code 177 = string;
subnet 192.168.10.0 netmask 255.255.255.0 {
range 192.168.10.220 192.168.10.254;
default-lease-time 3600;
max-lease-time 4800;
option routers 192.168.10.200;
option domain-name-servers 192.168.10.200;
option subnet-mask 255.255.255.0;
option ntp-servers 192.168.10.200;
filename "pxelinux.0";
next-server 192.168.10.200;

}
[root@rhel1 pxelinux.cfg]#
##############################################################
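As a quick sanity check (an illustrative helper, not part of the original post), the lease range declared above can be confirmed to sit inside the subnet and counted:

```shell
# Values from the dhcpd.conf above: subnet 192.168.10.0/24,
# range 192.168.10.220 - 192.168.10.254.
subnet_prefix="192.168.10"
range_start="192.168.10.220"
range_end="192.168.10.254"

# Both ends of the range must fall inside the /24.
case "$range_start" in "$subnet_prefix".*) : ;; *) echo "start outside subnet"; exit 1 ;; esac
case "$range_end"   in "$subnet_prefix".*) : ;; *) echo "end outside subnet";   exit 1 ;; esac

# Number of dynamic leases available (last-octet arithmetic, /24 only).
leases=$(( ${range_end##*.} - ${range_start##*.} + 1 ))
echo "$leases leases available in $subnet_prefix.0/24"
```

35 leases is plenty here, but it is worth checking before pointing a large batch of servers at the PXE subnet.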
TFTPD SETUP

  • Create the /tftpboot directory and, under it, the folder images/rhel6
  • Copy menu.c32 & pxelinux.0 from /usr/local/syslinux to /tftpboot
  • Copy initrd.img & vmlinuz from the mounted RHEL DVD images directory to /tftpboot/images/rhel6
  • Create a directory named 'pxelinux.cfg'
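The four steps above can be sketched as shell commands. The DVD mount point /mnt/rhel-dvd is an assumption (the post does not name it), and TFTP_ROOT defaults to /tmp/tftpboot here so the sketch can run unprivileged; set it to /tftpboot on the real PXE server:

```shell
# Recreate the /tftpboot layout described above.
# TFTP_ROOT=/tftpboot on the real server; /tmp/tftpboot is a demo default.
# DVD is an assumed mount point for the RHEL installation DVD.
TFTP_ROOT="${TFTP_ROOT:-/tmp/tftpboot}"
DVD="${DVD:-/mnt/rhel-dvd}"

mkdir -p "$TFTP_ROOT/images/rhel6" "$TFTP_ROOT/pxelinux.cfg"

# Copy the syslinux bootstrap files if present on this host.
for f in /usr/local/syslinux/menu.c32 /usr/local/syslinux/pxelinux.0; do
    if [ -f "$f" ]; then cp "$f" "$TFTP_ROOT/"; fi
done

# Kernel and initrd come from the mounted RHEL DVD.
for f in "$DVD/images/pxeboot/vmlinuz" "$DVD/images/pxeboot/initrd.img"; do
    if [ -f "$f" ]; then cp "$f" "$TFTP_ROOT/images/rhel6/"; fi
done

# Placeholder for the PXE menu written in the next step.
touch "$TFTP_ROOT/pxelinux.cfg/default"
```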

##############################################################
[root@rhel1 pxelinux.cfg]# tree -AF /tftpboot/
/tftpboot/
├── images/
│   └── rhel6/
│       ├── initrd.img
│       └── vmlinuz
├── menu.c32
├── pxelinux.0
└── pxelinux.cfg/
    └── default
#################################################
Default file content under /tftpboot/pxelinux.cfg/default

  • Create a file named 'default' with the following content. In ks=http://192.168.10.200/rh.cfg, 192.168.10.200 is the web server (httpd) address where the kickstart file (rh.cfg) is kept.
  • The kickstart file also reads the RPM content of the RHEL DVD in order to install it. This is configured in the rh.cfg file as url --url http://192.168.10.200/rhel

######################################################
[root@rhel1 html]# cat /tftpboot/pxelinux.cfg/default
default menu.c32
prompt 0
timeout 300
MENU TITLE ******* PXE BOOT MENU ********
LABEL RHEL6.0 x64
MENU LABEL RHEL6.0 x64
KERNEL images/rhel6/vmlinuz
append vga=normal initrd=images/rhel6/initrd.img ramdisk_size=1024 ksdevice=eth1 ks=http://192.168.10.200/rh.cfg
#####################################################
Kickstart file rh.cfg
You can modify it to suit your requirements: add a %post section, modify %packages, etc. This is an unattended installation.
####################################################
# This is an installation not an upgrade
install
url --url http://192.168.10.200/rhel
lang en_US
autostep --autoscreenshot
text
keyboard us
#xconfig --defaultdesktop kde --resolution 640x480 --depth 8
network --device eth0 --bootproto dhcp --onboot=on
rootpw --iscrypted $1$tihTg7ne$hohhkj87hGGddg9B4WkXV1
authconfig --useshadow --enablemd5
selinux --disabled
timezone America/New_York
firewall --disabled
firstboot --disable
bootloader
# Reboot after installation
reboot
clearpart --all --initlabel
# define partitions
part /boot --fstype ext3 --size=512
part /opt --fstype ext3 --size=5000 --grow
part /usr --fstype ext3 --size=5000
part /tmp --fstype ext3 --size=7500
part /var --fstype ext3 --size=7500
part /home --fstype ext3 --size=2500
part swap --size=2048
part / --fstype ext3 --size=2048
part /usr/local --fstype ext3 --size=1000
########################################################
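As a sanity check (an illustrative helper, not part of the original post), the fixed partition sizes in the kickstart above can be totalled to estimate the minimum disk the profile needs; the partition carrying --grow then expands into whatever space remains:

```shell
# Total the fixed --size values (MB) from the kickstart's part lines:
# /boot /opt /usr /tmp /var /home swap / /usr/local
total_mb=0
for size in 512 5000 5000 7500 7500 2500 2048 2048 1000; do
    total_mb=$((total_mb + size))
done
echo "Minimum disk: ${total_mb} MB (~$((total_mb / 1024)) GB)"
```

Anything smaller and anaconda will fail partitioning partway through an otherwise unattended install.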

Downgrade OA (Firmware v3.60) & VCM (Firmware v3.70) to OA (Firmware v3.56) & VCM (Firmware v3.51)

Here are the steps I followed during the OA & VCM firmware downgrade, in order to sync with another VC domain running the same version 3.51 and managed by VCEM.

I will only be covering the high-level (pseudo) steps here 🙂

  • Delete the domain (if any) from the VCM module (via https://vcmdomainip). It is very important to delete the domain first when downgrading the firmware; this is not required when upgrading.
  • Downgrade the Active OA firmware to 3.56 (check "Force downgrade"). Once downgraded, it will update the standby OA automatically.
  • You must have administrative credentials before proceeding with the VCM downgrade (the user created during domain creation will be lost).
  • Install VCSU 1.7.x (the version should be greater than 1.5 when working with 3.x VCM module firmware). This is UNIX-style; you can also use the HP DVD method, but VCSU looks more convincing and promising to me.
  • Run VCSU and enter the 'update' command followed by the OA IP and its credentials. The update may take around 30 minutes.
  • Once done, check the OA settings such as EBIPA addressing and interconnect/device bay IPs, and verify your OA and VCM module firmware as v3.56 & v3.51.
  • Create a VC domain such as SATYENDRA_A24R1 (whatever naming standard you follow) and follow the wizard (on the last wizard screen, DO NOT click on network, as that will be populated once we add the domain to the VCEM domain group).
  • Once the domain is created, discover/add it in VCEM.
  • License the domain.
  • Once licensed, put the domain in maintenance mode and change the SAN path login-redistribution setting from MANUAL to AUTOMATIC.
  • Exit maintenance mode.
  • Create the server profiles as per requirements.
  • The server is ready for deployment.

A few points to take into consideration before architecting HP c7000 networking for blades – VCM (VCEM) firmware 3.x onwards

1 – What should the VLAN capacity mode be, i.e. Expanded or Legacy?

Note – Virtual Connect release 3.30 provides an expanded VLAN capacity mode when using Shared Uplink Sets; this mode can be enabled through the Ethernet Settings tab or the VC CLI. The default configuration is "Legacy VLAN Capacity" mode. This scenario does require a change to this setting, as Expanded VLAN capacity will be needed. Once a domain has been set to Expanded VLAN Capacity, it cannot be reset back to Legacy mode; to go back, the domain would need to be deleted and recreated. Expanded VLAN capacity will be greyed out on unsupported hardware, as it is only supported with 10Gb-based VC modules.

If the VC domain is not in Expanded VLAN capacity mode, you will receive an error when attempting to create more than 128 VLANs in a SUS. Expanded VLAN capacity supports 1000 VLANs per SUS. There are 8 faceplate ports: ports X1-X4 are SFP+ transceiver slots only, which can accept a 10Gb or 8Gb SFP+ transceiver; ports X5-X8 are SFP and SFP+ capable, and do not support 8Gb SFP+ transceivers.

2 – The protocol personality of the HP Virtual Connect FlexFabric module uplink ports X1-X4 is determined by the type of SFP+ transceiver plugged in, i.e. 4/8Gb FC or 10GbE SFP+. The remaining ports X5-X8 are fixed 1/10GbE protocol ports and will not accept FC SFP+ transceivers.

3 – The enclosure EBIPA addressing, OA, and iLO IPs must be on the same VLAN/range in order to communicate properly.

4 – It's up to you whether to configure the VCM modules Active/Active or Active/Passive. An Active/Standby configuration places the redundancy at the VC level, whereas Active/Active places it at the OS NIC teaming or bonding level.

5 – To support A/A or A/P you need to configure LACP within the same Link Aggregation Group.

6 – Configure the Fast MAC Cache Failover setting if you are architecting the VCM modules in Active/Passive mode. Enabling Fast MAC Cache Failover forces Virtual Connect to transmit Ethernet packets on newly active links, which enables the external Ethernet switches to identify the new connection and update their MAC caches appropriately. This transmission sequence repeats a few times at the MAC refresh interval (five seconds is the recommended interval) and completes in about one minute. (Not applicable in Active/Active mode.)

7 – VMware provides several NIC teaming algorithms. HP will support any of them except IP Hash. IP Hash requires switch-assisted load balancing (802.3ad), and Virtual Connect does not support 802.3ad on server downlink ports. HP and VMware recommend using Originating Virtual Port ID with the standard vSwitch, and Physical NIC Load when using vDS and NetIOC.

Note – If configuring with vSphere 5, I would not recommend implementing NetIOC, as we are already segregating traffic for different purposes onto different VLANs (vMotion, management, iLO), and we have trunk-enabled port configuration at the switch level to handle this traffic based on the VLAN tagging mechanism.

FlexFabric architectural limitation

Note – Each FlexNIC on a physical port can be assigned to a different vNet or remain unassigned, but multiple FlexNICs on a single port cannot be assigned to the same vNet. This is an architectural limitation and will not change (adding the same vNet multiple times on the same physical port is a constraint of the FlexFabric architecture). This means the same VLAN, e.g. A96 and B96 (where A & B are SUSes), cannot be used twice in the same profile.