
VMware vSphere: Design Workshop day #3

OK, the third and final day of the VMware vSphere Design Workshop covers the VMware vSphere Storage design, Virtual Machine design and Management and Monitoring design modules. These topics look interesting, so let’s have a closer look.

1.   VMware vSphere Storage design:

  • Storage Design Guidelines: the storage design should provide several benefits to the enterprise: it should reduce costs, ease administration and improve availability without degrading performance.
  • Storage Network Technology (NFS, iSCSI, FC): there is no storage network technology that is “the best”; it really depends on the case, and every technology has its pros & cons. The key to performance is proper sizing and keeping saturation and latency low.
  • Storage size: when sizing the storage, add about 20% to 30% extra capacity to accommodate snapshots, swap and log files. Keep in mind that the recovery time objective (RTO) can also affect the size of the storage. A short sizing sketch follows this list.
  • VMFS Datastore size: the main factor in choosing the right size for a VMFS datastore is the number of VMs that can run on the datastore with acceptable latency.
  • VMFS Block Size: the VMFS block size should be determined by the largest required virtual disk. As a best practice, keep the same block size across all datastores.
  • Command Queuing: this can occur at the host and/or at the storage array level and can degrade the storage performance. The LUN queue depth parameter determines how many commands can be active to one LUN at the same time. If a host generates more commands to a LUN than the LUN queue depth, the excess commands are queued in the VMkernel, which leads to increased latency.
  • VMFS Volumes per LUN: use a single VMFS volume per LUN. This minimizes the number of SCSI reservations per LUN and improves performance in a multi-host environment.
  • Storage Security & Access: access to the storage array depends on the chosen technology. With NFS, you can use network segmentation or simply not mount an NFS volume on an ESX(i) host. On an iSCSI network, you can use VLANs and the Challenge Handshake Authentication Protocol (CHAP). On a Fibre Channel fabric, use zoning and LUN masking.
  • Host LUN ID numbers: the LUNs should be presented with the same LUN ID to all ESX(i) hosts. This prevents inconsistency between the hosts and eases administration.
  • Redundant Storage Paths: to provide storage high availability, you should always configure multipathing. The paths should run through separate HBAs/NICs, switches and storage processors. Also consult the array documentation for the specific multipath configuration and the multipath policy applicable to your array; a path-auditing sketch follows this list.
  • Raw Device Mapping: in most cases VMDKs are sufficient, but if you want to take advantage of, for example, SAN software inside a VM, you should use RDMs. The difference in I/O performance between a VMDK and an RDM is negligible.
  • Virtual Disks: when choosing between thick- and thin-provisioned disks in a design, first decide on NFS or VMFS. NFS datastores are thin provisioned by default and the monitoring/management is done entirely on the NFS server side. On VMFS datastores, thin provisioning is optional. Before configuring, consider all pros & cons of thin vs. thick disk provisioning (see the provisioning sketch after this list).
  • Boot from SAN: boot ESX(i) from SAN if you are using diskless systems like blade servers. Keep in mind that you cannot boot from SAN when using NFS datastores or software iSCSI adapters. It also creates a dependency on the SAN, and since the ESXi Embedded version is gaining popularity, I would suggest using it where possible.
  • N_Port ID Virtualization: NPIV gives each VM a virtual WWN identity on the FC fabric. This is useful for access control and when there is a requirement to monitor LUN usage at the VM level. Keep in mind that NPIV requires the VM to use an RDM and the HBA must support NPIV.
  • Storage Naming Conventions: just like in the other sub-components of the vSphere infrastructure, the names of the storage units should be consistent and reflect, for example, the location, type and number of the unit.
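
To make the sizing guideline above concrete, here is a minimal PowerShell sketch of the arithmetic. All input values are hypothetical examples, not recommendations; plug in your own inventory numbers.

    # Hypothetical example values -- replace with your own inventory data
    $vmCount   = 50     # VMs planned for the datastore
    $avgDiskGB = 40     # average virtual disk capacity per VM
    $avgMemGB  = 4      # average VM memory; worst-case .vswp size per VM
    $headroom  = 0.25   # 20-30% extra for snapshots, swap and log files

    $rawGB   = $vmCount * ($avgDiskGB + $avgMemGB)
    $sizedGB = [math]::Ceiling($rawGB * (1 + $headroom))
    "Required datastore capacity: $sizedGB GB"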
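
For auditing redundant paths, a hedged PowerCLI sketch along these lines reports the path count and multipath policy per LUN. The cmdlets are standard PowerCLI; the vCenter, host and LUN names are made-up examples, and the right policy is whatever your array vendor documents.

    # Assumes PowerCLI is loaded; the vCenter name is a hypothetical example
    Connect-VIServer -Server vcenter.example.com

    # Report multipath policy and number of paths for every disk LUN
    Get-VMHost | Get-ScsiLun -LunType disk |
        Select-Object VMHost, CanonicalName, MultipathPolicy,
            @{N="Paths"; E={($_ | Get-ScsiLunPath).Count}}

    # Example policy change (disruptive -- verify with your array first):
    # Get-VMHost "esx01*" | Get-ScsiLun -CanonicalName "naa.6*" |
    #     Set-ScsiLun -MultipathPolicy RoundRobin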
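
For the thin vs. thick decision on VMFS, a PowerCLI sketch like the following adds a thin disk and reports the format of existing disks. Note that the -StorageFormat and -CapacityGB parameters are from later PowerCLI releases; on the vSphere 4-era toolkit, check Get-Help New-HardDisk for the equivalents. All names are hypothetical.

    # Add a thin-provisioned 100 GB data disk to an example VM
    Get-VM -Name "db01" | New-HardDisk -CapacityGB 100 -StorageFormat Thin

    # Report the provisioning format of all disks on one datastore
    Get-VM -Datastore (Get-Datastore "VMFS-SAN-01") | Get-HardDisk |
        Select-Object Parent, Name, StorageFormat, CapacityGB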

2.    Virtual Machine design:

  • Number of vCPUs: the default is one vCPU unless there is an obvious need for more. VMs with one vCPU are much easier for DRS to schedule than VMs with multiple vCPUs.
  • Memory: for the best memory performance, the memory of the VMs should be kept in physical RAM. Limit memory overcommitment and be careful when using reservations. Make sure VMware Tools is installed and transparent page sharing (TPS) is enabled.
  • Virtual Machine Disk: as a best practice, use separate system and data disks and place them on one datastore unless they require different I/O characteristics (RAID level, latency etc.). Separate disks simplify backup/restore and help distribute the I/O load.
  • Swap File location: the swap file that the ESX(i) host creates for a VM can be stored in several locations. You can store it on shared storage together with the VM files; this is the default option and in most cases the preferable one. Another option is to store the VM files on shared storage and the swap files on local disk, which reduces the required replication bandwidth but slows down vMotion. A third alternative is to store the swap files on a dedicated shared datastore, which improves replication performance but adds administrative overhead (see the sketch after this list). The fourth option is to store both the swap file and the VM files on a local disk, but this is not advisable unless you are building a test environment.
  • Virtual SCSI HBA type: use the default choice offered by the wizard when creating a new VM. One exception is the Paravirtual SCSI adapter (PVSCSI), which offers more throughput at lower CPU utilization and should be used in I/O-intensive VMs (see the sketch after this list).
  • Virtual NICs: wherever possible, use the VMXNET3 adapter. It offers the most features and the best performance.
  • VMware Tools: VMware Tools provides many features, such as memory management, improved display/mouse performance and graceful shutdown of the VM. Always install VMware Tools on supported operating systems.
  • Virtual Machine Security: keep VMs as secure as physical machines; there is no exception. For an extra layer of security, you can use VMware vShield Zones in your environment.
  • Virtual Machine Naming Conventions: name the VMs in an easy, consistent and logical way to ease management and administration.
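
For the dedicated swap datastore option, a hedged PowerCLI sketch could look like this. The -VMSwapfilePolicy and -VMSwapfileDatastore parameters are documented Set-VMHost parameters, but verify them on your PowerCLI version; the host and datastore names are made-up examples.

    # Point a host's VM swap files at a dedicated shared datastore
    Get-VMHost -Name "esx01.example.com" |
        Set-VMHost -VMSwapfileDatastore (Get-Datastore "SWAP-DS-01") `
                   -VMSwapfilePolicy InHostDatastore

    # Revert to the default: keep the swap file with the VM files
    Get-VMHost -Name "esx01.example.com" | Set-VMHost -VMSwapfilePolicy WithVM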
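
To retrofit an existing VM with PVSCSI and VMXNET3, something like the following PowerCLI sketch should work. The VM name is a hypothetical example, the VM should be powered off, and the guest OS needs driver support (via VMware Tools) for both devices before you switch.

    $vm = Get-VM -Name "app01"   # hypothetical, powered-off VM

    # Switch the SCSI controller to the paravirtualized adapter
    $vm | Get-ScsiController | Set-ScsiController -Type ParaVirtual

    # Replace the NIC type with VMXNET3 (keeps the network assignment)
    $vm | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false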

3.   Management and monitoring design:

  • Management Guidelines: in general, limit the number of monitoring and management agents that use the service console. Use tools like the vSphere CLI, vSphere PowerCLI or the vMA instead.
  • Host Installation & Configuration: to simplify host installation, use the ESXi Embedded version wherever possible and use automated installers for ESX. Post-installation tasks can be applied easily using the Host Profiles feature.
  • Number of vCenter Server Systems: the number of vCenter Servers depends on two key factors: the infrastructure size (is it exceeding the maximums?) and the requirements of other products like Site Recovery Manager. The geographical location of the datacenters is also a good reason to deploy multiple vCenter Servers. If the design includes multiple vCenter Servers, configure vCenter Linked Mode unless the systems are administered by different teams and require separate authentication systems.
  • Templates: a good approach is to configure one template per OS type. Templates ease administration, and deploying from them is faster than building new VMs from scratch (see the sketch after this list). A good way to save storage costs is to place the templates on less expensive storage.
  • vCenter Update Manager: for updating the ESX(i) hosts, vCenter Update Manager is the obvious choice. Updates are automated and the compliance check is built-in.
  • Time Synchronization: time synchronization must be maintained between the critical components of every infrastructure. Configure the VMs to sync their time from a PDC or an internal stand-alone NTP server (a host-side NTP sketch follows this list). For more info on timekeeping in VMs, read the “Timekeeping in VMware Virtual Machines” document.
  • Snapshot Management: develop and maintain a snapshot policy for the virtual infrastructure to prevent performance issues and storage space overhead (see the snapshot report sketch after this list). Alternatively, you could make snapshots part of the change management procedure.
  • CIM & SNMP: on ESXi, the CIM component is installed by default. Vendor versions of the ESXi software might provide more information than the standard VMware version. Enabling SNMP might be useful in an infrastructure where an SNMP management application is already running.
  • Performance Monitoring & Alarms: the first question that arises is what to monitor. You can use the organization’s SLAs in combination with the “Performance Best Practices for VMware vSphere” document to determine the monitoring strategy, and then configure alarms to meet those requirements (see the statistics sketch after this list).
  • Logging: just like with monitoring and alarms, the first question that arises when configuring logging is what to log and how long the information should be retained. Longer retention means more information for troubleshooting and auditing, but if there are no specific requirements from the organization, the best approach is to keep the defaults. To simplify log management, let a central logging system take care of storing and archiving the logs. If there is already a logging server running in the infrastructure, use it; otherwise, use the one provided with the VMware vMA (vilogger).
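
A hedged PowerCLI sketch of the template workflow; every VM, template, datastore and host name below is a hypothetical example.

    # Convert a prepared 'golden' VM into a template on cheaper storage
    New-Template -VM (Get-VM "w2k8-gold") -Name "tpl-w2k8" `
        -Datastore (Get-Datastore "SATA-TIER2-01")

    # Deploy a new VM from that template
    New-VM -Template (Get-Template "tpl-w2k8") -Name "web01" `
        -VMHost (Get-VMHost "esx01.example.com")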
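
On the host side, NTP can be configured with a few standard PowerCLI cmdlets; the host and NTP server names are made-up examples.

    $esx = Get-VMHost -Name "esx01.example.com"   # hypothetical host

    # Add an NTP source and make the ntpd service start automatically
    Add-VMHostNtpServer -VMHost $esx -NtpServer "ntp.example.com"
    $ntpd = Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "ntpd" }
    Set-VMHostService -HostService $ntpd -Policy "on"
    Start-VMHostService -HostService $ntpd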
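
As a starting point for a snapshot policy, a report of aged snapshots is easy to build. The 14-day threshold is an arbitrary example, and the destructive cleanup step is left commented out on purpose.

    # Report snapshots older than 14 days -- candidates for cleanup
    Get-VM | Get-Snapshot |
        Where-Object { $_.Created -lt (Get-Date).AddDays(-14) } |
        Select-Object VM, Name, Created

    # After review, remove them (uncomment to actually delete):
    # Get-VM | Get-Snapshot |
    #     Where-Object { $_.Created -lt (Get-Date).AddDays(-14) } |
    #     Remove-Snapshot -Confirm:$false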
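
And for choosing alarm thresholds, historical statistics help. This sketch pulls a week of CPU usage for one VM; the VM name is a hypothetical example and cpu.usage.average is a standard vCenter counter.

    # Summarize a week of CPU usage as input for alarm thresholds
    Get-Stat -Entity (Get-VM "app01") -Stat "cpu.usage.average" `
        -Start (Get-Date).AddDays(-7) |
        Measure-Object -Property Value -Average -Maximum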

Course summary

After three days of an immense load of information about designing a new VMware vSphere 4 infrastructure, it is time to recap the whole course. In my opinion, the VMware vSphere 4 Design Workshop will give you a clear, step-by-step picture of the design process. The course will not teach you “the best” way to design a new infrastructure or give you “the best” practices, because there is no such thing; best practices are merely guidelines. Every design is different, and every decision made in a design has its pros & cons. Just make sure you understand why those decisions were made and that you are able to explain them in your design. The course is closely in line with the VMware Plan and Design Kit for vSphere that is available through the VMware Partner Central, so I suggest you take a look at that as well.

The course is not intended for administrators but for system engineers, (virtual) infrastructure specialists and consultants responsible for designing new virtual infrastructures. The course length is too short in my opinion; one more day would be sufficient to discuss some of the topics in more depth and to share experience among the participants. Overall, it is a good, informative course and I would recommend it to anyone who is interested in designing VMware vSphere 4 infrastructures.

Cheers!

– Marek.Z
