Default Reasoning

Construction of sensible guesses when some useful information is lacking and no contradictory evidence is present…

Archive for the ‘Dell’ Category

How to install Dell MD Storage Array vCenter Plug-in.

Posted by Marek.Z on 2 February 2012

The Dell Modular Disk Storage Array vCenter Management Plug-in is, as the name suggests (doh! :) ), a plug-in for your vCenter Server that provides integrated management of the Dell MD series of storage arrays through the vSphere Client. It allows you to configure hosts against the storage arrays and to create, map, and delete virtual disks on them. It can also create hardware snapshots, virtual disk copies, and remote replication, but these are premium features that need to be purchased separately. I must say, I have been working with this plug-in for several days now and I’m really starting to like it! Before you start the installation, make sure you have downloaded the latest version of the plug-in from Dell. The installation itself is easy and straightforward, so I won’t go into detail here, but there are some caveats you should consider during and after the installation.

SSL and Non-SSL Jetty Port Numbers

After the initial installation, you will be presented with a configuration window for the Jetty Web Service port. If you have VMware Update Manager running on your vCenter Server, change the port number to something else, because port 8084 is already used by Update Manager.

If you don’t change it, the plug-in will not be enabled in vCenter and you’ll get an error when you try to use it.
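Not sure whether something is already listening on port 8084 on your vCenter Server? A quick check from a command prompt will tell you. This is plain Windows netstat and tasklist, nothing plug-in specific; replace 1234 with the PID that netstat reports in its last column:

  C:\> netstat -ano | findstr ":8084"
  C:\> tasklist /FI "PID eq 1234"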

Continue with the configuration wizard and provide the IP address of the vCenter Server, the password for the Administrator account and, optionally, an e-mail address. When the registration with vCenter is complete, open your vSphere Client and log in to the vCenter Server. Install the SSL certificate and ignore the warning. You should now have the Dell MD Storage Array vCenter Plug-in icon in your vCenter under Solutions and Applications.

Assign Storage Administrator role to the users

If you go straight to the plug-in after the installation, you’ll receive an error. In order to use the plug-in, you first have to assign the Storage Administrator role to the appropriate users. Go to Roles under Administration on the vCenter Server and add a new role. Name it Storage Administrators and assign Read/Write permission under the Storage Administrator privilege.

Close the window and go to Hosts and Clusters. Select the vCenter object, right-click it and select Add Permissions. In the Assigned Role pane, pull down the menu and select Storage Administrators. Under Users and Groups, click the Add button and add the appropriate users to the Storage Administrators group.

Close and reopen the vSphere Client for the permissions to take effect.

Add the storage array to the plug-in

Open the plug-in once again, install the SSL certificate to the Trusted Root Certification Authorities store and accept the SSL warning. Next, click Add Array and enter the IP addresses of both controllers as well as the password for the array. The plug-in should successfully connect to the array and you can now manage your storage array from the vCenter Server. Have fun!

Cheers!

- Marek.Z

Posted in Dell, PowerVault, Tools & Utilities, vCenter, VMware | Leave a Comment »

Lessons Learned: Dell PowerConnect 5524 switches and vSphere 5.

Posted by Marek.Z on 30 January 2012

Some time ago, I configured a pair of Dell PowerConnect 6224 switches for an iSCSI storage network and wrote a small blog post about the configuration. This time, I had a chance to work with Dell PowerConnect 5524 switches, which were also used for an iSCSI storage network. These switches are cheaper and a bit less powerful than the 6224 series, but still good for a small, dedicated iSCSI network. Before you begin with the configuration, update the firmware if applicable, connect the stack cables, run the configuration wizard, set the (enable) password, etc. The rest of the configuration is quite straightforward, just like on the 6224 series, but there are some settings that should be considered:

  • Create a dedicated iSCSI VLAN and add appropriate ports to the VLAN.
  • Turn on the iSCSI Auto-Configuration feature; this will enable Jumbo Frames, set the Spanning-Tree Port-Fast feature, disable Unicast Storm Control and enable Flow Control.
  • Set the speed of the ports in the iSCSI VLAN to a fixed 1 Gbit.

Here is a quick how-to of the configuration.

Create a dedicated VLAN for iSCSI traffic

  1. Log in to the switch and enter the configuration mode.
  2. Enter the VLAN database: Switch(config)# vlan database
  3. Create the VLAN: Switch(config-vlan)# vlan 2
  4. Go back to config mode: Switch(config-vlan)# exit
  5. Enter the VLAN 2 interface configuration: Switch(config)# interface vlan 2
  6. Name the VLAN: Switch(config-if)# name iSCSI
  7. Go back to enable mode: Switch(config-if)# end
  8. Verify: Switch# show vlan

Enable iSCSI Auto-Configuration

  1. Enter the configuration mode and type: Switch(config)# iscsi enable
  2. You will be asked whether you want to continue, because Flow Control will be enabled on all interfaces. Answer with Yes.
  3. Save your settings: Switch# write memory
  4. Reload the switch: Switch# reload
  5. After the reload, verify the iSCSI settings with: Switch# show iscsi

Assign the interfaces to the iSCSI VLAN

In this case, ports 1 to 10 on switch 1.

  1. Select multiple interfaces: Switch(config)# interface range gigabitethernet 1/0/1-10
  2. Add the interfaces to VLAN 2: Switch(config-if-range)# switchport access vlan 2
  3. Disable auto-negotiation on the iSCSI ports: Switch(config-if-range)# no negotiation
  4. Force the speed to 1 Gbit: Switch(config-if-range)# speed 1000
  5. Save your settings: Switch# write memory
  6. You can view the configuration with: Switch# show run

Repeat the steps above for the interfaces on switch 2; use the interface range gigabitethernet 2/0/1-10 command to select ports 1 to 10 on switch 2. It’s also a good idea to shut down the unused ports on the switches for security reasons. The complete command sequence for one switch member is summarized below.
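For reference, here is the whole configuration for switch member 1 in one place, simply a consolidation of the steps above. VLAN 2 and ports 1 to 10 are just the values used in this post, so adjust them to your own environment; the iscsi enable command will still ask for confirmation before enabling Flow Control on all interfaces.

  Switch# configure
  Switch(config)# vlan database
  Switch(config-vlan)# vlan 2
  Switch(config-vlan)# exit
  Switch(config)# interface vlan 2
  Switch(config-if)# name iSCSI
  Switch(config-if)# exit
  Switch(config)# iscsi enable
  Switch(config)# exit
  Switch# write memory
  Switch# reload
  Switch# configure
  Switch(config)# interface range gigabitethernet 1/0/1-10
  Switch(config-if-range)# switchport access vlan 2
  Switch(config-if-range)# no negotiation
  Switch(config-if-range)# speed 1000
  Switch(config-if-range)# end
  Switch# write memory

For switch member 2, run the second half again with interface range gigabitethernet 2/0/1-10.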

That’s it, you’re done! :)

All you have to do now is connect the cables to the switch, storage array and servers and you’re good to go.

Cheers!

- Marek.Z

Posted in Dell, ESXi 5, Hypervisor, iSCSI, PowerConnect, Storage, VMware | 1 Comment »

Preparing Internal Flash Cards on Dell R710 for ESXi 5.x Installation.

Posted by Marek.Z on 26 January 2012

On my last project, I worked once again with Dell PowerEdge R710 servers, but this time the customer followed our advice and purchased the servers with internal 2 GB flash cards. Auto Deploy would of course have been even more awesome :) but due to the customer’s limited knowledge of vSphere, we decided to go with the flash cards. During the installation of ESXi 5.0, I noticed something unusual: the flash cards were detected correctly in the BIOS, but the ESXi installer failed to install the software. Apparently, the flash cards were not prepared for the installation.

Here is a quick guide on how to prepare the flash cards for the installation of the vSphere 5 Hypervisor.

  1. Place the flash card in the card reader of your laptop or PC.
  2. Windows will detect the card and will ask you to format it. In my case it failed.
  3. Fire up DiskPart and create a new partition (the complete DiskPart session is also shown after this list).
  4. First, list the disks in your system: DISKPART> list disk
  5. Select the correct disk (in my case, disk 2): DISKPART> select disk 2
  6. Create a new partition: DISKPART> create partition primary
  7. Format the disk with FAT32 as you normally would.
  8. Place the flash card back in the server, power on and go to BIOS.
  9. Make sure the flash card is the first boot device in the Boot Sequence settings.
  10. Verify that the USB Flash Drive Emulation Type is set to Hard Disk.
  11. Save your settings and reboot the server.
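For reference, steps 3 to 7 can also be done as a single DiskPart session. This is just a sketch: disk 2 happens to be the disk number on my machine (always check the output of list disk first), the clean command wipes the card’s existing partition table, and formatting from within DiskPart is an alternative to formatting the card in Explorer as described above.

  C:\> diskpart
  DISKPART> list disk
  DISKPART> select disk 2
  DISKPART> clean
  DISKPART> create partition primary
  DISKPART> format fs=fat32 quick
  DISKPART> assign
  DISKPART> exit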

Wait for the ESXi installer to start and follow the default procedure. The vSphere 5 Hypervisor should now install correctly.

Cheers!

- Marek.Z

P.S. Don’t forget to wear an antistatic wrist strap when you remove and install the flash cards in the server ;)

Posted in Dell, ESXi 5, PowerEdge, VMware | 1 Comment »

Install Dell OpenManage on ESXi 4.1 using vSphere CLI or vMA.

Posted by Marek.Z on 9 May 2011

Here is a quick step-by-step guide on how to install the Dell OpenManage software on an ESXi host using the VMware vSphere Command-Line Interface (vSphere CLI) or the VMware vSphere Management Assistant (vMA). After the installation, you’ll have to enable the CIM OEM provider so that you can manage the host with Dell OpenManage Server Administrator. Before you begin, make sure you have the following:

  • VMware vSphere CLI installed on your system or
  • VMware vMA up and running
  • Downloaded Dell OpenManage software bundle for ESXi
  • VMware vSphere Client (optional)

Ready? Go!

Part 1: Installing the software and enabling the CIM OEM provider using the vSphere CLI

  1. First, put the ESXi host in maintenance mode, either through the vCenter Server GUI or with the vSphere CLI by typing the following command (the complete command sequence is also summarized after this list). Make sure you execute the command from the “C:\Program Files\VMware\VMware vSphere CLI\Perl\apps\host” directory. Type: C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation enter_maintenance --url https://<vCenter_Server>/sdk/vimService.wsdl
  2. Provide the vCenter Server credentials.
  3. If successful, you will see “Host <hostname> entered maintenance mode successfully” message.
  4. Next, install the software by typing: C:\Program Files\VMware\VMware vSphere CLI>vihostupdate.pl --server <FQDN_ESXi_Host> -i -b D:\…\OM-SrvAdmin-Dell-Web-6.5.0-2247.VIB-ESX41i_A01
  5. Enter the root username and password of the host and press Enter.
  6. Wait until the installation is finished; a message will confirm that the installation completed successfully.
  7. Reboot the host by typing: C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation reboot --url https://<vCenter_Server>/sdk/vimService.wsdl
  8. The installation part is finished. Next, enable the CIM OEM provider on the host.
  9. Enter the following command: C:\>…\vicfg-advcfg.pl --server <FQDN_ESXi_Host> --set 1 UserVars.CIMoemProviderEnabled
  10. Enter the root credentials and press Enter.
  11. Reboot the host.
  12. Wait until the host is back online and exit the maintenance mode by typing: C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation exit_maintenance --url https://<vCenter_Server>/sdk/vimService.wsdl
  13. Enter the vCenter Server credentials and press Enter.
  14. Done! :)
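Putting Part 1 together, the whole flow looks roughly like this. The <FQDN_ESXi_Host> and <vCenter_Server> placeholders and the abbreviated paths are the same as in the steps above, and each command will prompt for the appropriate credentials.

  C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation enter_maintenance --url https://<vCenter_Server>/sdk/vimService.wsdl
  C:\Program Files\VMware\VMware vSphere CLI>vihostupdate.pl --server <FQDN_ESXi_Host> -i -b D:\…\OM-SrvAdmin-Dell-Web-6.5.0-2247.VIB-ESX41i_A01
  C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation reboot --url https://<vCenter_Server>/sdk/vimService.wsdl
  C:\>…\vicfg-advcfg.pl --server <FQDN_ESXi_Host> --set 1 UserVars.CIMoemProviderEnabled
  C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation reboot --url https://<vCenter_Server>/sdk/vimService.wsdl
  C:\>…\hostops.pl --target_host <FQDN_ESXi_Host> --operation exit_maintenance --url https://<vCenter_Server>/sdk/vimService.wsdl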

Part 2: Installing the software and enabling the CIM OEM provider using the vMA

  1. First, copy the downloaded Dell OpenManage software to a directory on the vMA. In my case, I created a directory called /Software/DellOpenManage under the /home/vi-admin directory. (The complete vMA command sequence is also summarized after this list.)
  2. Login directly or through SSH to the vMA.
  3. First, add the ESXi host to the vMA: [vi-admin@vMA/]$ vifp addserver <FQDN_ESXi_Host>
  4. Provide the root password for the ESXi host and press Enter.
  5. Set the ESXi host as the target for this session: [vi-admin@vMA/]$ vifptarget --set <FQDN_ESXi_Host> (Tip: hit the Tab key for a list of known servers)
  6. Place the host in the maintenance mode by typing: [vi-admin@vMA/][Server_Name]$ vicfg-hostops -o enter
  7. Wait until the host enters maintenance mode and install the software by typing: [vi-admin@vMA/][Server_Name]$ vihostupdate -i -b /home/vi-admin/Software/DellOpenManage/
  8. Wait until the software is installed and reboot the server by typing: [vi-admin@vMA/][Server_Name]$ vicfg-hostops -o reboot
  9. After the reboot, enable the CIM OEM provider by typing: [vi-admin@vMA/][Server_Name]$ vicfg-advcfg -s 1 UserVars.CIMoemProviderEnabled
  10. Reboot the server once again.
  11. Done! :)
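The same flow on the vMA, condensed into one session. <FQDN_ESXi_Host> is the host you added in step 3 and <bundle> stands for the file name of the Dell OpenManage offline bundle you downloaded (I’m deliberately leaving the exact name out); the last command takes the host out of maintenance mode again.

  [vi-admin@vMA/]$ vifp addserver <FQDN_ESXi_Host>
  [vi-admin@vMA/]$ vifptarget --set <FQDN_ESXi_Host>
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vicfg-hostops -o enter
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vihostupdate -i -b /home/vi-admin/Software/DellOpenManage/<bundle>
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vicfg-hostops -o reboot
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vicfg-advcfg -s 1 UserVars.CIMoemProviderEnabled
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vicfg-hostops -o reboot
  [vi-admin@vMA/][<FQDN_ESXi_Host>]$ vicfg-hostops -o exit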

Alternatively, you can enable the CIM OEM provider using the vSphere Client after the software installation and reboot of the host:

  1. Select the host in the vCenter Server and navigate to Configuration -> Software -> Advanced Settings.
  2. Click on UserVars in the left panel and change the value of the CIMoemProviderEnabled field to 1. Click OK.
  3. Restart the ESXi host.
  4. Wait until the host is back online, exit the maintenance mode and you’re done :)

Cheers!

- Marek.Z

Posted in Dell, ESXi 4, OpenManage, VMware | 12 Comments »

Dell PowerEdge R710 BIOS settings for VMware vSphere 4.x.

Posted by Marek.Z on 14 April 2011

Recently, I was involved in a big infrastructure refresh project for one of our customers, spread across different locations in Europe. The old hosts were replaced with brand new Dell PowerEdge R710 servers with Intel Xeon X5650 processors and 96 GB of memory. All hosts were installed with the vSphere Hypervisor (ESXi). Here are some best-practice BIOS settings that we used during the project.

Power on the server and press F2 during the initialization process to enter the BIOS and let’s start with:

Memory Settings:

  • Set Node Interleaving to Disabled

Processor Settings:

  • Set Virtualization Technology to Enabled
  • Set Turbo Mode to Enabled
  • Set C1E to Disabled

Integrated Devices:

  • Set the Embedded NIC1 to Enabled without PXE

Serial Communication:

  • Set Serial Communication to Off
  • Set Redirection After Boot to Disabled

Power Management:

  • Set for Maximum Performance

Save the settings and reboot the server. You can now start the installation of the VMware vSphere Hypervisor :)

Cheers!

- Marek.Z

Posted in Dell, ESXi 4, PowerEdge, VMware, vSphere | 21 Comments »

 