Posts tagged storage
I haven’t used the vSphere Storage Appliance in a production environment yet, since it still has room to mature both technically (it can’t expand beyond three nodes, for example) and in pricing (low-end iSCSI arrays sit at a similar price point). However, while reading and reviewing Brian Atkinson’s VCP5 Study Guide, I recently found out about the vSphere Storage Appliance Offline Demo.
The demo walks through setting up the VSA, putting nodes in maintenance mode, testing resiliency, and so on in a semi-interactive manner; the interface is a guided walk-through styled after the vSphere Client. If you haven’t used the product yet, it’s a neat way to get familiar with it. The only publicity for the demo seems to be the post on the community forums, so I imagine a lot of people haven’t had a chance to try it yet.
The question of whether to defragment virtual guests popped up on the VMware Communities forums again, and I remember debating this with a colleague a few months ago. A Google search turns up differing opinions (the pro-defrag ones tend to be older documents), so the decision can be confusing. I wanted to clarify why virtual machines should not be defragmented, drawing on the sources listed below:
- Thin Provisioning: Performing a defrag will cause significant growth of your thin provisioned disks. Imagine an administrator who runs scheduled defrags with a third-party tool; the growth has the potential to be catastrophic for your datastores. A datastore that was 75% full can suddenly be completely full.
- Changed Block Tracking: Most backup utilities, such as Veeam Backup & Replication, use CBT to minimize the amount of data that needs to be backed up. A defrag changes a large number of blocks by design and will cause your backups to skyrocket in size.
- Snapshots: Defragmentation will also cause the delta disks of any VMs with open snapshots to grow. Of course, snapshots are not backups and should be cleared out regularly, but this is still a drawback.
- Linked Clones: If utilizing Linked Clones, for example in your VMware View VDI environment, you’re again going to run into the linked clone disks growing unnecessarily. It would be better to defragment the parent disk that your replicas are based on instead of each desktop created from them.
- Unnecessary Disk I/O: Defragmentation is going to cause a lot of disk I/O on your SAN. Theoretically, you could try to carefully time your defrag cycle for off-hours to avoid this.
- No Observable Benefit: VMware’s own internal tests show no noticeable improvements after defragging virtual machines on SAN/NAS devices.
- Storage Auto-tiering/SAN Block Handling: With a large number of vendors now utilizing SAN-based auto-tiering (Equallogic, Compellent, Nimble, etc.), the hot spots on your VM have likely been moved to a higher tier. After a defragmentation, the auto-tiering has to relearn the access patterns, which can take a while depending on the vendor. Moreover, the defragmentation tool is unaware of how the SAN lays out blocks on disk, and defragmenting a volume that sits on a storage pool is not recommended whether VMware is involved or not.
- Solid-State Drives: SSDs are becoming increasingly common, and there is no reason to defragment this type of storage since no disk head movement is involved.
- Some Vendors Specifically Advise Against It: NetApp states, “VMs stored on NetApp storage arrays should not use disk defragmentation utilities because the WAFL file system is designed to optimally place and access data at a level below the guest operating system (GOS) file system. If a software vendor advises you to run disk defragmentation utilities inside of a VM, contact the NetApp Global Support Center before initiating this activity.”
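To put the thin-provisioning point above in concrete terms, here is a quick back-of-the-envelope sketch (all sizes are hypothetical). A guest-level defrag rewrites blocks the VMDK has never touched, so each thin disk can inflate toward its full provisioned size:

```shell
# Hypothetical numbers: a 2048 GB datastore, currently 75% full,
# backing thin disks that are over-provisioned to 2400 GB in total.
capacity_gb=2048
used_gb=1536                 # 75% of capacity
provisioned_gb=2400          # sum of the thin disks' provisioned sizes

# After an aggressive defrag, used space can climb toward the
# provisioned total, overrunning the datastore entirely:
headroom_gb=$((capacity_gb - used_gb))
shortfall_gb=$((provisioned_gb - capacity_gb))
echo "headroom before defrag: ${headroom_gb} GB"
echo "worst-case overcommit after defrag: ${shortfall_gb} GB"
```

The exact figures will vary by environment, but the shape of the problem is the same: thin disks only stay thin as long as the guest leaves untouched blocks untouched.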
If you’re using an Equallogic PS series array for your vSphere storage, it is highly recommended that you use the Equallogic MEM (Multipathing Extension Module) plugin. Some of the benefits your environment will reap are:
- Easing iSCSI setup
- Increasing bandwidth
- Reducing network latency
- Automating load balancing across multiple active paths
- Automating connection management
- Automating failure detection and failover
The following is a walk-through of an example installation; for more details on the steps performed, please consult the Technical Report. The prerequisites are:
- ESX or ESXi >= 4.1 with Enterprise licensing
- VMware vMA 4.01 or VMware CLI 4.1
- Equallogic array firmware >= 4.3
The following uses the vMA to perform the installation. The vMA is an excellent tool; in addition to helping with this setup, it can also serve as an ESXi syslog server.
- Obtain the zip file containing the setup Perl script and the package itself: https://www.equallogic.com/support/download_file.aspx?id=1101. You will need to SCP this over to your vMA server.
- Unzip the file you copied over, but do NOT unzip the zip file contained inside it; that inner zip is the VIB offline bundle.
- Place your host into maintenance mode.
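Pieced together, the vMA-side commands look roughly like the following sketch. The host name and bundle file name are placeholders, and the `setup.pl` flags are from memory of the MEM Technical Report, so verify them against your version; the commands are echoed as a dry run here since they require a live vMA/ESX environment.

```shell
# Hypothetical host and file names; commands echoed as a dry run.
host=esx01.example.com
bundle=dell-eql-mem-bundle.zip   # the inner zip, left un-extracted

echo "vifp addserver ${host}"                                    # register the host with the vMA
echo "vicfg-hostops --server ${host} --operation enter"          # enter maintenance mode
echo "./setup.pl --install --server=${host} --bundle=${bundle}"  # run the MEM setup script
echo "vicfg-hostops --server ${host} --operation exit"           # exit maintenance mode
```

After the install completes and the host exits maintenance mode, the Technical Report describes how to confirm that the Equallogic path selection policy is active on your datastores.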
Shrinking a volume is not available via the Equallogic GUI; it must be done via the CLI. Telnet into the group and select the volume to be shrunk.
You must then take the volume offline (in this case, I had already done so via the GUI). To shrink the volume, simply execute:
The size to enter here is the total size you want the volume to be after the shrink; it is NOT the amount you want to shrink the volume by. Additionally, append a G or M to the size to specify the unit.
A snapshot is automatically created prior to the shrink and can be deleted after testing. This obviously doesn’t cover the steps needed to shrink the partition residing on the volume, which vary depending on the guest filesystem.
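As a concrete sketch (volume name and sizes are hypothetical): to take a 500 GB volume named vmstore1 down to a total of 400 GB, the group CLI commands would look like the following. They are echoed as a dry run here since they must be run inside a telnet session to the group.

```shell
vol=vmstore1
target_gb=400   # the TOTAL size after the shrink, NOT the amount removed

echo "volume select ${vol} offline"
echo "volume select ${vol} size ${target_gb}G"
# The array creates a snapshot automatically before the shrink;
# delete it once you have confirmed the volume is healthy.
```

Note that the second command specifies 400G, not 100G; passing the delta by mistake would shrink the volume far more than intended.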