Interesting Q&A’s from July:
- Question: Is the VMware View 5.0 client compatible with View 5.1?
- Answer: Yes, the 5.0 version of the View Client is compatible with version 5.1. Versions 4.x of the View Client, however, must be upgraded.
- Question: How can the license count be found in VMware View?
- Answer: The license count is not available via the View Administrator; the product works on the honor system as long as there is a valid key. The count can be viewed from the ‘My VMware’ licensing portal. It is a best practice to combine multiple license keys into a single key to be applied within the View Administrator licensing page.
- Question: How to properly move virtual desktops among datastores with VMware View?
- Answer: For full virtual desktops (non-linked clones), a traditional Storage vMotion will suffice; however, Storage vMotion is not supported with linked-clone pools. To move linked clones, use the rebalance functionality within View (KB 1028754).
- Question: Can VMs be moved among AMD/Intel processors when powered off?
- Answer: From a vSphere standpoint, there are no problems with this. The concern here is when using a guest OS that is compiled specifically for either AMD or Intel. Modern, stock kernels should not have a problem moving between the two. For example, RHEL had issues due to installing with optimized kernels (KB 1909), but this is no longer done in RHEL 5 or 6.
- Question: How to use a static IP with floating desktop pool?
- Answer: Despite being named differently, a dedicated pool with refresh after first use would technically achieve this goal.
- Question: How to allow users to change their virtual desktop screen resolution with zero clients?
- Answer: The screen resolution will need to be changed on the zero client itself. Users should be able to do that via the menu on the top left, unless it has been explicitly locked down. View will only give the resolution that the zero client requests.
While entering vCenter credentials during a vCenter Operations deployment in the environment I inherited, the installer yielded the following error, which was relatively self-explanatory but had no results on Google: com.integrien.alive.common.security.ExpiredCertificateException
It turns out that the SSL certificate for vCenter had expired. The expiration had not affected anything else, but it appears vCops takes it very seriously. I had not run into the vCenter SSL certificate expiring before, which is because vCenter 4.x and later generate SSL certificates that last for 10 years; vCenter 2.5, however, generated SSL certificates that are only valid for 2 years.
The process to regenerate the SSL certificate for vCenter is described in KB 1009092: Regenerating expired SSL certificates after 2 years. Essentially, it involves taking the rui.key and rui.pfx from C:\ProgramData\VMware\VirtualCenter\SSL and using OpenSSL to generate a new self-signed certificate. In my case, I scp’d the files to a Linux server and used OpenSSL there instead of trying to use OpenSSL on Windows.
The commands used were:
- openssl req -new -x509 -days 3650 -sha1 -nodes -key rui.key -out rui.crt -subj "/C=US/ST=NC/L=CHARLOTTE/CN=FQDN.OF.VCENTER.COM"
- openssl pkcs12 -export -in rui.crt -inkey rui.key -name rui -passout pass:testpassword -out rui.pfx
The ‘testpassword’ is the default password used by VMware. After generating the files on the Linux server, I scp’d them back over to the Windows host, backed up the current keys, stopped vCenter, copied the new keys in, and started vCenter back up. Voilà, the new SSL cert was installed and the vCenter Operations install was able to proceed.
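Before copying the regenerated files back to the Windows host, it is worth sanity-checking them with OpenSSL. The sketch below (my own addition, not from the original KB) generates a throwaway key purely for illustration — in the real procedure rui.key already exists — then creates the 10-year self-signed certificate the same way as above and verifies both the new expiration date and that the certificate actually matches the private key:

```shell
# Throwaway key for illustration only; in the real procedure, use the
# existing rui.key copied from the vCenter SSL folder.
openssl genrsa -out rui.key 2048

# Same self-signed generation as above (10-year validity).
openssl req -new -x509 -days 3650 -sha1 -nodes -key rui.key -out rui.crt \
  -subj "/C=US/ST=NC/L=CHARLOTTE/CN=FQDN.OF.VCENTER.COM"

# Confirm the new expiration date (should be ~10 years out).
openssl x509 -in rui.crt -noout -enddate

# Confirm the certificate matches the private key:
# these two modulus hashes must be identical.
openssl x509 -noout -modulus -in rui.crt | openssl md5
openssl rsa  -noout -modulus -in rui.key | openssl md5
```

If the two hashes differ, the certificate was generated against the wrong key and vCenter will fail to start with it.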
Yesterday, an Experimenting with Windows 8 Desktops in View post was put up on the VMware EUC blog. I hadn’t used Windows 8 in general yet, so it seemed like a good way to knock out two birds with one stone: test Windows 8 and play with it on View.
The install process is essentially the same as deploying a new virtual machine in general, but there are a few gotchas:
- Windows 7 has to be selected as the Guest OS since there is no Windows 8 option.
- A fairly recent build of ESXi 5.0u1 is required.
- ‘Enable 3D Support‘ must be selected within the guest settings.
Without 3D support enabled, the guest will bug out and won’t be accessible via the traditional console or View. The actual install of Windows 8 is quick and easy in the familiar Windows 7/2008 style:
After installing the base OS, the typical View install requirements are needed: install VMware Tools, join to the domain, install View Agent, etc. Since Windows 7 is selected when installing the guest, it will install the Windows 7 VMware Tools; both the tools and the View agent install normally with no special flags or tinkering required.
The desktop pool that will hold the Windows 8 desktop does need some special settings to work properly automatically. All that is needed is to edit the pool settings, change ‘Allow users to choose protocol‘ to ‘No‘, and enable ‘Windows 7 3D Rendering‘. Without these settings, View will uncheck the ‘Enable 3D Support’ option that was selected earlier; of course, the pool settings can also be left alone, and the option manually re-enabled within the vSphere Client after adding the desktop to the pool. Also, since View pulls the OS information from the guest configuration settings, the View Administrator will show the guest as Windows 7.
Voilà, we now have a Windows 8 View virtual desktop:
Obviously, this is completely unsupported and no one should deploy this in production yet, but it’s good to see it works relatively well already so we should expect to see great Windows 8 support with VMware View as soon as Microsoft is ready to ship it.
Interesting Q&A’s from June:
- Question: How can we restrict certain users from accessing their desktops through the VMware View Security Server?
- Answer: This can be done through the use of tagging, as defined within the Architecture Planning guide in the Restricting View Desktop Access section.
- Question: How to shadow a View desktop?
- Answer: The ability to shadow a desktop through the vSphere Client console can be enabled via a GPO.
- Question: How does vRAM work with Fault Tolerance (FT) virtual machines?
- Answer: Both copies of the VM count against the vRAM total.
- Question: Where is the cold-clone ability in VMware Converter?
- Answer: VMware has removed the cold-clone CD from the Converter product. A replacement cold-clone CD has been created by the community: MOA.
- Question: Does disconnecting from an old vCenter and connecting to a new vCenter affect the hosts?
- Answer: HA/DRS will not be available while disconnected and reconnecting, but virtual machines will persist and not incur downtime. When migrating to a new version, be sure to read the upgrade guide and compatibility matrixes, as some versions such as 4.0 Update 2 are not compatible with 5.0 but are compatible with 5.0 Update 1.
- Question: Why is the View Client installation no longer available directly through the View Connection Server web site?
- Answer: This was done to decouple the client from the server portions, so that the client could be updated independently and more often.
- Question: How to hide local drives from being redirected inside the virtual desktop when using RDP?
- Answer: This can be done by modifying Local Group Policy or by creating and applying a GPO to the necessary desktops, as defined in KB 1013457.
There are a lot of good virtual backup products out there, and in the past I have used many different flavors. I had not had a chance to work with PHD Virtual Backup personally yet, so I made it a goal of mine to give it a try. On several of the forums I frequent, I have heard great things about the product. One individual of note switched from another large player to PHD Virtual simply due to their great support in comparison to others, and another switched due to the scalability.
I’m going to skip most of the generic install, backup setup, restore setup, and other details, since PHDVB offers videos and documents that detail these processes much better than I could. Instead, I hope to highlight some of the neat features and methodologies used.
The suite is packaged as an OVF template and runs on a customized Ubuntu build. Unlike others that require a Windows server, there is no need to pay extra for OS licensing. Since everything is self-contained, it is also one less OS install you have to manage. It also uses a PostgreSQL database, so there are no dependencies on MSSQL or MSSQL Express, which is a plus. This is very similar to the path VMware is taking with the vCenter Server Appliance.
To start out, you will need to install the console on your workstation, which adds the plugin to your vSphere Client.
Next up is the Virtual Backup Appliance, which is the engine of the product. Rolling out a Virtual Backup Appliance is as easy as deploying the OVF, configuring hypervisor credentials, and adding storage. It’s an extremely quick and easy process. Once finished, if you view the console for the new VBA, it will be a classic text-based interface that you will rarely need to touch again directly, as configuration is done through the vSphere Client PHDVB Console.
I did run into an odd issue where the filesystem on the appliance required a manual fsck. This required me to log into the console directly and run the command, but it was well documented on their knowledge base: PHDVBA contains a file system with errors, check forced.
The last piece to install is the PHDVB Exporter. This is a Windows application, which can reside on your BackupExec server for example, which takes the backups stored by the PHDVBA and converts them into OVF format for backup to tape. This is probably my favorite feature of the product, and will be discussed in greater detail later.
This is an area the other major players should learn from. As noted above, PHD Virtual Backup includes a plugin for the vSphere Client that allows the administrator to manage backups directly within the client, instead of having to RDP to a Windows server, open a legacy console, and manage everything from there. If I need to work on backups, I can do it from the vSphere Client, which I always have open anyway. This becomes truly awesome when you need to launch a one-off backup prior to a change. It’s such a pain to create a one-off in other products that I tend to use snapshots more than I probably should.
Rather than a pane inside vCenter, the console launches as a window on top of the vSphere Client, from which you can manage some of the more granular items, such as Configuration and Licensing, as well as gain more control of the backup/restore/replicate functions.
Setting up backup jobs is a straightforward process, and backups work as expected in regards to speed, deduplication, scheduling, stability, etc. You can also select which appliance performs the backups to increase efficiency. The additional, special options for a backup job are:
- Verify backup: None/New blocks only/All blocks
- Backup Powered off virtual machines
- Set backups as archived (won’t be deleted via retention policy)
- Quiesce the VM before backing up (Windows only)
- Use Changed Block Tracking
Pretty standard stuff, and it works well. The only oddity I noticed is in regards to the backup retention policy, which is managed on a per-appliance basis instead of a per-job basis. This means if you want a different retention period for a special job, you need a different appliance to do it.
You can select three different options for Storage Type: Attached Virtual Disk, NFS, and CIFS. The neat one here is ‘Attached Virtual Disk’, where it allows you to add a VMDK to the appliance and use it as a backup repository. This is a neat way to utilize the local storage in your hosts, if you so choose; however, performance may be better storing the backups elsewhere. If there ever comes a time the VMDK needs expansion, it is as easy as expanding the disk and then rebooting the appliance; it will expand the actual partition automatically. They have released a white-paper on Storage Best Practices for PHD Virtual Backup that is worth checking out.
There are two recovery options: file-level and VM-level. VM-level jobs are launched via the ‘Jobs’ tab of the Console, and file-level jobs via the ‘File Recovery’ tab. VM-level restores work as expected and restore directly; the file-level restores, however, operate in a unique manner. To restore individual files, an iSCSI target is created on the backup appliance and then, optionally, automatically mounted on the desktop that kicked off the restore job. This is a pretty nifty way to access files, and it works really well. With Windows, it’s more or less seamless: you will see a disk added that can be browsed via Windows Explorer.
With Linux guests, it’s a little trickier. Since Windows doesn’t have built-in support for Linux filesystems, external tools are required. The PHDVB User Guide recommends using either explore2fs or ext2explore (now renamed Ext2Read). Explore2fs claims support for ext2/ext3/LVM2, and its new beta version, called Virtual Volumes, adds support for ReiserFS. Ext2Read claims support for ext2/ext3/ext4/LVM. In the next release, these tools will no longer be required, as they’re implementing the ability to mount the backup on the VBA, which can then be presented out using CIFS.
As I mentioned earlier, this feature is the bee’s knees. The only downside is that it is not managed via the vSphere Client plugin, so it has to run on a Windows server and has a separate console. To start off, the option to share the backup folder via CIFS needs to be enabled on the VBA under the Connectors tab; it’s also worth mentioning that you can enable regular CIFS access in the same tab to view your backups directly, but they will not be in OVF format. Once this is enabled, the Exporter Console on the backup server is used to create a job, which is then either stored as a Windows Scheduled Task or manually run via the command line:
Once the job completes, you will find an OVF inside your specified Staging Location, along with a .txt file stating the VM name, time of backup, and source location. This OVF can be deployed via the typical ‘File -> Deploy OVF Template’ inside the vSphere Client without any dependency on the backup software. This feature will really shine if you have a large outage that also affects your backup infrastructure, whereas with other products you would first need to reinstall their backup software.
I think it’s worth mentioning that replication is also included, and I have heard good things about it; however, due to my lab size I did not have a chance to give that a test. Finally, in closing, I think PHD Virtual Backup for VMware is a very neat product. My personal highlights that I recommend to check out if you give it a trial in your lab or business are:
- Ubuntu-based Appliance
- No requirement for MSSQL
- Ability to use Attached Virtual Disks for backup storage
- vSphere Client Plugin Interface
- Ease of file-level recovery via iSCSI target
- PHD Exporter transforming backups to OVF format
While there are neat highlights in other products as well, PHD Virtual Backup provides some really unique, handy methods of doing things, and is definitely a product to consider when designing your backup architecture.
I haven’t used the vSphere Storage Appliance in a production environment yet, since it still has a bit of room for maturity both from a technical standpoint (not able to expand beyond three nodes, etc.) and from a pricing standpoint (low-end iSCSI arrays are at a close price point); however, I recently found out about the vSphere Storage Appliance Offline Demo while reading and reviewing Brian Atkinson’s VCP5 Study Guide book.
The demo goes through the process of setting up the VSA, putting nodes in maintenance mode, testing resiliency, etc. in a semi-interactive manner; the interface is basically a guided walk-through that appears as a vSphere client interface. If you haven’t used the product yet, it’s a neat way to get familiar with it. The publicity of this guide seems to be limited to the post on the community forums, so I imagine a lot of people haven’t had a chance to use it yet.
Interesting Q&A’s from May:
- Question: What is the difference between top and esxtop?
- Answer: esxtop is a modified version of top that gives detailed metrics on the virtual environment (More detail: Interpreting esxtop 4.1 statistics). top is now defunct and not included in ESXi, but in ESX it provides details on the Service Console only.
- Question: Can I upgrade to vSphere 5.0 and keep View at version 4.6?
- Answer: As confirmed by the VMware Product Interoperability Matrixes, if running vSphere 5.x then you must run View 5.x.
- Question: How can I backup virtual machines directly from the SAN?
- Answer: Using ‘SAN Transport Mode‘ within BackupExec, you can read directly from the storage array. Other vendors such as Veeam also support this backup mode.
- Question: What are the time-keeping best practices with vSphere?
- Answer: VMware’s Knowledge-Base states the best practice for time-keeping in a Windows guest is to use w32time or NTP inside the guest instead of using VMware Tools time synchronization (KB 1318).
- Question: How do you auto-connect USB by default with VMware View?
- Answer: These options can be set as command-line arguments to the View Client. The options are: "-connectUSBOnStartup XXX" and "-connectUSBOnInsert XXX", where XXX is either true or false.
- Question: Is it possible to license half of the CPUs inside an ESX(i) Host?
- Answer: No, you must physically remove the sockets or disable the socket via the BIOS, if that option is available for your system.
- Question: Attributes passed to the VM via the .vmx file do not show up?
- Answer: These attributes can be put into the .vmx file, but a reboot is required for them to take effect.
- Question: Running Mixed Clusters with vSphere 4.x and 5.x
- Answer: Mixed clusters of 4.x and 5.x are fully supported by VMware; however, you must not update virtual hardware to version 8 or VMFS to version 5 as these are not supported on ESX(i) 4.x.
- Question: How to install the correct VMware Tools version on virtual machines running with different builds?
- Answer: You can install the latest version of VMware Tools 5.x and run it on any version of ESX(i) 5.x and 4.x; it is fully backwards compatible with all patch levels of 5.x and 4.x. There is no need to install legacy tools.
The 5th annual Carolina VMware Users Summit was today in the Charlotte Convention Center, and the event was great as always. The speakers list featured a cast of VMware experts like Jason Nash, Scott Lowe, Alan Renouf, and more.
The keynote was “Two perspectives: The Past and Future of VMware Storage” by Satyam Vaghani. Satyam was one of the first 100 employees of VMware and played an important role in VMware’s storage architecture; he shared much of that history, discussing how they dealt with clustering, SCSI reservations, and more. He has recently moved on from VMware to a CTO position with another company, but he closed the keynote with where VMware is likely headed in regards to storage. He noted that VMware will likely work with storage array providers to allow the hypervisor to interface directly with the storage instead of needing VMFS, allowing VMware’s storage architecture to come full circle. It was an interesting topic and a great kickoff to the event.
I was able to catch the following sessions:
- vSphere Distributed Switch – Technical Deep Dive by Jason Nash
- PowerCLI 201 by Alan Renouf
- Network Architectures for VXLAN: Enabling Stateful vMotion with Existing Network Addressing by Arista Networks
- vSphere and Network-Attached Storage Design Considerations by Scott Lowe
- vCloud Director PowerCLI by Alan Renouf
- Hands-on Labs provided by Varrow
The sessions were all outstanding from a technical standpoint, and the speakers were excellent at keeping the audience engaged and interested. The hands-on labs from Varrow were implemented through View virtual desktops, and the systems were stable and speedy (which can be tough to maintain in these environments by the end of the day). I chose the EMC VNX lab, which went through iSCSI setup on the VNX platform with vSphere 5.x.
Of course, such as with all of these events, there was plenty of vendor swag, and I was lucky enough to win a helicopter from EMC:
Next year I’m hoping for a life-size version. A big thanks goes out to all of the speakers, vendors, and organizers for setting up this event.
Interesting Q&A’s from April:
- Question: How can SRM/vSphere Replication’s network utilization be throttled?
- Answer: There is no built-in method. The best solution is to use Network I/O Control, which requires Enterprise Plus licensing.
- Question: Is it possible to change swap file location for a running VM?
- Answer: Yes, this process is described in KB 2003956.
- Question: Can’t activate OEM XP Install after P2V
- Answer: P2V is successful, but cannot activate. This is due to OEM licensing, which is bound to particular hardware. Switch to a Volume Licensing Key or try contacting Microsoft.
- Question: How to properly use Storage vMotion with VMware View Virtual Desktops
- Answer: For Manual Pools (non-Linked Clones), Storage vMotion will work fine. For Linked Clones, rebalance the VMs to the new datastores.
- Question: Does adding more vCPUs do permanent damage?
- Answer: No, this only causes problems with older versions of Windows that had specific HALs for SMP/non-SMP. If there is a problem, the VM can be reverted back to the previous vCPU count.
- Question: Should re-assigning desktops move the profile in VMware View?
- Answer: Re-assigning users to new desktops does not move the profile. To get that experience, one could detach the persistent disk then recreate a desktop from it.
The question of whether to defragment virtual guests popped up on the VMware Communities forums again, and I remember debating this with a colleague a few months ago. If you do a Google search, you’ll find differing opinions (usually ones that are pro-defrag tend to be older documents), so whether to do it can be confusing. I wanted to clarify why virtual machines should not be defragmented courtesy of the sources listed below:
- Thin Provisioning: Performing a defrag will cause significant growth to your thin provisioned disk. Imagine an administrator that performs defrags using a tool on a scheduled basis. The growth has the potential to be catastrophic to your datastores. Your thin provisioned datastore at 75% is now completely full.
- Changed Block Tracking: Most backup utilities, such as Veeam Backup & Replication, use CBT in order to minimize the amount of data that needs to be backed up. A defrag will lead to lots of changed blocks by design, and will cause your backups to sky-rocket in size.
- Snapshots: Defragmentation will also cause any VMs that have snapshots to grow in size. Of course, snapshots are not backups and should be cleared out regularly, but this is still a negative.
- Linked Clones: If utilizing Linked Clones, for example in your VMware View VDI environment, you’re again going to run into the linked clone disks growing unnecessarily. It would be better to defragment the parent disk that your replicas are based on instead of each desktop created from them.
- Unnecessary Disk I/O: Defragmentation is going to cause a lot of disk I/O on your SAN. Theoretically, you could try to carefully time your defrag cycle for off-hours to avoid this.
- No Observable Benefit: VMware’s own internal tests show no noticeable improvements after defragging virtual machines on SAN/NAS devices.
- Storage Auto-tiering/SAN Block Handling: With a large number of vendors now utilizing SAN-based auto-tiering (EqualLogic, Compellent, Nimble, etc.), the hotspots on your VM have likely been moved to a higher tier. After a defragmentation, auto-tiering will have to relearn, which, depending on the vendor, can take a while. Moreover, the defragmentation tool is unaware of how the SAN is handling the disk layout, and defragmenting a volume on a storage pool is not recommended regardless of whether VMware is involved.
- Solid-State Drives: SSD drives are becoming increasingly common. There is no reason to defragment when using this type of storage as there is no disk head movement involved.
- Some Vendors Specifically Advise Against It: NetApp states, “VMs stored on NetApp storage arrays should not use disk defragmentation utilities because the WAFL file system is designed to optimally place and access data at a level below the guest operating system (GOS) file system. If a software vendor advises you to run disk defragmentation utilities inside of a VM, contact the NetApp Global Support Center before initiating this activity.”