3.30.2015

My KVM Guide Part IV: Piecing It All Together

I don't have a whiteboard in my cube yet, so you have the luxury of a napkin diagram.  The following shows how end-user tools such as virsh, virt-viewer and virt-manager communicate with libvirtd, which in turn drives KVM.

[Napkin diagram: virsh / virt-viewer / virt-manager -> libvirtd -> KVM]

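In lieu of the scan, here's the same idea from the command line.  A minimal sketch, assuming libvirtd is running and a guest named vm01 already exists (the name is just an example):

virsh -c qemu:///system list --all   # virsh asks libvirtd for all defined guests
virsh -c qemu:///system start vm01   # libvirtd spins up the QEMU/KVM process for the guest
virt-viewer -c qemu:///system vm01   # virt-viewer attaches to the guest's console through libvirtd
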
3.27.2015

My KVM Guide Part III: Overview & Components


Alright, since we already know at a very high level what KVM is (a hypervisor), let’s delve into the different components it is made of.  KVM is implemented as a kernel module that can be loaded to transform Linux into a Virtual Machine Manager (VMM).  Since Linux already had most of the tools and mechanisms needed to run several VMs, the developers only needed to add a few components to support virtualization.  Each process in a standard Linux environment runs in one of two modes: user-mode or kernel-mode.  The advent of KVM introduced a third: guest-mode, which relies on a virtualization-capable CPU.  In guest-mode, certain privileged instructions can be “trapped”, so to speak, and handed back to the hypervisor.  In KVM, each VM is implemented as a regular Linux process, so the Linux scheduler assigns computing power to the virtual machines, and memory is allocated via the Linux memory allocator.
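If you want to see the kernel-module side of this for yourself, here's a quick sketch (on AMD hardware the module is kvm_amd instead of kvm_intel, and most distros load these automatically):

sudo modprobe kvm_intel   # load the CPU-specific module; it pulls in the core kvm module
lsmod | grep kvm          # both kvm_intel and kvm should now be listed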



The two components that make up KVM are /dev/kvm and QEMU (Wow, it's that simple!).  Once the KVM kernel module is loaded (which is not enough to run virtual machines all by its lonesome), the /dev/kvm device node appears in the file system.  The hypervisor can be controlled through this interface via a set of ioctls: system calls that create new VMs and assign resources to them.  KVM also uses a generic emulator, the Quick Emulator (better known as QEMU), to present hardware to the VMs.  For each virtual machine, a separate QEMU process is started in user-mode, and certain emulated devices are virtually attached to it.  Read and write I/O operations from the VM are intercepted by the hypervisor and redirected to the QEMU process associated with that specific guest.
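Both pieces are easy to spot from a shell once a guest is running.  A rough sketch (the QEMU process name varies by distro: qemu-kvm, qemu-system-x86_64, etc.):

ls -l /dev/kvm       # the device node that appears once the kvm module is loaded
ps -ef | grep qemu   # one user-mode QEMU process per running VM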

“Since a virtual machine is simply a process, all of the standard Linux process management tools apply: one can destroy, pause, and resume a virtual machine with the kill command (or even using Ctrl-C and similar keyboard shortcuts) and view resource usage with top. Permissions are handled by the normal Linux method: the virtual machine belongs to the user who started it (which need not be root; all that is required is access to /dev/kvm), and all accesses are verified by the kernel.”

Sources: http://www.linuxinsight.com/files/kvm_whitepaper.pdf
http://www.cs.hs-rm.de/~linn/fachsem0910/hirt/KVM.pdf
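
To make the quoted point concrete, here's a hedged sketch of managing a running VM with nothing but standard process tools (PID 4242 is made up for the example):

pgrep -af qemu    # find the PID of the guest's QEMU process
kill -STOP 4242   # pause the VM
kill -CONT 4242   # resume the VM
top -p 4242       # watch the VM's CPU and memory usage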

My KVM Guide Part II: KVM History

A bit of history on KVM (wow, really?  I feel like I am writing a paper for grade school):

KVM was developed by a low-profile Israeli startup called Qumranet (pronounced: coom-rah-net), which was formerly known as Comanet.  The company was founded in 2005 by Benny Schnaider, Rami Tamir, Moshe Bar and Giora Yaron, and was sold to Red Hat in 2008 for $107 million (the KVM software itself was initially written by Avi Kivity).

KVM is a Type 1 hypervisor.  If you are not familiar with hypervisor types, there are 2.  I will let you guess what they are called.  Type 1 hypervisors run directly on a host to control the underlying hardware and manage guest VMs (e.g. KVM, ESXi, Hyper-V).  Type 2 hypervisors run on top of a standard operating system (e.g. VirtualBox, VMware Fusion).  “KVM is one of the most popular open-source virtualization technologies in use today - the first to be integrated into the vanilla Linux kernel.  Both IBM and Red Hat use it as the basis for their Linux virtualization technologies, and it is the most widely used virtualization technology in the OpenStack cloud as well.”  Source: http://www.eweek.com/cloud/how-did-kvm-virtualization-get-into-the-linux-kernel.html

My KVM Guide Part I: The Beginning


So, I started working at Nutanix a few weeks ago, and our platform ships with KVM by default.  Cool… no problem, I can learn that, except for the fact that I could not find a comprehensive resource online that takes you from an overview of the architecture through configuration, examples, tips, tricks, etc.  Therefore, Jen will build her own.

So what exactly is KVM?  The Kernel-based Virtual Machine (KVM) is an open source hypervisor that runs on Linux.  Its purpose in the world is the same as that of ESXi or Hyper-V: a hypervisor that aids in the consolidation of physical servers into virtual machines for resource and cost savings.

While starting my search for applicable resources, I found what seems to be a promising book on QEMU, KVM, Xen + libvirt: http://qemu-buch.de/de/index.php?title=QEMU-KVM-Book
… if you speak German.  The English translation was not so readable for me, but you could maybe muddle your way through it.  KVM installation is covered pretty thoroughly at the following site: http://pacita.org/books/server-setup/output/pdf/doc.pdf

9.15.2014

Registering UCS Domains with UCS Central

And now for the moment we've all been waiting for :]
UCS domains need to be registered with UCS Central in order to be managed through it.  Upon registration, you can elect which policies/configurations you want to be managed by UCS Central and which ones you want to keep local to UCSM.  Each registered domain can have different policies/configurations managed by UCS Central; you don't need to have the same global/local split across all UCS domains.  The following is a list of items that can be managed:
-Infrastructure and Catalog Firmware
-Time Zone Management
-Communication Services
-Global Fault Policy
-User Management
-DNS Management
-Backup and Export Policies
-Monitoring
-SEL Policy
-Power Allocation Policy
-Power Policy

You should also review the Consequences of Policy Resolution Changes and Consequences of Service Profile Changes on Policy Resolution tables to get an understanding of how policy resolution works with UCS Central:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-central/deployment-guide/1-0/b_UCSC_Deployment_Guide_10/b_UCSC_Deployment_Guide_10_chapter_0100.html#reference_E5D010B53E054876BDA0FD9D75A92E35

There are a couple of prerequisites for registering a UCS domain with UCS Central:
1. Configure an NTP server and ensure that Central and UCSM are in sync.  Make sure the timezones match as well, else you will end up with an FSM failure:

[Screenshot: FSM failure caused by the time zone mismatch]

You can verify the time from the CLI of both nodes:
ucs-esc-n25-B# show clock
Sun Sep 14 09:48:38 EDT 2014


central# show clock
Sun Sep 14 13:56:40 UTC 2014


Why do I have 2 servers in PST timezone configured for everything but PST timezone? :]
It is obviously recommended to configure Central and UCSM to point to an NTP server, but if you need to change the time manually on both servers, you can SSH to the CLI and perform the following:

ucs-esc-n25-B# scope system
ucs-esc-n25-B /system # scope services
ucs-esc-n25-B /system/services # set clock sep 14 2014 11 29 00

NTP Configuration for UCS Central can be found in the Operations Management tab -> Domain Groups -> [Domain Group] -> Operational Policies:

[Screenshot: UCS Central NTP settings under Operational Policies]

NTP Configuration for UCSM can be found under the Admin tab -> Time Zone Management:

[Screenshot: UCSM NTP settings under Time Zone Management]

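If you prefer the CLI over the GUI, NTP servers can also be added from the UCSM CLI.  This is a sketch from memory (pool.ntp.org is just an example server), so verify against the Cisco configuration guide for your release:

ucs-esc-n25-B# scope system
ucs-esc-n25-B /system # scope services
ucs-esc-n25-B /system/services # create ntp-server pool.ntp.org
ucs-esc-n25-B /system/services* # commit-buffer
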
2. Gather your Central and UCSM IPs as well as the Central shared secret (Shhh... it's a secret!)
**Note: You cannot change your UCSM IP address while it is registered with UCS Central.  If this needs to be done for whatever reason, you need to unregister, change the IP, and then re-register with Central.

In order to register your domain, launch UCSM and navigate to the Admin tab -> Communication Management -> UCS Central

[Screenshot: UCSM Admin tab -> Communication Management -> UCS Central]

Click on Register with UCS Central.

[Screenshot: the Register with UCS Central button]

You will be presented with a dialogue box to enter the UCS Central server IP address and shared secret.

[Screenshot: UCS Central registration dialogue box]

And then...

[Screenshot: registration result]

And of course if you would like to unregister your UCS from Central, simply click on the 'Unregister from UCS Central' button.
**Note: If the registered Cisco UCS domains have a latency of greater than 300ms for a round trip from Cisco UCS Central, there might be some performance implications for the Cisco UCS domains.
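
For the CLI-inclined, registration and unregistration can be done from the UCSM CLI as well.  This is a sketch from memory (the IP address is an example, and you will be prompted for the shared secret), so double-check the deployment guide for your release:

ucs-esc-n25-B# scope system
ucs-esc-n25-B /system # create control-ep policy 10.10.10.10
Shared Secret for Registration:
ucs-esc-n25-B /system/control-ep* # commit-buffer

And to unregister:

ucs-esc-n25-B# scope system
ucs-esc-n25-B /system # delete control-ep policy
ucs-esc-n25-B /system* # commit-buffer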