Discussion on Hypervisors and KVM



Presentation Transcript

Slide 1: 

SCALING COMPUTING UNITS
ReddyRaja

Slide 2: 

Approaches
- Scale within: the computer has more than one CPU; CPUs are enabled/disabled based on demand
- Scale out: provision one or more virtual machines on one core or on different cores

Slide 3: 

Virtualization
- Improves IT throughput and costs by using physical resources as a pool from which virtual resources can be allocated
- One physical server becomes multiple virtual servers
Popek and Goldberg virtualization requirements:
- Equivalence: a program running under the VMM should exhibit behavior essentially identical to that demonstrated when running directly on an equivalent machine
- Resource control: the VMM must be in complete control of the virtualized resources
- Efficiency: a statistically dominant fraction of machine instructions must be executed without VMM intervention

Slide 4: 

System Virtualization
- Consolidates systems, workloads and operating environments, using a single physical system to create multiple virtual systems
- Divides a single physical server into partitions; each partition can run an operating system
Hypervisors
- Provide the underpinnings for virtualization management: policy-based virtualization, virtual hard disks, life-cycle management, live migration, real-time resource allocation
- Allocate the system's processor, memory and other resources required to run an operating system
- Logically divide a single physical server or blade so that multiple OSes run securely on the same CPU, increasing CPU utilization


Slide 6: 

How Virtualization Works
In recent years, hardware and operating systems have matured to the point of making the promise of virtualization a reality. The most fundamental part of virtualization is the hypervisor, which acts as a layer between the virtualized guest operating system and the real hardware. In some cases the hypervisor is an operating system, as with Xen; in other cases it is user-level software, as with VMware. The virtualized guest operating system, or virtualized instance, is an isolated operating system that views the underlying hardware platform as belonging to it; in reality, the hypervisor provides that illusion.

Slide 7: 

Processor Support for Virtualization
- Due to the resurgence of interest in virtualization technology, microprocessor manufacturers have updated their processors with native support for virtualization
- This allows the processor to support a hypervisor directly and simplifies the task of writing hypervisors, as is the case with KVM
- The processor manages the processor states for the host and guest operating systems, and also manages I/O and interrupts on behalf of the virtualized operating system
- Examples: Intel-VT and AMD-V
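On Linux, the presence of these processor extensions is advertised in the "flags" line of /proc/cpuinfo: "vmx" for Intel-VT and "svm" for AMD-V. A minimal sketch of that check, assuming a Linux host (the helper names here are illustrative, not a real API):

```python
# Sketch: classify hardware virtualization support from a cpuinfo flags line.
# "vmx" marks Intel VT-x; "svm" marks AMD-V.

def virt_extension(flags_line):
    """Return 'Intel VT-x', 'AMD-V', or None for one cpuinfo flags line."""
    flags = flags_line.split()
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

def host_virt_extension(path="/proc/cpuinfo"):
    """Scan cpuinfo for the first flags line and classify it."""
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return virt_extension(line.split(":", 1)[-1])
    except OSError:
        pass
    return None

if __name__ == "__main__":
    print(host_virt_extension() or "no hardware virtualization support detected")
```

If neither flag is present, KVM's full-virtualization mode cannot be used on that host.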

Slide 8: 

Elements of a Hypervisor - Benefits
- Reduced power consumption, real estate, cooling and management costs
- Improved reliability
- Cost and energy advantages
Two of the most common approaches to software-emulated virtualization are full virtualization and paravirtualization. In full virtualization, a layer, commonly called the hypervisor or the virtual machine monitor, exists between the virtualized operating systems and the hardware; this layer multiplexes the system resources between competing operating system instances. Paravirtualization is different in that the hypervisor operates in a more cooperative fashion: each guest operating system is aware that it is running in a virtualized environment, so each cooperates with the hypervisor to virtualize the underlying hardware.

Slide 9: 

Hypervisor types
Type 1
- Runs directly on the hardware
- Also called full virtualization: the VMM exists between the guest OS and the hardware
- This layer multiplexes the system resources between competing operating system instances
- Offers a higher level of virtualization efficiency and security
- VMware is a Type 1 hypervisor
Type 2
- The guest OS is aware that it is running in a virtual machine environment and cooperates with the hypervisor
- Runs on a host operating system that provides virtualization services such as I/O device support and memory management
- Used when efficiency is less critical
- Also called paravirtualization; allows the fastest software-based virtualization, but does not support proprietary OSes
- Xen is paravirtualized, and now supports Windows in full virtualization with hardware support
With the advent of hardware support that abstracts the software side of virtualization, the difference between Type 1 and Type 2 has blurred.

Slide 10: 

KVM architecture
- Instead of creating major portions of an operating system for the hypervisor, KVM makes Linux itself the hypervisor
- KVM is a kernel module; this approach has simplified management and improved performance, the main reason developers added it to the kernel
- Kernel-resident virtualization infrastructure for Linux on x86 hardware
- First hypervisor to be part of the native Linux kernel (2.6.20)
- Developed and maintained by Avi Kivity at the startup Qumranet, now owned by Red Hat

Slide 11: 

KVM architecture (contd.)
- KVM provides x86 virtualization, with ports to PowerPC and IA-64 in progress
- KVM recently added support for SMP hosts
- Supports enterprise-level features such as live migration (allowing guest operating systems to migrate between physical servers)
- KVM supports full virtualization on hardware with Intel-VT and AMD-V support

Slide 12: 

KVM architecture (contd.)
Implementation
- KVM is implemented as a kernel module; by loading the module, the Linux OS becomes a hypervisor
- It has two main components:
- First, a kernel-loadable module that, when installed in the Linux kernel, manages the virtualization hardware, exposing its capabilities through the /proc file system
- Second, a component that provides PC platform emulation, supplied by a modified version of QEMU (QEMU executes as a user-space process, coordinating with the kernel for guest operating system requests)

Slide 13: 

KVM architecture (contd.)
- A device driver for managing the virtualization hardware; the driver exposes its capabilities via a character device, /dev/kvm
- A user-space component for emulating PC hardware; currently this is handled by the (modified) QEMU process
- When a new OS is booted on KVM, it becomes a process in the user space of the host operating system and is therefore schedulable like any other process
- KVM introduces a new state to the kernel called guest mode
- Each guest OS is mapped through the /dev/kvm device and has its own virtual address space that is mapped into the host kernel's physical address space
- I/O requests are mapped through the kernel to the QEMU process that executes on the host
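User-space programs talk to /dev/kvm through ioctls. A minimal sketch that queries the KVM API version; KVM_GET_API_VERSION is a real KVM ioctl (_IO(0xAE, 0x00)), while the wrapper function is illustrative:

```python
# Sketch: query the KVM API version through the /dev/kvm character device.
import fcntl
import os

KVM_GET_API_VERSION = 0xAE00  # _IO(KVMIO, 0x00), with KVMIO = 0xAE

def kvm_api_version(device="/dev/kvm"):
    """Return the KVM API version, or None if the device is unavailable."""
    if not os.path.exists(device):
        return None
    fd = os.open(device, os.O_RDWR)
    try:
        return fcntl.ioctl(fd, KVM_GET_API_VERSION)
    finally:
        os.close(fd)

if __name__ == "__main__":
    version = kvm_api_version()
    print("KVM API version:", version if version is not None else "KVM not available")
```

Real hypervisor front ends such as QEMU use the same device, issuing further ioctls to create VMs and virtual CPUs.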

Slide 14: 

Lguest architecture
- Another open source VMM
- Supports paravirtualization
- Minimal code base (5,000 lines)
- Has a bus architecture supporting various devices
- Uses hypercalls
- Easy to understand

Slide 15: 

Linux hypervisor benefits
- Using Linux as the hypervisor has real and tangible benefits
- Benefits from the steady progression of Linux and the large amount of work that goes into it
- Scheduling and memory-management innovations, and support for different processor architectures
- The host OS can be used as a platform alongside the guest OSes
- Standard Linux platform for application development (no new APIs or interfaces)

Slide 16: 

Hypervisors - the new battleground
VMware
- Type 1, a groundbreaking technology
- VMware manages to fully virtualize the notoriously complex x86 architecture using software techniques
- Achieves good performance and scalability
- VMware is a large and complex piece of software and, as a result, is giving way to simpler open source technologies
- VirtualCenter provides management and provisioning of virtual machines
KVM
- Relies on the newly available hardware support
- Very small code base and relatively simple
- Biggest benefit: it is open source
- Uses QEMU for hardware I/O emulation
VirtualBox
- Type 1 virtualization
- Open source, distributed under the GPL
Xen
- Type 1 open source hypervisor
- Large project offering both para- and full virtualization
- Designed as a standalone kernel; requires Linux to perform I/O
- This makes it large, because it has its own scheduler, memory manager, timer handling and machine initialization
- Close to native performance; runs directly on system hardware
QEMU
- A user-space emulator
- Supports a variety of guest processors on several host processors
- Since it runs in user space, it cannot achieve native speeds without a kernel accelerator
Microsoft Virtual Server
- Part of the Longhorn server
- For the Windows Server platform

Slide 17: 

Xen Architecture
The Xen virtual environment consists of several components that work together:
- Xen hypervisor
- Domain 0
- Domain management and control
- Domain U (DomU) PV guests
- Domain U (DomU) HVM guests

Slide 18: 

Xen Hypervisor
- Basic abstraction layer of software that sits above the hardware
- Responsible for CPU scheduling and memory partitioning of the virtual machines running on the hardware
- Abstracts the hardware for the virtual machines
- Controls the execution of virtual machines as they share the common processing environment
- Has no knowledge of networking, external storage devices, video or any other I/O functions

Slide 19: 

Domain 0
- A modified Linux kernel; a unique virtual machine running on the Xen hypervisor
- Has special rights to access physical I/O resources
- Interacts with the other virtual machines (Domain U PV and HVM guests)
- All Xen VMs require Domain 0 to be running before any other virtual machine is started
- Two drivers are included to support network and local disk access from Domain U PV and HVM guests:
- The Network Backend driver communicates directly with the local networking hardware to process all network requests coming from the virtual machines
- The Block Backend driver communicates with the local storage disk to read and write data from the drive, based on Domain U requests
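Domain 0 is typically administered with the xm tool (covered on a later slide); "xm list" prints a table of running domains. A hedged sketch of parsing that table, where the parsing helper is illustrative and the sample column layout (Name, ID, Mem, VCPUs, State, Time) follows xm's usual output:

```python
# Sketch: parse the columnar table printed by `xm list` into dicts.

def parse_xm_list(output):
    """Parse `xm list` output; returns one dict per domain row."""
    lines = [ln for ln in output.strip().splitlines() if ln.strip()]
    header = lines[0].split()          # column names from the first row
    return [dict(zip(header, ln.split())) for ln in lines[1:]]

# Illustrative sample output; Domain-0 always appears first.
sample = """\
Name            ID  Mem VCPUs State Time(s)
Domain-0         0 1024     2 r----- 120.3
guest1           1  512     1 -b----  10.5
"""

doms = parse_xm_list(sample)
print([d["Name"] for d in doms])  # → ['Domain-0', 'guest1']
```

Whitespace-splitting works here because none of xm's column values contain spaces.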

Slide 20: 

Xen (continued)
Domain U: PV guests
- All paravirtualized virtual machines running on a Xen hypervisor are referred to as Domain U PV guests
- They are modified Linux operating systems, Solaris, FreeBSD and other Unix operating systems
- A Domain U PV guest virtual machine does not have direct access to the hardware
- Recognizes that other VMs are running on the same machine
- Has two drivers, a Network driver and a Block driver
Domain U: HVM guests
- Run any standard or unmodified OS
- Not aware that they are sharing processing time on the hardware or that other virtual machines are present

Slide 21: 

Xen (continued)
- An HVM guest does not have the PV drivers; instead, a special daemon, qemu-dm, is started for each HVM guest in Domain 0
- The qemu-dm daemon supports the Domain U HVM guest for networking and disk access requests
- An HVM guest initializes as it would on a typical physical machine
- Has Xen virtual firmware to simulate the BIOS

Slide 22: 

Xen (continued)
Domain management and control:
- xend
- xm
- libxenctrl
- qemu-dm
- Xen virtual firmware

Slide 23: 

Xen Continued

Slide 24: 

Xen (continued)
Xen operation
- The Xen hypervisor does not itself handle network and disk requests
- Hence a Domain U guest must communicate, via the Xen hypervisor, with Domain 0 to accomplish a network or disk request

Slide 25: 

Other stuff
- 64-bit HP-UX
- 128 processor cores
- HP VSE (Virtual Server Environment): a pool of virtual servers

Slide 26: 

Hypervisors
- Kernel-based Virtual Machine (KVM): a Linux kernel module and hypervisor. Supports both architectures (AMD-V and VT-x) and requires one of them. Supports real-time guests.
- VirtualBox: runs on Windows, Linux, Mac OS X and Solaris. Supports both architectures.
- Xen: a separate and independent operating system that virtualizes everything else on the machine. Supports both architectures, but does not require them for supported guest OSes.
- Blue Pill: proof-of-concept malware.
- Hyper-V: Microsoft's Windows Server 2008 hosted platform (requires hardware virtualization support).
- LynxSecure: secure MILS hypervisor from LynuxWorks. Supports Intel VT-x and VT-d.
- Microsoft Virtual Server (also branded Microsoft Virtual PC, Windows Virtual PC): Virtual Server 2005 R2 SP1 supports hardware-assisted virtualization.
- Oracle VM: Oracle VM Server (GPL license) uses the Xen hypervisor, while Oracle VM Manager is closed source.
- Parallels Workstation and Parallels Desktop for Mac: lightweight hypervisor with Intel VT-x and AMD-V support.
- Parallels Server (beta): enterprise version of Parallels Workstation and Desktop for Mac. It will support Intel's IOMMU, VT-d.
- Padded Cell: virtual machine technology from Green Hills Software, hosted on the INTEGRITY real-time operating system. Supports both architectures.
- Real-Time Systems RTS Real-Time Hypervisor for x86.
- Sun xVM: xVM Server is based on Xen on x64.
- Virtual Iron: supports both architectures.
- VirtualLogix: supports both architectures.
- VMware Workstation 6, VMware Fusion, VMware Server: recent versions support both architectures.
- VMware ESX Server: requires hardware support to run 64-bit virtual machines.
- VMware Server: requires hardware support to run 64-bit virtual machines.
- TenAsys eVM Virtualization Platform for Windows.
