Been ages since I updated the blog. I've been really busy lately with lots of commitments and work-related tasks.
Just wanna share one thing to remember when you intend to run RHEL 6 on an EFI-based system. A few months ago I was tasked with setting up RHEL 6 on EFI firmware (an IBM System x3850 M3, if I'm not mistaken). Linux on IBM System x? Should be a piece of cake, right? Just put in the CD, set up the partitions and voila. How wrong I was. It took me and a friend of mine (an expert in Linux with more than 10 years of experience, mind you) around 8 hours to get it right.
To make matters worse, we could not find any documentation about it on the Internet, and even the Red Hat Knowledge Base and documentation didn't help (at that time at least, dunno about now). To cut a long story short, here is the simple guideline I can give you. We were installing 64-bit RHEL 6, BTW.
Linux boots the OS from the /boot partition. The problem is that after the installation, the system fails to even reach the bootloader menu, and the installation never completes without Anaconda raising a flag that it failed to install RHEL 6. It turns out that on EFI firmware the boot partition must be formatted as vfat for the firmware to detect it. So you have to mount /boot on an ext-based filesystem and /boot/efi on a vfat filesystem (Anaconda labels it as the EFI boot partition, if I am not mistaken). That should make the server able to boot the OS properly.
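For a kickstart-driven install, the working layout looks roughly like this. Treat it as a sketch: the sizes are arbitrary and you should double-check the exact --fstype values against your Anaconda version.

```
# Sketch of an EFI-friendly partition layout for a RHEL 6 kickstart.
# /boot/efi must be vfat (the EFI boot partition); /boot stays ext4.
part /boot/efi --fstype=efi  --size=200
part /boot     --fstype=ext4 --size=500
part pv.01     --size=1 --grow
volgroup vg_root pv.01
logvol /    --vgname=vg_root --name=lv_root --size=1 --grow
logvol swap --vgname=vg_root --name=lv_swap --size=4096
```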
What really annoys me is that Anaconda didn't do the proper detection to flag us that our filesystem configuration was not workable. Or put something in the Knowledge Base about this issue. However, if you do the installation using the default setup without any customization, it will actually detect that you are installing on an EFI system and lay out the partitions accordingly (that's how we managed to solve this problem, BTW).
Hope this can save some sysadmin from staying back late at night to solve the same issue.
I started reading with an open mind about "Xen and Theory of RHEL-evance", posted on the Citrix community blog by Simon Crosby. What appears to be a great title at first turns out to be mostly FUD about why KVM is doomed to failure, especially in the enterprise marketplace, and why Red Hat will drown with it. It did not have enough facts, just FUD most of the time. I would like to counter his so-called "facts" here, as it's been a long time since I last updated my blog anyway.
Firstly, Simon recommends that if you wanna go virtual with a Linux / RHEL mindset, you should go with Oracle Enterprise Linux (OEL) because …
It is a superior, enterprise class version of RHEL, and typically more up to date than it, and OEL is guaranteed to be compatible with RHEL.
What does Simon mean by a superior, enterprise-class version of RHEL? Typically more up to date, yet binary compatible at the same time? Last time I checked, OEL is based on RHEL and uses mostly CentOS patches. Could CentOS or OEL exist without RHEL in the first place? Seems full of bull to me. And with Oracle's shiny new Solaris added to the mix, how do you think that's gonna affect its Linux offerings?
Next, he also recommends SLES if you are in the non-Larry camp …
Alternatively, if you’re wary of giving Larry more control than he already has over your environment, Novell SUSE Linux offers a superb enterprise Linux platform boasting more than 3,000 certified applications, fully supported on Xen and XenServer, with complete support for SAP and (via Mono) many Microsoft .Net apps.
As the world moves to KVM, so do smart companies such as Novell. It is already a tech preview in SLES 11 and is expected to be fully supported in SLES 11 SP1. So thanks, Simon, for recommending people into the KVM world. BTW, welcome to the club, Novell; great to have you on board.
Availability of the KVM platform from Red Hat also seems to be another major issue. Simon notes that there is no freely downloadable product for Red Hat's KVM offering. Although I quite agree that RHEV is not freely available (for a reason that I will come to later), KVM support has been in RHEL since version 5.4, and for that matter OEL (my source in the Oracle Linux team in Australia seems to confirm KVM is in OEL) and CentOS have it in their distros. As I said earlier, both are built from RHEL's source and carry the same packages, kernel version, etc., minus the RH branding. So go ahead, download CentOS and run it as a hypervisor for your VMs. If you feel the urge to get support, you know who to call.
The reason Red Hat Enterprise Virtualization (RHEV) is not available as open source is that the current management console runs on .NET, which requires Windows Server and other Microsoft dependencies. This is because the code was developed by Qumranet, a start-up that is now part of Red Hat Inc. Red Hat is now heavily migrating all the tools to a Java-based application before releasing the source code (or so say my Red Hat sources). It will hopefully be launched with RHEV version 3 soon. And I believe Red Hat will, given its track record of releasing all its software, bit by bit, under the GPL: dogtag, 389ds, Teiid and the JBoss.org initiative. BTW, did Citrix release their Xen management tools under the GPL?
Next up, a history lesson.
Red Hat’s endeavors in virtualization started with profound endorsements of Xen, followed, when Novell SUSE Linux was the first vendor to ship Xen 3.0 in 2006 as a component of SLES 10, with an accusation that Novell was “irresponsible” and then by a completely unsubstantiated statement by Red Hat VP Alex Pinchev to ZDNet Australia, that “Xen is not stable yet, it’s not ready for the enterprise.”. Then, as RHEL 5 belatedly readied itself for delivery in late 2006, Red Hat proudly proclaimed that only it could deliver an enterprise class virtualization product based on Xen, together with a rich management infrastructure.
When Novell launched its Xen offering to the market with SLES 10, the Xen community changed its default scheduler not long after, due to the rapid pace of Xen development at the time. Novell also launched its offering without any real management tools (which it later included in SP1).
Red Hat, on the other hand, launched with libvirt and virt-manager to protect its investment in the virtualization marketplace. libvirt is an API that hooks into whatever virtualization technology sits underneath the OS, so that if any hot new virtualization technology surfaced (in this case, KVM), the existing management tools (e.g. virsh and virt-manager) could be switched to the new technology easily. More info on libvirt here.
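The design idea behind libvirt can be sketched as a thin driver-abstraction layer. This is a hypothetical toy in Python, not libvirt's actual code; the class and method names are mine, only the URI-scheme convention (`xen://`, `qemu://`) mirrors the real thing.

```python
# Toy sketch of libvirt's design idea: one management API on top,
# pluggable hypervisor drivers underneath. Names here are hypothetical.

class XenDriver:
    def list_domains(self):
        return ["dom0", "web01"]          # pretend Xen-managed guests

class KvmDriver:
    def list_domains(self):
        return ["web01", "db01"]          # pretend KVM-managed guests

class Connection:
    """Management tools (a virsh stand-in) talk only to this interface."""
    _drivers = {"xen": XenDriver, "qemu": KvmDriver}

    def __init__(self, uri):
        scheme = uri.split(":", 1)[0]
        self.driver = self._drivers[scheme]()   # pick driver from the URI

    def list_domains(self):
        return self.driver.list_domains()

# The tool's code stays unchanged when the hypervisor underneath changes:
print(Connection("xen:///system").list_domains())
print(Connection("qemu:///system").list_domains())
```

Swapping Xen for KVM only means registering a new driver; everything built on the Connection interface keeps working, which is exactly the bet Red Hat made.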
So IMHO Red Hat was "quite" right to say Xen was not ready when SLES 10 launched, and it put its money where its mouth is by developing libvirt and virt-manager, which nearly every distro with any virtualization technology bundles today.
Simon also seems to play down the relevance of KVM in his closing notes.
Essentially, Linus would have to agree to turn Linux into a great hypervisor, and historically he has maintained a balanced path that has not yielded to special interests.
Linux is a meritocracy and only the best code wins. If the code is good enough, it goes into the kernel. KVM, written by Avi, is a clean implementation of virtualization on Linux: just enable a module, with no rewriting of critical kernel code. Why reinvent the wheel when some of the best and greatest minds in OS development are right there in the mainline kernel?
Does the KVM hypervisor need a scheduler? That's already in Linux; KVM can use it. NUMA support? Yes, KVM gets a bit of that too, etc., etc. The slick implementation of turning your Linux into a hypervisor just by loading a module seems great to me anyway, and more so to Linus, who approved it.
Do note that Xen has yet to be included in the mainline kernel as of today.
The industry has three excellent type 1 hypervisors: Xen, Hyper-V and ESX. Does it really need a fourth, and a fourth that is incompatible with the others?
Red Hat and the Linux community generally have failed to acknowledge the fundamental need in virtualization for a stable interface between the hypervisor and guests, with both backward and forward compatibility, for all time.
… and Citrix and Xen fail to acknowledge that we need a single standardized code base for everyone: the mainline Linux kernel. Why reinvent the wheel again?
Moreover the infrastructural “basics” of resource pooling, multi-tenancy, virtual switching, and virtual storage are quite simply missing from its offering.
While I do agree certain features are not yet available in the RHEV/KVM offering, Xen has its own deficiencies. The Kernel Samepage Merging (KSM) feature, which enables memory overcommitment, is still not available in the Xen hypervisor. With KSM you can run guests with more RAM than you physically have, as KVM looks for identical pages in memory and merges them. The only other virtualization technology that has this is VMware, last time I checked. And NO, memory ballooning is not the same as KSM. I bet most customers want KSM more than the features you listed above. With KSM you can run more VMs on a smaller number of physical nodes with KVM than with Xen.
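Conceptually, KSM scans memory pages and collapses identical ones into a single shared copy. Here is a toy sketch of that dedup idea in Python; the real kernel feature works on 4 KiB pages with tree-based comparisons and copy-on-write, so this only illustrates the principle.

```python
# Toy illustration of page deduplication, the idea behind KSM.
# Pages are modelled as strings; real KSM dedups 4 KiB memory pages.

def dedup(pages):
    """Map identical pages to one shared copy; return (shared_store, saved)."""
    shared = {}
    saved = 0
    for page in pages:
        if page in shared:
            saved += 1           # identical page: reuse the shared copy
        else:
            shared[page] = page  # first occurrence becomes the shared copy
    return shared, saved

# Two similar guests share many identical zero and kernel-code pages:
guest_a = ["zeros", "kernel", "appA"]
guest_b = ["zeros", "kernel", "appB"]
store, saved = dedup(guest_a + guest_b)
print(f"unique pages: {len(store)}, pages saved: {saved}")  # 4 unique, 2 saved
```

Those saved pages are exactly the headroom that lets you pack more VMs per physical node when the guests are similar.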
In other words, a large Linux distro, such as RHEL 6 with KVM, will have a substantially larger code base and attack surface than any type 1 hypervisor. Moreover it is likely to have to deal with every CERT update due to vulnerabilities in Linux. By contrast, a small embedded Xen hypervisor (together with a tiny embedded Linux kernel to run the management stack and drivers) can easily be delivered within a 16MB footprint.
Finally, I have previously argued that the relevance of the OS-centric approach to next-gen infrastructure is questionable. The next-gen OS will not be a thing that runs a server – but a thing that runs apps across virtualized infrastructure.
Yeah, but saying "the OS-centric approach is questionable" and "Hyper-V is an excellent hypervisor" in the same blog post is more questionable than your argument. Isn't Hyper-V based on the Windows Server code base? Can't we virtualize using Hyper-V on Windows Server 2008 R2?
Its business model, which requires that RHEL is a binary that is available only to paying customers, leaves it vulnerable to truly free, enterprise grade products.
Did you know you can run RHEV without paying for any RHEL? Just pick how many sockets you require and run RHEV-M and RHEV-H to virtualize your Windows guests, all without paying a single dollar for RHEL. Maybe this page can help you understand it better. You can email me via our company website if you need more info.
The end note is this: both Xen and KVM are wonderful achievements by the community at large. Having both is a testament to the open source model itself, and they keep each other honest and competitive. Bashing KVM mindlessly, with FUD all over, doesn't make Citrix's Xen offering any better, and vice versa for Red Hat. And KVM does go from strength to strength, now powering the Big Blue cloud and also The Planet. Not bad for a piece of technology that only came around circa 2007.
Naturally, Linux communities, distributions and vendors will favour the KVM approach due to its simplicity and tight integration with the kernel they know and love. Red Hat, Fedora, SLES 11 SP1, openSUSE, Ubuntu, Debian, Mandriva, etc. have all integrated their offerings around KVM. But it would be foolish to write off Xen, not while Oracle and Citrix are still behind it. Though I still think Citrix (and Simon) have lost quite a bit of sleep over KVM, RHEV and Red Hat lately.
Disclaimer: Warix Technologies is one of the partners for Red Hat Enterprise Virtualization in Malaysia, and I am a Fedora Ambassador.