From Simon’s blog post at http://community.citrix.com/display/ocb/2010/03/29/Open+Source+does+not+mean+Interoperable+or+Compatible
Derrick Harris at GigaOm has written an interesting, but unfortunately confusing, piece which illustrates the frequent confusion between openness, interoperability and compatibility. His thesis is that the open source nature of Red Hat Enterprise Linux (RHEL), and of cloud management software such as Eucalyptus, is a powerful change for the better, because the openness is essentially standardization of a kind, and with standardization come interoperability, compatibility, portability and therefore lower costs.
Unfortunately, he is wrong. These are excellent technologies, but their open source nature does not in itself deliver compatibility, interoperability or portability.
Open Source is without doubt the most productive way for a community of individuals and organizations to collaborate on a common code base and feature set. The benefits of the approach to all participants are huge, and the innovative forces that can be mustered to work on common technology components are, in my view, more powerful than any that can be found within a single organization. The Xen community, for example, has outpaced the rate of development, feature for feature, of any proprietary hypervisor platform. The rate of innovation in the KVM community is similarly superb, and the Linux community continues to lead in OS development. Open Source is good because it fosters collaboration and innovation, because it commoditizes those components of the IT stack that the participants in a community agree should be commoditized, and because it moves the industry forward at an accelerated pace.
But Open Source software does not necessarily deliver on the general desire of the user and customer base for interoperability, compatibility or portability between different vendor offerings. While there are obvious ways in which, say, RHEL is interoperable with Novell SUSE Linux Enterprise Server (SLES) or Windows, such as the ability to communicate using TCP/IP, in general it is not possible to move an application between any of those OSes and expect it to work, or expect the OS vendor to support it. Compatibility at the application layer (the Application Binary Interface, or ABI) is not supported, and so portability of apps between different Linux distros is not supported. (Note that I'm not saying you can't get it to work: you can, but it might require losing the support of your OS vendor, recompiling the app if you have source, or using an experimental compatibility layer.) Each distro commits to ABI stability for its own product, for a period of time (for example, seven years in the case of Red Hat), but it is in general not possible for a distro vendor to commit to interoperability with another vendor's product. Nor indeed is it in their commercial interest to do so.
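To make the ABI point concrete, here is a minimal sketch in Python of one visible facet of the problem: checking which versioned glibc symbols a binary requires against the glibc a target distro actually ships. The binary path ./myapp is a hypothetical placeholder, the tools assumed (objdump, ldd) are standard on most distros, and a real portability assessment involves far more than glibc symbol versions.

```python
# Sketch: list the versioned glibc symbols an ELF binary requires and
# compare them with the glibc the target system actually provides.
# "./myapp" is a hypothetical binary used for illustration only.
import re
import subprocess

def required_glibc_versions(binary):
    """Return the sorted set of GLIBC_x.y symbol versions the binary needs."""
    out = subprocess.check_output(["objdump", "-T", binary], text=True)
    return sorted(set(re.findall(r"GLIBC_[0-9.]+", out)))

def installed_glibc_version():
    """Report the version of the glibc installed on this system."""
    out = subprocess.check_output(["ldd", "--version"], text=True)
    return out.splitlines()[0].rsplit(" ", 1)[-1]  # e.g. "2.12"

if __name__ == "__main__":
    print("binary requires:", required_glibc_versions("./myapp"))
    print("system provides glibc:", installed_glibc_version())
```

A version mismatch here is only the most visible failure mode; the set of installed libraries, paths and configuration also differ between distros, which is why vendors scope their ABI commitments to their own shipped products.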
Historically, the Linux community has had justifiable objections to requirements for compatibility and portability, because they would force the community to work to ABIs rather than with the source. For example, the community is opposed to supporting an ABI for device drivers, as you can see from this nugget from an interview with Linus Torvalds:
“Well, the lack of an ABI is two-fold: one is we really, really, really don’t want one. Every single time people ask for a stable ABI, the main reason for wanting a stable ABI is they want to have their binary drivers and they don’t want to give out source and they don’t – certainly don’t want to merge that source into the stable kernel or the standard kernel.
And that, in turn, means that all the people who actually do all the kernel work and maintain the kernel are basically unable to work with that piece of hardware and that vendor because if there’s any bugs whatsoever, we can’t fix them. So, all the commercial vendors—even the ones who used to accept binary drivers—have moved or are moving away from wanting to have anything at all to do with binary drivers because they’re completely unmaintainable.
So, there’s one of the reasons. Some people see it as political. There is probably a political aspect to it, but a lot of it is very pragmatic; we just can’t maintain it.”
In the case of Virtual Machine (VM) compatibility and portability, the Linux community has invented an ingenious plug-in API that allows different hypervisors to plug into Linux and offer a consistent set of services to Linux as a guest. This is generally referred to as paravirt_ops, and it arose out of the need to support optimized Linux virtualization on Xen, VMware ESX and Microsoft Hyper-V. It allows a Linux guest binary to move between hypervisors without recompilation, which is a powerful concept. But the effort has been complicated by the additional need to support interoperability in the file format used to store VMs, which still differs between the major vendors: Oracle VM uses raw images, while XenServer uses Microsoft VHD files or a "LUN per Virtual Disk Image" format, and supports VMware's VMDK format via a plugin.
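As a concrete illustration of the disk-format gap, here is a minimal sketch using the open source qemu-img tool, which can repackage an image between the formats just mentioned (qemu-img calls the Microsoft VHD format "vpc"). The file names are hypothetical placeholders, and note the limitation: converting the container format does nothing to reconcile the guest's drivers with a different hypervisor's virtual hardware.

```python
# Sketch: shell out to qemu-img to convert a virtual disk image between
# container formats. File names are hypothetical placeholders.
import subprocess

def convert_disk(src, src_fmt, dst, dst_fmt):
    """Rewrite a virtual disk from one container format to another."""
    subprocess.check_call(
        ["qemu-img", "convert", "-f", src_fmt, "-O", dst_fmt, src, dst])

# A raw image (as Oracle VM uses) repackaged as a VHD (for XenServer)
# or as a VMDK (for VMware ESX):
convert_disk("guest.img", "raw", "guest.vhd", "vpc")
convert_disk("guest.img", "raw", "guest.vmdk", "vmdk")
```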
But paravirt_ops does nothing to help interoperability or portability of non-Linux guests between hypervisors: a Windows VM on KVM can't move directly to Hyper-V without brain surgery in the form of a V2V (virtual-to-virtual) conversion. Indeed, the open source community has not committed to a standard virtual hardware layer for VMs. Why? Well, the clue comes from the specific word most commonly associated with interoperability and portability: standard.
- First, the community reserves the right to innovate ahead of any standard, breaking it if need be. Let’s call this “Features Lead”.
- Second, the availability of source code is a loophole in the notion of a "standard": a vendor could subtly modify the code to ensure that customers would have difficulty moving off its product to a competitor's. The open source vendors indeed rely on this potential for incompatibility to build their businesses. Red Hat needs customers to stick with RHEL, so if you quite literally take the (GPL-mandated) source code of the RHEL distro, recompile it and ask Red Hat to support it, they will refuse, because they support only their shipped products. Not unreasonable at all, but also far from interoperable, compatible and portable.
So what of interoperability, compatibility and portability in the context of open source and cloud computing?
- First, it is crucial to view something like Amazon Web Services as a product (a distribution that Amazon itself runs) built from open source, but with one key distinction compared with, say, RHEL. Whereas Red Hat is obliged by the GPL to provide the source to its products, Amazon is under no obligation to offer its changes back to the community. It may do so, for reasons of expediency, but it does not have to. AWS is proprietary software, built from open source but extended as needed by Amazon.
- Could you move a workload from an enterprise implementation of Eucalyptus managing Red Hat's KVM to EC2? No, not directly. Red Hat does not guarantee interoperability of VMs between RHEL or RHEV KVM and any other hypervisor, so the AWS Xen implementation would not be supported. Similarly, those VMs would not port directly to ESX or Hyper-V.
- So, how about the management layer? Could a Eucalyptus-based enterprise cloud interoperate with AWS? Maybe. Though Eucalyptus aims to continually update its APIs to track those of Amazon Web Services, there is no guarantee in practice that this can be achieved, and in general Eucalyptus cannot hope to offer every API service that AWS does, simply because it is not AWS. The Amazon folk I've spoken to are not opposed to standardization of APIs at all, but they don't believe they know enough to develop a standard yet: they need to continually refine the APIs to ensure that they can offer a rich set of cloud features via their API (and let's be clear, it is their API and not a standard) to their customers. Similarly, any other cloud that uses Eucalyptus as its front-end interface can merely hope to be compatible with AWS at any given time, as the sketch below illustrates.
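To see what "compatible with AWS" means in practice, consider a minimal sketch using the boto EC2 client library of that era: the same client code is simply pointed at a different endpoint. The endpoints, credentials, port and path below are hypothetical placeholders; compatibility holds only for those calls the private cloud has faithfully implemented.

```python
# Sketch: the same EC2 client code pointed first at AWS, then at a
# private Eucalyptus cloud. All endpoints and credentials are
# hypothetical placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect(endpoint, key, secret, path="/", port=443, secure=True):
    """Open an EC2-API connection to an arbitrary endpoint."""
    region = RegionInfo(name="custom", endpoint=endpoint)
    return boto.connect_ec2(
        aws_access_key_id=key,
        aws_secret_access_key=secret,
        region=region,
        port=port,
        path=path,
        is_secure=secure,
    )

# The same client code against AWS itself...
aws = connect("ec2.us-east-1.amazonaws.com", "AKIA...", "secret")

# ...and against a private Eucalyptus cloud. Any call Eucalyptus has
# not (yet) implemented, or implements differently, fails here, not
# on AWS.
euca = connect("cloud.example.com", "key", "secret",
               path="/services/Eucalyptus", port=8773, secure=False)
print(aws.get_all_zones())
```

The design point is that "compatibility" here is one-directional and best-effort: the private cloud chases a moving target that Amazon alone defines.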
It is worth pointing out that Eucalyptus itself takes no position on compatibility at the VM layer, and while Eucalyptus could conceivably be used to manage a set of ESX servers, those VMs would not be directly portable to a Eucalyptus-managed Red Hat stack in the enterprise.
The bottom line is that open source and the community deliver innovation, but not standardization. The notion of standardization (and hence compatibility, portability and interoperability at some particular interface) requires an additional party: a dominant vendor (one could argue Red Hat is one in Linux, or AWS in cloud) or a powerful brand (the Xen project mandates that use of the Xen brand signifies that the user will faithfully implement and support interoperability at the Xen virtual hardware layer and the Xen management API).
At the end of the day, customers should not confuse "open source" with notions that require either an external agency (standards) or multi-vendor relationships (compatibility, portability, interoperability). Various open source efforts are committed to these notions: the Xen project commits to interoperability between all hypervisors, not only the various Xen implementations, and the Linux ABI enables increased compatibility and portability between Linux and other operating systems. But ultimately, commitments to compatibility, portability and interoperability rest on vendor commitments, not simply on openness.