This is an on-demand blog post; none of the actors are real institutions or people, and any resemblance to real life is pure coincidence.
So there comes a day, that day when an organization realizes it really needs a solid cloud platform as an official infrastructure offering. Admit it, we all have some idle cycles we could put to better use.
Why the Rest of Us Need Virtualization Even If Facebook Doesn’t feedproxy.google.com/~r/PuppetLabs/…
— Roman Valls (@braincode) April 27, 2013
A bad cloud
Then someone who owns some compute resources frantically types a few commands into a console and voilà, a beta cloud service is born. A fictional dialog between user and cloud provider follows:
- This is great! I want to have an account on your new service, where can I get it?
- Well, you have to come to our offices, where we will scan your passport, hold a one-hour meeting and then give you an account.
- Ermm, ok, I just want to use that service…
In the meeting, there’s an introduction on how to use the service. Many people did not prepare their machines before the session, and they get stuck on the overcomplicated client installation instructions, which involve unpacking pre-compiled binaries and config files from a .tar.bz2 (ABI issues galore!). Next, you are given a password via SMS, “abc123”, which you cannot change (nor are encouraged to): you have to explicitly ask the admins to change it for you.
Dirty secret: if they are not forced to change it, nobody ever does.
After editing some cloud template text files, your first instance is up and running. Time to clone your CloudBioLinux copy and install some bioinformatics software in it… Unfortunately, it does not take long to discover that the base distribution is more than three releases old. The user emails the imaginary beta cloud support:
- Hi cloud-support! Is it possible to have the newest Ubuntu release as an image?
- No, it’s not at the moment.
- Emm, ok. I tried apt-get dist-upgrade but it just runs out of space. Can I get more space in the VM to do the upgrade myself, then? One cannot do much with 4GB of disk these days, you know.
- No, it is not possible, you can use the 1TB NFS-mounted scratch space instead.
- Why isn’t it possible? Anyway, I see no straightforward way to use that scratch space as an extension of the OS while the VM is running, and I cannot access the filesystem offline and move, say, /usr away without some hackish stuff involving squashfs, ramdisks, etc… This is actually giving me more headaches than it’s worth.
- I’m sorry, we cannot bundle another distribution for you.
So what can we learn from that experience? What can a bad cloud do to become a better cloud?
- Distributing a readily installable and tested client CLI package for the most popular platforms, instead of a precompiled .tar.gz, would have cut that one-hour meeting down to nil. Documentation should never be a substitute or shortcut for a tested, directly installable package.
- Distributing passwords, even via SMS, should adhere to basic good password policies at all times, even in beta services. Go for two-factor authentication if you fancy it.
- All services should be auto-provisioned. Asking sysadmins to perform routine operations like changing passwords should be off the table.
- Dimensioning a cloud (disk, memory, network interfaces) is not an easy task when users have wildly different needs, but at the very least it should be possible to easily increase VM disk space within reasonable limits. Other resources, such as RAM, network interfaces, DNS records and mountpoints, should be directly accessible to the user and auto-provisioned.
- A way to automatically create new cloud images from vanilla OSes should be in place before launching the beta.
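Several of the points above boil down to self-service provisioning with sane limits. As a rough sketch (every name and limit below is hypothetical, not taken from any real cloud API), a provisioning endpoint could validate a requested VM spec against per-user quotas and auto-approve it, instead of routing the request through a sysadmin:

```python
from dataclasses import dataclass

# Hypothetical per-user quota limits; a real cloud would load these from config.
LIMITS = {"disk_gb": 100, "ram_gb": 32, "extra_nics": 2}


@dataclass
class VMSpec:
    disk_gb: int
    ram_gb: int
    extra_nics: int = 0


def validate_request(spec):
    """Return a list of quota violations; an empty list means auto-approve."""
    errors = []
    if spec.disk_gb > LIMITS["disk_gb"]:
        errors.append(f"disk {spec.disk_gb}GB exceeds limit {LIMITS['disk_gb']}GB")
    if spec.ram_gb > LIMITS["ram_gb"]:
        errors.append(f"RAM {spec.ram_gb}GB exceeds limit {LIMITS['ram_gb']}GB")
    if spec.extra_nics > LIMITS["extra_nics"]:
        errors.append(f"{spec.extra_nics} extra NICs exceed limit {LIMITS['extra_nics']}")
    return errors


# Within limits: approved with no human in the loop.
print(validate_request(VMSpec(disk_gb=40, ram_gb=8)))   # []
# Over quota: rejected with an actionable message, still no meeting required.
print(validate_request(VMSpec(disk_gb=400, ram_gb=8)))
```

The point is not the ten lines of checks but the workflow: the user gets an immediate yes/no with a reason, and the admins only get involved for genuine exceptions.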
One year passes; some more console typing and a move to new hardware give the service a new face, ready for a second try.
The same issues arise: instead of automating the deployment of the whole cloud, it has simply been moved to the new hardware. There is no evidence of any automation since last year.
A better cloud
Automated testing is about software, and since clouds are software, why not automate bits and pieces of the deployment until it becomes fully automatic? It is easier said than done; it takes a great deal of patience to:
- Build and test a cloud component.
- Automate its deployment, testing it elsewhere.
- Take the whole cloud stack down, recreating it again from scratch.
- Automate basic user-side (stress)-testing: create instance, record a DNS change, attach new volumes, destroy instance, etc…
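The user-side smoke test in the last step is just code against whatever API the cloud exposes. A minimal sketch of its shape, using a fake in-memory provider to stand in for the real API (every class and method name here is invented for illustration):

```python
class FakeCloud:
    """Fake in-memory cloud provider standing in for a real API."""

    def __init__(self):
        self.instances, self.dns, self.volumes = {}, {}, {}

    def create_instance(self, name):
        self.instances[name] = "running"

    def set_dns_record(self, fqdn, ip):
        self.dns[fqdn] = ip

    def attach_volume(self, instance, volume):
        self.volumes.setdefault(instance, []).append(volume)

    def destroy_instance(self, name):
        self.instances.pop(name, None)
        self.volumes.pop(name, None)


def smoke_test(cloud):
    """Create an instance, record a DNS change, attach a volume, destroy it."""
    cloud.create_instance("test-vm")
    assert cloud.instances["test-vm"] == "running"
    cloud.set_dns_record("test-vm.example.org", "10.0.0.5")
    assert cloud.dns["test-vm.example.org"] == "10.0.0.5"
    cloud.attach_volume("test-vm", "vol-1")
    assert cloud.volumes["test-vm"] == ["vol-1"]
    cloud.destroy_instance("test-vm")
    assert "test-vm" not in cloud.instances
    return "ok"


print(smoke_test(FakeCloud()))  # ok
```

Swap the fake for the real client and run this on every deployment: if tearing the whole stack down and recreating it still passes the smoke test, the cloud is reproducible, not just running.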
Automation and testing are hard; it takes time to get them right without overfitting to your immediate environment. But look, those guys over there seem to have gotten it right:
- So you only need my public SSH key? That’s all? No meetings nor passport, fingerprints, blood samples or photos?
- That’s exactly right: just log in as root and break as much as you want in your own cloud; we can wipe your whole stack out in less than 20 minutes. Of course, we’ll be gathering metrics from the outside, just in case something bad comes out of your instance(s). We don’t want to get in your way.
- Nice! What about having the latest Ubuntu release…
- We just provisioned it as we speak (true story).
- I’m launching some Hadoop jobs right now. It took me a few minutes to provision the nodes. Thank you guys, you’re awesome!