July 12, 2014
This post is mostly kudos to Google for “finally” reaching out to the community with their release of kubernetes as a fully “co-operative” open-source code base.
As I mentioned in my earlier post, Google lets something out of the bag once in a while. This one, at least, is more than the paper that started the Hadoop map-reduce trend; here, Google seems to be actively involved in the community.
I like containers (linux containers, specifically namespaces + cgroups + selinux) not just because they give me an easy abstraction to package and ship my code (à la docker), but because they give me the power of the whole linux kernel and GNU/Unix ecosystem to help me manage my code and other people's code, and that includes doing inter-process communication right.
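To make the "namespaces + cgroups" abstraction concrete, here's a minimal sketch of the cgroup half: the file writes that would cap a container's memory and CPU. The group name is made up, the paths use today's cgroup-v2 file names, and actually applying them needs root and a mounted cgroup filesystem, so the sketch defaults to a dry run.

```python
# Toy sketch: the cgroup-v2 writes that cap a container's resources.
# Group name "demo" and the dry-run default are illustrative only.

def cgroup_limits(group, mem_bytes, cpu_quota_us, cpu_period_us=100_000):
    """Return (path, value) pairs to write for a memory and CPU cap."""
    base = f"/sys/fs/cgroup/{group}"
    return [
        (f"{base}/memory.max", str(mem_bytes)),                  # hard memory cap
        (f"{base}/cpu.max", f"{cpu_quota_us} {cpu_period_us}"),  # CPU bandwidth
    ]

def apply_limits(limits, dry_run=True):
    for path, value in limits:
        if dry_run:
            print(f"echo {value} > {path}")   # show what would be written
        else:
            with open(path, "w") as f:        # needs root + cgroup mount
                f.write(value)

limits = cgroup_limits("demo", mem_bytes=256 * 1024 * 1024, cpu_quota_us=50_000)
apply_limits(limits)   # dry run: just prints the writes
```

The namespace half (pid, net, mount isolation) is what docker layers on top; the point is that both halves are plain kernel facilities you can poke at directly.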
In this case, since nowadays it's all about cross-device processes, it's about doing inter-service communication right. That means getting the naming of services right, getting the network routes to services right, getting discovery of services right, and being able to tune the different knobs that linux and its ecosystem provide, so that I can deliver container-based services properly, with the right SLAs.
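A toy in-memory registry makes the naming/discovery point concrete. A real deployment would use DNS, zookeeper or etcd; the service names and SLA fields here are invented for illustration.

```python
# Toy service registry: name -> endpoints, with an optional SLA filter.
# All names and the "latency_ms" SLA field are hypothetical.

class ServiceRegistry:
    def __init__(self):
        self._services = {}   # name -> list of (host, port, sla dict)

    def register(self, name, host, port, sla=None):
        self._services.setdefault(name, []).append((host, port, sla or {}))

    def discover(self, name, max_latency_ms=None):
        """Return endpoints for `name`, optionally filtered by latency SLA."""
        endpoints = self._services.get(name, [])
        if max_latency_ms is not None:
            endpoints = [e for e in endpoints
                         if e[2].get("latency_ms", float("inf")) <= max_latency_ms]
        return [(host, port) for host, port, _ in endpoints]

reg = ServiceRegistry()
reg.register("billing", "10.0.0.5", 8080, {"latency_ms": 20})
reg.register("billing", "10.0.1.9", 8080, {"latency_ms": 90})
print(reg.discover("billing", max_latency_ms=50))   # -> [('10.0.0.5', 8080)]
```

Naming, discovery and SLA filtering collapse into one lookup here; in practice each of those is its own hard problem, which is exactly the point of the paragraph above.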
I had my mind set on mesos as the way to orchestrate linux processes throughout the datacenter. With kubernetes in the mix, it seems to me that kubernetes along with cAdvisor gives me the right tools to either create or choose the frameworks to do real workload management properly.
We shall see. It's an exciting world for people who develop and ship datacenter/cloud applications. Viva user-driven infrastructure!
That OVS blurb in the title was a teaser: none of kubernetes, mesos, docker etc. can give you a powerful network link abstraction. The most open way to get one is OVS. Stay tuned.

Similarly, my requirements are not just to run application-type services using this container paradigm; I actually want to (and have to) build other services (what were typically referred to as backend services: databases, analytics etc.) using the same framework. Google, redhat, ibm, microsoft etc. probably want you to use their kubernetes-optimized cloud for apps while they provide the additional services: streaming data analysis, transactional data services, identity services and so on. What interests me here is turtles all the way down: I want to be able to build those “additional” value-add services using the same paradigm, so that it's actually a competitive marketplace.

I'm not counting Amazon AWS out on this; they're probably racing to counter this recent movement. I don't consider AWS to be truly cutting edge in any technology; they just make it accessible, cheap and user friendly, in any market. I'm not necessarily sure that's gonna cut it. A script-kiddie in a chai cafe in Rio or Mumbai's slums could be writing the next piece of cutting-edge software, and I don't believe Amazon AWS (or any of the aforementioned companies) can compete with that. But that's where we're heading, aren't we?
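As a taste of that OVS link abstraction, here's a sketch of the `ovs-vsctl` calls that would create a bridge and plug a container-side veth port into it, optionally tagging it onto a VLAN. The bridge and port names are made up, and the default is a dry run since running the commands needs OVS installed and root.

```python
# Sketch: build (and optionally run) the ovs-vsctl commands that give a
# container a port on an OVS bridge. Names "br-demo"/"veth-c1" are invented.

import subprocess

def ovs_commands(bridge, port, vlan=None):
    """Return the ovs-vsctl invocations to create a bridge and attach a port."""
    cmds = [
        ["ovs-vsctl", "add-br", bridge],           # create the bridge
        ["ovs-vsctl", "add-port", bridge, port],   # attach the veth end
    ]
    if vlan is not None:
        cmds.append(["ovs-vsctl", "set", "port", port, f"tag={vlan}"])
    return cmds

def plug(bridge, port, vlan=None, run=False):
    for cmd in ovs_commands(bridge, port, vlan):
        if run:
            subprocess.check_call(cmd)   # needs OVS + root
        else:
            print(" ".join(cmd))

plug("br-demo", "veth-c1", vlan=100)     # dry run: prints the commands
```

Once the port exists, flow rules, tunnels (GRE/VXLAN) and QoS all hang off the same bridge, which is why OVS is the more powerful link abstraction than what the container stacks ship today.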
March 24, 2014
It seems like Google lets something out of the bag once in a while.. take Google Omega..
and their recent announcement about their use of linux containers –
To me, this sounds like a developer or app deployer specifying the characteristics of a workload when they deploy it (represented by SLAs: priority, latency expectations etc.), and the management platform using metadata about resource pools and their available capacities to fulfill those SLAs, choosing the right pools and deploying the workloads there.
So, in effect, it's not just their workload scheduler; they also require the right metadata to be populated along with their workloads.
It just so happens that the unit of deployment they may be using is containers built on cgroups and kernel namespaces, with additional metadata added to the container definitions that users can manipulate.
One can start doing this with docker today, using custom metadata. The harder part is the scheduler, which would have to be something custom (maybe piggybacking on openstack work around the nova scheduler, neutron etc., or on an existing PaaS ecosystem like openshift).
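The scheduling idea above can be sketched in a few lines: workloads carry SLA metadata (the kind you could put in docker labels today), pools advertise their remaining capacity and a latency class, and the scheduler picks the first pool that satisfies the SLA. Every field name here is invented for illustration; a real scheduler would also score and rank pools rather than take the first fit.

```python
# Toy SLA-aware scheduler: first-fit over pools described by metadata.
# Field names (free_cpus, latency_class, ...) are hypothetical.

def schedule(workload, pools):
    """Return the name of the first pool meeting the workload's SLA, or None."""
    sla = workload["sla"]
    for pool in pools:
        fits_cpu = pool["free_cpus"] >= workload["cpus"]
        fits_mem = pool["free_mem_gb"] >= workload["mem_gb"]
        fits_lat = pool["latency_class"] <= sla["latency_class"]  # lower = faster
        if fits_cpu and fits_mem and fits_lat:
            return pool["name"]
    return None   # no pool can honor the SLA

pools = [
    {"name": "bulk", "free_cpus": 64, "free_mem_gb": 512, "latency_class": 3},
    {"name": "fast", "free_cpus": 8,  "free_mem_gb": 32,  "latency_class": 1},
]
web = {"cpus": 4, "mem_gb": 16, "sla": {"latency_class": 1}}
print(schedule(web, pools))   # -> fast
```

The hard part the post alludes to is keeping the pool metadata (capacities, latency classes) accurate and fresh; the matching itself is the easy bit.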
This is probably a truer version of workload management, moving towards the idea of autonomic computing, than just moving VMs around. Granted, you could do the same with VMs and add metadata, but you'd also have to deal with resource management at two levels, the hypervisor and the VM, which is usually not a good idea.
May 20, 2013
If you have been following the cloud trend, with virtualization, programmable stacks, sdn, message-based orchestration etc., this bit of news http://www.lightreading.com/software-defined-networking/rumor-google-to-add-sdn-smarts-to-android/240155256 may not seem like anything new, just another trend.
However, if you dig deeper, you'll notice that adding networking/sdn smarts to android actually has much deeper implications. It paves the way for running android on datacenter servers. With SDN and orchestrated spin-up and spin-down of android VMs, you could run thousands of these in a datacenter within split seconds or minutes. And if you're an android phone/tablet user, you know that this operating system is quite capable of running any kind of application.
One minute detail that may be easily overlooked is that most of the android operating system's code runs in the Dalvik virtual machine, with only a few bits and pieces running on a custom linux kernel. Now, if most of the SDN pieces discussed in the article are actually added to the VM portion of android, then what you end up with is a fully programmable container for code and data. That container could be serialized, shipped over the network to any platform that can run the dalvik VM, and run again…
This gives a whole new meaning to the “cloud” paradigm. It's a cloud that's going to be raining runnable bits of code and data everywhere. The possibilities, if this is where we're heading, are actually endless..
Comments welcome as always.
There’s a lot of buzz lately in the industry around cloud stacks and how to build and manage them. Openstack, cloudstack, tosca, EC2, google compute, openshift, cloudfoundry, heroku, cloudbees etc..
Notice I clubbed IaaS and PaaS stacks together.
To me, what stands out from the rest is openstack.
Openstack is now proving to be a workable and replicable solution for standing up a managed IaaS cloud stack in real-world situations. You could argue that cloudstack is the same, and that amazon and google compute have already proven it.
The reason openstack stands out is because it reminds me of the linux phenomenon 20+ years ago. The community believes in it. And the community is willing to put forth blood and toil to expand the reach of openstack.
As a result, openstack today is not just an IaaS management framework anymore. It's an ecosystem of “service” enablers all tied by a thread of common interest, commoditizing information systems, via common apis, a common component architecture model, opensource roots, modularity etc.
What comes out of that is an ecosystem of pluggable components bound by interfaces and a scalable architectural principle for the components.
That’s a breeding ground for explosive growth and exponential uptake of any technology.
And just because it’s opensource does not mean there’s no money to be made. Examples abound. VCs salivate!
It's the ecosystem that openstack enables. You can't say that for cloudstack, or google compute. You could argue about EC2, but that's just because they were first on the ground and cheap (not sure for how long).
And the ecosystem is starting to go up the stack; you're already seeing ties to cloudfoundry and openshift. Heroku probably has to watch out.
It’s easy to counter a company or a group of people. It is very hard to counter a movement .. and that’s what openstack is turning out to be.
I know this is a loaded post, and mostly opinions.. I’ll take the flak and take comments.