autonomic computing in the works

It seems like Google lets something out of the bag once in a while.. take Google Omega..

and their recent announcement about their use of Linux containers:

lmctfy (on github)

To me, this sounds like a developer / app deployer being able to specify the characteristics of a workload when they deploy it (represented by SLAs: priority, latency expectations, etc.), and the management platform using metadata about resource pools and their available capacity to fulfill those SLAs, choosing the right pools and deploying the workloads there..
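The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration of SLA-driven placement, not anything Google or lmctfy actually exposes: the `Workload`, `Pool`, and `choose_pool` names and the SLA fields are all made up for the sketch.

```python
# Hypothetical sketch of SLA-driven pool selection.
# None of these names come from lmctfy or Omega.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Workload:
    name: str
    priority: int          # higher = more important
    max_latency_ms: int    # latency expectation from the SLA
    cpus: float
    mem_gb: float

@dataclass
class Pool:
    name: str
    free_cpus: float
    free_mem_gb: float
    typical_latency_ms: int

def choose_pool(w: Workload, pools: list[Pool]) -> Optional[Pool]:
    # Keep only pools that can satisfy both capacity and latency,
    # then prefer the one with the most CPU headroom.
    candidates = [p for p in pools
                  if p.free_cpus >= w.cpus
                  and p.free_mem_gb >= w.mem_gb
                  and p.typical_latency_ms <= w.max_latency_ms]
    return max(candidates, key=lambda p: p.free_cpus, default=None)

pools = [Pool("batch", free_cpus=64, free_mem_gb=256, typical_latency_ms=50),
         Pool("serving", free_cpus=8, free_mem_gb=32, typical_latency_ms=5)]
web = Workload("web-frontend", priority=9, max_latency_ms=10, cpus=2.0, mem_gb=4.0)
best = choose_pool(web, pools)
```

Here the latency-sensitive frontend gets filtered away from the batch pool even though batch has more spare capacity, which is the whole point of carrying the SLA as metadata rather than just bin-packing on CPU and memory.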

So, in effect, it’s not just their workload scheduler; they also require the right metadata to be populated along with their workloads..

It just so happens that the unit of deployment they may be using is containers built on cgroups and kernel namespaces, and they add extra metadata to the container definitions that users can manipulate..

One can start doing this with Docker today, with custom metadata.. the harder part is the scheduler, which would have to be something custom (maybe piggybacking on OpenStack work around the Nova scheduler, Neutron, etc., or an existing PaaS ecosystem like OpenShift)

This is probably a truer version of workload management moving towards the idea of autonomic computing than just moving VMs around.. Granted, you could do the same with VMs by adding metadata, but you’d also have to deal with resource management at two levels, at the hypervisor and inside the VM, which is usually not a good idea.