There are entire classes of products being invented, revamped, or repositioned to join the connected revolution.

In general terms, one could perhaps classify the different products and offerings into these broad categories:

a) connected devices

b) Core Software/OS on connected devices

c) Library / Application Software on connected devices

d) Software for command, control, configuration of connected devices

e) Data collection system for connected devices

f) Analytics of data from connected devices

g) Infrastructure for centralized/cloud based deployment of software

Of these categories, all but a) and g) concern software.

Out of all that software, I would contend that d) is the linchpin that ties the whole IoT world together. That's where peripheral but necessary components such as event systems, machine identities, security and encryption, access management, and communication protocols all come together. In a sense, it's the core of the IoT platform.
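To make that concrete, here is a purely hypothetical sketch of a command/control message for a connected device. The envelope shape, field names, and HMAC signing scheme are my own assumptions for illustration, not any standard, but they show how identity, configuration payload, and integrity checking meet in one place:

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-device secret, provisioned at enrollment time.
DEVICE_SECRET = b"per-device-provisioned-secret"

def make_command(device_id: str, action: str, params: dict) -> dict:
    """Build a signed command envelope: machine identity + payload + integrity."""
    body = {
        "device_id": device_id,        # machine identity
        "action": action,              # command to execute
        "params": params,              # configuration payload
        "issued_at": int(time.time()),
    }
    canonical = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(DEVICE_SECRET, canonical, hashlib.sha256).hexdigest()
    return body

def verify_command(cmd: dict) -> bool:
    """A device recomputes the signature before acting on a command."""
    body = {k: v for k, v in cmd.items() if k != "signature"}
    canonical = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(DEVICE_SECRET, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cmd["signature"])

cmd = make_command("thermostat-42", "set_target", {"celsius": 21.5})
```

A real platform would layer proper key management, transport encryption, and access policies on top; the point is only that all of those concerns converge in this command/control layer.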

Any company or organization that makes headway in creating this core platform, one that offers ease of use, a low cost of entry, ease of programming, and extensibility, will be the winner. No platform today exhibits all of these qualities.

This post is mostly kudos to Google for "finally" reaching out to the community with their release of Kubernetes as a fully cooperative open-source code base.

As I mentioned in my earlier post, Google lets something out of the bag once in a while. This one, at least, is more than the paper that started the Hadoop MapReduce trend; here, Google seems to be actively involved in the community.

I like containers (Linux containers, specifically namespaces + cgroups + SELinux) not just because they provide an easy abstraction for packaging and shipping my code (à la Docker), but because they give me the power of the whole Linux kernel and GNU/Unix ecosystem to help me manage my own and other people's code, and that includes doing inter-process communication right.

In this case, since nowadays it's all about cross-device processes, it's about doing inter-service communication right. That means getting the naming of services right, getting the network routes to services right, and getting the discovery of services right, as well as being able to tune the different levels of what Linux and its ecosystem provide, so that I can deliver container-based services properly, with the right SLAs.
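As a toy sketch of the naming-and-discovery piece (the registry API and service names here are invented; real deployments use DNS, etcd, ZooKeeper, or similar), the core idea is just a mapping from a service name to live endpoints, with some policy for spreading load:

```python
class ServiceRegistry:
    """A toy in-memory service registry: name -> list of (host, port) endpoints."""

    def __init__(self):
        self._services = {}

    def register(self, name, host, port):
        """A service instance announces itself under a well-known name."""
        self._services.setdefault(name, []).append((host, port))

    def discover(self, name):
        """Return one endpoint for the named service, round-robin style."""
        endpoints = self._services.get(name, [])
        if not endpoints:
            raise LookupError(f"no endpoints for service {name!r}")
        # Rotate the list so repeated lookups spread load across instances.
        endpoints.append(endpoints.pop(0))
        return endpoints[-1]

registry = ServiceRegistry()
registry.register("billing", "10.0.0.5", 8080)
registry.register("billing", "10.0.0.6", 8080)
```

The hard parts that this sketch leaves out, health checking, consistent state across registry replicas, and network routing to the chosen endpoint, are exactly the parts the orchestration platforms are competing on.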

I had my mind set on Mesos as the way to orchestrate Linux processes throughout the datacenter. With Kubernetes in the mix, it seems that Kubernetes along with cAdvisor gives me the right tools to either create or choose the right frameworks to do real workload management properly.

We shall see. It's an exciting world for people who develop and ship datacenter/cloud applications. Viva user-driven infrastructure.


That OVS blurb in the title was a teaser: none of Kubernetes, Mesos, Docker, etc. can give you a powerful network link abstraction. The most open way is OVS. Stay tuned.

Similarly, my requirements are not just to run application-type services using this container paradigm; I actually want to, and have to, build other services (what were typically referred to as backend services: databases, analytics, etc.) using the same framework. Here, Google, Red Hat, IBM, Microsoft, etc. probably want you to use their Kubernetes-optimized cloud for apps while they provide the additional services like streaming data analysis, transactional data services, identity services, and so on. What interests me here is turtles all the way down: I want to be able to build those "additional" value-add services using the same paradigm, so that it's actually a competitive marketplace.

I'm not counting Amazon AWS out on this; they're probably racing to counter this recent movement. I don't consider AWS to be truly cutting edge in any technology; they just make it accessible, cheap, and user friendly, in any market. I'm not sure that's going to cut it. A script kiddie in a chai cafe in Rio or in Mumbai's slums could be writing the next piece of cutting-edge software, and I don't believe Amazon AWS (or any of the aforementioned companies) can compete with that. But that's where we're heading, aren't we?

It seems Google lets something out of the bag once in a while. Take Google Omega,

and their recent announcement about their use of Linux containers:

lmctfy (on GitHub)

To me, this sounds like a developer or app deployer specifying the characteristics of a workload when they deploy it (represented by SLAs: priority, latency expectations, etc.), with the management platform using metadata about resource pools and their available capacities to fulfill those SLAs, choosing the right pools, and then deploying the workloads there.

So, in effect, it's not just their workload scheduler; they also require the right metadata to be populated along with their workloads.

It just so happens that the unit of deployment they may be using is containers built on cgroups and kernel namespaces, and they add additional metadata to the container definitions that users can manipulate.

One can start doing this with Docker today, with custom metadata. The harder part is the scheduler, which would have to be something custom (maybe piggybacking on OpenStack work around the Nova scheduler, Neutron, etc., or on an existing PaaS ecosystem like OpenShift).
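A minimal sketch of that metadata-driven placement follows. The pool layout, SLA fields, and scoring heuristic are all invented for illustration; a real scheduler would also handle preemption, affinity, and live capacity feeds from something like cAdvisor:

```python
def schedule(workload, pools):
    """Pick a pool whose capacity and latency satisfy a workload's SLA metadata.

    workload: {"cpus": ..., "mem_gb": ..., "max_latency_ms": ...}
    pools:    [{"name": ..., "free_cpus": ..., "free_mem_gb": ..., "latency_ms": ...}, ...]
    Returns the chosen pool's name, or None if no pool qualifies.
    """
    candidates = [
        p for p in pools
        if p["free_cpus"] >= workload["cpus"]
        and p["free_mem_gb"] >= workload["mem_gb"]
        and p["latency_ms"] <= workload["max_latency_ms"]
    ]
    if not candidates:
        return None
    # Prefer the pool with the most spare CPU (a crude bin-packing heuristic).
    best = max(candidates, key=lambda p: p["free_cpus"])
    best["free_cpus"] -= workload["cpus"]
    best["free_mem_gb"] -= workload["mem_gb"]
    return best["name"]

pools = [
    {"name": "edge", "free_cpus": 4,  "free_mem_gb": 8,   "latency_ms": 5},
    {"name": "bulk", "free_cpus": 64, "free_mem_gb": 256, "latency_ms": 40},
]
```

With Docker, the workload dictionary here is exactly the kind of thing you could carry as custom labels on the container; the scheduler is the custom piece.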

This is probably a truer version of workload management, moving towards the idea of autonomic computing, than just moving VMs around. Granted, you could do the same with VMs and added metadata, but you'd also have to deal with resource management at two levels, the hypervisor and the VM, which is usually not a good idea.

Forget the analytics that Google and Facebook give you around your social connections.

Just based on who I talk to via email, this is what I get. It's so much more powerful than Google+, Facebook, or Twitter.

And it's all based on an old internet medium: email.


It's available for your use at

If you have been following the cloud trend (virtualization, programmable stacks, SDN, message-based orchestration, etc.), this bit of news may not seem like anything new, just another trend.

However, if you dig deeper into this, you'll notice that adding networking/SDN smarts to Android actually has much deeper connotations. It first paves the way for running Android on datacenter servers. With SDN and orchestrated spin-up and spin-down of Android VMs in the datacenter, you could run thousands of these in split seconds or minutes. And if you're an Android phone or tablet user, you know that this operating system is quite capable of running any kind of application.

One minute detail that may be easily overlooked is that most of the Android operating system consists of code running in the Dalvik virtual machine, with a few bits and pieces running on a custom Linux kernel. Now, if most of the SDN pieces discussed in the article are actually added to the VM portion of Android, then what you end up with is a fully programmable container for code and data. That container could be serialized, shipped over the network to any platform that can run the Dalvik VM, and run again.

This brings a whole new connotation to the "cloud" paradigm: a cloud that's going to be raining runnable bits of code and data everywhere. The possibilities, if this is where it's heading, are actually endless.

Comments welcome as always.


