little bits of code and data everywhere

If you have been following the cloud trend, with virtualization, programmable stacks, SDN, message-based orchestration etc., this bit of news http://www.lightreading.com/software-defined-networking/rumor-google-to-add-sdn-smarts-to-android/240155256 may not seem like anything new, just another trend.

However, if you dig deeper into this, you'll notice that adding networking/SDN smarts to Android actually has much deeper implications. It first paves the way for running Android on datacenter servers. With SDN and orchestrated spin-up and spin-down of Android VMs, you could stand up thousands of these in the datacenter in split seconds or minutes. And if you're an Android phone/tablet user, you will know that this operating system is quite capable of running any kind of application.

One minute detail that may be easily overlooked is the fact that most of the Android operating system's code runs in the Dalvik virtual machine, with a few bits and pieces running on a custom Linux kernel. Now, if most of the SDN pieces being talked about in the article are actually added to the VM portion of Android, then what you end up with is a fully programmable container for code and data. That container could be serialized, shipped via the network to any platform that can run the Dalvik VM, and run again…

This gives a whole new meaning to the "cloud" paradigm. It's a cloud that's going to be raining runnable bits of code and data everywhere. The possibilities, if this is where it's heading, are actually endless..

Comments welcome as always.


why #openstack matters or it’s the ecosystem, stupid!

There's a lot of buzz lately in the industry around cloud stacks and how to build and manage them. OpenStack, CloudStack, TOSCA, EC2, Google Compute, OpenShift, Cloud Foundry, Heroku, CloudBees, etc.

Notice I clubbed IaaS and PaaS stacks together.

To me, what stands out from the rest is OpenStack.

OpenStack is now proving to be a workable and replicable solution for standing up a managed IaaS cloud stack in real-world situations. You could argue that CloudStack is the same way, and Amazon and Google Compute have already proven it.

The reason OpenStack stands out is that it reminds me of the Linux phenomenon 20+ years ago. The community believes in it. And the community is willing to put forth blood and toil to expand the reach of OpenStack.

As a result, OpenStack today is not just an IaaS management framework anymore. It's an ecosystem of "service" enablers all tied by a thread of common interest – commoditizing information systems – via common APIs, a common component architecture model, open-source roots, modularity, etc.

What comes out of that is an ecosystem of pluggable components bound by interfaces and a scalable architectural principle for the components.

That’s a breeding ground for explosive growth and exponential uptake of any technology.

And just because it’s opensource does not mean there’s no money to be made. Examples abound. VCs salivate!

It's the ecosystem that OpenStack enables. You can't say that for CloudStack or Google Compute. You could argue about EC2.. but that's just because they were first on the ground and cheap (not sure for how long).

And the ecosystem is starting to go up the stack; you're already seeing ties to Cloud Foundry and OpenShift. Heroku probably has to watch out.

It's easy to counter a company or a group of people. It is very hard to counter a movement.. and that's what OpenStack is turning out to be.

I know this is a loaded post, and mostly opinions.. I’ll take the flak and take comments.

DevOps and XaaS

It seems there is a lot of confusion and debate around DevOps and XaaS, specifically the PaaS portion of XaaS.

What follows is an attempt to shed light on some of the definitions and thinking around that.

XaaS (DC as a Service, Infrastructure as a Service, Platform as a Service and Software as a Service) is all about delivering services to consumers (subscribers of a service). There are notions of expectations from a subscriber (requirements) and notions of guarantees that a service provider can make, which are bound together by a contract between the subscriber and the provider using Service Level Agreements (SLAs).

It helps to think of XaaS as layers, each building upon the layer underneath and providing capabilities. As such, IaaS builds upon capabilities provided by DCaaS services and provides its own capabilities as services that can be bound in SLAs. Similarly, PaaS builds upon those IaaS services and provides platform capabilities as services that can themselves be wrapped in SLAs. SaaS then builds upon PaaS services and provides applications and other higher-level capabilities as services. This layering provides a facility to bundle the capabilities into umbrellas that can be delivered and changed independently of each other. It's sort of like the OSI model in a certain sense.
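
The layering described above can be sketched as a toy model; all class names and capability names here are hypothetical, purely for illustration:

```python
# Toy model of the XaaS layering: each layer exposes its own capabilities
# plus everything inherited from the layer beneath it.

class Layer:
    def __init__(self, below=None):
        self.below = below  # the underlying layer this one builds upon

    def capabilities(self):
        """This layer's services, including everything the layers below provide."""
        inherited = self.below.capabilities() if self.below else set()
        return inherited | self.own_capabilities()

    def own_capabilities(self):
        return set()

class DCaaS(Layer):
    def own_capabilities(self):
        return {"power", "cooling", "racks"}

class IaaS(Layer):
    def own_capabilities(self):
        return {"compute", "storage", "network"}

class PaaS(Layer):
    def own_capabilities(self):
        return {"runtime", "build", "deploy"}

class SaaS(Layer):
    def own_capabilities(self):
        return {"application"}

# The usual stack: SaaS on PaaS on IaaS on DCaaS.
stack = SaaS(PaaS(IaaS(DCaaS())))
print(sorted(stack.capabilities()))
```

Note that a SaaS layer stacked directly on IaaS (skipping PaaS) would simply lack the platform capabilities in its set, which is exactly the gap a PaaS is meant to fill.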

All this, keeping in mind that SaaS capabilities do not always have to deal with the PaaS layer; they can directly consume services from the IaaS layer. If this happens often, it may help to think of it as a case of the PaaS layer not being able to fulfill all of the SaaS requirements. The PaaS layer may need to accommodate those requirements in future versions of its services.

DevOps is an evolving model that attempts to ease some of the hardships around delivering and managing these services. It comes from the disciplines of development (as in software development) and operations (infrastructure and platform operations). As such, the goal of DevOps is to minimize the hurdles that a developer (builder of application services) faces while still maintaining typical operational goals like uptime, availability, resiliency, change tracking etc.

DevOps stresses things like provisioning automation, build automation, test automation, repeatability, scriptability, version tracking, near-instantaneous instantiation, etc. to enable the development folks to work closely with the operations teams. In fact, the idea is to meld the development aspects and the operations aspects into one continuous and cohesive set of processes and teams.

As such, besides the processes and actions that are recommended by DevOps, there are foundational components that DevOps also recommends, such as host configuration management systems, centralized and distributed version control systems, continuous build systems, debugging hooks, image build systems, etc. In fact, I would go so far as to say that DevOps takes the typical components in an Application Lifecycle Management (ALM) set of systems and merges operations principles with it.
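
As a toy illustration of one of those foundational pieces, here is a minimal sketch of the idempotent, desired-state idea behind host configuration management systems; the file name and contents are hypothetical:

```python
# Minimal sketch of desired-state provisioning: bring a resource to a
# declared state, and make re-running the step a no-op (idempotence),
# which is what enables repeatability and scriptability.

import os
import tempfile

def ensure_file(path, desired_content):
    """Converge `path` to the desired content; report whether anything changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == desired_content:
                return "unchanged"  # already in the desired state
    with open(path, "w") as f:
        f.write(desired_content)
    return "changed"

# First run converges the host; the second run is a no-op.
path = os.path.join(tempfile.mkdtemp(), "app.conf")
print(ensure_file(path, "port=8080\n"))  # changed
print(ensure_file(path, "port=8080\n"))  # unchanged
```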

All of these components can also be one of the XaaS offerings themselves with their own guarantees and SLA bindings.

One could go so far as to say that DevOps principles and services could themselves be used to build not just SaaS services (applications), but also PaaS services and even IaaS services. On the flip side, one could say that the XaaS service offerings themselves can and should provide hooks and APIs that enable integration with other services, to holistically provide the developer and operator conveniences and control that DevOps advocates.

As one can see, DevOps and XaaS do not conflict with each other. In fact, I would argue that they are complementary viewpoints on the same set of issues. The key to seeing the synergy between them is to see offerings of capabilities from a service management point of view, even for DevOps-oriented offerings.

This will be crucial to enterprises trying to tackle PaaS/SaaS/Cloud technologies and DevOps paradigms together.

As always thoughts and comments are welcome.

API Management in the cloud is a misnomer

API management as a hosted solution is not where larger enterprises or government entities want to be (or even small enterprises, for that matter).

As an enterprise, we want control over every piece of our infra.. (hold your thought before you form your opinion, though) Even if it's in the cloud.

And the way we're going, we're gonna be extending our internal services as orderable components in the cloud, and extending our app platform to run on any of the commercial platform providers: EC2, Rackspace, Google, etc..

There's a complete shift happening here from an enterprise perspective to the ITIL, or service-provider / service-builder / service-consumer, model.. with our own controls on top of each layer.

And this has nothing to do with cloud-hosted API management solutions or packaged and cloud-enabled (in VMs, unit-packageable-licensable software) solutions.. It's each vendor's approach to interfaces in general, hosting/monetization aspects, etc..

There's no sharing going on between cloud-hosted API-management tenants. That was the whole argument. And that's a losing proposition. The cloud-hosted API-management folks have realized that in time.

There's no way a company exposing APIs "a" that are similar to APIs "b" offered by another company wants them to appear in the same space, in the same context, potentially controlled by yet another company, the "api-hoster-at-large".

Granted, you have control over SLAs, but SLAs have not evolved to the concept of SLAs for "marketing value" or other higher-level business functions. And no company, even one with $500M in revenue, would want to be in a position where some aspects of its APIs are in the hands of a "cloud" provider.

Why would a medium-to-large company like "YYY" want to compete in such a marginal space where they end up being API providers for everybody else? They probably want to be in a space where they end up being providers of their own APIs.

What they would want is an API monetization platform, and that I wholeheartedly agree with.

Which has nothing to do with being in the cloud. We are in the cloud. We better be.. all of us.

Our services should be in the cloud, but as providers, we better have control points in every aspect, including the stack.. that's where your billing aspects come from, and your differentiation in terms of how you meter, how you bill, how expected security controls change the whole billing picture, etc..

And if you did have APIs that you wanted to compete with others on, then, sure, you should look at API federation, or even drive the standards on where that should go.. None of that is happening here. Or, for that matter, an API marketplace.. that's orthogonal to API management as a whole.

It may sound like I'm against cloud-hosted API portals, and I am. API portals being in the cloud for enterprises, managed by someone else, is not something any enterprise is comfortable with right now.

I'm not against API management. Some of these cloud-hosted API portals helped kickstart the whole API management movement (barring some currently large API providers who would not trust these vendors with their control points).

The clients who're getting the best kick out of it are either startups whose core business is not APIs but who want to expose APIs, or smaller companies who want to outsource their entire IT, or larger companies whose API aspirations are limited to a small business unit.

An enterprise should have an enterprise-wide (or cloud) integration strategy, whether it's API monetization for services they offer, or point-to-point integration with their partners around high-volume, high-SLA transactions, or social APIs that they're using for their marketing and/or sales efforts, or open-social APIs in general. It has to be a concerted effort from a technology execution point of view.

The way I look at it, APIs are just another aspect of a control point. That's what gateways were about in the first place. Control points, even if that means SLA management or metering or whatever, have that notion of a control plane versus a data plane, and the difficulties of balancing the two. On the internet at large, gateways as control points do not work. They start working when you have a whole bunch of gateways operating in conjunction with each other across the cloud.

API management is an evolution of the whole "RPC" concept, doled out to users outside of your control. And the control point here is monetization, not much more. The bigger aspects of monetization around APIs are ease of use, etc.. everyone in this space realizes that.


**UPDATE**

The same goes for mobile app providers.. they just happened to duke it out first.. there are no clear winners and there aren't gonna be any. The cat's out of the bag.. anyone can create "apps" on any platform.. same goes for "apis".

What you need are things like "api management" for groups, distributed groups and federated groups.

Same for "apps": you need "app management" for groups, distributed groups and federated groups.

The market's wide open. I wonder when the current "api management" folks will branch out to "app management". It's just an additional bunch of metadata, and maybe some additional binaries.. everything else is the same..
At this point, would you call it "interface management" 🙂
… just a thought.

This is in response to the ridiculous claim about Cisco router hijacking

This is in response to ESR’s post about the topic…

verbatim response on his blog comments section

I think you guys are way off base on this one..

Several reasons.. all my personal opinions but here it goes..
1) The control you talk about, Cisco wanting to control "your router", is about trying to give consumers a more centralized and more intuitive way to control it, hopefully guided by consumer vision (and the backlash that's evident here) rather than by pan-galactic paranoia about corporate hegemony… which you can always decide not to use. Gasp, it's NOT mandatory.. your choice… opt out, opt in, blah blah..
2) I have a Cisco router, and I don't see my router being hijacked. I haven't seen it before, I haven't seen it now, and I doubt I'll see it later. This is my home router/switch, and I want to keep it that way.
3) If you're buying new routers from Cisco AND you read the fine print on the product, you'd undoubtedly not buy one if it said anything about you losing control. If it does say it, and you still buy it, that's your fault.
4) If it doesn't say it, but you sign up for centralized or cloud-based control of your router without reading the fine print on your "cloud signup", then you're at fault, not the manufacturer(s).
5) You still have control. Turn it OFF and return IT.
6) Cisco's consumer business, if you've heard their quarterly earnings statements, especially the division that makes consumer routers and switches (à la the Linksys router being talked about here), is hardly the policy maker of the company, with less than 1% of the revenue. Get real. There are no conspiracies here. Move on. Find something else to bite on. This fox-news-chasing paradigm is getting old.

More here..

Here's a case of "I don't have the big picture (re: the facts) on all of what's going on, but everyone's talking about it. And hence I will also talk about it, never mind that it's the small and insignificant Linksys router that only caters to home consumers, or that it's actually owned by big bad Cisco, or that it occurs on the same day as the announcement of experimental evidence for the existence of the Higgs boson, or that there are thousands of deaths reported today on different news channels about different places…"

Is this what we’ve come down to?

holy crap! really!!!

I use, and contribute to, open source. But get off your high horse.

the confusion about authentication and authorization

There’s a lot of confusion about authentication and authorization.

At the basic level:

Authentication is about proving your identity, or how you prove to someone that you are who you say you are.

Authorization is about your entitlements: what you have access to, what you are "authorized" to do or act upon.

Authorization should be completely based on authentication first. If I can’t authenticate who you are, then my authorization parameters will be useless.

At a basic level: at a "chic" bar/dance club, if you're on my guest list, I have to first check your ID to verify you are who you say you are – authentication – then check my guest list to see if you're on it – authorization.
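
That door check can be sketched in a few lines; the ID registry and guest list below are made-up data, purely for illustration:

```python
# Authenticate first (does the credential map to a known identity?),
# then authorize (is that identity entitled to come in?).

VALID_IDS = {"id-1001": "alice", "id-1002": "bob"}  # trusted ID registry
GUEST_LIST = {"alice"}                               # tonight's guest list

def authenticate(id_card):
    """Prove identity: map a credential to a known person, or None."""
    return VALID_IDS.get(id_card)

def authorize(name):
    """Check entitlement: is this (already authenticated) person on the list?"""
    return name in GUEST_LIST

def admit(id_card):
    name = authenticate(id_card)
    if name is None:
        return "rejected: cannot authenticate"
    if not authorize(name):
        return "rejected: not on the guest list"
    return f"welcome, {name}"

print(admit("id-1001"))  # welcome, alice
print(admit("id-1002"))  # rejected: not on the guest list
print(admit("id-9999"))  # rejected: cannot authenticate
```

Note the ordering: authorization never even runs until authentication succeeds, which is the whole point of the paragraph above.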

If you think about how we authenticate people, there’s a whole range of things you authenticate them on.. but it’s mostly about some level of trust.

If you give me your ID, and I check your ID, there's some level of trust that the credentials you're providing are trustworthy.. it's got a government seal on it and everything. If my bar has been hit a few times for underage drinking, then I'm a little less trusting of the ID.. so I keep an eye out for fake IDs, or nowadays I check it against a "swipe" machine. That means I'm extending my trust holdings, or trustees, or trust circle, whatever you call it, a little further.. my trust circle now includes my experience, or my government-sanctioned "swipe" fandangle.

If you think about it a little further, the extension of my trust circle again involves identity first, then authentication and authorization.. In the first case, my employer "trusts" my identity and therefore "trusts" me to do the job of authenticating customers at the club. In the second case, my employer "trusts" the identity of the government "swipe" verifier system.

And on and on it goes.. essentially, it comes down to a web of trust for identities.. and today, in the real world, the whole thing is held up by unverifiable trusts all along the way. Any one of them could break down, and your whole setup is fair game. One of the reasons our society has held up to the gaming of this is the fear of mass breakdown of this social infrastructure. That's why we create laws about identity, IDs, passports, all of which are tied to physical verification and characteristics – fingerprints, voiceprints, facial algorithms, etc.. and it has held up, up to now.

If you extend that to the internet and digital spaces, we've come up with things like PKI (public key infrastructure), and PGP/GPG, MIME, SSL, you name it, one building upon the other.. A whole bunch of them are just virtual identity infrastructures that fall down when you actually scrutinize them.

So, you see, I misled you as a reader.. When it comes down to it,

it’s actually about identity and identity.

Once you have a foolproof way to create identity, you have a way to verify, or not verify, or nullify it (authenticate). Authorization is easy after that.. after all, it's just a guest list.. or a guest list of guest lists, or this guest list and that guest list but not that other guest list, but also all of yet another guest list. Or a specialized type of guest list that might say that on Tuesdays you're allowed in if you have a moustache but not if you are wearing sandals, and on Wednesdays, which are slow nights, everyone of a certain age or sex is allowed in for free..
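
That "guest list of guest lists" is really just composition of small authorization rules; here is a sketch using the post's own toy conditions (moustaches, sandals, Tuesdays):

```python
# Authorization as small predicates over an authenticated guest,
# combined with and/or/not into arbitrarily nested "guest lists".

def on_list(names):
    return lambda guest: guest.get("name") in names

def has(attr):
    return lambda guest: guest.get(attr, False)

def any_of(*rules):
    return lambda guest: any(r(guest) for r in rules)

def all_of(*rules):
    return lambda guest: all(r(guest) for r in rules)

def negate(rule):
    return lambda guest: not rule(guest)

# Tuesday rule: in with a moustache but no sandals, or if you're a VIP.
tuesday = any_of(
    all_of(has("moustache"), negate(has("sandals"))),
    on_list({"alice"}),
)

print(tuesday({"name": "bob", "moustache": True, "sandals": False}))  # True
print(tuesday({"name": "bob", "moustache": True, "sandals": True}))   # False
print(tuesday({"name": "alice"}))                                     # True
```

The point: once identity is settled, each rule is trivial on its own, and complexity only comes from composing them.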

I don’t think we’re completely there yet. We have the right thinking in terms of components, but not quite the right system, yet.

The components are an immutable digest or signature of your identity.. along the lines of fingerprint signatures, etc.. and these have to evolve with time to be more accurate and more immutable towards infinity (somewhat like pi.. everyone gets their own pi).

The system is completely wrong and is prone to break down at any time. There are a multitude of ways one could go about creating his/her identity (component) from a breaking-down system (a country in chaos, or a person/family with intent to game the system), like creating fake but verifiable birth certificates or passports, you name it..

What you need now, with the aid of the digital space, is a mutually verifiable, multi-way replicated public digest archive of the public portion of the identity – immutable and available across the globe.
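
One rough sketch of such a digest archive, hedged heavily: a hash-chained log of digests of the public portion of each identity, which any replica can re-verify independently. This is a toy built on SHA-256, not a real identity system; all record fields are made up:

```python
# Tamper-evident public digest archive: each entry commits to the digest
# of a public identity record AND to the previous entry's hash, so any
# replica can verify the whole chain from scratch.

import hashlib
import json

def digest(record):
    """Stable SHA-256 digest of a dict (canonicalized via sorted keys)."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def append(archive, record):
    """Append a record's digest, chained to the previous entry."""
    prev = archive[-1]["entry_hash"] if archive else "0" * 64
    entry = {"record_digest": digest(record), "prev": prev}
    entry["entry_hash"] = digest(entry)
    archive.append(entry)
    return entry

def verify(archive):
    """Re-check every link; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in archive:
        body = {"record_digest": entry["record_digest"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["entry_hash"] != digest(body):
            return False
        prev = entry["entry_hash"]
    return True

archive = []
append(archive, {"name": "alice", "pubkey": "pk-alice"})
append(archive, {"name": "bob", "pubkey": "pk-bob"})
print(verify(archive))  # True
archive[0]["record_digest"] = "tampered"
print(verify(archive))  # False
```

Only digests of the public portion are published, which is what leaves room for the privacy argument below.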

This has privacy connotations; on the face of it, not really.. but that's something for another post..

Four pillars of the information economy


Identity – who you are
Language – how you communicate
Content/information – what you want or can provide
Interface – how you can procure / provide what you want or can provide

Identity has to be pervasive across the information space. It cannot be mistaken for credentials. Credentials are how one proves who one claims to be. Credentials are a majority-accepted baseline. Identity is not. Identity should not change across language, content, or even interface changes. Of the four pillars, identity is the constant. Identities are the representatives of actors/entities.

Language is the protocols, the schemas and the data formats that travel across the transports. Even transports are themselves a form of language: TCP/IP, HTTP, REST, JSON, XML, SOAP, OCI, SQL, web services, etc. Identity holders have to understand it; systems have to understand it. Language is also decided by the majority as a baseline. It is also how you describe the information that an entity wants or provides. Languages describe the information, the content, the relevance of the information. They also describe how to get or provide it. Languages also describe the identity of the entities providing and asking for information. Languages evolve over time to describe richer information content, richer interfaces and more descriptive and precise definitions of identity.

Content/information is the foundation for why identity, language and the interfaces exist in the information space. It's the unit of barter and trade. Content is forever changing; in fact, it's the piece that's in constant change. Some types of information evolve to be entities in themselves, and as such demand their own identities.

Interfaces are what define where to get or provide the information, and how to get or provide it. Interfaces should not move or change that often, because they provide a reference point to information, or data access points. Of the four pillars, after identities, interfaces should be the ones that change the least. Interfaces can evolve along with the language, though less frequently; they only evolve to describe the semantics of the information they provide/want in a better, more precise way. They should not change to reflect the source of the information. Sources are irrelevant. Storage is irrelevant. As identities are the sole identifiers of entities requesting or providing data, interfaces are / should be the sole identifiers of information. URIs are a basic form of interface. Interfaces are typically defined by the providers. The stability and relevance of the interfaces drive the provider's longevity.
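
The point that interfaces should stay stable while sources and storage churn can be sketched like this; every name here is hypothetical, for illustration only:

```python
# A stable interface in front of interchangeable backends: consumers bind
# to the interface (a URI-keyed lookup), never to the source or storage.

class InMemoryBackend:
    """Stand-in for whatever storage happens to hold the data today."""
    def __init__(self, data):
        self.data = data

    def fetch(self, key):
        return self.data.get(key)

class ProfileService:
    """The interface consumers see; it outlives any particular backend."""
    def __init__(self, backend):
        self.backend = backend

    def get(self, uri):
        return self.backend.fetch(uri)

    def swap_backend(self, backend):
        # Storage churns; the interface (and its URIs) do not.
        self.backend = backend

svc = ProfileService(InMemoryBackend({"/profiles/alice": {"name": "alice"}}))
print(svc.get("/profiles/alice"))
svc.swap_backend(InMemoryBackend({"/profiles/alice": {"name": "alice"}}))
print(svc.get("/profiles/alice"))  # same answer from a different source
```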

In that sense, storage and compute services should be hidden, irrelevant and always assumed to be there. They are the pieces that constantly change as technology churns. The cost of hiding them and the cost of the assumption of ever-presence are carried forward by the interfaces as a cost-of-doing-business to the consuming identities. Although that cost was large yesterday and is large today, it should move toward infinitesimally smaller amounts. The true cost should be in providing constant interfaces and the ability to verify, at every point, the identities and whether they have access to the interfaces and the information/content they seek. And those costs will grow larger and larger over time.

These have parallels to human society and how we interact. The information age just makes these four pillars more apparent yet harder to grasp. But, hopefully, if we don’t lose sight of these four pillars, and strive to excel at how we provide services to cater to all four, we should be in good shape.