Of the various organizations I know that have been instrumental in helping Nepal and Nepalis recover, the two that really shine in my mind are:
– America Nepal Medical Foundation (http://americanepalmedicalfoundation.com/ and https://life.indiegogo.com/fun…/nepal-earthquake-relief-fund)
– Sahayeta Nepal (http://sahayeta.org)

They have shown professionalism, openness and, more importantly, fidelity to the core beliefs on which their organizations were founded, not straying far from them in this time of chaos, emotion and turmoil.

There are many more organizations that come to mind, and I will probably be dinged for not naming them, but I wanted to point out these two specifically, because of the points raised above.

Many thank yous from me and a host of folks. Please keep doing the work you’re doing and inspire other organizations and individuals to do similarly.
I know I’m inspired.

Again, thank you for being Nepali from the core.

This “article” appeared on LinkedIn Blogs, written by Ravi Krishnappa, entitled Today Ransomware – Tomorrow Haramware.

It showed up on my feeds and I felt compelled to reply. Here’s my response, which somehow didn’t get posted in the comments section of the LinkedIn blog:

“””

I have some comments on your post:
My comments follow snippets of your post quoted within sideways chevrons (>>>some text<<<).

1.
>>>
I regret the whole computer and internet security design that is in place today.
<<<

Very true, and lots of folks, including yourself, are working on this. It’s a known problem, as you say.

2.
>>>
The whole thing will one day crash and burn and take the civilization back by 200 years. The current security model is like living in an iron fortress with known secret doors. Everyday, we see huge data thefts from private, public and government agencies. I strongly believe that similar leaky infrastructure could be the reason behind the sudden disappearance of mighty dynasties in Egypt, Mexico, Peru and India.
<<<

The whole thing may crash one day, but I doubt your “whole thing” is the same as mine or anyone else’s. I don’t believe my “whole thing” is the internet and the security therein. Half of the world’s population doesn’t have enough to sustain themselves physically; the internet crashing would barely twitch their eyelashes. You do make a point, but you lost me when you drew a connection between data thefts and the disappearance of dynasties.

3.
>>>
A internet dependent society that has no manual fail-back mechanism can be crippled if enemies lock up key computers that are essential for commerce and communications.
<<<
See above. If the basis for the manual fail-back mechanism is human civilization, then there already is one: human ingenuity. Paper books are still available, for a while anyway. There are even organizations that deal with long-term human survival (http://www.longnow.org), and an effort to preserve the knowledge needed to bootstrap if something like that happens: http://blog.longnow.org/category/manual-for-civilization/ .

But I doubt it’ll come to that. I do agree that we, as humanity, are heading somewhere more information-centric. Where we may differ is who has access to that information, and how. History has shown that control of any valuable entity by a few leads to circumstances that destabilize the situation and eventually commoditize the entity.

4.
>>>
I call this kind of wanton destruction as Haramware after the brutal methods used by certain groups in Nigeria, Kenya, Syria, Afghanistan and Iraq to destroy whole villages, cities and ancient artifacts.
<<<
You lost me again. I see where you’re going with brutality and the abuse of power, but I struggled to see the correlation to internet security.

5.
>>>
We are sitting on a time bomb right now. The big players in networking (Cisco, Juniper etc), virus protection (McAfee, Symantec etc), Network protection (Palo Alto, Checkpoint etc) are not coming forward to offer 100% protection against intrusion, stealing and damaging the computer infrastructure.
<<<
So, back to information and information security: I fail to see how a few infrastructure-level companies like Cisco, Juniper, McAfee, Symantec, Palo Alto, Checkpoint etc. are liable for the lack of internet security. Yes, as networking companies, they have a prerogative to provide solutions to “known” and “foreseen” problems in the network. Why aren’t companies like Facebook, Twitter, Apple, Samsung, Google, Yahoo in that list? For that matter, why aren’t Amazon, AirBnB, Netflix, Uber in that list either? Information leaks by osmosis, again a known issue (http://www.misentropy.com/2010/05/information-osmosis-and-the-case-against-chief-culture-officer.html sheds light on *some* of it).

6.
>>>
Why ? Because they can protect a few known secret doors but they don’t know about other secret burrows dug below the foundation. They are like the tunnels built be Hamas under the Israeli border walls using Israeli cement.
<<<
I don’t know enough to comment on the accusations, and conspiracy theories are rarely productive. Companies exist for profit (at least in America) and are usually beholden to their shareholders. As such, they will act to increase their profit and the trust of their shareholders. Being held on a leash by secretive organizations that hold the keys to “secret doors” the companies must support rarely goes in favor of a company’s livelihood in the long run. I could be mistaken.

7.
>>>
We need a brand new computing and internet infrastructure that is simply not hackable. What we have today is pure crap. That new infrastructure will probably cost more than $20 Trillion dollars and it is worth spending that amount. That spending could revamp the sluggish worldwide economy and bring back the basic security that is essential for living in a digital world.
<<<
I think that whatever we as humans make, we can unmake. There is no “other” human-made infrastructure that cannot be hacked by humans, simply because we thought it up and we built it. How would one handle security with the “new infrastructure”? Someone, or some group within a hierarchy, would still end up owning the responsibility for upholding the last mile of security, and they would still be human. How is that different from what we have today? It’s probably worse.

I didn’t want to feed the trolls, but this showed up on my feeds.

“””

There are entire classes of products being invented, revamped, repositioned to join the connected revolution.

In generalized terms, one could, perhaps, classify the different products and offerings into these broad categories:

a) Connected devices

b) Core software/OS on connected devices

c) Library/application software on connected devices

d) Software for command, control and configuration of connected devices

e) Data collection systems for connected devices

f) Analytics of data from connected devices

g) Infrastructure for centralized/cloud-based deployment of software

Of all these categories, all but a) and g) concern software.

Of all the software, I would contend that d) is the linchpin that ties the whole IoT world together. That’s where peripheral but necessary components like event systems, machine identities, security and encryption, access management, communication protocols etc. all come together. In a sense, it’s the core of the IoT platform.

Any company or organization that makes headway in creating that core platform, with ease of use, low cost of entry, ease of programming, extensibility etc., will be the winner. No platform today exhibits all of these behaviors.
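As a rough illustration of what the smallest kernel of category d) might look like, here is a hedged Python sketch of a command/control layer that ties together device identity, command routing and an audit trail. All names (Device, ControlPlane, send_command etc.) are illustrative assumptions, not any real product’s API.

```python
import json
from dataclasses import dataclass, field

@dataclass
class Device:
    """A connected device known to the control plane (hypothetical model)."""
    device_id: str
    kind: str
    state: dict = field(default_factory=dict)

class ControlPlane:
    def __init__(self):
        self._devices = {}    # machine-identity registry: id -> Device
        self._audit_log = []  # every command is recorded for auditing

    def register(self, device: Device):
        self._devices[device.device_id] = device

    def send_command(self, device_id: str, command: str, **params):
        """Route a command to a registered device and record it."""
        device = self._devices[device_id]
        device.state[command] = params
        self._audit_log.append(json.dumps(
            {"device": device_id, "command": command, "params": params}))
        return device.state

# Usage: register a device, then command and configure it through one place.
plane = ControlPlane()
plane.register(Device("thermostat-1", kind="thermostat"))
state = plane.send_command("thermostat-1", "set_temp", celsius=21)
print(state)  # {'set_temp': {'celsius': 21}}
```

Even this toy shows why d) is the linchpin: identity, routing and auditing all have to meet somewhere, and that somewhere is the control plane.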

This is more of a kudos to Google for “finally” reaching out to the community with their release of Kubernetes as a fully “co-operative” open-source code base.

As I mentioned in my earlier post, Google lets something out of the bag once in a while. This one, at least, is more than the paper that started the Hadoop map-reduce trend; here, Google seems to be actively involved in the community.

I like containers (Linux containers, specifically namespaces + cgroups + SELinux) simply because they provide an easy abstraction to package and ship my code (à la Docker), but also because they give me the power of the whole Linux kernel + GNU/Unix ecosystem to help me manage my code and other people’s, and that includes doing inter-process communication right.

In this case, since nowadays it’s all about cross-device processes, it’s about doing inter-service communication right. That means getting the naming of services right, getting the network routes to services right and getting the discovery of services right, as well as being able to tune the different levels of what Linux and its ecosystem provide, to make sure I can deliver container-based services properly, with the right SLAs.
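The naming-and-discovery piece can be sketched minimally. This is a hedged Python sketch of the idea, not Kubernetes’ actual mechanism (which uses Services, kube-proxy and DNS); ServiceRegistry and its methods are hypothetical names for illustration.

```python
import random

class ServiceRegistry:
    """Toy service discovery: map a service name to live endpoints."""
    def __init__(self):
        self._endpoints = {}  # service name -> list of "host:port" strings

    def register(self, name, endpoint):
        self._endpoints.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        self._endpoints.get(name, []).remove(endpoint)

    def resolve(self, name):
        """Pick one endpoint for the named service (naive load balancing)."""
        endpoints = self._endpoints.get(name)
        if not endpoints:
            raise LookupError(f"no endpoints for service {name!r}")
        return random.choice(endpoints)

# Usage: two replicas of a "billing" service hide behind one stable name,
# so callers never hard-code a host; replicas can come and go.
registry = ServiceRegistry()
registry.register("billing", "10.0.0.5:8080")
registry.register("billing", "10.0.0.6:8080")
print(registry.resolve("billing"))  # one of the two endpoints
```

The point of the sketch is the contract, not the code: callers depend on the name, the platform owns the mapping, and that separation is what makes routing and discovery tunable independently of the services themselves.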

I had my mind set on Mesos as a way to orchestrate Linux processes throughout the datacenter. With Kubernetes in the mix, it seems to me that Kubernetes along with cAdvisor gives me the right tools to either create or choose the right frameworks to do real workload management, properly.

We shall see. It’s an exciting world for people who develop and ship datacenter/cloud applications. Viva user-driven infrastructure!

PS:

That OVS blurb in the title was a teaser: none of Kubernetes, Mesos, Docker etc. can give you a powerful network link abstraction. The most open way is OVS. Stay tuned.

Similarly, my requirements are not just to run application-type services using this container paradigm; I actually want to, and have to, build other services (what were typically referred to as backend services: database, analytics etc.) using the same framework. Here, Google, Red Hat, IBM, Microsoft etc. probably want you to use their Kubernetes-optimized cloud for apps while they provide the additional services like streaming data analysis, transactional data services, identity services etc. What interests me here is turtles all the way down: I want to be able to build those “additional” value-add services using the same paradigm, so that it’s actually a competitive marketplace.

I’m not counting Amazon AWS out on this; they’re probably racing to counter this recent movement. I don’t consider AWS to be truly cutting edge in any technology; they just make it accessible, cheap and user-friendly, in any market. I’m not necessarily sure that’s going to cut it. A script kiddie in a chai café in Rio or Mumbai’s slums could be writing the next cutting-edge software, and I don’t believe Amazon AWS (or any of the aforementioned companies) can compete with that. But that’s where we’re heading, aren’t we?

It seems like Google lets something out of the bag once in a while. Take Google Omega:

http://www.theregister.co.uk/2013/11/04/google_living_omega_cloud/

and their recent announcement about their use of Linux containers:

lmctfy (on github)

To me, this sounds like a developer or app deployer specifying the characteristics of a workload when they deploy it (represented by SLAs: priority, latency expectation etc.), with the management platform using metadata about resource pools and their available capacities to fulfill those SLAs, choosing the right pools and then deploying the workloads there.

So, in effect, it’s not just their workload scheduler; they require the right metadata to be populated along with their workloads.

It just so happens that the unit of deployment they may be using is containers built on cgroups and kernel namespaces, and they add additional metadata to the definition of the containers that users can manipulate.

http://stackoverflow.com/questions/19196495/what-is-the-difference-between-lmctfy-and-lxc 

One can start doing this with Docker today, with custom metadata. The harder part is the scheduler, which would have to be something custom (maybe piggybacking on OpenStack work around the Nova scheduler, Neutron etc., or an existing PaaS ecosystem like OpenShift).

This is probably a truer version of workload management, moving toward the idea of autonomic computing, than just moving VMs around. Granted, you could do the same with VMs by adding metadata, but you’d also have to deal with resource management at two levels, the hypervisor and the VM, which is usually not a good idea.
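The SLA-matching step described above, a workload declares its requirements and the platform picks a pool whose capacity and latency class can satisfy them, can be sketched in a few lines. This is a hedged toy, not Omega’s or lmctfy’s actual logic; the field names (cpus, max_latency_ms etc.) are illustrative assumptions.

```python
def schedule(workload, pools):
    """Return the name of the first pool that can satisfy the workload's SLA."""
    for pool in pools:
        fits = pool["free_cpus"] >= workload["cpus"]
        fast_enough = pool["latency_ms"] <= workload["max_latency_ms"]
        if fits and fast_enough:
            pool["free_cpus"] -= workload["cpus"]  # reserve capacity
            return pool["name"]
    raise RuntimeError("no pool satisfies the SLA")

# Usage: a latency-sensitive workload lands on the fast pool even though
# the batch pool has far more spare capacity.
pools = [
    {"name": "batch",   "free_cpus": 64, "latency_ms": 50},
    {"name": "serving", "free_cpus": 8,  "latency_ms": 5},
]
web = {"cpus": 4, "max_latency_ms": 10}
print(schedule(web, pools))  # serving
```

The interesting part is exactly what the post says: the scheduler is trivial once the metadata exists; populating accurate pool and workload metadata is the real work.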
