Filtering by Tag: cloud

unicorns and the language of otherness

Because even in the face of overwhelming evidence, people will come up with excuses for why they should not, will not, cannot—learn or change.

Presented at Velocity NY 2014.

Transcribed:

This man is albino, which means he has no skin pigmentation.

The red you see is the blood below the skin. His name is Brother Ali. He is a Muslim rapper from Minnesota. That makes him different from all of us, in some way. And in all likelihood, we don't think like him.

Let's say that I believe the earth is flat. It's part of my identity. It's a strong belief. I have convictions around it, decisions that I've made around it. I identify as an earth-is-flatter. My identity is invested in the earth being flat. An attack on the idea is an attack on me. If the idea is wrong then I am wrong. Personally. Not just about that one thing, but about my person.

Let's say you believe something different. You believe that the earth is round. You're an earth-is-rounder. That sets you apart from me. Not because you have a different idea, but because you have a different identity. I cannot identify with you. If you're successful in your belief, then maybe my way isn't the only way. If you're more successful than I am, then maybe my way isn't the best way. If you are successful and I am less successful, then maybe I'm wrong. But I'm not just wrong about the idea, I am wrong as a person.

But, I don't have to see that. I don't have to see anything. I have labeled you as something other than me. I cannot identify with you, thus I do not have to see your success. I can ignore it. I can bury my head in the sand. My ingrained belief creates a bias in me about you. And I rationalize that bias by calling you something else, by putting a label on you.

There is a saying by our friend, Brother Ali, that we have a "legacy so ingrained in the way that we think that we no longer need chains to be slaves." He's talking about racial biases, but any ingrained way of thinking creates a bias. Biases pile up and compound into a kind of psychological debt. It's like technical debt: you have to refactor it in order to move on. It will eventually slow you down, bog you down, prevent you from seeing things. Prevent you from noticing things. Prevent you from seeing a thing you might want to learn.

And what's true of you as an individual is true of us as groups. Teams can have shared biases, created by their entrenched ideas and ways of doing things, that add up to a shared psychological debt that prevents them—not just from learning—but from seeing that they should be learning. And while they are not learning, while we are not learning, there are other people who have learned and through their learning have changed the world around us.

I was an analyst at Gartner for a couple of years and I heard this all the time: "These companies are not like us. They do things differently. They have different users. They have different environments. They can do whatever they want. They don't have the same security concerns we do." A whole litany of excuses that say "we don't have to learn from them because they are unicorns," and unicorns are different and different people are others. So, eh. It's ok.

Turns out that unicorns are just people. And as people, they’re just like us. They’ve just made a different set of decisions in a different context in a different environment. We can make different decisions. We can create a new context. We can pay down our psychological debts. We can even declare bankruptcy like people do with economic debt and start over, throwing out ideas and practices. 

'Cause the thing is, if we really want to move forward and expand and learn and grow and change for a changing environment—we have to get past the mess of our past decisions. We have to separate our identities, who we are and who we will be, from who we were, what we have done and what we have been. So that when we encounter something different or see change, or see change in others, it is not a threat to our identity and it doesn't hurt so much to accept change and to do change.

I don’t want to be a unicorn. I don’t want to be someone who is apart from you, other than you, does not have to be listened to, can be dismissed. And I don’t want to think of anyone else as something special, apart, different, cannot be learned from, to be dismissed, not part of the same humanity that I’m in. 

'Cause, in the beginning and in the end, we are all still people. Thus, in essence, mainly the same. The fact that we have some simultaneous differences, that have evolved, that don't cause us to die out there in the world—suggests that the single strongest signal that you have something to learn is the fact that a difference exists.

…the single strongest signal that you have something to learn is the fact that a difference exists.

what aws is not

In 2004, SQS (Simple Queue Service) and AWIS (Alexa Web Information Service) went into beta.

In 2005, MT (Mechanical Turk) went into beta.

In 2006, S3 and EC2 went into beta.

From there, the pace of releases has skyrocketed (something we should put value on). AWS started by turning basic computation services into utilities. They've since done the same to a wide range of technology capabilities--dozens of services, hundreds of options, a combinatorial explosion of capabilities. So much so that we could reproduce all the functions and services provided by any data center anywhere.

That's where AWS is. AWS is not a commodity, though specific AWS services may become commodities. AWS is not basic computation services. AWS is not just for startups or web 2.0 or mobile or small shops or transient projects or marketing or unregulated industries, etc.

AWS is the successful utility-isation of ever more, and ever more valuable, technology services.

They are building the AWS of next year or further out through utility-ifying whatever it is that their ecosystem (customers included) is telling them (through behavior) is worth paying for. 

To really compete, you'd have to match the ecosystem play and exert margin pressure. The former you could do through co-option--which would require taking over the service supply chain--or through drawing your own ecosystem to some core differentiation (e.g. live migration on GCE or a seamless public/private experience on Azure). The latter can only be afforded by a few organizations (Google and Microsoft).

Hat tip: Most of the thought above is a direct result of, or informed by, Simon Wardley.

ooda redux - digging in and keeping context

Putting together some thoughts from a few posts from 2012 on OODA [one, two, three]. For some reason, the idea had been getting a lot of airtime in pop-tech-culture. Like most things that get pop-ified, the details are glossed over—ideas are worthless without execution; relying on the pop version of an idea will handicap any attempt at its execution. 

I’m not an expert. But, I’d wager that Boyd: The Fighter Pilot Who Changed the Art of War is the best read on the subject outside of source material from the military.

OODA stands for Observe, Orient, Decide, Act. It’s a recasting of the cognition<->action cycle central to any organism’s effort to stay alive, focused on a  competitive/combative engagement. 

Get the data (observe). 

Figure out what the world looks like, what’s going on in it, our place in it, our adversary’s place in it (orient).

Project courses of action and decide on one (decide).

Do it (act).

The basic premise is that, in order to best an opponent, we have to move at a faster tempo through the loop. Boyd used a more subtle description—operate inside the adversary’s time scale. 

First: If we traverse the loop before the adversary acts, then whatever they are acting to achieve may not matter because we have changed the environment in some way that nullifies or dulls the effectiveness of their action. They are acting on a model of the world that is outdated.

Second: If we traverse the loop before the adversary decides, we may short-circuit their process and cause them to jump to the start because new data has come in suggesting the model is wrong.

Third: If we traverse the loop at this faster tempo continuously, we frustrate the adversary's attempt to orient—causing disorientation—changing the environment faster than the adversary can apprehend and comprehend it, much less act on it. We continue to move further ahead in time while the adversary falls backwards. By operating inside the adversary's time scale.

Another detail from Boyd—all parts of the loop are not made equal.

Fundamentally, observation and action are physical processes while orientation and decision are mental processes. There are hard limits to the first and no such limits to the second. So, two equally matched adversaries can both conceivably hit equal hard limits on observation and action, but continue outdoing each other on orientation and decision. 

But realistically, adversaries are not equally matched. We don't observe the same way, using the same means, with the same lens, etc. We don't act the same way, with the same speed, etc. And being able to collect more data, spend more time orienting, leads to better decisions and actions. Being able to move through different parts of the loop faster, as needed, renders the greatest advantage. Compressing the decision-action sequence gives us a buffer to spend more time observing-orienting. Nailing observation gives us a buffer to spend more time orienting-deciding. We can come up with the best--not the fastest--response and act on it at the optimal--not the fastest--time. Getting a loop or more ahead of our adversary gives us a time buffer for the whole thing. It puts us at a different timescale. It allows us to play a different game, to change the game.

Deliberately selecting pacing, timescale, game—strategic game play.

Ops/devops analogs:

  • Observe - instrumentation, monitoring, data collection, etc.
  • Orient - analytics in all its forms, correlation, visualization, etc.
  • Decide - modeling, scenarios, heuristics, etc.
  • Act - provision, develop, deploy, scale, etc.

Startup analogs:

  • Observe - funnel, feedback, objections, churn, engagement, market intel, competitive intel, etc.
  • Orient - analytics in all its forms, correlation, assigning and extracting meaning from metrics, grasping the market map and territory, etc.
  • Decide -  modeling, scenarios, heuristics, etc.
  • Act - prioritize, kill, build, target, partner, pivot, fundraise, etc.
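
To make these analogs concrete, here is a minimal sketch of an OODA-style ops loop in Python. Everything in it (the signal names, the thresholds, the "scale out" action) is hypothetical; it illustrates the shape of the cycle, not a real controller.

```python
import time

def observe():
    """Collect raw signals: metrics, logs, events (hypothetical values)."""
    return {"error_rate": 0.02, "p99_latency_ms": 840}

def orient(signals, history):
    """Turn raw signals into a picture of the situation."""
    history.append(signals)
    degraded = signals["error_rate"] > 0.01 or signals["p99_latency_ms"] > 500
    return {"degraded": degraded, "samples_seen": len(history)}

def decide(model):
    """Project courses of action and pick one."""
    return "scale_out" if model["degraded"] else "hold"

def act(decision):
    """Carry out the decision (a placeholder here)."""
    print(f"action: {decision}")

def ooda_loop(iterations=3, interval_seconds=0):
    """Run observe -> orient -> decide -> act at a fixed tempo."""
    history = []
    for _ in range(iterations):
        act(decide(orient(observe(), history)))
        time.sleep(interval_seconds)

if __name__ == "__main__":
    ooda_loop()
```

The interesting knob is the tempo: shrinking the interval, or spending the time saved by fast decide-act back on observe-orient, is where the advantage described above comes from.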

Those are analogs. It's worth keeping in mind that OODA was developed for the context of one-on-one, fighter-jet-to-fighter-jet combat and not anything else.

communication in the service and api supply chain

Another thought about "the service and api supply chain"—how do we know what an API provider or servicer can do? It's unlikely that any given servicer of an API will service the same subset of that API as any other servicer, or be able to keep up with all the changes that are introduced by the API originator.

Can you ask an API endpoint:

  • Hey, what can you do? 
  • What APIs do you originate and provide? 
  • What 3rd party APIs do you service? 
  • What subset of those APIs? 
  • What are your SLAs?
  • Etc.

Can the API endpoint tell you:

  • I’m running out of capacity to service X?
  • There’s a degradation of service Y?
  • You can send these calls to these other endpoints I own?
  • This is how much I charge per call for Z?
  • Etc. 

We could use a (roughly) global language with some basic terms for services that describe what they do, what they service, how they do it, with what kinds of commitments. An analog to Wolfram Language for distributed services. 

We could use a (roughly) global protocol for handshakes and mutual understanding so services can talk to each other, advertise and discover what they can do, what they can service, how they do it, with what kinds of commitments and interrogation mechanisms. An analog to Ethernet autonegotiation for distributed services.
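
As a rough illustration (not an existing standard), here is what a capability manifest and a discovery handshake might look like. The /.well-known/capabilities path, the field names, and the API identifiers are all made up for the sketch.

```python
import json
from urllib.request import urlopen

# A hypothetical manifest an endpoint could publish about itself.
EXAMPLE_MANIFEST = {
    "originates": ["example.objects.v2"],                  # APIs this endpoint defines
    "services": ["aws.s3.GetObject", "aws.s3.PutObject"],  # 3rd-party APIs it will service
    "sla": {"availability_pct": 99.9, "p99_latency_ms": 120},
    "pricing": {"per_call_usd": 0.0000004},
    "status": {"capacity": "ok", "degraded": []},
}

def discover(endpoint_url):
    """Ask an endpoint what it can do, via a (made-up) well-known path."""
    with urlopen(f"{endpoint_url}/.well-known/capabilities") as resp:
        return json.load(resp)

def can_service(manifest, call_name):
    """Check whether an endpoint claims to originate or service a given call."""
    return (
        call_name in manifest.get("services", [])
        or any(call_name.startswith(api) for api in manifest.get("originates", []))
    )

# Example check against the static manifest above:
print(can_service(EXAMPLE_MANIFEST, "aws.s3.GetObject"))  # True
```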

Plenty of API providers don't want this to exist. But the competitive advantage that could be generated by programmatically dealing with an API should draw a significant ecosystem. So I wonder why this hasn't been done, especially by near-first-tier providers like GCE and Azure. I can only guess it's because they haven't figured out how to do ecosystem-based strategic gameplay for cloud services yet. And of course, AWS has no need to do such a thing (yet).

fantasy vc - virtustream

This fantasy vc post comes from something I wrote about in "what we don't know about private cloud" and "the three cloud questions you have to answer":

The line between what we do in the public cloud vs what we do in the private cloud vs what never goes to a cloud model will be moved—both in time and in scope—by the cloudification of legacy applications.

In the space between public cloud for cloud-native applications and on-prem virtualization plus automation for non-cloud-native applications lies a big space for remote hosting of non-cloud-native applications. 

Some of this is satisfied by what can best be called managed hosting for virtualization, which is what most VMware-oriented service providers (even those that use the word “cloud”) do. And some of it is satisfied by VMware-oriented service providers that actually have managed to build a self-service, usage-based service model (like the very successful Tier3, recently acquired by CenturyLink).  

Yet the promise of being able to forklift a legacy app to a cloud provider and switch from a perpetual license model to a usage-based model, paying for a particular bit of software as a service, is rarely fulfilled. Enter Virtustream.

Put these things together:

  • Many enterprise apps cannot be re-architected to be “cloud-native”
  • There is demand to forklift enterprise apps to hosted infrastructure
  • There is demand to pay for those apps on a resource-consumption model
  • Many of those apps are only supported virtualized on VMware
  • Many enterprises don’t have the expertise to do the forklifting
  • Many service providers don’t have the expertise to do the forklifting
  • Many (most?) VMware-oriented service providers can’t figure out how to get the automation and resource-consumption parts done in a way that generates a sufficiently cloud-like experience for their customers
  • Virtustream solves for this

The interesting thing about Virtustream to me is the apparent focus they’ve had from the beginning: these exact customers, this exact problem space, those exact applications. And nothing else. Period. 

The technology they built is fundamentally an enabling mechanism to make the resource-consumption model granular enough on VMware to achieve the cloud-like experience. The model they built is predicated on doing the hard work of the full life-cycle of an enterprise pre-sales/sales/post-sales consultative service. And they do the hard work.

That's not to say that they're guaranteed success. Or won't get crushed by an incumbent or other party. Or even scooped up before they become too successful. Just that I would've placed that bet.


Disclosure: Virtustream, as a whole, was adjacent to my coverage area at Gartner—but their xStream cloud management platform was squarely in my coverage area once it was commercialized.

the service and api supply chain

When we visit a site, start an app, or do just about anything online—what lives behind that one object is tens, sometimes hundreds, of services.

As Mr Krugman notes, the great transport and communication innovations of the past generation did not necessarily reduce shipping costs. Rather, they reduced shipping time while also making international coordination of shipments cheaper and easier. The result has been, in Richard Baldwin's phrase, a "second unbundling". The first unbundling represented globalisation's geographic separation of production and consumption more than a century ago. The second is the geographic separation of stages of production. And one then has to ask how stories of the determinants of international trade apply to each of these various stages.

- Hyperglobalisation and metropolitan gravity, Free exchange @ The Economist [emphasis added] 

We’ve seen a steady unbundling on the web, on our phones, and in our apps. It’s the separation of technological stages of production of apps and services. And it’s turtles all the way down.

When we bought all our software and ran it on our own systems, we controlled the software supply chain underlying the ultimate business apps we used. But if we rely on a SaaS app that relies on other services accessed via APIs which may themselves rely on other services accessed via other APIs—how do we manage this new supply chain? How do we figure out how to manage the risk of our services’ services’ services’ services?

With actual manufacturing, if a supplier stops delivering or goes out of business, you find another maker of the same component or ship your specs off to another manufacturer who can make that very same part you need. 

In the API economy, if the provider of a service goes under or simply stops providing the service, what can we do? 

  • Suffer and recode to a new API for a competitive service (if one exists)
  • Build an abstraction (maybe our own API) to make that easier
  • Use two or more similar services via our own abstraction, with some ability to switch if one fails, which still involves building the drivers (as it were) for each service's API (a rough sketch of this follows the list)
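
A minimal sketch of that kind of abstraction, assuming two hypothetical providers behind our own narrow interface; the failover logic and provider classes are illustrative only.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Our own narrow API; callers code against this, not against any provider."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStore(ObjectStore):
    """Driver for one (hypothetical) provider's API."""
    def put(self, key, data):
        raise NotImplementedError("call provider A's SDK here")
    def get(self, key):
        raise NotImplementedError("call provider A's SDK here")

class ProviderBStore(ObjectStore):
    """Driver for a second, similar provider."""
    def put(self, key, data):
        raise NotImplementedError("call provider B's SDK here")
    def get(self, key):
        raise NotImplementedError("call provider B's SDK here")

class FailoverStore(ObjectStore):
    """Try providers in order and switch if one fails."""
    def __init__(self, *providers: ObjectStore):
        self.providers = providers

    def _first_working(self, op):
        last_error = None
        for provider in self.providers:
            try:
                return op(provider)
            except Exception as exc:  # real code would catch provider-specific errors
                last_error = exc
        raise last_error if last_error else RuntimeError("no providers configured")

    def put(self, key, data):
        return self._first_working(lambda p: p.put(key, data))

    def get(self, key):
        return self._first_working(lambda p: p.get(key))

store = FailoverStore(ProviderAStore(), ProviderBStore())
```

The drivers still have to be written per provider, which is exactly the cost the third option above carries.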

What we can’t do is ship the API calls to another provider to service, or put a service spec plus API out to bid, or create an ecosystem of multiple suppliers for each service layer. 

Why not?

Where’s the alternate service provider who will service the AWS APIs? Where’s the alternate service provider who will service the Twilio APIs?  Where’s the alternate service provider who will service the Dropbox APIs? 

Added Feb 12th: Where's the communication mechanism to discover services, APIs, nodes and negotiate transactions? More on that question in another post.

There’s an opportunity in there somewhere.

fantasy vc - apprenda

Considering the press and recent funding round for my friends at Apprenda, it seems a bit disingenuous to fantasy vc them. But no matter.

I've been convinced of their success since the first explanation of the product and target. There were plenty of PaaSes at the time, but they were mostly targeted at developers and mostly public. Press releases from analyst firms like this notwithstanding, the market wasn't taking off and didn't look like it was going to take off any time soon.

Put these things together:

  • There is a lot of in-house development in the enterprise—though no one knows how much exactly, it's at least enough to support a couple of private PaaS players
  • That development is mostly Java or .NET on Linux or Windows
  • And it runs on fleets of servers, storage, networking, and data centers that are not at end of life
  • What’s developed is custom apps for core business/ops, paperwork apps, extensions to COTS with SDKs, glue to connect together these things and/or legacy apps and/or cloud apps and/or cloud services and/or…
  • There was and is very little “private” PaaS competition
  • There was no private PaaS for .NET applications at the time that I knew of and there are only a few today 
  • There was little that provided the experience of a distributed runtime on prem out of the box
  • Virtualization is not required
  • Apprenda was (and is still the only?) private PaaS supporting .NET that doesn’t predicate itself on some hypervisor

You offer a runtime, so you may or may not have VMs--but who cares since what you need to expose is the runtime, a management console for that runtime, and (hopefully) APIs to connect to and operate it

- me, provided vs exposed

To me, that last point was the killer. Before the renewed popularity of [the old technology of] containers, Apprenda leveraged Linux container tech and figured out how to get a workalike on Windows to underpin their PaaS, thus totally forgoing the overhead of hypervisors and the overhead of VMware's margin.

That's not to say that they're guaranteed success. Or won't get crushed by an incumbent or other party. Or even scooped up before they become too successful. Just that I would've placed that bet.

Disclosure: Apprenda is not in my former coverage area and I have no financial interest in them. But Sinclair and I do share an alma mater.

fantasy vc - metacloud

Kicking off a series about bets I would've placed if I had the money. This is something I very much wanted to do--very much could not do--when I was at Gartner.

I don't know the numbers on "real" (read: revenue generating) OpenStack adoption, growth, etc.

I do know there's real traction. 

Suspicion: it's with very very few vendors. Money is being made but success is concentrated.

There are only two startups in the space I would bet on. One, I have a conflict of interest regarding. The other is Metacloud. Neither is really an OpenStack company; OpenStack is just a vehicle for the thing they actually do. In Metacloud's case, what they do is remote ops (as a service!).

Put these things together:

  • There is a market for private cloud (whatever that is)
  • There is a market for AWSish public cloud
  • There is a market for AWSish private cloud (Eucalyptus is still in business, isn't it?)
  • There is an existing use case for AWSish private cloud in most enterprises (web, mobile, dev)
  • There is a fundamental our-bottom-line-at-stake use case for AWSish private cloud in some subset of enterprises (a few hundred?) today
  • There is a general lack of operational skill for AWSish private cloud
  • One of the core things public cloud provides is a managed service
  • There is a market for on-prem remote-managed AWS (the three letter agency thing is a public example)
  • The Metacloud guys are ops guys who understand enterprise, scale, web, mobile, open source, AWSish cloud
  • There just aren't a lot of hats (big or small) in this particular ring right now 

That's not to say that they're guaranteed success. Or won't get crushed by an incumbent or other party. Or even scooped up before they become too successful. Just that I would've placed that bet.

Disclosure: They're in my former coverage area. But I believe with some certainty that I'd come to the same conclusion without that background. I have no financial interest in Metacloud. I really like them. Would have a beer with that crew any day. 

provided vs exposed

If you're offering infrastructure as a service, you have to have infrastructure to offer and it has to be exposed.

But if you're offering something else, then:

The infrastructure doesn't need to be exposed, THUS you don't need to have it.

Examples:

  • You offer VMs, so you need to expose VMs, a management console for VMs, and (hopefully) APIs to connect to and operate them
  • You offer a runtime, so you may or may not have VMs--but who cares since what you need to expose is the runtime, a management console for that runtime, and (hopefully) APIs to connect to and operate it
  • You offer an application, so you may or may not have VMs or a particular runtime--but who cares since what you need to expose is the application, a management console for that application, and (hopefully) APIs to connect to and operate it

It gets a little more complicated when someone wants to build something else on top of what you offer. Then they probably want and/or need more exposure to, and more control knobs for, the underlying stuff.

Basically, this is what makes IaaS (specifically VMaaS) different in kind from anything else. 

What you provide guides what you expose, which dictates how you can build.

What you've built limits what you can expose, which dictates what you can provide.
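
One rough way to picture the distinction, with hypothetical interfaces for each kind of offering; the point is that each layer only has to expose its own surface, whatever does or does not sit underneath.

```python
from typing import Protocol

class VMService(Protocol):
    """IaaS (VMaaS): the VMs themselves are the exposed surface."""
    def create_vm(self, image: str, size: str) -> str: ...
    def delete_vm(self, vm_id: str) -> None: ...

class RuntimeService(Protocol):
    """PaaS: only the runtime is exposed; VMs may or may not exist underneath."""
    def deploy(self, artifact: bytes) -> str: ...
    def scale(self, app_id: str, instances: int) -> None: ...

class AppService(Protocol):
    """SaaS: only the application and its management/API surface are exposed."""
    def invite_user(self, email: str) -> None: ...
    def export_data(self) -> bytes: ...
```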

the value of feature velocity

We don't put a value on feature velocity. Not our own, but the public cloud's.

Maybe we should. Being on a public cloud like AWS exposes us to new capabilities faster than most internal IT departments can begin to provide them. We may not need any given capability, or even want it. But here's the thing: you will never know what you could do with it if you're not exposed to it.

There's some inherent value to that. To being exposed. 

And there's an opportunity cost to not being exposed. 

Are you putting a value on that?