aws lambda - some words

To get these out of my head so I can stop thinking about them...

At re:Invent last year, Ben Golub was up on stage singing the praises of Docker. The masterminds at AWS had arranged for a solid 20-30min of Docker love-in before making the day 2 technical announcements. Ben said that [one of] Docker's goals was to free developers from having to worry about production and delivery (or something like that, see his keynote). Then Werner comes on stage, describes Lambda, and more or less says that while others are trying to free developers--Lambda actually does that. Pretty amusing.

Lambda will drive some usage away from other AWS services. I've already seen experimentation and real usage start amongst high end AWS users (not just Netflix). You could view it as cannibalization, but it's much smarter. Presumably AWS has figured out how to price Lambda in an accurate way such that the cost of all the underlying and adjacent services consumed is priced in.

Lambda might be a "true" PaaS in the sense of being a pure runtime where you don't have to understand the underlying mechanics or implementation of compute, storage, database, etc etc at all. There are no buildpacks, runtime plugins, etc etc like you have in most PaaSes.

Like Jeff Barr said in his blog post: "You don't have to configure, launch, or monitor EC2 instances. You don't have to install any operating systems or language environments. You don't need to think about scale or fault tolerance and you don't need to request or reserve capacity. A freshly created function is ready and able to handle tens of thousands of requests per hour with absolutely no incremental effort on your part, and on a very cost-effective basis."

Although it has constraints--Node only, up to 1GB of memory per function (last I checked), and so on--it's a completely abstracted runtime environment. You give it code and a few variables. It does the rest.
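To make that concrete, here's a minimal sketch of the programming model, written in TypeScript. Treat it as illustrative only: launch-era Lambda was callback-based Node with its own context object, so the async signature and the event shape below are assumptions, not the exact API.

```typescript
// A minimal sketch of a Lambda-style handler: the platform invokes an exported
// function with an event payload; instances, OS, scaling, and capacity are not
// our problem. The event shape is a hypothetical S3-style notification.
interface ObjectRecord {
  s3: { bucket: { name: string }; object: { key: string } };
}

interface NotificationEvent {
  Records: ObjectRecord[];
}

export const handler = async (event: NotificationEvent): Promise<string> => {
  // "A few variables": configuration arrives via the event payload or environment.
  const names = event.Records.map((r) => `${r.s3.bucket.name}/${r.s3.object.key}`);
  console.log(`received ${names.length} object(s): ${names.join(", ")}`);
  return `processed ${names.length} object(s)`;
};
```

Everything below that function--provisioning, patching, scaling, capacity--is AWS's side of the line.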

It completely removes Ops. Why DevOps when you can just Dev? It's more like Google App Engine than anything else out there. But GAE won't let you have long-running functions (more than a few secs, last I checked), so even in its limited way, Lambda is already a step ahead.

Where a Docker container gives you theoretical portability, because your entire app is packaged in a way that's independent of what it's running on (but not really), Lambda locks you in because you have no idea how your code is running or what it takes to run your code. The only thing you could conceivably move to is GAE, but you'd have to rewrite bits and metadata in order to do it. Oh, except that GAE doesn't do Node. So never mind.

It's brilliant. 

It's also dangerous. If you never learn how the thing below what you are doing--what you are downstream of and rely on--works, then you become intrinsically dependent on the provider of that service. Great when that service is an actual commoditized utility with multiple providers in a competitive marketplace. Miserable when it's a monopoly. Creating that dependence is good gameplay on AWS's part. Not providing equivalent alternatives that conform to the same interfaces is bad gameplay on everyone else's. Becoming hooked is a poor decision on our part, unless we do it with eyes wide open and willingness to do the work of unhooking ourselves in the future.


Or, as Nick Weaver puts it:


the psychological demands of change

Presented at Monktoberfest 2014.

It's about the psychological demands of change. Related to an Ignite talk earlier in the year, Unicorns and the Language of Otherness, about the investment of identity as a barrier to change.

Soon after moving to the west coast for #startuplife, I read Hooked. It's mainly about using the mechanics of addiction for profit, how to build addiction into products. Buried as a subpoint deep in the book, there’s an interesting bit about Alcoholics Anonymous and support groups, in general. People in the group have been through something already or are going through it together. Their experience provides proof that the next step is achievable. Their empathy provides positive reinforcement to nurture that achievement.

That reminded me of the Six Pillars of Self-Esteem, which makes the point that the source of self-esteem is confidence and the source of confidence is efficacy. We do new things constantly when we're little. Our success in doing those things gives us a sense of efficacy. That sense of efficacy translates into confidence which in turn forms the psychophysical (biological? evolutionary?) basis of self-esteem.

Think about what happens when you try to do something new. Generally, you suck. And when you suck, you feel bad. You suck, your efficacy is challenged, your confidence is challenged, your esteem is challenged. Which is hard. 

Where doing something new is difficult, changing something old to do something new is even harder. You’re taking something you know how to do, where you have an efficacious experience, and you have to abandon it to do something else. You’re not just starting from scratch and then building experience, efficacy, confidence, and esteem. You’re starting from a place of confidence and cutting it out of your life. As you stumble along in the learning process, you hopefully find efficacy, confidence, and esteem along the way. But until that happens, the experience is unpleasant. Changing behavior actually hurts.

Changing behavior actually hurts.

Community support eases the pain by filling the gap. It supplies an external source of confidence and esteem to get you through until you can build your own.

In tech, we are producing and evangelizing new patterns, architectures, cultures, metaphors, and interfaces at an ever greater pace. We demand of everyone that they change, that they give up what they're confident at, that they stumble along and fail on the way to every new promised land we pitch. We demand that people suffer. So we should provide the means to alleviate that suffering.

Make change consumable. 

Not just UX or user-centered design, but psychologically centered design. People work in specific ways, feel in specific ways, learn in specific ways. Don't build for the customer or the user. Build for the human.

Design for efficacy. 

When we talk about building minimum viable products and breaking down the ultimate thing we’re trying to build into something that’s useful at each stage of creation, it’s to create efficacious experiences at every turn. Little wins matter.

Fill the gaps.

When we can’t help our users or even see the friction that we’ve created, they can help themselves. That’s why we build communities. They can help themselves, but they can’t do it by themselves. They can only do it together.


Disclaimer: This is neither expert nor scientific. :)

Photo credit: MVP image from here. 

Slides below.

fantasy founder - elder interfaces

Continuing an occasional series about products and companies that I’d like to see built, or build.

Over the years, I’ve tried to teach my grandmother to use computers, dumb phones, smart phones and tablets--with no success. She will learn one or two things (command sequences) to get something done for a little while, but nothing sticks.

Facts:

  • English is her 5th language (depending on how you count subcontinental languages).
  • She hasn’t had much schooling, up to 5th grade maybe.
  • But she’s sharper than most people I know, having cogent conversations about geopolitics and doing relatively complex financial math in her head.
  • Her formative years were in a developing country, traumatized by mob rule, lynchings and the like.
  • Her first personal exposure to computers was in her 40s, and her first attempt at using computers was in her 60s.
  • Recently, she had a stroke and lost some significant English comprehension circuitry.

Desktops, folders, files, that there are different kinds of files, applications, trees of objects, windows, visual controls, input controls, control contexts, focus, local vs remote, online vs offline, different affordances in different mediums, different affordances in different contexts on the same medium, contextual clues built into small variances in visual presentation, the boundaries that separate one object from another, the different kinds of boundaries presented for different kinds of objects in different mediums or contexts—are all bound to a certain cultural context and presume a certain set of preexisting models of how the world is organized and works.

The cultural assumptions built into our interfaces render them incomprehensible.

How we might overcome them:

  • No files: If you didn’t grow up with computers or with desks and file folders, the metaphor doesn’t work. It doesn’t translate into the model which tells you that this thing is an object and the same form of object can have different content, etc. Better would be just apps which find and organize related content, the Apple way — stepping away from having to know how things are made and work to only needing to know what it is you want to do.
  • No exposure of the filesystem: An extension of the last point: no folders, no browsing, no object tree, no files—just actions. That’s what the machine exists for and that’s why we go to it, to do something. Tool and action are fundamental enough concepts to transcend cultural context.
  • Feedback on every action: I noticed that my grandmother would frequently do something on a computer or tablet and not know that she had done it or not believe that it had happened, especially things that are ephemeral like copying text. When you don't have a model for how the system works, you need explicit feedback that the thing you're trying to do was done or that you've done a thing, period. Strong visual, tactile and/or audio feedback for every action taken to tell you not just that you have actually done it, but that the intent has been registered by the system.
  • Larger tolerances: Because fine motor skills deteriorate with age, getting shaky fingers right on a button is an unreasonable expectation, so close enough has to be sufficient.
  • Space between things: Corollary to the last point, what defines close enough should be consistent and big enough that it becomes intuitive (as an affordance) and feels easy. Which means sufficient space between all control elements to allow for not getting right on the button — as in, the whole grid square where the button is present is an active control.
  • No menus: Big buttons with big words and/or big icons, all the way; because glaucoma, macular degeneration, etc.
  • Fewer distractions: Wallpapers with objects in them or that could be confused for objects, window dressing, and flashy visual effects that look pretty but don't help in navigation, orientation, or feedback create noise that makes it harder to adapt to a new environment. It's like when you're learning a foreign language—it's much harder to understand what's being said in a crowded, noisy cafe than it is in a quiet setting where you can focus on the one signal that matters instead of on trying to filter out the dozens that don't.
  • Click or no click: The whole overloaded clicking — left, right, middle, double, triple, click+drag, blah blah blah — imposes a significant burden on the user to understand and remember all the things that can be done with a single input element. Pair that with deteriorating fine motor skills, deteriorating sight, and lack of clear feedback on whether or not an action was taken and you have a recipe for confusion. Better: there is just click, or no click.
  • Limit controls and contexts: Even when I successfully taught my grandmother something, frequently how I showed her to do something in one application would not translate at all to a different application or to a different context, like manipulating files. This is challenging in the extreme when you have no way of knowing that the context has even changed because you don't have a mental model for the thing you're looking at. The number of controls available in any given app should be stripped to the minimum, so there's less to remember; the number of contexts (app vs app vs system) stripped to the minimum so there's less to remember; and the variances between contexts (different control in different contexts) stripped to a minimum so there's less to remember.
  • Fullscreen everything: That apps need to be opened or closed may even be an unnecessary metaphor. If every app took up the whole screen, was open all the time, and there was an ever-present mechanism to switch between them—then that’s a few more things that don’t have to be remembered. We could reduce the cognitive burden down to: which of these dozen things do I want to do right now/next -> select.

Mobile interfaces are moving in the right direction.

If I put my product hat on and make my grandmother the target user, what she really wants out of a computer comes down to a managed communications experience which empowers her to:

  • Get in touch with the family and friends easily. Contacts as actions, the faces of the people she wants to contact as buttons on a screen that get in touch with them via video, phone or text. We, as relatives, need a way to remotely keep those contacts up to date via push to her device or a centralized service that propagates to her device.
  • Keep up with loved ones when we’re not talking. Facebook without the Facebook, a timeline of updates from loved ones, pictures and videos and text, shared directly to her device, in a single app, blown up full screen. A feed that any of us can push content to or that can consume and present content from things like Facebook.
  • Have important information and reminders without having to look for it. Emergency and medical information as collaborative app, pushed to the device by doctors and loved ones for consumption by all parties involved in care, including her for things like “Hey it’s 10am, take the blue pill!”.
  • Let loved ones help. Shared calendar that loved ones and caregivers can push events onto, like appointments and birthdays. Delegation of control for all apps and services so she can say to her banking app that I am designated to make sure her bills get paid. Or, so I can have an Uber pick her up to take her to the airport and have the notifications go to her device instead of mine. Or, so a caregiver can take over her device and its capabilities (like the camera) and show her things on it remotely or check in on her.
  • Stay in touch with the world. News and entertainment, in one of the languages she understands, including: newspapers, streaming tv and movies, and games. The usual stuff that everyone enjoys. ☺

Why this doesn’t exist is beyond me. There’s a fortune to be made for someone with the single-mindedness to build interfaces for people who are older or didn't grow up with computers or lack our cultural metaphors or have zero exposure to computers outside of phones etc. 

unicorns and the language of otherness

Because even in the face of overwhelming evidence, people will come up with excuses for why they should not, will not, can not—learn or change.

Presented at Velocity NY 2014.

Transcribed:

This man is albino, which means he has no skin pigmentation.

The red you see is the blood below the skin. His name is Brother Ali. He is a Muslim rapper from Minnesota. That makes him different from all of us, in some way. And in all likelihood, we don't think like him.

Let's say that I believe the earth is flat. It's part of my identity. It's a strong belief. I have convictions around it, decisions that I've made around it. I identify as an earth-is-flatter. My identity is invested in the earth being flat. An attack on the idea is an attack on me. If the idea is wrong then I am wrong. Personally. Not just about that one thing, but about my person.

Let’s say you believe something different. You believe that the earth is round. You’re an earth-is-rounder. That makes you apart from me. Not because you have a different idea, but because you have a different identity. I cannot identify with you. If you’re successful in your belief, then maybe my way isn’t the only way. If you’re more successful than I am, then maybe my way isn’t the best way. If you are successful and then I am less successful, then maybe I’m wrong. But I’m not just wrong about the idea, I am wrong as a person.

But, I don't have to see that. I don't have to see anything. I have labeled you as something other than me. I cannot identify with you, thus I do not have to see your success. I can ignore it. I can bury my head in the sand. My ingrained belief creates a bias in me about you. And I rationalize that bias by calling you something else, by putting a label on you.

There is a saying by our friend, Brother Ali, that we have a "legacy so ingrained in the way that we think that we no longer need chains to be slaves." He's talking about racial biases, but any ingrained way of thinking creates a bias. Biases pile up and compound into a kind of psychological debt. It's like technical debt: you have to refactor it in order to move on. It will eventually slow you down, bog you down, prevent you from seeing things. Prevent you from noticing things. Prevent you from seeing a thing you might want to learn.

And what's true of you as an individual is true of us as groups. Teams can have shared biases created by their entrenched ideas and ways of doing things that create a shared psychological debt that prevents them—not just from learning—but from seeing that they should be learning. And while they are not learning, while we are not learning, there are other people who have learned and through their learning have changed the world around us.

I was an analyst at Gartner for a couple of years and I heard this all the time: "These companies are not like us. They do things differently. They have different users. They have different environments. They can do whatever they want. They don't have the same security concerns we do." Any litany of excuses that say "we don't have to learn from them because they are unicorns" and unicorns are different and different people are others. So, eh. It's ok.

Turns out that unicorns are just people. And as people, they’re just like us. They’ve just made a different set of decisions in a different context in a different environment. We can make different decisions. We can create a new context. We can pay down our psychological debts. We can even declare bankruptcy like people do with economic debt and start over, throwing out ideas and practices. 

Cause the thing is, if we really want to move forward and expand and learn and grow and change for a changing environment—we have to get past the mess of our past decisions. We have to separate our identities, who we are and who we will be, from who we were, what we have done and what we have been. So that when we encounter something different or see change, or see change in others, that is not a threat to our identity and it doesn’t hurt so much to accept change and to do change. 

I don’t want to be a unicorn. I don’t want to be someone who is apart from you, other than you, does not have to be listened to, can be dismissed. And I don’t want to think of anyone else as something special, apart, different, cannot be learned from, to be dismissed, not part of the same humanity that I’m in. 

Cause, in the beginning and in the end, we are all still people. Thus, mainly in essence the same. The fact that we have some simultaneous differences, that have evolved, that don’t cause us to die out there in the world—suggests that the single strongest signal that you have something to learn is the fact that a difference exists. 

..the single strongest signal that you have something to learn is the fact that a difference exists. 

devops appops infraops all the ops

Donnie Berkholz wrote a great post about what’s actually happening as Dev vs Ops becomes DevOps [I know I know, keep your groaning to a minimum].

This is a conversation I had frequently at Gartner. People would ask what kind of person they need to hire to do DevOps. I would respond with “Well you already have developers. You have some Unix admins hanging around? Yeah, get them.” 

I was once an Irix and Solaris admin. At that time, any good admin was dedicated to automating themselves out of work so they could spend most of the day on IRC, playing games, or reading newsgroups. Automating infrastructure and platforms that get more or less treated like a service by devs was once normal. And now it will be again.

Things don't go away; the lines just move. Devs own their code through the lifecycle of an application (and its constituent services) from dev/test all the way through production and day to day operations. Ops (or IT or platform or whatever) own infrastructure through the lifecycle of an application (and its constituent services) from dev/test all the way through production and day to day operations.

So they have to work together every step of the way. Iterate together. Where exactly the line resides for any given org changes. For example, our "infrastructure" may only go up to the OS image but not all the way up to the runtime. But someone else's could go up to the runtime or not even as far up as the OS image. Regardless of where the line is, we end up having something that’s more like AppOps (AppDevOps!) and InfraOps (InfraDevOps!). InfraOps provides the infra or platform service that the app is built on. AppOps builds and runs the app on that service. They could be the same person, the same team, different people, different teams, generalists or specialists, in-house or outsourced to a cloud provider—it doesn’t really matter.  


I don't really care about the terms. Neither should you. As many people point out, we end up back at devs and admins. Took bloody long enough. :)

--

P.S. 

@aneel @dberkholz What about opsing all the devs?

— Dan Turkenkopf (@dturkenk) May 27, 2014

Yes. That too. :)

fantasy founder - identify me

Continuing an occasional series about products and companies that I’d like to build or see built someday.

This is a recurring idea that’s been bouncing around my brain (and written about) since the days of using ICQ plugins to talk to every messaging system there was (irc, usenet, email, aim), more or less consolidating identity into that one application. 

This would only be useful in the case where you want or need to be identified. 

Consider this:

  • Most of us have multiple online identities that don't serve a purpose, other than that there's no infrastructure to do otherwise in a way that's satisfactory to everyone who wants to be identified or everyone who wants to identify us
  • Centralized password control exists through things like OnePass, but then we’re dependent on OnePass to stay up, serve our interests etc
  • Services like Facebook let us log in to other things using our FB identity, which is convenient but gives FB ever more data about what we do, where we do it, etc.. all for purposes that aren’t necessarily in our interests
  • What if we could have consolidated virtual identities that all services, including things like Google and FB, used but that were under our control?
  • What if it was completely decentralized and could be run on our phones, with the actual profile itself encrypted and replicated across multiple cloud services (DropBox, Google Drive, S3 bucket, etc) or our own servers (VPS, colo, net-attached-Drobo, etc) for storage?
  • Plus a password or passphrase to decrypt the profile for modifications and an (or any) editor to modify it
  • We could have a standard (extensible) profile format that could be mapped to/from any given service
  • And a way, like with Facebook application security settings, to allow/disallow access to specific parts of the profile on a per-service basis—so maybe Facebook can see your name, birthday, company, home city but LinkedIn can only see your name and company
  • We could have elements of the profile tagged so one could say something like: name is ok for handing out to social networks but home address only goes to merchants that have to verify credit cards--thus having a mapping or filter by service type
  • Services could request certain profile elements and we could either auto-approve based on the above tagging or individually allow/disallow usage or allow creation of them if they don't exist or maybe only send back acceptance of a subset of what was asked for

This would require some kind of protocol for asking for and finding an identity that would route the request to the right place. Something like DNS. We could call it ID-NS!
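As a sketch of that per-field tagging and filtering, here's what a profile record and a release check might look like. Everything below is hypothetical illustration--the field names, audience labels, and release() helper are made up, assuming a simple audience tag per field rather than whatever a real standard would specify.

```typescript
// Hypothetical tagged profile: each field carries a list of audiences
// (service types) it may be released to.
type Audience = "social" | "merchant" | "government" | "any";

interface ProfileField {
  value: string;
  audiences: Audience[]; // which kinds of services may receive this field
}

type Profile = Record<string, ProfileField>;

const profile: Profile = {
  name:     { value: "Jane Q. Public", audiences: ["any"] },
  birthday: { value: "1970-01-01",     audiences: ["social"] },
  company:  { value: "Example Corp",   audiences: ["social", "merchant"] },
  address:  { value: "123 Main St",    audiences: ["merchant"] },
};

// What a requesting service actually receives: only the fields tagged for its type.
function release(p: Profile, requester: Audience): Record<string, string> {
  const released: Record<string, string> = {};
  for (const [field, { value, audiences }] of Object.entries(p)) {
    if (audiences.includes("any") || audiences.includes(requester)) {
      released[field] = value;
    }
  }
  return released;
}

// A social network would see name, birthday, company; a merchant would see
// name, company, address.
console.log(release(profile, "social"));
console.log(release(profile, "merchant"));
```

The ID-NS piece would sit in front of this: resolve a name to wherever the encrypted profile lives, decrypt it under our control, then apply the same release rules before anything goes back to the requesting service.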

Amazon, Google, Facebook, LinkedIn and Twitter's auth services could work like this. But the problem is that all those services value our identities and the ability to tie actions to those. I want a service whose single and only utility is to hold and distribute identity per my authorization.

If such a utility existed and gained mass popularity, I’d bet we as end users wouldn’t have to pay for it. Vendors would pay 1) to access it and 2) to be allowed to tie your actions to it.

I'd call it IdentifyMe (points if you catch the reference!).

It’s interesting that the SSA hasn’t already built this, given that they more or less serve as the identity clearinghouse for the government.

//Side note: There are plenty of providers that do federated identity for federated authentication (single sign on), though no one talks about it that way. I really don't think SSO matters. It's a different problem altogether from having a single virtual identity. Authentication != Identification. How you authenticate someone's virtual identity to arrange for SSO across multiple services is a related, but distinct, problem with its own set of hurdles.

fantasy vc - grand rounds

 

Continuing a series on startups I'd put a bet on if I could.

A few months ago, my friend James joined Grand Rounds. Most of Silicon Valley spouts a fountain of bullshit claiming change-the-world status. This team actually does it.

Consider this: 

  • It takes a very long time for studied, tested, proven and well known advances in medical and surgical practice to actually become conventional and prevalent—as in, a decade or more
  • The industry, as a whole, is set against advancement because advancement usually means more precise diagnoses and less money due to fewer surgical procedures, even though quality of life improves
  • There is no way to connect those who need information about new discoveries, new tests, new procedures with those who lead the field and discovered the advances—people go to the doctors and practitioners they have access to
  • Those PhD/MDs doing the research, creating the studies, going through FDA approvals and fighting to make a better life for all of us want to accelerate the process—they’re desperate to get the life-saving discoveries, tests and procedures they’ve developed adopted by the world
  • What is a second opinion from one of those experts worth?
  • What is it worth to us, as individuals?
  • What is it worth to the companies that fund their own insurance programs (every big one)?
  • What happens when $1M worth of unnecessary procedures and hospital visits are replaced by a $10,000 outpatient visit? How much is that worth across an insured employee base? What if that happened a dozen times a year? A hundred times a year?

10% of the cases drive 66% of the costs for employers, and employers don’t have the right tools for resolving them. Obesity or smoking cessation programs are great, but the reality is that a small number of truly complex and expensive cases emerge in the workforce every year and account for a majority of a company’s overall healthcare spend.

As I’ve said, there are two ways to disruption: either you disrupt by doing something new or you disrupt by changing the supply chain, removing middlemen, disintermediating or consolidating intermediaries. Grand Rounds short-circuits the supply chain of medical information. And by so doing they’ve already saved lives. How many other startups can you say that about?

--

None of this is to say that they're guaranteed success. Or won't get crushed by entrenched interests. Or even scooped up before they become too successful. Just that I would've placed that bet.

design for service - affordances

In thinking about the importance of how space was set up, both for staff and customers, in design for service - spaces, I wrote that one of the components of service is the affordances created in the spaces in which people work, provide service and are served.

I'm not in UX, so I'm not entirely certain how "affordance" is used technically, but I'll just assume it's a reasonable derivation of the original term from cog psych. If you're at all curious, I'd recommend going to the source and reading J. J. Gibson's The Ecological Approach to Visual Perception. It's a good book.

//Side note: if you’re curious at all about anything, get as close to first sources [like reading declassified excerpts of Boyd’s work instead of the hundreds of faulty third- and fourth-hand derivative applications of OODA] as possible—either reading inwards from derivations or outwards from original sources. It’s better for our brains and better for the world when we know what the **** we’re talking about before words come out of our faces.

As a way to explain affordances, I'm going to use the near-perfect design of my 11-year-old, no-longer-made Spire messenger bag.

[Photo: bag, front]

Start with the front of the bag. There’s a zippered outside compartment where we can store whatever. It’s good for things we might want without opening the bag, that’re ephemeral in possession, that aren’t valuable—like a newspaper.   

[Photo: handle]

Now look up to the handle. There’s a handle. Because sometimes we’ll just pick the damn thing up without slinging it over a shoulder.

[Photo: strap hook]

To the left, where the strap attaches, a hook. If we happen to favor one shoulder, we’ll flip the strap easily. [By the way, that original hardware has stood up to severe abuse over a decade through many travels across a dozen countries.]

[Photos: strap pad and strap]

On the strap, a pad that floats. The strap, the whole bag, can move while the shoulder pad stays put on a shoulder. This means that the bag can be pulled to the front of the body or pushed to the back without repositioning the strap. Good for when we’re rushing through airports and crowded town squares. The strap itself is thick, sturdy, wide enough to not dig into skin when the bag is heavy [the way seatbelt straps do]. With an unfancy [no seatbelt buckle, ratchet, or plastic latch here] adjustment mechanism that simply works and is simply easy to use standing or on the go.

[Photo: one buckle]

Back down to the bag. Two buckles. We'll loosen them when we're holding more and tighten them to keep things tucked in when there's less. Unbuckle one and notice there's no velcro holding the flap down. For an actual bike messenger, I can see why velcro would be useful to prevent the flap from, well, flapping and being a literal drag. For the rest of us, having velcro here is baffling—two fastening systems to keep the same part down, with the velcro preventing us from reaching into the bag when it's buckled to get something out while on the move. But that's very much the convention with today's popular bags.

[Photo: main pocket pull]

Under the flap, a zippered mesh pocket to see into. Quick access, good for keeping smaller objects that we might need to see. And above that, a pull to change the size of the opening of the main compartment. Hold more, hold less, always secure.

[Photos: inside pockets, right and left]

Inside the main compartment, four front pockets. Bigger ones closer to the front outside. Smaller ones below with mesh. Obvious places for writing implements, notepad, phone, water bottle etc. [I bought this bag many years before owning a mobile, so don’t be surprised that there aren’t mobile or tablet specific pockets.]

[Photo: laptop compartment toggle]

Another compartment inside the main compartment. For laptops, or whatever. The bag came with a sleeve that was kept in via that velcro strip. But note that it wasn’t built in. So we could take it out. Replace it with a different one when tech changes. Or not use it and just have a larger main compartment. An option that provides function but doesn’t saddle us with something that has an inherently short useful life. Which might not look intentional, but notice the pull that can change the size of the opening and secure it. That wouldn’t be necessary if it was designed for a fixed single use of a particular laptop sleeve size and configuration.

Each of these design elements gives us function rather than taking it away. They afford us ways of using the object which don't collide with each other. The net experience is one of leverage, value from the very act of using the thing, and never having the design prevent us from getting value.

--

These ideas can’t be new. The thinking is rough, but it serves.

Aneel's razor for design-- Do not create affordances beyond necessity.

  • First corollary: Do not create affordances that countervail natural patterns of use.
  • Second corollary: Do not create affordances that countervail conventional patterns of use.
  • Third corollary: Any affordance that does countervail natural or conventional patterns of use will create a cognitive barrier that must be overcome through
    • Education
    • Some bridging, stepwise path to transitioning to the new pattern from the old
    • Psychological support during the process of achieving competence at, and habituation of, the new pattern

what aws is not

In 2004, SQS and AWIS beta-ed.

In 2005, MT beta-ed.

In 2006, S3 and EC2 beta-ed.

From there, the pace of releases has skyrocketed (something we should put value on). AWS started by turning basic computation services into utilities. They've since done the same to a wide range of technology capabilities--dozens of services, hundreds of options, a combinatorial explosion of capabilities. So much so that we could reproduce all the functions and services provided by any data center anywhere.

That's where AWS is. AWS is not a commodity, though specific AWS services may become commodities. AWS is not basic computation services. AWS is not just for startups or web2.0 or mobile or small shops or transient projects or marketing or unregulated.. etc.

AWS is the successful utility-isation of ever more, and ever more valuable, technology services.

They are building the AWS of next year or further out through utility-ifying whatever it is that their ecosystem (customers included) is telling them (through behavior) is worth paying for. 

To really compete, you'd have to match the ecosystem play and exert margin pressure. The former you could do through co-option--which would require taking over the service supply chain--or through drawing your own ecosystem to some core differentiation (e.g. live migration on GCE or seamless public/private experience on Azure). The latter can only be afforded by a few organizations (Google and Microsoft).

Hat tip: Most of the thought above is a direct result of, or informed by, Simon Wardley.