Creating Career Options in Tech

The Geek Whisperers is a great podcast focused on the non-tech side of tech careers — mentorship, career building, leadership, etc. They had me on a couple of GlueCons ago to discuss how to think about career options and advancement in tech. You can listen to the whole thing here.

Get exposure

In general, you can’t really know what all your career options are. But what you can do is set yourself up in a situation or create an environment where options present themselves. You can sort of maximize the serendipity and the optionality around you. For me that was moving from a role that was buried in an organization to a role that exposed me to a larger diversity of people and projects.

In an engineering position I had to deal with architects, so I became an architect. As an architect I had to deal with product managers, so I became a product manager. As a product manager, I had to deal with marketing roles, so I moved to one of those.

Take adjacent roles

It’s hard to totally jump two or three degrees from what you’re doing, but what I think you can do relatively easily is to move to an adjacent discipline. Then you get exposed to a bunch of new things, from which you can pick another adjacent discipline.

I have a personal mantra associated with this: if you think someone (or everyone) in a particular role is an idiot, you probably don’t understand the role, so you should go do it to figure it out.

Of course, my journey has been totally accidental. In each case I was either unhappy with what I was doing or unhappy with what people in the adjacent job were doing and wanted to fix it. Or sometimes both. So I would just start doing the job until it was self-evident that people would have to fire me to stop me from doing it.

What I’ve found consistently in tech is that you can basically do any job you want, if you just go ahead and do it. The person you’re working for right now might not let you do it, but someone else is going to let you do it.

You can basically do any job you want, if you just go ahead and do it.

That’s a privileged statement. I don’t know if that’s actually true for non-white-or-Asian males. And if it is, the barriers to entry or transition are likely much higher.

In any situation, there’s someone on the other side of the table from you. That’s adjacent. Whoever that is, whatever that role is, you should be able to do some part of it. If you’re not able to start doing it, it’s probably too many steps away.

Tweet without intent

Twitter is responsible for my entire career. Unbeknownst to me, I built a reputation, and people started approaching me to take on new roles because of my interactions there.

What is it about Twitter that builds credibility? My theory is that the validation provided by someone’s public persona expressing what it is that they do on a regular basis is kinda like establishing bona fides in a meeting. But constantly, every day.

Everyone has an individual style. My style is to try not to have an intent in any forum where I’m representing myself (and not a business). I try to use Twitter the way I would use a party, or any other social situation. Whatever I would normally do. I tweet when I have a coffee because in a social situation it would be normal for me to walk up with a cup of coffee, so doing it on Twitter is normal.

I know people who do use Twitter with intent: intent to get a job, to raise their profile, etc. It works for them. So I’m not saying you shouldn’t do that. It just doesn’t work for me.

 

Theory in Practice — OODA, Mapping, Antifragility

Based on a talk presented at Velocity 2016 in Santa Clara, this post tries to show the practical application of concepts like OODA, Wardley Maps, and Antifragility with examples from my day-to-day work at a startup.

Theory

OODA — Observe, Orient, Decide, Act

Observe the situation, i.e. acquire data. Orient to the data, the universe of discourse, the operating environment, what is and isn’t possible, and other actors and their actions. Decide on a course of action. Act on it.
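
To make the cycle concrete, here’s a minimal sketch of the loop as code. Every function and piece of data in it is a hypothetical stand-in; the point is the shape of the cycle, not any real API.

```python
# A minimal sketch of OODA as a control loop. Every function here is a
# hypothetical stand-in; the point is the shape of the cycle, not any real API.

def observe(world):
    """Observe: acquire data from the environment."""
    return {"reading": world["state"]}

def orient(model, observations):
    """Orient: fit the new data into our model of the world."""
    model["belief"] = observations["reading"]
    return model

def decide(model):
    """Decide: choose a course of action from the current model."""
    return "hold" if model["belief"] == "stable" else "adjust"

def act(world, action):
    """Act: change the environment; the next Observe sees the result."""
    if action == "adjust":
        world["state"] = "stable"
    return world

world, model = {"state": "degraded"}, {"belief": None}
for _ in range(3):  # each pass is one trip around the loop
    model = orient(model, observe(world))
    world = act(world, decide(model))

print(model, world)  # {'belief': 'stable'} {'state': 'stable'}
```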

Typically, you hear people saying that we’re supposed to go through the loop [O -> O -> D -> A -> O -> O…] faster than others. Let’s break that down.

  • If we traverse the loop before an adversary acts, then whatever they are acting to achieve may not matter, because we have changed the environment in some way that nullifies or dulls the effectiveness of their action. They are acting on an outdated model of the world.
  • If we traverse the loop before they decide, we may short-circuit their process and cause them to jump back to the start, because new data has come in suggesting their model is wrong.
  • If we traverse the loop at this faster tempo continuously, we frustrate their attempt to orient — causing disorientation — changing the environment faster than they can apprehend it, much less act on it.
  • We move further ahead in time. Or to be more exact, they fall further behind: unable to match observations to models, change orientation, have confidence in decisions, or act meaningfully.

This is what Boyd called operating inside someone’s time scale.

Our main means of connecting the components of the loop is via models (and projections of cause/effect based on those models). Observations are tied into and given context via models.

Another way to think of models is as maps.

Mapping

This is a Wardley, or Value Chain, Map. It’s the most useful model I’ve encountered for building products or businesses. Watch Simon’s OSCON keynotes or read his blog to really dig into the concept.

It starts with a user need at the top. What problem are we solving? How are we going to make someone’s life better?

Then it goes deep, laying out the supply (dependency) chain of components needed to service that need. The further down, the less visible and exposed the component is to our end user. For example, if we’re building a SaaS product, users are never (or should never be) exposed to the systems running the code. This is the Y axis.

The X axis is where it gets interesting. It provides stages of development that components map into. Nearly everything naturally moves from left to right over time, as invented or discovered things become standardized, well understood, and built by more producers competing for market share, until some eventually become outright commodities or are provided as utilities. It’s a kind of natural evolution.

  • Genesis [stage 1]: something that’s being discovered/built from scratch
  • Custom Built [stage 2]: built out of existing technologies but highly customized for a specific use case and not generalized to broad use
  • Product [stage 3]: COTS software, something bought from someone else vs self-built
  • Commodity [stage 4]: something that’s effectively fungible, for which there are multiple equivalent providers, that may be provided as a utility
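
As a rough illustration, here’s one way to sketch those map coordinates in code. The component names, placements, and drift function are all invented:

```python
# A sketch of Wardley-map coordinates: visibility to the user (y axis) and
# evolution stage (x axis). Components and placements are illustrative only.

from dataclasses import dataclass

STAGES = ["genesis", "custom built", "product", "commodity"]  # x axis, left to right

@dataclass
class Component:
    name: str
    visibility: float  # y axis: 1.0 = directly visible to the user, 0.0 = buried
    stage: str         # x axis: one of STAGES

value_chain = [
    Component("web visualization", 0.9, "custom built"),
    Component("analytics engine", 0.5, "custom built"),
    Component("message bus", 0.3, "product"),
    Component("compute + OS", 0.1, "commodity"),
]

def evolve(c: Component) -> Component:
    """The natural drift rightward along the x axis over time."""
    i = STAGES.index(c.stage)
    return Component(c.name, c.visibility, STAGES[min(i + 1, len(STAGES) - 1)])

print([evolve(c).stage for c in value_chain])  # ['product', 'product', 'commodity', 'commodity']
```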

Individual components, regardless of their stage, can be expanded into finer grained production pipelines, marked as something that’s either provided or consumed, and aligned with methodologies like in-house-agile-developed vs outsourced-to-cloud-provider.

Finally, each component can be treated as a piece on the field, moved around as a function of product strategy in an attempt to change the competitive landscape.

For example, open sourcing something to try to commoditize it or create a de facto standard. Or providing something as a utility / platform / API in order to build a moat (that you can also consume) out of the ecosystem that you engender around it.

[Anti]Fragility

But everything is constantly changing. Which means our map can become stale fast. Which makes us fragile — exposed and unaware — to ever more risks. Black swans.

 

A black swan is only a black swan if you can’t predict it (or assign it a probability). They’re inevitable. As our maps fall out of sync with the real world, non-black swans become black swans. It’s possible to be fragile to one kind of black swan but not another. There are activities or patterns that will make us fragile with respect to something. And those that will make us antifragile.

 

There’s no such thing as absolute antifragility. It’s contextual. A severe enough stressor over a short enough time period will destroy anything.

Maps can be made robust (to some scale) through adaptive mechanisms, learning and correcting to match for change in the world.

But beyond some scale, every map is fragile. The world can change faster than, or so severely that, any attempt to update the map fails. Events can get inside a map’s timescale.

Systems can be antifragile (to some scale) through constant stress, breakage, refactoring, rebuilding, adaptation and evolution. This is basically how Netflix’s chaos army + the system-evolution mechanism that is their army of brains iterating on the construction and operation of their systems works.
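
Here’s a toy version of that mechanism (not Netflix’s actual tooling, just the shape of it): inject faults constantly, and let every failure that surfaces under controlled stress become a fix.

```python
# Antifragility through constant stress, as a toy: randomly kill components;
# each failure surfaced under controlled stress becomes a fix, so the set of
# latent fragilities shrinks over time. All names are made up.

import random

weak_spots = {"cache", "retry logic", "failover"}  # latent, unknown-to-us fragilities
components = ["cache", "queue", "retry logic", "api", "failover", "db"]

def inject_fault(component):
    """Simulate killing a component; True means something actually broke."""
    return component in weak_spots

def harden(component):
    """A failure observed under controlled stress turns into a repair."""
    weak_spots.discard(component)

for day in range(30):  # the stress is constant, not a one-off audit
    target = random.choice(components)
    if inject_fault(target):
        harden(target)

print("remaining latent fragilities:", weak_spots)  # usually empty by now
```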


For example, here’s our model of the APIs or services we rely on — smooth and reliable, with clearly defined boundaries and expected behavior. This is also the model that those things have of the APIs and services they rely on. All the way down.

 

But this is how most things actually look. Eventually in the course of operation, the gaps line up in such a way that a minor fault event becomes magnified into systemic failure.

Systems, software, teams, societies — everything eventually crumbles under the weight of its own technical debt.

Which is why we should be refactoring, paying down technical debt, or what I just call “doing maintenance”, all the time at every layer.

Practice

Caveats: My views don’t represent those of my employer or anyone else and a great deal of detail is left out.

Example: mapping at work

I’ll build a map for a new feature SignalFx just released in beta.

Starting with the user need, which I describe as “discovering known and unknown unknowns.”

A lot is left out, but generally speaking: on the top left we have the need; immediately connected to that is how that need is served; and proceeding out from there is a generalized view of the supply chain of components needed to make it so.

Some things worth noting:

  • We rely on utility or commodity technology and services for all our infrastructure hardware and software, like operating systems, and also middleware — using things like AWS, Linux, Kafka, Cassandra, Elasticsearch, etc. This is standard behavior for a software as a service company.
  • We rely on relatively standard means of getting data into the system, in our case collectd, StatsD, Dropwizard metrics, etc., and a host of plugins and libraries that conform to open APIs and use well known open, or public, protocols.
  • We can see that there’s a lynchpin without which the map would fall apart: the streaming real-time analytics engine.
  • In order to build what was needed to serve that user need we started with, we needed to build many other things: a specialized quantization service, lossless + real-time message bus, specialized timeseries database, a high-performance metadata store, real-time streaming analytics engine, and an interactive real-time web-based visualization for streaming data, etc.
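
Here’s a rough rendering of that chain as a dependency graph in code. The edges are simplified for illustration and are not our actual architecture.

```python
# The map as a dependency graph: the user need at the top, the supply chain
# fanning out beneath it. Edges are simplified and illustrative.

supply_chain = {
    "discover known and unknown unknowns": ["streaming real-time analytics engine"],
    "streaming real-time analytics engine": ["message bus", "timeseries database", "metadata store"],
    "message bus": ["Kafka"],
    "timeseries database": ["Cassandra"],
    "metadata store": ["Elasticsearch"],
    "Kafka": [], "Cassandra": [], "Elasticsearch": [],
}

def walk(component, depth=0):
    """The deeper a component sits, the less visible it is to the end user."""
    print("  " * depth + component)
    for dependency in supply_chain.get(component, []):
        walk(dependency, depth + 1)

walk("discover known and unknown unknowns")
```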

Many of the components we built are, if they were generalized, standalone products that others build entire companies on. In this specific case those are all the open source technologies — Kafka, Cassandra, Elasticsearch, etc — that we built our highly customized components out of.

Given all of that, I have one important positive question each day: Given the amount of time I’m going to spend working today, what one thing can I do to move the needle in serving this user need through what we do?

And one negative one, seeking invalidation: Is there any evidence that our map, our hypothesis, our approach, have been invalidated?

  • Is our projected user need real? Will people pay for it? Is it the problem they actually want solved? Do people really not want leverage? Do they not want to be given more power and time through tools? Do they want thinking to be replaced, instead of force-multiplied?
  • Is our lynchpin really the point of leverage and differentiation we believe it to be? Has it become a commodity and we’re just fooling ourselves into thinking we’ve built something novel?
  • Has the territory changed in any way, through macro trends or the actions of players in the ecosystem, such that we need to rework our model?

Example: knowing what’s possible

Imagine we want to build a personal relationship management [PRM] system to meet a need for people to manage their complicated and ever-growing network of contacts.

The top left is where we’re starting from. The y-axis is basically features or sub-capabilities that add up to something. The x-axis is what they add up to: products or capabilities that are in and of themselves valuable. The bar for something belonging in the leading row is being a viable sub-product. Everything in the column below it is the set of features needed for it. Where the line is for being able to declare that we’ve built a minimum usable product may be different per column, as may the line for what constitutes an MVP.

We have limited time, people, and money. So we can only build so many things at once. Let’s say we can only build one column at a time. We have to get to usability and viability in each column to be able to expand users and business sufficiently to build the next column.

But every single thing we build limits our options for the next thing we build. We can go down and we can go to the right.

We can scrap everything further below and further to the right of the point we’re at today and figure out something else to build from where we are. This is effectively a pivot.

But what we can’t do is go from 3 steps down in the 2nd column [a graph of your contacts that’s auto generated based on your communications with them over Gmail, Twitter, LinkedIn, Facebook, and Outlook email that shows degrees of separation] to, say, a restaurant reservations and point of sales SaaS product. There’s no getting from here to there. But you can get from here to a product referral network.
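
Here’s a sketch of that constraint in code. The columns and features are invented for the PRM example; what matters is the movement rule: down within a started column, or right to the next one.

```python
# The build matrix as code: columns are sub-products, rows the features each
# needs. Legal moves go down (deepen a column) or right (start the next one);
# there is no move to an unrelated matrix. Names invented for the PRM example.

matrix = {
    "contact graph":    ["import contacts", "dedupe", "auto-generate graph"],
    "reminders":        ["activity feed", "nudge engine"],
    "referral network": ["intro requests", "reputation"],
}
columns = list(matrix)
built = {"contact graph": 3, "reminders": 0, "referral network": 0}  # rows built so far

def next_moves(built):
    """Everything we could legally build next from where we are."""
    moves = []
    for i, column in enumerate(columns):
        done = built[column]
        column_unlocked = i == 0 or built[columns[i - 1]] > 0  # can't skip columns
        if done < len(matrix[column]) and column_unlocked:
            moves.append((column, matrix[column][done]))
    return moves

print(next_moves(built))  # [('reminders', 'activity feed')] -- no jump to restaurant POS
```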

Seeking invalidation:

  • Is a personal relationship management service still the best way to serve the user need? Is there a better way?
  • Can we build to that better way from where we are?
  • Have we built a minimum usable product? Is it viable? Can we generate enough business (or funding) from what we’ve built to build the next thing?

Example: hiring for antifragility

The core principle of antifragility, as I see it, is to arrange things such that we get stronger through stress. More or less how muscle growth works.

How do you build that into an organization? How do you decrease brittleness? The only way I’ve ever found is through diversity. Inclusion of, and forced exposure to, different points of view is absolutely necessary if we don’t want to get stuck — stuck in a way of thinking, stuck in a way of dealing with issues, stuck with a pattern of response, stuck in a point of view that makes us blind to threats and opportunities.

Think of it this way. We have to stir the pot in order to not get trapped in a local maximum. Not once, but constantly. Even a hint of homogeneity — whether it’s of people or ideas or practices or anything — is a clear signal that we are fragilizing and becoming brittle.

For my team, here’s what that looks like: no one in my org has a background in tech except for me. My background is both deep and wide, but it’s 90% tech. There’s just a large swath of things I’m blind to. What I’ve got is people who’ve studied art, biology, English lit, who don’t look like me or think like me. Things that I’m fragile to, they’re not. As a group, we’re way stronger than if I were hiring copies of myself.

Seeking invalidation:

  • Is this the right team? Can they do what we need to do right now? Can they do what we need to do in a quarter, in a year, in 5 years?
  • Are they the wrong team, or am I failing at:
      • Helping them get from where they are to where I need them to be?
      • Getting the most out of their perspectives?
      • Creating a safe environment for them to bring their best to the table?

Questions

John Allspaw asked if it’s possible for a person to be antifragile. I don’t think so. I don’t think any given person or component of a system can be antifragile. I think groups and systems can be made antifragile. Complexity can be a symptom of the build-up of antifragility in a system. Beyond some envelope, it’s also a harbinger of collapse.

Peter van Hardenberg asked where I set the bar. Assuming baseline functional competence (can do the job at hand), the next thing I look for is differentness. What do you bring to the table that’s unique from what we already have?


Wrapping up, here are the daily operating principles arising from this study:

Always be refactoring

Diversity has intrinsic value

Territory > Map

Seek invalidation


The above builds on ideas in these previous talks and posts:

The original abstract was way too ambitious for a 40 min time slot. The presentation suffered quite a bit from me erratically moving through the material, trying to pack in too many ideas.

product rails

I’ve written about not forgetting the future you dreamed of to settle for the present you’ve made.

On the way to building something great, we inevitably build other (hopefully useful) things. We’re swayed by what customers claim to want, what engineers say can or cannot be done, what we can figure out how to market and sell, what investors think will make money, what the press gets excited by, etc.

It’s challenging to keep in mind where you intended to go when you’re working hard to just take the next step. Here's a way to think about it.

The top left is where we’re starting. Z is the vision. The y-axis is basically features or sub-capabilities that add up to something. The x-axis is what they add up to: products or capabilities that are in and of themselves valuable. The bar for something belonging in the leading row is being a minimum usable product subset of Z. Everything in the column below an MUP is what's needed for it. Where the line is for being able to declare that we've built an MUP (the depth needed in a column) may be different per column and needs to be called out. Where the line is for something that has go-to-market viability per column may be different still. All of which is different from the depth we want to go to. Z is realized when the whole matrix has been built.

Everything’s a hypothesis:

  • Is each of the “products” sufficiently valuable that someone would pay for them on their own?
  • Is the minimum usable depth we project actually sufficient for someone to experience value?
  • Is the minimum usable depth sufficient to get the product to the point of go-to-market viability — where we can market, sell, and close business against it? If not, how much further?
  • Is going any deeper than that worthwhile for the customer or the business?
  • Are these the right capabilities in the best order?

To some degree, the order doesn't matter. The ideal case is to get to something in each column that provides enough tangible value and positive experience that someone would pay for it before moving on. But as long as we don’t leave the matrix, we’re still progressing towards the vision.

At every step, we need metrics for success and failure. In my view, it’s more important to know what constitutes disconfirmatory evidence than confirmatory—so we know when it’s time to cut our losses and move on.

There's also an existential question: is Z the right thing? Are we building it for its own sake, or to solve some specific problem in the world? Assuming we’re driven to fix something, to make someone’s life better — what happens if there’s a better way to do it than this one? How do we even know? This is impossible without actively seeking disconfirmation.

This is where going to market matters the most. It’s the sensing mechanism to discover how the map compares to reality.

A final note: what we build today limits what we can build tomorrow. We can go deeper and broader. And we can stop where we are, discard everything that might follow, and build a new vision to a new place. But that new place has to be reachable from where we are right now. Every thing we build closes some doors and opens others. It’s near impossible to do something completely discontinuous.

minimum usable product

A lesson from 9ish years in and adjacent to product work.

Minimum viability is very much a product-outwards perspective: what’s the least amount of work we can do to find out whether this line of thinking is a business idea that’s worth investing in. It has nothing to do with viability for users.

It’s a well worn notion that the right way to build a product is to iterate through stages of development, where at each stage you deliver something that, on its own, provides real incremental value by accomplishing the user's goal appreciably faster/cheaper/better than was possible before. A functional approach.

What makes a product viable for use is something that’s more usable at each stage of creation; that creates experiences of greater efficacy at every turn; that provides incremental wins that add up to something much greater—a sense of joy. [Something I’ve seen enough times to say it with a straight face.] This is distinctly not a product-out orientation—but instead a user-in orientation.

We have to make up for the pain we put users through--our stumbling attempts at building something useful, the suffering of (re)learning how to do something, breaking their workflows--with some pleasure on the other side. 

Leading to questions that should be answered (see the HEART framework for thoroughness):

  • What is the qualitative, subjective improvement from the perspective of the user? Does it feel better? Does it yield results of higher quality?
  • What is the quantitative, objective improvement from the perspective of the user? Does it get the task done faster? Does it yield more results?
  • What is the quantitative, objective improvement from the perspective of the product? Is it faster? Does it do more of what users want?

Make Minimum Usable Products.

the dangers of models

All models are wrong; some are useful.

Disconfirmatory evidence is more important than confirmatory evidence.

Actively seek model invalidation.

Every thing was built in some context, or scale. Reading primary sources, or learning how/why a thing was made, is essential to understanding the conditions that held and knowing bounding scales beyond which something may become unsafe.

This is something I think about a lot. It's true in software, distributed systems, and organizations. Which is the world I breathe in every day at SignalFx.

It began to knit together around OODA:

  • ooda x cloud-- positing how OODA relates to our operating models
  • change the game-- the difference between O--A and -OD- and what we can achieve
  • pacing-- the problem with tunneling on "fast" as a uniform good
  • deliver better-- the real benefit of being faster at the right things
  • ooda redux-- bringing it all together

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness--Taleb's Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence. 

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we'd want/have to change. That hurts.

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary--even about how we ourselves are--is the source of avoidable disasters like the derivatives-driven financial crisis. Black Swans.

  • Black swans are precisely those events that lie outside our models
  • Data that proves the model wrong is more important than data that proves it right 
  • Black swans are inevitable, because models are, at best, approximations

Antifragility is possible, to some scale. But I don’t believe models can be made antifragile. Systems, however, can.

  • Models that do not change when the thing modeled (turtles all the way down) changes become less and less accurate approximations
  • Models can be made robust [to some scale] through adaptive mechanisms [or, learning] 
  • Systems can be antifragile [to some scale] through constant stress, breakage, refactoring, rebuilding, adaptation and evolution— chaos army + the system-evolution mechanism that is an army of brains iterating on the construction and operation of a system

The way we structure our world is by building models on models. All tables are of shape x and all objects y made to go on tables rely on x being the shape of tables. Some change in x can destroy the property of can-rest-on-table for all y in an instant.

  • Higher level models assume lower level models 
  • Invalidation of a lower level model might invalidate the entire chain of downstream (higher level) models—higher level models can experience catastrophic failures that are unforeseen 
  • Every model is subject to invalidation at the boundaries of a specific scale [proportional to its level of abstraction or below]
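
The tables-and-objects example above, as a toy in code (all names illustrative): one change at the bottom of the stack invalidates everything built on top of it, all at once.

```python
# Models stacked on models, as a toy: objects made for tables silently assume
# the table model. Invalidate the lower model and everything above fails at
# once. All names are illustrative.

table_model = {"surface": "flat"}  # the lower-level model, x

def can_rest_on_table(obj):
    """A higher-level model, y: it quietly assumes the lower-level one."""
    return table_model["surface"] == "flat" and obj["base"] == "flat"

cup, book = {"base": "flat"}, {"base": "flat"}
print(can_rest_on_table(cup), can_rest_on_table(book))  # True True

table_model["surface"] = "convex"  # one change at the bottom of the stack...
print(can_rest_on_table(cup), can_rest_on_table(book))  # ...False False, all at once
```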

Even models that are accurate in one context or a particular scale become invalid or risky in a different context or scale. What is certain for this minute may not be certain for this year. What is certain for this year may not be certain for this minute. It’s turtles all the way down. If there are enough turtles that we can’t grasp the entire depth of our models, we have been fragilized and are [over]exposed to black swans.

This suggests that we should resist abstractions. Only use them when necessary, and remove [layers of] them whenever possible.

We should resist abstractions.

Rather than relying on models as sources of truth, we should rely on principles or systems of behavior like giving more weight to disconfirmatory evidence and actively seeking model invalidation. 

OODA, like grasping and unlocking affordances, is a process of continuous checking and evaluation of the model of the world with the experience of the world. And seeking invalidation is getting to the faults before the faults are exploited [or blow up]. 

Bringing it all back around to code--I posit that the value of making as many things programmable as possible is the effect on scales.

  • Observation can be instrumented > scaled beyond human capacity
  • Action can be automated > scaled beyond human capacity
  • Orientation and decision can be short-circuited [for known models] > scaled beyond human capacity
  • Time can be reallocated to orienting and deciding in novel contexts > scaling to human capacity

That last part is what matters. We are the best, amongst our many technologies, at understanding and successfully adapting to novel contexts. So we should be optimizing for making sure that's where our time is spent when necessary.
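
A sketch of that allocation in code, with hypothetical events and responses: known models get short-circuited and automated; only the novel gets a human.

```python
# Scaling the loop: short-circuit orientation/decision for known models and
# automate the action, so human time goes only to novel contexts. The events
# and responses here are hypothetical.

KNOWN_RESPONSES = {          # known model -> pre-decided action
    "disk full": "expand volume",
    "process crash": "restart process",
}

def handle(event):
    action = KNOWN_RESPONSES.get(event)
    if action is not None:
        return f"automated: {action}"      # scaled beyond human capacity
    return f"page a human: {event!r}"      # the novel is scaled *to* human capacity

for event in ["disk full", "process crash", "never seen before"]:
    print(handle(event))
```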

Scale problems to human capacity.

whiskey and gin

Do you know how whiskey is made?

Something I learned from my friend Dave, the professional spirits geek: it takes significant up-front capital and enough in the bank to wait out the years before having the real product. A terrible product is easy to come by in a few months. What anyone in their right mind would call whiskey takes time, indie-hipster-micro-distillers notwithstanding.

So what do you do in the meantime if you're not a retired banker or a trust fund kid? You make gin.

On the way to making whiskey, some of your product can be made into gin which is sellable at a hefty enough margin to keep things humming along while the real stuff is maturing, finding its character. [No offense to gin.] 

Witness the good work of the New York Distilling Company.

And witness Uber (maybe). Here's Kottke repeating Michael Wolfe on Quora [notes in brackets mine]--

If you think of Uber as a town car company operating in a few cities, it is not big. [Gin.]

If you think of Uber as dominating and even growing the town car market in dozens of cities, it gets bigger. (Data point: there are now more Uber black cars in San Francisco than there were ALL black cars before Uber started). [Gin.]

If you think of Uber as absorbing the taxi markets, it gets pretty huge. [Gin.]

[...]

If you think of Uber as a giant supercomputer orchestrating the delivery of millions of people and items all over the world (the Cisco of the physical world), you get what could be one of the largest companies in the world. [Whiskey.]

The hard parts:

  • Not getting distracted making the gin.
  • Not dipping into your final product before it's ready.
  • Not siphoning off too much of your product into gin making before it hits the barrels.
  • Not being so successful with gin that you abandon the warehouse of hard work and hard won patience altogether.

What are you making?

the misogyny in technology

Maybe it's cause I was raised by a single mother.

Maybe it's cause I've worked under managers, directors, vice presidents, general managers, and senior vice presidents who happen to be women. Maybe it's cause I've worked with engineers and technologists with advanced degrees who were experts in their fields who happen to be women. Maybe it's cause I know CEOs, CMOs, COOs, and CIOs who happen to be women.

Maybe it's cause I'm secure in my person and don't see anyone else's success as a threat to my own. Maybe it's cause I don't think life is a zero sum game.

But…

  • Walking into a room and telling the first woman you see to get you a cup of coffee IS NOT OK.
  • Drunkenly texting "your room or mine" to the woman CEO of a successful company whose business you can impact IS NOT OK.
  • Rewriting the history of an organization to cut out the women leaders and founders in order to aggrandize the men IS NOT OK.
  • Attempting to put down a woman who has obviously kicked your ass in a technological argument by calling her ___ or ___ IS NOT OK.

I come from a culture that is historically not good to women and I don't seem to have a problem with this.

Mr. Senior _____, Mr. VC, Mr. Distinguished _____, Mr. CxO--what's your excuse?

on packetpushers: influence, analysis, and the life

Ethan and Greg over at PacketPushers asked me to come on the podcast to talk about what it's like to be an analyst and grill me on some topics about analyst life and perceptions of the industry. Listen at Show 137 – Gartner Is Not for Sale With @Aneel Lakhani.

With Gartner’s blessing, Aneel came on the show and answered some hard questions frankly – even bluntly. Sure, Aneel doesn’t speak for all of Gartner, but we ended up with a lot of useful insight from him.

  • How does Aneel’s job work? What’s he do all day?
  • Who is a Gartner “customer”?
  • How does an analyst determine what products are interesting while avoiding bias?
  • How technically competent are Gartner analysts?
  • Most Gartner reports seem to represent the current state of affairs, but not look into the future. Why is that?
  • Why is longevity at Gartner something to be proud of?

Some highlights from me:

Most of my time is inquiry with customers. Most of the customers are end users and buyers of technology. As an analyst, I am the product.

Woe to anyone who tries to turn us one way or another [vendor influence] because that goes very badly for them. If I am not factually incorrect and they [vendors] don't like what I've written about their product or marketing or behavior or whatever... they should just do better.

In dealing with customers, I've found the reason Gartner commands the premium it does is because of the independence.

Like any large firm, Gartner has multiple divisions and business units... serving different customers, etc. You have to know how to use analyst firms. If you want a deeply technical analyst, you should go get a deeply technical analyst.

It takes a particularly tough personality to survive the process of research and writing and getting through peer review and getting published and wading through all the information you get from vendors... it's way, way, way more work than I expected, by easily an order of magnitude.

We'll see if I get into trouble for anything I said. :)