One of the most common mistakes I see people make when looking at data is incorrectly using an overly simplified model. A specific variant of this that has derailed the majority of work roadmaps I've looked at is treating people as interchangeable, as if it doesn't matter who is doing what, as if individuals don't matter.
Individuals matter.
A pattern I've repeatedly seen during the roadmap creation and review process is that people will plan out the next few quarters of work and then assign some number of people to it, one person for one quarter to a project, two people for three quarters to another, etc. Nominally, this process enables teams to understand what other teams are doing and plan appropriately. I've never worked in an organization where this actually worked, where this actually enabled teams to effectively execute with dependencies on other teams.
What I've seen happen instead is, when work starts on the projects, people will ask who's working on the project and then will make a guess at whether or not the project will be completed on time or in an effective way or even be completed at all based on who ends up working on the project. "Oh, Joe is taking feature X? He never ships anything reasonable. Looks like we can't depend on it because that's never going to work. Let's do Y instead of Z since that won't require X to actually work". The roadmap creation and review process maintains the polite fiction that people are interchangeable, but everyone knows this isn't true, and teams that are effective and want to ship on time can't play along when the rubber hits the road, even if they play along with the managers, directors, and VPs who create roadmaps as if people can be generically abstracted over.
Another place the non-fungibility of people causes predictable problems is in how managers operate teams. Managers who want to create effective teams[1] end up fighting the system in order to do so. Non-engineering orgs mostly treat people as fungible, and the finance org at a number of companies I've worked for forces the engineering org to treat people as fungible by requiring the org to budget in terms of headcount. The company, of course, spends money and not "heads", but internal bookkeeping is done in terms of "heads", so $X of budget will be, for some team, translated into something like "three staff-level heads". There's no way to convert that into "two more effective and better-paid staff-level heads"[2]. If you hire two staff engineers and not a third, the "head" and the associated budget will eventually get moved somewhere else.
One thing I've repeatedly seen is that a hiring manager will want to hire someone who they think will be highly effective, or even just someone who has specialized skills, and then not be able to make the hire because the company has translated budget into "heads" at a rate that doesn't allow for that kind of head. There will be a "comp team" or other group in HR that will object because the comp team has no concept of "an effective engineer" or "a specialty that's hard to hire for"; to them, a person is defined by role, level, and location, and someone who's paid too much for their role and level is therefore a bad hire. If anyone reasonable had power over the process that they were willing to use, this wouldn't happen but, by design, the bureaucracy is set up so that few people have power[3].
A similar thing happens with retention. A great engineer I know, who was regularly creating $x0M/yr[4] of additional profit for the company, wanted to move home to Portugal, so the company cut his cash comp by a factor of four. The company offered to cut his cash comp by only a factor of two if he moved to Spain instead of Portugal. This was escalated up to the director level, but that wasn't sufficient to override HR, so he left for a company that doesn't have location-based pay. HR didn't care that the person made the company more money than HR saves by doing location adjustments for all international employees combined, because HR at the company had no notion of the value of an employee, only the cost, title, level, and location[5].
Relatedly, a "move" I've seen twice, once from a distance and once from up close, is when HR decides attrition is too low. In one case, the head of HR thought that the company's ~5% attrition was "unhealthy" because it was too low and in another, HR thought that the company's attrition sitting at a bit under 10% was too low. In both cases, the company made some moves that resulted in attrition moving up to what HR thought was a "healthy" level. In the case I saw from a distance, folks I know at the company agree that the majority of the company's best engineers left over the next year, many after only a few months. In the case I saw up close, I made a list of the most effective engineers I was aware of (like the person mentioned above who increased the company's revenue by 0.7% on his paternity leave) and, when the company successfully pushed attrition to over 10% overall, the most effective engineers left at over double that rate (which understates the impact of this because they tended to be long-tenured and senior engineers, where the normal expected attrition would be less than half the average company attrition).
Some people seem to view companies like a game of SimCity, where if you want more money, you can turn a knob, increase taxes, and get more money, uniformly impacting the city. But companies are not a game of SimCity. If you want more attrition and turn a knob that cranks that up, you don't get additional attrition that's sampled uniformly at random. People, as a whole, cannot be treated as an abstraction where the actions company leadership takes impact everyone in the same way. The people who are most effective will be disproportionately likely to leave if you turn a knob that leads to increased attrition.
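To make the knob analogy concrete, here's a minimal simulation of non-uniform attrition. Everything in it (the headcount, the lognormal value distribution, the exact 2x multiplier for the top decile) is an illustrative assumption, not data from any real company; the point is just that when the likeliest leavers are the most effective people, the value lost outruns the headcount lost.

```python
import random

random.seed(0)

# Hypothetical company: 10,000 engineers whose value is heavy-tailed.
engineers = [random.lognormvariate(0, 1) for _ in range(10_000)]
total_value = sum(engineers)
cutoff = sorted(engineers)[int(0.9 * len(engineers))]  # top-decile threshold

def crank_attrition_knob(base_rate):
    """Everyone leaves with probability base_rate, except the most
    effective decile, who leave at double that rate (echoing the
    'over double' departure rate described above)."""
    lost_heads = lost_value = 0
    for value in engineers:
        leave_prob = base_rate * (2.0 if value >= cutoff else 1.0)
        if random.random() < leave_prob:
            lost_heads += 1
            lost_value += value
    return lost_heads / len(engineers), lost_value / total_value

for rate in (0.05, 0.10):
    heads, value = crank_attrition_knob(rate)
    print(f"knob at {rate:.0%}: lose {heads:.1%} of heads, {value:.1%} of value")
```

With a heavier-tailed value distribution or a larger multiplier for top performers, the gap between heads lost and value lost gets correspondingly worse.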
So far, we've talked about how treating individual people as fungible doesn't work for corporations but, of course, it also doesn't work in general. For example, a complaint from a friend of mine who's done a fair amount of "on the ground" development work in Africa is that a lot of people who are looking to donate want clear, simple criteria to guide their donations (e.g., an RCT showed that the intervention was highly effective). But many effective interventions cannot have their impact demonstrated ex ante in any simple way because, among other reasons, the composition of the team implementing the intervention matters, so a randomized trial or other experiment doesn't generalize beyond the teams from the trial, operating in the context they were operating in during the trial.
An example of this would be an intervention they worked on that, among other things, helped wipe out guinea worm in a country. Ex post, we can say that was a highly effective intervention since it was a team of three people operating on a budget of $12/(person-day)[6] for a relatively short time period, making it a high ROI intervention, but there was no way to make a quantitative case for the intervention ex ante, nor does it seem plausible that there could've been a set of randomized trials or experiments that would've justified the intervention.
Their intervention wasn't wiping out guinea worm; that was just a side effect. The intervention was, basically, travelling around the country and embedding in regional government offices in order to understand their problems and then advise/facilitate better decision making. In the course of talking to people and suggesting improvements/changes, they realized that guinea worm could be wiped out with better distribution of clean water (guinea worm can come from drinking unfiltered water; giving people clean water can solve that problem) and that the aid money flowing into the country specifically for water-related projects, like building wells, was already sufficient if it was distributed to the places that had high rates of guinea worm due to contaminated water instead of to the places it was actually going (locations that attracted a lot of aid money for a variety of reasons, such as being near a local "office" that was doing a lot of charity work). The specific thing this team did to help wipe out guinea worm was to give PowerPoint presentations to government officials on how the government could advise organizations receiving aid money on where to more efficiently place wells. At the margin, wiping out guinea worm in a country would probably be sufficient for the intervention to be high ROI, but that's a very small fraction of the "return" from this three-person team; I only mention it because it's a self-contained, easily-quantifiable change. Most of the value of "leveling up" decision making in regional government offices is very difficult to quantify (and, to the extent that it can be quantified, will still have very large error bars).
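For a rough sense of scale, here's the cost side of the back-of-the-envelope. The $12/(person-day) is the organizational funding mentioned above; the engagement length is a pure assumption, since all I've said is that it was relatively short.

```python
# Cost of the three-person team. The per-day figure is from the text;
# the duration is a hypothetical placeholder.
people = 3
cost_per_person_day = 12       # USD, organizational funding
assumed_days = 365             # assumption: call it a year
total_cost = people * cost_per_person_day * assumed_days
print(f"total organizational cost: ~${total_cost:,}")  # ~$13,140
```

Even if the real engagement ran several times longer than that, helping wipe out guinea worm in a country for low-five-figure organizational spend is an extraordinarily good trade.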
Many interventions that seem the same ex ante, probably even most, produce little to no impact. My friend has a lot of comments on organizations that send a lot of people around to do similar-sounding work but produce little value, such as the Peace Corps.
A major difference between my friend's team and most teams is that my friend's team was composed of people who had a track record of being highly effective across a variety of contexts. In an earlier job, at a large-ish ($5B/yr revenue) government-run utility company, my friend was immediately assigned a problem that, unbeknownst to her, had been an open problem for years and was considered to be unsolvable. No one was willing to touch the problem, so they hired her because they wanted a scapegoat to blame and fire when the problem blew up. Instead, she solved the problem she was assigned, as well as a number of other problems that were considered unsolvable. A team of three such people will be able to get a lot of mileage out of potentially high ROI interventions that most teams would not succeed at, such as going to a foreign country and improving governmental decision making in regional offices across the country enough that the government is able to solve serious open problems that had been plaguing the country for decades.
Many of the highest ROI interventions are similarly skill intensive and not amenable to simple back-of-the-envelope calculations, but most discussions I see on the topic, both in person and online, rely heavily on simplistic but irrelevant back-of-the-envelope calculations. This is not a problem limited to cocktail-party conversations. My friend's intervention was almost killed by the organization she worked for because the organization was infested with what she thinks of as "overly simplistic EA thinking", which caused leadership in the organization to try to redirect resources to projects where the computation of expected return was simpler, because those projects were thought to be higher impact even though they were, ex post, lower impact. Of course, we shouldn't judge interventions solely on how they performed ex post, since that would overly favor high variance interventions, but I think that someone thinking it through, who was willing to exercise their judgement instead of outsourcing it to a simple metric, could and should have said that the intervention in question was a good choice ex ante.
This issue of more legible projects getting more funding is an issue across organizations as well as within them. For example, my friend says that, back when GiveWell was mainly or only recommending charities that had simply quantifiable returns, she basically couldn't get her friends who worked in other fields to put resources towards efforts that weren't endorsed by GiveWell. People who didn't know about her aid background would say things like "haven't you heard of GiveWell?" when she suggested putting resources towards any particular cause, project, or organization.
I talked to a friend of mine who worked at GiveWell during that time period about this and, according to him, the reason GiveWell initially focused on charities that had easily quantifiable value wasn't that they thought those were the highest impact charities. Instead, it was because, as a young organization, they needed to be credible and it's easier to make a credible case for charities whose value is easily quantifiable. He would not, and he thinks GiveWell would not, endorse donors funnelling all resources into charities endorsed by GiveWell and neglecting other ways to improve the world. But many people want the world to be simple and apply the algorithm "charity on GiveWell list = good; not on GiveWell list = bad" because it makes the world simple for them.
Unfortunately for those people, as well as for the world, the world is not simple.
Coming back to the tech company examples, Laurence Tratt notes something that I've also observed:
> One thing I've found very interesting in large organisations is when they realise that they need to do something different (i.e. they're slowly failing and want to turn the ship around). The obvious thing is to let a small team take risks on the basis that they might win big. Instead they tend to form endless committees which just perpetuate the drift that caused the committees to be formed in the first place! I think this is because they really struggle to see people as anything other than fungible, even if they really want to: it's almost beyond their ability to break out of their organisational mould, even when it spells long-term doom.
One lens we can use to look at what's going on is legibility. When you have a complex system, whether that's a company with thousands of engineers or a world with many billions of dollars going to aid work, the system is too complex for any decision maker to really understand, whether that's an exec at a company or a potential donor trying to understand where their money should go. One way to address this problem is to reduce the perceived complexity of the system by imagining that individuals are fungible, making the system more legible. That produces relatively inefficient outcomes but, unlike trying to understand the issues at hand, it's highly scalable, and if there's one thing that tech companies like, it's doing things that scale; treating a complex system like it's SimCity or Civilization scales very well. When returns are relatively evenly distributed, losing out on potential outlier returns in the name of legibility is a good trade-off. But when ROI is heavy-tailed, when the right person can, on their paternity leave, increase the revenue of a giant tech company by 0.7% and then do much more when they work on that full-time, severely tamping down on the right side of the curve in the name of legibility can cost you the majority of your potential returns.
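As a toy illustration of how extreme this can be, here's a quick simulation of heavy-tailed project returns. The distribution and its parameters are made up for illustration, not fit to any data, but with tails in this regime, a process that clips off the top handful of outcomes in the name of predictability forfeits most of the total return.

```python
import random

random.seed(0)

# Project returns drawn from a Pareto distribution; alpha close to 1
# means a very heavy tail. Both the distribution and the parameter
# are illustrative assumptions.
returns = sorted(random.paretovariate(1.1) for _ in range(100_000))

total = sum(returns)
top_1_percent = sum(returns[int(0.99 * len(returns)):])
print(f"top 1% of outcomes: {top_1_percent / total:.0%} of total return")
```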
Thanks to Laurence Tratt, Pam Wolf, Ben Kuhn, Peter Bhat Harkins, John Hergenroeder, Andrey Mishchenko, Joseph Kaptur, and Sophia Wisdom for comments/corrections/discussion.
Appendix: re-orgs
A friend of mine recently told me a story about a trendy tech company where they tried to move six people to another project, one that the people didn't want to work on and that they thought didn't really make sense. The result was that two senior devs quit, the EM retired, one PM was fired (long story), and three people left the team. The teams for both the old project and the new project had to be re-created from scratch.
It could be much worse. In that case, at least there were some people who didn't leave the company. I once asked someone why feature X, which had been publicly promised, hadn't been implemented yet and why the entire sub-product was broken. The answer was that, after about a year of work, when shipping the feature was thought to be weeks away, leadership decided that the feature, which was previously considered a top priority, was no longer a priority and should be abandoned. The team argued that the feature was very close to being done and they just wanted enough runway to finish it. When that was denied, the entire team quit and the sub-product has slowly decayed since then. After many years, there was one attempted reboot of the team but, for reasons beyond the scope of this story, it was done with a new manager managing new grads and didn't really re-create what the old team was capable of.
As we've previously seen, an effective team is difficult to create, due to the institutional knowledge that exists on a team, as well as the team's culture, but destroying a team is very easy.
I find it interesting that so many people in senior management roles persist in thinking that they can re-direct people as easily as opening up the city view in Civilization and assigning workers to switch from one task to another when the senior ICs I talk to have high accuracy in predicting when these kinds of moves won't work out.
Appendix: related posts
- Yossi Kreinin on compensation and project/person fit
- Me on the difficulty of obtaining institutional knowledge
- James C. Scott on legibility
[1] On the flip side, there are managers who want to maximize the return to their career. At every company I've worked at that wasn't a startup, doing that involves moving up the ladder, which is easiest to do by collecting as many people as possible. At one company I've worked for, the explicitly stated promo criteria are basically "how many people report up to this person".
Tying promotions and compensation to the number of people managed could make sense if you think of people as mostly fungible, but is otherwise an obviously silly idea.
[2] It isn't quite this simple when you take into account retention budgets (money set aside from a pool that doesn't come out of the org's normal budget, often used to match outside offers for people who are about to leave), etc., but adding this nuance doesn't really change the fundamental point.
[3] There are advantages to a system where people don't have power, such as mitigating abuses of power, various biases, nepotism, etc. One can argue that reducing variance in outcomes by making people powerless is the preferred result, but in winner-take-most markets, which many tech markets are, forcing lowest-common-denominator effectiveness on everyone is a recipe for being an also-ran.
A specific, small-scale example of this is the massive advantage held by companies that don't have a bureaucratic comms/PR approval process for technical blog posts. The theory behind the onerous process that most companies have is that it protects the company from the downside risk of a bad blog post, but examples of bad engineering blog posts that would've been prevented by an onerous process are few and far between, whereas the value that companies with good processes for writing publicly get is easy to see.
A larger-scale example of this is that the large, now >= $500B, companies all made aggressive moves that wouldn't have been possible at their bureaucracy-laden competitors, which allowed them to wipe the floor with those competitors. Of course, many other companies that made serious bets failed more quickly than the companies that played it safe, but they at least had a chance, which the companies that played it safe did not.
[4] I'm generally skeptical of claims like this. At multiple companies that I've worked for, if you tally up the claimed revenue or user growth wins and compare them to actual revenue or user growth, you can see that there's some funny business going on, since the total claimed wins are much larger than the observed total.
Because I'm generally curious about measurements, I sometimes did my own analysis of people's claimed wins, and I almost always came up with an estimate that was much lower than the original one. Of course, I generally didn't publish these results internally since that would, in general, be a good way to make a lot of enemies without causing any change. In one extreme case, I found that the experimental methodology one entire org used was broken, causing them to get spurious wins in their A/B tests. I quietly informed them and they did nothing about it, which was the only reasonable move for them, since having experiments that systematically showed improvement when none existed was a cheap and effective way for the org to gain more power by having its people get promoted and having more headcount allocated to it. And if anyone with power over the bureaucracy cared about the accuracy of results, such a large discrepancy between claimed wins and actual results couldn't exist in the first place.
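To give a concrete example of how a methodology can systematically produce spurious wins (a hypothetical illustration, not the actual issue at that org), consider the classic "peeking" failure mode: repeatedly checking a significance test and stopping at the first nominally significant result inflates the false-positive rate far above the advertised 5%, manufacturing "wins" out of pure noise.

```python
import random

random.seed(0)

def peeking_experiment(peeks=20, n_per_peek=500, base_rate=0.10):
    """Run an A/B test where both arms have the SAME conversion rate,
    checking a two-proportion z-test after every batch and stopping at
    the first |z| > 1.96. Returns True if a (spurious) significant
    result was declared."""
    a_conv = b_conv = n = 0
    for _ in range(peeks):
        a_conv += sum(random.random() < base_rate for _ in range(n_per_peek))
        b_conv += sum(random.random() < base_rate for _ in range(n_per_peek))
        n += n_per_peek
        pooled = (a_conv + b_conv) / (2 * n)
        se = (2 * pooled * (1 - pooled) / n) ** 0.5
        if se > 0 and abs(a_conv / n - b_conv / n) / se > 1.96:
            return True  # a "win" declared with zero true effect
    return False

trials = 200
spurious = sum(peeking_experiment() for _ in range(trials))
print(f"false-positive rate with peeking: {spurious / trials:.0%}")  # well above 5%
```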
Anyway, despite my general skepticism of claimed wins, I found this person's claimed wins highly credible after checking them myself. A project of theirs, done on their paternity leave (done while on leave because their manager and, really, the organization as well as the company, didn't support the kind of work they were doing), increased the company's revenue by 0.7%, a result that was robust and actually increased in value through a long-term holdback, and they were able to produce wins of that magnitude after leadership was embarrassed into allowing them to do valuable work.
P.S. If you'd like to play along at home, another fun game is figuring out which teams and orgs hit their roadmap goals. For bonus points, plot the percentage of roadmap goals a team hits vs. its headcount growth, as well as how predictive hitting last quarter's goals is for hitting next quarter's goals across teams.
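A sketch of what that analysis might look like; the schema (one row per team per quarter, with columns team, quarter, goals_hit_pct, and headcount_growth) is hypothetical, standing in for whatever you can extract from your org's planning docs.

```python
import pandas as pd

# Hypothetical input: one row per team per quarter.
df = pd.read_csv("roadmaps.csv")

# Roadmap attainment vs. headcount growth, averaged per team.
per_team = df.groupby("team")[["goals_hit_pct", "headcount_growth"]].mean()
print(per_team.corr())

# How predictive is last quarter's attainment of this quarter's?
df = df.sort_values(["team", "quarter"])
df["prev_goals_hit_pct"] = df.groupby("team")["goals_hit_pct"].shift(1)
print(df[["prev_goals_hit_pct", "goals_hit_pct"]].corr())
```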
[5] I've seen quite a few people leave their employers due to location adjustments during the pandemic. In one case, HR insisted that the person was actually very well compensated: even though they might appear poorly paid, since they were paid significantly less than many people one level below them, according to HR's formula, which included a location-based pay adjustment, they were one of the highest-paid people for their level at the entire company in terms of normalized pay. Putting aside abstract considerations about fairness, HR telling an employee that they're highly paid given their location is like HR having a formula that pays based on height and telling an employee that they're well paid for their height. That may be true according to whatever formula HR has but, practically speaking, it means nothing to the employee, who can go work somewhere that has a smaller height-based pay adjustment.
Companies were able to get away with severe location-based pay adjustments at no cost to themselves before the pandemic. But, since the pandemic, a lot of companies have ramped up remote hiring, and some of those companies have relatively small location-based pay adjustments, which has allowed them to disproportionately hire away whoever they choose from companies that still maintain severe location-based pay adjustments.
[6] Technically, the true cost was higher than this because one team member contracted typhoid and paid for some medical expenses out of their personal budget rather than from the organization's budget, but $12/(person-day), the organizational funding, is a pretty good approximation.