By David Epstein


Brief

When dealing with wicked situations (complex, ill-defined problems), generalists can often trump specialists. Specialists tend to get tunnel vision: they struggle to integrate external viewpoints or to think analogically, and even when they do use analogies, they rarely use more than one or venture far outside their own domain. They also tend to be fiercely defensive of their own perspective; when faced with data showing it is inaccurate, they harden their stance rather than change it. This shows up clearly in forecasting: generalist groups make vastly superior predictions about all sorts of events, whereas highly specialised groups are no better than a monkey throwing darts, even with access to high-quality, confidential information.

Becoming a specialist clearly requires immense practice, but the generalist path takes a lot of effort too, as you work through the sampling stage. This stage is necessary because it is rare for someone to arrive at an interest and stick with it for decades. Most people shift between interests, so they need a sampling period in which they try out whatever catches their fancy until they arrive at something that can hold their attention and interest. And when that passes, the search begins afresh. Lateral thinking, analogical thinking, knowing when to use which tools, and not being married to your opinions and tools are all important for becoming a good generalist.

Highlights

  • The most effective learning looks inefficient; it looks like falling behind.
    • This is because when you’re going through the sampling stage you learn slowly in order to accumulate lasting knowledge.
  • The bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialisation.
    • This is in the context of machines and AI. AI can beat us at chess and Go but would absolutely suck at open-ended strategy. Our brains are phenomenal at broad strategy but terrible at raw pattern recognition. AI can identify patterns in chess or tumours in scans better than any human expert, but its advantage falls flat in wicked situations where there is no fixed set of rules, and even the rules that exist are not strictly followed, i.e., most of life. E.g. Google’s AI badly misjudged early warning signs for flu trends in the USA. IBM’s Watson could blaze through Jeopardy! because all the answers were already known, but it hasn’t been able to “cure cancer” because we don’t even know what the right questions to ask are.
  • How Not to Teach People to Discover Rules: By providing rewards for repetitive short-term success with a narrow range of solutions.
    • While [[Theory of small wins | small wins]] are important for maintaining motivation, when learners are rewarded for every tiny achievement, their drive to discover general rules goes down. E.g. in a game with 70 possible solutions, students who got a reward for each solution found just kept repeating the same solution to earn more money, whereas students who were asked to find the general rule were able to solve all 70. Rewards can hurt.
  • When the rules are altered just slightly, it makes experts appear to have traded flexibility for narrow skill.
    • To become a specialist one goes through massive amounts of repetition of the same procedure. This helps when the rules remain the same but once the situation changes a tiny bit, the experts take far longer to readjust than novices. This is cognitive entrenchment.
  • Premodern people miss the forest for the trees. Modern people miss the trees for the forest.
    • Building on Luria’s experiments with villagers in Soviet Russia, who were unable to engage in higher-order cognitive tasks, modern problems require us to identify abstractions, work with non-intuitive categories, and build on analogies - all higher-order cognitive skills. In gaining these, however, we lose the alternate worldview the ‘premodern’ people possess - constructing personal, emotional meaning built on empathy.
  • Almost none of the students in any major showed a consistent understanding of how to apply methods of evaluating truth they had learned in their own discipline to other areas.
    • Modern education is building narrow critical competence that does not translate to anything outside of that hyper-specialization.
  • Everyone needs habits of mind that allow them to dance across disciplines.
    • This allows for sampling and builds a tool belt of critical inquiry.
  • Rather than letting students grapple with some confusion, teachers often responded to their solicitations with hint-giving that morphed a making-connections problem into a using procedures one.
    • For example: a using-procedures problem is calculating force given mass and acceleration. A making-connections problem would be calculating velocity given force, mass, and time: the student has to derive acceleration from force and mass, and then use it to calculate the velocity at the given time. Instead of letting students figure this out, teachers often walk them through every step of the process in a bid to [[scaffold]] their learning. This is unhelpful because…
  • The more hints that were available during training, the better the monkeys performed during early practice, and the worse they performed on test day.
    • There is probably a relationship with [[Spaced-repetition System]] worth exploring here. Elaborated at [[Struggle to Recall]].
  • The more confident a learner is of their wrong answer, the better the information sticks when they subsequently learn the right answer.
    • This is pretty much what [[Productive Failure]] is all about.
  • Good performance on a test during the learning process can indicate mastery, but learners and teachers need to be aware that such performance will often index, instead, fast but fleeting progress.
    • Coming from [[Desirable Difficulties]]. How about the learner’s external reality? Their mental health? Their need for small wins? How does that reconcile here?
  • [[Interleaving]] has been shown to improve inductive reasoning.
    • Instead of practicing blocks of the same kind of problem, mix the skills together and your performance will be better.
    • “When your intuition says block, you should probably interleave.” - Nate Kornell
    • “Instead of practicing from the free-throw line, Shaq should practice from a foot in front of and behind it to learn the motor modulation he needed.” - Robert Bjork
  • The slowest growth for the most complex skills.
    • Teaching children how to read in kindergarten has immediate benefits but they don’t stick. Teaching them how to hunt for and connect contextual clues can be a lasting advantage.
    • Note: though the benefits fade out with time, another dimension to this is the upside demonstrated by long-term social benefits, like decreased rates of incarceration. Even when the intended academic effects disappear, it seems that an extended programme of positive interactions between adults and children can leave a lasting mark.
  • [[Deep analogical thinking]] is the practice of recognising conceptual similarities in multiple domains or scenarios that may seem to have little in common on the surface.
    • Like how Kepler used magnetism, alchemy, and metaphysics to land upon causal relationships of planetary movement.
    • Relational thinking is why humans run the planet. Analogical thinking takes the new and makes it familiar, and takes the familiar and presents it in new light.
  • Do not rely on the first analogy that feels familiar.
    • Our intuition is to do just that. Experiments showed that students in the USA, presented with the same problem, reacted differently depending on which analogy was used: those given the WW2 analogy suggested going to war; those given the Vietnam analogy suggested diplomatic measures. The same held true for coaches judging youth players, whose opinions changed depending on which former player the youths were likened to.
  • Break away from the inside view.
    • Sticking to just one analogy keeps us in the inside view. This is when we treat our present situation as the universe of our thoughts and fail to draw analogies from an ‘external perspective’. It is why we underestimate or overshoot when planning a project. Had we taken an objective, statistical view of how long a typical project takes, or what roadblocks usually occur, our estimates would be much closer to the actual outcome.
    • E.g. Daniel Kahneman’s group of experts designing a curriculum spent a year trying to estimate how long it would take. When faced with objective data that 40% of such projects never finish and curriculum design typically takes at least 7 years, the whole group was unwilling to accept it. They took 8 years to finish, by which time the curriculum was no longer needed.
  • The more internal details a person can be made to consider, the more extreme their judgment becomes.
    • E.g. A group of venture capitalists were more likely to rate their present investment as a success since they knew a lot of details about it. However, when made to consider other such investments with broad conceptual similarities they would discover that their initial judgment was overly optimistic.
    • This is what Netflix leveraged for its recommendation engine. Instead of predicting what you might like from scratch, it figures out which users you are like and then shows you what that profile of users usually enjoys. The messy complexity of individual taste is captured by the behaviour of similar people.
  • Simply being reminded to use analogies helps us become more creative with our solutions.
    • E.g. business students asked to create business strategies for a tech company became more creative when they were suggested to use analogies to other companies.
  • Our intuition is to use too few analogies. That is usually exactly the wrong way to go about it.
    • E.g. the same business students tended to use a single analogy, usually from a closely related company - a tech company for a tech problem. The real genius lies in drawing analogies from companies that, on the surface, have no relation whatsoever, like comparing Nike to Apple to General Motors.
  • When sowing the seeds for future analogy requirements, if the reading sounds incredibly remote from pressing business concerns, that is exactly the point.
    • E.g. BCG created an internal knowledge site where a consultant generating strategies for a post-merger integration could peruse the exhibit on how William the Conqueror “merged” England with the Norman Kingdom in the 11th century.
  • A problem well put is half-solved. Successful problem solvers are more able to determine the deep structure of a problem before they proceed to match a strategy to it.
    • They mentally classify problems into categories that may seem arbitrary at first glance. They attack the problem from multiple angles to learn more about it. And this gives them way more suitable information to be able to effectively solve it.
  • Exploration is not just a whimsical luxury of education, it is a central benefit.
    • Learning stuff is less important than learning about oneself. E.g. students in England and Wales are funnelled into specialisations at high-school level, whereas in Scotland they can specialise much later, at university. A study found that the English and Welsh students got a head start, but that was all it was: they were far more likely to switch careers than their Scottish peers, even though switching was costlier for them after focusing so narrowly. They were specialising so early that they were making more mistakes.
  • Knowing when to quit is such a big strategic advantage that every single person, before undertaking an endeavour, should enumerate conditions under which they should quit.
    • The important trick is staying attuned to whether switching is simply a failure of perseverance, or astute recognition that there are better things to do.
    • Young people should engage in situations with high risk and reward that have high informational value. They are unlikely to succeed but the potential reward is extremely high. Thanks to constant feedback and an unforgiving weed-out process, they will learn quickly if they might be a match - at least compared to jobs with less constant feedback. If it works, great, if it doesn’t, go try something else and continue to learn about your options and yourself.
  • Be a dark horse.
    • Dark horses have novel journeys but share a common strategy: short-term planning. If you keep succeeding in the short term, you end up very successful in the long term.
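
The making-connections physics example above (velocity from force, mass, and time) chains two formulas: a = F/m, then v = a·t. A minimal sketch of the chain, with made-up numbers for illustration:

```python
def velocity(force_n, mass_kg, time_s):
    """Chain two steps: a = F/m (Newton's second law), then v = a*t
    (speed from rest under constant acceleration)."""
    acceleration = force_n / mass_kg   # step 1: a = F/m
    return acceleration * time_s       # step 2: v = a*t

# Made-up numbers: a 10 N force on a 2 kg mass, applied for 3 s.
print(velocity(10, 2, 3))  # a = 5 m/s^2, so v = 15.0 m/s
```

A using-procedures version hands the student both steps up front; the making-connections version asks them to discover that the intermediate quantity (acceleration) is the bridge.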
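
The ‘outside view’ above amounts to reference-class forecasting: instead of reasoning from a project’s internal details, consult the statistical distribution of outcomes for similar past projects and anchor your estimate there. A minimal sketch; the sample durations below are invented for illustration:

```python
import statistics

# Invented durations (in years) of comparable past projects -- the
# "reference class". In practice this would come from historical data.
past_durations = [5, 6, 7, 7, 8, 9, 10, 12]

# Inside view: an optimistic guess built from the plan's internal details.
inside_view_estimate = 2

# Outside view: anchor on what typically happens to projects like this one.
outside_view_estimate = statistics.median(past_durations)

print(inside_view_estimate, outside_view_estimate)  # 2 vs 7.5
```

The gap between the two numbers is the planning fallacy Kahneman’s curriculum group fell into: the inside view felt precise but ignored the base rate.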

Here’s who I am at the moment, here are my motivations, here’s what I’ve found I like to do, here’s what I’d like to learn, and here are the opportunities. Which of these is the best match, right now? And maybe a year from now I’ll switch because I’ll find something better.

  • We are works in progress claiming to be finished.
    • Humans are terrible at remembering their old selves and predicting their future selves. Things we like/do today, we will not like/do 10 years hence.
    • Specialising early is like finding match quality for a person who doesn’t yet exist.
  • Instead of asking whether someone is gritty, we should ask when they are.
    • Grit isn’t an absolute trait. No one is gritty all through all of life. There are certain tasks where we show grit and certain others where we do not. Recognising those is crucial for leading teams and developing organisations.
    • If you get someone in a context that suits them, they will more likely work hard and it will look like grit from the outside.
  • We learn who we are only by living, and not before. We maximise match quality throughout life by sampling activities, social groups, contexts, jobs, and careers, and then reflecting and adjusting our personal narratives.
    • The whole personality quiz and counselling industry is built against this notion. People want answers and so these frameworks sell. It’s much harder to say, “well, try it out and see what happens.”
  • Bring in the outsider advantage.
    • Experts have blinders on them, tunnel vision. By posing a problem to an outsider, we can gather new perspectives that could present a novel solution. E.g. InnoCentive and all the research problems ‘amateurs’ have helped experts solve.

I don’t have any particular specialist skills. I have a sort of vague knowledge of everything.

  • The best teams had lateral and vertical thinkers working together, even in highly technical fields.
    • The world is both broad and deep. We need birds and frogs working together to explore it.
    • This is where that scientist at 3M said that “T-people like myself can happily go to the I-people with questions to create the trunk for the T.” Same idea as “T-shaped Information Diet”.
  • The average expert was a horrific forecaster. They were bad at forecasting in every domain.
    • Many experts never admit systematic flaws in their judgment. When they succeed, it was completely on their own merits. When they miss wildly, it is always a near miss.
  • The more likely an expert is to have their opinion featured in op-ed columns and on TV, the more likely they are to be wrong.
    • As knowledge hedgehogs, they view every world event through their preferred keyhole, which lets them fashion compelling stories about anything that occurs and tell those stories with adamant authority. Thus, they make great TV.

Narrow experts are an invaluable resource. Take facts from them, not opinions.

  • The best forecasters view their own ideas as hypotheses in need of testing.
    • Regular people don’t have the instinct to use their phone to check if and how they are wrong about something. They also get defensive about their opinions. [[The 2-minute-old hill to die on]] seems to be related.
  • Foxes see complexity in what others mistake for simple cause and effect. They understand that most cause and effect relationships are probabilistic, not deterministic.
    • Like foxes on a hunt, they loiter around and gather information before they strike. They are in pursuit of the truth, not in pursuit of being right.
    • This requires empathy and appreciation for entropy.
  • Often in group meetings, everyone posits and argues based upon the data presented in the PPT. No one asks, “Is this the data we want to make the decision we need to make?”
    • E.g. business schools using NASA Challenger accident case study and students not asking for extra data.
  • Experienced groups become rigid under pressure and regress to what they know best.
    • Akin to the cognitive bias where you end up answering a question you think you heard instead of the actual question.
  • “At NASA, accepting a qualitative argument was like being told to forget you are an engineer.”
    • This is what happens when you favour following tradition over being pragmatic.
  • NASA’s own famous can-do culture manifested as a belief that everything would be fine because “we followed every procedure”.
    • Relying on your past intellect can be a problem in the present. Just because something has always worked, doesn’t mean it will continue to always work.
    • Always account for entropy and spooky action.
  • Do sensemaking, not decision making.
    • If you take a decision, it is a possession that has to then be defended. Instead you should be dynamic and have “hunches held lightly”.
  • The process ended with more concern for being able to defend the decision than using all available information to make the right one.
    • This happens a lot in systems and organisations that overemphasise bureaucratic accountability rather than keeping the flexibility to focus on getting the right thing done.

When you don’t have any data, you have to use reason.

— Richard Feynman

  • Cultures can actually be too internally consistent. With incongruence, you’re building in cross-checks.
    • Managers are told to build consensus in teams. But remember the [[50 Ideas That Changed My Life - David Perell#The Paradox of Consensus | The Paradox of Consensus]].