Thinking Archives - Farnam Street
Mastering the best of what other people have already figured out

Learning Through Play

Play is an essential way of learning about the world. Doing things we enjoy without a goal in mind leads us to find new information, better understand our own capabilities, and find unexpected beauty around us. Arithmetic is one example of an area we can explore through play.

Every parent knows that children need space for unstructured play that helps them develop their creativity and problem-solving skills. Free-form experimentation leads to the rapid acquisition of information about the world. When children play together, they expand their social skills and strengthen the ability to regulate their emotions. Young animals, such as elephants, dogs, ravens, and crocodiles, also develop survival skills through play.

The benefits of play don’t disappear as soon as you become an adult. Even if we engage our curiosity in different ways as we grow up, a lot of learning and exploration still comes from analogous activities: things we do for the sheer fun of it.

When the pressure mounts to be productive every minute of the day, we have much to gain from doing all we can to carve out time to play. Take away prescriptions and obligations, and we gravitate towards whatever interests us the most. Just like children and baby elephants, we can learn important lessons through play. It can also give us a new perspective on topics we take for granted—such as the way we represent numbers.

***

Playing with symbols

The book Arithmetic, in addition to being a clear and engaging history of the subject, is a demonstration of how insights and understanding can be combined with enjoyment and fun. The best place to start the book is at the afterword, where author and mathematics professor Paul Lockhart writes, “I especially hope that I have managed to get across the idea of viewing your mind as a playground—a place to create beautiful things for your own pleasure and amusement and to marvel at what you’ve made and at what you have yet to understand.”

Arithmetic, the branch of math dealing with the manipulation and properties of numbers, can be very playful. After all, there are many ways to add and multiply numbers that in themselves can be represented in various ways. When we see six cows in a field, we represent that amount with the symbol 6. The Romans used VI. And there are many other ways that unfortunately can’t be typed on a standard English keyboard. If two more cows wander into the field, the usual method of counting them is to add 2 to 6 and conclude there are now 8 cows. But we could just as easily add 2 + 3 + 3. Or turn everything into fractions with a base of 2 and go from there.
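
To make the point concrete, here is a minimal Python sketch (ours, not Lockhart’s) that names the same quantity several different ways:

# A playful sketch (our illustration, not from the book): the same eight
# cows written as a Hindu-Arabic numeral, a Roman numeral, and a regrouped sum.

def to_roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral."""
    symbols = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
               (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
               (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, glyph in symbols:
        while n >= value:
            out.append(glyph)
            n -= value
    return "".join(out)

print(6 + 2)            # 8 -- the usual way of counting the herd
print(to_roman(6 + 2))  # VIII -- the Romans' symbols for the same amount
print(2 + 3 + 3)        # 8 -- a different grouping, same number of cows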

One of the most intriguing parts of the book is when Lockhart encourages us to step away from how we commonly label numbers so we can have fun experimenting with them. He says, “The problem with familiarity is not so much that it breeds contempt, but that it breeds loss of perspective.” So that we don’t get too hung up on symbols such as 4 and 5, Lockhart shows how any symbols can be used to complete the main arithmetic tasks, such as comparing and grouping. He shows how completely random symbols can represent amounts and gives insight into how they can be manipulated.

When we start to play with the representations, we connect to the underlying reasoning behind what we are doing. We could be counting for the purposes of comparison, and we could also be interested in learning the patterns produced by our actions. Lockhart explains that “every number can be represented in a variety of ways, and we want to choose a form that is as useful and convenient as possible.” We can thus choose our representations of numbers based on curiosity versus what is conventional. It’s easy to extrapolate this thinking to broader life situations. How often do we assume certain parameters are fixed just because that is what has always been done? What else could we accomplish if we let go of convention and focused instead on function?

***

Stepping away from requirements

We all use the Hindu-Arabic number system, which utilizes groups of tens. Ten singles are ten, ten tens are a hundred, and so on. It has a consistent logic to it, and it is a pervasive way of grouping numbers as they increase. But Lockhart explains that grouping numbers by ten is as arbitrary as the symbols we use to represent numbers. He explains how a society might group by fours or sevens. One of the most interesting ideas, though, comes when he’s explaining the groupings:

“You might think there is no question about it; we chose four as our grouping size, so that’s that. Of course we will group our groups into fours—as opposed to what? Grouping things into fours and then grouping our groups into sixes? That would be insane! But it happens all the time. Inches are grouped into twelves to make feet, and then three feet make a yard. And the old British monetary system had twelve pence to the shilling and twenty shillings to the pound.”
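
These mixed groupings are easy to play with in code. Here is a short sketch (again ours, not the book’s) that regroups a raw count under whatever grouping sizes you choose, including inches-feet-yards and the old pence-shillings-pounds:

# A hedged illustration of Lockhart's point: the same total expressed under
# different grouping rules, smallest unit listed first in `group_sizes`.

def regroup(total: int, group_sizes: list[int]) -> list[int]:
    """Split `total` base units into nested groups, largest unit first."""
    parts = []
    for size in group_sizes:      # e.g. [12, 3]: 12 inches/foot, 3 feet/yard
        total, remainder = divmod(total, size)
        parts.append(remainder)
    parts.append(total)           # whatever remains, in the largest unit
    return parts[::-1]

print(regroup(100, [12, 3]))     # [2, 2, 4]: 100 inches = 2 yd, 2 ft, 4 in
print(regroup(1000, [12, 20]))   # [4, 3, 4]: 1000 pence = 4 pounds, 3 shillings, 4 pence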

By reminding us of the options available in such a simple, everyday activity as counting, Lockhart opens a mental door. What other ways might we go about our tasks and solve our problems? It’s a reminder that most of our so-called requirements are ones that we impose on ourselves.

If we think back to being children, we often played with things in ways that were different from what they were intended for. Pots became drums and tape strung around the house became lasers. A byproduct of this type of play is usually learning—we learn what things are normally used for by playing with them. But that’s not the intention behind a child’s play. The fun comes first, and thus they don’t restrain themselves to convention.

***

Have fun with the unfamiliar

There are advantages and disadvantages to all counting systems. For Lockhart, the only way to discover what those are is to play around with them. And it is in the playing that we may learn more than arithmetic. For example, he says: “In fact, getting stuck (say on 7 + 8 for instance) is one of the best things that can happen to you because it gives you an opportunity to reinvent and to appreciate exactly what it is that you are doing.” In the case of adding two numbers, we “are rearranging numerical information for comparison purposes.”

The larger point is that getting stuck on anything can be incredibly useful. It forces you to stop and consider what it is you are really trying to achieve. Getting stuck can help you identify the first principles in your situation. In getting unstuck, we learn lessons that resonate and help us to grow.

Lockhart says of arithmetic that we need to “not let our familiarity with a particular system blind us to its arbitrariness.” We don’t have to use the symbol 2 to represent how many cows there are in a field, just as we don’t have to group sixty minutes into one hour. We may find those representations useful, but we also may not. There are some people in the world with so much money that the numbers that represent their wealth are almost nonsensical, and most people find the clock manipulation that is the annual flip to daylight saving time to be annoying and stressful.

Playing around with arithmetic can teach the broader lesson that we don’t have to keep using systems that no longer serve us well. Yet how many of us have a hard time letting go of the ineffective simply because it’s familiar?

Which brings us back to play. Play is often the exploration of the unfamiliar. After all, if you knew what the result would be, it likely wouldn’t be considered play. When we play we take chances, we experiment, and we try new combinations just to see what happens. We do all of this in the pursuit of fun because it is the novelty that brings us pleasure and makes play rewarding.

Lockhart makes a similar point about arithmetic:

“The point of studying arithmetic and its philosophy is not merely to get good at it but also to gain a larger perspective and to expand our worldview . . . Plus, it’s fun. Anyway, as connoisseurs of arithmetic, we should always be questioning and critiquing, examining and playing.”

***

We suggest that playing need not be confined to arithmetic. If you happen to enjoy playing with numbers, then go for it. Lockhart’s book gives great inspiration on how to have fun with numbers. Playing is inherently valuable and doesn’t need to be productive. Children and animals have no purpose for play; they merely do what’s fun. It just so happens that unstructured, undirected play often has incredibly powerful byproducts.

Play can lead to new ideas and innovations. It can also lead to personal growth and development, not to mention a better understanding of the world. And, by its definition, play leads to fun. Which is the best part. Arithmetic is just one example of an unexpected area we can approach with the spirit of play.

Chesterton’s Fence: A Lesson in Thinking

Chesterton’s Fence is a principle that reminds us to look before we leap. To understand before we act. It’s a cautionary reminder to understand why something is the way it is before meddling in change.

The principle comes from a parable by G.K. Chesterton.

There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

In its most concise version, Chesterton’s Fence states the following:

“Do not remove a fence until you know why it was put up in the first place.”

The lesson of Chesterton’s Fence is that what already exists likely serves purposes that are not immediately obvious.

Fences don’t appear by accident. They are built by people who planned them and had a reason to believe they would benefit someone. Before we take an ax to a fence, we must first understand the reason behind its existence.

The original reason might not have been a good one, and even if it was, things might have changed, but we need to be aware of it. Otherwise, we risk unleashing unintended consequences that spread like ripples on a pond, causing damage for years.

Elsewhere, in his essay collection Heretics, Chesterton makes a similar point, detailed here:

Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their un-mediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.

As simple as Chesterton’s Fence is as a principle, it teaches us an important lesson.

Many of the problems we face in life occur when we intervene with systems without an awareness of what the consequences could be. While we are well-intentioned, it’s easy to do more harm than good. If a fence exists, there is likely a reason for it.

Chesterton challenged the common belief that previous generations were foolish. If we fail to respect their judgment and understand their reasoning, we risk creating new, unexpected problems. People rarely do things without a reason, and just because we don’t understand something doesn’t mean it’s pointless.

Intellect is therefore a vital force in history, but it can also be a dissolvent and destructive power. Out of every hundred new ideas ninety-nine or more will probably be inferior to the traditional responses which they propose to replace. No one man, however brilliant or well-informed, can come in one lifetime to such fullness of understanding as to safely judge and dismiss the customs or institutions of his society, for these are the wisdom of generations after centuries of experiment in the laboratory of history.

Will and Ariel Durant

Consider the case of supposedly hierarchy-free companies. Some people believe that having management and an overall hierarchy is an imperfect system that stresses employees, potentially damaging their health. They argue that it allows for power abuse, encourages manipulative company politics, and makes it difficult for good ideas from the bottom to be heard.

However, eliminating hierarchies altogether in companies overlooks why they are so ubiquitous. Someone needs to make decisions and be held accountable for their outcomes. For example, people instinctively look to leaders for guidance during stress or disorganization. Without a formal hierarchy, an invisible one forms, which can be more difficult to navigate and may lead to the most charismatic or domineering individual taking control, rather than the most qualified.

While hierarchy-free companies are taking a bold risk by trying something new, their approach ignores Chesterton’s Fence and fails to address the underlying reasons for hierarchies in companies. Removing them doesn’t necessarily create a fairer or more productive system.

Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offence.

Robert Frost, Mending Wall

Think of the intricate web of social norms that govern human interaction. Many of these norms may seem arbitrary or outdated, ripe for reform. But each norm became a norm to serve some purpose – to foster cooperation, to prevent conflict, to maintain order. To sweep them away without understanding their role in the social ecosystem is to invite chaos.

Or consider the complex laws and regulations that structure our civilization. Each law, no matter how weird or burdensome, arose to address some problem or serve some constituency. Repealing them without understanding them risks the very issues they were designed to solve.

The point is not that the status quo is always right, that every fence should remain standing. Rather, it’s that reform should be preceded by understanding and that critique should be informed by context.

The point of Chesterton’s fence is not to hold on to the past, but to ensure we understand it before moving forward. We shouldn’t be too quick to dismiss things that seem pointless without first understanding their purpose.

Rory Sutherland makes this point with the example of a peacock’s tail. The tail’s value lies in its very inefficiency—it signals that a bird is healthy enough to waste energy growing it and strong enough to carry it around. Peahens use tails to choose mates with the best genes for their offspring. If an outside observer were to give peacocks regular, functional tails, it would be more practical, but it would strip away their ability to advertise genetic potential.

Changing Habits

We all try to change habits to improve our lives at some point. While it’s admirable to eliminate bad habits, many attempts fail because bad habits don’t appear out of nowhere. People don’t suddenly decide to start smoking, drink nightly, or play video games all night. Habits — good or bad — develop to serve an unfulfilled need, such as connection, comfort, or distraction.

Habits exist for a reason. Removing a habit without addressing the underlying need can lead to a replacement habit that might be just as harmful or worse. There are two ways to change habits. We can address the underlying need and eliminate the habit entirely, or we can replace the habit with a better (or less harmful) one.

In the end, Chesterton’s Fence is a metaphor for the hard-earned wisdom of the ages. A reminder to understand something before you change it, to respect the past, even if you want to change the future. You don’t need to be a slave to tradition, but you should approach what already exists with humility and curiosity.

The Art of Being Alone

Loneliness has more to do with our perceptions than with how much company we have. It’s just as possible to be painfully lonely surrounded by people as it is to be content with little social contact. Some people need extended periods of time alone to recharge; others would rather give themselves electric shocks than spend a few minutes with their thoughts. Here’s how we can change our perceptions by making and experiencing art.

***

At a time when many people are facing unprecedented amounts of time alone, it’s worth pausing to consider what it takes to turn difficult loneliness into enriching solitude. We are social creatures, and a sustained lack of satisfying relationships carries heavy costs for our mental and physical health. But when we are forced to spend more time alone than we might wish, there are ways we can compensate and find a fruitful sense of connection and fulfillment. One way to achieve this is by using our loneliness as a springboard for creativity.

“Loneliness, longing, does not mean one has failed but simply that one is alive.”

— Olivia Laing

Loneliness as connection

One way people have always coped with loneliness is through creativity. By transmuting their experience into something beautiful, isolated individuals throughout history have managed to substitute the sense of community they might have otherwise found in relationships with their creative outputs.

In The Lonely City: Adventures in the Art of Being Alone, Olivia Laing tells the stories of a number of artists who led isolated lives and found meaning in their work even if their relationships couldn’t fulfill them. While she focuses specifically on visual artists in New York over the last seventy years, their methods of using their loneliness and transmuting it into their art carry wide resonance. These particular artists tapped into sentiments many of us will experience at least once in our lives. They found beauty in loneliness and showed it to be something worth considering, not just something to run from.

The artist Edward Hopper (1882–1967) is known for his paintings of American cityscapes inhabited by closed-off figures who seem to embody a vision of modern loneliness. Laing found herself drawn to his signature images of uneasy individuals in sparse surroundings, often separated from the viewer by a window or some other barrier.

Why, then, do we persist in ascribing loneliness to his work? The obvious answer is that his paintings tend to be populated by people alone, or in uneasy, uncommunicative groupings of twos and threes, fastened into poses that seem indicative of distress. But there’s something else too; something about the way he contrives his city streets . . . This viewpoint is often described as voyeuristic, but what Hopper’s urban scenes also replicate is one of the central experiences of being lonely: the way a feeling of separation, of being walled off or penned in, combines with a sense of near unbearable exposure.

While Hopper intermittently denied that his paintings were about loneliness, he certainly experienced the sense of being walled off in a city. In 1910 he moved to Manhattan, after a few years spent mostly in Europe, and found himself struggling to get by. Not only were his paintings not selling, he also felt alienated by the city. Hopper worked on commissions and had few close relationships. Only in his forties did he marry, well past the window of acceptability for the time. Laing writes of his early time in New York:

This sense of separation, of being alone in a big city, soon began to surface in his art . . . He was determined to articulate the day-to-day experience of inhabiting the modern, electric city of New York. Working first with etchings and then in paint, Hopper began to produce a distinctive body of images that captured the cramped, sometimes alluring experience of urban living.

Hopper roamed the city at night, sketching scenes that caught his eye. This perspective meant that the viewer of his paintings finds themselves most often in the position of an observer detached from the scene in front of them. If loneliness can feel like being separated from the world, the windows Hopper painted are perhaps a physical manifestation of this.

By Laing’s description, Hopper transformed the isolation he may have experienced by depicting the experience of loneliness as a place in itself, inhabited by the many people sharing it despite their differences. She elaborates and states, “They aren’t sentimental, his pictures, but there is an extraordinary attentiveness to them. As if what he saw was as interesting as he kept insisting he needed it to be: worth the labor, the miserable effort of setting it down. As if loneliness was something worth looking at. More than that, as if looking itself was an antidote, a way to defeat loneliness’ strange, estranging spell.”

Hopper’s work shows us that one way to make friends with loneliness is to create work that explores and examines it. This not only offers a way to connect with those enduring the same experience but also turns isolation into creative material and robs it of some of its sting.

Loneliness as inspiration

A second figure Laing considers is Andy Warhol (1928–1987). Born Andrew Warhola, the artist has become an icon, his work widely known, someone whose fame renders him hard to relate to. When she began exploring his body of work, Laing found that “one of the interesting things about his work, once you stop to look, is the way the real, vulnerable human self remains stubbornly visible, exerting its own submerged pressure, its own mute appeal to the viewer.”

In particular, much of Warhol’s work pertains to the loneliness he felt throughout his life, no matter how surrounded he was by glittering friends and admirers.

Throughout Warhol’s oeuvre, we see his efforts to turn his own sense of being on the outside into art. A persistent theme in his work was speech. He made thousands of tapes of conversations, often using them as the basis for other works of art. For instance, Warhol’s book, a, A Novel, consists of transcribed tapes from between 1965 and 1967. The tape recorder was such an important part of his life, both a way of connecting with people and keeping them at a distance, that he referred to it as his wife. By listening to others and documenting the oddities of their speech, Warhol coped with feeling he couldn’t be heard. Laing writes, “he retained a typically perverse fondness for language errors. He was fascinated by empty or deformed language, by chatter and trash, by glitches and botches in conversation.” In his work, all speech mattered regardless of its content.

Warhol himself often struggled with speech, mumbling in interviews and being embarrassed by his heavy Pittsburgh accent, which rendered him easily misunderstood in school. Speech was just one factor that left him isolated at times. At age seven, Warhol was confined to his bed by illness for several months. He withdrew from his peers, focusing on making art with his mother, and never quite integrated into school again. After graduating from Carnegie Mellon University in 1949, Warhol moved to New York and sought his footing in the art world. Despite his rapid rise to success and fame, he remained held back by an unshakeable belief in his own inferiority and exclusion from existing social circles.

Becoming a machine also meant having relationships with machines, using physical devices as a way of filling the uncomfortable, sometimes unbearable space between self and world. Warhol could not have achieved his blankness, his enviable detachment, without the use of these charismatic substitutes for intimacy and love.

Later in the book, Laing visits the Warhol museum to see his Time Capsules, 610 cardboard boxes filled with objects collected over the course of thirteen years: “postcards, letters, newspapers, magazines, photographs, invoices, slices of pizza, a piece of chocolate cake, even a mummified human foot.” He added objects until each box was full, then transferred them to a storage unit. Some objects have obvious value, while others seem like trash. There is no particular discernable order to the collection, yet Laing saw in the Time Capsules much the same impulse reflected in Warhol’s tape recordings:

What were the Capsules, really? Trash cans, coffins, vitrines, safes; ways of keeping the loved together, ways of never having to admit to loss or feel the pain of loneliness . . . What is left after the essence has departed? Rind and skin, things you want to throw away but can’t.

The loneliness Warhol felt when he created works like the Time Capsules was more a psychological one than a practical one. He was no longer alone, but his early experiences of feeling like an outsider, and the things he felt set him apart from others, like his speech, marred his ability to connect. Loneliness, for Warhol, was perhaps more a part of his personality than something he could overcome through relationships. Even so, he was able to turn it into fodder for the groundbreaking art we remember him for. Warhol’s art communicated what he struggled to say outright. It was also a way of him listening to and seeing other people—by photographing friends, taping them sleeping, or recording their conversations—when he perhaps felt he couldn’t be heard or seen.

Where creativity takes us

Towards the end of the book, Laing writes:

There are so many things that art can’t do. It can’t bring the dead back to life, it can’t mend arguments between friends, or cure AIDS, or halt the pace of climate change. All the same, it does have some extraordinary functions, some odd negotiating ability between people, including people who have never met and yet who infiltrate and enrich each other’s lives. It does have a capacity to create intimacy; it does have a way of healing wounds, and better yet of making it apparent that not all wounds need healing and not all scars are ugly.

When we face loneliness in our lives, it is not always possible or even appropriate to deal with it by rushing to fill our lives with people. Sometimes we do not have that option; sometimes we’re not in the right space to connect deeply; sometimes we first just need to work through that feeling. One way we can embrace our loneliness is by turning to the art of others who have inhabited that same lonely city, drawing solace and inspiration from their creations. We can use that as inspiration in our own creative pursuits, which can help us work through difficult, and lonely, times.

The Availability Bias: How to Overcome a Common Cognitive Distortion

“The attention which we lend to an experience is proportional to its vivid or interesting character, and it is a notorious fact that what interests us most vividly at the time is, other things equal, what we remember best.”

—William James

The availability heuristic explains why winning an award makes you more likely to win another award. It explains why we sometimes avoid one thing out of fear and end up doing something else that’s objectively riskier. It explains why governments spend enormous amounts of money mitigating risks we’ve already faced. It explains why the five people closest to you have a big impact on your worldview. It explains why mountains of data indicating something is harmful don’t necessarily convince everyone to avoid it. It explains why it can seem as if everything is going well when the stock market is up. And it explains why bad publicity can still be beneficial in the long run.

Here’s how the availability heuristic works, how to overcome it, and how to use it to your advantage.

***

How the availability heuristic works

Before we explain the availability heuristic, let’s quickly recap the field it comes from.

Behavioral economics is a field of study bringing together knowledge from psychology and economics to reveal how real people behave in the real world. This is in contrast to the traditional economic view of human behavior, which assumed people always behave in accordance with rational, stable interests. The field largely began in the 1960s and 1970s with the work of psychologists Amos Tversky and Daniel Kahneman.

Behavioral economics posits that people often make decisions and judgments under uncertainty using imperfect heuristics, rather than by weighing up all of the relevant factors. Quick heuristics enable us to make rapid decisions without taking the time and mental energy to think through all the details.

Most of the time, they lead to satisfactory outcomes. However, they can bias us towards certain consistently irrational decisions that contradict what economics would tell us is the best choice. We usually don’t realize we’re using heuristics, and they’re hard to change even if we’re actively trying to be more rational.

One such cognitive shortcut is the availability heuristic, first studied by Tversky and Kahneman in 1973. We tend to judge the likelihood and significance of things based on how easily they come to mind. The more “available” a piece of information is to us, the more important it seems. The result is that we give greater weight to information we learned recently, because a news article read last night comes to mind more easily than a science class taken years ago. It’s too much work to try to comb through every piece of information that might be in our heads.

We also give greater weight to information that is shocking or unusual. Shark attacks and plane crashes strike us more than accidental drownings or car accidents do, so we overestimate the odds of the more dramatic events.

If we’re presented with a set of similar things in which one differs from the rest, we’ll find the outlier easier to remember. For example, in the sequence of characters “RTASDT9RTGS,” the character most commonly remembered is the “9” because it stands out from the letters.

In Behavioral Law and Economics, Timur Kuran and Cass Sunstein write:

“Additional examples from recent years include mass outcries over Agent Orange, asbestos in schools, breast implants, and automobile airbags that endanger children. Their common thread is that people tended to form their risk judgments largely, if not entirely, on the basis of information produced through a social process, rather than personal experience or investigation. In each case, a public upheaval occurred as vast numbers of players reacted to each other’s actions and statements. In each, moreover, the demand for swift, extensive, and costly government action came to be considered morally necessary and socially desirable—even though, in most or all cases, the resulting regulations may well have produced little good, and perhaps even relatively more harm.”

Narratives are more memorable than disjointed facts. There’s a reason why cultures around the world teach important life lessons and values through fables, fairy tales, myths, proverbs, and stories.

Personal experience can also make information more salient. If you’ve recently been in a car accident, you may well view car accidents as more common in general than you did before. The base rates haven’t changed; you just have an unpleasant, vivid memory coming to mind whenever you get in a car. We too easily assume that our recollections are representative and true and discount events that are outside of our immediate memory. To give another example, you may be more likely to buy insurance against a natural disaster if you’ve just been impacted by one than you are before it happens.

Anything that makes something easier to remember increases its impact on us. In an early study, Tversky and Kahneman asked subjects whether a random English word is more likely to begin with “K” or have “K” as the third letter. Seeing as it’s typically easier to recall words beginning with a particular letter, people tended to assume the former was more common. The opposite is true.
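
You can check this claim against any word list you have on hand. Here is a quick sketch (ours; the file path is a common Unix location and an assumption, not part of the original study):

# Count words that start with "k" versus words whose third letter is "k".

def k_position_counts(path: str = "/usr/share/dict/words") -> tuple[int, int]:
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if len(word) >= 3:
                first += word[0] == "k"   # booleans add as 0 or 1
                third += word[2] == "k"
    return first, third

first, third = k_position_counts()
print(f"'k' first: {first}, 'k' third: {third}")
# On typical English word lists, the third-letter count comes out higher,
# even though words beginning with "k" are far easier to recall.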

In Judgment Under Uncertainty: Heuristics and Biases, Tversky and Kahneman write:

“…one may estimate probability by assessing availability, or associative distance. Lifelong experience has taught us that instances of large classes are recalled better and faster than instances of less frequent classes, that likely occurrences are easier to imagine than unlikely ones, and that associative connections are strengthened when two events frequently co-occur.…For example, one may assess the divorce rate in a given community by recalling divorces among one’s acquaintances; one may evaluate the probability that a politician will lose an election by considering various ways in which he may lose support; and one may estimate the probability that a violent person will ‘see’ beasts of prey in a Rorschach card by assessing the strength of association between violence and beasts of prey. In all of these cases, the assessment of the frequency of a class or the probability of an event is mediated by an assessment of availability.”

They go on to write:

“That associative bonds are strengthened by repetition is perhaps the oldest law of memory known to man. The availability heuristic exploits the inverse form of this law, that is, it uses strength of association as a basis for the judgment of frequency. In this theory, availability is a mediating variable, rather than a dependent variable as is typically the case in the study of memory.”

***

How the availability heuristic misleads us

“People tend to assess the relative importance of issues by the ease with which they are retrieved from memory—and this is largely determined by the extent of coverage in the media.” —Daniel Kahneman, Thinking, Fast and Slow

To go back to the points made in the introduction of this post, winning an award can make you more likely to win another award because it gives you visibility, making your name come to mind more easily in connection to that kind of accolade. We sometimes avoid one thing in favor of something objectively riskier, like driving instead of taking a plane, because the dangers of the latter are more memorable. The five people closest to you can have a big impact on your worldview because you frequently encounter their attitudes and opinions, bringing them to mind when you make your own judgments. Mountains of data indicating something is harmful don’t always convince people to avoid it if those dangers aren’t salient, such as if they haven’t personally experienced them. It can seem as if things are going well when the stock market is up because it’s a simple, visible, and therefore memorable indicator. Bad publicity can be beneficial in the long run if it means something, such as a controversial book, gets mentioned often and is more likely to be recalled.

These aren’t empirical rules, but they’re logical consequences of the availability heuristic, in the absence of mitigating factors.

We are what we remember, and our memories have a significant impact on our perception of the world. What we end up remembering is influenced by factors such as the following:

  • Our foundational beliefs about the world
  • Our expectations
  • The emotions a piece of information inspires in us
  • How many times we’re exposed to a piece of information
  • The source of a piece of information

There is no real link between how memorable something is and how likely it is to happen. In fact, the opposite is often true. Unusual events stand out more and receive more attention than commonplace ones. As a result, the availability heuristic skews our perception of risks in two key ways:

We overestimate the likelihood of unlikely events. And we underestimate the likelihood of likely events.

Overestimating the risk of unlikely events leads us to stay awake at night, turning our hair grey, worrying about things that have almost no chance of happening. We can end up wasting enormous amounts of time, money, and other resources trying to mitigate things that have, on balance, a small impact. Sometimes those mitigation efforts end up backfiring, and sometimes they make us feel safer than they should.

On the flipside, we can overestimate the chance of unusually good things happening to us. Looking at everyone’s highlights on social media, we can end up expecting our own lives to also be a procession of grand achievements and joys. But most people’s lives are mundane most of the time, and the highlights we see tend to be exceptional ones, not routine ones.

Underestimating the risk of likely events leads us to fail to prepare for predictable problems and occurrences. We’re so worn out from worrying about unlikely events that we don’t have the energy to think about what’s in front of us. If you’re stressed and anxious much of the time, you’ll have a hard time paying attention to the signals that really matter.

All of this is not to say that you shouldn’t prepare for the worst, or that unlikely things never happen (as Littlewood’s Law states, you can expect a one-in-a-million event at least once per month). Rather, we should be careful about preparing only for the extremes just because those extremes are more memorable.

***

How to overcome the availability heuristic

Knowing about a cognitive bias isn’t usually enough to overcome it. Even people like Kahneman who have studied behavioral economics for many years sometimes struggle with the same irrational patterns. But being aware of the availability heuristic is helpful for the times when you need to make an important decision and can step back to make sure it isn’t distorting your view. Here are five ways of mitigating the availability heuristic.

#1. Always consider base rates when making judgments about probability.
The base rate of something is its average prevalence within a particular population. For example, around 10% of the population is left-handed. If you had to guess the likelihood of a random person being left-handed, you would be correct to say 1 in 10 in the absence of other relevant information. When judging the probability of something, look at the base rate whenever possible (a worked example follows this list).

#2. Focus on trends and patterns.
The mental model of regression to the mean teaches us that extreme events tend to be followed by more moderate ones. Outlier events are often the result of luck and randomness. They’re not necessarily instructive. Whenever possible, base your judgments on trends and patterns—the longer term, the better. Track record is everything, even if outlier events are more memorable.

#3. Take the time to think before making a judgment.
The whole point of heuristics is that they save the time and effort needed to parse a ton of information and make a judgment. But, as we always say, you can’t make a good decision without taking time to think. There’s no shortcut for that. If you’re making an important decision, the only way to get around the availability heuristic is to stop and go through the relevant information, rather than assuming whatever comes to mind first is correct.

#4. Keep track of information you might need to use in a judgment far off in the future.
Don’t rely on memory. In Judgment in Managerial Decision-Making, Max Bazerman and Don Moore present the example of workplace annual performance appraisals. Managers tend to base their evaluations more on the prior three months than the nine months before that. It’s much easier than remembering what happened over the course of an entire year. Managers also tend to give substantial weight to unusual one-off behavior, such as a serious mistake or notable success, without considering the overall trend. In this case, noting down observations on someone’s performance throughout the entire year would lead to a more accurate appraisal.

#5. Go back and revisit old information.
Even if you think you can recall everything important, it’s a good idea to go back and refresh your memory of relevant information before making a decision.
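
To make tip #1 concrete, here is a minimal sketch of base-rate reasoning using the left-handedness figure above. The accuracy numbers for the hypothetical report are invented purely to show how the base rate anchors the answer:

# Bayes' rule with the 10% base rate from the text. The report's accuracy
# figures below are made-up assumptions, for illustration only.
base_rate = 0.10       # P(left-handed), from the text
hit_rate = 0.80        # P(report says "lefty" | actually left-handed) -- assumed
false_alarm = 0.20     # P(report says "lefty" | actually right-handed) -- assumed

p_report = hit_rate * base_rate + false_alarm * (1 - base_rate)
posterior = hit_rate * base_rate / p_report
print(f"P(left-handed | report) = {posterior:.2f}")  # ~0.31, far below 0.80

Even a report that is right 80% of the time should move our estimate to only about 31%, because the 10% base rate dominates.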

The availability heuristic is part of Farnam Street’s latticework of mental models.

Better Thinking & Incentives: Lessons From Shakespeare https://canvasly.link/lessons-shakespeare/ Mon, 24 May 2021 12:55:41 +0000 https://canvasly.link/?p=44159 At Farnam Street, we aim to master the best of what other people have figured out. Not surprisingly, it’s quite a lot. The past is full of useful lessons that have much to teach us. Sometimes, we just need to remember what we’re looking for and why. Life can be overwhelming. It seems like there’s …

At Farnam Street, we aim to master the best of what other people have figured out. Not surprisingly, it’s quite a lot. The past is full of useful lessons that have much to teach us. Sometimes, we just need to remember what we’re looking for and why.

Life can be overwhelming. It seems like there’s a new technology, a new hack, a new way of doing things, or a new way we need to be every five minutes. Figuring out what to pay attention to is hard. It’s also a task we take seriously at Farnam Street. If we want to be a signal in the noise, we have to find other signals ourselves.

That’s why we spend a lot of time in the past. We like reading about history, and we like to look for timeless ideas. Learning information that is going to stay relevant for one hundred years is a better time investment than trying to digest information that will expire next week.

However, the past is a big place containing a lot of information. So it’s always appreciated when we find a source that has curated some timeless lessons from the past for us. In his book How to Think Like Shakespeare, professor Scott Newstok dives into history to pull out some of what humanity has already learned about better thinking and applying incentives.

***

Better thinking and education

“Doing and thinking are reciprocal practices.”

How do we get better at thinking? When you think about something, hopefully you learn more about it. But then the challenge becomes doing something with what you’ve learned. Often, we don’t want our knowledge to stay theoretical. We’ve learned something in order to do something. We want to put our knowledge into practice somehow.

The good news is, doing and thinking reinforce and augment each other. It’s a subtle but powerful feedback loop. You learn something. Armed with that new information, you do something. Informed by the results of your doing, you learn something new.

Throughout his book, Newstok weaves in many ideas on how to think better and how to engage with information. One of the ways to think better is to complement thinking with doing. For centuries, we’ve had the concept of “craft,” loosely understood as the knowledge one attains by doing. Newstok explains that the practice of any craft “requires—well, practice. Its difficult-to-codify habits are best transmitted in person, through modeling, observation, imitation, correction, [and] adjustment.” You develop a deeper understanding when you apply your knowledge to creating something tangible. Crafting a piece of furniture is similar to crafting a philosophical argument in the sense that actually doing the work is what really develops knowledge. “Incorporating this body of knowledge, learning how to improvise within constraints, [and] appreciating how limited resources shape solutions to problems” lies at the core of mastery.

The application of what you’ve ingested in order to really learn it reminds us of the Feynman Learning Technique. To really master a subject, teach it to a novice. When you break down what you think you know into a teachable format, you begin to truly know something.

Newstok writes, “It’s human to avoid the hard work of thinking, reading, and writing. But we all fail when technology becomes a distraction from, or, worse, a substitute for, the interminable yet rewarding task of confronting the object under study.” Basically, it’s human to be lazy. It’s easier to cruise around on social media than put your ideas into action.

Better thinking takes strength. You have to be able to tune out the noise and walk away from the quick dopamine hits to put the effort into attempting to do something with your thoughts. You also need strength to confront the results and figure out how to do better next time. And even if your job is figuring out how to be better on social media, focusing on the relationship between doing and thinking will produce better results than undirected consumption.

The time and space to do something with our thoughts is how we transform what we learn into something we know.

Admittedly, knowing something often requires courage. First, the courage to admit what you don’t know, and second, the courage to be the least smart person in the room. But when you master a subject, the rewards are incredible. You have flexibility and understanding and options to keep learning.

***

Applying incentives

“If you create an incentive to hit the target, it’s all the less likely you will do so.”

Newstok explains how the wrong incentives do far more damage than diminishing our motivation to attain a goal. Applying bad incentives can diminish the effectiveness of an entire system. You get what you measure, because measuring something incentivizes you to do it.

He explores the problem of incentives in the American education system. The priority is on the immediate utility of information because the incentive is to pass tests. For students, passing tests is the path to higher education, where they can pass more tests and get validated as being a person who knows something. For teachers, students passing tests is the path to higher rankings, more students, and more funding.

Newstok suggests we don’t need to worry so much about being right and feeding the continual assessment pressure this attitude creates. Why? Because we don’t know exactly what we will need to know in the future. He writes, “When Shakespeare was born there wasn’t yet a professional theater in London. His ‘useless’ Latin drills prepared him for a job that didn’t yet exist.…Why are we wasting precious classroom hours on fleeting technical skills—skills that will become obsolete before graduates enter the workforce?” It seems that a better approach is to incentivize teaching tools that will give students the flexibility to develop their thinking in response to changes around them.

Considering the proper application of incentives in relation to future goals has ramifications in all organizations, not just schools.

A common problem in many organizations is that the opportunities to accrue further reward and compensation can only come by climbing ever higher in the pyramid. Thus people are incentivized to get into management, something they may have no interest in and may not be any good at. Not everyone who invents amazing widgets should manage a group of widget inventors. By not incentivizing alternate paths, the organization ends up losing the amazing widget inventors, handicapping itself by diminishing its adaptability.

We’ve written before about another common problem in so many offices: compensation is tied to visibility, physical presence, or volume of output rather than to quality of contribution. To be fair, quality is harder to measure. But it is really more about organizational attitude. Do you want people to be busy typing or busy thinking? We all say we want thinkers, yet we rarely give anyone the time to think. In this case, we end up with organizations that are able only to produce more of the same.

And paying people by, say, profit-sharing can be great, as it incentivizes collaboration and commitment to the health of the organization. But even this needs to be managed so that the incentives don’t end up prioritizing short-term money at the expense of long-term success—much like students learning only to pass tests at the expense of their future knowledge and resiliency.

Newstok suggests instead that “we all need practice in curiosity, intellectual agility, the determination to analyze, commitment to resourceful communication, historically and culturally situated reflectiveness, [and] the confidence to embrace complexity. In short: the ambition to create something better, in whatever field.” We don’t need to be incentivized for immediate performance. Rather, we need incentives to explore what might need to be known to face future challenges and respond to future opportunities.

***

The most fascinating thing about Newstok’s book is that it rests on ideas that are hundreds of years old. The problems he explores are not new, and the answers he presents to the challenges of better thinking and aligning incentives are based on perspectives provided in history books.

So maybe the ultimate lesson is the reminder that not every problem needs to be approached as a blank slate. Humanity has developed some wisdom and insight on a few topics. Before we reinvent the wheel, it’s worth looking back to leverage what we’ve already figured out.

Your Thinking Rate Is Fixed

You can’t force yourself to think faster. If you try, you’re likely to end up making much worse decisions. Here’s how to improve the actual quality of your decisions instead of chasing hacks to speed them up.

If you’re a knowledge worker, as an ever-growing proportion of people are, the product of your job is decisions.

Much of what you do day to day consists of trying to make the right choices among competing options, meaning you have to process large amounts of information, discern what’s likely to be most effective for moving towards your desired goal, and try to anticipate potential problems further down the line. And all the while, you’re operating in an environment of uncertainty where anything could happen tomorrow.

When the product of your job is your decisions, you might find yourself wanting to be able to make more decisions more quickly so you can be more productive overall.

Chasing speed is a flawed approach. Because decisions—at least good ones—don’t come out of thin air. They’re supported by a lot of thinking.

While experience and education can grant you the pattern-matching abilities to make some kinds of decisions using intuition, you’re still going to run into decisions that require you to sit and consider the problem from multiple angles. You’re still going to need to schedule time to do nothing but think. Otherwise making more decisions will make you less productive overall, not more, because your decisions will suck.

Here’s a secret that might sound obvious but can actually transform the way you work: you can’t force yourself to think faster. Our brains just don’t work that way. The rate at which you make mental discernments is fixed.

Sure, you can develop your ability to do certain kinds of thinking faster over time. You can learn new methods for decision-making. You can develop your mental models. You can build your ability to focus. But if you’re trying to speed up your thinking so you can make an extra few decisions today, forget it.

***

Beyond the “hurry up” culture

Management consultant Tom DeMarco writes in Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency that many knowledge work organizations have a culture where the dominant message at all times is to hurry up.

Everyone is trying to work faster at all times, and they pressure everyone around them to work faster, too. No one wants to be perceived as a slacker. The result is that managers put pressure on their subordinates through a range of methods. DeMarco lists the following examples:

  • “Turning the screws on delivery dates (aggressive scheduling)
  • Loading on extra work
  • Encouraging overtime
  • Getting angry when disappointed
  • Noting one subordinate’s extraordinary effort and praising it in the presence of others
  • Being severe about anything other than superb performance
  • Expecting great things of all your workers
  • Railing against any apparent waste of time
  • Setting an example yourself (with the boss laboring so mightily there is certainly no time for anyone else to goof off)
  • Creating incentives to encourage desired behavior or results.”

All of these things increase pressure in the work environment and repeatedly reinforce the “hurry up!” message. They make managers feel like they’re moving things along faster. That way if work isn’t getting done, it’s not their fault. But, DeMarco writes, they don’t lead to meaningful changes in behavior that make the whole organization more productive. Speeding up often results in poor decisions that create future problems.

The reason more pressure doesn’t mean better productivity is that the rate at which we think is fixed.

We can’t force ourselves to start making faster decisions right now just because we’re faced with an unrealistic deadline. DeMarco writes, “Think rate is fixed. No matter what you do, no matter how hard you try, you can’t pick up the pace of thinking.”

If you’re doing a form of physical labor, you can move your body faster when under pressure. (Of course, if it’s too fast, you’ll get injured or won’t be able to sustain it for long.)

If you’re a knowledge worker, you can’t pick up the pace of mental discriminations just because you’re under pressure. Chances are good that you’re already going as fast as you can. Because guess what? You can’t voluntarily slow down your thinking, either.

***

The limits of pressure

Faced with added stress and unable to accelerate our brains instantaneously, we can do any of three things:

  • “Eliminate wasted time.
  • Defer tasks that are not on the critical path.
  • Stay late.”

Even if those might seem like positive things, they’re less advantageous than they appear at first glance. Their effects are marginal at best. The smarter and more qualified the knowledge worker, the less time they’re likely to be wasting anyway. Most people don’t enjoy wasting time. What you’re more likely to end up eliminating is valuable slack time for thinking.

Deferring non-critical tasks doesn’t save any time overall, it just pushes work forwards—to the point where those tasks do become critical. Then something else gets deferred.

Staying late might work once in a while. Again, though, its effects are limited. If we keep doing it night after night, we run out of energy, our personal lives suffer, and we make worse decisions as a result.

None of the outcomes of increasing pressure result in more or better decisions. None of them speed up the rate at which people think. Even if an occasional, tactical increase in pressure (whether it comes from the outside or we choose to apply it to ourselves) can be effective, ongoing pressure increases are unsustainable in the long run.

***

Think rate is fixed

It’s incredibly important to truly understand the point DeMarco makes in this part of Slack: the rate at which we process information is fixed.

When you’re under pressure, the quality of your decisions plummets. You miss possible angles, you don’t think ahead, you do what makes sense now, you panic, and so on. Often, you make a snap judgment, then grasp for whatever information will justify it to the people you work with. You don’t have breathing room to stress-test your decisions.

The clearer you can think, the better your decisions will be. Trying to think faster can only cloud your judgment. It doesn’t matter how many decisions you make if they’re not good ones. As DeMarco reiterates throughout the book, you can be efficient without being effective.

Try making a list of the worst decisions you’ve made so far in your career. There’s a good chance most of them were made under intense pressure or without taking much time over them.

At Farnam Street, we write a lot about how to make better decisions, and we share a lot of tools for better thinking. We made a whole course on decision-making. But none of these resources are meant to immediately accelerate your thinking. Many of them require you to actually slow down a whole lot and spend more time on your decisions. They improve the rate at which you can do certain kinds of thinking, but it’s not going to be an overnight process.

***

Upgrading your brain

Some people read one of our articles or books about mental models and complain that it’s not an effective approach because it didn’t lead to an immediate improvement in their thinking. That’s unsurprising; our brains don’t work like that. Integrating new, better approaches takes a ton of time and repetition, just like developing any other skill. You have to keep on reflecting and making course corrections.

At the end of the day, your brain is going to go where it wants to go. You’re going to think the way you think. However much you build awareness of how the world works and learn how to reorient, you’re still, to use Jonathan Haidt’s metaphor from The Righteous Mind, a tiny rider atop a gigantic elephant. None of us can reshape how we think overnight.

Making good decisions is hard work. There’s a limit to how many decisions you can make in a day before you need a break. On top of that, many knowledge workers are in fields where the most relevant information has a short half-life. Making good decisions requires constant learning and verifying what you think you know.

If you want to make better decisions, you need to do everything you can to reduce the pressure you’re under. You need to let your brain take whatever time it needs to think through the problem at hand. You need to get out of a reactive mode, recognize when you need to pause, and spend more time looking at problems.

A good metaphor is installing an update to the operating system on your laptop. Would you rather install an update that fixes bugs and improves existing processes, or one that just makes everything run faster? Obviously, you’d prefer the former. The latter would just lead to more crashes. The same is true for updating your mental operating system.

Stop trying to think faster. Start trying to think better.

The post Your Thinking Rate Is Fixed appeared first on Farnam Street.

How Julia Child Used First Principles Thinking https://canvasly.link/how-julia-child-used-first-principles-thinking/ Mon, 16 Nov 2020 14:00:47 +0000 https://canvasly.link/?p=43024 There’s a big difference between knowing how to follow a recipe and knowing how to cook. If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia Child, “The French Chef”, did throughout her career. Following a recipe might get you …

There’s a big difference between knowing how to follow a recipe and knowing how to cook. If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia Child, “The French Chef”, did throughout her career.

Following a recipe might get you the results you want, but it doesn’t teach you anything about how cooking works at the foundational level. Or what to do when something goes wrong. Or how to come up with your own recipes when you open the fridge on a Wednesday night and realize you forgot to go grocery shopping. Or how to adapt recipes for your own dietary needs.

Adhering to recipes will only get you so far, and it certainly won’t result in you coming up with anything new or creative.

People who know how to cook understand the basic principles that make food taste, look, and smell good. They have confidence in troubleshooting and solving problems as they go—or adjusting to unexpected outcomes. They can glance at an almost barren kitchen and devise something delicious. They know how to adapt to a guest with a gluten allergy or a child who doesn’t like green food. Sure, they might consult a recipe when it makes sense to do so. But they’re not dependent on it, and they can change it up based on their particular circumstances.

There’s a reason many cooking competition shows feature a segment where contestants need to design their own recipe from a limited assortment of ingredients. Effective improvisation shows the judges that someone can actually cook, not just follow recipes.

We can draw a strong parallel from cooking to thinking. If you want to learn how to think for yourself, you can’t just follow what someone else came up with. You need to understand first principles if you want to be able to solve complex problems or think in a unique, creative fashion. First principles are the building blocks of knowledge, the foundational understanding acquired from breaking something down into its most essential concepts.

One person who exemplifies first principles thinking is Julia Child, an American educator who charmed audiences with her classes, books, and TV shows. First principles thinking enabled Julia to both master her own struggles with cooking and then teach the world to do the same. In Something from the Oven, Laura Shapiro tells the charming story of how she did it. Here’s what we can learn about better thinking from the “French Chef.”

***

Gustave Flaubert wrote that “talent is a long patience,” something which was all too true for Julia. She wasn’t born with an innate skill for or even love of cooking. Her starting point was falling in love with her future husband, Paul Child, in Ceylon in 1944 when both were working for the Office of Strategic Services. Paul adored food, and his delight in it inspired Julia. When they each returned to their separate homes after the war, she decided she would learn to cook. Things got off to a bad start, as Shapiro explains:

“At first she tried to teach herself at home, but it was frustrating to bushwhack her way through one dish after another. She never knew whether she would find success or failure when she opened the oven door, and worst of all, she didn’t know why this recipe worked and that one didn’t.”

Seeking expert guidance, Julia started taking cooking classes three times a week at a Beverly Hills cooking school. Even that didn’t help much, however, and after she married Paul a year later, her experiments in their Washington, DC kitchen continued to go awry. Only when the couple moved to Paris did an epiphany strike. Julia’s encounters with French cooking instilled in her an understanding of the need for first principles thinking. Trying to follow recipes without comprehending their logic wasn’t going to produce delicious results. She needed to learn how food actually worked.

In 1949, at the age of 37, she enrolled in classes at the famous Cordon Bleu school of cooking. It changed her forever:

“Learning to cook at the Cordon Bleu meant breaking down every dish into its smallest individual steps and doing each laborious and exhausting procedure by hand. In time Child could bone a duck while leaving the skin intact, extract the guts of a chicken through a hole she made in the neck, make a ham mousse by pounding the ham to a pulp with a mortar and pestle, and turn out a swath of elaborate dishes from choucroute garnie to vol-au-vent financière. None of this came effortlessly but she could do it. She had the brains, the considerable physical strength it demanded, and her vast determination. Most important, she could understand for the first time the principles governing how and why a recipe worked as it did.”

Julia had found her calling. After six months of Cordon Bleu classes, she continued studying independently for a year. She immersed herself in French cooking, filled her home with equipment, and befriended two women who shared her passion, Simone Beck and Louisette Bertholle. In the early 1950s, they opened a tiny school together, with a couple of students working out of Julia’s kitchen. She was “adamant that the recipes used in class be absolutely reliable, and she tested every one of them for what she called ‘scientific workability.’” By this, Julia meant that the recipes needed to make sense per her understanding of the science of cooking. If they didn’t agree with the first principles she knew, they were out.

***

When Paul transferred to Marseille, Julia was sad to leave her school. But she and her friends continued their collaboration, working at a distance on a French cookery book aimed at Americans. For what would become Mastering the Art of French Cooking, Julia focused on teaching first principles in a logical order, not copying down mere recipes.

She’d grown frustrated at opening recipe books to see instructions she knew couldn’t work because they contradicted the science of cooking—for example, recipes calling for temperatures she knew would burn a particular ingredient, or omitting key ingredients like baking soda, without which a particular effect would be impossible. It was clear no one had bothered to test anything before they wrote it down, and she was determined not to make the same mistake.

Mastering the Art of French Cooking came out in 1961. Shapiro writes, “The reviews were excellent, there was a gratifying burst of publicity all across the country, and the professional food world acknowledged a new star in Julia Child. What nobody knew for sure was whether everyday homemakers in the nation that invented the TV dinner would buy the book.” Though the book was far from a flop, it was the TV show it inspired that catapulted Julia and her approach to cooking to stardom.

The French Chef first aired in 1963 and was an enormous success from the start. Viewers adored how Julia explained why she did what she did and how it worked. They also loved her spontaneous capacity to adapt to unanticipated outcomes. It was usually only possible to shoot one take, so Julia needed to keep going no matter what happened.

Her show appealed to every kind of person because it could make anyone a better cook—or at least help them understand the process better. Not only was Julia “a striking image of unaffected good nature,” the way she taught really worked. Viewers and readers who followed her guidance discovered a way of cooking that made them feel in control.

Julia “believed anybody could cook with distinction from scratch and that’s what she was out to prove.” Many of the people who watched The French Chef were women who needed a new way to think about cooking. As gender roles were being redefined and more women entered the workforce, it no longer seemed like something they were obligated by birth to do. At the same time, treating it as an undesirable chore was no more pleasant than treating it as a duty. Julia taught them another way. Cooking could be an intellectual, creative, enjoyable activity. Once you understood how it actually worked, you could learn from mistakes instead of repeating them again and again.

Shapiro explains that “Child was certainly not the first TV chef. The genre was almost as old as TV itself. But she was the first to make it her own and have an enduring societal impact.”

***

If you can master the first principles within a domain, you can see much further than those who are just following recipes. That’s what Julia managed to do, and it’s part of why she stood out from the other TV chefs of her time—and still stands out today. By mastering first principles, you can find better ways of doing things, instead of having to stick to conventions. If Julia thought a modern piece of equipment worked better than a traditional one or that part of a technique was a pointless custom, she didn’t hesitate to make changes as she saw fit. Once you know the why of something, it is easy to modify the how to achieve your desired result.

The lessons of first principles in cooking are the same for the first principles in any domain. Looking for first principles is just a way of thinking. It’s a commitment to understanding the foundation that something is built on and giving yourself the freedom to adapt, develop, and create. Once you know the first principles, you can keep learning more advanced concepts as well as innovating for yourself.

The post How Julia Child Used First Principles Thinking appeared first on Farnam Street.

Being Smart is Not Enough https://canvasly.link/being-smart-is-not-enough/ Mon, 28 Sep 2020 14:57:47 +0000 https://canvasly.link/?p=42793 When hiring a team, we tend to favor the geniuses who hatch innovative ideas, but overlook the butterflies, the crucial ones who share and implement them. Here’s why it’s important to be both smart AND social. *** In business, it’s never enough to have a great idea. For any innovation to be successful, it has …

When hiring a team, we tend to favor the geniuses who hatch innovative ideas, but overlook the butterflies, the crucial ones who share and implement them. Here’s why it’s important to be both smart AND social.

***

In business, it’s never enough to have a great idea. For any innovation to be successful, it has to be shared, promoted, and bought into by everyone in the organization. Yet often we focus on the importance of those great ideas and seem to forget about the work that is required to spread them around.

Whenever we are building a team, we tend to look for smarts. We are attracted to those with lots of letters after their names or fancy awards on their resumes. We assume that if we hire the smartest people we can find, they will come up with new, better ways of doing things that save us time and money.

Conversely, we often look down on predominantly social people. They seem to spend too much time gossiping and not enough time working. We assume they’ll be too busy engaging on social media or away from their desks too often to focus on their duties, and thus we avoid hiring them.

Although we aren’t going to tell you to swear off smarts altogether, we are here to suggest that maybe it’s time to reconsider the role that social people play in cultural growth and the diffusion of innovation.

In his book, The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Joseph Henrich explores the role of culture in human evolution. One point he makes is that it’s not enough for a species to be smart. What counts far more is having the cultural infrastructure to share, teach, and learn.

Consider two very large prehuman populations, the Geniuses and the Butterflies. Suppose the Geniuses will devise an invention once in 10 lifetimes. The Butterflies are much dumber, only devising the same invention once in 1000 lifetimes. So, this means that the Geniuses are 100 times smarter than the Butterflies. However, the Geniuses are not very social and have only 1 friend they can learn from. The Butterflies have 10 friends, making them 10 times more social.

Now, everyone in both populations tries to obtain an invention, both by figuring it out for themselves and by learning from friends. Suppose learning from friends is difficult: if a friend has it, a learner only learns it half the time. After everyone has done their own individual learning and tried to learn from their friends, do you think the innovation will be more common among the Geniuses or the Butterflies?

Well, among the Geniuses a bit fewer than 1 out of 5 individuals (18%) will end up with the invention. Half of those Geniuses will have figured it out all by themselves. Meanwhile, 99.9% of Butterflies will have the innovation, but only 0.1% will have figured it out by themselves.

Wow.
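If you want to check the arithmetic, Henrich’s toy example falls out of a simple fixed-point calculation. Here is a minimal Python sketch; the modeling details (everyone first tries to invent alone, then tries to learn from each friend independently, and a large population settles into an equilibrium share) are our reading of the example, not code from the book:

    def equilibrium_share(invent_rate, n_friends, transmit_rate=0.5):
        # Share of a large population that ends up with the invention,
        # found by iterating the model to its fixed point.
        p = invent_rate  # start from individual invention alone
        for _ in range(1000):
            # You lack the invention only if you fail to invent it yourself
            # AND fail to learn it from every one of your friends.
            p = 1 - (1 - invent_rate) * (1 - transmit_rate * p) ** n_friends
        return p

    geniuses = equilibrium_share(invent_rate=1 / 10, n_friends=1)        # ~0.18
    butterflies = equilibrium_share(invent_rate=1 / 1000, n_friends=10)  # ~0.999

    print(f"Geniuses: {geniuses:.1%} have it; {0.1 / geniuses:.0%} of those invented it")
    print(f"Butterflies: {butterflies:.1%} have it; {0.001 / butterflies:.1%} of those invented it")

Running it reproduces the numbers above: roughly 18% of Geniuses end up with the invention (about half of them self-taught), while 99.9% of Butterflies end up with it, almost none of them through their own ingenuity.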

What if we take this thinking and apply it to the workplace? Of course you want to have smart people. But you don’t want an organization full of Geniuses. They might come up with a lot, but without being able to learn from each other easily, many of their ideas won’t have any uptake in the organization. Instead, you’d want to pair Geniuses with Butterflies—socially attuned people who are primed to adopt the successful behaviors of those around them.

If you think you don’t need Butterflies because you can just put Genius innovations into policy and procedure, you’re missing the point. Sure, some brilliant ideas are concrete, finite, and visible. Those are the ones you can identify and implement across the organization from the top down. But some of the best ideas happen on the fly in isolated, one-off situations as responses to small changes in the environment. Perhaps there’s a minor meeting with a client, and the Genius figures out a new way of describing your product that really resonates. The Genius, though, is not a teacher. It worked for them and they keep repeating the behavior, but it doesn’t occur to them to teach someone else. And they don’t pick up on other tactics to further refine their innovation.

But the Butterfly who went to the meeting with the Genius? They pick up on the successful new product description right away. They emulate it in all meetings from then on. They talk about it with their friends, most of whom are also Butterflies. Within two weeks, the new description has taken off because of the propensity for cultural learning embedded in the social Butterflies.

The lesson here is to hire both types of people. Know that it’s the Geniuses who innovate, but it’s the Butterflies who spread that innovation around. Both components are required for successfully implementing new, brilliant ideas.

The post Being Smart is Not Enough appeared first on Farnam Street.

Job Interviews Don’t Work https://canvasly.link/job-interviews/ Mon, 06 Jul 2020 11:00:56 +0000 https://canvasly.link/?p=42535 Better hiring leads to better work environments, less turnover, and more innovation and productivity. When you understand the limitations and pitfalls of the job interview, you improve your chances of hiring the best possible person for your needs. *** The job interview is a ritual just about every adult goes through at least once. They …

Better hiring leads to better work environments, less turnover, and more innovation and productivity. When you understand the limitations and pitfalls of the job interview, you improve your chances of hiring the best possible person for your needs.

***

The job interview is a ritual just about every adult goes through at least once. They seem to be a ubiquitous part of most hiring processes. The funny thing about them, however, is that they take up time and resources without actually helping to select the best people to hire. Instead, they promote a homogenous workforce where everyone thinks the same.

If you have any doubt about how much you can get from an interview, think of what’s involved for the person being interviewed. We’ve all been there. The night before, you dig out your smartest outfit, iron it, and hope your hair lies flat for once. You frantically research the company, reading every last news article based on a formulaic press release, every blog post by the CEO, and every review by a disgruntled former employee.

After a sleepless night, you trek to their office, make awkward small talk, then answer a set of predictable questions. What’s your biggest weakness? Where do you see yourself in five years? Why do you want this job? Why are you leaving your current job? You reel off the answers you prepared the night before, highlighting the best of the best. All the while, you’re reminding yourself to sit up straight, don’t bite your nails, and keep smiling.

It’s not much better on the employer’s side of the table. When you have a role to fill, you select a list of promising candidates and invite them for an interview. Then you pull together a set of standard questions to riff off, doing a little improvising as you hear their responses. At the end of it all, you make some kind of gut judgment about the person who felt right—likely the one you connected with the most in the short time you were together.

Is it any surprise that job interviews don’t work when the whole process is based on subjective feelings? They are in no way the most effective means of deciding who to hire because they maximize the role of bias and minimize the role of evaluating competency.

What is a job interview?

“In most cases, the best strategy for a job interview is to be fairly honest, because the worst thing that can happen is that you won’t get the job and will spend the rest of your life foraging for food in the wilderness and seeking shelter underneath a tree or the awning of a bowling alley that has gone out of business.”

— Lemony Snicket, Horseradish

When we say “job interviews” throughout this post, we’re talking about the type of interview that has become standard in many industries and even in universities: free-form interviews in which candidates sit in a room with one or more people from a prospective employer (often people they might end up working with) and answer unstructured questions. Such interviews tend to focus on how a candidate behaves generally, emphasizing factors like whether they arrive on time or if they researched the company in advance. While questions may ostensibly be about predicting job performance, they tend to select for traits like charisma rather than for actual competence.

Unstructured interviews can make sense for certain roles. The ability to give a good first impression and be charming matters for a salesperson. But not all roles need charm, and just because you don’t want to hang out with someone after an interview doesn’t mean they won’t be an amazing software engineer. In a small startup with a handful of employees, someone being “one of the gang” might matter because close-knit friendships are a strong motivator when work is hard and pay is bad. But that group mentality may be less important in a larger company in need of diversity.

Considering the importance of hiring and how much harm getting it wrong can cause, it makes sense for companies to study and understand the most effective interview methods. Let’s take a look at why job interviews don’t work and what we can do instead.

Why job interviews are ineffective

Discrimination and bias

Information like someone’s age, gender, race, appearance, or social class shouldn’t dictate if they get a job or not—their competence should. But that’s unfortunately not always the case. Interviewers can end up picking the people they like the most, which often means those who are most similar to them. This ultimately means a narrower range of competencies is available to the organization.

Psychologist Ron Friedman explains in The Best Place to Work: The Art and Science of Creating an Extraordinary Workplace some of the unconscious biases that can impact hiring. We tend to rate attractive people as more competent, intelligent, and qualified. We consider tall people to be better leaders, particularly when evaluating men. We view people with deep voices as more trustworthy than those with higher voices.

Implicit bias is pernicious because it’s challenging to spot the ways it influences interviews. Once an interviewer judges someone, they may ask questions that nudge the interviewee towards fitting that perception. For instance, if they perceive someone to be less intelligent, they may ask basic questions that don’t allow the candidate to display their expertise. Having confirmed their bias, the interviewer has no reason to question it or even notice it in the future.

Hiring often comes down to how much an interviewer likes a candidate as a person. This means that we can be manipulated by manufactured charm. If someone’s charisma is faked for an interview, an organization can be left dealing with the fallout for ages.

The map is not the territory

The representation of something is not the thing itself. A job interview is meant to be a quick snapshot to tell a company how a candidate would be at a job. However, it’s not a representative situation in terms of replicating how the person will perform in the actual work environment.

For instance, people can lie during job interviews. Indeed, the situation practically encourages it. While most people feel uncomfortable telling outright lies (and know they would face serious consequences later on for a serious fabrication), bending the truth is common. Ron Friedman writes, “Research suggests that outright lying generates too much psychological discomfort for people to do it very often. More common during interviews are more nuanced forms of deception which include embellishment (in which we take credit for things we haven’t done), tailoring (in which we adapt our answers to fit the job requirements), and constructing (in which we piece together elements from different experiences to provide better answers.)” An interviewer can’t know if someone is deceiving them in any of these ways. So they can’t know if they’re hearing the truth.

One reason why we think job interviews are representative is the fundamental attribution error. This is a logical fallacy that leads us to believe that the way people behave in one area carries over to how they will behave in other situations. We view people’s behaviors as the visible outcome of innate characteristics, and we undervalue the impact of circumstances.

Some employers report using one single detail they consider representative to make hiring decisions, such as whether a candidate sends a thank-you note after the interview or if their LinkedIn picture is a selfie. Sending a thank-you note shows manners and conscientiousness. Having a selfie on LinkedIn shows unprofessionalism. But is that really true? Can one thing carry across to every area of job performance? It’s worth debating.

Gut feelings aren’t accurate

We all like to think we can trust our intuition. The problem is that intuitive judgments tend to only work in areas where feedback is fast and cause and effect clear. Job interviews don’t fall into that category. Feedback is slow. The link between a hiring decision and a company’s success is unclear.

Overwhelmed by candidates and the pressure of choosing, interviewers may resort to making snap judgments based on limited information. And interviews introduce a lot of noise, which can dilute relevant information while leading to overconfidence. In a study entitled Belief in the Unstructured Interview: The Persistence of an Illusion, participants predicted the future GPA of a set of students. They either received biographical information about the students or both biographical information and an interview. In some of the cases, the interview responses were entirely random, meaning they shouldn’t have conveyed any genuine useful information.

Before the participants made their predictions, the researchers informed them that the strongest predictor of a student’s future GPA is their past GPA. Seeing as all participants had access to past GPA information, they should have factored it heavily into their predictions.

In the end, participants who were able to interview the students made worse predictions than those who only had access to biographical information. Why? Because the interviews introduced too much noise. They distracted participants with irrelevant information, making them forget the most significant predictive factor: past GPA. Of course, we do not have clear metrics like GPA for jobs. But this study indicates that interviews do not automatically lead to better judgments about a person.

We tend to think human gut judgments are superior, even when evidence doesn’t support this. We are quick to discard information that should shape our judgments in favor of less robust intuitions that we latch onto because they feel good. The less challenging information is to process, the better it feels. And we tend to associate good feelings with ‘rightness’.

Experience ≠ expertise in interviewing

In 1979, the University of Texas Medical School at Houston suddenly had to increase its incoming class size by 50 students due to a legal change requiring larger classes. Without time to run more interviews, the school admitted candidates it had already interviewed but initially rejected as unsuitable for admission. Seeing as they got through to the interview stage, they had to be among the best candidates. They just weren’t previously considered good enough to admit.

When researchers later studied the result of this unusual situation, they found that the students whom the school first rejected performed no better or worse academically than the ones they first accepted. In short, interviewing students did nothing to help select for the highest performers.

Studying the efficacy of interviews is complicated and hard to manage from an ethical standpoint. We can’t exactly give different people the same real-world job in the same conditions. We can take clues from fortuitous occurrences, like the University of Texas Medical School change in class size and the subsequent lessons learned. Without the legal change, the interviewers would never have known that the students they rejected were of equal competence to the ones they accepted. This is why building up experience in this arena is difficult. Even if someone has a lot of experience conducting interviews, it’s not straightforward to translate that into expertise. Expertise is about having a predictive model of something, not just knowing a lot about it.

Furthermore, the feedback from hiring decisions tends to be slow. An interviewer cannot know what would happen if they hired an alternate candidate. If a new hire doesn’t work out, that tends to fall on them, not the person who chose them. There are so many factors involved that it’s not terribly conducive to learning from experience.

Making interviews more effective

It’s easy to see why job interviews are so common. People want to work with people they like, so interviews allow them to scope out possible future coworkers. Candidates expect interviews, as well—wouldn’t you feel a bit peeved if a company offered you a job without the requisite “casual chat” beforehand? Going through a grueling interview can make candidates more invested in the position and likely to accept an offer. And it can be hard to imagine viable alternatives to interviews.

But it is possible to make job interviews more effective or make them the final step in the hiring process after using other techniques to gauge a potential hire’s abilities. Doing what works should take priority over what looks right or what has always been done.

Structured interviews

While unstructured interviews don’t work, structured ones can be excellent. In Thinking, Fast and Slow, Daniel Kahneman describes how he redefined the Israel Defense Forces’ interviewing process as a young psychology graduate. At the time, recruiting a new soldier involved a series of psychometric tests followed by an interview to assess their personality. Interviewers then based their decision on their intuitive sense of a candidate’s fitness for a particular role. It was very similar to the method of hiring most companies use today—and it proved to be useless.

Kahneman introduced a new interviewing style in which candidates answered a predefined series of questions that were intended to measure relevant personality traits for the role (for example, responsibility and sociability). He then asked interviewers to give candidates a score for how well they seemed to exhibit each trait based on their responses. Kahneman explained that “by focusing on standardized, factual questions I hoped to combat the halo effect, where favorable first impressions influence later judgments.” He tasked interviewers only with providing these numbers, not with making a final decision.

Although interviewers at first disliked Kahneman’s system, structured interviews proved far more effective and soon became the standard for the IDF. In general, they are often the most useful way to hire. The key is to decide in advance on a list of questions, specifically designed to test job-specific skills, then ask them to all the candidates. In a structured interview, everyone gets the same questions with the same wording, and the interviewer doesn’t improvise.

Tomas Chamorro-Premuzic writes in The Talent Delusion:

There are at least 15 different meta-analytic syntheses on the validity of job interviews published in academic research journals. These studies show that structured interviews are very useful to predict future job performance. . . . In comparison, unstructured interviews, which do not have a set of predefined rules for scoring or classifying answers and observations in a reliable and standardized manner, are considerably less accurate.

Why does it help if everyone hears the same questions? Because, as we learned previously, interviewers can make unconscious judgments about candidates, then ask questions intended to confirm their assumptions. Structured interviews help measure competency, not irrelevant factors. Ron Friedman explains this further:

It’s also worth having interviewers develop questions ahead of time so that: 1) each candidate receives the same questions, and 2) they are worded the same way. The more you do to standardize your interviews, providing the same experience to every candidate, the less influence you wield on their performance.

What, then, is an employer to do with the answers? Friedman says you must then create clear criteria for evaluating them.

Another step to help minimize your interviewing blind spots: include multiple interviewers and give them each specific criteria upon which to evaluate the candidate. Without a predefined framework for evaluating applicants—which may include relevant experience, communication skills, attention to detail—it’s hard for interviewers to know where to focus. And when this happens, fuzzy interpersonal factors hold greater weight, biasing assessments. Far better to channel interviewers’ attention in specific ways, so that the feedback they provide is precise.
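To make those mechanics concrete, here is a small, hypothetical Python sketch of the kind of scoring Kahneman and Friedman describe: the same predefined criteria for every candidate, numeric ratings from each interviewer, and an aggregate computed instead of a gut verdict. The criteria names and the 1-to-5 scale are illustrative assumptions, not a prescription from either book:

    from statistics import mean

    CRITERIA = {"relevant_experience", "communication", "attention_to_detail"}

    def candidate_score(ratings):
        # `ratings` maps each predefined criterion to the list of 1-5
        # scores given by the individual interviewers.
        assert set(ratings) == CRITERIA, "every candidate gets the same criteria"
        # Average across interviewers per criterion, then across criteria.
        return mean(mean(scores) for scores in ratings.values())

    alice = {"relevant_experience": [4, 5], "communication": [3, 4], "attention_to_detail": [5, 5]}
    bob = {"relevant_experience": [3, 3], "communication": [5, 5], "attention_to_detail": [2, 3]}

    print(candidate_score(alice))  # ~4.33
    print(candidate_score(bob))    # 3.5

The design choice worth noticing is that interviewers supply nothing but the numbers; whatever charms them in the room has no extra channel through which to sway the final decision.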

Blind auditions

One way to make job interviews more effective is to find ways to “blind” the process—to disguise key information that may lead to biased judgments. Blinded interviews focus on skills alone, not who a candidate is as a person. Orchestras offer a remarkable case study in the benefits of blinding.

In the 1970s, orchestras had a gender bias problem. A mere 5% of their members were women, on average. Orchestras knew they were missing out on potential talent, but they found the audition process seemed to favor men over women. Those who were carrying out auditions couldn’t sidestep their unconscious tendency to favor men.

Instead of throwing up their hands in despair and letting this inequality stand, orchestras began carrying out blind auditions. During these, candidates would play their instruments behind a screen while a panel listened and assessed their performance. They received no identifiable information about candidates. The idea was that orchestras would be able to hire without room for bias. It took a bit of tweaking to make it work: at first, the panel could discern gender based on the sound of a candidate’s shoes. After that, they requested that candidates audition without their shoes.

The results? By 1997, up to 25% of orchestra members were women. Today, the figure is closer to 30%.

Although this is sometimes difficult to replicate for other types of work, blind auditions can provide an inspiration to other industries that could benefit from finding ways to make interviews more about a person’s abilities than their identity.

Competency-related evaluations

What’s the best way to test if someone can do a particular job well? Get them to carry out tasks that are part of the job. See if they can do what they say they can do. It’s much harder for someone to lie and mislead an interviewer during actual work than during an interview. Using competency tests for a blinded interview process is also possible—interviewers could look at depersonalized test results to make unbiased judgments.

Tomas Chamorro-Premuzic writes in The Talent Delusion: Why Data, Not Intuition, Is the Key to Unlocking Human Potential, “The science of personnel selection is over a hundred years old yet decision-makers still tend to play it by ear or believe in tools that have little academic rigor. . . . An important reason why talent isn’t measured more scientifically is the belief that rigorous tests are difficult and time-consuming to administer, and that subjective evaluations seem to do the job ‘just fine.’”

Competency tests are already quite common in many fields. But interviewers tend not to accord them sufficient importance. They come after an interview, or they’re considered secondary to it. A bad interview can override a good competency test. At best, interviewers accord them equal importance to interviews. Yet they should consider them far more important.

Ron Friedman writes, “Extraneous data such as a candidate’s appearance or charisma lose their influence when you can see the way an applicant actually performs. It’s also a better predictor of their future contributions because unlike traditional in-person interviews, it evaluates job-relevant criteria. Including an assignment can help you better identify the true winners in your applicant pool while simultaneously making them more invested in the position.”

Conclusion

If a company relies on traditional job interviews as its sole or main means of choosing employees, it simply won’t get the best people. And getting hiring right is paramount to the success of any organization. A driven team of people passionate about what they do can trump one with better funding and resources. The key to finding those people is using hiring techniques that truly work.

The post Job Interviews Don’t Work appeared first on Farnam Street.

Why You Feel At Home In A Crisis https://canvasly.link/crisis/ Mon, 22 Jun 2020 21:52:52 +0000 https://canvasly.link/?p=42504 When disaster strikes, people come together. During the worst times of our lives, we can end up experiencing the best mental health and relationships with others. Here’s why that happens and how we can bring the lessons we learn with us once things get better. *** The Social Benefits of Adversity When World War II …

When disaster strikes, people come together. During the worst times of our lives, we can end up experiencing the best mental health and relationships with others. Here’s why that happens and how we can bring the lessons we learn with us once things get better.

***

“Humans don’t mind hardship, in fact they thrive on it; what they mind is not feeling necessary. Modern society has perfected the art of making people not feel necessary.”

— Sebastian Junger

The Social Benefits of Adversity

When World War II began to unfold in 1939, the British government feared the worst. With major cities like London and Manchester facing aerial bombardment from the German air force, leaders were sure societal breakdown was imminent. Civilians were, after all, in no way prepared for war. How would they cope with a complete change to life as they knew it? How would they respond to the nightly threat of injury or death? Would they riot, loot, experience mass-scale psychotic breaks, go on murderous rampages, or lapse into total inertia as a result of exposure to German bombing campaigns?

Richard M. Titmuss writes in Problems of Social Policy that “social distress, disorganization, and loss of morale” were expected. Experts predicted 600,000 deaths and 1.2 million injuries from the bombings. Some in the government feared three times as many psychiatric casualties as physical ones. Official reports pondered how the population would respond to “financial distress, difficulties of food distribution, breakdowns in transport, communications, gas, lighting, and water supplies.”

After all, no one had lived through anything like this. Civilians couldn’t receive training as soldiers could, so it stood to reason they would be at high risk of psychological collapse. Titmuss writes, “It seems sometimes to have been expected almost as a matter of course that widespread neurosis and panic would ensue.” The government contemplated sending a portion of soldiers into cities, rather than to the front lines, to maintain order.

The effects of the bombing campaign, known as the Blitz, were brutal. Over 60,000 civilians died, about half of them in London. The total cost of property damage was about £56 billion in today’s money, with almost a third of the houses in London becoming uninhabitable.

Yet despite all this, the anticipated social and psychological breakdown never happened. The death toll was also much lower than predicted, in part due to stringent adherence to safety instructions. In fact, the Blitz achieved the opposite of what the attackers intended: the British people proved more resilient than anyone predicted. Morale remained high, and there didn’t appear to be an increase in mental health problems. The suicide rate may have decreased. Some people with longstanding mental health issues found themselves feeling better.

People in British cities came together like never before to organize themselves at the community level. The sense of collective purpose this created led many to experience better mental health than they’d ever had. One indicator of this is that children who remained with their parents fared better than those evacuated to the safety of the countryside. The stress of the aerial bombardment didn’t override the benefits of staying in their city communities.

The social unity the British people reported during World War II lasted in the decades after. We can see it in the political choices the wartime generation made—the politicians they voted into power and the policies they voted for. By some accounts, the social unity fostered by the Blitz was the direct cause of the strong welfare state that emerged after the war and the creation of Britain’s free national healthcare system. Only when the wartime generation started to pass away did that sentiment fade.

We Know How to Adapt to Adversity

We may be ashamed to admit it, but human nature is more at home in a crisis.

Disasters force us to band together and often strip away our differences. The effects of World War II on the British people were far from unique. The Allied bombing of Germany also strengthened community spirit. In fact, cities that suffered the least damage saw the worst psychological consequences. Similar improvements in morale occurred during other wars, riots, and after September 11, 2001.

When normality breaks down, we experience the sort of conditions we evolved to handle. Our early ancestors lived with a great deal of pain and suffering. The harsh environments they faced necessitated collaboration and sharing. Groups of people who could work together were most likely to survive. Because of this, evolution selected for altruism.

Among modern foraging tribal groups, the punishments for freeloading are severe. Execution is not uncommon. As extreme as this may seem, allowing selfishness to flourish endangers the whole group. It stands to reason that the same was true for our ancestors living in much the same conditions. Being challenged as a group by difficult changes in our environment leads to incredible community cohesion.

Many of the conditions we need to flourish both as individuals and as a species emerge during disasters. Modern life otherwise fails to provide them. Times of crisis are closer to the environments our ancestors evolved in. Of course, this does not mean that disasters are good. By their nature, they produce immense suffering. But understanding their positive flip side can help us to both weather them better and bring important lessons into the aftermath.

Embracing Struggle

Good times don’t actually produce good societies.

In Tribe: On Homecoming and Belonging, Sebastian Junger argues that modern society robs us of the solidarity we need to thrive. Unfortunately, he writes, “The beauty and the tragedy of the modern world is that it eliminates many situations that require people to demonstrate commitment to the collective good.” As life becomes safer, it is easier for us to live detached lives. We can meet all of our needs in relative isolation, which prevents us from building a strong connection to a common purpose. In our normal day to day, we rarely need to show courage, turn to our communities for help, or make sacrifices for the sake of others.

Furthermore, our affluence doesn’t seem to make us happier. Junger writes that “as affluence and urbanization rise in a society, rates of depression and suicide tend to go up, not down. Rather than buffering people from clinical depression, increased wealth in society seems to foster it.” We often think of wealth as a buffer from pain, but beyond a certain point, wealth can actually make us more fragile.

The unexpected worsening of mental health in modern society has much to do with our lack of community—which might explain why times of disaster, when everyone faces the breakdown of normal life, can counterintuitively improve mental health, despite the other negative consequences. When situations requiring sacrifice do reappear and we must work together to survive, it alleviates our disconnection from each other. Disaster increases our reliance on our communities.

In a state of chaos, our way of relating to each other changes. Junger explains that “self-interest gets subsumed into group interest because there is no survival outside of group survival, and that creates a social bond that many people sorely miss.” Helping each other survive builds ties stronger than anything we form during normal conditions. After a natural disaster, residents of a city may feel like one big community for the first time. United by the need to get their lives back together, individual differences melt away for a while.

Junger writes particularly of one such instance:

The one thing that might be said for societal collapse is that—for a while at least—everyone is equal. In 1915 an earthquake killed 30,000 people in Avezzano, Italy, in less than a minute. The worst-hit areas had a mortality rate of 96 percent. The rich were killed along with the poor, and virtually everyone who survived was immediately thrust into the most basic struggle for survival: they needed food, they needed water, they needed shelter, and they needed to rescue the living and bury the dead. In that sense, plate tectonics under the town of Avezzano managed to recreate the communal conditions of our evolutionary past quite well.

Disasters bring out the best in us. Junger goes on to say that “communities that have been devastated by natural or manmade disasters almost never lapse into chaos and disorder; if anything they become more just, more egalitarian, and more deliberately fair to individuals.” When catastrophes end, despite their immense negatives, people report missing how it felt to unite for a common cause. Junger explains that “what people miss presumably isn’t danger or loss but the unity that these things often engender.” The loss of that unification can be, in its own way, traumatic.

Don’t Be Afraid of Disaster

So what can we learn from Tribe?

The first lesson is that, in the face of disaster, we should not expect the worst from other people. Yes, instances of selfishness will happen no matter what. Many people will look out for themselves at the expense of others, not least the ultra-wealthy who are unlikely to be affected in a meaningful way and so will not share in the same experience. But on the whole, history has shown that the breakdown of order people expect is rare. Instead, we find new ways to continue and to cope.

During World War II, there were fears that British people would resent the appearance of over two million American servicemen in their country. After all, it meant more competition for scarce resources. Instead, the “friendly invasion” met with a near-unanimous warm welcome. British people shared what they had without bitterness. They understood that the Americans were far from home and missing their loved ones, so they did all they could to help. In a crisis, we can default to expecting the best from each other.

Second, we can achieve a great deal by organizing on the community level when disaster strikes. Junger writes, “There are many costs to modern society, starting with its toll on the global ecosystem and working one’s way down to its toll on the human psyche, but the most dangerous may be to community. If the human race is under threat in some way that we don’t yet understand, it will probably be at a community level that we either solve the problem or fail to.” When normal life is impossible, being able to volunteer help is an important means of retaining a sense of control, even if it imposes additional demands. One explanation for the high morale during the Blitz is that everyone could be involved in the war effort, whether they were fostering a child, growing cabbages in their garden, or collecting scrap metal to make planes.

For our third and final lesson, we should not forget what we learn about the importance of banding together. What’s more, we must do all we can to let that knowledge inform future decisions. It is possible for disasters to spark meaningful changes in the way we live. We should continue to emphasize community and prioritize stronger relationships. We can do this by building strong reminders of what happened and how it impacted people. We can strive to educate future generations, teaching them why unity matters.

(In addition to Tribe, many of the details of this post come from Disasters and Mental Health: Therapeutic Principles Drawn from Disaster Studies by Charles E. Fritz.)

The post Why You Feel At Home In A Crisis appeared first on Farnam Street.

Stop Preparing For The Last Disaster https://canvasly.link/last-disaster/ Mon, 15 Jun 2020 11:30:27 +0000 https://canvasly.link/?p=42449 When something goes wrong, we often strive to be better prepared if the same thing happens again. But the same disasters tend not to happen twice in a row. A more effective approach is simply to prepare to be surprised by life, instead of expecting the past to repeat itself. *** If we want to …

When something goes wrong, we often strive to be better prepared if the same thing happens again. But the same disasters tend not to happen twice in a row. A more effective approach is simply to prepare to be surprised by life, instead of expecting the past to repeat itself.

***

If we want to become less fragile, we need to stop preparing for the last disaster.

When disaster strikes, we learn a lot about ourselves. We learn whether we are resilient, whether we can adapt to challenges and come out stronger. We learn what has meaning for us, we discover core values, and we identify what we’re willing to fight for. Disaster, if it doesn’t kill us, can make us stronger. Maybe we discover abilities we didn’t know we had. Maybe we adapt to a new normal with more confidence. And often we make changes so we will be better prepared in the future.

But better prepared for what?

After a particularly trying event, most people prepare for a repeat of whatever challenge they just faced. From the micro level to the macro level, we succumb to the availability bias and get ready to fight a war we’ve already fought. We learn that one lesson, but we don’t generalize that knowledge or expand it to other areas. Nor do we necessarily let the fact that a disaster happened teach us that disasters do, as a rule, tend to happen. Because we focus on the particulars, we don’t extrapolate what we learn to identifying what we can better do to prepare for adversity in general.

We tend to have the same reaction to challenge, regardless of the scale of impact on our lives.

Sometimes the impact is strictly personal. For example, our partner cheats on us, so we vow never to have that happen again and make changes designed to catch the next cheater before they get a chance; in future relationships, we let jealousy cloud everything.

But other times, the consequences are far reaching and impact the social, cultural, and national narratives we are a part of. Like when a terrorist uses an airplane to attack our city, so we immediately increase security at airports so that planes can never be used again to do so much damage and kill so many people.

The changes we make may keep us safe from a repeat of those scenarios that hurt us. The problem is, we’re still fragile. We haven’t done anything to increase our resilience—which means the next disaster is likely to knock us on our ass.

Why do we keep preparing for the last disaster?

Disasters cause pain. Whether it’s emotional or physical, the hurt causes vivid and strong reactions. We remember pain, and we want to avoid it in the future through whatever means possible. The availability of memories of our recent pain informs what we think we should do to stop it from happening again.

This process, called the availability bias, has significant implications for how we react in the aftermath of disaster. Writing in The Legal Analyst: A Toolkit for Thinking about the Law about the information cascades this logical fallacy sets off, Ward Farnsworth says they “also help explain why it’s politically so hard to take strong measures against disasters before they have happened at least once. Until they occur they aren’t available enough to the public imagination to seem important; after they occur their availability cascades and there is an exaggerated rush to prevent the identical thing from happening again. Thus after the terrorist attacks on the World Trade Center, cutlery was banned from airplanes and invasive security measures were imposed at airports. There wasn’t the political will to take drastic measures against the possibility of nuclear or other terrorist attacks of a type that hadn’t yet happened and so weren’t very available.”

In the aftermath of a disaster, we want to be reassured of future safety. We lived through it, and we don’t want to do so again. By focusing on the particulars of a single event, however, we miss identifying the changes that will improve our chances of better outcomes next time. Yes, we don’t want any more planes to fly into buildings. But preparing for the last disaster leaves us just as underprepared for the next one.

What might we do instead?

We rarely take a step back and go beyond the pain to look at what made us so vulnerable to it in the first place. However, that’s exactly where we need to start if we really want to better prepare ourselves for future disaster. Because really, what most of us want is to not be taken by surprise again, caught unprepared and vulnerable.

The reality is that the same disaster is unlikely to happen twice. Your next lover is unlikely to hurt you in the same way your former one did, just as the next terrorist is unlikely to attack in the same way as their predecessor. If we want to make ourselves less fragile in the face of great challenge, the first step is to accept that we are never going to know what the next disaster will be. Then we can ask: How can I prepare anyway? What changes can I make to better face the unknown?

As Andrew Zolli and Ann Marie Healy explain in Resilience: Why Things Bounce Back, “surprises are by definition inevitable and unforeseeable, but seeking out their potential sources is the first step toward adopting the open, ready stance on which resilient responses depend.”

Giving serious thought to the range of possible disasters immediately makes you aware that you can’t prepare for all of them. But what are the common threads? What safeguards can you put in place that will be useful in a variety of situations? A good place to start is increasing your adaptability. The more easily you can adapt to change, the more flexibility you have. More flexibility means having more options to deal with, mitigate, and even capitalize on disaster.

Another important mental tool is to accept that disasters will happen. Expect them. It’s not about walking around every day with your adrenaline pumped in anticipation; it’s about making plans assuming that they will get derailed at some point. So you insert backup systems. You create a cushion, moving away from razor-thin margins. You give yourself the optionality to respond differently when the next disaster hits.

Finally, we can find ways to benefit from disaster. Author and economist Keisha Blair, in Holistic Wealth, suggests that “building our resilience muscles starts with the way we process the negative events in our lives. Mental toughness is a prerequisite for personal growth and success.” She further writes, “adversity allows us to become better rounded, richer in experience, and to strengthen our inner resources.” We can learn from the last disaster how to grow and leverage our experiences to better prepare for the next one.

The post Stop Preparing For The Last Disaster appeared first on Farnam Street.

Coordination Problems: What It Takes to Change the World https://canvasly.link/coordination-problems/ Mon, 08 Jun 2020 11:00:17 +0000 https://canvasly.link/?p=42387

The key to major changes on a societal level is getting enough people to alter their behavior at the same time. It’s not enough for isolated individuals to act. Here’s what we can learn from coordination games in game theory about what it takes to solve some of the biggest problems we face.

***

What is a Coordination Failure?

Sometimes we see systems where everyone involved seems to be doing things in a completely ineffective and inefficient way. A single small tweak could make everything substantially better—save lives, be more productive, save resources. To an outsider, it might seem obvious what needs to be done, and it might be hard to think of an explanation for the ineffectiveness that is more nuanced than assuming everyone in that system is stupid.

Why is publicly funded research published in journals that charge heavily for access, limiting the flow of important scientific knowledge while contributing little themselves? Why are countries spending billions of dollars and risking disaster to develop nuclear weapons intended only as deterrents? Why is doping widespread in some sports, even though it carries heavy health consequences and is banned? You can probably think of many similar problems.

Coordination games in game theory give us a lens for understanding both the seemingly inscrutable origins of such problems and why they persist.

The Theoretical Background to Coordination Failure

In game theory, a game is a set of circumstances in which two or more players pick among competing strategies in order to get a payoff. A coordination game is one where players get the best possible payoff by all doing the same thing. If one player unilaterally chooses a different strategy, their payoff suffers, and usually so does everyone else’s.

When all players are carrying out a strategy from which no one has an incentive to deviate, this is called a Nash equilibrium: given the strategy chosen by the other player(s), no player could improve their payoff by changing their strategy. However, a game can have multiple Nash equilibria with different payoffs. In real-world terms, this means there can be several different choices everyone could make, some better than others, but each only working if it is unanimous.
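
To make the idea concrete, here is a minimal sketch in Python (the payoff numbers are invented for illustration, not drawn from any source) that enumerates the pure-strategy Nash equilibria of a simple two-player coordination game:

```python
# Pure-strategy Nash equilibria of a 2x2 coordination game.
# Payoff numbers are illustrative only: matching conventions pays,
# and convention "A" pays better than convention "B".
payoffs = {
    ("A", "A"): (3, 3),
    ("B", "B"): (2, 2),
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
}
strategies = ["A", "B"]

def is_nash(s1, s2):
    """Neither player can improve by unilaterally switching."""
    p1, p2 = payoffs[(s1, s2)]
    best_for_1 = all(payoffs[(alt, s2)][0] <= p1 for alt in strategies)
    best_for_2 = all(payoffs[(s1, alt)][1] <= p2 for alt in strategies)
    return best_for_1 and best_for_2

for s1 in strategies:
    for s2 in strategies:
        if is_nash(s1, s2):
            print(f"Nash equilibrium: {(s1, s2)}, payoffs {payoffs[(s1, s2)]}")
```

Running it prints both (A, A) and (B, B): two equilibria, one strictly better than the other, and nothing in the definition of equilibrium guarantees that a group lands on the good one.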

The Prisoner’s Dilemma is a classic illustration of coordination failure. In a one-round Prisoner’s Dilemma, the optimal strategy for each player is to defect. Even though defecting makes the most sense for each player individually, it isn’t the strategy with the highest possible payoff; that would require both players to cooperate. But since neither player can know what the other will do, cooperating is unwise: a player who cooperates while the other defects gets the worst possible payoff, whereas mutual defection still beats being the lone cooperator.

The result is a coordination failure: the players would get a better payoff if they both cooperated, but they cannot trust each other. In the Iterated Prisoner’s Dilemma, players compete over an unknown number of rounds. Here cooperation becomes possible if both players use the strategy of “tit for tat”: cooperate in the first round, then do whatever the other player did in the previous round. Even so, the temptation to defect never disappears, because any given round could be the last.
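
The logic of tit for tat is easy to simulate. Below is a small, self-contained sketch in Python; the payoff values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the lone defector and lone cooperator) are conventional textbook numbers, not figures from any source cited here:

```python
# Iterated Prisoner's Dilemma with tit-for-tat players.
# PAYOFF maps (my_move, their_move) -> my points.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # lone cooperator gets the worst payoff
    ("D", "C"): 5,  # lone defector gets the best payoff
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each sees the other's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round,
                                         # then mutual defection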

Many of the major problems we see around us are coordination failures. They are only solvable if everyone can agree to do the same thing at the same time. Faced with multiple Nash equilibria, we do not necessarily choose the best one overall. We choose what makes sense given the existing incentives, which often discourage us from challenging the status quo. It often makes most sense to do what everyone else is doing, whether that’s driving on the left side of the road, wearing a suit to a job interview, or keeping your country’s nuclear arsenal stocked up.

Take the case of academic publishing, given as a classic coordination failure by Eliezer Yudkowsky in Inadequate Equilibria: Where and How Civilizations Get Stuck. Academic journals publish research within a given field and charge for access to it, often at exorbitant rates. In order to get the best jobs and earn prestige within a field, researchers need to publish in the most respected journals. If they don’t, no one will take their work seriously.

Academic publishing is broken in many ways. By charging high prices, journals limit the flow of knowledge and slow scientific progress. They do little to help researchers, instead profiting from the work of volunteers and taxpayer funding. Yet researchers continue to submit their work to them. Why? Because this is the Nash equilibrium. Although it would be better for science as a whole if everyone stopped publishing in journals that charge for access, it isn’t in the interests of any individual scientist to do so. If they did, their career would suffer and most likely end. The only solution would be a coordinated effort for everyone to move away from journals. But seeing as this is so difficult to organize, the farce of academic publishing continues, harming everyone except the journals.

How We Can Solve and Avoid Coordination Failures

It’s possible to change things on a large scale if we can communicate at a large enough scale. When everyone knows that everyone knows, changing what we do is much easier.

We all act out of self-interest, so expecting individuals to risk the costs of going against convention is usually unreasonable. Yet it only takes a small proportion of people to change their opinions to reach a tipping point where there is strong incentive for everyone to change their behavior, and this is magnified even more if those people have a high degree of influence. The more power those who enact change have, the faster everyone else can do the same.

To overcome coordination failures, we need to be able to communicate despite our differences. And we need to be able to trust that when we act, others will act too. The initial kick can be enough people making their actions visible. Groups can have exponentially greater impacts than individuals. We thus need to think beyond the impact of our own actions and consider what will happen when we act as part of a group.

In an example given by the effective altruism-centered website 80,000 Hours, there are countless charitable causes one could donate money to at any given time. Most people who donate do so out of emotional responses or habit. However, some charitable causes are orders of magnitude more effective than others at saving lives and having a positive global impact. If many people can coordinate and donate to the most effective charities until they reach their funding goal, the impact of the group giving is far greater than if isolated individuals calculate the best use of their money. Making research and evidence of donations public helps solve the communication issue around determining the impact of charitable giving.

As Michael Suk-Young Chwe writes in Rational Ritual: Culture, Coordination, and Common Knowledge, “Successful communication sometimes is not simply a matter of whether a given message is received. It also depends on whether people are aware that other people also receive it.” According to Suk-Young Chwe, for people to coordinate on the basis of certain information it must be “common knowledge,” a phrase used here to mean “everyone knows it, everyone knows that everyone knows it, everyone knows that everyone knows that everyone knows it, and so on.” The more public and visible the change is, the better.

We can prevent coordination failures in the first place with visible guarantees that those who take a different course of action will not suffer negative consequences. Bank runs are a coordination failure that proved particularly problematic during the Great Depression. It’s better for everyone if all depositors leave their money in the bank so it doesn’t run out of reserves and fail. But when other people start panicking and withdrawing their deposits, it makes sense for any given individual to do likewise in case the bank fails and they lose their money. The solution is deposit insurance, which ensures no one comes away empty-handed even if a bank does fail.

Game theory can help us to understand not only why it can be difficult for people to work together in the best possible way but also how we can reach more optimal outcomes through better communication. With a sufficient push towards a new equilibrium, we can drastically improve our collective circumstances in a short time.

The post Coordination Problems: What It Takes to Change the World appeared first on Farnam Street.

When Safety Proves Dangerous https://canvasly.link/safety-proves-dangerous/ Mon, 25 May 2020 12:00:48 +0000 https://canvasly.link/?p=42299

Not everything we do with the aim of making ourselves safer has that effect. Sometimes, knowing there are measures in place to protect us from harm can lead us to take greater risks and cancel out the benefits. This is known as risk compensation. Understanding how it affects our behavior can help us make the best possible decisions in an uncertain world.

***

The world is full of risks. Every day we take endless chances, whether we’re crossing the road, standing next to someone with a cough on the train, investing in the stock market, or hopping on a flight.

From the moment we’re old enough to understand, people start teaching us crucial safety measures to remember: don’t touch that, wear this, stay away from that, don’t do this. And society is endlessly trying to mitigate the risks involved in daily life, from the ongoing efforts to improve car safety to signs reminding employees to wash their hands after using the toilet.

But the things we do to reduce risk don’t always make us safer. They can end up having the opposite effect. This is because we tend to change how we behave in response to our perceived safety level. When we feel safe, we take more risks. When we feel unsafe, we are more cautious.

Risk compensation means that efforts to protect ourselves can end up having a smaller effect than expected, no effect at all, or even a negative effect. Sometimes the danger is transferred to a different group of people, or a behavior modification creates new risks. Knowing how we respond to risk can help us avoid transferring danger to other more vulnerable individuals or groups.

Examples of Risk Compensation

There are many documented instances of risk compensation. One of the first comes from a 1975 paper by economist Sam Peltzman, entitled “The Effects of Automobile Safety Regulation.” Peltzman looked at the effects of new vehicle safety laws introduced several years earlier, finding that they led to no change in fatalities. While people in cars were less likely to die in accidents, pedestrians were at a higher risk. Why? Because drivers took more risks, knowing they were safer if they crashed.

Although Peltzman’s research has been both replicated and called into question over the years (there are many ways to interpret the same dataset), risk compensation is apparent in many other areas. As Andrew Zolli and Ann Marie Healy write in Resilience: Why Things Bounce Back, children who play sports involving protective gear (like helmets and knee pads) take more physical risks, and hikers who think they can be easily rescued are less cautious on the trails.

A study of taxi drivers in Munich, Germany, found that those driving vehicles with antilock brakes had more accidents than those without—unsurprising, considering they tended to accelerate faster and stop harder. Another study suggested that childproof lids on medicine bottles did not reduce poisoning rates. According to W. Kip Viscusi at Duke University, parents became more complacent with all medicines, including ones without the safer lids. Better ripcords on parachutes lead skydivers to pull them too late.

As defenses against natural disasters have improved, people have moved into riskier areas, and deaths from events like floods or hurricanes have not necessarily decreased. After helmets were introduced in American football, tackling fatalities actually increased for a few years, as players became more willing to lead with their heads (this changed with the adoption of new tackling standards). Bailouts and protective mechanisms for financial institutions may have contributed to the scale of the 2008 financial crisis, as they led banks to take greater and greater risks. There are numerous other examples.

We can easily see risk compensation play out in our lives and those of people around us. Someone takes up a healthy habit, like going to the gym, then compensates by drinking more. Having an emergency fund in place can encourage us to take greater financial risks.

Risk Homeostasis

According to psychology professor Gerald Wilde, we all internally have a desired level of risk that varies depending on who we are and the context we are in. Our risk tolerance is like a thermostat—we take more risks if we feel too safe, and vice versa, in order to remain at our desired “temperature.” It all comes down to the costs and benefits we expect from taking on more or less risk.

The notion of risk homeostasis, although controversial, can help explain risk compensation. It implies that enforcing measures to make people safer will inevitably lead to changes in behavior that maintain the amount of risk we’d like to experience, like driving faster while wearing a seatbelt. A feedback loop communicating our perceived risk helps us keep things as dangerous as we wish them to be. We calibrate our actions to how safe we’d like to be, making adjustments if the level swings too far in one direction or the other.
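
As a toy illustration only (the update rule and every number below are invented here, not taken from Wilde’s work), the thermostat analogy can be sketched in a few lines of Python:

```python
# Toy model of risk homeostasis: an agent adjusts risky behavior so
# that perceived risk tracks a fixed target. All values are invented
# for illustration; this is the thermostat analogy, not Wilde's model.
TARGET_RISK = 0.5   # the agent's preferred "temperature"
ADJUST_RATE = 0.5   # how quickly behavior responds to the gap

def perceived_risk(behavior, safety_measure):
    """More aggressive behavior raises risk; safety measures lower it."""
    return behavior * (1.0 - safety_measure)

behavior = 0.5  # e.g., driving aggressiveness, on a 0..1 scale
for safety_measure in (0.0, 0.3):  # then a seatbelt-style measure arrives
    for _ in range(50):  # the feedback loop settles toward the target
        gap = TARGET_RISK - perceived_risk(behavior, safety_measure)
        behavior = min(1.0, behavior + ADJUST_RATE * gap)
    print(f"safety={safety_measure:.1f} -> behavior={behavior:.2f}, "
          f"risk={perceived_risk(behavior, safety_measure):.2f}")
# With safety=0.0, behavior settles at 0.50; with safety=0.3, it rises
# to ~0.71, leaving perceived risk back at the 0.50 target.
```

In the sketch, introducing a safety measure lowers perceived risk, so the simulated agent ramps its behavior back up until perceived risk returns to the target: risk compensation in miniature.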

What We Can Learn from Risk Compensation

We can learn many lessons from risk compensation and the research that has been done on the subject. First, safety measures are more effective the less visible they are. If people don’t know about a risk reduction, they won’t change their behavior to compensate for it. When we want to make something safer, it’s best to ensure changes go as unnoticed as possible.

Second, an effective way to reduce risk-taking is to provide incentives for prudent behavior, giving people a reason to adjust their risk thermostat. Just because something seems to have become safer doesn’t mean the risk hasn’t transferred elsewhere, putting a different group in danger, as when seat belt laws led to more pedestrian fatalities. Lower insurance premiums for careful drivers, for instance, might prevent more fatalities than stricter road safety laws, because they prompt drivers to make positive changes to their behavior instead of shifting the risk elsewhere.

Third, we are biased towards intervention. When we want to improve a situation, our first instinct tends to be to step in and change something, anything. Sometimes it is wiser to do less, or even nothing. Changing something does not always make people safer; sometimes it just changes the nature of the danger.

Fourth, when we make a safety change, we may need to implement corresponding rules to avoid risk compensation. Football helmets made the sport more dangerous at first, but new rules about tackling helped cancel out the behavior changes because the league was realistic about the need for more than just physical protection.

Finally, making people feel less safe can actually improve their behavior. Serious injuries in car crashes are rarer when the roads are icy, even if minor incidents are more common, because drivers take more care. If we want to improve safety, we can make risks more visible through better education.

Risk compensation doesn’t mean safety measures are a bad idea, but it does illustrate the need to be aware of the unintended consequences that arise when we interact with complex systems. We can’t always expect to achieve the changes we desire the first time around. Once we make a change, we should pay careful attention to its effects on the whole system. Sometimes it will take testing a few alternative approaches to get closer to the desired effect.

The post When Safety Proves Dangerous appeared first on Farnam Street.

Rethinking Fear https://canvasly.link/rethinking-fear/ Mon, 11 May 2020 12:00:34 +0000 https://canvasly.link/?p=42084

Fear is a state no one wants to embrace, yet for many of us it’s the background music to our lives. But by making friends with fear and understanding why it exists, we can become less vulnerable to harm—and less afraid. Read on to learn a better approach to fear.

***

In The Gift of Fear: Survival Signals That Protect Us From Violence, author Gavin de Becker argues that we all have an intuitive sense of when we are in danger. Drawing upon his experience as a high-stakes security specialist, he explains how we can protect ourselves by paying better attention to our gut feelings and not letting denial lead us to harm. Our intuition, honed by evolution and by a lifetime of experience, deserves more respect.

By telling us to value our intuition, de Becker isn’t telling anyone to live in fear permanently, always alert for possible risks. Quite the opposite. De Becker writes that we misunderstand the value of fear when we think that being constantly hypervigilant will keep us safe. Being afraid all the time doesn’t protect us from danger. Instead, he explains, by trusting that our gut feelings are accurate and learning key signals that portend risk, we can actually feel calmer and safer:

Far too many people are walking around in a constant state of vigilance, their intuition misinformed about what really poses danger. It needn’t be so. When you honor accurate intuitive signals and evaluate them without denial (believing that either the favorable or unfavorable outcome is possible), you need not be wary, for you will come to trust that you’ll be notified if there is something worthy of your attention. Fear will gain credibility because it won’t be applied wastefully.

When we walk around terrified all the time, we can’t pick out the signal from the noise. If you’re constantly scared, you can’t correctly notice when there is something genuine to fear. True fear is a momentary signal, not an ongoing state. De Becker writes that “if one feels fear of all people all the time, there is no signal reserved for the times when it’s really needed.”

What we fear the most is rarely what ends up happening. Fixating on particular dangers blinds us to others. We focus on checking the road for snakes and end up getting knocked over by a car. De Becker writes that it matters that we’re receptive to fear, not that we’re watching out for what scares us the most (though of course, different things pose different risks to different people, and we should evaluate accordingly). After all, “we are far more open to signals when we don’t focus on the expectation of specific signals.”

Fear vs. anxiety

Fear is not the same as anxiety. Although people experiencing anxiety are often afraid of both the anxiety and what they presume to be its cause, these two states have different triggers. De Becker explains one of the key factors that differentiates the two:

Anxiety, unlike real fear, is always caused by uncertainty. It is caused, ultimately, by predictions in which you have little confidence. When you predict that you will be fired from your job and you are certain the prediction is correct, you don’t have anxiety about being fired. You might have anxiety about the things you can’t predict with certainty, such as the ramifications of losing the job. Predictions in which you have high confidence free you to respond, adjust, feel sadness, accept, prepare, or to do whatever is needed. Accordingly, anxiety is reduced by improving your prediction, thus increasing your certainty.

Understand that when we’re anxious, it’s because we’re uncertain. The solution, then, isn’t worrying more; it’s doing all we can either to find clarity or to accept that uncertainty is part of life.

Using fear

What can we learn from de Becker’s call to rethink fear? We learn that we’ll be in a better position if we can face possible threats with a calm mind, alert to our internal signals but not anticipating every possible bad thing that could happen. While being told to stop panicking never helped anyone, we benefit by understanding that being overwhelmed by fear will hurt us more. Our imaginary fears harm us more than reality ever does.

If this approach sounds familiar, it’s because it echoes ideas from Stoic philosophy. Much like de Becker, the Stoics urged us to be realistic about the fact that bad things can and will happen to us throughout our lives. No one can escape that. Once we’ve faced that reality, some of the shock goes away and we can think about how to prepare. After all, catastrophe and tragedy are part of the journey, not an unexpected detour. Being aware and accepting of the inevitable terrible things that will happen is actually a critical tool in mitigating both their severity and impact.

We don’t need to live in fear to stay safe. A better approach is to be aware of the risks we face, accept that some are unknown or unpredictable, and do all we can to be prepared for any serious or imminent dangers. Then we can focus our energy on maintaining a calm mind and trusting that our intuition will protect us.

“We are more often frightened than hurt; and we suffer more from imagination than from reality.”

— Seneca
 

The Stoics also taught us that we should view terrible events as survivable. It would do us well to give ourselves more credit—we’ve all survived occurrences that once seemed like the worst-case scenario, and we can survive many more.

The post Rethinking Fear appeared first on Farnam Street.

Bad Arguments and How to Avoid Them https://canvasly.link/bad-arguments/ Mon, 04 May 2020 12:30:58 +0000 https://canvasly.link/?p=41839

Productive arguments serve two purposes: to open our minds to truths we couldn’t see — and help others do the same. Here’s how to avoid common pitfalls and argue like a master.

***

We’re often faced with situations in which we need to argue a point, whether we’re pitching an investor or competing for a contract. When being powerfully persuasive matters, it’s important that we don’t use bad arguments that prevent useful debate instead of furthering it. To do this, it’s useful to know some common ways people remove the possibility of a meaningful discussion. While it can be a challenge to keep our cool and not sink to using bad arguments when responding to a Twitter troll or during a heated confrontation over Thanksgiving dinner, we can benefit from knowing what to avoid when the stakes are high.

“If the defendant be a man of straw, who is to pay the costs?” 

— Charles Dickens
 

To start, let’s define three common types of bad arguments, or logical fallacies: “straw man,” “hollow man,” and “iron man.”

Straw man arguments

A straw man argument is a misrepresentation of an opinion or viewpoint, designed to be as easy as possible to refute. Just as a person made of straw would be easier to fight than a real human, a straw man argument is easy to knock to the ground. And just as a straw man might look a bit like a real person from a distance, a straw man argument has the rough outline of the actual position; to an outside observer, it might even seem similar. But it lacks any semblance of substance or strength. Its sole purpose is to be easy to refute. A straw man is not simply an argument you happen to find inconvenient or challenging; it is one that misrepresents the position it claims to attack. It may not even be invalid on its own terms; it is just not relevant.

It’s important not to confuse a straw man argument with a simplified summary of a complex argument. When we’re having a debate, we may sometimes need to explain an opponent’s position back to them to ensure we understand it. This explanation will by necessity be a briefer version. But it is only a straw man if the simplification is used to make the position easier to attack, rather than to facilitate clearer understanding.

There are a number of common tactics used to construct straw man arguments. One is per fas et nefas (which means “through right and wrong” in Latin) and involves refuting one of the reasons for an opponent’s argument, then claiming that discredits everything they’ve said. Often, this type of straw man argument will focus on an irrelevant or unimportant detail, selecting the weakest part of the argument. Even though they have no response to the rest of the discourse, they purport to have disproven it in its entirety. As Doug Walton, professor of philosophy at the University of Winnipeg, puts it, “The straw man tactic is essentially to take some small part of an arguer’s position and then treat it as if that represented his larger position, even though it is not really representative of that larger position. It is a form of generalizing from one aspect to a larger, broader position, but not in a representative way.”

Oversimplifying an argument makes it easier to attack by removing any important nuance. An example is the “peanut butter argument,” which states life cannot have evolved through natural selection because we do not see the spontaneous appearance of new life forms inside sealed peanut butter jars. The argument claims evolutionary theory asserts life emerged through a simple combination of matter and heat, both of which are present in a jar of peanut butter. It is a straw man because it uses an incorrect statement about evolution as being representative of the whole theory. The defender of evolution gets trapped into explaining a position they didn’t even have: why life doesn’t spontaneously develop inside a jar of peanut butter.

Another tactic is to exaggerate a line of reasoning to the point of absurdity, making it easier to refute. An example would be claiming that a politician who is not opposed to immigration must therefore favor open borders with no restrictions on who can enter the country. Since that is a weak view few people hold, the politician then feels obligated to defend border controls and risks losing control of the debate or being painted as a hypocrite.

“The light obtained by setting straw men on fire is not what we mean by illumination.”

— Adam Gopnik
 

Straw man arguments can also latch onto loosely related but irrelevant points that do nothing to refute the actual claim. For example, someone might respond to the point that wind turbines are a more environmentally friendly means of generating energy than fossil fuels by saying, “But wind turbines are ugly.” The objection has a loose connection to the topic, yet the way wind turbines look doesn’t discredit their benefits for power generation. A person who raises a point like that is likely doing so because they have no rebuttal to the actual assertion.

Quoting an argument out of context is another straw man tactic. “Quote mining” is the practice of removing any part of a source that proves contradictory, often using ellipses to cover the gaps. For instance, film posters and book blurbs will sometimes take quotes from bad reviews out of context to make them seem positive. So, “It’s amazing how bad this film is” becomes “Amazing,” and “The perfect book for people who wish to be bored to tears” becomes “The perfect book.” Reviewers face an uphill battle in trying not to write anything that could be quoted out of context in this manner.

Hollow man arguments

A hollow man argument is similar to a straw man. The difference is that it is a weak case attributed to a non-existent group: someone fabricates a viewpoint that is easy to refute, then claims it is held by a group they disagree with. Arguing against an opponent that doesn’t exist is a pretty easy way to win any debate. People who use hollow man arguments often favor vague, non-specific language, without explicitly giving any sources or stating who their opponent is.

Hollow man arguments slip into debate because they’re a lazy way of making a strong point without risking anyone refuting you or needing to be accountable for the actual strength of a line of reasoning. In Why We Argue (And How We Should): A Guide to Political Disagreement, Scott F. Aikin and Robert B. Talisse write that “speakers commit the hollow man when they respond critically to arguments that nobody on the opposing side has ever made. The act of erecting a hollow man is an argumentative failure because it distracts attention away from the actual reasons and argument given by one’s opposition. . . . It is a full-bore fabrication of the opposition.”

An example of a hollow man argument would be the claim that animal rights activists want humans and non-human animals to have a perfectly equal legal standing, meaning that dogs would have to start wearing clothes to avoid being arrested for public indecency. This is a hollow man because no one has said that all laws applying to humans should also apply to dogs.

“The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.”

— Noam Chomsky
 

Iron man arguments

An iron man argument is one constructed in such a way that it is resistant to attacks by a challenger. Iron man arguments are difficult to avoid because they have a lot of overlap with legitimate debate techniques. The distinction is whether the person using them is doing so to prevent opposition altogether or if they are open to changing their minds and listening to an opposer. Being proven wrong is painful, which is why we often unthinkingly resort to shielding ourselves from it using iron man arguments.

Someone using an iron man argument often makes their own stance so vague that nothing anyone says about it can weaken it. They’ll make liberal use of caveats, jargon, and imprecise terms. This means they can claim anyone who disagrees didn’t understand them, or they’ll rephrase their contention repeatedly. You could compare this to the language used in the average horoscope or in a fortune cookie. It’s so vague that it’s hard to disagree with or label it as incorrect because it can’t be incorrect. It’s like boxing with a wisp of steam.

An example would be a politician who answers a difficult question about their policies by saying, “I think it’s important that we take the best possible actions to benefit the people of this country. Our priority in this situation is to implement policies that have a positive impact on everyone in society.” They’ve answered the question, just without saying anything that anyone could disagree with.

Why bad arguments are harmful

What is the purpose of debate? Most of us, if asked, would say it’s about helping someone with an incorrect, harmful idea see the light. It’s an act of kindness. It’s about getting to the truth.

But the way we tend to engage in debate contradicts our supposed intentions.

Much of the time, we’re really debating because we want to prove we’re right and our opponent is wrong. Our interest is not in getting to the truth. We don’t even consider the possibility that our opponent might be correct or that we could learn something from them.

As decades of psychological research indicate, our brains are always out to save energy, and part of that is that we prefer not to change our minds about anything. It’s much easier to cling to our existing beliefs through whatever means possible and ignore anything that challenges them. Bad arguments enable us to engage in what looks like a debate but doesn’t pose any risk of forcing us to question what we stand for.

We debate for other reasons, too. Sometimes we’re out to entertain ourselves. Or we want to prove we’re smarter than someone else. Or we’re secretly addicted to the shot of adrenaline we get from picking a fight. And that’s what we’re doing—fighting, not arguing. In these cases, it’s no surprise that shoddy tactics like using straw man or hollow man arguments emerge.

It’s never fun to admit we’re wrong about anything or to have to change our minds. But it is essential if we want to get smarter and see the world as it is, not as we want it to be. Any time we engage in debate, we need to be honest about our intentions. What are we trying to achieve? Are we open to changing our minds? Are we listening to our opponent? Only when we’re out to have a balanced discussion with the possibility of changing our minds can a debate be productive, avoiding the use of logical fallacies.

Bad arguments are harmful to everyone involved in a debate. They don’t get us anywhere because we’re not tackling an opponent’s actual viewpoint. This means we have no hope of convincing them. Worse, this sort of underhand tactic is likely to make an opponent feel frustrated and annoyed by the deliberate misrepresentation of their beliefs. They’re forced to listen to a refutation of something they don’t even believe in the first place, which insults their intelligence. Feeling attacked like this only makes them hold on tighter to their actual belief. It may even make them less willing to engage in any sort of debate in the future.

And if you’re a chronic constructor of bad arguments, as many of us are, it leads people to avoid challenging you or starting discussions. Which means you don’t get to learn from them or have your views questioned. In formal situations, using bad arguments makes it look like you don’t really have a strong point in the first place.

How to avoid using bad arguments

If you want to have useful, productive debates, it’s vital to avoid using bad arguments.

The first thing we need to do to avoid constructing bad arguments is to accept it’s something we’re all susceptible to. It’s easy to look at a logical fallacy and think of all the people we know who use it. It’s much harder to recognize it in ourselves. We don’t always realize when the point we’re making isn’t that strong.

Bad arguments are almost unavoidable if we haven’t taken the time to research both sides of the debate. Sometimes the map is not the territory—that is, our perception of an opinion is not that opinion. The most useful thing we can do is attempt to see the territory. That brings us to steelman arguments and the ideological Turing test.

Steel man arguments

The most powerful way to avoid using bad arguments, and to discourage their use by others, is to follow the principle of charity and argue against the strongest and most persuasive version of an opponent’s position. To do this, we suspend disbelief and set aside our own opinions long enough to understand where the other person is coming from. We recognize the good sides of their case and engage with its strengths. Ask questions to clarify anything you don’t understand. Be curious about the other person’s perspective. You might not change their mind, but you will at least learn something and hopefully reduce conflict in the process.

“It is better to debate a question without settling it than to settle a question without debating it.”

— Joseph Joubert
 

In Intuition Pumps and Other Tools for Thinking, the philosopher Daniel Dennett offers some general guidelines for using the principle of charity, formulated by social psychologist and game theorist Anatol Rapoport:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
  3. You should mention anything you have learned from your target.
  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

An argument that is the strongest version of an opponent’s viewpoint is known as a steel man. It’s purposefully constructed to be as difficult as possible to attack. The idea is that we can only say we’ve won a debate when we’ve fought with a steel man, not a straw one. Seeing as we’re biased towards tackling weaker versions of an argument, often without realizing it, this lets us err on the side of caution.

As challenging as this might be, it serves a bigger picture purpose. Steel man arguments help us understand a new perspective, however ludicrous it might be in our eyes, so we’re better positioned to succeed and connect better in the future. It shows a challenger we are empathetic and willing to listen, regardless of personal opinion. The point is to see the strengths, not the weaknesses. If we’re open-minded, not combative, we can learn a lot.

“He who knows only his side of the case knows little of that.”

— John Stuart Mill
 

An exercise in steel manning, the ideological Turing test, proposes that we cannot say we understand an opponent’s position unless we would be able to argue in favor of it so well that an observer would not be able to tell which opinion we actually hold. In other words, we shouldn’t hold opinions we can’t argue against. The ideological Turing test is a great thought experiment to establish whether you understand where an opponent is coming from.

Although we don’t have the option to do this for every single thing we disagree with, when a debate is extremely important to us, the ideological Turing test can be a helpful tool for ensuring we’re fully prepared. Even if we can’t use it all the time, it can serve us well in high-stakes situations.

How to handle other people using bad arguments

“You could not fence with an antagonist who met rapier thrust with blow of battle axe.”

— L.M. Montgomery
 

Let’s say you’re in the middle of a debate with someone with a different opinion than yours. You’re responding to the steel man version of their explanation, staying calm and measured. But what do you do if your opponent starts using bad arguments against you? What if they’re not listening to you?

The first thing you can do when someone uses a bad argument against you is the simplest: point it out. Explain what they’re doing and why it isn’t helpful. There’s not much point in just telling them they’re using a straw man argument or any other type of logical fallacy; if they’re not familiar with the concept, it may just seem like alienating jargon. There’s also not much point in wielding it as a “gotcha!”, which will likewise foster tension. It’s best to define the concept, then reiterate your actual beliefs and how they differ from the bad argument being attacked.

Edward Damer writes in Attacking Faulty Reasoning, “It is not always possible to know whether an opponent has deliberately distorted your argument or has simply failed to understand or interpret it in the way that you intended. For this reason, it might be helpful to recapitulate the basic outline . . . or [ask] your opponent to summarize it for you.”

If this doesn’t work, you can continue to repeat your original point and make no attempt to defend the bad argument. Should your opponent prove unwilling to recognize their use of a bad argument (and you’re 100% certain that’s what they’re doing), it’s worth considering if there is any point in continuing the debate. The reality is that most of the debates we have are not rationally thought out; they’re emotionally driven. This is even more pertinent when we’re arguing with people we have a complex relationship with. Sometimes, it’s better to walk away.

Conclusion

The bad arguments discussed here are incredibly common logical fallacies in debates. We often use them without realizing it or experience them without recognizing it. But these types of debates are unproductive and unlikely to help anyone learn. If we want our arguments to create buy-in and not animosity, we need to avoid making bad ones.

The post Bad Arguments and How to Avoid Them appeared first on Farnam Street.

Why We Focus on Trivial Things: The Bikeshed Effect https://canvasly.link/bikeshed-effect/ Mon, 20 Apr 2020 11:00:39 +0000 https://canvasly.link/?p=41737

Bikeshedding is a metaphor to illustrate the strange tendency we have to spend excessive time on trivial matters, often glossing over important ones. Here’s why we do it, and how to stop.

***

How can we stop wasting time on unimportant details? From meetings at work that drag on forever without achieving anything to weeks-long email chains that don’t solve the problem at hand, we seem to spend an inordinate amount of time on the inconsequential. Then, when an important decision needs to be made, we hardly have any time to devote to it.

To answer this question, we first have to recognize why we get bogged down in the trivial. Then we must look at strategies for changing our dynamics towards generating both useful input and time to consider it.

The Law of Triviality

You’ve likely heard of Parkinson’s Law, which states that tasks expand to fill the amount of time allocated to them. But you might not have heard of the lesser-known Parkinson’s Law of Triviality, also coined by British naval historian and author Cyril Northcote Parkinson in the 1950s.

The Law of Triviality states that the amount of time spent discussing an issue in an organization is inversely correlated to its actual importance in the scheme of things. Major, complex issues get the least discussion while simple, minor ones get the most discussion.

Parkinson’s Law of Triviality is also known as “bike-shedding,” after the story Parkinson uses to illustrate it. He asks readers to imagine a financial committee meeting to discuss a three-point agenda. The points are as follows:

  1. A proposal for a £10 million nuclear power plant
  2. A proposal for a £350 bike shed
  3. A proposal for a £21 annual coffee budget

What happens? The committee runs through the nuclear power plant proposal in little time. It’s too complex for anyone to really dig into the details, and most of the members don’t know much about the topic in the first place. The one member who does is unsure how to explain it to the others. Another member suggests the plan be redrawn, but that seems like such a huge task that the rest of the committee declines to consider it.

The discussion soon moves to the bike shed. Here, the committee members feel much more comfortable voicing their opinions. They all know what a bike shed is and what it looks like. Several members begin an animated debate over the best possible material for the roof, weighing options that might enable modest savings. They discuss the bike shed for far longer than the power plant.

At last, the committee moves on to item three: the coffee budget. Suddenly, everyone’s an expert. They all know about coffee and have a strong sense of its cost and value. Before anyone realizes what is happening, they spend longer discussing the £21 coffee budget than the power plant and the bike shed combined! In the end, the committee runs out of time and decides to meet again to complete the analysis. Everyone walks away satisfied, having contributed to the conversation.

Why this happens

Bike-shedding happens because the simpler a topic is, the more people will have an opinion on it and thus more to say about it. When something is outside of our circle of competence, like a nuclear power plant, we don’t even try to articulate an opinion.

But when something is just about comprehensible to us, even if we don’t have anything of genuine value to add, we feel compelled to say something, lest we look stupid. What idiot doesn’t have anything to say about a bike shed? Everyone wants to show that they know about the topic at hand and have something to contribute.
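
Purely as a back-of-the-envelope illustration (every number below is invented), the mechanism can be expressed in a few lines of Python: if the time an item gets is driven by how many people feel qualified to comment, discussion time and importance come apart:

```python
# Toy model of the Law of Triviality. All numbers are invented and
# purely illustrative: discussion time is driven by how many committee
# members feel qualified to comment, not by how much the item costs.
items = [
    # (agenda item, cost in pounds, fraction of members with an opinion)
    ("nuclear power plant", 10_000_000, 0.1),
    ("bike shed",                  350, 0.7),
    ("coffee budget",               21, 1.0),
]
members = 11
minutes_per_comment = 3  # assumed average speaking time

for name, cost, opinionated in items:
    time = members * opinionated * minutes_per_comment
    print(f"{name:20s} cost=£{cost:>10,}  discussion ~{time:.0f} min")
# The cheapest item gets the most airtime: time tracks comfort with
# the topic, and comfort tends to fall as complexity rises.
```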

With any issue, we shouldn’t be according equal importance to every opinion anyone adds. We should emphasize the inputs from those who have done the work to have an opinion. And when we decide to contribute, we should be putting our energy into the areas where we have something valuable to add that will improve the outcome of the decision.

Strategies for avoiding bike-shedding

The main thing you can do to avoid bike-shedding is for your meeting to have a clear purpose. In The Art of Gathering: How We Meet and Why It Matters, Priya Parker, who has decades of experience designing high-stakes gatherings, says that any successful gathering (including a business meeting) needs to have a focused and particular purpose. “Specificity,” she says, “is a crucial ingredient.”

Why is having a clear purpose so critical? Because you use it as the lens to filter all other decisions about your meeting, including who to have in the room.

With that in mind, we can see that it’s probably not a great idea to discuss building a nuclear power plant and a bike shed in the same meeting. There’s not enough specificity there.

The key is to recognize that not all available input on an issue needs to be considered; the most informed opinions are the most relevant. This is one reason why big meetings with lots of people present, most of whom don’t need to be there, are such a waste of time in organizations. Everyone wants to participate, but not everyone has anything meaningful to contribute.

When it comes to choosing your list of invitees, Parker writes, “if the purpose of your meeting is to make a decision, you may want to consider having fewer cooks in the kitchen.” If you don’t want bike-shedding to occur, avoid inviting contributions from those who are unlikely to have relevant knowledge and experience. Getting the result you want—a thoughtful, educated discussion about that power plant—depends on having the right people in the room.

It also helps to have a designated individual in charge of making the final judgment. When we make decisions by committee with no one in charge, reaching a consensus can be almost impossible. The discussion drags on and on. The individual can decide in advance how much importance to accord to the issue (for instance, by estimating how much its success or failure could help or harm the company’s bottom line). They can set a time limit for the discussion to create urgency. And they can end the meeting by verifying that it has indeed achieved its purpose.

Any issue that invites a lot of discussion from different people might not be the most important one at hand. Avoid descending into unproductive triviality by setting clear goals for your meeting and getting the best people to the table for a productive, constructive discussion.

The post Why We Focus on Trivial Things: The Bikeshed Effect appeared first on Farnam Street.

Standing on the Shoulders of Giants https://canvasly.link/shoulders-of-giants/ Mon, 13 Apr 2020 13:33:34 +0000 https://canvasly.link/?p=41681

Innovation doesn’t occur in a vacuum. Doers and thinkers from Shakespeare to Jobs liberally “stole” inspiration from the doers and thinkers who came before. Here’s how to do it right.

***

“If I have seen further,” Isaac Newton wrote in a 1675 letter to fellow scientist Robert Hooke, “it is by standing on the shoulders of giants.”

It can be easy to look at great geniuses like Newton and imagine that their ideas and work came solely out of their minds, that they spun it from their own thoughts—that they were true originals. But that is rarely the case.

Innovative ideas have to come from somewhere. No matter how unique or unprecedented a work seems, dig a little deeper and you will always find that the creator stood on someone else’s shoulders. They mastered the best of what other people had already figured out, then made that expertise their own. With each iteration, they could see a little further, and they were content in the knowledge that future generations would, in turn, stand on their shoulders.

Standing on the shoulders of giants is a necessary part of creativity, innovation, and development. It doesn’t make what you do less valuable. Embrace it.

Everyone gets a lift up

Ironically, Newton’s turn of phrase wasn’t even entirely his own. The phrase can be traced back to the twelfth century, when the author John of Salisbury wrote that philosopher Bernard of Chartres compared people to dwarves perched on the shoulders of giants and said that “we see more and farther than our predecessors, not because we have keener vision or greater height, but because we are lifted up and borne aloft on their gigantic stature.”

Mary Shelley put it this way in the nineteenth century, in a preface for Frankenstein: “Invention, it must be humbly admitted, does not consist in creating out of void but out of chaos.”

There are giants in every field. Don’t be intimidated by them. They offer an exciting perspective. As the film director Jim Jarmusch advised, “Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light, and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work (and theft) will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery—celebrate it if you feel like it. In any case, always remember what Jean-Luc Godard said: ‘It’s not where you take things from—it’s where you take them to.’”

That might sound demoralizing. Some might think, “My song, my book, my blog post, my startup, my app, my creation—surely they are original? Surely no one has done this before!” But that’s likely not the case. It’s also not a bad thing. Filmmaker Kirby Ferguson states in his TED Talk: “Admitting this to ourselves is not an embrace of mediocrity and derivativeness—it’s a liberation from our misconceptions, and it’s an incentive to not expect so much from ourselves and to simply begin.”

There lies the important fact. Standing on the shoulders of giants enables us to see further, not merely as far as before. When we build upon prior work, we often improve upon it and take humanity in new directions. However original your work seems to be, the influences are there—they might just be uncredited or not obvious. As we know from social proof, copying is a natural human tendency. It’s how we learn and figure out how to behave.

In Antifragile: Things That Gain from Disorder, Nassim Taleb describes the type of antifragile inventions and ideas that have lasted throughout history. He describes himself heading to a restaurant (the likes of which have been around for at least 2,500 years), in shoes similar to those worn at least 5,300 years ago, to use silverware designed by the Mesopotamians. During the evening, he drinks wine based on a 6,000-year-old recipe, from glasses invented 2,900 years ago, followed by cheese unchanged through the centuries. The dinner is prepared with one of our oldest tools, fire, and using utensils much like those the Romans developed.

Much about our societies and cultures has undeniably changed and continues to change at an ever-faster rate. But we continue to stand on the shoulders of those who came before in our everyday life, using their inventions and ideas, and sometimes building upon them.

Not invented here syndrome

When we discredit what came before, try to reinvent the wheel, or refuse to learn from history, we hold ourselves back. After all, many of the best ideas are the oldest. “Not Invented Here Syndrome” is a term for situations in which we avoid using ideas, products, or data created by someone else, preferring instead to develop our own (even if ours is more expensive, time-consuming, and of lower quality).

The syndrome can also manifest as reluctance to outsource or delegate work. People might think their output is intrinsically better if they do it themselves, becoming overconfident in their own abilities. After all, who likes getting told what to do, even by someone who knows better? Who wouldn’t want to be known as the genius who (re)invented the wheel?

Developing a new solution for a problem is more exciting than using someone else’s ideas. But new solutions, in turn, create new problems. Some people joke that, for example, the largest Silicon Valley companies are in fact just impromptu incubators for people who will eventually set up their own business, firm in the belief that what they create themselves will be better.

The syndrome is also a case of the sunk cost fallacy. If a company has spent a lot of time and money getting a square wheel to work, it may resist buying the round ones someone else comes out with. The opportunity costs can be tremendous. Not Invented Here Syndrome pulls an organization or individual away from their core competency, wasting time and talent on what are ultimately distractions. Better to build on someone else’s idea and, in turn, become a giant for someone else.

Why Steve Jobs stole his ideas

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it. They just saw something. It seemed obvious to them after a while; that’s because they were able to connect experiences they’ve had and synthesize new things.” 

— Steve Jobs

In The Runaway Species: How Human Creativity Remakes the World, Anthony Brandt and David Eagleman trace the path that led to the creation of the iPhone and track down the giants upon whose shoulders Steve Jobs perched. We often hail Jobs as a revolutionary figure who changed how we use technology. Few who were around in 2007 could have failed to notice the buzz created by the release of the iPhone. It seemed so new, a total departure from anything that had come before. The truth is a little messier.

The first touchscreen came about almost half a century before the iPhone, developed by E.A. Johnson for air traffic control. Other engineers built upon his work and developed usable models, filing a patent in 1975. Around the same time, the University of Illinois was developing touchscreen terminals for students. Prior to touchscreens, light pens used similar technology. The first commercial touchscreen computer came out in 1983, soon followed by graphics boards, tablets, watches, and video game consoles. Casio released a touchscreen pocket computer in 1987 (remember, this is still a full twenty years before the iPhone).

However, early touchscreen devices were frustrating to use, with very limited functionality, often short battery lives, and minimal use cases for the average person. As touchscreen devices developed in complexity and usability, they laid down the groundwork for the iPhone.

Likewise, the iPod built upon the work of Kane Kramer, who took inspiration from the Sony Walkman. Kramer designed a small portable music player in the 1970s. The IXI, as he called it, looked similar to the iPod but arrived too early for a market to exist, and Kramer lacked the marketing skills to create one. When pitching to investors, Kramer described the potential for immediate delivery, digital inventory, taped live performances, back catalog availability, and the promotion of new artists and microtransactions. Sound familiar?

Steve Jobs stood on the shoulders of the many unseen engineers, students, and scientists who worked for decades to build the technology he drew upon. Although Apple has a long history of merciless lawsuits against those they consider to have stolen their ideas, many were not truly their own in the first place. Brandt and Eagleman conclude that “human creativity does not emerge from a vacuum. We draw on our experience and the raw materials around us to refashion the world. Knowing where we’ve been, and where we are, points the way to the next big industries.”

How Shakespeare got his ideas

“Nothing will come of nothing.”

— William Shakespeare, King Lear

Most, if not all, of Shakespeare’s plays draw heavily upon prior works—so much so that some question whether he would have survived today’s copyright laws.

Hamlet took inspiration from Gesta Danorum, a twelfth-century work on Danish history by Saxo Grammaticus, consisting of sixteen Latin books. Although it is doubtful whether Shakespeare had access to the original text, scholars find the parallels undeniable and believe he may have read another play based on it, from which he drew inspiration. In particular, the account of the plight of Prince Amleth (a name with the same letters as Hamlet) involves similar events.

Holinshed’s Chronicles, a co-authored account of British history from the late sixteenth century, tells stories that the plot of Macbeth closely mirrors, including the three witches. Holinshed’s Chronicles was itself a mélange of earlier texts, which transferred their biases and fabrications to Shakespeare. It also likely inspired King Lear.

Parts of Antony and Cleopatra are copied verbatim from Plutarch’s Life of Mark Antony. Arthur Brooke’s 1562 poem The Tragicall Historye of Romeus and Juliet was an undisguised template for Romeo and Juliet. Once again, there are more giants behind the scenes—Brooke copied a 1559 poem by Pierre Boaistuau, who in turn drew from a 1554 story by Matteo Bandello, who in turn drew inspiration from a 1530 work by Luigi da Porto. The list continues, with Plutarch, Chaucer, and the Bible acting as inspirations for many major literary, theatrical, and cultural works.

Yet what Shakespeare did with the works he sometimes copied, sometimes learned from, is remarkable. Take a look at any of the original texts and, despite the mimicry, you will find that they cannot compare to his plays. Many of the originals were dry, unengaging, and lacking any sort of poetic language. J.J. Munro wrote in 1908 that The Tragicall Historye of Romeus and Juliet “meanders on like a listless stream in a strange and impossible land; Shakespeare’s sweeps on like a broad and rushing river, singing and foaming, flashing in sunlight and darkening in cloud, carrying all things irresistibly to where it plunges over the precipice into a waste of waters below.”

Despite bordering on plagiarism at times, he overhauled the stories with exceptional use of the English language, bringing drama and emotion to dreary chronicles or poems. He had a keen sense for the changes required to restructure plots, creating suspense and intensity in their stories. Shakespeare saw far further than those who wrote before him, and with their help, he ushered in a new era of the English language.

Of course, it’s not just Newton, Jobs, and Shakespeare who found a (sometimes willing, sometimes not) shoulder to stand upon. Facebook is presumed to have built upon Friendster. Cormac McCarthy’s books often replicate older history texts, with one character coming straight from Samuel Chamberlain’s My Confessions. John Lennon borrowed from diverse musicians, once writing in a letter to the New York Times that though the Beatles copied black musicians, “it wasn’t a rip off. It was a love in.”

In The Ecstasy of Influence, Jonathan Lethem points to many other instances of influences in classic works. In 1916, the journalist Heinz von Lichberg published a story of a man who falls in love with his landlady’s daughter and begins a love affair, culminating in her death and his lasting loneliness. The title? Lolita. It is hard to imagine Nabokov never read it, yet aside from the plot and the name, the style of language in Nabokov’s version is entirely absent from the original.

The list continues. The point is not to be flippant about plagiarism but to cultivate sensitivity to the elements of value in a previous work, as well as the ability to build upon those elements. If we restrict the flow of ideas, everyone loses out.

The adjacent possible

What’s this about? Why can’t people come up with their own ideas? Why do so many people come up with a brilliant idea but never profit from it? The answer lies in what scientist Stuart Kauffman calls “the adjacent possible.” Quite simply, each new innovation or idea opens up the possibility of additional innovations and ideas. At any time, there are limits to what is possible, yet those limits are constantly expanding.

In Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson compares this process to being in a house where opening a door creates new rooms. Each time we open the door to a new room, new doors appear and the house grows. Johnson compares it to the formation of life, beginning with basic fatty acids. The first fatty acids to form were not capable of turning into living creatures. When they self-organized into spheres, the groundwork formed for cell membranes, and a new door opened to genetic codes, chloroplasts, and mitochondria. When dinosaurs evolved a new bone that meant they had more manual dexterity, they opened a new door to flight. When our distant ancestors evolved opposable thumbs, dozens of new doors opened to the use of tools, writing, and warfare. According to Johnson, the history of innovation has been about exploring new wings of the adjacent possible and expanding what we are capable of.

A new idea—like those of Newton, Jobs, and Shakespeare—is only possible because a previous giant opened a new door and made their work possible. They in turn opened new doors and expanded the realm of possibility. Technology, art, and other advances are only possible if someone else has laid the groundwork; nothing comes from nothing. Shakespeare could write his plays because other people had developed the structures and language that formed his tools. Newton could advance science because of the preliminary discoveries that others had made. Jobs built Apple out of the debris of many prior devices and technological advances.

The questions we all have to ask ourselves are these: What new doors can I open, based on the work of the giants that came before me? What opportunities can I spot that they couldn’t? Where can I take the adjacent possible? If you think all the good ideas have already been found, you are very wrong. Other people’s good ideas open new possibilities, rather than restricting them.

As time passes, the giants just keep getting taller and more willing to let us hop onto their shoulders. Their expertise is out there in books and blog posts, open-source software and TED talks, podcast interviews, and academic papers. Whatever we are trying to do, we have the option to find a suitable giant and see what can be learned from them. In the process, knowledge compounds, and everyone gets to see further as we open new doors to the adjacent possible.

The post Standing on the Shoulders of Giants appeared first on Farnam Street.

Unlikely Optimism: The Conjunctive Events Bias https://canvasly.link/conjunctive-events-bias/ Mon, 06 Apr 2020 10:17:30 +0000 https://canvasly.link/?p=41561

When certain events need to take place to achieve a desired outcome, we’re overly optimistic that those events will happen. Here’s why we should temper those expectations.

***

Why are we so optimistic in our estimation of the cost and schedule of a project? Why are we so surprised when something inevitably goes wrong? If we want to get better at executing our plans successfully, we need to be aware of how the conjunctive events bias can throw us way off track.

We often overestimate the likelihood of conjunctive events—occurrences that must happen in conjunction with one another. The probability of a series of conjunctive events happening is lower than the probability of any individual event. This is often very hard for us to wrap our heads around. But if we don’t try, we risk seriously underestimating the time, money, and effort required to achieve our goals.

The Most Famous Bank Teller

In Thinking, Fast and Slow, Daniel Kahneman gives a now-classic example of the conjunctive events bias. Students at several major universities received a description of a woman. They were told that Linda is 31, single, intelligent, a philosophy major, and concerned with social justice. Students were then asked to estimate which of the following statements is most likely true:

  • Linda is a bank teller.
  • Linda is a bank teller and is active in the feminist movement.

The majority of students (85% to 95%) chose the latter statement, seeing the conjunctive events (that she is both a bank teller and a feminist activist) as more probable. Two events together seemed more likely than one event. It’s perfectly possible that Linda is a feminist bank teller. It’s just not more probable for her to be a feminist bank teller than it is for her to be a bank teller. After all, the first statement does not exclude the possibility of her being a feminist; it just does not mention it.

The logic underlying the Linda example can be summed up as follows: The extension rule in probability theory states that if B is a subset of A, B cannot be more probable than A. Likewise, the probability of A and B cannot be higher than the probability of A or B. Broader categories are always more probable than their subsets. It’s more likely a randomly selected person is a parent than it is that they are a father. It’s more likely someone has a pet than they have a cat. It’s more likely someone likes coffee than they like cappuccinos. And so on.
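To make the extension rule concrete, here is a minimal sketch in Python. The specific probabilities are invented purely for illustration; the point is that however you set them, the conjunction can never come out ahead:

```python
# A minimal sketch of the extension rule, with invented numbers.
# Whatever values we assume, P(A and B) can never exceed P(A).

p_bank_teller = 0.05            # assumed probability that Linda is a bank teller
p_feminist_given_teller = 0.60  # assumed probability she is a feminist, given that

p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_bank_teller:.3f}")
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")

# P(A and B) = P(A) * P(B given A), and P(B given A) is at most 1,
# so the conjunction is always the less probable statement.
assert p_teller_and_feminist <= p_bank_teller
```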

It’s not that we always think conjunctive events are more likely. If the second option in the Linda Problem had been “Linda is a bank teller and likes to ski,” we would probably all pick the plain bank teller option, because nothing we know about Linda supports the addition. The point here is that, given what we know about Linda, we think it’s likely she’s a feminist. Therefore, we are willing to add almost anything to the Linda package if it appears alongside “feminist.” This willingness to build a narrative out of pieces that don’t actually belong together is the real danger of the conjunctive events bias.

“Plans are useless, but planning is indispensable.” 

— Dwight D. Eisenhower

Why the best-laid plans often fail

The conjunctive events bias makes us underestimate the effort required to accomplish complex plans. Most plans don’t work out. Things almost always take longer than expected. There are always delays due to dependencies. As Max Bazerman and Don Moore explain in Judgment in Managerial Decision Making, “The overestimation of conjunctive events offers a powerful explanation for the problems that typically occur with projects that require multistage planning. Individuals, businesses, and governments frequently fall victim to the conjunctive events bias in terms of timing and budgets. Home remodeling, new product ventures, and public works projects seldom finish on time.”

Plans fail because completing a sequence of tasks requires a great deal of cooperation among multiple events. As a system becomes increasingly complex, the chance of failure increases. A plan can be thought of as a system, so a change in one component will very likely affect the functioning of other parts. The more components you have, the more chances that something will go wrong in one of them, causing delays, setbacks, and failures in the rest of the system. Even if the chance of any individual component failing is slight, a large number of components increases the overall probability of failure.
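A rough sketch makes the compounding visible. Assuming, hypothetically, that each step of a plan succeeds independently 95% of the time:

```python
# A rough sketch: if a plan needs all n independent steps to succeed,
# the overall probability of success is p ** n. Numbers are illustrative.

def plan_success(p_step: float, n_steps: int) -> float:
    """Probability that every one of n independent steps succeeds."""
    return p_step ** n_steps

for n in (1, 5, 10, 20, 50):
    print(f"{n:>2} steps at 95% each -> {plan_success(0.95, n):.1%} chance of success")
```

Twenty individually reliable steps already leave only about a one-in-three chance that everything goes right, which is why multistage plans slip so often.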

Imagine you’re building a house. Things start off well. The existing structure comes down on schedule. Construction continues and the framing goes up, and you are excited to see the progress. The contractor reassures you that all trades and materials are lined up and ready to go. What is more likely:

  • The building permits get delayed
  • The building permits get delayed and the electrical goes in on schedule

You know a bit about the electrical schedule. You know nothing about the permits. But you optimistically bundle the two together, erroneously linking one to the other. So you don’t worry about the building permits and never imagine that their delay will impact the electrical. When the permits do get delayed, you have to pay the electrician for the week he can’t work, and then wait for him to finish another job before he can resume yours.

Thus, the more steps involved in a plan, the greater the chance of failure, particularly when we assign linked probabilities to events that aren’t related at all. That risk only grows as more people get involved, each bringing their own biases and misconceptions about chance.

In Seeking Wisdom: From Darwin to Munger, Peter Bevelin writes:

A project is composed of a series of steps where all must be achieved for success. Each individual step has some probability of failure. We often underestimate the large number of things that may happen in the future or all opportunities for failure that may cause a project to go wrong. Humans make mistakes, equipment fails, technologies don’t work as planned, unrealistic expectations, biases including sunk cost-syndrome, inexperience, wrong incentives, changing requirements, random events, ignoring early warning signals are reasons for delays, cost overruns, and mistakes. Often we focus too much on the specific base project case and ignore what normally happens in similar situations (base rate frequency of outcomes—personal and others). Why should some project be any different from the long-term record of similar ones? George Bernard Shaw said: “We learn from history that man can never learn anything from history.”

The more independent steps that are involved in achieving a scenario, the more opportunities for failure and the less likely it is that the scenario will happen. We often underestimate the number of steps, people, and decisions involved.

We can’t pretend that knowing about the conjunctive events bias will automatically stop us from having it. When, however, we are planning something whose successful outcome matters to us, it’s useful to run through our assumptions with this bias in mind. Sometimes, assigning frequencies instead of probabilities can also show us where our assumptions might be leading us astray. In the housing example above, asking how often building permits are delayed in every hundred houses, versus how often permits are delayed and the electrical still goes in on time for the same hundred, makes it much easier to see that the first outcome is the more frequent one.
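Here is a hedged simulation of that frequency framing, with the underlying probabilities invented for the example:

```python
import random

# Simulate 100 house builds and count how often permits are delayed,
# versus permits delayed AND electrical still on schedule.
# Both probabilities below are assumptions made up for illustration.

random.seed(42)
N_HOUSES = 100
P_PERMIT_DELAY = 0.30        # assumed chance the permits are delayed
P_ELECTRICAL_ON_TIME = 0.70  # assumed chance the electrical goes in on schedule

delayed = 0
delayed_and_on_time = 0
for _ in range(N_HOUSES):
    permit_delayed = random.random() < P_PERMIT_DELAY
    electrical_on_time = random.random() < P_ELECTRICAL_ON_TIME
    delayed += permit_delayed
    delayed_and_on_time += permit_delayed and electrical_on_time

print(f"Permits delayed:                     {delayed} of {N_HOUSES} houses")
print(f"Permits delayed AND electrical done: {delayed_and_on_time} of {N_HOUSES} houses")
# The second count can never exceed the first: the conjunction is a subset.
```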

It is also extremely useful to keep a decision journal for our major decisions, so that we can be more realistic in our estimates of the time and resources we need for future plans. The more realistic we are, the higher our chances of accomplishing what we set out to do.

The conjunctive events bias teaches us to be more pessimistic about plans and to consider the worst-case scenario, not just the best. We may assume things will always run smoothly, but disruption is the rule rather than the exception.

The post Unlikely Optimism: The Conjunctive Events Bias appeared first on Farnam Street.

Using Models to Stay Calm in Charged Situations https://canvasly.link/models-charged-situations/ Mon, 02 Mar 2020 12:30:29 +0000 https://canvasly.link/?p=41053

When polarizing topics are discussed in meetings, passions can run high and cloud our judgment. Learn how mental models can help you see clearly from this real-life scenario.

***

Mental models can sometimes come off as an abstract concept. They are, however, actual tools you can use to navigate through challenging or confusing situations. In this article, we are going to apply our mental models to a common situation: a meeting with conflict.

A recent meeting with the school gave us an opportunity to use our latticework. Anyone with school-age kids has dealt with the bureaucracy of a school system and the other parents who interact with it. Call it what you will, most school environments have some formal interface between parents and the school administration aimed at advancing issues and ideas of importance to the school community.

This particular meeting was an intense one. At issue was the school’s communication around a potentially harmful leak in the heating system. Some parents felt the school had communicated reasonably about the problem and the potential consequences. Others felt their child’s life had been put in danger due to potential exposure to mold and asbestos. Some parents felt the school could have done a better job of soliciting feedback from students about their experiences during the previous week, and others felt the school administration had done a poor job of communicating potential risks to parents.

The first thing you’ll notice if you’re in a meeting like this is that emotions on all sides run high. After some discussion, you might also notice a few more things, like how many people do the following: assume malice on the part of the administration, see the situation only from their own vantage point, ignore the relevant base rates, and generalize from one or two alarming anecdotes.

Statements like these, when you hear them from people around the table, are a great indication that using a few mental models might improve the dynamics of the situation.

The first mental model that is invaluable in situations like this is Hanlon’s Razor: don’t attribute to maliciousness that which is more easily explained by incompetence. (Hanlon’s Razor is one of the 9 general thinking concepts in The Great Mental Models Volume One.) When people feel victimized, they can get angry and lash out in an attempt to fight back against a perceived threat. When people feel accused of serious wrongdoing, they can get defensive and withhold information to protect themselves. Neither of these reactions is useful in a situation like this. Yes, sometimes people intentionally do bad things. But more often than not, bad things are the result of incompetence. In a school meeting situation, it’s safe to assume everyone at the table has the best interests of the students at heart. School staff and administrators usually go into teaching motivated by a deep love of education. They genuinely want their schools to be amazing places of learning, and they devote time and attention to improving the lives of their students.

It makes no sense to assume a school’s administration would deliberately withhold harmful information. Yes, it could happen. But in either case, you are going to obtain more valuable information if you assume poor decisions were the result of incompetence rather than malice.

When we feel people are malicious toward us, we instinctively become a negatively coiled spring, waiting for the right moment to take them down a notch or two. Removing malice from the equation, you give yourself emotional breathing room to work toward better solutions and apply more models.

The next helpful model is relativity, adapted from the laws of physics. This model is about remembering that everyone’s perspective is different from yours. Understanding how others see the same situation can help you move toward a more meaningful dialogue with the people in the meeting. You can do this by looking around the room and asking yourself what is influencing people’s approaches to the situation.

In our school meeting, we see some people are afraid for their child’s health. Others are influenced by past dealings with the school administration. Authorities are worried about closing the school. Teachers are concerned about how missed time might impact their students’ learning. Administrators are trying to balance the needs of parents with their responsibility to follow the necessary procedures. Some parents are stressed because they don’t have care for their children when the school closes. There is a lot going on, and relativity gives us a lens to try to identify the dynamics impacting communication.

After understanding the different perspectives, it becomes easier to incorporate them into your thinking. You can defuse conflict by naming what you think you hear. Often, just the feeling of being heard will help people start to listen and engage more objectively.

Now you can dive into some of the details. First up is probabilistic thinking. Before we worry about mold levels or sick children, let’s try to identify the base rates. What is the mold content in the air outside? How many children are typically absent due to sickness at this time of year? Reminding people that severity has to be evaluated against something in a situation like this can really help defuse stress and concern. If 10% of the student population is absent on any given day, and in the week leading up to these events 12% to 13% of the population was absent, then it turns out we are not actually dealing with a huge statistical anomaly.
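For anyone inclined to check that intuition, here is a quick sketch with assumed numbers: a school of 500 students and a 10% baseline absence rate:

```python
from math import comb

# How surprising is 12% absence if 10% is normal? Exact binomial tail
# probability of seeing at least the observed number of absences by chance.
# School size and rates are assumptions for illustration.

n_students, base_rate = 500, 0.10
observed = int(0.12 * n_students)  # 60 students absent

p_tail = sum(comb(n_students, k) * base_rate**k * (1 - base_rate)**(n_students - k)
             for k in range(observed, n_students + 1))

print(f"P(at least {observed} of {n_students} absent) = {p_tail:.3f}")
# A tail probability around 0.08 is noticeable but hardly a dramatic anomaly.
```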

Then you can evaluate the anecdotes with the Law of Large Numbers in mind. Small sample sizes can be misleading. The larger the group you evaluate, the more relevant the conclusions. In a situation such as our school council meeting, small samples only serve to ratchet up the emotion by implying that isolated incidents are the causal outcomes of recent events.

In reality, any one-off occurrence can often be explained in multiple ways. One or two children coming home with hives? There are a dozen reasonable explanations for that: allergies, dry skin, reaction to a skin cream, a symptom of an illness unrelated to the school environment, and so on. However, the more children that develop hives, the more statistically likely it becomes that the cause relates to the one common denominator among all the children: the school environment.
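A short simulation shows why group size matters here; the background rate below is made up for illustration:

```python
import random

# Hives occur at a fixed background rate unrelated to the school.
# Small groups swing far from the true rate; large groups converge toward it.

random.seed(7)
BACKGROUND_RATE = 0.05  # assumed everyday rate of hives from unrelated causes

for group_size in (10, 30, 100, 1000):
    cases = sum(random.random() < BACKGROUND_RATE for _ in range(group_size))
    print(f"group of {group_size:>4}: observed rate {cases / group_size:.1%}")
```

A class of ten can easily show double the true rate, or none at all, which is why a handful of anecdotes is weak evidence of a common cause.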

Even then, correlation does not equal causation. It might not be a recent leaky steam pipe; is it exam time? Are there other stressors in the culture? Other contaminants in the environment? The larger your sample size, the more likely you will obtain relevant information.

Finally, you can practice systems thinking and contribute to the discussion by identifying the other components in the system you are all dealing with. After all, a school council is just one part of a much larger system involving governments, school boards, legislators, administrators, teachers, students, parents, and the community. When you put your meeting into the bigger context of the entire system, you can identify the feedback loops: Who is responding to what information, and how quickly does their behavior change? When you do this, you can start to suggest some possible steps and solutions to remedy the situation and improve interactions going forward.

How is the information flowing? How fast does it move? How much time does each recipient have to adjust before receiving more information? Chances are, you aren’t going to know all this at the meeting. So you can ask questions. Does the principal have to get approval from the school board before sending out communications involving risk to students? Can teachers communicate directly with parents? What are the conditions for communicating possible risk? Will speculation increase the speed of a self-reinforcing feedback loop causing panic? What do parents need to know to make an informed decision about the welfare of their child? What does the school need to know to make an informed decision about the welfare of their students?

In meetings like the one described here, there is no doubt that communication is important. Using the meeting to discuss and debate ways of improving communication so that outcomes are generally better in the future is a valuable use of time.

A school meeting is one practical example of how having a latticework of mental models can be useful. Using mental models can help you defuse some of the emotions that create an unproductive dynamic. They can also help you bring forward valuable, relevant information to assist the different parties in improving their decision-making process going forward.

At the very least, you will walk away from the meeting with a much better understanding of how the world works, and you will have gained some strategies you can implement in the future to leverage this knowledge instead of fighting against it.

The post Using Models to Stay Calm in Charged Situations appeared first on Farnam Street.

The Illusory Truth Effect: Why We Believe Fake News, Conspiracy Theories and Propaganda https://canvasly.link/illusory-truth-effect/ Mon, 03 Feb 2020 12:00:31 +0000 https://canvasly.link/?p=40974

When a “fact” tastes good and is repeated enough, we tend to believe it, no matter how false it may be. Understanding the illusory truth effect can keep us from being bamboozled.

***

A recent Verge article looked at some of the unsavory aspects of working as Facebook content moderators—the people who spend their days cleaning up the social network’s most toxic content. One strange detail stands out. The moderators the Verge spoke to reported that they and their coworkers often found themselves believing fringe, often hatemongering conspiracy theories they would have dismissed under normal circumstances. Others described experiencing paranoid thoughts and intense fears for their safety.

An overnight switch from skepticism to fervent belief in conspiracy theories is not unique to content moderators. In a Nieman Lab article, Laura Hazard Owen explains that researchers who study the spread of disinformation online can find themselves struggling to be sure of their own beliefs and needing to make an active effort to counteract what they see. Some of the most fervent, passionate conspiracy theorists admit that they first fell into the rabbit hole when they tried to debunk the beliefs they now hold. There’s an explanation for why this happens: the illusory truth effect.

The illusory truth effect

Facts do not cease to exist because they are ignored.

— Aldous Huxley

Not everything we believe is true. We may act like it is and it may be uncomfortable to think otherwise, but it’s inevitable that we all hold a substantial number of beliefs that aren’t objectively true. It’s not about opinions or different perspectives. We can pick up false beliefs for the simple reason that we’ve heard them a lot.

If I say that the moon is made of cheese, no one reading this is going to believe that, no matter how many times I repeat it. That statement is too ludicrous. But what about something a little more plausible? What if I said that moon rock has the same density as cheddar cheese? And what if I wasn’t the only one saying it? What if you’d also seen a tweet touting this amazing factoid, perhaps also heard it from a friend at some point, and read it in a blog post?

Unless you’re a geologist, a lunar fanatic, or otherwise in possession of an unusually good radar for moon rock-related misinformation, there is a not insignificant chance you would end up believing a made-up fact like that, without thinking to verify it. You might repeat it to others or share it online. This is how the illusory truth effect works: we all have a tendency to believe something is true after being exposed to it multiple times. The more times we’ve heard something, the truer it seems. The effect is so powerful that repetition can persuade us to believe information we know is false in the first place. Ever thought a product was stupid but somehow you ended up buying it on a regular basis? Or you thought that new manager was okay, but now you participate in gossip about her?

The illusory truth effect is the reason why advertising works and why propaganda is one of the most powerful tools for controlling how people think. It’s why the speech of politicians can be bizarre and multiple-choice tests can cause students problems later on. It’s why fake news spreads and retractions of misinformation don’t work. In this post, we’re going to look at how the illusory truth effect works, how it shapes our perception of the world, and how we can avoid it.

The discovery of the illusory truth effect

Rather than love, than money, than fame, give me truth.

— Henry David Thoreau

The illusory truth effect was first described in a 1977 paper entitled “Frequency and the Conference of Referential Validity,” by Lynn Hasher and David Goldstein of Temple University and Thomas Toppino of Villanova University. In the study, the researchers presented a group of students with 60 statements and asked them to rate how certain they were that each was either true or false. The statements came from a range of subjects and were all intended to be not too obscure, but unlikely to be familiar to study participants. Each statement was objective—it could be verified as either correct or incorrect and was not a matter of opinion. For example, “the largest museum in the world is the Louvre in Paris” was true.

Students rated their certainty three times, with two weeks in between evaluations. Some of the statements were repeated each time, while others were not. With each repetition, students became surer of their certainty regarding the statements they labelled as true. It seemed that they were using familiarity as a gauge for how confident they were of their beliefs.

An important detail is that the researchers did not repeat the first and last 10 items on each list. They felt students would be most likely to remember these and be able to research them before the next round of the study. While the study was not conclusive evidence of the existence of the illusory truth effect, subsequent research has confirmed its findings.

Why the illusory truth effect happens

The sad truth is the truth is sad.

— Lemony Snicket

Why does repetition of a fact make us more likely to believe it, and to be more certain of that belief? As with other cognitive shortcuts, the typical explanation is that it’s a way our brains save energy. Thinking is hard work—remember that the human brain uses up about 20% of an individual’s energy, despite accounting for just 2% of their body weight.

The illusory truth effect comes down to processing fluency. When a thought is easier to process, it requires our brains to use less energy, which leads us to prefer it. The students in Hasher’s original study recognized the repeated statements, even if not consciously. That means that processing them was easier for their brains.

Processing fluency seems to have a wide impact on our perception of truthfulness. Rolf Reber and Norbert Schwarz, in their article “Effects of Perceptual Fluency on Judgments of Truth,” found that statements presented in an easy-to-read color are judged as more likely to be true than ones presented in a less legible way. In their article “Birds of a Feather Flock Conjointly (?): Rhyme as Reason in Aphorisms,” Matthew S. McGlone and Jessica Tofighbakhsh found that aphorisms that rhyme (like “what sobriety conceals, alcohol reveals”), even if someone hasn’t heard them before, seem more accurate than non-rhyming versions. Once again, they’re easier to process.

Fake news

“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken.”

— Carl Sagan

The illusory truth effect is one factor in why fabricated news stories sometimes gain traction and have a wide impact. When this happens, our knee-jerk reaction can be to assume that anyone who believes fake news must be unusually gullible or outright stupid. Evan Davis writes in Post Truth, “Never before has there been a stronger sense that fellow citizens have been duped and that we are all suffering the consequences of their intellectual vulnerability.” As Davis goes on to write, this assumption isn’t helpful for anyone. We can’t begin to understand why people believe seemingly ludicrous news stories until we consider some of the psychological reasons why this might happen.

Fake news falls under the umbrella of “information pollution,” which also includes news items that misrepresent information, take it out of context, parody it, fail to check facts or do background research, or take claims from unreliable sources at face value. Some of this news gets published on otherwise credible, well-respected news sites due to simple oversight. Some goes on parody sites that never purport to tell the truth, yet are occasionally mistaken for serious reporting. Some shows up on sites that replicate the look and feel of credible sources, using similar web design and web addresses. And some fake news comes from sites dedicated entirely to spreading misinformation, without any pretense of being anything else.

A lot of information pollution falls somewhere in between the extremes that tend to get the most attention. It’s the result of people being overworked or in a hurry and unable to do the due diligence that reliable journalism requires. It’s what happens when we hastily tweet something or mention it in a blog post and don’t realize it’s not quite true. It extends to miscited quotes, doctored photographs, fiction books masquerading as memoirs, or misleading statistics.

The signal-to-noise ratio is so skewed that we have a hard time figuring out what to pay attention to and what to ignore. No one has time to verify everything they read online. No one. (And no, offline media certainly isn’t perfect either.) Our information-processing capabilities are not infinite, and the more we consume, the harder it becomes to assess its value.

Moreover, we’re often far outside our circle of competence, reading about topics we don’t have the expertise to assess in any meaningful way. This drip-drip of information pollution is not harmless. Like air pollution, it builds up over time, and the more we’re exposed to it, the more likely we are to end up picking up false beliefs which are then hard to shift. For instance, a lot of people believe that crime, especially the violent kind, is on an upward trend year by year—in a 2016 study by Pew Research, 57% of Americans believed crime had worsened since 2008. This despite violent crime having actually fallen by nearly a fifth during that time. This false belief may stem from the fact that violent crime receives a disproportionate amount of media coverage, giving it wide and repeated exposure.

When people are asked to rate the apparent truthfulness of news stories, they score ones they have read multiple times as more truthful than those they haven’t. Danielle C. Polage, in her article “Making Up History: False Memories of Fake News Stories,” explains that a false story someone has been exposed to more than once can seem more credible than a true one they’re seeing for the first time. In experimental settings, people also misattribute their previous exposure to stories, believing they read a news item from another source when they actually saw it in an earlier part of the study. Even when people know the story is part of the experiment, they sometimes think they’ve also read it elsewhere. The repetition is all that matters.

Given enough exposure to contradictory information, there is almost no knowledge that we won’t question.

Propaganda

If a lie is only printed often enough, it becomes a quasi-truth, and if such a truth is repeated often enough, it becomes an article of belief, a dogma, and men will die for it.

— Isa Blagden

Propaganda and fake news are similar. By relying on repetition, disseminators of propaganda can change the beliefs and values of people.

Propaganda has a lot in common with advertising, except instead of selling a product or service, it’s about convincing people of the validity of a particular cause. Propaganda isn’t necessarily malicious; sometimes the cause is improved public health or boosting patriotism to encourage military enrollment. But often propaganda is used to undermine political processes to further narrow, radical, and aggressive agendas.

During World War II, the graphic designer Abram Games served as the official war artist for the British government. Games’s work is iconic and era-defining for its punchy, brightly colored visual style. His army recruitment posters would often feature a single figure rendered in a proud, strong, admirable pose with a mere few words of text. They conveyed to anyone who saw them the sorts of positive qualities they would supposedly gain through military service. Whether this was true or not was another matter. Through repeated exposure to the posters, Games instilled the image the army wanted to create in the minds of viewers, affecting their beliefs and behaviors.

Today, propaganda is more likely to be a matter of quantity over quality. It’s not about a few artistic posters. It’s about saturating the intellectual landscape with content that supports a group’s agenda. With so many demands on our attention, old techniques are too weak.

Researchers Christopher Paul and Miriam Matthews at the RAND Corporation refer to this method of bombarding people with fabricated information as the “firehose of falsehood” model. While their report focuses on modern Russian propaganda, the techniques it describes are not confined to Russia. These techniques make use of the illusory truth effect, alongside other cognitive shortcuts. Firehose propaganda has four distinct features:

  • High-volume and multi-channel
  • Rapid, continuous and repetitive
  • Makes no commitment to objective reality
  • Makes no commitment to consistency

Firehose propaganda is predicated on exposing people to the same messages as frequently as possible. It involves a large volume of content, repeated again and again across numerous channels: news sites, videos, radio, social media, television and so on. These days, as the report describes, this can also include internet users who are paid to repeatedly post in forums, chat rooms, comment sections and on social media disputing legitimate information and spreading misinformation. It is the sheer volume that succeeds in obliterating the truth. Research into the illusory truth effect suggests that we are further persuaded by information heard from multiple sources, hence the efficacy of funneling propaganda through a range of channels.

Seeing as repetition leads to belief in many cases, firehose propaganda doesn’t need to pay attention to the truth or even to be consistent. A source doesn’t need to be credible for us to end up believing its messages. Fact-checking is of little help because it further adds to the repetition, yet we feel compelled not to ignore obviously untrue propagandistic material.

Firehose propaganda does more than spread fake news. It nudges us towards feelings like paranoia, mistrust, suspicion, and contempt for expertise. All of this makes future propaganda more effective. Unlike those espousing the truth, propagandists can move fast because they’re making up some or all of what they claim, meaning they gain a foothold in our minds first. First impressions are powerful. Familiarity breeds trust.

How to combat the illusory truth effect

So how can we protect ourselves from believing false news and being manipulated by propaganda due to the illusory truth effect? The best route is to be far more selective. The information we consume is like the food we eat. If it’s junk, our thinking will reflect that.

We don’t need to spend as much time reading the news as most of us do. As with many other things in life, more can be less. The vast majority of the news we read is just information pollution. It doesn’t do us any good.

One of the best solutions is to quit the news. This frees up time and energy to engage with timeless wisdom that will improve your life. Try it for a couple of weeks. And if you aren’t convinced, read a few days’ worth of newspapers from 1978. You’ll see how much the news doesn’t really matter at all.

If you can’t quit the news habit, stick to reliable, well-known news sources that have a reputation to uphold. Steer clear of dubious sources whenever you can—even if you treat them as entertainment, you might still end up absorbing what they say. Research unfamiliar sources before trusting them. Be cautious of sites that are funded entirely by advertising (or that pay their journalists based on views), and seek to support reader-funded news sources you get value from if possible. Prioritize sites that treat their journalists well and don’t expect them to churn out dozens of thoughtless articles per day. Don’t rely on news from social media posts without sources, or from people speaking outside their circle of competence.

Avoid treating the news as entertainment to passively consume on the bus or while waiting in line. Be mindful about it—if you want to inform yourself on a topic, set aside designated time to learn about it from multiple trustworthy sources. Don’t assume breaking news is better, as it can take some time for the full details of a story to come out and people may be quick to fill in the gaps with misinformation. Accept that you can’t be informed about everything and most of it isn’t important. Pay attention to when news items make you feel outrage or other strong emotions, because this may be a sign of manipulation. Be aware that correcting false information can further fuel the illusory truth effect by adding to the repetition.

We can’t stop the illusory truth effect from existing. But we can recognize that it is a reality and seek to prevent ourselves from succumbing to it in the first place.

Conclusion

Our memories are imperfect. We are easily led astray by the illusory truth effect, which can direct what we believe and even change our understanding of the past. It’s not about intelligence—this happens to all of us. This effect is too powerful for us to override it simply by learning the truth. Cognitively, there is no distinction between a genuine memory and a false one. Our brains are designed to save energy and it’s crucial we accept that.

We can’t just pull back and think the illusory truth effect only applies to other people. It applies to everyone. We’re all responsible for our own beliefs. We can’t pin the blame on the media or social media algorithms or whatever else. When we put effort into thinking about and questioning the information we’re exposed to, we’re less vulnerable to the illusory truth effect. Knowing about the effect is the best way to identify when it’s distorting our worldview. Before we use information as the basis for important decisions, it’s a good idea to verify whether it’s true, or whether it’s just something we’ve heard a lot.

Truth is a precarious thing, not because it doesn’t objectively exist, but because the incentives to warp it can be so strong. It’s up to each of us to seek it out.

The post The Illusory Truth Effect: Why We Believe Fake News, Conspiracy Theories and Propaganda appeared first on Farnam Street.
