Why Life Can’t Be Simpler
https://canvasly.link/why-life-cant-be-simpler/ (October 5, 2020)

We’d all like life to be simpler. But we also don’t want to sacrifice our options and capabilities. Tesler’s law of the conservation of complexity, a rule from design, explains why we can’t have both. Here’s how the law can help us create better products and services by rethinking simplicity.

“Why can’t life be simple?”

We’ve all likely asked ourselves that at least once. After all, life is complicated. Every day, we face processes that seem almost infinitely recursive. Each step requires the completion of a different task to make it possible, which in itself requires another task. We confront tools requiring us to memorize reams of knowledge and develop additional skills just to use them. Endeavors that seem like they should be simple, like getting utilities connected in a new home or figuring out the controls for a fridge, end up having numerous perplexing steps.

When we wish for things to be simpler, we usually mean we want products and services to have fewer steps, fewer controls, fewer options, less to learn. But at the same time, we still want all of the same features and capabilities. These two categories of desires are often at odds with each other and distort how we understand the complex.

***

Conceptual Models

In Living with Complexity, Donald A. Norman explains that complexity is all in the mind. Our perception of a product or service as simple or complex has its basis in the conceptual model we have of it. Norman writes that “A conceptual model is the underlying belief structure held by a person about how something works . . . Conceptual models are extremely important tools for organizing and understanding otherwise complex things.”

For example, on many computers, you can drag and drop a file into a folder. Both the file and the folder often have icons that represent their real-world namesakes. For the user, this process is simple; it provides a clear conceptual model. When people first started using graphical interfaces, real-world terms and icons made it easier to translate what they were doing. But the process only seems simple because of this effective conceptual model. It doesn’t represent what happens on the computer, where files and folders don’t exist. Computers store data wherever is convenient and may split files across multiple locations.

When we want something to be simpler, what we truly need is a better conceptual model of it. Once we know how to use them, complex tools end up making our lives simpler because they provide the precise functionality we want. A computer file is a great conceptual model because it hijacked something people already understood: physical files and folders. It would have been much harder for them to develop a whole new conceptual model reflecting how computers actually store files. What’s important to note is that giving users this simple conceptual model didn’t change how things work behind the scenes.

Removing functionality doesn’t make something simpler, because it removes options. Simple tools have a limited ability to simplify processes. Trying to do something complex with a simple tool is more complex than doing the same thing with a more complex tool.

A useful analogy here is the hand tools used by craftspeople, such as a silversmith’s planishing hammer (a tool used to shape and smooth the surface of metal). Norman highlights that these tools seem simple to the untrained eye. But using them requires great skill and practice. A craftsperson needs to know how to select them from the whole constellation of specialized tools they possess.

In itself, a planishing hammer might seem far, far simpler than, say, a digital photo editing program. Look again, Norman says. We have to compare the photo editing tool with the silversmith’s whole workbench. Both take a lot of time and practice to master. Both consist of many tools that are individually simple. Learning how and when to use them is the complex part.

Norman writes, “Whether something is complicated is in the mind of the beholder.” Looking at a workbench of tools or a digital photo editing program, a novice sees complexity. A professional sees a range of different tools, each of which is simple to use. They know when to use each to make a process easier. Having fewer options would make their life more complex, not simpler, because they wouldn’t be able to break what they need to do down into individually simple steps. A professional’s experience-honed conceptual model helps them navigate a wide range of tools.

***

The conservation of complexity

To do difficult things in the simplest way, we need a lot of options.

Complexity is necessary because it gives us the functionality we need. A useful framework for understanding this is Tesler’s law of the conservation of complexity, which states:

The total complexity of a system is a constant. If you make a user’s interaction with a system simpler, the complexity behind the scenes increases.

The law originates from Lawrence Tesler (1945–2020), a computer scientist specializing in human-computer interaction who worked at Xerox, Apple, Amazon, and Yahoo!. Tesler was influential in the development of early graphical interfaces, and he was co-creator of copy-and-paste functionality.

Complexity is like energy. It cannot be created or destroyed, only moved somewhere else. When a product or service becomes simpler for users, engineers and designers have to work harder. Norman writes, “With technology, simplifications at the level of usage invariably result in added complexity of the underlying mechanism.” For example, the files and folders conceptual model for computer interfaces doesn’t change how files are stored, but by putting in extra work to translate the process into something recognizable, designers make navigating them easier for users.
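To see how this plays out in software terms, here is a minimal Python sketch of the same idea; every function name and storage detail is invented for illustration rather than taken from Norman or Tesler. The caller gets a dead-simple “save” and “load” interface, while the complexity of deciding where and how the bytes live doesn’t disappear, it just moves inside the implementation.

import hashlib
import os

STORE_DIR = "store"     # hypothetical storage location
CHUNK_SIZE = 4096       # content is split into fixed-size chunks internally

def save_document(name: str, text: str) -> None:
    """The user-facing interface: one call, no options, no decisions."""
    os.makedirs(STORE_DIR, exist_ok=True)
    data = text.encode("utf-8")
    chunk_ids = []
    # Behind the scenes: split the data into chunks, store each chunk under
    # a content hash, and keep an index so the pieces can be reassembled.
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        chunk_id = hashlib.sha256(chunk).hexdigest()
        with open(os.path.join(STORE_DIR, chunk_id), "wb") as f:
            f.write(chunk)
        chunk_ids.append(chunk_id)
    with open(os.path.join(STORE_DIR, name + ".index"), "w") as f:
        f.write("\n".join(chunk_ids))

def load_document(name: str) -> str:
    """Equally simple to call; the reassembly logic stays hidden."""
    with open(os.path.join(STORE_DIR, name + ".index")) as f:
        chunk_ids = f.read().splitlines()
    parts = []
    for chunk_id in chunk_ids:
        with open(os.path.join(STORE_DIR, chunk_id), "rb") as f:
            parts.append(f.read())
    return b"".join(parts).decode("utf-8")

save_document("notes", "Complexity is conserved, not destroyed.")
print(load_document("notes"))

The file-and-folder picture the user sees is just the friendly index; the chunking and naming work that would otherwise fall on the user has been done once, behind the scenes, by whoever built the tool.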

Whether something looks simple or is simple to use says little about its overall complexity. “What is simple on the surface can be incredibly complex inside: what is simple inside can result in an incredibly complex surface. So from whose point of view do we measure complexity?”

***

Out of control

Every piece of functionality requires a control—something that makes something happen. The more complex something is, the more controls it needs—whether they are visible to the user or not. Controls may be directly accessible to a user, as with the home button on an iPhone, or they may be behind the scenes, as with an automated thermostat.

From a user’s standpoint, the simplest products and services are those that are fully automated and do not require any intervention (unless something goes wrong).

As long as you pay your bills, the water supply to your house is probably fully automated. When you turn on a tap, you don’t need to have requested there to be water in the pipes first. The companies that manage the water supply handle the complexity.

Or, if you stay in an expensive hotel, you might find your room is always as you want it, with your minifridge fully stocked with your favorites and any toiletries you forgot provided. The staff work behind the scenes to make this happen, without you needing to make requests.

On the other end of the spectrum, we have products and services that require users to control every last step.

A professional photographer is likely to use a camera that needs them to manually set every last setting, from white balance to shutter speed. This means the camera itself doesn’t need automation, but the user needs to operate controls for everything, giving them full control over the results. An amateur photographer might use a camera that automatically chooses these settings so all they need to do is point and shoot. In this case, the complexity transfers to the camera’s inner workings.

In the restaurants inside IKEA stores, customers typically perform tasks such as filling up drinks and clearing away dishes themselves. This means less complexity for staff and much lower prices compared to restaurants where staff do these things.

***

Lessons from the conservation of complexity

The first lesson from Tesler’s law of the conservation of complexity is that how simple something looks is not a reflection of how simple it is to use. Removing controls can mean users need to learn complex sequences to use the same features—similar to how languages with fewer sounds have longer words. One way to conceptualize the movement of complexity is through the notion of trade-offs. If complexity is constant, then there are trade-offs depending on where that complexity is moved.

A very basic example of complexity trade-offs can be found in the history of arithmetic. For centuries, many counting systems all over the world employed tools using stones or beads like a tabula (the Romans) or soroban (the Japanese) to facilitate adding and subtracting numbers. They were easy to use, but not easily portable. Then the Hindu-Arabic system came along (the one we use today) and by virtue of employing columns, and thus not requiring any moving parts, offered a much more portable counting system. However, the portability came with a cost.

Paul Lockhart explains in Arithmetic, “With the Hindu-Arabic system the writing and calculating are inextricably linked. Instead of moving stones or sliding beads, our manipulations become transmutations of the symbols themselves. That means we need to know things. We need to know that one more than 2 is 3, for instance. In other words, the price we pay [for portability] is massive amounts of memorization.” Thus, there is a trade-off. The simpler arithmetic system requires more complexity in terms of the memorization required of the users. We all went through the difficult process of learning mathematical symbols early in life. Although they might seem simple to us now, that’s just because we’re so accustomed to them.
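As a purely illustrative aside (the code and names below are my own, not Lockhart’s), the paper-and-pencil addition procedure can be written out in a few lines of Python, and doing so makes the hidden cost visible: the algorithm is short and portable, but it only works because a table of single-digit “addition facts” has been memorized in advance.

# The "massive amounts of memorization": a table of single-digit sums.
ADDITION_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def column_add(x: str, y: str) -> str:
    """Add two numbers written as digit strings, the way we do on paper."""
    x, y = x.zfill(len(y)), y.zfill(len(x))        # line the columns up
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):   # work right to left
        total = ADDITION_FACTS[(int(dx), int(dy))] + carry   # recall a memorized fact
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(column_add("47", "38"))   # "85" -- relies on knowing 7+8=15 and 4+3=7

An abacus user carries no such table in their head; the beads do the remembering, at the cost of having to carry the beads.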

Although perceived simplicity may have greater appeal at first, users are soon frustrated if it means greater operational complexity. Norman writes:

Perceived simplicity is not at all the same as simplicity of usage: operational simplicity. Perceived simplicity decreases with the number of visible controls and displays. Increase the number of visible alternatives and the perceived simplicity drops. The problem is that operational simplicity can be drastically improved by adding more controls and displays. The very things that make something easier to learn and to use can also make it be perceived as more difficult.

Even if it receives a negative reaction before usage, operational simplicity is the more important goal. For example, in a company, having a clearly stated directly responsible person for each project might seem more complex than letting a project be a team effort that falls to whoever is best suited to each part. But in practice, the diffuse approach adds complexity whenever someone tries to move the project forward or needs to know who should hear feedback about problems.

A second lesson is that things don’t always need to be incredibly simple for users. People have an intuitive sense that complexity has to go somewhere. When using a product or service is too simple, users can feel suspicious or like they’ve been robbed of control. They know that a lot more is going on behind the scenes; they just don’t know what it is. Sometimes we need to preserve a minimum level of complexity so that users feel like an actual participant. According to legend, cake mixes call for a fresh egg because early versions that used only powdered ingredients made baking feel a bit too lazy and low effort; adding an egg kept bakers involved.

An example of desirable minimum complexity is help with homework. For many parents, helping their children with their homework often feels like unnecessary complexity. It is usually subjects and facts they haven’t thought about in years, and they find themselves having to relearn them in order to help their kids. It would be far simpler if the teachers could cover everything in class to a degree that each child needed no additional practice. However, the complexity created by involving parents in the homework process helps make parents more aware of what their children are learning. In addition, they often get insight into areas of both struggle and interest, can identify ways to better connect with their children, and learn where they may want to teach them some broader life skills.

When we seek to make things simpler for other people, we should recognize that there may be a point of diminishing returns beyond which further simplification leads to a worse experience. Simplicity is not an end in itself—other things like speed, usability, and time-saving are. We shouldn’t simplify things from the user’s standpoint just for the sake of it.

If changes don’t make something better for users, we’re just creating unnecessary behind-the-scenes complexity. People want to feel in control, especially when it comes to something important. We want to learn a bit about what’s happening, and an overly simple process teaches us nothing.

A third lesson is that products and services are only as good as what happens when they break. Handling a problem with something that has lots of controls on the user side may be easier for the user. They’re used to being involved in it. If something has been fully automated up until the point where it breaks, users don’t know how to react. The change is jarring, and they may freeze or overreact. Seeing as fully automated things fade into the background, this may be their most salient and memorable interaction with a product or service. If handling a problem is difficult for the user—for example, if there’s a lack of rapid support or instructions available or it’s hard to ascertain what went wrong in the first place—they may come away with a negative overall impression, even if everything worked fine for years beforehand.

A big challenge in the development of self-driving cars is that a driver needs to be able to take over if the car encounters a problem. But if someone hasn’t had to operate the car manually for a while, they may panic or forget what to do. So it’s a good idea to limit how long the car drives itself for. The same is purportedly true for airplane pilots. If the plane does too much of the work, the pilot won’t cope well in an emergency.

A fourth lesson is the importance of thinking about how the level of control you give your customers or users influences your workload. For a graphic designer, asking a client to detail exactly how they want their logo to look makes their work simpler. But it might be hard work for the client, who might not know what they want or may make poor choices. A more experienced designer might ask a client for much less information and instead put the effort into understanding their overall brand and deducing their needs from subtle clues, then figuring out the details themselves. The more autonomy a manager gives their team, the lower their workload, and vice versa.

If we accept that complexity is a constant, we need to always be mindful of who is bearing the burden of that complexity.

 

The Spiral of Silence
https://canvasly.link/spiral-of-silence/ (September 21, 2020)

Our desire to fit in with others means we don’t always say what we think. We only express opinions that seem safe. Here’s how the spiral of silence works and how we can discover what people really think.

***

Be honest: How often do you feel as if you’re really able to express your true opinions without fearing judgment? How often do you bite your tongue because you know you hold an unpopular view? How often do you avoid voicing any opinion at all for fear of having misjudged the situation?

Even in societies with robust free speech protections, most people don’t often say what they think. Instead, they take pains to weigh up the situation and adjust what they express accordingly. This comes down to the “spiral of silence,” a human communication theory developed by German researcher Elisabeth Noelle-Neumann in the 1960s and ’70s. The theory explains how societies form collective opinions and how we make decisions surrounding loaded topics.

Let’s take a look at how the spiral of silence works and how understanding it can give us a more realistic picture of the world.

***

How the spiral of silence works

According to Noelle-Neumann’s theory, our willingness to express an opinion is a direct result of how popular or unpopular we perceive it to be. If we think an opinion is unpopular, we will avoid expressing it. If we think it is popular, we will make a point of showing we think the same as others.

Controversy is also a factor—we may be willing to express an unpopular uncontroversial opinion but not an unpopular controversial one. We perform a complex dance whenever we share views on anything morally loaded.

Our perception of how “safe” it is to voice a particular view comes from the clues we pick up, consciously or not, about what everyone else believes. We make an internal calculation based on signs like what the mainstream media reports, what we overhear coworkers discussing on coffee breaks, what our high school friends post on Facebook, or prior responses to things we’ve said.

We also weigh up the particular context, based on factors like how anonymous we feel or whether our statements might be recorded.

As social animals, we have good reason to be aware of whether voicing an opinion might be a bad idea. Cohesive groups tend to have similar views. Anyone who expresses an unpopular opinion risks social exclusion or even ostracism within a particular context or in general. This may be because there are concrete consequences, such as losing a job or even legal penalties. Or there may be less official social consequences, like people being less friendly or willing to associate with you. Those with unpopular views may suppress them to avoid social isolation.

Avoiding social isolation is an important instinct. From an evolutionary biology perspective, remaining part of a group is important for survival, hence the need to at least appear to share the same views as anyone else. The only time someone will feel safe to voice a divergent opinion is if they think the group will share it or be accepting of divergence, or if they view the consequences of rejection as low. But biology doesn’t just dictate how individuals behave—it ends up shaping communities. It’s almost impossible for us to step outside of that need for acceptance.

A feedback loop pushes minority opinions towards less and less visibility—hence why Noelle-Neumann used the word “spiral.” Each time someone voices a majority opinion, they reinforce the sense that it is safe to do so. Each time someone receives a negative response for voicing a minority opinion, it signals to anyone sharing their view to avoid expressing it.
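One way to make the spiral vivid is a toy simulation. The numbers and the speaking rule below are invented for illustration and are not part of Noelle-Neumann’s work: private opinions never change, but each person only speaks in proportion to how common their view sounded in the previous round.

import random

random.seed(1)

# Private opinions never change: 700 people hold the majority view,
# 300 the minority view. What changes is who feels safe enough to speak.
N_MAJORITY, N_MINORITY = 700, 300
perceived = {"majority": 0.5, "minority": 0.5}   # before anyone has spoken, no signal

for rnd in range(1, 7):
    # Each person speaks with probability equal to how prevalent their own
    # view seemed among the voices they heard in the previous round.
    maj_voices = sum(random.random() < perceived["majority"] for _ in range(N_MAJORITY))
    min_voices = sum(random.random() < perceived["minority"] for _ in range(N_MINORITY))
    total = maj_voices + min_voices
    perceived = {"majority": maj_voices / total, "minority": min_voices / total}
    print(f"round {rnd}: 30% hold the minority view privately, "
          f"but it is only {perceived['minority']:.0%} of what gets voiced")

Within a few rounds the minority view has nearly vanished from public conversation, even though almost a third of people still privately hold it.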

***

An example of the spiral of silence

A 2014 Pew Research survey of 1,801 American adults examined the prevalence of the spiral of silence on social media. Researchers asked people about their opinions on one public issue: Edward Snowden’s 2013 revelations of US government surveillance of citizens’ phones and emails. They selected this issue because, while controversial, prior surveys suggested a roughly even split in public opinion surrounding whether the leaks were justified and whether such surveillance was reasonable.

Asking respondents about their willingness to share their opinions in different contexts highlighted how the spiral of silence plays out. 86% of respondents were willing to discuss the issue in person, but only about half as many were willing to post about it on social media. Of the 14% who would not consider discussing the Snowden leaks in person, almost none (0.3%) were willing to turn to social media instead.

Both in person and online, respondents reported far greater willingness to share their views with people they knew agreed with them—three times as likely in the workplace and twice as likely in a Facebook discussion.

***

The implications of the spiral of silence

The end result of the spiral of silence is a point where no one publicly voices a minority opinion, regardless of how many people believe it. The first implication of this is that the picture we have of what most people believe is not always accurate. Many people nurse opinions they would never articulate to their friends, coworkers, families, or social media followings.

A second implication is that the possibility of discord makes us less likely to voice an opinion at all, assuming we are not trying to drum up conflict. In the aforementioned Pew survey, people were more comfortable discussing a controversial story in person than online. An opinion voiced online has a much larger potential audience than one voiced face to face, and it’s harder to know exactly who will see it. Both of these factors increase the risk of someone disagreeing.

If we want to gauge what people think about something, we need to remove the possibility of negative consequences. For example, imagine a manager who often sets overly tight deadlines, causing immense stress to their team. Everyone knows this is a problem and discusses it among themselves, recognizing that more realistic deadlines would be motivating, and unrealistic ones are just demoralizing. However, no one wants to say anything because they’ve heard the manager say that people who can’t handle pressure don’t belong in that job. If the manager asks for feedback about their leadership style, they’re not going to hear what they need to hear unless the feedback can be given anonymously.

A third implication is that what seems like a sudden change in mainstream opinions can in fact be the result of a shift in what is acceptable to voice, not in what people actually think. A prominent public figure getting away with saying something controversial may make others feel safe to do the same. A change in legislation may make people comfortable saying what they already thought.

For instance, if recreational marijuana use is legalized where someone lives, they might freely remark to a coworker that they consume it and consider it harmless. Even if that was true before the legislation change, saying so would have been too fraught, so they might have lied or avoided the topic. The result is that mainstream opinions can appear to change a great deal in a short time.

A fourth implication is that highly vocal holders of a minority opinion can end up having a disproportionate influence on public discourse. This is especially true if that minority is within a group that already has a lot of power.

While this was less the case during Noelle-Neumann’s time, the internet makes it possible for a vocal minority to make their opinions seem far more prevalent than they actually are—and therefore more acceptable. Indeed, the most extreme views on any spectrum can end up seeming most normal online because people with a moderate take have less of an incentive to make themselves heard.

In anonymous environments, the spiral of silence can end up reversing itself, making the most fringe views the loudest.

When Technology Takes Revenge
https://canvasly.link/revenge-effects/ (September 14, 2020)

While runaway cars and vengeful stitched-together humans may be the stuff of science fiction, technology really can take revenge on us. Seeing technology as part of a complex system can help us avoid costly unintended consequences. Here’s what you need to know about revenge effects.

***

By many metrics, technology keeps making our lives better. We live longer, healthier, richer lives with more options than ever before for things like education, travel, and entertainment. Yet there is often a sense that we have lost control of our technology in many ways, and thus we end up victims of its unanticipated impacts.

Edward Tenner argues in Why Things Bite Back: Technology and the Revenge of Unintended Consequences that we often have to deal with “revenge effects.” Tenner coined this term to describe the ways in which technologies can solve one problem while creating additional, worse problems, generating entirely new types of problems, or shifting the harm elsewhere. In short, they bite back.

Although Why Things Bite Back was written in the late 1990s and many of its specific examples and details are now dated, it remains an interesting lens for considering issues we face today. The revenge effects Tenner describes haunt us still. As the world becomes more complex and interconnected, it’s easy to see that the potential for unintended consequences will increase.

Thus, when we introduce a new piece of technology, it would be wise to consider whether we are interfering with a wider system. If that’s the case, we should consider what might happen further down the line. However, as Tenner makes clear, once the factors involved get complex enough, we cannot anticipate them with any accuracy.

Neither Luddite nor alarmist in nature, the notion of revenge effects can help us better understand the impact of intervening in complex systems. But we need to be careful. Although second-order thinking is invaluable, it cannot predict the future with total accuracy. Understanding revenge effects is primarily a reminder of the value of caution, not a warning about specific risks.

***

Types of revenge effects

There are four different types of revenge effects, described here as follows:

  1. Repeating effects: occur when more efficient processes end up forcing us to do the same things more often, meaning they don’t free up more of our time. Better household appliances have led to higher standards of cleanliness, meaning people end up spending the same amount of time—or more—on housework.
  2. Recomplicating effects: occur when processes become more and more complex as the technology behind them improves. Tenner gives the now-dated example of phone numbers becoming longer with the move away from rotary phones. A modern example might be lighting systems that need to be operated through an app, meaning a visitor cannot simply flip a switch.
  3. Regenerating effects: occur when attempts to solve a problem end up creating additional risks. Targeting pests with pesticides can make them increasingly resistant to harm or kill off their natural predators. Widespread use of antibiotics to control certain conditions has led to resistant strains of bacteria that are harder to treat.
  4. Rearranging effects: occur when costs are transferred elsewhere so risks shift and worsen. Air conditioning units on subways cool down the trains—while releasing extra heat and making the platforms warmer. Vacuum cleaners can throw dust mite pellets into the air, where they remain suspended and are more easily breathed in. Shielding beaches from waves transfers the water’s force elsewhere.

***

Recognizing unintended consequences

The more we try to control our tools, the more they can retaliate.

Revenge effects occur when the technology for solving a problem ends up making it worse due to unintended consequences that are almost impossible to predict in advance. A smartphone might make it easier to work from home, but always being accessible means many people end up working more.

Things go wrong because technology does not exist in isolation. It interacts with complex systems, meaning any problems spread far from where they begin. We can never merely do one thing.

Tenner writes: “Revenge effects happen because new structures, devices, and organisms react with real people in real situations in ways we could not foresee.” He goes on to add that “complexity makes it impossible for anyone to understand how the system might act: tight coupling spreads problems once they begin.”

Prior to the Industrial Revolution, technology typically consisted of tools that served as an extension of the user. They were not, Tenner argues, prone to revenge effects because they did not function as parts in an overall system like modern technology. He writes that “a machine can’t appear to have a will of its own unless it is a system, not just a device. It needs parts that interact in unexpected and sometimes unstable and unwanted ways.”

Revenge effects often involve the transformation of defined, localized risks into nebulous, gradual ones involving the slow accumulation of harm. Compared to visible disasters, these are much harder to diagnose and deal with.

Large localized accidents, like a plane crash, tend to prompt the creation of greater safety standards, making us safer in the long run. Small cumulative ones don’t.

Cumulative problems, compared to localized ones, aren’t easy to measure, and they don’t necessarily prompt much concern. Tenner points to the difference between reactions in the 1990s to the risk of nuclear disasters compared to global warming. While both are revenge effects, “the risk from thermonuclear weapons had an almost built-in maintenance compulsion. The deferred consequences of climate change did not.”

Many revenge effects are the result of efforts to improve safety. “Our control of the acute has indirectly promoted chronic problems”, Tenner writes. Both X-rays and smoke alarms cause a small number of cancers each year. Although they save many more lives and avoiding them is far riskier, we don’t get the benefits without a cost. The widespread removal of asbestos has reduced fire safety, and disrupting the material is often more harmful than leaving it in place.

***

Not all effects exact revenge

A revenge effect is not a side effect—defined as a cost that goes along with a benefit. Being able to sanitize a public water supply has significant positive health outcomes. It also has a side effect: it necessitates an organizational structure that can manage and monitor that supply.

Rather, a revenge effect must actually reverse the benefit for at least a small subset of users. For example, the greater ease of typing on a laptop compared to a typewriter has led to an increase in carpal tunnel syndrome and similar health consequences. It turns out that the physical effort required to press typewriter keys and move the carriage protected workers from some of the harmful effects of long periods of time spent typing.

Likewise, a revenge effect is not just a tradeoff—a benefit we forgo in exchange for some other benefit. As Tenner writes:

If legally required safety features raise airline fares, that is a tradeoff. But suppose, say, requiring separate seats (with child restraints) for infants, and charging a child’s fare for them, would lead many families to drive rather than fly. More children could in principle die from transportation accidents than if the airlines had continued to permit parents to hold babies on their laps. This outcome would be a revenge effect.

***

In support of caution

In the conclusion of Why Things Bite Back, Tenner writes:

We seem to worry more than our ancestors, surrounded though they were by exploding steamboat boilers, raging epidemics, crashing trains, panicked crowds, and flaming theaters. Perhaps this is because the safer life imposes an ever increasing burden of attention. Not just in the dilemmas of medicine but in the management of natural hazards, in the control of organisms, in the running of offices, and even in the playing of games there are, not necessarily more severe, but more subtle and intractable problems to deal with.

While Tenner does not proffer explicit guidance for dealing with the phenomenon he describes, one main lesson we can draw from his analysis is that revenge effects are to be expected, even if they cannot be predicted. This is because “the real benefits usually are not the ones that we expected, and the real perils are not those we feared.”

Chains of cause and effect within complex systems are stranger than we can often imagine. We should expect the unexpected, rather than expecting particular effects.

While we cannot anticipate all consequences, we can prepare for their existence and factor it into our estimation of the benefits of new technology. Indeed, we should avoid becoming overconfident about our ability to see the future, even when we use second-order thinking. As much as we might prepare for a variety of impacts, revenge effects may be dependent on knowledge we don’t yet possess. We should expect larger revenge effects the more we intensify something (e.g., making cars faster means worse crashes).

Before we intervene in a system, assuming it can only improve things, we should be aware that our actions can do the opposite or do nothing at all. Our estimations of benefits are likely to be more realistic if we are skeptical at first.

If we bring more caution to our attempts to change the world, we are better able to avoid being bitten.

 

A Primer on Algorithms and Bias
https://canvasly.link/algorithms-and-bias/ (September 7, 2020)

The growing influence of algorithms on our lives means we owe it to ourselves to better understand what they are and how they work. Understanding how the data we use to inform algorithms influences the results they give can help us avoid biases and make better decisions.

***

Algorithms are everywhere: driving our cars, designing our social media feeds, dictating which mixer we end up buying on Amazon, diagnosing diseases, and much more.

Two recent books explore algorithms and the data behind them. In Hello World: Being Human in the Age of Algorithms, mathematician Hannah Fry shows us the potential and the limitations of algorithms. And Invisible Women: Data Bias in a World Designed for Men by writer, broadcaster, and feminist activist Caroline Criado Perez demonstrates how we need to be much more conscientious of the quality of the data we feed into them.

Humans or algorithms?

First, what is an algorithm? Explanations of algorithms can be complex. Fry explains that at their core, they are defined as step-by-step procedures for solving a problem or achieving a particular end. We tend to use the term to refer to mathematical operations that crunch data to make decisions.
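As a concrete (and entirely invented) illustration of that definition, here is a toy Python procedure that crunches a few inputs into a decision. Every weight and threshold in it is an assumption made by whoever wrote it, which is precisely where the biases discussed below can creep in.

def approve_loan(income: float, debt: float, missed_payments: int) -> bool:
    """A fixed, step-by-step procedure: compute a score, then apply a threshold."""
    debt_ratio = debt / income if income > 0 else float("inf")
    score = 100 - 80 * debt_ratio - 15 * missed_payments   # weights chosen arbitrarily
    return score >= 50

print(approve_loan(income=40_000, debt=10_000, missed_payments=1))   # True
print(approve_loan(income=40_000, debt=30_000, missed_payments=2))   # False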

When it comes to decision-making, we don’t necessarily have to choose between doing it ourselves and relying wholly on algorithms. The best outcome may be a thoughtful combination of the two.

We all know that in certain contexts, humans are not the best decision-makers. For example, when we are tired, or when we already have a desired outcome in mind, we may ignore relevant information. In Thinking, Fast and Slow, Daniel Kahneman gave multiple examples from his research with Amos Tversky that demonstrated we are heavily influenced by cognitive biases such as availability and anchoring when making certain types of decisions. It’s natural, then, that we would want to employ algorithms that aren’t vulnerable to the same tendencies. In fact, their main appeal for use in decision-making is that they can override our irrationalities.

Algorithms, however, aren’t without their flaws. One of the obvious ones is that because algorithms are written by humans, we often code our biases right into them. Criado Perez offers many examples of algorithmic bias.

For example, an online platform designed to help companies find computer programmers looked through activity such as sharing and developing code in online communities, as well as visiting Japanese manga (comics) sites. People who visited certain sites frequently received higher scores, making them more visible to recruiters.

However, Criado Perez presents the analysis of this recruiting algorithm by Cathy O’Neil, scientist and author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, who points out that “women, who do 75% of the world’s unpaid care work, may not have the spare leisure time to spend hours chatting about manga online . . . and if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of women in the industry will probably avoid it.”

Criado Perez postulates that the authors of the recruiting algorithm didn’t intend to encode a bias that discriminates against women. But, she says, “if you aren’t aware of how those biases operate, if you aren’t collecting data and taking a little time to produce evidence-based processes, you will continue to blindly perpetuate old injustices.”

Fry also covers algorithmic bias and asserts that “wherever you look, in whatever sphere you examine, if you delve deep enough into any system at all, you’ll find some kind of bias.” We aren’t perfect—and we shouldn’t expect our algorithms to be perfect, either.

In order to have a conversation about the value of an algorithm versus a human in any decision-making context, we need to understand, as Fry explains, that “algorithms require a clear, unambiguous idea of exactly what we want them to achieve and a solid understanding of the human failings they are replacing.”

Garbage in, garbage out

No algorithm is going to be successful if the data it uses is junk. And there’s a lot of junk data in the world. Far from being a new problem, Criado Perez argues that “most of recorded human history is one big data gap.” And that has a serious negative impact on the value we are getting from our algorithms.

Criado Perez explains the situation this way: We live in “a world [that is] increasingly reliant on and in thrall to data. Big data. Which in turn is panned for Big Truths by Big Algorithms, using Big Computers. But when your data is corrupted by big silences, the truths you get are half-truths, at best.”

A common human bias is one regarding the universality of our own experience. We tend to assume that what is true for us is generally true across the population. We have a hard enough time considering how things may be different for our neighbors, let alone for other genders or races. It becomes a serious problem when we gather data about one subset of the population and mistakenly assume that it represents all of the population.

For example, Criado Perez examines the data gap in relation to incorrect information being used to inform decisions about safety and women’s bodies. From personal protective equipment like bulletproof vests that don’t fit properly and thus increase the chances of the women wearing them getting killed to levels of exposure to toxins that are unsafe for women’s bodies, she makes the case that without representative data, we can’t get good outputs from our algorithms. She writes that “we continue to rely on data from studies done on men as if they apply to women. Specifically, Caucasian men aged twenty-five to thirty, who weigh 70 kg. This is ‘Reference Man’ and his superpower is being able to represent humanity as a whole. Of course, he does not.” Her book covers a wide variety of disciplines and situations in which the gender gap in data leads to increased negative outcomes for women.

The limits of what we can do

Although there is a lot we can do better when it comes to designing algorithms and collecting the data sets that feed them, it’s also important to consider their limits.

We need to accept that algorithms can’t solve all problems, and there are limits to their functionality. In Hello World, Fry devotes a chapter to the use of algorithms in justice. Specifically, algorithms designed to provide information to judges about the likelihood of a defendant committing further crimes. Our first impulse is to say, “Let’s not rely on bias here. Let’s not have someone’s skin color or gender be a key factor for the algorithm.” After all, we can employ that kind of bias just fine ourselves. But simply writing bias out of an algorithm is not as easy as wishing it so. Fry explains that “unless the fraction of people who commit crimes is the same in every group of defendants, it is mathematically impossible to create a test which is equally accurate at predicting across the board and makes false positive and false negative mistakes at the same rate for every group of defendants.”
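A small numeric sketch, with made-up numbers rather than anything from Fry’s book, shows one face of that impossibility: give two groups of defendants the very same test, with identical false positive and false negative rates, and if their underlying reoffending rates differ, a “high risk” label ends up meaning something different in each group.

def predictive_value(base_rate: float, fpr: float, fnr: float) -> float:
    """Share of people flagged 'high risk' who actually go on to reoffend."""
    true_positives = base_rate * (1 - fnr)       # flagged and do reoffend
    false_positives = (1 - base_rate) * fpr      # flagged but do not reoffend
    return true_positives / (true_positives + false_positives)

FPR, FNR = 0.20, 0.20   # identical error rates applied to everyone

for group, base_rate in [("Group A", 0.40), ("Group B", 0.10)]:
    ppv = predictive_value(base_rate, FPR, FNR)
    print(f"{group}: {ppv:.0%} of 'high risk' flags are correct")

Here about 73 percent of flags are correct in the first group but only around 31 percent in the second; equalizing that accuracy instead would force the false positive and false negative rates to differ between the groups, which is the bind Fry describes.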

Fry comes back to such limits frequently throughout her book, exploring them in various disciplines. She demonstrates to the reader that “there are boundaries to the reach of algorithms. Limits to what can be quantified.” Perhaps a better understanding of those limits is needed to inform our discussions of where we want to use algorithms.

There are, however, other limits that we can do something about. Both authors make the case for more education about algorithms and their input data. Lack of understanding shouldn’t hold us back. Algorithms that have a significant impact on our lives specifically need to be open to scrutiny and analysis. If an algorithm is going to put you in jail or impact your ability to get a mortgage, then you ought to be able to have access to it.

Most algorithm writers and the companies they work for wave the “proprietary” flag and refuse to open themselves up to public scrutiny. Many algorithms are a black box—we don’t actually know how they reach the conclusions they do. But Fry says that shouldn’t deter us. Pursuing laws (such as the data access and protection rights being instituted in the European Union) and structures (such as an algorithm-evaluating body playing a role similar to the one the U.S. Food and Drug Administration plays in evaluating whether pharmaceuticals can be made available to the U.S. market) will help us decide as a society what we want and need our algorithms to do.

Where do we go from here?

Algorithms aren’t going away, so it’s best to acquire the knowledge needed to figure out how they can help us create the world we want.

Fry suggests that one way to approach algorithms is to “imagine that we designed them to support humans in their decisions, rather than instruct them.” She envisions a world where “the algorithm and the human work together in partnership, exploiting each other’s strengths and embracing each other’s flaws.”

Part of getting to a world where algorithms provide great benefit is to remember how diverse our world really is and make sure we get data that reflects the realities of that diversity. We can either actively change the algorithm, or we change the data set. And if we do the latter, we need to make sure we aren’t feeding our algorithms data that, for example, excludes half the population. As Criado Perez writes, “when we exclude half of humanity from the production of knowledge, we lose out on potentially transformative insights.”

Given how complex the world of algorithms is, we need all the amazing insights we can get. Algorithms themselves perhaps offer the best hope, because they have the inherent flexibility to improve as we do.

Fry gives this explanation: “There’s nothing inherent in [these] algorithms that means they have to repeat the biases of the past. It all comes down to the data you give them. We can choose to be ‘crass empiricists’ (as Richard Berk put it) and follow the numbers that are already there, or we can decide that the status quo is unfair and tweak the numbers accordingly.”

We can get excited about the possibilities that algorithms offer us and use them to create a world that is better for everyone.

The Ingredients For Innovation
https://canvasly.link/ingredients-for-innovation/ (August 3, 2020)

Inventing new things is hard. Getting people to accept and use new inventions is often even harder. For most people, at most times, technological stagnation has been the norm. What does it take to escape from that and encourage creativity?

***

“Technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

— Joel Mokyr, The Lever of Riches

Writing in The Lever of Riches: Technological Creativity and Economic Progress, economic historian Joel Mokyr asks why, when we look at the past, some societies have been considerably more creative than others at particular times. Some have experienced sudden bursts of progress, while others have stagnated for long periods of time. By examining the history of technology and identifying the commonalities between the most creative societies and time periods, Mokyr offers useful lessons we can apply as both individuals and organizations.

What does it take for a society to be technologically creative?

When trying to explain something as broad and complex as technological creativity, it’s important not to fall prey to the lure of a single explanation. There are many possible reasons for anything that happens, and it’s unwise to believe explanations that are too tidy. Mokyr disregards some of the common simplistic explanations for technological creativity, such as that war prompts creativity or people with shorter life spans are less likely to expend time on invention.

Mokyr explores some of the possible factors that contribute to a society’s technological creativity. In particular, he seeks to explain why Europe experienced such a burst of technological creativity from around 1500 to the Industrial Revolution, when prior to that it had lagged far behind the rest of the world. Mokyr explains that “invention occurs at the level of the individual, and we should address the factors that determine individual creativity. Individuals, however, do not live in a vacuum. What makes them implement, improve and adapt new technologies, or just devise small improvements in the way they carry out their daily work depends on the institutions and the attitudes around them.” While environment isn’t everything, certain conditions are necessary for technological creativity.

He identifies the following three key factors in an environment that affect the occurrence of invention and innovation.

The social infrastructure

First of all, the society needs a supply of “ingenious and resourceful innovators who are willing and able to challenge their physical environment for their own improvement.” Fostering these attributes requires factors like good nutrition, religious beliefs that are not overly conservative, and access to education. It is in part about the absence of negative factors—necessitous people have less capacity for creativity. Mokyr writes: “The supply of talent is surely not completely exogenous; it responds to incentives and attitudes. The question that must be confronted is why in some societies talent is unleashed upon technical problems that eventually change the entire productive economy, whereas in others this kind of talent is either repressed or directed elsewhere.”

One partial explanation for Europe’s creativity from 1500 to the Industrial Revolution is that it was often feasible for people to relocate to a different country if the conditions in their current one were suboptimal. A creative individual finding themselves under a conservative government seeking to maintain the technological status quo was able to move elsewhere.

The ability to move around was also part of the success of the Abbasid Caliphate, an empire that stretched from India to the Iberian Peninsula from about 750 to 1250. Economists Maristella Botticini and Zvi Eckstein write in The Chosen Few: How Education Shaped Jewish History, 70–1492 that “it was relatively easy to move or migrate” within the Abbasid empire, especially with its “common language (Arabic) and a uniform set of institutions and laws over an immense area, greatly [favoring] trade and commerce.”

It also matters whether creative people are channeled into technological fields or into other fields, like the military. In Britain during and prior to the Industrial Revolution, Mokyr considers invention to have been the main possible path for creative individuals, as other areas like politics leaned towards conformism.

The social incentives

Second, there need to be incentives in place to encourage innovation. This is of extra importance for macroinventions – completely new inventions, not improvements on existing technology – which can require a great leap of faith. The person who comes up with a faster horse knows it has a market; the one who comes up with a car does not. Such incentives are most often financial, but not always. Awards, positions of power, and recognition also count. Mokyr explains that diverse incentives encourage the patience needed for creativity: “Sustained innovation requires a set of individuals willing to absorb large risks, sometimes to wait many years for the payoff (if any.)”

Patent systems have long served as an incentive, allowing inventors to feel confident they will profit from their work. Patents first appeared in northern Italy in the early fifteenth century; Venice implemented a formal system in 1474. According to Mokyr, the monopoly rights mining contractors received over the discovery of hitherto unknown mineral resources provided inspiration for the patent system.

However, Mokyr points out that patents were not always as effective as inventors hoped. Indeed, they may have provided the incentive without any actual protection. Many inventors ended up spending unproductive time and money on patent litigation, which in some cases outweighed their profits, discouraged them from future endeavors, or left them too drained to invent more. Eli Whitney, inventor of the cotton gin, claimed his legal costs outweighed his profits. Mokyr proposes that though patent laws may be imperfect, they are, on balance, good for society as they incentivize invention while not altogether preventing good ideas from circulating and being improved upon by others.

The ability to make money from inventions is also related to geographic factors. In a country with good communication and transport systems, with markets in different areas linked, it is possible for something new to sell further afield. A bigger prospective market means stronger financial incentives. The extensive, accessible, and well-maintained trade routes during the Abbasid empire allowed for innovations to diffuse throughout the region. And during the Industrial Revolution in Britain, railroads helped bring developments to the entire country, ensuring inventors didn’t just need to rely on their local market.

The social attitude

Third, a technologically creative society must be diverse and tolerant. People must be open to new ideas and outré individuals. They must not only be willing to consider fresh ideas from within their own society but also happy to take inspiration from (or to outright steal) those coming from elsewhere. If a society views knowledge coming from other countries as suspect or even dangerous, unable to see its possible value, it is at a disadvantage. If it eagerly absorbs external influences and adapts them for its own purposes, it is at an advantage. Europeans were willing to pick up on ideas from each other and elsewhere in the world. As Mokyr puts it, “Inventions such as the spinning wheel, the windmill, and the weight-driven clock recognized no boundaries.”

In the Abbasid empire, there was an explosion of innovation that drew on the knowledge gained from other regions. Botticini and Eckstein write:

“The Abbasid period was marked by spectacular developments in science, technology, and the liberal arts. . . . The Muslim world adopted papermaking from China, improving Chinese technology with the invention of paper mills many centuries before paper was known in the West. Muslim engineers made innovative industrial uses of hydropower, tidal power, wind power, steam power, and fossil fuels. . . . Muslim engineers invented crankshafts and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source of waterpower. Such advances made it possible to mechanize many industrial tasks that had previously been performed by manual labor.”

Within societies, certain people and groups seek to maintain the status quo because it is in their interests to do so. Mokyr writes that “Some of these forces protect vested interests that might incur losses if innovations were introduced, others are simply don’t-rock-the-boat kind of forces.” In order for creative technology to triumph, it must be able to overcome those forces. While there is always going to be conflict, the most creative societies are those where it is still possible for the new thing to take over. If those who seek to maintain the status quo have too much power, a society will end up stagnating in terms of technology. Ways of doing things can prevail not because they are the best, but because there is enough interest in keeping them that way.

In some historical cases in Europe, it was easier for new technologies to spread in the countryside, where the lack of guilds compensated for the lower density of people. City guilds had a huge incentive to maintain the status quo. The inventor of the ribbon loom in Danzig in 1579 was allegedly drowned by the city council, while “in the fifteenth century, the scribes guild of Paris succeeded in delaying the introduction of printing in Paris by 20 years.”

Indeed, tolerance could be said to matter more for technological creativity than education. As Mokyr repeatedly highlights, many inventors and innovators throughout history were not educated to a high level—or even at all. Up until relatively recently, most technology preceded the science explaining how it actually worked. People tinkered, looking to solve problems and experiment.

Unlike modern times, Mokyr explains, for most of history technology did not emerge from “specialized research laboratories paid for by research and development budgets and following strategies mapped out by corporate planners well-informed by marketing analysts. Technological change occurred mostly through new ideas and suggestions occurring if not randomly, then in a highly unpredictable fashion.”

When something worked, it worked, even if no one knew why or the popular explanation later proved incorrect. Steam engines are one such example. The notion that all technologies function under the same set of physical laws was not standard until Galileo. People need space to be a bit weird.

Scientists and academics during some of Europe’s most creative periods worked in a different manner from the one we expect today, often tackling the practical problems they themselves faced. Mokyr gives Galileo as an example, as he “built his own telescopes and supplemented his salary as a professor at the University of Padua by making and repairing instruments.” The distinction between one who thinks and one who makes was not yet clear at the time of the Renaissance. Wherever and whenever making has been a respectable activity for thinkers, creativity has flourished.

Seeing as technological creativity requires a particular set of circumstances, it is not the norm. Throughout history, Mokyr writes, “Technological progress was neither continuous nor persistent. Genuinely creative societies were rare, and their bursts of creativity usually short-lived.”

Not only did people need to be open to new ideas, they also needed to be willing to actually start using new technologies. This often required a big leap of faith. If you’re a farmer just scraping by, trying a new way of ploughing your fields could mean starving to death if it doesn’t work out. Innovations can take a long time to diffuse, with riskier ones taking the longest.

How can we foster the right environment?

So what can we learn from The Lever of Riches that we can apply as individuals and in organizations?

The first lesson is that creativity does not occur in a vacuum. It requires certain conditions to be in place. If we want to come up with new ideas as individuals, we should consider ourselves part of a system. In particular, we need to consider what might impede us and what can encourage us. We need to eradicate anything that will get in the way of our thinking, such as limiting beliefs or lack of sleep.

We need to be clear on what motivates us to be creative, ensuring what we endeavor to do will be worthwhile enough to drive us through the associated effort. When we find ourselves creatively blocked, it’s often because we’re not in touch with what inspires us to create in the first place.

Within an organization, such factors are equally important. If you want your employees to be creative, it’s important to consider the system they’re part of. Is there anything blocking their thinking? Is a good incentive structure in place (bearing in mind incentives are not solely financial)?

Another lesson is that tolerance for divergence is essential for encouraging creativity. This may seem like part of the first lesson, but it’s crucial enough to consider in isolation.

As individuals, when we seek to come up with new ideas, we need to ask ourselves the following questions: Am I exposing myself to new material and inspirations or staying within a filter bubble? Am I open to unusual ways of thinking? Am I spending too much time around people who discourage deviation from the status quo? Am I being tolerant of myself, allowing myself to make mistakes and have bad ideas in service of eventually having good ones? Am I spending time with unorthodox people who encourage me to think differently?

Within organizations, it’s worth asking the following questions: Are new ideas welcomed or shot down? Is it in the interests of many to protect the status quo? Are ideas respected regardless of their source? Are people encouraged to question norms?

A final lesson is that the forces of inertia are always acting to discourage creativity. Invention is not the natural state of things—it is an exception. Technological stagnation is the norm. In most places, at most times, people have not come up with new technology. It takes a lot for individuals to be willing to wrestle something new from nothing or to question if something in existence can be made better. But when those acts do occur, they can have an immeasurable impact on our world.

The post The Ingredients For Innovation appeared first on Farnam Street.

Gates’ Law: How Progress Compounds and Why It Matters https://canvasly.link/gates-law/ Mon, 13 May 2019 10:50:49 +0000 https://canvasly.link/?p=37741 “Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.” It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he …

The post Gates’ Law: How Progress Compounds and Why It Matters appeared first on Farnam Street.

“Most people overestimate what they can achieve in a year and underestimate what they can achieve in ten years.”

It’s unclear exactly who first made that statement, when they said it, or how it was phrased. The most probable source is Roy Amara, a Stanford computer scientist. In the 1960s, Amara told colleagues that he believed that “we overestimate the impact of technology in the short-term and underestimate the effect in the long run.” For this reason, variations on that phrase are often known as Amara’s Law. However, Bill Gates made a similar statement (possibly paraphrasing Amara), so it’s also known as Gates’s Law.

You may have seen the same phrase attributed to Arthur C. Clarke, Tony Robbins, or Peter Drucker. There’s a good reason why Amara’s words have been appropriated by so many thinkers—they apply to so much more than technology. Almost universally, we tend to overestimate what can happen in the short term and underestimate what can happen in the long term.

Thinking about the future does not require endless hyperbole or even forecasting, which is usually pointless anyway. Instead, there are patterns we can identify if we take a long-term perspective.

Let’s look at what Bill Gates meant and why it matters.

Moore’s Law

Gates’s Law is often mentioned in conjunction with Moore’s Law. This is generally quoted as some variant of “the number of transistors on a square inch of silicon doubles every eighteen months.” However, calling it Moore’s Law is misleading—at least if you think of laws as invariant. It’s more of an observation of a historical trend.

When Gordon Moore, co-founder of Fairchild Semiconductor and Intel, noticed in 1965 that the number of transistors on a chip doubled every year, he was not predicting that would continue in perpetuity. Indeed, Moore revised the doubling time to two years a decade later. But the world latched onto his words. Moore’s Law has been variously treated as a target, a limit, a self-fulfilling prophecy, and a physical law as certain as the laws of thermodynamics.

Moore’s Law is now considered to be outdated, after holding true for several decades. That doesn’t mean the concept has gone anywhere. Moore’s Law is often regarded as a general principle in technological development. Certain performance metrics have a defined doubling time, the opposite of a half-life.

Why is Moore’s Law related to Amara’s Law?

Exponential growth is a concept we struggle to grasp. As University of Colorado physics professor Albert Allen Bartlett famously put it, “The greatest shortcoming of the human race is our inability to understand the exponential function.”

When we talk about Moore’s Law, we easily underestimate what happens when a value keeps doubling. Sure, it’s not that hard to imagine your laptop getting twice as fast in a year, for instance. Where it gets tricky is when we try to imagine what that means on a longer timescale. What does that mean for your laptop in 10 years? There is a reason your iPhone has more processing power than the computers that flew the first space shuttle.
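
To make the compounding concrete, here is a minimal Python sketch. The doubling times are illustrative assumptions (one year for the laptop thought experiment, eighteen months for the figure commonly quoted for Moore’s Law), not claims about any real hardware.

    # A minimal sketch of how a fixed doubling time compounds (illustrative numbers only).
    def growth_factor(years, doubling_time_years):
        """How many times larger a metric becomes after `years` of steady doubling."""
        return 2 ** (years / doubling_time_years)

    # "Twice as fast in a year" is easy to picture...
    print(growth_factor(1, doubling_time_years=1))                # 2.0
    # ...but ten years of the same doubling is roughly a thousandfold.
    print(growth_factor(10, doubling_time_years=1))               # 1024.0
    # Even at the slower 18-month doubling often quoted for Moore's Law:
    print(round(growth_factor(10, doubling_time_years=1.5), 1))   # 101.6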

One of the best illustrations of exponential growth is the legend about a peasant and the emperor of China. In the story, the peasant (sometimes said to be the inventor of chess) visits the emperor with a seemingly modest request: a chessboard with one grain of rice on the first square, then two on the second, four on the third, and so on, doubling each time. The emperor agrees to the idiosyncratic request and orders his men to start counting out rice grains.

“Every fact of science was once damned. Every invention was considered impossible. Every discovery was a nervous shock to some orthodoxy. Every artistic innovation was denounced as fraud and folly. We would own no more, know no more, and be no more than the first apelike hominids if it were not for the rebellious, the recalcitrant, and the intransigent.”

— Robert Anton Wilson

If you haven’t heard this story before, it might seem like the peasant would end up with, at best, enough rice to feed their family that evening. In reality, the request was impossible to fulfill. Doubling a single grain 63 times, once for each square after the first, puts over nine million trillion grains on the final square alone; across all 64 squares, the emperor owed more than 18 million trillion grains of rice. To grow just half of that amount, he would have needed to drain the oceans and convert every bit of land on this planet into rice fields. And that’s for half.
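
The chessboard arithmetic is easy to check directly. Here is a quick sketch; the 25-milligram grain weight is an assumed figure, included only to give the total some physical scale.

    # Square 1 holds 1 grain, and each of the 63 remaining squares doubles the previous one:
    # 1 + 2 + 4 + ... + 2**63 = 2**64 - 1
    total_grains = sum(2 ** square for square in range(64))
    assert total_grains == 2 ** 64 - 1
    print(f"{total_grains:,}")   # 18,446,744,073,709,551,615 -- over 18 million trillion
    print(f"{2 ** 63:,}")        # 9,223,372,036,854,775,808 on the final square alone

    # Rough mass, assuming ~25 mg per grain (an illustrative figure):
    tonnes = total_grains * 25e-6 / 1000   # 25e-6 kg per grain, then kg to tonnes
    print(f"{tonnes:.2e} tonnes")          # ~4.61e+11 tonnes of rice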

In his essay “The Law of Accelerating Returns,” author and inventor Ray Kurzweil uses this story to show how we misunderstand the meaning of exponential growth in technology. For the first few squares, the growth was inconsequential, especially in the eyes of an emperor. It was only once they reached the halfway point that the rate began to snowball dramatically. (It’s no coincidence that Warren Buffett’s authorized biography is called The Snowball, and few people understand exponential growth better than Warren Buffett). It just so happens that by Kurzweil’s estimation, we’re at that inflection point in computing. Since the creation of the first computers, computation power has doubled roughly 32 times. We may underestimate the long-term impact because the idea of this continued doubling is so tricky to imagine.

The Technology Hype Cycle

To understand how this plays out, let’s take a look at the cycle innovations go through after their invention. Known as the Gartner hype cycle, it primarily concerns our perception of technology—not its actual value in our lives.

Hype cycles are obvious in hindsight, but fiendishly difficult to spot while they are happening. It’s important to bear in mind that this model is one way of looking at reality and is not a prediction or a template. Sometimes a step gets missed, sometimes there is a substantial gap between steps, sometimes a step is deceptive.

The hype cycle happens like this:

  • New technology: The media picks up on the existence of a new technology which may not exist in a usable form yet. Nonetheless, the publicity leads to significant interest. At this point, people working on research and development are probably not making any money from it. Lots of mistakes are made. In Everett Rogers’s diffusion of innovations theory, this is known as the innovation stage. If it seems like something new will have a dramatic payoff, it probably won’t last. If it seems we have found the perfect use for a brand-new technology, we may be wrong.
  • The peak of inflated expectations: A few well-publicized success stories lead to inflated expectations. Hype builds and new companies pop up to anticipate the demand. There may be a burst of funding for research and development. Scammers looking to make a quick buck may move into the area. Rogers calls this the syndication stage. It’s here that we overestimate the future applications and impact of the technology.
  • The trough of disillusionment: Prominent failures or a lack of progress break through the hype and lead to disillusionment. People become pessimistic about the technology’s potential and mostly lose interest. Reports of scams may add to the gloom, as the media seizes on them to portray the technology as a fraud. If it seems like a new technology is dying, it may just be that its public perception has changed while the technology itself is still developing. Hype does not correlate directly with functionality.
  • The slope of enlightenment: As time passes, people continue to improve technology and find better uses for it. Eventually, it’s clear how it can improve our lives, and mainstream adoption begins. Mechanisms for preventing scams or lawbreaking emerge.
  • The plateau of productivity: The technology becomes mainstream. Development slows. It becomes part of our lives and ceases to seem novel. Those who move into the now saturated market tend to struggle, as a few dominant players take the lion’s share of the available profits. Rogers calls this the diffusion stage.

When we are cresting the peak of inflated expectations, we imagine that the new development will transform our lives within months. In the depths of the trough of disillusionment, we don’t expect it to get anywhere, even allowing years for it to improve. We typically fail to anticipate the significance of the plateau of productivity, even if it exceeds our initial expectations.

Smart people can usually see through the initial hype. But only a handful of people can—through foresight, stubbornness or perhaps pure luck—see through the trough of disillusionment. Most of the initial skeptics feel vindicated by the dramatic drop in interest and expect the innovation to disappear. It takes far greater expertise to support an unpopular technology than to deride a popular one.

Correctly spotting the cycle as it unfolds can be immensely profitable. Misreading it can be devastating. First movers in a new area often struggle to survive the trough, even if they are the ones who do the essential research and development. We tend to assume current trends will continue, so we expect sustained growth during the peak and expect linear decline during the trough.

If we are trying to assess the future impact of a new technology, we need to separate its true value from its public perception. When something is new, the mainstream hype is likely to be more noise than signal. After all, the peak of inflated expectations often happens before the technology is available in a usable form. It’s almost always before the public has access to it. Hype serves a real purpose in the early days: it draws interest, secures funding, attracts people with the right talents to move things forward and generates new ideas. Not all hype is equally important, because not all opinions are equally important. If there’s intense interest within a niche group with relevant expertise, that’s more telling than a general enthusiasm.

The hype cycle doesn’t just happen with technology. It plays out all over the place, and we’re usually fooled by it. Discrepancies between our short- and long-term estimates of achievement are everywhere. Consider the following situations. They’re hypothetical, but similar situations are common.

  • A musician releases an acclaimed debut album which creates enormous interest in their work. When their second album proves disappointing (or never materializes), most people lose interest. Over time, the performer develops a loyal, sustained following of people who accurately assess the merits of their music, not the hype.
  • A promising new pharmaceutical receives considerable attention—until it becomes apparent that there are unexpected side effects, or it isn’t as powerful as expected. With time, clinical trials find alternate uses which may prove even more beneficial. For example, a side effect could be helpful for another use. It’s estimated that over 20% of pharmaceuticals are prescribed for a different purpose than they were initially approved for, with that figure rising as high as 60% in some areas.
  • A propitious start-up receives an inflated valuation after a run of positive media attention. Its founders are lauded and extensively profiled and investors race to get involved. Then there’s an obvious failure—perhaps due to the overconfidence caused by hype—or early products fall flat or take too long to create. Interest wanes. The media gleefully dissects the company’s apparent demise. But the product continues to improve and ultimately becomes a part of our everyday lives.

In the short run, the world is a voting machine affected by whims and marketing. In the long run, it’s a weighing machine where quality and product matter.

The Adjacent Possible

Now that we know how Amara’s Law plays out in real life, the next question is: why does this happen? Why does technology grow in complexity at an exponential rate? And why don’t we see it coming?

One explanation is what Stuart Kauffman describes as “the adjacent possible.” Each new innovation adds to the number of possible future innovations. It opens up adjacent possibilities that didn’t exist before, because better tools can be used to make even better tools.

Humanity is about expanding the realm of the possible. Discovering fire meant our ancestors could use the heat to soften or harden materials and make better tools. Inventing the wheel meant the ability to move resources around, which meant new possibilities such as the construction of more advanced buildings using materials from other areas. Domesticating animals meant a way to pull wheeled vehicles with less effort, meaning heavier loads, greater distances and more advanced construction. The invention of writing led to new ways of recording, sharing and developing knowledge which could then foster further innovation. The internet continues to give us countless new opportunities for innovation. Anyone with a new idea can access endless free information, find supporters, discuss their ideas and obtain resources. New doors to the adjacent possible open every day as we find different uses for technology.

“We like to think of our ideas as $40,000 incubators shipped directly from the factory, but in reality, they’ve been cobbled together with spare parts that happened to be sitting in the garage.”

— Steven Johnson, Where Good Ideas Come From

Take the case of GPS, an invention that was itself built out of the debris of its predecessors. In recent years, GPS has opened up new possibilities that didn’t exist before. The system was developed by the US government for military usage. In the 1980s, they decided to start allowing other organizations and individuals to use it. Civilian access to GPS gave us new options. Since then, it has led to numerous innovations that incorporate the system into old ideas: self-driving cars, mobile phone tracking (very useful for solving crime or finding people in emergency situations), tectonic plate trackers that help predict earthquakes, personal navigation systems, self-navigating robots, and many others. None of these would have been possible without some sort of global positioning system. With the invention of GPS, human innovation sped up a little more.

Steven Johnson gives one example of how this happens in Where Good Ideas Come From. In 2008, MIT professor Timothy Prestero visited a hospital in Indonesia and found that all eight of the incubators for newborn babies were broken. The incubators had been donated to the hospital by relief organizations, but the staff didn’t know how to fix them. Plus, the incubators were poorly suited to the humid climate, and the repair instructions only came in English. Prestero realized that donating medical equipment was pointless if local people couldn’t fix it. He and his team began designing an incubator that would keep saving babies’ lives for a lot longer than a couple of months.

Instead of continuing to tweak existing designs, Prestero and his team devised a completely new incubator that used car parts. While the local people didn’t know how to fix an incubator, they were extremely adept at keeping their cars working no matter what. Named the NeoNurture, it used headlights for warmth, dashboard fans for ventilation, and a motorcycle battery for power. Hospital staff just needed to find someone who was good with cars to fix it—the principles were the same.

Even more telling is the origin of the incubators Prestero and his team reconceptualized. The first incubator for newborn babies was designed by Stephane Tarnier in the late 19th century. While visiting a zoo on his day off, Tarnier noted that newborn chicks were kept in heated boxes. It’s not a big leap to imagine that the issue of infant mortality was permanently on his mind. Tarnier was an obstetrician, working at a time when the infant mortality rate for premature babies was about 66%. He must have been eager to try anything that could reduce that figure and its emotional toll. Tarnier’s rudimentary incubator immediately halved that mortality rate. The technology was right there, in the zoo. It just took someone to connect the dots and realize human babies aren’t that different from chicken babies.

Johnson explains the significance of this: “Good ideas are like the NeoNurture device. They are, inevitably, constrained by the parts and skills that surround them…ideas are works of bricolage; they’re built out of that detritus.” Tarnier could invent the incubator only because someone else had already invented a similar device. Prestero and his team could only invent the NeoNurture because Tarnier had come up with the incubator in the first place.

This happens in our lives, as well. If you learn a new skill, the number of skills you could potentially learn increases because some elements may be transferable. If you are introduced to a new person, the number of people you could meet grows, because they may introduce you to others. If you start learning a language, native speakers may be more willing to have conversations with you in it, meaning you can get a broader understanding. If you read a new book, you may find it easier to read other books by linking together the information in them. The list is endless. We can’t imagine what we’re capable of achieving in ten years because we forget about the adjacent possibilities that will emerge.

Accelerating Change

The adjacent possible has been expanding ever since the first person picked up a stone and started shaping it into a tool. Just look at what written and oral forms of communication made possible—no longer did each generation have to learn everything from scratch. Suddenly we could build upon what had come before us.

Some (annoying) people claim that there’s nothing new left. There are no new ideas to be had, no new creations to invent, no new options to explore. In fact, the opposite is true. Innovation is a non-zero-sum game. A crowded market actually means more opportunities to create something new than a barren one. Technology is a feedback loop. The creation of something new begets the creation of something even newer and so on.

Progress is exponential, not linear. So we overestimate the impact of a new technology during the early days when it is just finding its feet, then underestimate its impact in a decade or so when its full uses are emerging. As old limits and constraints melt away, our options explode. The exponential growth of technology is known as accelerating change. It’s a common belief among experts that the rate of change is speeding up and society will change dramatically alongside it.

“Ideas borrow, blend, subvert, develop and bounce off other ideas.”

— John Hegarty, Hegarty On Creativity

In 1999, author and inventor Ray Kurzweil posited the Law of Accelerating Returns — the idea that evolutionary systems develop at an exponential rate. While this is most obvious for technology, Kurzweil hypothesized that the principle is relevant in numerous other areas. Moore’s Law, initially referring only to semiconductors, has wider implications.

Kurzweil writes:

An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth.

Progress is tricky to predict or even to notice as it happens. It’s hard to notice things in a system that we are part of. And it’s hard to notice the incremental change because it lacks stark contrast. The current pace of change is our norm, and we adjust to it. In hindsight, we can see how Amara’s Law plays out.
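
A toy comparison makes the “intuitive linear view” concrete. The numbers below are arbitrary assumptions (a capability index at 1.0 today that has been doubling every decade), not Kurzweil’s actual model.

    # Contrast continued doubling with a linear projection of the most recent gain.
    # Assumption (illustrative only): the index rose from 0.5 to 1.0 over the past
    # decade, so the "linear view" projects +0.5 per decade from here on.
    recent_gain_per_decade = 0.5

    for decades_ahead in (1, 3, 10):
        exponential = 2 ** decades_ahead
        linear = 1.0 + recent_gain_per_decade * decades_ahead
        print(f"{decades_ahead:>2}: {exponential:7.1f}x vs {linear:4.1f}x")

    #  1:     2.0x vs  1.5x  -- the two views barely disagree
    # 10:  1024.0x vs  6.0x  -- the linear view is off by more than a hundredfold

The widening gap between those two columns is the territory where, in Amara’s terms, the long run gets underestimated.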

Look at where the internet was just twenty years ago. A report from the Pew Research Center shows us how change compounds. In 1998, a mere 41% of Americans used the internet at all—and the report expresses surprise that the users were beginning to include “people without college training, those with modest incomes, and women.” Less than a third of users had bought something online, email was predominantly just for work, and only a third of users looked at online news at least once per week. That’s a third of the 41% using the internet by the way, not of the general population. Wikipedia and Gmail didn’t exist. Internet users in the late nineties reported that their main problem was finding what they needed online.

That is perhaps the biggest change and one we may not have anticipated: the move towards personalization. Finding what we need is no longer a problem. Most of us have the opposite problem and struggle with information overload. Twenty years ago, filter bubbles were barely a problem (at least, not online). Now, almost everything we encounter online is personalized to ensure it’s ridiculously easy to find what we want. Newsletters, websites, and apps greet us by name. Newsfeeds are organized by our interests. Shopping sites recommend other products we might like. This has increased the amount the internet does for us to a level that would have been hard to imagine in the late 90s. Kevin Kelly, writing in The Inevitable, describes filtering as one of the key forces that will shape the future.

History reveals an extraordinary acceleration of technological progress. Establishing the precise history of technology is problematic as some inventions occurred in several places at varying times, archaeological records are inevitably incomplete, and dating methods are imperfect. However, accelerating change is a clear pattern. To truly understand the principle of accelerating change, we need to take a quick look at a simple overview of the history of technology.

Early innovations happened slowly. It took us about 30,000 years to invent clothing and about 120,000 years to invent jewelry. It took us about 130,000 years to invent art and about 136,000 years to come up with the bow and arrow. But things began to speed up in the Upper Paleolithic period. Between 50,000 and 10,000 years ago, we developed more sophisticated tools with specialized uses—think harpoons, darts, fishing tools, and needles—early musical instruments, pottery, and the first domesticated animals. Between roughly 11,000 years ago and the 18th century, the pace truly accelerated. That period essentially led to the creation of civilization, with the foundations of our current world.

More recently, the Industrial Revolution changed everything because it moved us significantly further away from relying on the strength of people and domesticated animals to power means of production. Steam engines and machinery replaced backbreaking labor, meaning more production at a lower cost. The number of adjacent possibilities began to snowball. Machinery enabled mass production and interchangeable parts. Steam-powered trains meant people could move around far more easily, allowing people from different areas to mix together and share ideas. Improved communications did the same. It’s pointless to even try listing the ways technology has changed since then. Regardless of age, we’ve all lived through it and seen the acceleration. Few people dispute that the change is snowballing. The only question is how far that will go.

As Stephen Hawking put it in 1993:

For millions of years, mankind lived just like the animals. Then something happened which unleashed the power of our imagination. We learned to talk and we learned to listen. Speech has allowed the communication of ideas, enabling human beings to work together to build the impossible. Mankind’s greatest achievements have come about by talking, and its greatest failures by not talking. It doesn’t have to be like this. Our greatest hopes could become reality in the future. With the technology at our disposal, the possibilities are unbounded. All we need to do is make sure we keep talking.

But, as we saw with Moore’s Law, exponential growth cannot continue forever. Eventually, we run into fundamental constraints. Hours in the day, people on the planet, availability of a resource, smallest possible size of a semiconductor, attention—there’s always a bottleneck we can’t eliminate.  We reach the point of diminishing returns. Growth slows or stops altogether. We must then either look at alternative routes to improvement or leave things as they are. In Everett Rogers’s diffusion of innovation theory, this is known as the substitution stage, when usage declines and we start looking for substitutes.

This process is not linear. We can’t predict the future because there’s no way to take into account the tiny factors that will have a disproportionate impact in the long run.

The post Gates’ Law: How Progress Compounds and Why It Matters appeared first on Farnam Street.

Why the Printing Press and the Telegraph Were as Impactful as the Internet https://canvasly.link/printing-press-telegraph-matter/ Mon, 06 Mar 2017 12:00:03 +0000 https://www.farnamstreetblog.com/?p=30620 What makes a communications technology revolutionary? One answer to this is to ask whether it fundamentally changes the way society is organized. This can be a very hard question to answer, because true fundamental changes alter society in such a way that it becomes difficult to speak of past society without imposing our present understanding. In her …

The post Why the Printing Press and the Telegraph Were as Impactful as the Internet appeared first on Farnam Street.

What makes a communications technology revolutionary? One answer to this is to ask whether it fundamentally changes the way society is organized. This can be a very hard question to answer, because true fundamental changes alter society in such a way that it becomes difficult to speak of past society without imposing our present understanding.

In her seminal work, The Printing Press as an Agent of Change, Elizabeth Eisenstein argues just that:

When ideas are detached from the media used to transmit them, they are also cut off from the historical circumstances that shape them, and it becomes difficult to perceive the changing context within which they must be viewed.

Today we rightly think of the internet and the mobile phone as revolutionary, but long ago the printing press and the telegraph each had just as heavy an impact on the development of society.

Printing Press

Thinking of the time before the telegraph, when communications had to be hand delivered, is quaint. Trying to conceive the world before the uniformity of communication brought about by the printing press is almost unimaginable.

Eisenstein argues that the printing press “is of special historical significance because it produced fundamental alterations in prevailing patterns of continuity and change.”

Before the printing press there were no books, not in the sense that we understand them. There were manuscripts that were copied by scribes, which contained inconsistencies and embellishments, and modifications that suited who the scribe was working for. The printing press halted the evolution of symbols: For the first time maps and numbers were fixed.

Furthermore, because pre-press scholars had to go to manuscripts, Eisenstein says we should “recognize the novelty of being able to assemble diverse records and reference guides, and of being able to study them without having to transcribe them at the same time” that was afforded by the printing press.

This led to new ways of being able to compare and thus develop knowledge, by reducing the friction of getting to the old knowledge:

More abundantly stocked bookshelves obviously increased opportunities to consult and compare different texts. Merely by making more scrambled data available, by increasing the output of Aristotelian, Alexandrian and Arabic texts, printers encouraged efforts to unscramble these data.

Eisenstein argues that many of the great thinkers of the 16th century, such as Descartes and Montaigne, would have been unlikely to have produced what they did without the changes wrought by the printing press. She says of Montaigne, “that he could see more books by spending a few months in his Bordeaux tower-study than earlier scholars had seen after a lifetime of travel.”

The printing press increased the speed of communication and the spread of knowledge: Far fewer man-hours were needed to turn out 50 printed books than 50 scribed manuscripts.

Telegraph

Henry Ford is often credited with saying of life before the car, “If I had asked people what they wanted, they would have said faster horses.” This sentiment could equally be applied to the telegraph, a communications technology that came about 400 years after the printing press.

Before the telegraph, the speed of communication was dependent on the speed of the physical object doing the transporting – the horse, or the ship. Societies were thus organized around the speed of communication available to them, from the way business was conducted and wars were fought to the way interpersonal communication was conducted.

Let’s consider, for example, the way the telegraph changed the conduct of war.

Prior to the telegraph, countries shared detailed knowledge of their plans with their citizens in order to boost morale, knowing that word of those plans could reach the enemy no faster than their ships did. Post-telegraph, communications could travel far faster than soldiers: This was something to consider!

In addition, as Tom Standage considers in his book The Victorian Internet, the telegraph altered the command structure in battle. “For who was better placed to make strategic decisions: the commander at the scene or his distant superiors?”

The telegraph brought changes similar in many ways to the printing press: It allowed for an accumulation of knowledge and increased the availability of this knowledge; more people had access to more information.

And society was forever altered as the new speed of communication made it fundamentally impossible to not use the telegraph, just as it is near impossible not to use a mobile phone or the Internet today.

Once the telegraph was widespread, there was no longer a way to do business without using it. Having up-to-the-minute stock quotes changed the way businesses evaluated their holdings. Being able to communicate with various offices across the country created centralization and middle management. These elements became so embedded in doing business that it soon became nonsensical to talk about developing any aspect of business independent of the effect of electronic communication.

A Final Thought on Technology Uptake

One can argue that the more revolutionary an invention is, the slower the initial uptake into society, as society must do a fair amount of reorganizing to integrate the invention.

Such was the case for both the telegraph and printing press, as they allowed for things that were never before possible. Not being possible, they were rarely considered. Being rarely considered, there wasn’t a large populace pining for them to happen. So when new options presented themselves, no one was rushing to embrace them, because there was no general appreciation of their potential. This is, of course, a fundamental aspect of revolutionary technology. Everyone has to figure out how (and why) to use it.

In The Victorian Internet, Standage says of William Cooke and Samuel Morse, the British and American inventors, respectively, of the telegraph:

[They] had done the impossible and constructed working telegraphs. Surely the world would fall at their feet. Building the prototypes, however, turned out to be the easy part. Convincing people of their significance was far more of a challenge.

It took years for people to see advantages with the telegraph. Even after the first lines were built, and the accuracy and speed of the communications they could carry verified, Morse realized that “everybody still thought of the telegraph as a novelty, as nothing more than an amusing subject for a newspaper article, rather than the revolutionary new form of communication that he envisaged.”

The new technology might confer great benefits, but it took a lot of work building the infrastructure, both physical and mental, to take advantage of them.

The printing press faced similar challenges. In fact, books printed from Gutenberg until 1501 have their own term, incunabula, which reflects the transition from manuscript to book. Eisenstein writes: “Printers and scribes copied each other’s products for several decades and duplicated the same texts for the same markets during the age of incunabula.”

The momentum took a while to build. When it did, the changes were remarkable.

But looking at these two technologies serves as a reminder of what revolutionary means in this context: The use by and value to society cannot be anticipated. Therefore, great and unpredictable shifts are caused when they are adopted and integrated into everyday life.

The post Why the Printing Press and the Telegraph Were as Impactful as the Internet appeared first on Farnam Street.

Don’t Let Your (Technology) Tools Use You https://canvasly.link/dont-let-technology-tools-use-you/ Tue, 02 Aug 2016 11:45:58 +0000 https://www.farnamstreetblog.com/?p=28593 “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among …

The post Don’t Let Your (Technology) Tools Use You appeared first on Farnam Street.

“In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

— Herbert Simon

***

A shovel is just a shovel. You shovel things with it. You can break up weeds and dirt. (You can also whack someone with it.) I’m not sure I’ve seen a shovel used for much else.

Modern technological tools aren’t really like that.

What is an iPhone, functionally? Sure, it’s got the phone thing down, but it’s also a GPS, a note-taker, an emailer, a text messager, a newspaper, a video-game device, a taxi-calling service, a flashlight, a web browser, a library, a book…you get the point. It does a lot.

This all seems pretty wonderful. To perform those functions 20 years ago, you needed a map and a sense of direction, a notepad, a personal computer, a cell phone, an actual newspaper, a PlayStation, a phone and the willingness to talk to a person, an actual flashlight, an actual library, an actual book…you get the point. As Marc Andreessen puts it, software is eating the world. One simple (looking) device and a host of software can perform the functions served by a bunch of big clunky tools of the past.

So far, we’ve been convinced that use of the New Tools is mostly “upside,” that our embrace of them should be wholehearted. Much of this is for good reason. Do you remember how awful using a map was? Yuck.

The problem is that our New Tools are winning the battle of attention. We’ve gotten to the point where the tools use us as much as we use them. This new reality means we need to re-examine our relationship with our New Tools.


Down the Rabbit Hole

Here’s a typical situation.

You’re on your computer finishing the client presentation you have to give in two days. Your phone lights up and makes a chiming noise — you’ve got a text message. “Hey, have you seen that new Dracula movie?” asks your friend. It only takes a few messages before the two of you begin to disagree on whether Transylvania is actually a real place. Off to Google!

After a few quick clicks, you get to Wikipedia, which tells you that yes, Transylvania is a region of Romania which the author Bram Stoker used as Count Dracula’s birthplace. Reading the Wikipedia entry costs you about 20 minutes. As you read, you find out that Bram Stoker was actually Irish. Irish! An Irish guy wrote Dracula? How did I not know this? Curiosity stoked, you look up Irish novelists, the history of Gothic literature, the original vampire stories…down and down the rabbit hole you go.

Eventually your thirst for trivia is quenched, and you close the Wikipedia tab to text your friend how wrong they are about Transylvania. You click the Home button to leave your text conversation, which lets you see the Twitter icon. I wonder how many people retweeted my awesome joke about ventriloquism? You pull it up and start “The Scroll.” Hah! Greg is hilarious. Are you serious, Bill Gates? Damn — I wish I read as much as Shane Parrish. You go and go. Your buddy tweets a link to an interesting-looking article about millennials — “10 Ways Millennials are Ruining the Workplace”. God, they are so self-absorbed. Click.

You decide to check Facebook and see if that girl from the cocktail party on Friday commented on your status. She didn’t, but Wow, Susanne went to Hawaii? You look at 35 pictures Susanne posted in her first three hours in Hawaii. Wait, who’s that guy she’s with? You click his name and go to his Facebook page. On down the rabbit hole you fall…

Now it’s been two hours since you left your presentation to respond to the text message, and you find yourself physically tired from the rapid scanning and clicking, scanning and clicking, scanning and clicking of the past two hours. Sad, you go get a coffee, go for a short walk, and decide: Now, I will focus. No more distraction.

Ten minutes in, your phone buzzes. That girl from the cocktail party commented on your status…

Attention for Sale

We’ve all been there. When we come up for air, it can feel like the aftermath of being swept up in a mob. What did I just do?

The tools we’re now addicted to have been engineered for a simple purpose: To keep us addicted to them. The service they provide is secondary to the addiction. Yes, Facebook is a networking tool. Yes, Twitter is a communication tool. Yes, Instagram is an excellent food-photography tool. But unless they get us hooked and keep us hooked, their business models are broken.

Don’t believe us?

Take stock of the metrics by which people value or assess these companies. Clicks. Views. Engagement. Return visits. Length of stay. The primary source of value for these products is how much you use them and what they can sell to you while you’re there. Increasing their value is a simple (but not easy) proposition: Either get usage up or figure out more effective ways to sell to you while you’re there.

As Herbert Simon might have predicted, our attention is for sale, and we’re ceding it a little at a time as the tools get better and better at fulfilling their function. There’s a version of natural selection going on, where the only consumer technology products that survive are the enormously addictive ones. The trait which produces maximum fitness is addictiveness itself. If you’re not using a tool constantly, it has no value to advertisers or data sellers, and thus they cannot raise capital to survive. And even if it’s an app or tool that you buy, one that you have to pay money for upfront, they must hook you on Version 1 if you’re going to be expected to buy Versions 2, 3, and 4.

This ecosystem ensures that each generation of consumer tech products – hardware or software – gets better and better at keeping you hooked. These services have learned, through a process of evolution, to drown users in positive feedback and create intense habitual usage. They must – because any other outcome is death. Facebook doesn’t want you to go on once a month to catch up on your correspondence. You must be engaged. The service does not care whether it’s unnecessarily eating into your life.

Snap Back to Reality

It’s up to us, then, to take our lives back. We must comprehend that the New Tools carry a tremendous downside: the loss of focused attention, which we give up willingly in a sort of Faustian bargain for entertainment, connectedness, and novelty.

Psychologist Mihaly Csikszentmihalyi pioneered the concept of Flow, where we enter an enjoyable state of rapt attention to our work and produce a high level of creative output. It’s a wonderful feeling, but the New Tools have learned to provide the same sensation without the actual results. We don’t end up with a book, or a presentation, or a speech, or a quilt, or a hand-crafted table. We end up two hours later in the day.

***

The first step towards a solution must be to understand the reality of this new ecosystem.

It follows Garrett Hardin’s “First Law of Ecology”: You can never merely do one thing. The New Tools are not like the Old Tools, where you pick up the shovel, do your shoveling, and then put the shovel back in the garage. The iPhone is not designed that way. It’s designed to keep you going, as are most of the other New Tools. You probably won’t send one text. You probably won’t watch one video. You probably won’t read one article. You’re not supposed to!

The rational response to this new reality depends a lot on who you are and what you need the tools for. Some people can get rid of 50% or more of their New Tools very easily. You don’t have to toss out your iPhone for a StarTAC, but because software is doing the real work, you can purposefully reduce the capability of the hardware by reducing your exposure to certain software.

As you shed certain tools, expect a homeostatic response from your network. Don’t be mistaken: If you’re a Snapchatter or an Instagrammer or simply an avid texter, getting rid of those services will give rise to consternation. They are, after all, networking tools. Your network will notice. You’ll need a bit of courage to face your friends and tell them, with a straight face, that you won’t be Instagramming anymore because you’re afraid of falling down the rabbit hole. But if you’ve got the courage, you’ll probably find that after a week or two of adjustment your life will go on just fine.

The second and more mild type of response would be to appreciate the chain-smoking nature of these products and to use them more judiciously. Understand that every time you look at your iPhone or connect to the Internet, the rabbit hole is there waiting for you to tumble down. If you can grasp that, you’ll realize that you need to be suspicious of the “quick check.” Either learn to batch Internet and phone time into concentrated blocks or slowly re-learn how to ignore the desire to follow up on every little impulse that comes to mind. (Or preferably, do both.)

A big part of this is turning off any sort of “push” notification, which must be the most effective attention-diverter ever invented by humanity. A push notification is anything that draws your attention to the tool without your conscious input. It’s when your phone buzzes for a text message, or an image comes on the screen when you get an email, or your phone tells you that you’ve got a Facebook comment. Anything that desperately induces you to engage. You need to turn them off. (Yes, including text message notifications – your friends will get used to waiting).

E-mail can be the worst offender; it’s the earliest and still one of the most effective digital rabbit holes. To push back, close your email client when you’re not using it. That way, you’ll have to open it to send or read an email. Then go ahead and change the settings on your phone’s email client so you have to “fetch” emails yourself, rather than having them pushed at you. Turn off anything that tells you an email has arrived.

Once you stop being notified by your tools, you can start to engage with them on your own terms and focus on your real work for a change; focus on the stuff actually producing some value in your life and in the world. When the big stuff is done, you can give yourself a half-hour or an hour to check your Facebook page, check your Instagram page, follow up on Wikipedia, check your emails, and respond to your text messages. This isn’t as good a solution as deleting many of the apps altogether, but it does allow you to engage with these tools on your own terms.

However you choose to address the world of New Tools, you’re way ahead if you simply recognize their power over your attention. Getting lost in hyperlinks and Facebook feeds doesn’t mean you’re weak; it just means the tools you’re using are designed, at their core, to help you get lost. Instead of allowing yourself to go to work for them, resolve to make them work for you.

The post Don’t Let Your (Technology) Tools Use You appeared first on Farnam Street.

Marshall McLuhan: The Here And Now https://canvasly.link/marshall-mcluhan-now/ Tue, 26 May 2015 11:00:09 +0000 http://www.farnamstreetblog.com/?p=20086 “In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message.” *** In this passage from Understanding Media, Marshall McLuhan reminds us of the difficulty that …

The post Marshall McLuhan: The Here And Now appeared first on Farnam Street.

“In a culture like ours, long accustomed to splitting and dividing all things as a means of control, it is sometimes a bit of a shock to be reminded that, in operational and practical fact, the medium is the message.”

***

In this passage from Understanding Media, Marshall McLuhan reminds us of the difficulty that frictionless connection brings with it and how technological media advances have worked not to preserve but rather to ‘abolish history.’

Perfection of the means of communication has meant instantaneity. Such an instantaneous network of communication is the body-mind unity of each of us. When a city or a society achieves a diversity and equilibrium of awareness analogous to the body-mind network, it has what we tend to regard as a high culture.

But the instantaneity of communication makes free speech and thought difficult if not impossible, and for many reasons. Radio extends the range of the casual speaking voice, but it forbids that many should speak. And when what is said has such range of control, it is forbidden to speak any but the most acceptable words and notions. Power and control are in all cases paid for by loss of freedom and flexibility.

Today the entire globe has a unity in point of mutual interawareness, which exceeds in rapidity the former flow of information in a small city—say Elizabethan London with its eighty or ninety thousand inhabitants. What happens to existing societies when they are brought into such intimate contact by press, picture stories, newsreels, and jet propulsion? What happens when the Neolithic Eskimo is compelled to share the time and space arrangements of technological man? What happens in our minds as we become familiar with the diversity of human cultures which have come into existence under innumerable circumstances, historical and geographical? Is what happens comparable to that social revolution which we call the American melting pot?

When the telegraph made possible a daily cross section of the globe transferred to the page of newsprint, we already had our mental melting pot for cosmic man—the world citizen. The mere format of the page of newsprint was more revolutionary in its intellectual and emotional consequences than anything that could be said about any part of the globe.

When we juxtapose news items from Tokyo, London, New York, Chile, Africa, and New Zealand, we are not just manipulating space. The events so brought together belong to cultures widely separated in time. The modern world abridges all historical times as readily as it reduces space. Everywhere and every age have become here and now. History has been abolished by our new media.

The post Marshall McLuhan: The Here And Now appeared first on Farnam Street.

The Glass Cage: Automation and US https://canvasly.link/the-glass-cage-nicholas-carr/ Mon, 29 Sep 2014 12:00:13 +0000 http://www.farnamstreetblog.com/?p=19003 People have worried about losing their jobs to robots for decades now. But how is growing automation really going to change us? Let’s take a look at the limitations of automation and the uniquely human skills that will remain valuable. *** The impact of technology is all around us. Maybe we’re at another Gutenberg moment …

The post The Glass Cage: Automation and US appeared first on Farnam Street.

People have worried about losing their jobs to robots for decades now. But how is growing automation really going to change us? Let’s take a look at the limitations of automation and the uniquely human skills that will remain valuable.

***

The impact of technology is all around us. Maybe we’re at another Gutenberg moment and maybe we’re not.

Marshall McLuhan said it best.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

He also wrote that “a new medium is never an addition to an old one, nor does it leave the old one in peace.”

In The Glass Cage: Automation and Us, Nick Carr, one of my favorite writers, enters the debate about the impact automation has on us, “examining the personal as well as the economic consequences of our growing dependence on computers.”

We know that the nature of jobs is going to change in the future thanks to technology. Tyler Cowen argues “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

Carr’s book shows another side to the argument – the broader human consequences of living in a world where computers and software do the things we used to do.

Computer automation makes our lives easier, our chores less burdensome. We’re often able to accomplish more in less time—or to do things we simply couldn’t do before. But automation also has deeper, hidden effects. As aviators have learned, not all of them are beneficial. Automation can take a toll on our work, our talents, and our lives. It can narrow our perspectives and limit our choices. It can open us to surveillance and manipulation. As computers become our constant companions, our familiar, obliging helpmates, it seems wise to take a closer look at exactly how they’re changing what we do and who we are.

On autonomous automobiles, for example, Carr argues that while they have a ways to go before they start chauffeuring us around, there are broader questions that need to be answered first.

Although Google has said it expects commercial versions of its car to be on sale by the end of the decade, that’s probably wishful thinking. The vehicle’s sensor systems remain prohibitively expensive, with the roof-mounted laser apparatus alone going for eighty thousand dollars. Many technical challenges remain to be met, such as navigating snowy or leaf-covered roads, dealing with unexpected detours, and interpreting the hand signals of traffic cops and road workers. Even the most powerful computers still have a hard time distinguishing a bit of harmless road debris (a flattened cardboard box, say) from a dangerous obstacle (a nail-studded chunk of plywood). Most daunting of all are the many legal, cultural, and ethical hurdles a driverless car faces. Where, for instance, will culpability and liability reside should a computer-driven automobile cause an accident that kills or injures someone? With the car’s owner? With the manufacturer that installed the self-driving system? With the programmers who wrote the software? Until such thorny questions get sorted out, fully automated cars are unlikely to grace dealer showrooms.

Tacit and Explicit Knowledge

Self-driving cars are just one example of a technology that forces us “to change our thinking about what computers and robots can and can’t do.”

Up until that fateful October day, it was taken for granted that many important skills lay beyond the reach of automation. Computers could do a lot of things, but they couldn’t do everything. In an influential 2004 book, The New Division of Labor: How Computers Are Creating the Next Job Market, economists Frank Levy and Richard Murnane argued, convincingly, that there were practical limits to the ability of software programmers to replicate human talents, particularly those involving sensory perception, pattern recognition, and conceptual knowledge. They pointed specifically to the example of driving a car on the open road, a talent that requires the instantaneous interpretation of a welter of visual signals and an ability to adapt seamlessly to shifting and often unanticipated situations. We hardly know how we pull off such a feat ourselves, so the idea that programmers could reduce all of driving’s intricacies, intangibilities, and contingencies to a set of instructions, to lines of software code, seemed ludicrous. “Executing a left turn across oncoming traffic,” Levy and Murnane wrote, “involves so many factors that it is hard to imagine the set of rules that can replicate a driver’s behavior.” It seemed a sure bet, to them and to pretty much everyone else, that steering wheels would remain firmly in the grip of human hands.

In assessing computers’ capabilities, economists and psychologists have long drawn on a basic distinction between two kinds of knowledge: tacit and explicit. Tacit knowledge, which is also sometimes called procedural knowledge, refers to all the stuff we do without actively thinking about it: riding a bike, snagging a fly ball, reading a book, driving a car. These aren’t innate skills—we have to learn them, and some people are better at them than others—but they can’t be expressed as a simple recipe, a sequence of precisely defined steps. When you make a turn through a busy intersection in your car, neurological studies have shown, many areas of your brain are hard at work, processing sensory stimuli, making estimates of time and distance, and coordinating your arms and legs. But if someone asked you to document everything involved in making that turn, you wouldn’t be able to, at least not without resorting to generalizations and abstractions. The ability resides deep in your nervous system outside the ambit of your conscious mind. The mental processing goes on without your awareness.

Much of our ability to size up situations and make quick judgments about them stems from the fuzzy realm of tacit knowledge. Most of our creative and artistic skills reside there too. Explicit knowledge, which is also known as declarative knowledge, is the stuff you can actually write down: how to change a flat tire, how to fold an origami crane, how to solve a quadratic equation. These are processes that can be broken down into well-defined steps. One person can explain them to another person through written or oral instructions: do this, then this, then this.

Because a software program is essentially a set of precise, written instructions—do this, then this, then this—we’ve assumed that while computers can replicate skills that depend on explicit knowledge, they’re not so good when it comes to skills that flow from tacit knowledge. How do you translate the ineffable into lines of code, into the rigid, step-by-step instructions of an algorithm? The boundary between the explicit and the tacit has always been a rough one—a lot of our talents straddle the line—but it seemed to offer a good way to define the limits of automation and, in turn, to mark out the exclusive precincts of the human. The sophisticated jobs Levy and Murnane identified as lying beyond the reach of computers—in addition to driving, they pointed to teaching and medical diagnosis—were a mix of the mental and the manual, but they all drew on tacit knowledge.
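To make the contrast concrete, here is a minimal sketch of my own (it is not from Carr's book): explicit knowledge, like the quadratic formula mentioned above, reduces cleanly to a sequence of instructions a computer can execute, while tacit skills resist any such reduction.

import math

def solve_quadratic(a: float, b: float, c: float) -> tuple[float, float]:
    """Explicit knowledge as code: the quadratic formula written as
    precise, repeatable steps ("do this, then this, then this")."""
    if a == 0:
        raise ValueError("not a quadratic equation when a == 0")
    discriminant = b * b - 4 * a * c
    if discriminant < 0:
        raise ValueError("no real roots")
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# There is no analogous recipe for tacit skills such as snagging a fly
# ball or judging a left turn across traffic; we perform them without
# being able to write out the steps.
print(solve_quadratic(1, -3, 2))  # (2.0, 1.0)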

Google’s car resets the boundary between human and computer, and it does so more dramatically, more decisively, than have earlier breakthroughs in programming. It tells us that our idea of the limits of automation has always been something of a fiction. We’re not as special as we think we are. While the distinction between tacit and explicit knowledge remains a useful one in the realm of human psychology, it has lost much of its relevance to discussions of automation.

Tomorrowland

That doesn’t mean that computers now have tacit knowledge, or that they’ve started to think the way we think, or that they’ll soon be able to do everything people can do. They don’t, they haven’t, and they won’t. Artificial intelligence is not human intelligence. People are mindful; computers are mindless. But when it comes to performing demanding tasks, whether with the brain or the body, computers are able to replicate our ends without replicating our means. When a driverless car makes a left turn in traffic, it’s not tapping into a well of intuition and skill; it’s following a program. But while the strategies are different, the outcomes, for practical purposes, are the same. The superhuman speed with which computers can follow instructions, calculate probabilities, and receive and send data means that they can use explicit knowledge to perform many of the complicated tasks that we do with tacit knowledge. In some cases, the unique strengths of computers allow them to perform what we consider to be tacit skills better than we can perform them ourselves. In a world of computer-controlled cars, you wouldn’t need traffic lights or stop signs. Through the continuous, high-speed exchange of data, vehicles would seamlessly coordinate their passage through even the busiest of intersections—just as computers today regulate the flow of inconceivable numbers of data packets along the highways and byways of the internet. What’s ineffable in our own minds becomes altogether effable in the circuits of a microchip.

Many of the cognitive talents we’ve considered uniquely human, it turns out, are anything but. Once computers get quick enough, they can begin to replicate our ability to spot patterns, make judgments, and learn from experience.

It’s not only vocations that are increasingly being computerized; avocations are too.

Thanks to the proliferation of smartphones, tablets, and other small, affordable, and even wearable computers, we now depend on software to carry out many of our daily chores and pastimes. We launch apps to aid us in shopping, cooking, exercising, even finding a mate and raising a child. We follow turn-by-turn GPS instructions to get from one place to the next. We use social networks to maintain friendships and express our feelings. We seek advice from recommendation engines on what to watch, read, and listen to. We look to Google, or to Apple’s Siri, to answer our questions and solve our problems. The computer is becoming our all-purpose tool for navigating, manipulating, and understanding the world, in both its physical and its social manifestations. Just think what happens these days when people misplace their smartphones or lose their connections to the net. Without their digital assistants, they feel helpless.

As Katherine Hayles, a literature professor at Duke University, observed in her 2012 book How We Think, “When my computer goes down or my Internet connection fails, I feel lost, disoriented, unable to work—in fact, I feel as if my hands have been amputated.”

While our dependence on computers is “disconcerting at times,” we welcome it.

We’re eager to celebrate and show off our whizzy new gadgets and apps—and not only because they’re so useful and so stylish. There’s something magical about computer automation. To watch an iPhone identify an obscure song playing over the sound system in a bar is to experience something that would have been inconceivable to any previous generation.

Miswanting

The trouble with automation is “that it often gives us what we don’t need at the cost of what we do.”

To understand why that’s so, and why we’re eager to accept the bargain, we need to take a look at how certain cognitive biases—flaws in the way we think—can distort our perceptions. When it comes to assessing the value of labor and leisure, the mind’s eye can’t see straight.

Mihaly Csikszentmihalyi, a psychology professor and author of the popular 1990 book Flow, has described a phenomenon that he calls “the paradox of work.” He first observed it in a study conducted in the 1980s with his University of Chicago colleague Judith LeFevre. They recruited a hundred workers, blue-collar and white-collar, skilled and unskilled, from five businesses around Chicago. They gave each an electronic pager (this was when cell phones were still luxury goods) that they had programmed to beep at seven random moments a day over the course of a week. At each beep, the subjects would fill out a short questionnaire. They’d describe the activity they were engaged in at that moment, the challenges they were facing, the skills they were deploying, and the psychological state they were in, as indicated by their sense of motivation, satisfaction, engagement, creativity, and so forth. The intent of this “experience sampling,” as Csikszentmihalyi termed the technique, was to see how people spend their time, on the job and off, and how their activities influence their “quality of experience.”

The results were surprising. People were happier, felt more fulfilled by what they were doing, while they were at work than during their leisure hours. In their free time, they tended to feel bored and anxious. And yet they didn’t like to be at work. When they were on the job, they expressed a strong desire to be off the job, and when they were off the job, the last thing they wanted was to go back to work. “We have,” reported Csikszentmihalyi and LeFevre, “the paradoxical situation of people having many more positive feelings at work than in leisure, yet saying that they wish to be doing something else when they are at work, not when they are in leisure.” We’re terrible, the experiment revealed, at anticipating which activities will satisfy us and which will leave us discontented. Even when we’re in the midst of doing something, we don’t seem able to judge its psychic consequences accurately.

Those are symptoms of a more general affliction, on which psychologists have bestowed the poetic name miswanting. We’re inclined to desire things we don’t like and to like things we don’t desire. “When the things we want to happen do not improve our happiness, and when the things we want not to happen do,” the cognitive psychologists Daniel Gilbert and Timothy Wilson have observed, “it seems fair to say we have wanted badly.” And as slews of gloomy studies show, we’re forever wanting badly. There’s also a social angle to our tendency to misjudge work and leisure. As Csikszentmihalyi and LeFevre discovered in their experiments, and as most of us know from our own experience, people allow themselves to be guided by social conventions—in this case, the deep-seated idea that being “at leisure” is more desirable, and carries more status, than being “at work”—rather than by their true feelings. “Needless to say,” the researchers concluded, “such a blindness to the real state of affairs is likely to have unfortunate consequences for both individual wellbeing and the health of society.” As people act on their skewed perceptions, they will “try to do more of those activities that provide the least positive experiences and avoid the activities that are the source of their most positive and intense feelings.” That’s hardly a recipe for the good life.

It’s not that the work we do for pay is intrinsically superior to the activities we engage in for diversion or entertainment. Far from it. Plenty of jobs are dull and even demeaning, and plenty of hobbies and pastimes are stimulating and fulfilling. But a job imposes a structure on our time that we lose when we’re left to our own devices. At work, we’re pushed to engage in the kinds of activities that human beings find most satisfying. We’re happiest when we’re absorbed in a difficult task, a task that has clear goals and that challenges us not only to exercise our talents but to stretch them. We become so immersed in the flow of our work, to use Csikszentmihalyi’s term, that we tune out distractions and transcend the anxieties and worries that plague our everyday lives. Our usually wayward attention becomes fixed on what we’re doing. “Every action, movement, and thought follows inevitably from the previous one,” explains Csikszentmihalyi. “Your whole being is involved, and you’re using your skills to the utmost.” Such states of deep absorption can be produced by all manner of effort, from laying tile to singing in a choir to racing a dirt bike. You don’t have to be earning a wage to enjoy the transports of flow.

More often than not, though, our discipline flags and our mind wanders when we’re not on the job. We may yearn for the workday to be over so we can start spending our pay and having some fun, but most of us fritter away our leisure hours. We shun hard work and only rarely engage in challenging hobbies. Instead, we watch TV or go to the mall or log on to Facebook. We get lazy. And then we get bored and fretful. Disengaged from any outward focus, our attention turns inward, and we end up locked in what Emerson called the jail of self-consciousness. Jobs, even crummy ones, are “actually easier to enjoy than free time,” says Csikszentmihalyi, because they have the “built-in” goals and challenges that “encourage one to become involved in one’s work, to concentrate and lose oneself in it.” But that’s not what our deceiving minds want us to believe. Given the opportunity, we’ll eagerly relieve ourselves of the rigors of labor. We’ll sentence ourselves to idleness.

Automation offers us innumerable promises. Our lives, we think, will be better if more things are automated. Yet as Carr explores in The Glass Cage, automation exacts a cost, removing “complexity from jobs, diminishing the challenge they present and hence the level of engagement they promote.” This doesn’t mean that Carr is anti-automation. He’s not. He just wants us to see another side.

“All too often,” Carr warns, “automation frees us from that which makes us feel free.”

The post The Glass Cage: Automation and Us appeared first on Farnam Street.

Claude Shannon: The Man Who Turned Paper Into Pixels https://canvasly.link/claude-shannon-paper-into-pixels/ Sun, 13 Jul 2014 12:00:09 +0000 http://www.farnamstreetblog.com/?p=18475 “The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning.” — Claude Shannon (1948) *** Claude Shannon is the most important man you’ve probably never heard of. If Alan Turing is to be considered the father of modern …

“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning.”
— Claude Shannon (1948)

***

Claude Shannon is the most important man you’ve probably never heard of. If Alan Turing is to be considered the father of modern computing, then the American mathematician Claude Shannon is the architect of the Information Age.

A video created by the British filmmaker Adam Westbrook echoes a thought from Nassim Taleb: boosting the signal does not mean you remove the noise. In fact, it's just the opposite: you amplify it.

Any time you try to send a message from one place to another, something always gets in the way. The original signal is always distorted. Wherever there is signal there is also noise.

So what do you do? Well, the best anyone could do back then was to boost the signal. But then all you do is boost the noise.

Thing is we were thinking about information all wrong. We were obsessed with what a message meant.

A Renoir and a receipt? They’re different, right? Was there a way to think of them in the same way? Like so many breakthroughs the answer came from an unexpected place. A brilliant mathematician with a flair for blackjack.
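To make the narration's point concrete, here is a small sketch of my own (it is not from the video): amplifying a noisy analog signal boosts the noise right along with it, whereas treating the message as discrete symbols, which was Shannon's move, lets a relay regenerate the original bits and throw the noise away.

signal = [1.0, -1.0, 1.0, 1.0, -1.0]    # the bits 1, 0, 1, 1, 0 sent as +/-1 volts
noise = [0.3, -0.2, 0.4, -0.35, 0.1]    # distortion picked up along the wire
received = [s + n for s, n in zip(signal, noise)]

# An analog repeater boosts signal and noise alike, so the ratio
# between them never improves.
amplified = [10 * r for r in received]  # [13.0, -12.0, 14.0, 6.5, -9.0]

# A digital repeater instead decides which symbol was meant and
# re-transmits it cleanly, discarding the accumulated noise.
regenerated = [1 if r > 0 else 0 for r in received]
print(regenerated)                      # [1, 0, 1, 1, 0]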

***

The transistor was invented in 1948, at Bell Telephone Laboratories. This remarkable achievement, however, “was only the second most significant development of that year,” writes James Gleick in his fascinating book The Information: A History, a Theory, a Flood. The most important development of 1948, and the one that still underpins modern technology, is the bit.

An invention even more profound and more fundamental came in a monograph spread across seventy-nine pages of The Bell System Technical Journal in July and October. No one bothered with a press release. It carried a title both simple and grand, “A Mathematical Theory of Communication,” and the message was hard to summarize. But it was a fulcrum around which the world began to turn. Like the transistor, this development also involved a neologism: the word bit, chosen in this case not by committee but by the lone author, a thirty-two-year-old named Claude Shannon. The bit now joined the inch, the pound, the quart, and the minute as a determinate quantity—a fundamental unit of measure.

But measuring what? “A unit for measuring information,” Shannon wrote, as though there were such a thing, measurable and quantifiable, as information.

[…]

Shannon’s theory made a bridge between information and uncertainty; between information and entropy; and between information and chaos. It led to compact discs and fax machines, computers and cyberspace, Moore’s law and all the world’s Silicon Alleys. Information processing was born, along with information storage and information retrieval. People began to name a successor to the Iron Age and the Steam Age.
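As an aside, and as a sketch of my own rather than anything taken from Gleick or from Shannon's paper, the "unit for measuring information" has a compact definition: the entropy of a message source, measured in bits. A fair coin flip carries exactly one bit; a nearly certain outcome carries almost none.

from math import log2

def bits(probabilities):
    """Shannon entropy: the information content of a source, in bits,
    given the probabilities of its possible messages."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(bits([0.5, 0.5]))       # a fair coin flip: exactly 1.0 bit
print(bits([1/26] * 26))      # a random letter of the alphabet: about 4.7 bits
print(bits([0.99, 0.01]))     # a near-certain outcome: about 0.08 bits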

Gleick also recounts the relationship between Turing and Shannon:

In 1943 the English mathematician and code breaker Alan Turing visited Bell Labs on a cryptographic mission and met Shannon sometimes over lunch, where they traded speculation on the future of artificial thinking machines. (“Shannon wants to feed not just data to a Brain, but cultural things!” Turing exclaimed. “He wants to play music to it!”)

Commenting on the vitality of information, Gleick writes:

(Information) pervades the sciences from top to bottom, transforming every branch of knowledge. Information theory began as a bridge from mathematics to electrical engineering and from there to computing. … Now even biology has become an information science, a subject of messages, instructions, and code. Genes encapsulate information and enable procedures for reading it in and writing it out. Life spreads by networking. The body itself is an information processor. Memory resides not just in brains but in every cell. No wonder genetics bloomed along with information theory. DNA is the quintessential information molecule, the most advanced message processor at the cellular level— an alphabet and a code, 6 billion bits to form a human being. “What lies at the heart of every living thing is not a fire, not warm breath, not a ‘spark of life,’” declares the evolutionary theorist Richard Dawkins. “It is information, words, instructions.… If you want to understand life, don’t think about vibrant, throbbing gels and oozes, think about information technology.” The cells of an organism are nodes in a richly interwoven communications network, transmitting and receiving, coding and decoding. Evolution itself embodies an ongoing exchange of information between organism and environment.

The bit is the very core of the information age.

The bit is a fundamental particle of a different sort: not just tiny but abstract— a binary digit, a flip-flop, a yes-or-no. It is insubstantial, yet as scientists finally come to understand information, they wonder whether it may be primary: more fundamental than matter itself. They suggest that the bit is the irreducible kernel and that information forms the very core of existence.

In the words of John Archibald Wheeler, the last surviving collaborator of both Einstein and Bohr, information gives rise to “every it— every particle, every field of force, even the spacetime continuum itself.”

This is another way of fathoming the paradox of the observer: that the outcome of an experiment is affected, or even determined, when it is observed. Not only is the observer observing, she is asking questions and making statements that must ultimately be expressed in discrete bits. “What we call reality,” Wheeler wrote coyly, “arises in the last analysis from the posing of yes-no questions.” He added: “All things physical are information-theoretic in origin, and this is a participatory universe.” The whole universe is thus seen as a computer —a cosmic information-processing machine.

The greatest gift of Prometheus to humanity was not fire after all: “Numbers, too, chiefest of sciences, I invented for them, and the combining of letters, creative mother of the Muses’ arts, with which to hold all things in memory.”

Information technologies are relative to the era that produced them, yet absolute in their significance. Gleick writes:

The alphabet was a founding technology of information. The telephone, the fax machine, the calculator, and, ultimately, the computer are only the latest innovations devised for saving, manipulating, and communicating knowledge. Our culture has absorbed a working vocabulary for these useful inventions. We speak of compressing data, aware that this is quite different from compressing a gas. We know about streaming information, parsing it, sorting it, matching it, and filtering it. Our furniture includes iPods and plasma displays, our skills include texting and Googling, we are endowed, we are expert, so we see information in the foreground. But it has always been there. It pervaded our ancestors’ world, too, taking forms from solid to ethereal, granite gravestones and the whispers of courtiers. The punched card, the cash register, the nineteenth-century Difference Engine, the wires of telegraphy all played their parts in weaving the spiderweb of information to which we cling. Each new information technology, in its own time, set off blooms in storage and transmission. From the printing press came new species of information organizers: dictionaries, cyclopaedias, almanacs— compendiums of words, classifiers of facts, trees of knowledge. Hardly any information technology goes obsolete. Each new one throws its predecessors into relief. Thus Thomas Hobbes, in the seventeenth century, resisted his era’s new-media hype: “The invention of printing, though ingenious, compared with the invention of letters is no great matter.” Up to a point, he was right. Every new medium transforms the nature of human thought. In the long run, history is the story of information becoming aware of itself.

The Information: A History, a Theory, a Flood is a fascinating read.

The post Claude Shannon: The Man Who Turned Paper Into Pixels appeared first on Farnam Street.

Douglas Adams on our Reactions to Technology Over Time https://canvasly.link/douglas-adams-reactions-technology-over-time/ Fri, 23 May 2014 12:00:37 +0000 http://www.farnamstreetblog.com/?p=18045 “I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt. 1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. 2. Anything that’s invented between when you’re …

“I’ve come up with a set of rules that describe our reactions to technologies,” writes Douglas Adams in The Salmon of Doubt.

1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you’re thirty-five is against the natural order of things.

Adams is best known for his cult classic The Hitchhiker’s Guide to the Galaxy.

The post Douglas Adams on our Reactions to Technology Over Time appeared first on Farnam Street.

Marshall McLuhan: Old Versus New Assumptions https://canvasly.link/marshall-mcluhan-assumptions/ Sun, 21 Jul 2013 13:00:12 +0000 http://www.farnamstreetblog.com/?p=12094 Marshall McLuhan (1911-1980), for those unfamiliar with him, rocketed from an unknown academic to rockstar with the publication of Understanding Media: The Extensions of Man in 1964. The core of the book is a phrase many of us are familiar with: “The medium is the message.” But long before those famous words were ever spoken, McLuhan …

Marshall McLuhan (1911-1980), for those unfamiliar with him, rocketed from an unknown academic to rockstar with the publication of Understanding Media: The Extensions of Man in 1964. The core of the book is a phrase many of us are familiar with: “The medium is the message.”

But long before those famous words were ever spoken, McLuhan offered advice on the evolving media landscape. This interview from 1960 offers as much wisdom today as it did then.

When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. This is normal, and we’re still trying to see how will our previous forms of political and educational patterns persist under television. We’re just trying to fit the old things into the new form, instead of asking what is the new form going to do to all the assumptions we had before.

Let’s think about this for a second. “When any new form comes into the foreground of things, we naturally look at it through the old stereos. We can’t help that. … We’re just trying to fit the old things into the new form.”

We do this with knowledge as well. Generalists are largely a thing of the past. To get a job we have to know a niche and we have to know it well. This process of specialization, however, comes with a high cost. We fail to learn about the dynamic world in which we operate. Over years and decades, we come to fit the world into our knowledge. Rather than see things as an interconnected holistic system, we fall into what Daniel Kahneman calls “What You See Is All There Is” (WYSIATI). The way out of this, of course, is to arm yourself with the great models of the world.


The post Marshall McLuhan: Old Versus New Assumptions appeared first on Farnam Street.
