Although the ability to reprogram neural networks is a key attribute of the iconoclast’s brain, that doesn’t mean it works for everyone. Sometimes reprogramming must be approached gradually, or else the iconoclast’s ideas will be rejected.
Before Pac-Man, the Iconoclast Who Brought Us Pong
I used the example of Pac-Man earlier because this game was, for a time, the most popular video game in existence. For those who grew up during that era, the image of those pies chomping around a video screen remains indelibly burned into their brains. It is easy to take those images for granted now, but at the time, video games were revolutionary. And the granddaddy of all video games, Pong, was perhaps the most iconoclastic of all. Every modern video game, whether it is played on a computer or an Xbox, derives from the deceptively simple computer version of table tennis.
In 1970, Pong’s inventor, Nolan Bushnell, was just another electrical engineer working in Silicon Valley. He was making decent money working for Ampex, a manufacturer of recording equipment, but Bushnell’s real love was for games, and he soon found himself designing coin-operated arcade games for a much smaller company, Nutting Associates. The result was a game called Computer Space, which was a sort of galactic dogfight between a spaceship and a flying saucer. Although Computer Space was a hit with his engineering friends, it didn’t go over so well in the usual environment for arcade games: bars. In fact, it was a flop. Although the game was simple by today’s standards of video gaming, it required players to control a spaceship using “thrust,” “fire,” and “rotate” buttons. At that time, Bushnell observed too many players dropping a quarter into the game and just standing there waiting for something to happen. What happened was, the flying saucer flew over and zapped their spaceship. The players did not have a category in their brains for interpreting this type of amusement.
Because of this failure, Bushnell left Nutting and, with his friend Ted Dabney and $500, formed his own company, calling it Atari, after a term from the Japanese game of Go. Outside of big mainframes, computers didn’t exist, so all these video games had to be created with specialized electronics. They hired Al Alcorn, a young engineer, to carry out the electrical wizardry. As a warm-up exercise, Bushnell gave Alcorn the simple task of creating a video version of Ping-Pong.
Nobody, for a minute, believed that a computer version of Ping-Pong would have any appeal. After all, if you wanted to play Ping-Pong, you might as well just play on a table. The pattern of dogmatic thinking was identical to what chemists said about NMR. But Bushnell eschewed dogma and plowed ahead. Keeping it as simple as possible, Bushnell suggested the screen should show only one ball, two paddles, and the players’ scores. It took Alcorn only two weeks to come up with a working prototype. Much to everyone’s surprise, the game was remarkably entertaining and addictive. And most important, it didn’t require any instructions or reprogramming in the brains of end users, who, if they were playing in a bar, were probably drunk anyway.
Pong was field-tested for the first time in 1972 at Andy Capp’s Tavern in Sunnyvale. Two weeks later, the bar owner called Bushnell, asking him to come and fix the machine. But Pong wasn’t broken. The coin box had simply jammed with too many quarters. Bushnell was onto something, and the coin-op arcade business ate it up. But Pong’s simplicity also threatened to destroy Atari. The game was easily copied, and rivals began selling competing versions to arcades. On the verge of bankruptcy, Bushnell made the bold move into a home version of Pong, bucking the conventional wisdom that said arcade games were only played in arcades. For a company with no experience in the consumer electronics sector, it was a risky strategy. By 1974, silicon chip technology had advanced sufficiently that a single custom chip containing all the circuitry of the arcade game could fit into a home console. Eventually Sears bought exclusive rights for one year and ordered 150,000 units, enough to save Atari and launch Bushnell into his next venture, Chuck E. Cheese’s.
Seeing Like an Iconoclast
If we can say one thing about the iconoclast’s brain, it would be this: it sees differently than other people’s brains. When Chihuly lost the vision in one eye, he began to see the world differently. But this is a drastic measure. It does, however, illustrate the importance of new perspectives in the creation of new ideas. The overwhelming importance of the visual system to the human mind means that many of the great innovations began with a change in visual perception. It wasn’t until Paul Lauterbur stared at a blurry NMR spectrum of cancer that he realized the potential for creating MRI. In both of these cases, the iconoclasts’ key insights were triggered by visual images. For Chihuly, it was a realization that beauty in glass sculpture need not be equated with symmetry, which was a reflection of his own asymmetry. For Lauterbur, it was a realization that blurriness in an NMR spectrum need not be equated with noise. Even Nolan Bushnell’s realization that Computer Space was too complicated for people came from seeing customers being dumbfounded by the game.
Iconoclasm begins with perception. More specifically, it begins with visual perception, and so the first step to thinking like an iconoclast is to see like one.
At every step in the process of visual perception, the brain throws out pieces of information and assimilates the remaining ones into increasingly abstract components. Experience plays a major role in this process. The human brain sees things in ways that are most familiar to it. But epiphanies rarely occur in familiar surroundings. The key to seeing like an iconoclast is to look at things that you have never seen before. It seems almost obvious that breakthroughs in perception do not come from simply staring at an object and thinking harder about it. Breakthroughs come from a perceptual system that is confronted with something that it doesn’t know how to interpret. Unfamiliarity forces the brain to discard its usual categories of perception and create new ones.
Sometimes the brain needs a kick start. Although Chihuly was already marching down the path of artistic creativity, the loss of vision in one eye jolted his brain, in a very literal sense, to see differently. Chihuly’s brain probably adapted to monocular vision within about six months, but the effect on his art was indelible. He continued to be a visual artist, seeking out inspirations in unlikely places. Although he works in a medium that dictates individual pieces can only be a foot or two tall, he gets ideas from nature and, nowadays, architecture. Architecture stimulates his visual system, and yet, at a different level, it is also a tactile experience for Chihuly. Unusual spaces force his brain to process inputs in novel ways, sprouting new connections and making synapses where none existed before.
Sometimes a simple change of environment is enough to jog the perceptual system out of familiar categories. This may be one reason why restaurants figure so prominently as sites of perceptual breakthroughs. A more drastic change of environment—traveling to another country, for example—is even more effective. When confronted with places never seen before, the brain must create new categories. It is in this process that the brain jumbles around old ideas with new images to create new syntheses.
New acquaintances can also be a source of new perceptions. Other people will frequently offer their opinions of what they see, and these ideas may be enough to destabilize familiar patterns of perception. A change of vantage point may also be sufficient to yield new perceptions. The floating triangle example illustrated how focusing on details versus standing back and looking at the whole can yield markedly different visual perceptions.
By forcing the visual system to see things in different ways, you can increase the odds of new insights. It sounds remarkably simple. But it is not quite that easy. As we shall see in the next chapter, the brain frequently resists exactly these types of new experiences because they cost energy to process.
From Perception to Imagination
Education consists mainly in what we have unlearned.
—Mark Twain
HUMANS DEPEND ON VISION, more than any other sense, to navigate through the world. Mostly we take the visual process for granted. And rightly so, for if we had to think too much about what we see from moment to moment, scarce brain power would remain for doing anything else. Most of the time, the efficiency of our visual systems works to our advantage. Hitting a major league fastball, for example, requires the precise coordination of eyes and body. A 90-mile-per-hour fastball reaches the plate in about 0.4 seconds, but the batter must decide whether to swing by the time the ball is about halfway there. The limit of human reaction time is about 0.2 seconds, which means that the task of hitting a fastball pushes the vision and motor systems to their limits. There is no time for thought. The connection between eye and body must be seamless. This automaticity lets us accomplish anything that requires hand-eye coordination, but it comes with a price. In the interests of crafting an efficient visual system, the brain must make guesses about what it is actually seeing. Most of the time this works, but these automatic processes also get in the way of seeing things differently. Automatic thinking destroys the creative process that forms the foundation of iconoclastic thinking.
The brain is fundamentally a lazy piece of meat. It doesn’t like to waste energy. This is not too surprising given that all animals must conserve energy, so the brain, like every other organ, has evolved to be as efficient as possible for what it does. There’s a myth that we only use 10 or 15 percent of our brains. Although only a fraction of the brain is active at any moment in time, the real truth is that we use all of our brains—just not all at the same time. At any instant, a battle rages between the different parts of the brain. Each piece of the brain serves its own particular set of functions, but in order to carry out these functions, it needs energy. The parts of the brain that accomplish their tasks with the least amount of energy carry the moment. The neuroscientist Gerald Edelman called this neural Darwinism, meaning that the brain has evolved, and continues to evolve, by principles of resource competition and adaptation.1 Energy is precious, so efficiency reigns above all else.
The efficiency principle has major ramifications for the visual system. It means the brain takes shortcuts whenever it can. In the last chapter, we saw how one shortcut, categorization, streamlines visual perception. In this chapter, we will take a closer look at where these categories come from and how iconoclasts break out of them. Novelty will play a key role.
Another side effect of the efficiency principle is that the brain uses circuits like the visual system for multiple purposes. Visual creativity—imagination—utilizes the same systems in the brain as vision itself. Imagination comes from the visual system. Iconoclasm goes hand in hand with imagination. Before one can muster the strength to tear down conventional thinking, one must first imagine the possibility that conventional thinking is wrong. But even this is not enough. The iconoclast goes further and imagines alternative possibilities. But imagination is a fickle process, and most iconoclasts have good days, when the ideas flow freely, and bad days, when their thinking is stale and clichéd. The good days hold nuggets of insight into the imaginative process, and in this chapter, I will examine the conditions in the brain that foster imagination and creativity. This is the story of the search for the holy grail of creativity, an almost childlike imagination and willful abandon to dream crazy thoughts.
Perhaps it is a result of the way we are educated, or perhaps it is simply a reflection of the biological maturation of our brains, but creativity seems to become more difficult for many people as they get older. The efficiency principle, coupled with the consolidation of large amounts of information and experience as we get older, means that the brain needs to categorize. And yet, imagination stems from the ability to break this categorization, to see things not for what one thinks they are, but for what they might be.
Walt Disney—The Iconoclast of Animation
One of the greatest innovators of the entertainment industry, Walt Disney, was also an iconoclast because he did something that nobody thought could be done. He changed the animated cartoon from being a movie trailer to a main feature. Disney had been interested in drawing ever since he was a child, and he became a competent, if enthusiastic, illustrator while he was stationed in France at the end of World War I. He drew sketches for the canteen menu featuring a doughboy character he had invented. He also developed a small business selling caricatures of his fellow soldiers to send back to their families.2 After returning home to Kansas City, Disney began earning money by drawing advertisements and letterheads. He was a decent illustrator, but because he was so gung ho about drawing, his reputation as a hard worker grew, and business owners liked him. In short order, the Kansas City Slide Company, which produced promotional slides shown in movie theaters, hired the nineteen-year-old to illustrate its ads.
Disney was clearly taken with the idea of combining drawing with movie technology. He wasn’t working on movies per se, but his work, even though it still consisted of single cartoons, was being projected onto a big screen. These visual images, cartoons that would normally be viewed on a piece of paper, now appeared larger than life. They had a profound effect on Disney’s visual perception. The exposure to film technology gripped Disney’s imagination. What if he could turn his cartoons into a movie? In his free time, he set up his own studio in a garage his father had built, even paying his father $5 a month in rent. With his earnings as an illustrator, Disney bought an overhead camera stand and some studio lights. He borrowed a glass negative camera and began experimenting with taking pictures of his drawings. Nobody seemed to notice at the time, but Disney’s photographic experiments changed his visual perception of his drawings, and even his perception of himself, to the point that he quickly saw himself not as an illustrator, but as an animator.
Disney did not invent animation, but he took it further than anyone thought possible. When he got into the business, animations were only used for the advertisements before the main feature. Disney became an iconoclast when he decided to make his animation the main feature. What is interesting about Disney’s story is that although he had drawn since he was a child, the accomplishments that he is best known for had their origin in a change in visual perception. Disney didn’t wake up one day suddenly thinking he was going to create animated feature-length films. The ability to imagine this possibility first required a novel visual stimulus in the form of seeing a static cartoon projected on a movie theater screen. These images changed Disney’s categorization of drawing from one of static cartoons to that of moving ones—drawings that told stories in a narrative sense.
The Evolution of Perception
Disney’s epiphany had its roots in a perceptual shift, and this change in perception opened the floodgates of his imagination. Perception and imagination are closely linked because the brain uses the same systems for both functions. You can think of imagination as nothing more than running the perceptual machinery in reverse. The reason that it is so difficult to imagine truly novel ideas has to do with how the perceptual system interprets visual signals from the eyes. Whatever limits the brain places on perception naturally limit the imagination. So let’s take a closer look at how the perceptual system works.
For over one hundred years, the predominant view of how the brain constructs a mental image has been one of progressively higher-order feature extraction. Indeed, when we follow the flow of visual information through the brain, whether it is through the high road or the low road, we see a gradual transition from local processing based on the retinal grid to global processing where objects and their locations are extracted. The traditional view of this process has been one of progressively greater integration of lower-level features in a sort of pyramid approach to vision. Experience was thought to play only a small role in this process. Recent advances in neuroscience, however, have shown just how big a role experience plays in perception.
Dale Purves, a neuroscientist at Duke University, has been the most vocal proponent of what can be termed an evolutionary approach to perception. As Purves points out, the images that strike your retina do not, by themselves, tell you with certainty what you are seeing. As we saw in the last chapter, there are at least two interpretations of the Kanizsa triangle: floating triangle or Pac-Man attack. The historical, bottom-up theory of visual perception would say that you see a triangle because of local interactions of contrast edges in the figure. Purves offers a different explanation: you see a triangle because that is the most likely explanation for what your eyes are transmitting. Purves believes that visual perception is largely a result of statistical expectations. Perception is the brain’s way of interpreting ambiguous visual signals with the most likely explanation possible, and the likelihood of these explanations is a direct result of past experience.3