
A Controlled Hallucination: Part 3—Behind the Curtain of Cognition

Thousands of years before anyone understood the brain and the mechanics of perception, the Greek philosopher Plato came up with his famous Allegory of the Cave to represent how humans think about and perceive reality and truth. In the allegory, prisoners spend their whole lives chained up in a cave, facing a wall with a fire at their backs. Objects, animals, and people pass in front of the fire behind them and cast shadows on the wall of the cave. The prisoners see these shadows and give each form a different name. Eventually, one prisoner is released from the cave, but as he leaves, he finds it hard to adjust his eyes to “reality.” Once he has finally adjusted to the light of day, he tries to return to the cave, to free his fellow prisoners and tell them about the real world. But the prisoners don’t understand and resist him. The shadows are reality as far as they’re concerned.

Plato’s Allegory of the Cave is often read as a commentary on knowledge versus ignorance, but it is also an apt representation of human perception. As we’ve seen over the past few weeks, what we perceive and what we remember aren’t necessarily faithful representations of reality. At best, they are shadows on the wall—vague recreations of the world that we label and seek to understand. Unlike the prisoners, we cannot escape the cave . . . not fully. We are trapped inside our own brains, watching the shadows dance on the walls. Over the years, we have developed tricks and shortcuts that help us better translate and label what we perceive, but these too can be flawed. And our flawed cognition is what gives the shadows form. To really understand how reality relates to perception, we have to understand where cognition comes from and what it gets wrong.

It may seem obvious, but to properly perceive and understand the world, your brain has to think about it. Cognition—the raw thought-stuff of the brain—is a mainstay of human consciousness. Descartes said it best: “I think, therefore I am.” But what are thoughts? And how does cognition impact our perception of reality? To build an understanding of anything in the real world, we first need a language to represent it. These thoughts don’t necessarily need to be encoded in a literal language—some people report thinking primarily in images and others “think in thoughts,” as Hank Green so elegantly put it. Some people with aphasia—damage to the language centers of the brain that impairs the ability to produce or comprehend speech—continue to demonstrate perfectly coherent cognition. The languages our brains speak have a profound influence on the way we construct our reality. For example, there is evidence that people who speak Russian—a language that has different words for dark blue and light blue—tend to have a more nuanced perception of the color blue than people whose language has only one classification for blue. This phenomenon arises because our brains use the language of thought to translate the raw complexity of reality into simplified groupings of familiar concepts.

Picture this: you’re out walking in the park, and from ahead of you a small, furry animal comes racing towards you on four legs and starts rubbing against you and barking. It’s adorable, and this is without a doubt the best day of your life. Except wait—how did you know it was a dog? I never told you that the animal was a dog. You probably guessed that it was a dog based on context clues like “furry,” “four legs,” and “barking.” But how did you come to associate those words with “dog”? This thought experiment actually bears some resemblance to how we identify things in the real world. You might very well remember every single dog you’ve ever met, but you don’t use all that information to identify an approaching fluffy beast. It would take way too much computational power to compare and contrast this new dog with every dog you’ve ever seen. Instead, your brain uses those memories to build a prototype—a mental representation of the category “dog.” Our brains do this for most of the things we regularly encounter, breaking them up into discrete categories of concepts and constructing prototypical representations of each. This is incredibly efficient but also prone to error and bias. When I say the word “dog,” each of you conjures a different image in your mind. If you have a dog that you’re attached to, your dog prototype might resemble them. If you’re afraid of dogs because one bit you when you were young, your prototype dog might be an intimidating, slobbering beast. And this bias probably affects how you perceive and react to dogs in the real world, reinforcing your negative association. Already, we can see how simple realities can be bent by cognition.
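To see why prototypes are so efficient, here is a toy sketch—emphatically not a model of the brain—of prototype-based categorization. All the feature vectors, category names, and numbers below are made up for illustration: each “memory” is scored on three invented dimensions (size, fluffiness, bark-iness), and a newcomer is classified by comparing it against one prototype per category rather than against every stored memory.

```python
# Toy illustration of prototype-based categorization.
# Hypothetical features: [size, fluffiness, bark-iness], each 0..1.

def prototype(examples):
    """Average the remembered examples into one representative point."""
    n = len(examples)
    return [sum(feature) / n for feature in zip(*examples)]

def distance(a, b):
    """Squared distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Invented "memories" of individual animals.
dog_memories = [[0.6, 0.8, 0.9], [0.5, 0.7, 1.0], [0.7, 0.9, 0.8]]
cat_memories = [[0.3, 0.9, 0.0], [0.2, 0.8, 0.1], [0.4, 0.7, 0.0]]

# Build one prototype per category -- done once, not at every encounter.
prototypes = {"dog": prototype(dog_memories), "cat": prototype(cat_memories)}

# A small, furry, four-legged, barking animal approaches...
newcomer = [0.55, 0.85, 0.95]

# Classify by comparing against 2 prototypes instead of all 6 memories.
label = min(prototypes, key=lambda k: distance(newcomer, prototypes[k]))
print(label)  # -> dog
```

The efficiency claim in the paragraph above falls out of the structure: with many categories and many memories per category, matching against prototypes costs one comparison per category, while exemplar-by-exemplar matching costs one comparison per memory. The bias falls out too—shift the stored memories (say, one traumatic encounter with a large, snarling dog) and the prototype shifts with them.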

Your memories and experiences with dogs influence the type of schema your brain associates the “dog” concept with. Schemas are groupings of concepts that we associate with each other. If you’re afraid of dogs, your “dog” schema may include concepts like “dangerous,” “feral,” and “hard to control.” When you see something your brain identifies as a dog, your cognition runs the “dog schema” script, which helps it determine how to react—with terror and contempt. Rationally, you probably recognize that not all dogs are dangerous, but your perception might not get the memo. Over time, you may be able to retrain your brain to associate dogs with a new schema, but this can be very difficult to accomplish. For such an elegant and adaptable machine, the brain can be remarkably stubborn.

One person’s cognition may perceive a barking dog like this as friendly while another person’s cognition perceives it as dangerous.

Our cognition helps us to identify and categorize what we perceive, which can be an invaluable tool for solving problems and making decisions. But the system of oversimplification and grouping that cognition relies on can cling to irrational biases and errors of perception. When deciding how dangerous the dog is, your brain might latch onto the first thing you notice about it—it’s running toward you and barking—even if that’s not the most relevant piece of information. The anchoring bias refers to your mind’s tendency to place too much importance on the first piece of information you learn about a subject. It turns out first impressions do matter—at least to your brain.

Even if the dog wasn’t barking, your brain would latch onto the pieces of information that confirm your existing “dog schema.” The dog is still running towards you. They seem unpredictable. Is that a smile or a snarl? Your brain’s tendency to seek out information that confirms your existing beliefs is known as confirmation bias. Confirmation bias is part of what can make learning science so difficult: science can seem to contradict many of our instinctual beliefs about the world. Luckily for our stubborn brains, it is now easier than ever to find information that confirms our biases. As long as there is one piece of information to corroborate a flawed belief, it can continue to proliferate in our minds unless we make the effort to challenge it.

It would be pretty hard to be afraid of this dog—will that change your schema?

But what if the dog isn’t barking or running towards you? What if it’s a docile pug, complete with the squashed face and slobbery tongue? Well, that sort of dog doesn’t fit very well with your existing “dog schema,” so . . . what now? You might actually enjoy your time with the pug and make a new canine friend. Does that make you a dog person now? But you’re afraid of dogs! Your brain isn’t equipped to hold two conflicting schemas, so rather than adapting your long-held belief that dogs are evil, you resolve the conflict by rationalizing that this dog is the exception to the rule. Dogs are still evil, but this one is okay. This is cognitive dissonance at work: the mental discomfort of holding conflicting beliefs, which your cognition often resolves by rejecting or rationalizing away the new information. There are countless other biases and errors our brains can fall prey to—far too many for me to list here. I encourage you to take a look at the Decision Lab’s index of biases to learn more about all the shortcuts cognition takes and how these shortcuts impact our perception of the world.

So what is the truth? If we can’t even agree on whether an approaching animal is a fluffy, friendly dog or an intimidating, slobbering beast, how do we tell which is reality? Is there even such a thing as a reality separate from our perception of it? I’ll leave these questions to the philosophers, who are much better at answering questions that don’t have answers. As far as science is concerned, reality is the world we perceive when we peel back the layers of bias and pursue objective, scientific inquiry. And we still get different results sometimes. We still make mistakes. But we keep trying to read the shadows on the walls anyway.

But more on that next week! For now, check out last month’s series on the architecture of the nervous system and brain. Science You Can Bring Home To Mom will be back in two weeks with a new series on mental health. Comment on this post or email me at contact@anyonecanscience.com to let me know what you think about this week’s blog post and tell me what sorts of topics you want me to cover in the future. And subscribe below for weekly science posts sent straight to your email!
