Finding Neo

The Matrix and philosophy

Viewing The Matrix raises some interesting questions for further exploration. For example, what would life really be like in a Matrix-type simulation? It's a compelling thought experiment, and it's natural to place yourself in the position of the characters in the film and wonder how you would react. Would you be like Morpheus, desperately pursuing an end to the Matrix no matter the personal consequences? Like Neo, driven to destroy the Matrix and its oppression and deception of your fellow humans? Or maybe you'd be like Cypher, prioritizing ignorant pleasure over informed misery. Morpheus believes strongly in fate – that events are pre-determined, and that it's up to him to seek out the path toward the fated destruction of the Matrix by finding Neo and opening the way to his enlightenment. Neo doesn't believe in fate – doesn't want to believe his life has already been determined – and he tells the Oracle so. (Perhaps this is key to his hatred of the Matrix: it implies the same sort of determinism.) She plays with his mind a bit; to demonstrate what fate can do, she creates a situation in which Neo's actions lead to the destruction of a vase no matter what he does. As I mentioned in my review of the film, one of my favorite things about The Matrix is that it gives us so many things to think about. But some of these ideas are deceptively complex. The film's underlying philosophy probes questions of perception, human nature, fate, and free will.

So what would life be like in a Matrix-type simulation? Well, who's to say that's not already what life is? It sounds preposterous to us, because our world seems too large and too intricate to be a simulation. Our computer technology, while impressive, is nowhere near powerful enough to create such a thing. That's what Neo thought too at the beginning of the movie, isn't it? We'll come back in a moment to questions of how we prove to ourselves that we're not in a pod somewhere with a computer jacked into our brain; first, let's broaden our definition of a Matrix-like simulation. Let's ignore for a moment the war between man and machine, and set aside the dark, grimy world in which dormant human bodies are used to generate electricity. What about any system in which our actions are pre-determined, serving goals that are not our own, set by a creator we can never see because we are locked into an incompatible world view? Douglas Adams, in his brilliant The Hitchhiker's Guide to the Galaxy, presents a reality in which our planet Earth is a giant computer, designed millions of years ago by another giant computer in order to calculate, over a period of eons, the Ultimate Question of Life, the Universe, and Everything. This is a more pleasant view than that of The Matrix, but nonetheless one in which we are unwilling participants. Our actions – or rather the molecular-level processes to which we are merely slaves – are an immensely elaborate series of tests and experiments, working toward that larger calculation. As with the Matrix, we'd have no way to disprove this from our position as unwitting participants. (Just because Adams is a gifted comic novelist and his stories are purposely silly doesn't mean they have nothing to teach us. The beauty of science fiction – in any form – is its ability to present an alternative perspective and reveal to us just how much we take for granted about the universe around us.)

The Hitchhiker's Guide scenario, while also fascinating in terms of its implications for the role of humanity, is still a little epic for a starting point. Let's try a simpler parallel. Without the technology, without the unseen observer, we might still consider ourselves as removed from reality, restricted in our actions and serving a purpose that is not our own. Recall Morpheus's statement that what we think of as seeing, hearing, and feeling are simply electrical signals that are interpreted by our brains. This is true, as far as we know. So, to borrow an example from philosopher John Searle, couldn't our skull be considered a pod in which our brain is floating, unable to interact with the outside world except by indirect stimuli in the form of electrical signals from our eyes, ears, and fingers? Are we not constrained in our actions by our bodies, our mortality, and the laws of physics? These constraints have become familiar because they have been with us for so long – just as they were to Neo before he learned he could fly. And that external purpose we so slavishly serve? DNA, of course. Our every action – in fact that of every living thing on this planet since life began – can be distilled to one goal: the transportation, reproduction, and perpetuation of DNA. In a sense, even without any evil robots or elaborate computer simulations, we are in a situation very similar to Neo's. Our physical body is a disconnected avatar of our mind, serving the purposes of DNA, just as Neo's Matrix body is a disconnected avatar of his pod-enclosed mind, which serves the purposes of the machines. So the questions about control, reality, and free will that are posed in The Matrix are in fact more immediate than they seem at first glance.

Now that we can see how familiar such a scenario might be, is it such a tremendous leap to envision a situation in which the distance between our mind and our perceived reality is longer than our spinal cord? What if we really are sleeping in a pod with a computer connected to our brain? It's a question philosophers have been pondering since René Descartes. The French philosopher and mathematician asked what proof he had that his entire experience was not an elaborate deception – a scenario modern philosophers have updated into the "brain in a vat." What real-world evidence do we have to shatter this hypothesis? In fact, what if there's no brain and no vat? Could we be mere floating consciousnesses in some larger spectrum whose structure we cannot even fathom? Could we, in fact, not exist? Descartes's response was that the input we receive in our minds is inevitably subjective and subject to flawed interpretation, but it is all we have. And, since we define ourselves in terms of this input, we can be said to exist, because we recognize ourselves in terms of the only frame of reference available to us. In a vat or not, one's ability to think implies the existence of a thinker; if we doubt our existence, that implies a doubter. So, in Descartes's famous words, "I think, therefore I am." What it means to "be" is necessarily relative, so satisfying its conditions in relative terms makes sense. It becomes a fundamental question of what it means to "know" something and what it is for something to "be." More recently, philosopher Hilary Putnam approached the issue semantically. He posited that when we refer to something, the reference requires intent in order for it to be established as true (in philosophers' terms, to "obtain"). If you randomly say the name "Ed" without referring to any specific Ed, Putnam argues, you fail to refer to any Ed.
Following this reasoning, he suggests that since we cannot know that there is a pod we're floating in, we cannot intentionally refer to it – and so, within our frame of reference, it cannot be said to exist. By the same token, since we can refer to each other or to a "chair" or an "apple" in our world, these things exist even if someone outside the vat might see that our reference does not obtain. Even if a flower is made up of computer-simulated stimuli when we think it is composed of plant material, the flower continues to exist; it's just a question of its fundamental makeup.

So what does it mean to know something? A school of thought known as verificationism holds that something is true only if it can be verified with available evidence. This can be carried to the extreme that you must personally be able to verify something for it to be true for you. It sounds a little silly, but for a brain in a vat it would be quite comforting: unable to prove that he is in a vat, the brain could simply regard such a suggestion as false. However, we hesitate to take verificationism too much to heart, because it seems to imply a very lonely, subjective existence. But this is, to a degree, what we experience every day. Most of us are familiar with the question "Is my orange your purple?" meant as a puzzle for thinking about perception and communication. We can describe to another person what we see when we look at orange, but it is impossible for any such description to be objective enough to rule out a reality in which the color spectrum each of us sees is shifted. Saying that orange is a "warm" color does nothing for someone who sees only violets in the orange spectrum. To him, these colors have always been "warm." (Assuming for a moment that we are not envatted brains, I would say that to some degree my orange is your purple. It's unlikely that the entire color wheel would be inverted, but given the vast evolutionary spectrum of human morphology, I think it's quite likely that we see color in subtly different hues: my cobalt is your azure. We can think of color-blindness as an extreme degree of this visual shift, in which certain groups of colors take on a uniform appearance, making them indistinguishable.) It's simply another example of how the realities we take for granted about our surroundings rest on foundations that are inherently porous. Most of us would reject the possibility of existing in a Matrix-like simulation, because we are not accustomed to the world operating on a level that is beyond our ordinary perception.
But the fact that our solid, motionless granite block is actually mostly empty space, punctuated by tiny atoms whizzing around at enormous speeds, proves that the world operates beyond our powers of perception all the time. There are enormous tracts of the light spectrum that we cannot see: bees can see ultraviolet light, but for us visible light stops at violet, because our eyes simply don't pick up the shorter wavelengths.

While we can argue that within our reality we are not brains in a computer's vat, we can never definitively prove that this is the ultimate truth. The film clearly regards this possibility as troubling: overall, The Matrix is committed to the view that living as a brain in a vat is a bad thing. (Although, outside the Matrix, Neo can't fly or stop bullets with his mind. Unreal as it may be, life certainly seems a lot more fun for Neo inside the Matrix.) In general, the negative connotation of the envatted lifestyle hinges on the lack of freedom and control. But, on a scale as grand as the Matrix simulation, is freedom really lost? Even if the future is pre-determined, you can still make free choices. From the perspective of the machines, these choices are made for you, but in your mind they are free – you feel no outside compulsion pulling you in one direction or another. As Descartes might say, from your frame of reference they are free, so – for you – how can they be anything but free? What's interesting, as far as The Matrix is concerned, is that humans seem to prefer free will over happiness. Agent Smith indicates (we assume truthfully) that before the Matrix we see, there was another, designed to make everyone happy all the time. In deconstructing this concept, we can assume the trade-off was between free will and human interaction. If each person were plugged into a personal simulation just for him, he could do whatever he wanted and be happy all the time. Because of the logistical overhead involved, I am willing to bet that the first Matrix was a shared setup, allowing the envatted humans to interact just as they do in the second Matrix. Therefore, they must have traded in their free will: it would be impossible for sentient beings to freely express their will in a world where everyone was happy all the time. Think of it – if I want to have sex with you and you don't want to have sex with me, one of us will end up sad.
So the humans rejected a blissful utopia, because it meant surrendering their free will.

We desire control. We want to be free of restriction, and we want to decide what we do – and toward what end. We want to be free of deception, so we learn about our world through science. We want to be free of constraints, so we develop technology to extend our abilities. What if, through these pursuits, we developed a Matrix of our own? If humans held the keys, would life in a Matrix be more palatable, even though, once inside, you would never know it? It could represent a grand solution to the problem of overpopulation: envatted individuals take up less space than walking-around people, so we could store them in pods and plug them into a dozen or so parallel Matrices where space would be plentiful. I'm guessing it still feels like a lesser option because, even if you don't know what they are, there are options that have been eliminated for you. The Matrix has a fixed set of operating parameters that would rule out any long-term accomplishments. Even though you'd never be conscious of it, you'd be unable to leave any lasting effects. (On a slower schedule, this is true on Earth anyway. At some point, human or celestial events will inevitably wipe the planet clean, at which point everything starts over. Maybe this has already happened, billions of years ago.) At last we seem to be near the root of the problem: as humans, we want to make our mark. It's a surprising priority, considering how few of us really create anything lasting. Christ and Socrates are among the longest-lasting achievers, and their few thousand years are mere blinks of an eye in terms of the age of the universe. In fact, the only enduring legacy on the planet is, once again, DNA.

Interestingly, most of the control imposed upon inhabitants of the Matrix appears to be the result of their own conformist failure to challenge what they take to be natural laws. They just accept what they've always known. The electrodes that connect to Neo's brain, providing sensory input and enacting his motor output on his simulated body, do not prevent him from flying; once he believes he can fly, the program obediently provides him with the appropriate sensory input of flight. By the same token, Neo constrains himself from even greater powers. He doesn't see in all directions at once, or appear in two places simultaneously. In order to retain a coherent view of his environment, Neo only bends the rules to a certain point – one he can process without becoming bewildered.

Lastly, the movie portrays the Matrix as "unfair." We want Neo to fight the machines because we feel the humans have been unjustly treated – that as sentient beings they deserve more than a life of ignorant slavery. But are the machines not sentient as well? Once they are self-aware, pursuing individual goals and valuing their own lives, don't they count, too? (See The Second Renaissance segments of The Animatrix for further exploration of the machines' cruel treatment at the hands of man, and their remarkable "turn the other cheek" approach right up to the end.) As humans, we have rested atop the evolutionary ladder for as long as our species can remember. Most of our recent evolutionary predecessors – the Neanderthals, for example – are extinct, and our nearest living relatives, the chimpanzees, are quite distant indeed in evolutionary terms. We've lost any sense of that relationship; we have no frame of reference for such a power transfer, so we have no idea how an evolutionary superior coexists with the next level down. In contrast to the example set by humanity, the machines in The Matrix are gracious and thoughtful: the cages they build for humans offer far more freedom than the ones we build for chimps. I think the humans in the story are simply frustrated by the irony that they made the machines in the first place. Or perhaps it's DNA, finally not the dominant force on the planet, fighting desperately to regain control. In Contact, Ellie Arroway (Jodie Foster) suggests that for some extraterrestrial intelligence to contact humans in the interest of killing them would be tantamount to our going out of our way to destroy an anthill in Africa. David Drumlin (Tom Skerritt) replies, "Yes, but if we did destroy that anthill in Africa, how bad would we feel?" Unfair as it may seem, it is simply the time-honored evolutionary inevitability of becoming that anthill.

In developing a few of the many ideas that The Matrix sends galloping around my head, I drew inspiration from the collection of compelling and highly readable articles posted in the Philosophy section of the film's official website, whatisthematrix.com.