“And There It Stood”: A Short Horror

On an eccentric November night, roughly one hundred years ago, at a time when the strange seemed rather charming, a boy child was born to an unlikely couple. The year was 1910. The Titanic had not yet sunk (but it was about to). And the First World War had not yet begun.

The boy’s parents were both engineers at Cambridge.

His mother sank into trepidation the moment she first beheld his eyes. All she could see were eyes—big, looming eyes. Eyes that could swallow an entire horizon. Eyes that were like two overgrown moons floating effortlessly in a fluorescent night sky. She fell immediately in love with the boy.

They named him Wesley. But he preferred Wezel. He was a precocious child who spent most of his days studying his immediate surroundings.

At the age of seven he thought himself to be Vincent van Gogh’s “spirit-child.”

So he painted The Starry Night. An art historian came to see it. He walked around the room in a most elegant manner, now pacing up, now pacing down, the entirety of the room. “Humm—pff!” he would exclaim as he’d pivot on his heels. “It is peculiarly unique relative to other replicas of the work in that the brush strokes are exact, measured with modest reserve, and pedantically calculated.”

The little Wezel loved perfection, and his artwork became a Cambridge sensation. It was rumored that during the First World War, when Cambridge’s own art department housed Van Gogh’s painting at the Fitzwilliam Museum, it was actually Wezel’s artwork that was on display—for the museum curators were “afraid of a loss of the original artwork during a potential air raid.” And so, in a matter of a mere eight years—by now it was 1918—Wezel’s fame grew beyond the confines of the single-bedroom apartment that housed the two professors and their big-eyed child.

During his years at a local primary school, Wezel made two friends: one was the teacher and the other a kitchen rat. The teacher shared her lunchtime cookies with him, and he shared his portion with the kitchen rat.

The students didn’t like Wezel for several reasons. One, he looked like a disheveled old soul—his entire physiognomy reduced to his eyes and his “death-glare.” Two, he could not understand ordinary human language. He struggled to talk the baby-talk of his peers, and so, in a most necessary manner, engaged his teachers in dialogue regarding math, logic, and a myriad of oil-on-canvas painting techniques.

His third friend need not be mentioned here, since, if I recall correctly, she never returned the favor. Her name was Katherine, and she avoided Wezel’s impulsive romantic approaches. He once tried to share the teacher’s half-cookie with her but she refused. So he went to the kitchen and gave it to the rat instead. Such was the result of his first dreamy endeavor.

Because the students feared him, Wezel had to reallocate his energy-expenditures in a more fitting manner. By the summer of 1920, Wezel—then being a decade old—locked himself in his parents’ attic (they had moved a few blocks away, into a small home) and vowed to never reappear unless he had produced a masterpiece. His parents fed him through a tiny crack in the wall, sustaining him for six weeks and three days with crackers, chocolate, and prenatal multivitamins. Every third day he requested a large, boiling pot of coffee for “mental energy.” His parents complied. Staying true to his word, Wezel emerged—six weeks and three days later—with the art in his hand, a dark and forlorn figure bearing the anguish of a tortured genius.

His parents rushed to greet their wild-eyed child. His mother fell to the ground kissing his dimpled cheeks and swearing that she would never let him do this to her again. His father stood by silently watching the strange emotions take over his mostly rational wife.

“What did you create this time, Wes?” his mother asked tenderly.

The boy looked into her eyes without blinking.

“Is he horrified by us?” his mother thought to herself. “Why, surely, he knows we love him dearly!”

Wezel walked past his parents, as if in a daze, with an old cloth-sheet covering his hidden masterpiece. During dinner, after he had broken the silence, and having alleviated his mother’s fears, Wezel requested the presence of Sydney Cockerell, who was, at the time, the director of the Fitzwilliam.

The following day, with an eye loupe in hand, Director Cockerell came to see Wezel. He walked up and down the room just like the last art historian.

“Aren’t all these art historian creatures the same?” Wezel silently asked himself. “They walk in the same manner; no two are different!”

“Yes, yes—indeed! Yes! Hmm. Wow. Yes, indeed!” the Director kept mumbling to himself. “Yes, very particular. Almost real. Yes, yes! Real. As real as rain in London!”

The piece measured one hundred sixty centimeters by two hundred. It was a large oil-on-canvas painting depicting Wezel’s last place of residence: the attic. It was an accurate depiction of reality. So accurate, in fact, that Cockerell spent the following days speaking about it incessantly.

“You should have seen it. The attic. Oh, god. How authentic it was! The sheer splendor of the piece,” he told everyone he met. “I was transported there—and have not left since!”

The piece was purchased by the museum for millions of pounds, allowing Wezel to drop out of primary school, pay for his parents’ first honeymoon vacation, and resume all artistic activity immediately and forever.

Within weeks, word got out that the great philosopher Ludwig Wittgenstein was seeking out the company of none other than our very own Wezel. At the time, Wezel was unfamiliar with Wittgenstein’s thinking. He had, however, gone through Russell and Whitehead’s Principia Mathematica, a book he criticized for “fatal logical errors in its presentation of the foundation of mathematics.” Wittgenstein, having heard of and having seen Wezel’s work, became all the more interested in meeting the decade-old human being who had criticized—quite accurately, in his own opinion—the Principia.

As fate would have it, on an August evening, Wezel met with Wittgenstein. To this day, nobody knows the exact contents of the conversation, but from what I could gather, it seems that Wezel encouraged Wittgenstein to write his Tractatus Logico-Philosophicus. Moreover, it was rumored that Wezel wrote parts of it. For example, aphorism 2.12 reads: “The picture is a model of reality.”[1] The “picture” Wittgenstein had in mind—or, if Wezel wrote this, then “the picture Wezel had in his own mind”—was none other than The Attic (as Wezel’s masterpiece was later called). Per Wezel’s own account, the first remark Wittgenstein made upon meeting him was: “But your eyes! How large must the world appear to them!” To which Wezel replied, “I can see the world accurately.”

In 1922, the year Wezel turned twelve, Wittgenstein published his work. It became a philosophical sensation overnight. Wittgenstein became famous, while Wezel became a relic of the past.

In 1931, during a meeting of the Vienna Circle, at which Kurt Gödel was expounding his recently published ideas on the “incompleteness theorems,” Wezel met Wittgenstein yet again. The years had done nothing but shed their blessings on Wittgenstein; he was cheerful, optimistic, and open to new ideas. Upon seeing Wezel, he hugged the now-grown young lad.

“How is your work coming along?” he asked after the discussions were over.

“I became a professor of philosophy, Ludwig,” Wezel replied nostalgically. “I gave up art when I met you.”

“No, you cannot say that. I would not encourage the study of philosophy,” Wittgenstein replied tersely and with peculiar force. “You must resume your art. You have a talent.”

“You don’t understand, Ludwig,” Wezel said in a hushed voice. “I’m now depicting reality with language—just as you suggested!”

“Why language?!” Wittgenstein moaned out loud. “The world is going to suffer much having lost you.”

With that, Wittgenstein angrily walked out, leaving the Circle. He never spoke with Wezel again.

In 1932, Wezel published an article titled “The Impossibility of Atheism.” In it he argued what he had argued ever since he met Wittgenstein: language is a depiction of objective reality. “In our minds we create a pictorial representation of the world. This picture of reality corresponds with the real world. There is a direct relationship between the picture in our minds and the world around us. Words refer to things in the world. An apple is an apple because there is the word ‘apple’ and its objective referent: an apple in the real world. Unicorns imagined in our mind are not an accurate picture of reality because there are no unicorns in the world. For words to have meaning, they must be grounded in reality.”

That was the beginning of the paper. Professor Wezel argued that Wittgenstein was right in his Tractatus: only that which exists in the real world should have words in our language. Since God did not exist in our world, there was no use having a mental image of God. Where did this image come from? If not from the world, then where from?

The second part of the paper proved the impossibility of atheism. “Since God is thought to be a metaphysical Being existing outside of the post-Einsteinian space-time continuum, it is, in fact, impossible to speak about God’s existence or non-existence thereof. God, as understood by some authors of the Bible, for example, does not exist in this world; He is above the world, above the natural order of things. Since God is outside of the world, being eternal and non-objective, language cannot be used either against God or for God: ‘whereof one cannot speak, thereof one must be silent.’”[2]

The paper caused a sensation amongst both scientists and theologians. The theologians were angered at the fact that Wezel attacked positive statements about God, while relishing his attacks on the atheists for their positive claims regarding God’s non-existence. The atheists, on the other hand, while happy that Wezel supported their thesis that religion was meaningless, were angered by the fact that he debunked the possibility of atheism. And so neither side was happy: they were both equally miserable. Wezel, for his part, rejoiced tremendously that he could irritate people.

In 1933, a young professor by the name of Dolly, specializing in a secret field pioneered by her called micro-tectonic astro-physiology, heard Wezel’s paper being read at some academic society of sorts. In a matter of hours she arranged for a meeting with that “most dazzling of minds.” Wezel proposed to Dolly the following day; they were married the following weekend.

 

And this is where our story truly begins.

You see, Wezel’s eccentric gaze frightened many people out of many nights of peaceful rest. Some even avoided walking past him on their way to Cambridge just to dodge his “piercing, eerie stare.” His wife, however, a simple beauty of extraordinary mental capacities, was blind. And this was, perhaps, the only reason she never left Wezel: she never had a chance to be frightened by him.

On their first night together, Wezel awakened at three in the morning to find his wife tranquilly sleeping. He had, for many years, struggled with imagined demons. Every time he closed his voluptuous eyes, he would immediately begin sensing the presence of toxic evil. Not only did he feel the company of the demonic, he also imagined it. Demons of various shapes and sizes resided in his mind, swimming out from their lagoons every time his eyelids shut.

On this late-summer night, in early September, it was no different. Wezel kept imagining the demonic. He would blink, only to be bolted back awake, wide-eyed and terrified.

He praised God that his wife was blind. “If only she knew the demons I struggle with…and what I’m about to do…” he thought to himself.

He reached over and quietly opened his bedside drawer. He fumbled around for the duct tape.

Having found it, he gently brought it to his chest. The roll of tape felt cool against his nervously hot skin. His sore fingers dug into the worn edges, seeking out a place where he could grip the tape.

He counted to ten under his breath.

And slowly made noiseless progress. “Good,” he muttered in the most silent of manners. “At least she can’t hear me.”

The project continued. He slowly removed two pieces of tape measuring two centimeters apiece.

Without disturbing his wife, he placed a single piece on his eyebrow, taping his eyelid to it to keep his eye from closing. He did the same with the other eye.

In a matter of minutes, he was fast asleep.

 

The following night, around two thirty in the morning, Wezel awakened to the sound of heavy breathing. Once he trained his ears to listen—to really listen—he heard nothing but silence. The breathing was all an illusion. What he thought was not real; it did not correspond with reality.

He closed his eyes again—and rested.

Only moments later, he imagined a beast of tremendous terror standing before him. He opened his eyes.

There was nothing there.

“Professor Wezel,” he reassured himself professionally in the most cool and academic of ways. “Your language, your imagination does not correspond to reality. There are no demons—not even gods.”

He convinced himself of this—and fell back asleep.

 

After a few weeks of living with his wife, Wezel began to realize the uncertainty of reality. His wife was, according to him, a schizophrenic. One minute she wanted Italian for dinner; the next minute, she wanted French. One second she felt cold next to him; the next second, she felt too hot. He would close his eyes, imagine her wanting Italian food—only to open them and have her state something entirely different.

And it drove him mad. She made no sense to him.

One night, before bed, he imagined they would make love. It was a Wednesday, and they always had sex on Wednesdays. He closed his eyes and imagined his wife’s naked body. Then he opened them.

She was still dressed in her nightgown.

“Maybe we will have sex next Wednesday,” he said to himself. “Maybe she just forgot. It is, after all, November—and people don’t make any sense during the holidays.”

 

The following Wednesday, Wezel, by means of induction, decided that his wife would not have sex with him that night either. He closed his eyes and imagined that, when he’d open them, she’d be fully dressed.

And so he opened them.

She was naked.

 

For the rest of the week, Wezel slept relatively peacefully. He asked the leading sexologist at Cambridge what the reason was for his unusual calm and discovered that sex was, indeed, the reason. Wezel made note of this in his journals.

 

On a stormy night in December, just before Christmas, Wezel’s unrest returned. For the past few weeks, he had been lecturing his students on the certainty of reality. And, having come clean to his academic peers, he admitted he was not entirely certain of the certainty that he so expounded. “It is entirely possible that I know nothing,” he once said out loud to them in exasperation.

His demons were haunting him—changing him as a person. They began speaking to him, telling him to kill his wife. He found her to be too unpredictable. And so, if the demons were on the side of certainty, then surely they were right. She was, after all, a very uncertain creature.

He closed his eyes and imagined the demonic persuading him.

He opened his eyes and there was nothing there.

He counted to ten while taking a deep breath. “This is all just a bad dream—an inaccurate picture of reality,” he restlessly convinced himself.

He thought he heard a voice—it was directly addressing him.

He opened his eyes.

And there it stood.

 

Written by: Moses Y. Mikheyev

 

 

NOTE: This is a work of fiction. Wittgenstein obviously existed. I can assure you: he never met Professor Wezel. 

 

 

FOOTNOTES:

[1] Ludwig Wittgenstein, Tractatus Logico-Philosophicus, trans. C. K. Ogden (Mineola: Dover Publications, 1999), 33.

[2] Ibid., 108.

In Defense of Materialism: Philosophy of Language and Employing Material Things as Symbols

Materialism has been criticized on many grounds that I will not cover here. In fact, I have, in various ways, been strongly opposed to materialism. (Read my essay Materialism; Or, The Human in Decay as a case in point.) That is, until now. In this paper, I will attempt to articulate a sympathetic approach towards materialism. More specifically, I will argue that materialism, when seen through the perspective of the philosophy of language, is actually a type of “language” used to communicate certain things (like wealth, power, prestige, responsibility, success, etc.). In fact, “the pursuit and possession of grand material objects” (my modest, working definition of “materialism” in this paper) is beneficial to a human being attempting to communicate and convey certain values and/or facts. First, I will argue that the philosophy of language sheds light on how we humans employ “communication” (and it is not simply reduced to “language” and “writing”). Second, I will argue that materialism allows humans to communicate certain messages promptly (without resorting to “proving yourself”). Third, I will argue that this is actually a good thing, that materialism, as I see it, is beneficial to finite human beings.

Paul Ricoeur, a phenomenologist interested in language, once said, “The word is my work; the word is my kingdom.”[1] That is, within our words, within our language, that is where all life and communicating occurs—it is our “kingdom.” Ricoeur defined language as using “symbols,” symbols that functioned as pointers to objective things in reality, myth, etc. Such symbols had multiple meanings, and, hence, could confuse interpreters. Ultimately, all acts of reading and understanding texts—which used symbols—were inevitably going to end up being interpretations. However, it should be noted that symbols in and of themselves need not be inherently reduced to language/writing. A symbol could be a national flag or, as in my case, a luxury vehicle. All such “symbols” communicate and stand in for something else. (A luxury vehicle, for example, may communicate to those around you that you are a successful individual who is responsible, who will provide for a future family, etc., etc.) The point here is the following: as we try to communicate things to those around us, we use symbols all the time. In most cases, symbols are words or phrases. I say, “I love you,” and that means that I will take you on dates, buy you dinner, send you flowers on Thursdays, be concerned about your wellbeing, etc., etc. The phrase, “I love you,” is a stand-in for something else. In and of itself it means…nothing. (Of course, this, too, could be debated.) I employ the phrase in such a way that it points to something outside it; it points to actions I will take on behalf of my beloved. The phrase, in this case, is a “symbol.”

Ricoeur writes: “I define symbol as: any structure of signification in which a direct, primary, literal meaning designates, in addition, another meaning which is indirect, secondary, and figurative and which can be apprehended only through the first.” Moreover, he goes on to define the process of “interpretation.” “Interpretation, we will say, is the work of thought which consists in deciphering the hidden meaning in the apparent meaning, in unfolding the levels of meaning implied in the literal meaning.”[2] Ultimately, he writes, “[T]here is interpretation wherever there is multiple meaning…”[3] Since symbols are almost always open to being interpreted in a plurality of ways—and, thus, of being found guilty of “double meaning”—it is the task of the interpreter to discover what the meaning is.

Going back to our luxury vehicle example, the “symbol” (i.e., the vehicle) may also be interpreted to mean, “I am a thorough-going materialist only interested in material things. I care not for relationships and people. Give me a dollar, and I’ll sell you my soul.” Of course, this is one way of reading materialism. It is one way of interpreting the symbol.

But notice what I am saying here, even as I speak the critique: it is merely one way of interpretation. (“One” way implies there are more ways.) It is possible to behold a symbol (i.e., a luxury vehicle) and to interpret it in a different way, another way. It is possible to see its owner as a good person. It is possible to see its owner as being a thoughtful person who goes to work on time, is punctual, cares about his family and tries to provide for them. Notice, then, that there is nothing in this interpretation of the symbol that is utterly negative and/or derogatory. In fact, I would like to be such a person. And maybe you’d like to meet such a person.

The next point I want to make has to do with prompt communication. If I am attempting to—let us theorize here—meet a girl, in what ways should I go about doing it? First, I am a finite human being, bound to space-time. I cannot be everywhere at once, meeting millions of girls in the span of one minute. Being thus bound, I have to make the most of my time. Second, and by implication, if I want to make the most of my time, I have to communicate things clearly and promptly. I could, in theory, be an “anti-materialist,” and resort to explaining to each and every girl I meet that I am successful, that I will take care of her, that I am a responsible human being, etc., etc. That’s one way of doing it. It’s a very time-consuming way, but it is certainly an option. (If you have the time for it, go ahead and do it, I say!) In this case, you would essentially have to “prove” to every girl you meet all of the above. Or, you could do things differently.

It is possible to use symbols that communicate more rather than less. A picture says a thousand words. Driving up in a luxury vehicle conveys more than several hours of conversation over coffee. (And what makes you think she’ll believe you when all you’re doing is feeding her “words”?) That is, the symbol (i.e., the vehicle) conveys more than a million words spoken in defense of your alleged success.

Finally, as I’ve already hinted, materialism—as I have defined it here—seems to be something that is possibly beneficial to human beings. It allows us to communicate things to those around us. It allows us to do more with less. It also allows us to spend our coffee dates talking about things like love and romance, loves and hates, rather than trying to prove to the Other that we are responsible, successful, wealthy, etc., etc. In other words, I stand by my word: buy yourself that Lamborghini and enjoy your finite life!

 

Written by: Moses Y. Mikheyev

 

Dedicated to: Petr Bulkhak—for being a good conversationalist regarding this particular subject.

 

FOOTNOTES:

[1] Paul Ricoeur, “La Parole est mon royaume,” Esprit, XXIII (February, 1955), p. 192.

[2] Paul Ricoeur, “Existence and Hermeneutics,” in The Philosophy of Paul Ricoeur: An Anthology of His Work, eds. Charles E. Reagan and David Stewart (Boston: Beacon Press, 1978), 98. Italics original for both citations.

[3] Ibid.


Materialism; Or, The Human in Decay

Material things function as extensions of ourselves, extensions that serve as substitutes for the living and breathing beings that we cannot have. A romantic in despair is an impulsive consumer. When the life which we lead forces us to be “at peace” with the unpredictable, we, naturally, seek something stable, something material that will numb our senses. For example, if you were involved romantically with someone, and out of the blue he or she decided to call it quits on the relationship, this instability—this unpredictable and uncontrollable chaos—would terrify you, the romantic. Maybe it was all in my head the entire time? Maybe love was never there to begin with? Maybe people are just too unpredictable? Perhaps, this is why my own world is so prone to falling apart…

The unstable characters which surround us, characters that are no different than us, create within us the desire—nay, the demanding need!—to find something eternal, something stable, something that would last “forever.” And what could be the polar opposite of the unstable and the human? Material things of course!

It is in material things that we find a kind of permanence. No, I’m not suggesting that material things are permanent (for one knows that steel rusts and wood burns); rather, I am suggesting that material things are permanent enough for us to feel as if they are, indeed, stable. This stability, this permanence that we desire begins to surround us in our chaotic world as our material things increase in number. The more material things we buy, the more stable the environment around us feels.

Take me, for example (allow me to function as a sort of “martyr” for this piece!). In losing a relationship—or should I rather state “since having lost a relationship”?—I have done nothing but consume. And it was this nonsensical consumption that prompted my interest in examining, philosophically, what exactly it was that was causing me to consume.

I purchased a Fossil wallet, throwing my old, black leather one away. I use my wallets all the time, and the fact that I changed the wallet gave me a sense of “Well, you’ve entered a different stage in life; now you are a different person. Cast off your worries! The things that occurred in the past are no more!” Maybe that’s what I had been looking for all this time: I wanted to feel as if the past was the past, the relationship was over, and it was time for me to move on. My old wallet would have hindered that process. I am no longer that man who had used that wallet!

I purchased a new car, trading in my hybrid for a convertible Lexus. And why the hell not? You only live once, they say. I drive a lot. Maybe this, too, was a way for me to evade reliving the past—and I wanted nothing to do with it.

I also went shopping. (You might as well change out your wardrobe if you are planning on reinventing yourself, correct?) Why wear the same clothes that you used to wear? That person that wore them, that was a different he. That was a he that belonged to a she; a he that lived a life that is now completely foreign to you. You’ve left all of that behind too.

I then dyed my hair. Why look the same when you are no longer that you?

And so, in a mere few weeks, a distance had been created between the present you and the you who had lived in the past. Lines were drawn, phone numbers deleted, photos erased—an entire epoch in your life brought to a slow and annihilating death. And that was that.

You left it all behind.

You walk around feeling like a million bucks; you laugh in the most evolved of manners—for, by all means, you have changed.

But that is all change controlled by you. The entire time, you had been in charge. The dying of the hair, the purchasing of a new vehicle, the sheer mind-numbness of materialism—that was all “controlled demolition.” You were in charge. It gave you a sense of power, a sense of control. And it had been control that you had wanted the entire time. You wanted to feel like life made sense. People made sense.

But they didn’t. And they don’t.

Out of the tumultuous dizziness of heartbreaks and sorrows, out of the nauseating suppression of the human—there, in the dampest and darkest of places, out of the utter decay of the human, there materialism rears its monstrous head.

But materialism is not some sort of dream-state. Like anything else, it has its problems. Surrounded by the nonsensical possessions, one sinks into a despair far worse than the original wound—for the plastics and the steels of this world cannot quench the fires of a burning love, a dying-yet-resilient passion.

And so all one can do is return to the initial despair, to the initial wounds, the initial life-beginnings of a romance that would not be. “As a dog returns to its vomit…” (Proverbs 26:11). Perhaps it is here, in that most remedial of places, that one discovers a single truth: tgtyelijtlablir.[1]

Or maybe not.

 

 

As I leave the Mall of Georgia, sporting a new jacket, the sound of a folk artist playing guitar and singing some melancholic tunes distracts me. I approach him, toss him some money, and sit next to him. I ask him to “play me something romantic.” He complies with my wish.

After playing three songs for me, I finally leave a lighter and happier soul.

 

Written by: Moses Y. Mikheyev

 

[1] For those who do not know, this is an acronym for “the greatest thing you’ll ever learn is just to love and be loved in return.” It is a lyric written by Eden Ahbez for Nat King Cole’s song “Nature Boy.”

A History of Virginity: Purity Culture’s Ideals, Feminist Critiques, and a Philosophy of History; Or, How in the Hell Did We Go From Virginity to Hymens to Purity Balls?

 

It’s a Saturday night somewhere. A warm summer breeze caresses a chiseled male jaw. The middle-aged man with grey streaks splattered in rusty patches on his head walks hand-in-hand with a younger lady. In fact, she’s drop-dead gorgeous, dressed to kill, and much younger than he. They make their way to the entrance. It’s a late night and they’re going to a party. No, it’s more like a fancy-pants dance. The speakers are undoubtedly playing Taylor Swift’s “Love Story.” And, to be sure, this is about love: it’s about true love. Banners above the entrance read, like those awful planes-in-the-sky carrying messages, the following: True Love Waits. They enter the building, grab some drinks, and begin dancing. They are dancing away in celebration of the young lady. She’s doing something special: she’s keeping her virginity. And the man dancing away with her is her father. How sensible and how sweet.

Such dances are real. They happen in small towns and big towns just like yours. The evangelical Christians like to call them “purity balls.” It’s like the whole Cinderella story except it goes like this: “Once upon a time there lived an intact hymen. And Cinderella promised to keep it intact. And so one night…” But, of course, nobody really begins the fairytale of Cinderella like that. Instead, we use cute, sanitized words like “purity” and “virginity.”

While such balls may actually be fun—and maybe a little creepy?—they are intimately connected to their culture. The concept of virginity has a history; it has a past, a present, and, almost certainly, a future. It’s a living tradition. Purity culture, an offshoot of the conservative Christian evangelical movement, has some rather black-and-white lines drawn when it comes to defining virginity. In other words, they seem to know virginity’s history and its relationship to the present moment. On the other hand, you also have the feminists criticizing this purity culture stuff. Feminists such as Jessica Valenti have a lot of troubling words to say when it comes to the concept of virginity. To be sure, they’ve even written entire books on the subject. And—oh boy!—believe it or not, does virginity have a past! It’s as creepy as Frankenstein’s bastard child; as beautiful as the Mona Lisa; and as rawly detailed as Andreas Vesalius’ De humani corporis fabrica, an early textbook on human anatomy.

In this paper, I will trace ancient and modern perceptions of virginity. I will then examine how both purity culture and feminism view the concept of virginity, especially paying close attention to the way history intersects with modern culture, and how such a coalescence may have helped each of them shape their unique views on the subject of virginity. I will then examine virginity’s history, as it is treated by purity culture and feminism, from Paul Ricoeur’s philosophy of history.

One of the earliest texts that we have specifically dealing with virginity comes from none other than the good old Bible. In Deuteronomy 22:28-29, we encounter the following passage:

“If a man meets a virgin who is not engaged, and seizes her and lies with her, and they are caught in the act, the man who lay with her shall give fifty shekels of silver to the young woman’s father, and she shall become his wife. Because he violated her he shall not be permitted to divorce her as long as he lives.”[1]

A virgin, once de-virginized by a male, is told to immediately marry her rapist. This passage makes perfect sense in an age when birth control and abortion did not exist. The virgin may have gotten pregnant from the rape, given birth to a child, and needed support raising the child. And so, as punishment for the crime, and as a way to serve the rape victim some justice, the Bible prescribes marriage certificates when a female victim is diagnosed with rape. And, as far as we know, this sort of legislation may have deterred males from raping virgins. If you rape her, you marry her. And, as if to settle the case in eternity, the male is not allowed to ever divorce his rape-victim-turned-wife. In other words, here’s to a once-upon-a-time Cinderella story told in epic biblical proportions. Cheers.

The Bible doesn’t stop there. Apparently, the ancients even knew how to verify that a human being—specifically a female—was a virgin. Enter the “magic bed sheet.”

“Suppose a man marries a woman, but after going in to her, he dislikes her and makes up charges against her, slandering her by saying, “I married this woman; but when I lay with her, I did not find evidence of her virginity.” The father of the young woman and her mother shall then submit the evidence of the young woman’s virginity to the elders of the city at the gate” (Deut. 22:13-15 NRSV).

The “evidence” that the parents of the bride would submit would be, it is theorized, the bed sheets from the wedding night.[2] Blood and the loss of virginity apparently go hand-in-hand, according to ancient Jewish customs. However you look at it, the ancient Jews were certainly concerned with the concept of virginity. It was a very important subject; hence its mention in the Bible. The concept of virginity, at least as it stands in Deuteronomy, is not necessarily about notions of purity or morality. It is, rather, about property and economics. The commandments concerning female virginity “see[k] to protect the honor of the father and make the seduction or slander of an Israelite virgin an expensive proposition.”[3] To lose one’s virginity in ancient Israel was to lose one’s socioeconomic standing. Males sought brides who were virgins. And if you weren’t a virgin daughter, you were an expensive long-term inhabitant of your father’s household. You were not marriage material by any means.

The New Testament, likewise, has some things to say about virginity. One well-known story is the tale of the Virgin Birth. Apparently, being a virgin—and giving birth—resulted in the birth of a god (or demi-god). While the New Testament itself doesn’t describe Mary’s virginity in detail, an apocryphal text that was extremely popular in the second century, the Protoevangelium Jacobi, does. Bart D. Ehrman, a famous biblical scholar, summarizes the text’s tale:

The midwife is astonished at the miracle and goes off to tell another midwife, named Salome, that a virgin has given birth. Salome, however, is doubtful and indicates that she won’t believe until she herself gives Mary a postpartum inspection to see for herself. Really. They come to the cave, and the first midwife tells Mary, “Brace yourself.” Salome performs an internal inspection and becomes an instant believer. Mary has not only conceived as a virgin, she has given birth as a virgin: her hymen is still intact.[4]

For various reasons, virginity is seen as something good. To have it even after giving birth is a supernatural event. And, this should be noted, apparently there was an objective referent one could resort to when seeking out whether a woman was a virgin or not. (Ehrman thinks this was the hymen, but, as the research shows, we cannot be too sure.) I will later show how even the prized hymen, so well known in today’s culture, was not discovered until the sixteenth century!

In ancient Greece, virginity was likewise prized. One Athenian archon gave his daughter to a “hunger-crazed horse” for nourishment after discovering that she had been de-virginized by some male.[5] In fact, the social custom under Solon was that a father, upon discovering that his daughter had lost her virginity, would immediately disown her. “It was the single circumstance in all of Solon’s legal code in which a freeborn Athenian could be forced into slavery.”[6]

Why this obsession with virginity? Why did the ancient Romans, for example, have the Vestal Virgins? Why did Christianity produce an enormous number of celibate monks, who lived in the desert, battled lustful thoughts, and maintained their virginity? Why did the second-century theologian, Origen, castrate himself? Was sex really that bad? While the focus of this paper is not Origen’s psychological status in regard to his perpetual virginity caused by self-castration, this paper is interested in examining how, from a historical perspective, virginity was defined, tested for, and discussed. To that I now turn.

Virginity in females did not always have a relationship with the hymen. In fact, in the past, a good portion of the population believed that virginity had something to do with a tight vaginal canal engulfed by arteries and capillaries. One trailblazer seeking evidence for the hymen concluded that it was a mythical thing, something akin to Ponce de León’s fountain of youth.

In som virgins or maidens in the orifice of the neck of the womb there is found a certain tunicle or membrane called of antient writers Hymen…But I could never find it in anie, seeking of all ages from three to twelv, of all that I had under my hands in the Hospital of Paris.[7]

Those were the words of Ambroise Paré, a French surgeon and anatomist. Apparently, even the professional medical doctors of the day had trouble finding the elusive hymen. The word hymen comes to us from the Greek. It was used by Aristotle to mean “membrane.” “The thick membrane around the brain that we call the dura was one such hymen. The mesentery, which anchors all of our intestines in place inside the abdominal cavity, was another. So too with the sac around the heart we call the pericardium…Hymens, hymens everywhere.”[8] In other words, “hymen” was, in the ancient days, a catchall term for “membrane.” So, if you ever run across it in the ancient literature, it may—or may not—refer to what we now call the hymen.

The first use in the historical literature of the word hymen in the sense that we use it occurs in Michael Savonarola’s Practica maior (written sometime in the 1400s). For Savonarola, “the cervix is covered by a subtle membrane called the hymen, which is broken at the time of deflowering, so that the blood flows.”[9] After Savonarola, the word appeared in the English dictionary produced by Thomas Elyot. He defined it as “a skinne in the secrete place of a maiden, which whanne she is defloured is broken.”[10]

Prior to the discovery of the hymen, some ancient anatomists thought that the blood that sometimes resulted from first-time sex came from the vaginal canal itself. The earliest text describing this comes from second-century Rome, Soranus’s Gynecology.

In virgins the vagina is depressed and narrower, because it contains ridges that are held down by vessels originating in the uterus; when defloration occurs, these ridges unfold, causing pain; they burst, resulting in the excretion of blood that ordinarily flows. In fact, the belief that a thin membrane grows in the middle of the vagina and that it is this membrane that tears in defloration or when menstruation comes on too quickly, and that this membrane, by persisting and becoming thicker, causes the malady known as “imperforation,” is an error.[11]

And there you have it: no such thing as a hymen. But, of course, in retrospect Soranus was wrong. Dead wrong. In 1543, Vesalius finally found empirical proof of the hymen. He dissected a couple of stolen bodies and found it. It was right there in front of him in all its membranous glory.

History has a strange way of interacting with us. On the one hand, we clearly want objectivity when discussing it. On the other, it seems that all too often we simply see what we want to see. For example, purity culture believes in the existence of the hymen because it (a) exists today and (b) existed in the past. The Bible, along with the ancients, apparently knew about the hymen and its relation to virginity, so the thinking goes. But then, as you examine history and dig through the historical texts, the truth may not be so simple. We now know that not everyone believed in the hymen. In fact, when reading the Bible and its discussion of proofs of virginity, even the Rabbis weren’t so sure that all virgins bled on that fateful wedding day. This is why the Talmud contains debates regarding precisely this matter.[12] They, too, were not sure testing for virginity in females was that simple, that black-and-white.

In the fourth century, the church father Augustine of Hippo was faced with a particular dilemma. He believed that virginity was physical. It probably had something to do with hymens or capillaries in vaginal canals. But a historical situation—in his day, it was a modern one—caused him to rethink his notions of virginity. Christian virgins were being raped. Were they still virgins even though they were raped and clearly did not consent? Augustine thought so. The reasoning went that if you resisted with your heart and soul, you did not lose your virginity. For Augustine, virginity was an attribute of the soul—it wasn’t merely physical.[13]

Purity culture has its own particular way of engaging with the concept of virginity. For the mostly evangelical Christian population, virginity is pretty much a female thing. Girls must have an intact hymen on their wedding day. Males, on the other hand, have no such “physical” requirements. They simply must not engage their penis in vaginal sexual intercourse. That seems to be the broad, working definition. For males, there’s no physical proof that they are “virgins.” Women, on the other hand, it is thought, have such proof. In fact, there are even theological arguments discussing God’s design of the hymen and its theological functions. Dannah Gresh, author of And the Bride Wore White: Seven Secrets to Sexual Purity, writes, “You see, God created you and me with a protective membrane, the hymen, which in most cases is broken the first time that we have intercourse…When it breaks, a woman’s blood spills over her husband. Your sexual union is a blood covenant between you, your husband, and God.”[14] No commentary is needed here; God has spoken: your hymen serves as the crux of a blood covenant.

Gresh may be an unheard-of author, but Joshua Harris is not. It is he, after all, who wrote the best-selling, controversial book I Kissed Dating Goodbye; he, too, places big emphasis on virginity and first-time sex. He begins his book with the following “dream”:

It was finally here—Anna’s wedding day, the day she had dreamed about and planned for months. The small, picturesque church was crowded with friends and family…But as the minister began to lead Anna and David through their vows, the unthinkable happened. A girl stood up in the middle of the congregation, walked quietly to the altar, and took David’s other hand. Another girl approached him and stood next to the first, followed by another. Soon, a chain of six girls stood by him as he repeated his vows to Anna. Anna felt her lip quiver as tears welled up in her eyes.

“Is this some kind of joke?” she whispered to David.

“I’m…I’m sorry, Anna,” he said, staring at the floor… “They’re girls from my past… I’ve given part of my heart to each of them.”

“I thought your heart was mine?” she said.

“It is, it is,” he pleaded. “Everything that’s left is yours.”[15]

As Harris sees it, the stakes are enormously high. The threshold for having the perfect marriage, the perfect wedding night, is set so high, so far up in heaven, that even Stephen Hawking with all of his telescopes would have trouble seeing where it all ends. And if you make a mistake—God forbid!—if you even dare lose your virginity (whatever that means), your future is damned: you have effectively rendered yourself useless. “[E]ven the most innocent form of sexual expression outside of marriage could be dangerous.”[16] With teenagers reading such books, and the stakes so high for women and their hymens, it’s a surprise that a majority of them don’t resort to complete, prison-like isolation from the male species.

This obsession with the hymen in particular leads to strange things. It results in young Christian college girls engaging in all kinds of sex acts—oral sex, anal sex, masturbation, the use of dildos and vibrators, etc.—while remaining “virgins.” How? One sex act was missing from my list: vaginal sexual intercourse. As long as vaginal sexual intercourse is not engaged in—and the hymen remains intact—one could, theoretically speaking, consider oneself a “virgin.”

The way purity culture has valued virginity, and its notions of virginity, has also influenced the “science” of virginity. Since males are taught, incorrectly, that females almost always bleed upon their first sexual encounter, the males have assumed that blood, along with pain, is a good indicator of virginity. The problem is that a good portion of the population does not bleed and experiences no pain during first-time sex. One study found that 63% of women experienced no blood after their first act of vaginal intercourse.[17] This is nothing new. Males have been duped all these years. They have believed in “blood and guts” because they so wanted to find them. Women have been using all kinds of tricks to maintain this illusion of virginity. For example, we have ancient texts instructing women how to bleed on their wedding night in order to make certain that the male believes in their virginity. The ninth-century Persian physician Rhazes recommended inserting the intestines of doves into the vaginal canal, along with the warm blood of the animal, to make the vagina tight and, of course, bloody.[18] I was not kidding when I said “blood and guts.” Literally. And even women today get what they come looking for. In one study conducted in Germany on 669 patients coming in for a gynecological exam, researchers found a direct correlation between anxiety and the experience of increased pain.[19] That is, if a girl is taught from a young age that first-time sex is painful and bloody, it may not be bloody, but it will almost certainly be painful. Not in an objective sense, of course, but in a subjective sense. You will experience pain because you have duped yourself into thinking it’ll be painful. Hanne Blank writes, “A woman is also more likely to have a painless experience, as well as a more positive impression of losing her virginity overall, research tells us, if she is not coerced or pressured, feels safe and secure with her partner, and is not worried about being interrupted or discovered during sex.”[20]

Reality alone will not change Gresh’s “blood covenant” theology of the hymen. If the data is accurate, a majority of women will not bleed during first-time sex. I guess God doesn’t bless their intercourse. (Such may be the theological response.) Oh well. However, this is not the only case in which the concept of virginity, as it has been traditionally understood by purity culture, has been scrutinized. The feminists have also criticized it extensively. It is to the feminist critiques that I now turn.

As in Augustine’s time, so today, a real modern issue forces one to rethink traditional concepts. With the rise of homosexuality and the invention of condoms, all kinds of sex acts are now, well, sex acts. Heterosexuals can engage in anal sex in a safe manner by using condoms and some lubricating jelly. Lesbians can use various phallic-shaped devices, be they dildos or vibrators, and engage in, well, sex acts. Gay men engage in anal sex. Traditional conceptions of virginity—that is, no vaginal sexual intercourse either passively [female] or actively [male] engaged in—have usually been accepted because heterosexuality has been accepted as the norm. Jessica Valenti points out how absurd the traditional conception is: “If it’s just heterosexual intercourse, then we’d have to come to the fairly ridiculous conclusion that all lesbians and gay men are virgins, and that different kinds of intimacy, like oral sex, mean nothing.”[21] But, of course, most of us here would be inclined to consider anal sex to be sexual intercourse. So, yes, a virgin with an intact hymen having anal sex with her boyfriend three times a day is, by the modern definition, not a virgin. Did I make myself clear? Or should I say “not” again?

And it’s not only homosexuality that has challenged traditional conceptions of virginity. With the rise of various sex toys, I think it’s high time we reevaluated what it means to be a virgin. If a male, without prior vaginal sexual intercourse, has sex with a blow-up doll, is he no longer a “virgin”? On the flip side, if a “virgin” female with her hymen intact “loses” her “hymen intact-ness” to a dildo, is she still a virgin? (She did, according to the traditional conception of virginity, “lose” her intact hymen. But, in a strange way, a penis attached to a male never penetrated her.) Such scenarios make our heads spin. But it all makes sense. This is why Hanne Blank’s modern definition of “virginity” is so vague and broad. She defines it as “a human sexual status that is characterized by a lack of any current or prior sexual interaction with others.”[22] According to her, losing your virginity occurs when some kind of sex act—whether vaginal, oral, or otherwise—takes place between two (whether gay or straight) individuals. The requirements, then, for being a virgin are: (a) no sexual activity with (b) another human being. (Sexual activities such as masturbation and/or the use of a dildo in a private setting do not constitute a loss of virginity.)

The rise of homosexuality, various forms of birth control, and sex toys have not been the only things that have forced moderns to reevaluate what they mean by “virgin.” Another fact has come to light: all hymens are not created equal. If the traditional conception is to be maintained in the modern era—which I don’t think it can be—it must address the problem that objective science presents us with. Hymens, we now know, are not all the same. They come in various shapes and sizes. Some women, for example, are born with imperforate hymens: that is, hymens that cover the entire vaginal opening. This presents menstruating women with a difficulty, so, naturally, surgeons have to incise the hymen.[23] “Hymenal tissue itself appears in a number of forms. It might be fragile and barely there, or resilient and rubbery.”[24] Some hymens disintegrate on their own; others are “so resilient that they endure years of sexual intercourse quite handily…”[25] As far as hymenology goes, I think it is safe to conclude that it is unscientific and irrational to make an intact hymen bear the crux of “proof” when it, by no means, can do so. The hymen is not as “universal” as the ancients may have imagined or as “theological” as purity culture may have believed. It’s a piece of tissue that comes in all kinds of shapes and sizes, and, in an odd way, takes on a life of its own: disintegrating, at times; at other times, remaining intact throughout years of sexual intercourse.

The strangest thing, however, is that even animals have hymens. So, they’re nothing special. Yes, you heard me correctly: “llamas, guinea pigs, bush babies, manatees, moles, toothed whales, chimpanzees, elephants, rats, ruffed lemurs, and seals all have them.”[26] God must have been having nasty thoughts the moment He decided that a female rat needed to seal her “marriage” to another rat with a “blood covenant.” Strange gods, those guys.[27]

We have seen how conceptions of virginity were construed in the past, and how such historical conceptions were employed by purity culture only to be criticized by feminism. In both purity culture and feminism, traditional conceptions of virginity—as found in ancient texts, for example—guided the modern discussions. One question we have not yet addressed is how we as people read and understand history. How is it that the history of virginity could be, in some ways, shaped by our own prejudices? How is it, for example, that we perpetuate the myth of “blood and guts” in association with first-time sex? Paul Ricoeur, a philosopher, has some interesting things to tell us.

For Ricoeur, all history is, essentially, an act of living interpretation. In the modern era, prior to Immanuel Kant, people generally believed in an objective world that was “out there,” one to which they had access. They were relatively certain of their ability to grasp the objective. After Kant, a shift occurred: people began recognizing their subjectivity. The mind was limited by its very nature. The world “out there,” the noumenon—that is, the thing-in-itself—was not to be confused with the way we perceived it to be; the perceptions were the phenomena, the thing-as-it-appears-to-us. There lies a vast chasm between the noumenon and the phenomenon. In the modern era, the era in which the philosopher Descartes worked, history was viewed as a collection of objective facts—a collection of noumena—to which we, the people, had access. After Kant dropped his atomic bomb on philosophy, initiating civilization into the post-modern era, historians began to recognize how un-objective the historical enterprise itself was. Ricoeur welcomed this more balanced-yet-critical approach towards history. For Ricoeur, a good method was one in which “[a] deep distrust for any simple reductive explanation of man or culture remains constant.”[28] The historical data should not just be seen as objective; no, humans who have subjectivities are engaging the historical data. But the historian must not stop there. Ricoeur believed that we should go even further than Kant: we should not merely criticize objectivity while emphasizing subjectivity; we should criticize subjectivity too! There are methods and counter-methods, subjects and objects; one must not place greater emphasis on one or the other. Instead, Ricoeur argues that they must remain together in dialectic tension, “the dialectic of oppositions.”[29]

Out of this tension, Ricoeur was forced to discuss the elephant in the room: language. Language—“words”—is what we use to write history. Ricoeur became increasingly aware that language should be carefully scrutinized. “The word is my work; the word is my kingdom.”[30] It is only within the sphere of a given language that a historian operates, hence his calling “the word” his “kingdom”—it is the place in which one lives and breathes and does history. Ricoeur takes language to mean a system that incorporates the use of “symbols.” The symbols function as pointers to objective things in reality, myth, etc. Such symbols have multiple meanings and, hence, can confuse interpreters. Ultimately, all acts of reading and understanding texts—which use symbols—are inevitably going to end up being interpretations. “I define symbol as: any structure of signification in which a direct, primary, literal meaning designates, in addition, another meaning which is indirect, secondary, and figurative and which can be apprehended only through the first.” Moreover, he goes on to define the process of “interpretation”: “Interpretation, we will say, is the work of thought which consists in deciphering the hidden meaning in the apparent meaning, in unfolding the levels of meaning implied in the literal meaning.”[31]

As one can readily tell, the concept of “virginity” undoubtedly has some grounding in objective fact. There are women who have some form or another of hymenal tissue, which can, at times, be torn during first-time sex. But, as our discussion has revealed—as we have plunged into the issues of history, meta-history, language, and the human experience—we have seen how problematic, how complex the symbol of virginity in our language really is. In fact, it is by no means absurd to conclude that we still have trouble grasping virginity’s “hidden meaning in the apparent meaning.” We are onto something, but we cannot seem to grasp it. As Blank remarks in her own work, concluding a chapter on the history of “virginity testing”:

There is no single virginal body, no single virginal experience, no single virginal vagina, not even a single virginal hymen. There is only the question, how do we know whether this woman is a virgin? The answer has been written innumerable times, with alum and doves’ blood and urine and decoctions of mint and lady’s mantle, with charts and graphs and clinical photography. But no matter how many times someone attempts to inscribe it, no matter how firmly they press the pen to the paper, we are left forever with the same blank page.[32]

In a rather strange turn of events, the history of virginity had become biography. As documented earlier, a woman who believes first-time sex will be painful experiences pain. A woman who believes she will bleed excessively will, by all means, bleed—probably a little—but she’ll end up exaggerating the event.[33] “Sociologist Sharon Thompson’s research has shown that in telling their virginity-loss stories, some women seem to positively revel in gory (and in some cases clearly exaggerated) details…”[34] The males who expect their “virgin” wives to bleed end up seeing blood on the wedding night because their new brides plan wedding days for when they will be on their menstrual periods.[35] The history of virginity, then, is not really history so much as it is our own biography. We want to see blood, so blood we see. Why? Because we want to see it. And if we don’t see blood, somebody bring me doves’ intestines—or, better yet, make sure coitarche (first-time sex) occurs during a woman’s period! And so the “history” of virginity continues. It continues to write its story in blood and guts. But what were we expecting to find anyhow? Weren’t we all in it for the blood and the guts in the first place? As Ricoeur correctly points out:

The purpose of all interpretation is to conquer a remoteness, a distance between the past cultural epoch to which the text belongs and the interpreter himself. By overcoming this distance, by making himself contemporary with the text, the exegete can appropriate its meaning to himself: foreign, he makes it familiar, that is, he makes it his own. It is thus the growth of his own understanding of himself that he pursues through his understanding of the other. Every hermeneutics is thus, explicitly or implicitly, self-understanding by means of understanding others.[36]

In such a way, we, too, have made the foreign familiar; we, too, have made the gory stories of times past our very own. We, as a people, as those who engage in the task of interpreting history, make the text into something that speaks to us—so long as it speaks to us in a domestic language. We want it all for ourselves.

Objective facts—what happened and how—are less important than communicating symbolic truths. The stories that we tell say less about what was literally experienced than they do about how we felt about the experience, how we wanted to feel about it, and how our culture expects us to feel about it.[37]

From Harris’ I Kissed Dating Goodbye to Valenti’s The Purity Myth, virginity and its shady history have played a central role. How it was understood in the past—be it in the Bible or in ancient medical texts—shaped and informed the modern discussions. However, as we have seen, the task of understanding history involves engaging human subjectivities, just as Ricoeur theorized and as the science now suggests. What was theory in Ricoeur has become a working method in this paper. I hope I have, as Ricoeur suggested, examined the history of virginity while engaging in “the dialectic of oppositions.” Having said that, I do not think that virginity, either as it has been traditionally understood or otherwise, is going to stop engaging us as a culture. Sexuality is here to stay, for better or for worse, and we will continue to read ancient texts, medical texts, and blogs, allowing them to shape how we think about the concept of virginity. For the female, it may remain inextricably linked to her hymen; for the male, it will probably remain something abstract, ambiguous, and immaterial. Jesus was onto something when he severed the connection between adultery and the merely physical, speaking instead of “adultery of the heart.” “But I say to you that everyone who looks at a woman with lust has already committed adultery with her in his heart” (Matt. 5:28 NRSV). Even in the ancient past, a thinker such as Jesus recognized that sexuality was more than just “of hymens and dildos.” There was, perhaps, a spiritual element to the sexual. One could engage in adulterous behavior merely by looking at some woman and imagining a sex act. Jesus—like Augustine after him—must have considered the possibility that sexuality cannot merely be reduced to intact hymens; that virginity—and this is per Augustine—is a characteristic, a virtue even, of the soul. If the ancients were willing to think about more than just the physical, then we, too, should be willing to critically examine our own culturally influenced conceptions of virginity.

Written by Moses Y. Mikheyev

I’m a graduate student at Emory University interested in religion, philosophy, and the philosophy of language. 


BIBLIOGRAPHY:

Ehrman, Bart D. Jesus Before the Gospels: How the Earliest Christians Remembered, Changed, and Invented Their Stories of the Savior. New York: HarperOne, 2016.

Blank, Hanne. Virgin: The Untouched History. New York: Bloomsbury, 2007.

Harris, Joshua. I Kissed Dating Goodbye. Colorado Springs: Multnomah, 1997.

Ihde, Don. Hermeneutic Phenomenology: The Philosophy of Paul Ricoeur. Evanston: Northwestern University Press, 1971.

Knust, Jennifer. Unprotected Texts: The Bible’s Surprising Contradictions About Sex and Desire. New York: HarperOne, 2011.

Lundbom, Jack R. Deuteronomy: A Commentary. Grand Rapids: William B. Eerdmans Publishing Co., 2013.

Ricoeur, Paul. “Existence and Hermeneutics,” in The Philosophy of Paul Ricoeur: An Anthology of His Work. Edited by Charles E. Reagan and David Stewart. Boston: Beacon Press, 1978.

Valenti, Jessica. The Purity Myth: How America’s Obsession with Virginity is Hurting Young Women. Berkeley: Seal Press, 2009.

FOOTNOTES:

[1] New Revised Standard Version.

[2] Jack R. Lundbom, Deuteronomy: A Commentary (Grand Rapids: William B. Eerdmans Publishing Co., 2013), 633. “Other texts dealing with cases similar to the present one—one Old Babylonian and another from Qumran—report (trustworthy) women being called in to inspect the bride and hopefully to settle the matter. A similar procedure is attested among the Arabs. The whole procedure is admittedly primitive and could easily bring unjust verdicts, since women do not always emit blood on their first intercourse, hymens could have been broken for other reasons, and so on” (Ibid.).

[3] Jennifer Knust, Unprotected Texts: The Bible’s Surprising Contradictions About Sex and Desire (New York: HarperOne, 2011), 62.

[4] Bart D. Ehrman, Jesus Before the Gospels: How the Earliest Christians Remembered, Changed, and Invented Their Stories of the Savior (New York: HarperOne, 2016), 33-4.

[5] Hanne Blank, Virgin: The Untouched History (New York: Bloomsbury, 2007), 124.

[6] Ibid.

[7] Quoted in Blank, Virgin, 42.

[8] Ibid., 44.

[9] Ibid., 45.

[10] Ibid.

[11] Ibid., 46.

[12] Ibid., 30.

[13] Ibid., 7-8.

[14] Quoted in Blank, Virgin, 112.

[15] Joshua Harris, I Kissed Dating Goodbye (Colorado Springs: Multnomah, 1997), 13-4. There are so many things wrong with this immature paragraph that I will express what I think, at the very least, in a footnote. Harris was young when he wrote this. And, by all means, it sounds very much like an adolescent wrote it, with an inability to see the world outside of hard-drawn, black-and-white, dichotomizing lines. No, Harris, people don’t give their wives “what’s left.” It is life itself that has created them in the present. Their past is a part of what made them, at any given present moment, who they are. Life is, as Søren Kierkegaard and Heidegger point out, “a becoming.” You never “are” anything. You are always in the process of “becoming.” What the fictional David is giving Anna is who he has become—up until that point. But he won’t remain static. He will continue to grow, develop, share history with others—be they male or female—and continue to “become” something of his choosing. To say that spending time with others is somehow immoral or wrong is idealistic and arrogant. People can’t read the future, nor can they know beforehand whom they are going to marry. In a perfect world, hell, I, too, would prefer to spend my youth on my future wife. But in this world—with all of our limitations—spending time with girls who won’t end up with me comes with the territory. I don’t know which world you live in, but on planet earth, people are not omniscient, cannot forecast the weather, and—this point is important—they make mistakes. But only in retrospect. Hindsight. We don’t always know something is a mistake in the present moment. I, for one, have no such crystal ball.

[16] Ibid., 96.

[17] Blank, Virgin, 89.

[18] Ibid., 91.

[19] Ibid., 114.

[20] Ibid., 115.

[21] Jessica Valenti, The Purity Myth: How America’s Obsession with Virginity is Hurting Young Women (Berkeley: Seal Press, 2009), 20.

[22] Blank, Virgin, 6.

[23] The traditional conception of virginity as being directly related to the status of the hymen must, I assume, have problems with a surgeon “taking” a patient’s virginity.

[24] Blank, Virgin, 37.

[25] Ibid., 40.

[26] Ibid., 23.

[27] I’m rolling my eyes so much typing this; they are beginning to feel like bowling balls.

[28] Paul Ricoeur, “Existence and Hermeneutics,” in The Philosophy of Paul Ricoeur: An Anthology of His Work, eds. Charles E. Reagan and David Stewart (Boston: Beacon Press, 1978), 98.

[29] Don Ihde, Hermeneutic Phenomenology: The Philosophy of Paul Ricoeur (Evanston: Northwestern University Press, 1971), 16.

[30] Cited in Ihde, Hermeneutic Phenomenology, 24.

[31] Ricoeur, “Existence and Hermeneutics,” 98. Italics original.

[32] Blank, Virgin, 95.

[33] Believe it or not, there have been studies done on this too. And women make up “blood and guts” tales about their wedding nights all the time. See Blank, Virgin, 111-3.

[34] Blank, Virgin, 111-2.

[35] Ibid., 91.

[36] Ricoeur, “Existence and Hermeneutics,” 101.

[37] Blank, Virgin, 103.

How Do We Talk to Others?: Wittgenstein and Language-Games

What does it mean “to speak to another human being”? That is, what does it mean to convey something to another using sounds and words? Is it even possible to convey anything, for that matter? Moreover, if one were to assume that X were being conveyed from Person A to Person B, how would Person A go about verifying that X was, in fact, accurately conveyed? If you have ever wondered about human language and communication, rest assured, you have company: Ludwig Wittgenstein thought about such things too. In this paper, I will consider Wittgenstein’s contributions to the philosophy of language, or, as some would have it, his work on “ordinary language.” While it is beyond the scope of this paper to deal thoroughly with Wittgenstein’s continually developing ideas, paradoxes, etc., it is the hope of this writer to help make Wittgenstein’s ideas palatable to the general public. I hope the teenager reading this learns something about the apparent subjectivity of much human language; I hope the college student reading this walks away with a better appreciation of Wittgenstein and a better understanding of his relevance for practical living. After all, we all use language on a daily basis. We all attempt to convey things with it—be they emotions, commands, facts, etc.—without taking the necessary time to think about what goes on when two human beings (a) share a common language; (b) share a common human body with common sensory apparatuses; and (c) attempt to convey something to the Other using sounds and words (i.e., language). In a nutshell, this paper is a modest attempt at understanding human language and communication.

Wittgenstein’s famous collection of notes taken by his students at Cambridge during the years 1933-34—the so-called “Blue Book”—begins by asking, “What is the meaning of a word?”[1] Before one even begins addressing this question, Wittgenstein goes further: What does it even mean to produce a “meaning of the word”? That is, Wittgenstein is asking what a definition of a word would even look like. What makes a definition acceptable to the general public? Who or what determines that such-and-such a definition is providing us with meaning regarding a particular, singular word? For example, one could theorize that there are at least two ways of providing a meaningful definition of a word: the verbal and the ostensive. The verbal definition merely uses other words to describe the particular word we are trying to define (e.g., one defines the word “to hate” by appealing to the dictionary and saying that it means “to dislike some person intensely”).[2] The ostensive definition, on the other hand, points us to something objective (e.g., in ostensively defining the color “red,” a teacher may show her students a red apple and, pointing to it, say, “This is what I mean when I say ‘red.’”). The point Wittgenstein is making here is relatively straightforward: if we are using an ostensive definition, we have a word, such as “red,” and we have something objective it refers to—for example, a red patch that reflects a particular wavelength of light, which stimulates certain photoreceptors in our retinas, producing a subjective psychical state in our cerebrums that, it is assumed, is shared by most (all?) human beings who are able to perceive color.

But there is a problem even with this ostensive defining of a word. How do we learn that our subjective experience of the color red is actually what our teacher means when she says, “This is ‘red’”? Here Wittgenstein gets into the problem of learning and understanding a language.

Wittgenstein lucidly reveals the problem of communicating using a human language when he discusses learning a language by “ostensive defining.” For example, if I wanted to teach someone that a pencil was called a “pencil,” and I pointed to a pencil and said, “pencil,” how does the listener know that what I am trying to convey is that the thing in front of me (e.g., the entire pencil) is called a “pencil”? Isn’t it possible that the listener would associate “pencil” with “wood”? Maybe the listener would associate the word “pencil” with “round” instead (as pencils usually are, in fact, round!). Wittgenstein lists several possible interpretations that may arise after such a lesson. The student may interpret your pointing at a pencil and saying “pencil” to mean any of the following: (1) This is a pencil; (2) This is round; (3) This is wood; (4) This is one; (5) This is hard; etc.[3]

We haven’t even begun defining a word ostensively and we’ve already run into the problem of learning a language. If, in fact, ostensive definitions are the way to go when trying to make sense of human communication and language, how is it that even when we are learning the language, it seems that rules are already in play here too? That is, it seems that the class of students learning the word “pencil” already have some idea of what it means to learn the meaning of a word! Where does such meaning come from? Or, to ask the same question differently, where do the students get this notion of learning a language in such a way? Why is it not the case that more students would hear “pencil” and interpret it to mean “the thing in front of me that appears round shall from henceforth be known to me as ‘pencil’”? Why is it that a large portion of the class already inherently seems to know what it means to learn a language and, hence, what the process looks like when learning the word “pencil”?

“If we are taught the meaning of the word ‘yellow’ by being given some sort of ostensive definition [in this case, ostensive means something like “denoting a way of defining by direct demonstration, e.g., by pointing”] (a rule of the usage of the word) this teaching can be looked at in two different ways: (A). The teaching is a drill. This drill causes us to associate a yellow image, yellow things, with the word ‘yellow.’ Thus when I gave the order ‘Choose a yellow ball from this bag’ the word ‘yellow’ might have brought up a yellow image, or a feeling of recognition when the person’s eye fell on the yellow ball. The drill of teaching could in this case be said to have built up a psychical mechanism. This, however, would only be a hypothesis or else a metaphor. We could compare teaching with installing an electric connection between a switch and a bulb. The parallel to the connection going wrong or breaking down would then be what we call forgetting the explanation, or the meaning, of the word…[I]t is the hypothesis that the process of teaching should be needed in order to bring about these effects. It is conceivable, in this sense, that all the processes of understanding, obeying, etc., should have happened without the person ever having been taught the language; (B). The teaching may have supplied us with a rule which is itself involved in the processes of understanding, obeying, etc.: ‘involved,’ however, meaning that the expression of this rule forms part of these processes…”[4]

Wittgenstein offers us two options when it comes to learning a language: (1) the process of learning a language comes about by a process of “drilling” (as in example A above); or (2) it comes about via a process of learning “rules” (as in example B above). But drilling seems to have its own issues (such as learning the word “pencil”). Moreover, learning rules also has its problems. If learning a language means learning the rules of that particular language, then what do its rules look like? Wittgenstein examines rules as well.

Wittgenstein understood language as being comparable to a game. In order to play a game, one must know which words refer to which objects (for example, in chess, you may need to know that the piece which looks like a horse is known by the word “knight”) [here one can recall the “ostensive definition”]. But that’s not all: in order to play the game, one must also know the rules of the game. And, moreover, one must recognize that one is playing a game. For Wittgenstein, much like the game of chess, languages took on a form of life—they were very complex and deeply interwoven into the communities using them. In several of his aphorisms found in Philosophical Investigations (1953), Wittgenstein sheds some light on this particular issue of language-games and on what he even meant by “game.”


Here we come up against the great question that lies behind all these considerations—For someone might object against me: “You take the easy way out! You talk about all sorts of language-games, but have nowhere said what the essence of a language-game, and hence of language, is: what is common to all these activities, and what makes them into language or parts of language. So you let yourself off the very part of the investigation that once gave you yourself most headache, the part about the general form of propositions and of language. And this is true—Instead of producing something common to all that we call language, I am saying that these phenomena have no one thing in common which makes us use the same word for all, but that they are related to one another in many different ways. And it is because of this relationship, or these relationships, that we call them all ‘language.’ I will try to explain this.

Consider for example the proceedings that we call ‘games.’ I mean board-games, card-games, ball-games, Olympic games, and so on. What is common to them all?—Don’t say: “There must be something common, or they would not be called ‘games’”—but look and see whether there is anything common to all.—For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that. To repeat: don’t think, but look!— Look for example at board-games, with their multifarious relationships. Now pass to card-games; here you find many correspondences with the first group, but many common features drop out, and others appear…I can think of no better expression to characterize these similarities than “family resemblances”… (§ 65-7).

Wittgenstein is saying that language-games are not necessarily bound to strict, calculus-like rigid rules; instead, language-games have a very complex, living sort of life to them. The ways we use words are akin to the ways we play games. There are certain rules that have definite boundaries, but these rules themselves seem to have something almost indefinable about them. For example, we cannot really reduce the word “game” to any one thing. As Wittgenstein showed us, the word “game” relates to certain things we call “games” which share certain characteristics but do not share all of them—much like a son who looks like his father (sharing a “family resemblance”) without being reduced to his father.
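For the programming-minded reader, the idea can be made tangible with a toy sketch in Python (the feature sets below are my own invented illustrations, not Wittgenstein’s examples). Each “game” overlaps in features with some of the others, yet no single feature runs through them all:

    # Toy model of "family resemblance"; the feature sets are
    # invented assumptions, not Wittgenstein's own examples.
    games = {
        "chess": {"board", "two-players", "competition", "skill"},
        "solitaire": {"cards", "skill", "single-player"},
        "catch": {"ball", "two-players", "amusement"},
        "ring-around-the-rosy": {"amusement", "group", "no-winner"},
    }

    # Each game shares at least one feature with some other game...
    for name, features in games.items():
        kin = [other for other, fs in games.items()
               if other != name and features & fs]
        print(name, "resembles:", kin)

    # ...yet no single feature is common to all of them.
    print(set.intersection(*games.values()))  # prints: set()

The overlaps form a criss-crossing network of similarities—the “whole series of them” Wittgenstein asks us to look and see—without any one feature serving as the essence of “game.”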

Wittgenstein suggests that to learn a language, to know a language, is really to practice the language in a life-setting. That is, in order to be fluent in a given language, one has to understand the multifarious relationships that are going on between single words, their referents, possible nuances, etc. Maybe the word “water” coming from the lips of my lover means, “Please bring me some water.” But, in another context, maybe her shouting “Water!” means, “The water in your cup, which you are about to consume, is poisoned!” (That is, “Please don’t drink the water!”) The word “water” itself is not merely reduced to a single concept (for example, H2O). The word cannot be understood apart from an intricate web of relationships which Wittgenstein calls a language’s “form of life.” This is why Wittgenstein would go so far as to say, “You learned the concept of ‘pain’ in learning a language.”[5] What he means is that even though pain is subjective, the entire relationship created between the word “pain” and your subjective experience is mediated by culture. You were taught what it means to be in pain. You were taught what to do when in pain. Another example. Let’s look at the concept of “love.” While many could persuasively argue that love is a subjective state of mind (or a subjective “feeling”), Wittgenstein would quickly point out how insufficient this view of love really would be. For example, is it not true that in order for you to communicate love to, let’s say, Juliet, you would need to communicate it within a context created and sustained by a community which shares your language? If the community says that love is expressed by the sending of red roses to the object of your affections, isn’t that more than just a subjective state? Doesn’t love, then, transcend the prison of your own subjectivity?

While it is true that you cannot empirically verify that another person is, in fact, experiencing exactly what you are experiencing when you say, “I am in pain,” nonetheless, because of all the associations we make with being in pain (such as a grimace, a screech, a high pulse rate, etc.) [notice that such associations are objective and can be verified], it is reasonable for us to assume that when someone else says that they are in pain—and they appear to be—they are, in fact, in pain. Why? Because even the concept of pain was taught to us by the community! That is, even when we were young, we were told that when we feel “pain” we should make a frown, call in sick, and act “down.” In other words, our expression of pain—the way it is lived out within a community—is itself already defined by the community (hence not being completely “subjective”).

“It is for this reason that our mental words must be, as they are, connected with features of our situation which anyone can in principle observe. Every inner process must have its outward criteria…Statements about pain in the first person, Wittgenstein says, are in fact extensions of natural pain-behavior, conventionalized alternatives to crying out which we are trained to adopt. They are not so much descriptions of pain but manifestations of it.”[6]

So how do we know when someone is really in pain? Are there rigid rules for this? Is there a list of “requirements” that must be met? Or, to speak of something more existential, is there a list of requirements that would help us distinguish “those who are really in love” from “those who are not [really in love]”? “[I]n general we don’t use language according to strict rules—it hasn’t been taught us by means of strict rules, either.”[7] Wittgenstein is well aware that humans don’t normally use extremely rigid rules when learning a language, participating in a language, etc. So where does this leave us? If language can’t be rigidly reduced to a formula—something one can do in math—what happens to exactness and certainty in language? Well, it seems that certainty cannot be found even in language. We simply lack the tools, the environment, and the brainpower to convey and understand things accurately, every time and always.

In the end, one is left engaging with a language within its own boundaries. The point may be simply to be conscious of these facts, to be conscious of language-games. Being conscious of the plethora of ways in which we deploy words may help us attempt to speak in a lucid and clear manner. Maybe, at times, we may even throw in a definition of a word in the hopes that the definition may betray the language-game from which we are speaking…

Written by: Moses Y. Mikheyev


FOOTNOTES:

[1] Ludwig Wittgenstein, The Blue and Brown Books: Preliminary Studies for the “Philosophical Investigations,” trans. Rush Rhees (New York: Harper & Row, 1958), 1.

[2] But such “verbal definitions” may get caught up in a circular argument. For example, one is then forced to define what it means to “dislike” someone “intensely.” After looking those words up, and discovering other words (such as, “aversion,” “loathing,” “hatred”) one, immediately, finds that “to dislike” someone is defined by “hating” someone; and “hating” someone is defined as “disliking” someone! In the end, the verbal definition takes us nowhere: we go from vacuously arguing about words by appealing to other words, which are then re-appealing back to the initial word we set out to define! To bring to the fore the logical positivist arguments of the 1930s’ Vienna Circle, the only words that are “meaningful” are words that could be ostensively defined by having an objectively existing referent.

[3] Ibid., 2. Note: This paragraph was taken virtually wholesale from the Wikipedia page Blue and Brown Books; however, I wrote the section in Wikipedia anyhow.

[4] Ibid., 12-13.

[5] Ludwig Wittgenstein, Philosophical Investigations, 4th ed., trans. G. E. M. Anscombe, P. M. S. Hacker, and Joachim Schulte (Hoboken: Wiley Blackwell, 2009), 125.

[6] A. M. Quinton, “Excerpt from ‘Contemporary British Philosophy,’” in Wittgenstein, The Philosophical Investigations: A Collection of Critical Essays, Modern Studies in Philosophy, ed. George Pitcher (London: University of Notre Dame, 1968), 20.

[7] Wittgenstein, The Blue and Brown Books, 25.

“There Are No Existentialists Here”

It’s a Friday night and I’m stuck at a twenty-four-hour Starbucks drinking coffee with a crowd of young earthlings between the ages of sixteen and thirty. I had recently moved “down South” from Washington state to Atlanta, Georgia. Here I was: a thoroughbred northerner stuck in the self-deprecating, yes-ma’am-ing, door-opening, deep-fried South.

I moved here to attend graduate school at Emory University. Rumor had it that Emory was a good place to be, especially considering the fact that, relative to the South, it was a darn good school. So there I was—young, energetic, and full of life—aching to discuss “the big questions.” I was, after all, obtaining a master’s in “theological studies.” That is, I was essentially studying God (whatever the hell that meant).

But my youthful naiveté would soon meet its life-sucking Count Dracula. I would soon come to discover that the people in the South, as a general rule, didn’t really care about the big questions. In fact, they were permanently uninterested in thinking about them. I suspect they don’t even know that such questions exist. Take one such question—what is the meaning of your life?—for brief consideration. I asked this youthful bunch to think about that while they sipped their almost-deep-fried, double-shot crème brûlée lattes. I then waited like a cat hunting a mouse for a response. But it never came.

Somewhere amidst all of the important topics filling the discussion like hot air balloons at a two-year-old’s birthday party, my mere mention of “meaning” got lost. It was never heard amongst all of the bells and whistles.

You see, being the generous soul that I am, I, quite naturally, assumed that my question must have never tickled a single soul’s eardrum. So I sputtered out the dying remains of this existentially unnerving question. Like its previous contender, the question fell on deaf ears.

Having said all that, I think the people here are quite happy-go-lucky. I mean, they are so enamored of themselves that they never ask big questions. Or, maybe, they are so thoroughly enjoying life that they don’t have time for such petty things as “meaning.” Come on, who cares about “meaning” (what’s that?) when you’re having the best time of your life?

Well, that was somewhere around “month one” in Atlanta. I had something like two years to spend here, so I decided that, contrary to my subjective opinion, Atlanta might prove me wrong. There was, after all, still time.

The months turned into seasons—summer came and went—and I failed to meet a person who thinks about thinking. (I did end up meeting one such soul, contrary to my previous sentence, but he was originally from Boston—and to Boston, in due time, his soul happily returned.)

His name was Andrei. He was a philosopher at heart. He studied cognitive psychology at Penn State; prior to that, he did his undergraduate work in classical piano. We talked about a lot of things—music, people, public opinion, etc.—and then we came upon that touchiest of subjects: the meaning of it all.

Andrei brought up a funny anecdote that stuck. We were having dinner together—I was sipping a Moscato and he was drinking chardonnay—when we began discussing the philosophy of language (it’s practically impossible to find people in the South, amidst the general public, who would be familiar with the subject, much less able to hold a discussion). As I told Andrei my views on God and His/Her/Its ability to communicate meaningfully to us, Andrei related his own concerns. “When I was at Penn State, all of the psychology graduate students used to wonder how philosophers and theologians would speak and write about God. We’d sit together and discuss in utter amazement how these guys could imagine that they knew what they were talking about. Here we were trying to figure out how the human mind was able to perceive a ‘cup’ as a ‘cup,’ and these guys were writing dissertations on God!” “Look at this cup,” he continued, as he pointed at a cup in front of him. “It’s different from this other cup here. This one doesn’t have a handle on it. It’s a different color. It’s made from a different material. How does our mind still classify both of these very different objects as belonging to the category of ‘cup’?”

I understood Andrei’s point because I, too, wondered; I, too, lived asking similar questions. How do we know—almost intuitively—that something is a cup? Here I was trying to speak about a Being I’ve never met, using words I never did understand (infinite, omnipresent, omniscient, etc.)—and yet, I never even got past grade school; I never solved the problem of cup-ness.

How could we speak of “meaning” when we have a hard time understanding cup-ness? How could we use such abstract nouns when even basic nouns still evade us? We could pretend to discuss “the meaning of life”—and use abstract nouns no one has access to, like “morality,” “goodness,” and “end-goal”—but we’ll only go as far as cup-ness will take us. And that’s really not that far.

Let’s go back to the cup dilemma for a second. The problem with the cup—as we have seen—has far-reaching implications. If one were to set out to define what it means for an object to be a “cup,” one would have to demarcate certain lines—that is, create certain criteria that an object would have to meet in order to count as a cup. For example, one might say that a cup must be able to hold a liquid and be circular. What about cups that are square? One might qualify the statement and add that cups may not necessarily be circular/round. What about cups that don’t hold liquids well? For example, what if the ceramic cup is cracked? Is it still a cup? If not, what is it?…

We never were able to define “cup” in such a way that would enable us to include every cup that had ever existed—and would exist—in all human history. We were unable to come up with a definition for “cup-ness.” In other words, when asked to define “cup,” us educated folk were left stuttering…

Like a smoke-and-mirrors magic show, the whole idea of a “cup” kept evading us. It was here for a second, there for a second—then it was completely gone. Like an elusive term in a Wittgensteinian language-game, the cup never materialized enough for us to grasp it, for us to drink from its waters.

Empty we came, and thirsty we left.

“There are no cups here.”

But on a more serious note, when Andrei moved—and I realized I hadn’t had dinner with a friend in months—I wrote my professor-friend from undergrad. He was glad to hear that I was alive and well. He was glad to hear that I was reading Kierkegaard. But then something tragic happened. After he asked me to tell him how I was doing—and I had texted him a summary of my experience in Atlanta—I made the following concluding remark in my text message: “This place lacks existentialists.”

He replied: “Easily one of the best lines I’ve ever read in a text.”

It’s a heart-breaking moment when your professor-friend tells you that noticing the fact that existentialists are missing is important. It’s a good thing to read that someone still tracks existentialists—for it suggests that there is still someone existential enough to notice!

The situation, in retrospect, is much more dire than initially observed. This place doesn’t just lack existentialists: they have gone extinct.

Somewhere along the journey to Hell, I bet there’s a post that reads:

“There are no existentialists here.”


Written by: Moses Y. Mikheyev

I’m a graduate student at Emory University interested in religion, philosophy, and the philosophy of language. 

The Birth of Moral Leadership: How to be a Moral Leader in the Modern Society

In this paper, I deal with the following question: What does it mean for a person to be a moral leader in our modern society? And what does it mean for our society to have such a leader? For the sake of this paper, I will make a subtle distinction between ethics and morals. By “ethics” I will generally mean: the external, theoretical principles informing one’s concept of right versus wrong that govern one’s behavior. When using the noun “morals” I will generally mean: the internal, practical activities an individual conscientiously and willfully engages in, activities that reflect one’s own internalized concept of right versus wrong. In layman’s terms, “ethics” has to do with theory, and “morals” has to do with practice.

Given the aforementioned definition of “morals,” what do we mean when we say “right versus wrong”? That is, when speaking of “morals,” what makes an action “right” and what makes an action “wrong”? And, when speaking of “ethics,” what makes a theory “right” and what makes it “wrong”? I, like many other moral philosophers, believe that theory informs practice. One cannot, generally speaking, have morality without having an ethical rationale. Lawrence Kohlberg, a moral psychologist, found empirically that moral education was directly related to moral practice: the more educated one was in ethical theory, the more one would tend to act morally.[1] It is for this reason that I will now briefly attempt to articulate guidelines for an ethical theory that sheds light on right and wrong conduct before developing my thoughts on moral leadership. The following pages, then, are not meant to be exhaustive and dogmatic; rather, I merely seek to offer what I think are tentative guidelines for ethics and moral conduct. That is, I can only modestly hope to offer some insight on this most thorny of issues.

Ethical theory is not yet unanimously agreed upon or universalized: many ethical theorists do not even agree on the basic elements that make up concepts of right and wrong. Since there is vast disagreement—and an ocean of numerous, contradictory theories—I will selectively articulate my own ethical theory, the one that informs my moral actions.

I will begin by dealing with J. S. Mill’s “utilitarianism.” This is, perhaps, one of the simplest ethical theories. Mill writes:

“Utility” or the “greatest happiness principle” holds that actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure and the absence of pain…[2]

Bentham’s phrase “the greatest good for the greatest number”[3] succinctly reflects this view. Mill would argue that a right action is one which produces, consequentially speaking, increased (relative to a prior state) amounts of pleasure. It is an empirical fact that humans have nociceptors (neurons that send pain signals). It is an empirical fact that humans also have opioid receptors and dopamine, responsible for pleasure sensations and the anticipation of pleasure, respectively. It is not hard, objectively speaking, to develop an ethical theory regarding what we should universally do or not do when it is grounded on such a universal fact of human anatomy: virtually all normal, functional human bodies experience pleasure and pain. We intuitively seek out pleasure and avoid pain. In fact, humans mostly have a heightened awareness when it comes to perceiving anything that may cause us pain: we are constantly on the lookout for anything that may consequentially result in the experience of pain.

Whenever anyone develops any kind of theory, it is always a good thing to ask oneself: Is this a model of the world? and Is this a model for the world? Utilitarianism certainly understands the first question well. Utilitarian theory is grounded in objective facts, facts that are accurately portrayed in its “model of the world.” But is it also a “model for the world”? Does it say something not only about what is, but also about how things should be?

In utilitarian ethics, that which causes pain is to be avoided; it is labeled “wrong.” On the other hand, that which causes pleasure is to be pursued; it is labeled “right.” But is this form of ethical reasoning a valid way for humans to think about how the world should be? That is, is utilitarianism a model for the world? Should we wish it to be a model for all of us? I can think of many reasons why utilitarianism alone cannot function as an exhaustive ethical theory. If pleasure is the greatest ethical principle guiding moral behavior, I would argue that, according to utilitarianism, Hugh Hefner and all drug abusers are clearly more ethical than the rest of us: for they alone experience dopamine at rates most of us have never dreamed of. But maybe, just maybe, consequentially speaking, their actions do not lead to the greatest amount of pleasure for the greatest number of people? Maybe the drug abuser, consequentially speaking, is going to end up suffering greatly at some future point in life? In other words, would not a utilitarian argue that his actions—namely, drug abuse—are not, consequentially, in the right? But how do we go about predicting the future? How do we detach ourselves from our current experience of pleasure and think about a theoretical, future experience of pleasure? In fact, is it even possible to have this kind of omniscient knowledge beforehand? For example, if sex is a pleasurable experience, and contraception works, why not engage in all kinds of sex acts with as many people as possible? Should I wait for a monogamous marriage and, hence, betray my own principles in favor of something unseen and not currently empirical? (That is, the theoretical, future monogamous marriage is not currently and empirically being experienced by the individual.) Should the utilitarian ever place pleasurable experiences on hold for something else?

Utilitarianism fails to account for the conflicts which arise between “the greatest good” and “the greatest number.” As Nicholas Rescher has shown, Bentham’s statement—“the greatest good for the greatest number”—can produce chaos. Take the following distribution schemes, for example:[4]

              Scheme I                     Scheme II
A receives    8 units of happiness        4 units of happiness
B receives    1 unit                      4 units
C receives    1 unit                      1 unit

It should be quite evident that Scheme I is in accord with “the greatest good”—it yields ten total units of happiness against Scheme II’s nine—while Scheme II is in accord with “the greatest good for the greatest number,” since two of its three people receive four units each. So which do we honor? Which action is “right”?
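To make the conflict concrete, consider a minimal sketch in Python (my own illustration, not Rescher’s formalism; the “median person” measure below is simply one crude, assumed way of operationalizing “the greatest number”):

    # Rescher's two distribution schemes (units of happiness per person).
    scheme_one = {"A": 8, "B": 1, "C": 1}
    scheme_two = {"A": 4, "B": 4, "C": 1}

    def total_good(scheme):
        # "The greatest good": the sheer sum of happiness units.
        return sum(scheme.values())

    def typical_good(scheme):
        # A crude proxy for "the greatest number" (an assumption of mine):
        # how well the median (typical) person fares under the scheme.
        units = sorted(scheme.values())
        return units[len(units) // 2]

    for name, scheme in (("Scheme I", scheme_one), ("Scheme II", scheme_two)):
        print(f"{name}: total = {total_good(scheme)}, median person = {typical_good(scheme)}")

    # Scheme I: total = 10, median person = 1
    # Scheme II: total = 9, median person = 4

The two criteria rank the schemes in opposite orders—Scheme I maximizes the total while Scheme II treats the typical person far better—and Bentham’s slogan, by itself, provides no rule for breaking the tie.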

Many more such critiques exist; however, it is beyond the scope of this paper to present all of them. Having said that, I would now like to deal with another famous ethical theory: Kantian deontology.

Kantian ethical theory, a form of deontology, has to do with intents rather than consequences. Instead of focusing on the consequences of an action—as the utilitarian consequentialists do—the deontologist, specifically one of a Kantian bent, focuses on the intent behind the action in determining whether the action is right or wrong. If the action was intended to hurt an individual but accidentally resulted in something positive, according to Kant, that action was not right. (According to some utilitarians[5] it would be deemed right nonetheless, since it ended up increasing pleasure, though the individual had different original intents!) Kant believed, unsurprisingly, that consequences never mattered. In fact, “It is not possible to think of anything in the world, or indeed out of it, that can be held to be good without limitation except a good will” (GMS 4:393.5-8).[6] Kant focused on the autonomous lawgiver, that is, the autonomous individual who followed in all of his or her actions a self-imposed moral law. In his famous categorical imperative, Kant set out to universalize his ethical theory: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” If an action cannot be universalized, it should not be committed. This may sound less practical than utilitarianism, but, I assure you, it is not. For example, I was once in an isolated part of northeastern Washington, standing on a boardwalk over a lake. I was carrying a water bottle and became quite annoyed with it. I entertained the thought, for a split second, of throwing it into the pristine lake waters below. And then Kant spoke to me with that “still, small voice” of his: “Do you wish to universalize this action?” I certainly did not! “What would the lake look like if everyone dropped his or her waste into it?” I thought to myself. I ended up carrying the bottle for the rest of the trip.

But even the divine Kant has his problems. What if two categorical imperatives conflict with one another? What if I am placed in a situation in which I must choose between one or the other? What if, to invoke Kant’s article On a Supposed Right to Lie Because of Philanthropic Concerns, an individual is faced with a choice between lying and murder—in this case, whatever the reasons, a choice strictly between the two: you must commit one or the other. What do you do then? Even with Kantian ethics, we run into problems in determining what is the “right” thing to do. What if our intentions are always good and yet, strangely, our actions end up, consequentially, always harming others—are such actions “right”?

I find both utilitarianism (consequentialism) and Kantianism (deontology) useful. However, as one can tell, I also find both ethical theories to be problematic to an extent. How do I, as the moral individual, resolve these problems? In essence, I resort to a via media by attempting to reconcile the two by means of some form of compatibilism.

The theologian Dietrich Bonhoeffer was perplexed by this problem as well. However, in his theological work Ethics, he found a way out, inspired by Jesus’ saying, “[E]very good tree bears good fruit, but a bad tree bears bad fruit” (Mt. 7:17, NIV):

There is an old argument about whether only the will, the act of the mind, the person, can be good, or whether achievement, work, consequence, or condition can be called good as well—and if so, which comes first and which is more important. This argument, which has also seeped into theology, leading there as elsewhere to serious aberrations, proceeds from a basically perverse way of putting the question. It tears apart what is originally and essentially one, namely, the good and the real, the person and the work. The objection that Jesus, too, had this distinction between person and work in mind, when he spoke about the good tree that brings forth good fruits, distorts this saying of Jesus into its exact opposite. Its meaning is not that first the person is good and then the work, but that only the two together, only both as united in one, are to be understood as good or bad.[7]

Like Bonhoeffer, I think that for an action to be morally right it must also be ethically right. That is, the ethical theory must be right, the intent must be right, and the consequential action must be right. The greatest good action is an action that produces the greatest good for the greatest number—according to empirical notions of pleasure and pain—inspired by right intent; an action that, at the same time, you would will to become a universal law.

How does all of this translate into helping us become better moral leaders in a modern society? Having considered ethics and morality, I now turn to leadership: what does it mean to be a “leader”?

For the sake of clarity and simplicity, I will define “leadership” as the ability of an individual, functioning as a leader, to guide other individuals, functioning as followers, to act in accordance with the desired course of action of the leader. That is, a leader is able to get others to do what he or she desires that they should do. What, then, is moral leadership? Moral leadership, harkening back to our previous definitions, would entail the following definition:

The ability of an individual, functioning as a leader, to guide other individuals, functioning as followers, to act in accordance with the desired course of action of the leader; the “desired course of action” being informed by a theoretical ethic—that is, the external, theoretical principles informing one’s concept of right versus wrong that govern one’s behavior. Such theoretical ethics are then acted upon and become moral habits—that is, the internal, practical activities an individual conscientiously and willfully engages in, activities that reflect one’s own internalized concept of right versus wrong. The moral leader’s concept of right versus wrong is greatly influenced by the maxim: The greatest good action is an action that produces the greatest good for the greatest number—according to empirical notions of pleasure and pain—inspired by right intent; an action that, at the same time, you would will to become a universal law.

A moral leader, in my opinion, is inseparable from his[8] theoretical ethic (the “stuff” floating in his head) and his practical morals (the “stuff” everyone sees him doing). A moral leader is one who is aware of basic concepts regarding pleasure and pain. A moral leader is aware that not all utilitarian actions are “right.” He is aware that not all deontological actions are “right.” He is acutely aware of the problems one encounters when dealing with morality. Nonetheless, a moral leader strives to do the right thing. He formulates theories and rationales for his actions. He is the guy you find thinking long and hard about his actions and why he chooses to do them. And, most importantly, a moral leader guides others, influencing them to participate in his vision, a vision that he shares both passionately and rationally with those who follow him. In inspiring others to act like him, to reason like him, to follow his desired course of action, the leader implicitly universalizes his morality. In doing so, one can only hope that he takes Kantian ethics seriously.

Since I have offered my thoughts on moral leadership, I would now like to focus somewhat more specifically on practical ways a leader goes about bringing his “desired course of action” to fruition. In the following paragraphs, I will engage with the popular Bennis and Goldsmith text, Learning to Lead.

Bennis and Goldsmith believe that all successful leaders have the following six “competencies”[9]:

  • Mastering the Context. Leaders are able to get a feel for their surroundings and understand “the big picture.”

  • Knowing Yourself. The leader is aware of his or her ethical commitments, subjective worldviews, being always aware of who he or she is. Such leaders are also always learning about themselves.

  • Creating a Vision. Leaders create a vision so real that “they live and breathe” it.

  • Communicating with Meaning. Leaders are able to understand and function at the level of their followers.

  • Building Trust Through Integrity. Leaders lead ethical lives that those who follow them witness on a daily basis. They are consistent with their actions.

  • Realizing Intentions Through Actions. Leaders are able to bring their ideas to fruition by making them concretely real.

Many of the above “six competencies” are quite self-explanatory; therefore, I will not engage in pedantic, superficial commentary. Rather, I will focus my remarks on a couple of them, discussing the things I believe are of utmost importance for a moral leader in modern society.

With the continuing increase in technological development—think of social media, the Internet, cell phones, etc.—humans have begun to create a context that is vastly different from all previous contexts in history. We are now living and leading in a society in which a follower may never physically meet a leader; in which relationships between boyfriend and girlfriend may span oceans and be entirely virtual. The landscape upon which we now act has become something else. How does a leader function within the present structures set in place? What is specifically different about our modern society? It seems to me that communication and human relations have become de-personalized. A Black Lives Matter activist may use all kinds of tools that were not available to Martin Luther King, Jr. This sort of de-personalization comes with its pros and cons. A leader can spread her message using social media far beyond her immediate surroundings. But this comes with a cost. Such communication lacks many features that are necessary for a leader to be successful. I may see a talking head on Facebook. The message may even inspire me. And I may not do anything about it. How do I know that she is telling the truth? How do I know that she really will do what she is claiming she will do? Do I even understand her message? What if I have questions for her but cannot bring them to her, since I am not able to communicate with meaning with her? The modern leader is faced with the problem of communicating with meaning. It is common today for all kinds of quotes to be taken out of context. With the creation of Twitter’s 140-character tweets, human beings are now expected to “communicate” messages in under two or three sentences. The twitterization of human language and communication is a death sentence to a modern orator striving to be a Lincoln or a Demosthenes. What is the solution to this problem? One possible response is to adapt: we may simply have to strive to say as much as possible without becoming verbose. Another option is to communicate using a different platform, something akin to TED Talks.

This twitterization of language has also deformed the way we listen to and hear one another. A leader has to understand the people he is striving to engage. With the little information people communicate these days, it is helpful to restate to the other person, in your own words, what you heard him or her say. This allows the leader to clarify any misunderstandings. At all points, one must recognize both the subjectivity of one’s audience and one’s own. Terms and phrases such as socialist, goodness, the right thing, etc., may mean vastly different things to different people. It would be a good idea to have people define thorny terms. Robert Franklin reminds us, “Conversation is the highest form of human activity.”[10] It’s a good idea to communicate meaningfully.

One cannot have a paper on leadership and ethics without making recourse to Aristotle’s Nicomachean Ethics. People sometimes forget the basic advice Aristotle left us: “by doing just things we become just…”[11] It’s a pithy truth. One of the ways a leader builds trust is through integrity. And integrity means nothing less than being undivided, consistent, honest, and morally upright. But as with all virtues, one must practice a life of virtue in order to be considered virtuous. To be known as an honest person, one must consistently practice being honest. To be a moral leader, and to be known as one, is to consistently act like one.

While this paper is not as exhaustive as one may like—and many theoretical (and perhaps practical) scenarios have not been considered—my hope has been to present a definition of moral leadership that would generally work for many people. My goal has not been to offer some dogmatic truth; rather, I have sought to offer my thoughts on a thorny subject, thoughts which I hope may stimulate my reader to make whatever progress he or she can towards becoming a moral leader him- or herself.

Written by: Moses Y. Mikheyev


BIBLIOGRAPHY

Allison, Henry E. Kant’s Groundwork for the Metaphysics of Morals: A Commentary. New York: Oxford, 2011.

Bartlett, Robert C., and Susan D. Collins, trans. Aristotle’s Nicomachean Ethics: A New Translation. Chicago: University of Chicago Press, 2011.

Bennis, Warren and Joan Goldsmith. Learning to Lead: A Workbook on Becoming a Leader, 4th ed. New York: Basic Books, 2010.

Bonhoeffer, Dietrich. Ethics. Dietrich Bonhoeffer Works. Volume 6. Translated by Reinhard Krauss, Charles C. West, and Douglas W. Stott. Minneapolis: Fortress Press, 2005.

Franklin, Robert M. Liberating Visions: Human Fulfillment and Social Justice in African-American Thought. Minneapolis: Fortress, 1990.

Gielen, Uwe. “Kohlberg’s Moral Development Theory.” In The Kohlberg Legacy for the Helping Professions. Lisa Kuhmerker. Birmingham: Doxa Books, 1991.

Lebacqz, Karen. Six Theories of Justice. Minneapolis: Augsburg, 1997.

Mill, John Stuart. Utilitarianism. New York: Bobbs-Merrill, 1957.


FOOTNOTES

[1] Uwe Gielen, “Kohlberg’s Moral Development Theory,” in The Kohlberg Legacy for the Helping Professions, Lisa Kuhmerker (Birmingham: Doxa Books, 1991), 35, 55.

[2] John Stuart Mill, Utilitarianism (New York: Bobbs-Merrill, 1957), 10.

[3] Karen Lebacqz, Six Theories of Justice (Minneapolis: Augsburg, 1997), 25.

[4] Table adopted from Lebacqz, Six Theories of Justice, 25.

[5] Most certainly the “act” utilitarians. The “rule” utilitarian may object at this point.

[6] Henry E. Allison, Kant’s Groundwork for the Metaphysics of Morals: A Commentary (New York: Oxford, 2011), 71.

[7] Dietrich Bonhoeffer, Ethics, Dietrich Bonhoeffer Works, Volume 6, trans. Reinhard Krauss, Charles C. West, and Douglas W. Stott (Minneapolis: Fortress Press, 2005), 51. Italics original.

[8] This is an all-inclusive “his.” I could not come up with a gender-neutral way of articulating the following sentences without making them sound cumbersome and pedantically politically correct.

[9] Adapted from Warren Bennis and Joan Goldsmith, Learning to Lead: A Workbook on Becoming a Leader, 4th ed. (New York: Basic Books, 2010), xxi-xxii.

[10] Robert M. Franklin, Liberating Visions: Human Fulfillment and Social Justice in African-American Thought (Minneapolis: Fortress, 1990), viii.

[11] Robert C. Bartlett and Susan D. Collins, trans., Aristotle’s Nicomachean Ethics: A New Translation (Chicago: University of Chicago Press, 2011), 27.

People are Flowers: The Art of Morality as Painted in The Little Prince

Roughly once a year, I grab some coffee, settle comfortably on a sofa, and re-read The Little Prince by Antoine de Saint-Exupery. It serves as a transcendental gateway between who I am and who I want to be; that is, in Kierkegaard’s language: “The measure of a person’s fundamental disposition is determined by how far is what he understands from what he does, how great is the distance between his understanding and his action.”[1] It allows me to bridge the vast chasm between what I think and what I do. The book reminds the child I once was—and, maybe, still am!—that I must constantly reevaluate what I do, how I do it, and why I do it. In the following pages, I wish to reflect upon this book as a piece of art that introduces us to “the little prince,” who, I will argue, is a moral leader, one who reminds us, time and time again, what it is we really need to focus on while living on earth. Moreover, many of the fictional characters in the story also provide witty and ingenious remarks on what it means to lead a moral life.

Reading The Little Prince is like being ripped from the delusional reality most of us have grown callously accustomed to. The first few pages demonstrate this rather memorably. The narrator describes a time when, as a young child aspiring to be an artist, he drew a boa constrictor swallowing an elephant. When he showed “grown-ups” his drawing, all they could see was what looked like a hat; they had lost what children still had: imagination. The narrator eventually became a pilot, his dreams of becoming an artist crushed by the cold comments of the grown-ups who surrounded him. Years later, the now-adult narrator ends up in a plane wreck in the Sahara Desert. He awakes to find a golden-haired boy from a distant planet, whom he calls “the little prince.” The prince asks the pilot to draw a sheep for him. When the pilot attempts to draw a sheep, he ends up drawing his “hat.” The prince immediately recognizes it for what it is: a boa constrictor swallowing an elephant, of course! The prince is not yet a grown-up; he is able to see more than the mundane things grown-ups see. “Grown-ups never understand anything by themselves, and it is exhausting for children to have to provide explanations over and over again.”[2]

In the discussions that ensue, the prince reminds the once-child (who is only now a “grown-up”) what is truly important in life: “[a]nything essential is invisible to the eyes.”[3] The objective mumbo-jumbo that adults find themselves caught up in is not what makes life beautiful or meaningful. Things like friendship, beauty, and love make the world go ‘round—and all such things are invisible. When the pilot, attempting to fix a part of his plane’s engine, becomes angry and short-tempered with the prince—while the prince is describing something “important”—the prince responds appropriately:

“You confuse everything…You’ve got it all mixed up!” He was really annoyed. He tossed his golden curls in the wind. “I know a planet inhabited by a red-faced gentleman. He’s never smelled a flower. He’s never looked at a star. He’s never loved anyone. He’s never done anything except add up numbers. And all day long he says over and over, just like you, ‘I’m a serious man! I’m a serious man!’ And that puffs him up with pride. But he’s not a man at all—he’s a mushroom!”

The prince had been asking the pilot whether his sheep—the one the pilot drew—would be able to graze on flowers. The pilot was unaware of the subjective importance of this question. The prince, living on a small planet, took great care of a rose with four thorns. He watered it daily, spoke with it, and loved it. The rose was threatened by a wild species of weed called baobabs, which would kill the rose if they were left to grow on their own; the prince’s job was to maintain his planet and protect his rose. In contemplating bringing home a sheep—one that might threaten the rose’s survival—the prince was profoundly distressed at the thought of a rose-eating sheep.

“If someone loves a flower of which just one example exists among all the millions and millions of stars, that’s enough to make him happy when he looks at the stars. He tells himself, ‘My flower’s up there somewhere…’ But if the sheep eats the flower, then for him it’s as if, suddenly, all the stars went out. And that isn’t important?”[4]

Once the pilot realized the context of the prince’s question, he immediately ran to the prince, hugged him tightly, and suggested he draw a muzzle for the sheep.

Due to some “pretensions” between the rose and himself—ones he would later reflect upon with remorse and guilt—the prince left the rose on his planet to explore other planets. He felt as if the rose did not need him.

“In those days, I didn’t understand anything. I should have judged her according to her actions, not her words. She perfumed my planet and lit up my life. I should never have run away!…Flowers are so contradictory! But I was too young to know how to love her.”[5]

Saint-Exupery, in writing this, implied that people are flowers, and that sometimes we are immature and do not know how to love them.

Even the rose functions as a moral agent, telling the prince: “I need to put up with two or three caterpillars if I want to get to know the butterflies.”[6] (She said this in response to his wanting to destroy the caterpillars prior to his departure.) In this quotable aphorism, Saint-Exupery implicitly suggests that suffering can sometimes lead to beauty.

Once the little prince departs his planet, he finds himself landing on the “first” planet. There he finds a rather sensible “king” who commands only that which his “subjects,” and the objects he oversees, are already doing (or are prone to do). The king explains why he commands what someone or something is already doing. “One must command from each what each can perform.” For “[a]uthority is based first of all upon reason. If you command your subjects to jump in the ocean, there will be a revolution. I am entitled to command obedience because my orders are reasonable.”[7] When the king commands the little prince to become a minister of justice, the prince asks whom he will judge, seeing that there is no one on the planet. The king suggests that judging oneself is much harder than judging others.[8] But then the king remembers that there may be a single rat on the planet, one he hears every so often.

“You could judge that old rat. From time to time you will condemn him to death. That way his life will depend on your justice. But you’ll pardon him each time for economy’s sake. There’s only one rat.”[9]

In saying this, the king succinctly reminds us all that we must begin by judging ourselves; and, if we ever do judge others, we should be merciful—for, “there’s only one rat.” Another concept is also at play here: that of interdependent existence. The king’s suggestion reveals that it is good to be in “need” of other people. The judge “needs” a criminal; the criminal “needs” a judge. Both must exist in order for their roles to be played out.

By the time the prince has visited all the small planets—six in all—he realizes that only on the fifth planet lived a man who cared about something other than himself. The fifth planet had a lamplighter who lit the lamp every minute, since day and night both occurred within the short span of sixty seconds. As to why he was doing what he was doing, the lamplighter could only say: “Orders.” Here was a man who was almost as ridiculous as the inhabitants of the other planets—but, in a way, less so.

Finally, the prince arrives on earth. It is on earth that he meets the pilot. It is also on earth that he meets a fox, who explains to him the meaning of friendship and time. The fox tells the little prince that he is not “tamed.” The prince wonders what “tamed” means. The fox explains that it means “to create ties.”[10] If one creates ties, according to the fox, one tames the animal, and the animal becomes one’s friend. No longer would the fox be just an ordinary fox, one in a billion; rather, the fox would become the only fox in the world for you. “But if you tame me, we’ll need each other. You’ll be the only boy in the world for me. I’ll be the only fox in the world for you…”[11] The fox goes on to teach the little prince a thing or two about human relations and friendship. “The only things you learn are the things you tame…People haven’t time to learn anything…” It is because of a lack of friendship, time, and involvement that humans no longer really “tame” anything or really know anything. In other words, they are lazy and boring.

The prince, on his way towards finding human beings, encounters a field of five thousand roses, just like the one on his own planet. He is shocked to discover that his rose isn’t the only rose in the universe, as he previously thought. He begins speaking to the roses:

“You’re lovely, but you’re empty,” he went on. “One couldn’t die for you. Of course, an ordinary passerby would think my rose looked just like you. But my rose, all on her own, is more important than all of you together, since she’s the one I’ve watered. Since she’s the one I put under glass. Since she’s the one for whom I killed the caterpillars (except the two or three for butterflies). Since she’s the one I listened to when she complained, or when she boasted, or even sometimes when she said nothing at all. Since she’s my rose.”[12]

The prince, in finding this field of roses, realizes something important—something all of us can empathize with: the importance of our subjectivity. Sure, the roses were similar. But there was a rose out there, far above all the other stars, on a small planet, that belonged to the prince; it was his rose. The two of them had a long relationship. They had gone through thick and thin together. They had a shared history, a “we-ness” about them. The prince races back to the fox in time to hear him disclose the “secrets” of life: “One sees clearly only with the heart. Anything essential is invisible to the eyes.”[13] For what is invisible? Time. “It’s the time you spent on your rose that makes your rose so important,” the fox finishes.

The Little Prince is a literary achievement of immense moral significance, being a museum of moral aphorisms, witty jokes, and touching tales. How, then, does it compare to the likes of, say, Aristotle’s Nicomachean Ethics? Is it possible to relay morality by means of art and literature? I am not sure there is an agreed-upon answer, one that functions as some universal truth, but I think this book does an excellent job of reminding us all what it is that makes us live and thrive. In writing this “children’s tale,” Saint-Exupery really meant to remind us that we were all children once, and that, more often than not, a child’s simplicity and honesty are better than a million quantum equations.

In dealing with love and friendship, the book points out—I think, correctly—that we have to spend time with people. We have to “tame” people in order to begin understanding them. And when we love, we must do so by recognizing that the Other may be like the rest of the “five thousand,” but the Other is ultimately ours. In loving, the book gently urges us to learn “how” to love the Other. And not only that: love requires the ability to think about someone other than yourself. It was the rose that made the prince’s world light up. It was his rose that made it worthwhile to look up at the stars at night, knowing that somewhere out there the love of his life waited for him. Aristotle, likewise, spends a great deal of time talking about friendship in a vein akin to Saint-Exupery’s.

“The base person is held to do everything for his own sake, and the more corrupt he is, the more he does this: people accuse him of doing nothing apart from what concerns his own [good]. The decent person, by contrast, acts on account of what is noble; and the better a person he is, the more he acts on account of what is noble and for the sake of a friend, while disregarding himself.”[14]

Aristotle recognizes, as many Christian theologians do, that there is something intrinsically good about caring for someone other than yourself; there is something good about caring enough to lay your life down for your friends.

What stands out for me in particular are the many forms moral instruction—Aristotle’s Ethics, The Little Prince, the Parables of Jesus—can take. It could easily be argued that all three offer relatively similar teachings regarding friendship. Living in a post-Freud world, we know that childhood experiences have a profound effect on later adult life. Being able to instill morals from an early age is, arguably, a huge benefit, one that enriches the life of a child. Reading Aristotle presupposes a grown-up, whereas the parables of Jesus and The Little Prince are far less limited by age. The Little Prince, specifically, is easily digested even by a two-year-old child. It invites children to begin thinking about morality and the meaning of life. It invites children to think about their “flowers”—what matters to them? And, as grown-ups, we know that people are flowers.

“People where you live,” the little prince said, “grow five thousand roses in one garden…yet they don’t find what they’re looking for…”

“They don’t find it,” I answered.

“And yet what they’re looking for could be found in a single rose, or a little water…”

“Of course,” I answered.

And the little prince added, “But eyes are blind. You have to look with the heart.”[15]

Written by: Moses Y. Mikheyev

FOOTNOTES:

[1] Søren Kierkegaard, Provocations: Spiritual Writings of Kierkegaard, compiled and edited by Charles E. Moore (Farmington: The Plough Publishing Co., 1999), 265-6.

[2] Antoine de Saint-Exupery, The Little Prince, trans. Richard Howard (New York: Harcourt Publishing, 2000), 2.

[3] Ibid., 64.

[4] Ibid., 21.

[5] Ibid., 24-5.

[6] Ibid., 27.

[7] Ibid., 31.

[8] Ibid., 32.

[9] Ibid.

[10] Ibid., 59.

[11] Ibid.

[12] Ibid., 63.

[13] Ibid.

[14] Robert C. Bartlett and Susan D. Collins, trans., Aristotle’s Nicomachean Ethics: A New Translation (Chicago: University of Chicago Press, 2011), 200.

[15] Saint-Exupery, The Little Prince, 71.

Miracles and Falsification: The Myth of Miracles

I have heard about miracles ever since I was a child. I have heard—and continue to hear—about people being healed of diseases: big, bad, ugly diseases. On every continent, people pray for a miracle. A two-year-old struck with leukemia—that demon of the blood. A five-year-old run over by a car driven by his own mother. A twenty-one-year-old girl, fresh out of college, killed instantly in a head-on collision with a drunk driver. A boy diving during the hot summer season strikes a rock, losing the use of his upper and lower body forever. A missionary bitten by a mosquito suffers for weeks on end, fevers paralyzing his shaken body. A terrorist enters a building someplace in the Middle East, blasting hundreds to smithereens; hundreds who either died or would spend the rest of their lives in dark misery. Then there is the little girl living in Iraq who happened to be in a particular place at a particular time when a particular foreign empire (read: The United States of America) decided to drop bombs on her place of residence. Her only question, while hospitalized, with her body torn to shreds—yet with breath in her lungs—was: “Why does America hate me so much?”[1]

I, too, have dreamed of miracles.

But miracles seldom come.

The little girl suffering from leukemia dies and is buried on a damp April night under torrential rain. Her parents huddle closely, aching for death to take them too. They mumble prayers to the sound of raindrops bulleting the last of their hopes. Their god leaves them to their sorrows, offering them not so much as an ounce, a flicker, of comfort; a god who only wears black. The parents listen to the monotonous sermon being preached to the monotonous thunderclaps under a banal sky. “What a eulogy!” they think to themselves. “This, this is what we get for bringing life into this world! An entire two years of manipulative baiting. God, yes God, he baits us with illusions of happiness, of family—then his claws take all that has life away!” But those thoughts, yes, those faint glimmers of truth, remain unspoken. Forever they are silent. The mother goes back to her mundane day job. She goes through the motions. She listens to the repetitive sermons…of hope. Some future kingdom where tears remain fossilized forever, relics of the god-forsaken, fuck-inducing life upon a pathetic planet we used to call earth. It is only after the sermons end, after all the bullshit stops—the lies, the longing for miracles, the promise of something good—that she goes home, as Jesus so tactfully recommends, and prays behind closed doors. “But when you pray, go into your room, close the door and pray to your Father, who is unseen. Then your Father, who sees what is done in secret, will reward you” (Matthew 6:6 NIV). And still. What is asked in the quiet of the home remains—unbeknownst to the world—in the quiet of the home. It is as if Jesus knew that what she would ask would be impossible. Incomprehensible. Why ask in public if it’ll never happen publicly? Keep your prayers to yourselves! Your hope for the Promised Land is just that: hope. It is wishful thinking. The mother spends her days reminiscing about what could have been. Maybe her two-year-old could have graduated college. Maybe she could have gotten married. Maybe the two of them could have spent time together, sipping coffee under a red-soaked sun.

Maybe.

How many more such maybes will there be? How many more such mothers? Fathers? The prayers never end, along with the problems. The disasters. One disease leaves you the moment two take over. Or maybe it was three? You walk restlessly between states of health and epochs of madness.

God never comes to you. You never hear anything anymore. Not from God, that is. You hear the piercing cries of mothers and fathers in your church, synagogue, mosque, temple—all gasping, as if for the first time, for some miracle.

Then you have the children. The thirteen-year-old girl whose father was diagnosed with pancreatic cancer. She wants her daddy to be there at her wedding. So they throw her a make-believe wedding (almost as make-believe as the miracles, the gods, the hopes of a better world). She walks down the aisle drenched in tears. A day of rejoicing, they said, it would be. Her little hands holding—no, clenching fiercely—the strong arms of a soon-to-be-dead father. She is only thirteen. She doesn’t know what it all means. Not at all. All she knows is that daddy will never be there. There won’t be another Father’s Day for her. There won’t be another walk in the rain with him. There won’t be that excitement, those nights where she runs home to tell him about the boy she just met. There’ll be none of that.

Miracles.

That’s what religion promises.

Miracles.

But all you see, all you really feel and hear, is nothing but the humdrum preaching of the eulogist. But what were we all—really—expecting? Could it really be that God the Healer was a hoax? Is it possible that god wears black, day in and day out, preparing eulogies?

“It’s all too terrible,” they say. “Don’t make us think of it. Stay silent. What you are describing is heart-wrenchingly suicidal.” “Don’t make me sit here and put up with your rants,” someone thinks. “Is it really so?” a thought flashes through another’s mind.

The existential problem with miracles is, perhaps, the most persuasive one: no one can help but be moved by these stories. I, too, have dreamed of a miracle. But there is also a philosophical problem with miracles, and it is to this issue that I now turn.

Religious people—be they Muslim, Hindu, or Christian—claim that their god is capable of miracles. But what is a miracle? By definition, a miracle is a supernatural—a supra-natural—event: an event which is “above nature.” It is an event that does not accord with the known (and unknown) laws of physics, something that happens which no physical law could explain. A miracle is not the disappearance of a headache; headaches disappear all the time. A miracle is not the curing of cancer; cancers are cured often enough. A miracle is not the healing of insanity. A miracle is not the healing of fractured bones. All of these things happen naturally. So what is a miracle?

A miracle would be a person who walks on water without the aid of any kind of special shoes, footwear, or underwater bridges (you get the point). A miracle would, most simply, be an amputee whose amputated limb spontaneously reappears, rematerializes. (Notice that I did not say “re-grows.” It will probably be possible for us to do that in the future.) A miracle would be an event which, again, by definition, would convince any person capable of seeing and thinking along physical lines that this is not normal; that the event is strange, unheard of, physically impossible—in other words, simply in violation of natural law. The resurrection of Jesus, for example, would theoretically constitute a miracle.

Given such a loaded, strict, and robust definition of miracle (by “strict,” I mean that it excludes [possibly] every event that has ever occurred in history, excepting the origin of life and of the universe), how is it that people today still speak of miracles? You hear it all the time.[2] I have discovered one of the reasons. It comes from one of Christianity’s greatest liberal theologians, Friedrich Schleiermacher.

Schleiermacher defined miracles in an unfalsifiable way. When someone makes something, like miracles, unfalsifiable, two things follow: (a) every event becomes a “miracle,” and (b) there is no way to prove or disprove the event. Schleiermacher writes:

“Miracle is simply the religious name for event. Every event, even the most natural and usual, becomes a miracle, as soon as the religious view of it can be the dominant. To me all is miracle. In your sense the inexplicable and strange alone is miracle, in mine it is no miracle. The more religious you are, the more miracle would you see everywhere.”[3]

I could not have said it better. Schleiermacher and I agree: religion makes everything a miracle. Because everything becomes a miracle, nothing is miraculous anymore. Because everything becomes a miracle, the term “miracle” becomes devoid of meaning.

People do experience “miracles” today. Believe me, they do. All of life may be seen as one continuous miracle. From the Big Bang to the evolution of human life, all of this can seem miraculous, even to a skeptic. But miracles are not really events that happen; they are not singular events occurring in history on a daily, interventionist basis. Miracles are probably things like the origin of DNA. They are isolated events that appear miraculous. For just a moment. And then the scientific mind—be it religious or secular—finds a way to unravel the miraculous and make it mundane.

Such is the world we live in. It is full of mystery, of pain, of suffering, and of miracles. While the miracles we experience are probably non-existent, the one miracle we can claim is the miracle of today.

Written by: Moses Y. Mikheyev

FOOTNOTES:

[1] This is my own retelling of the story. For this, and other such stories, see Shane Claiborne, The Irresistible Revolution: Living as an Ordinary Radical (Grand Rapids: Zondervan, 2006), esp. 163-175.

[2] I have purposefully used the verb “hear” many times in this article. The reason is that miracles are, in my opinion, non-existent; they do not happen. This means that nobody has documentation, empirical evidence, etc., of a miracle to date. All you have is hearsay. Hence my use of the word “hear.”

[3] Friedrich Schleiermacher, On Religion: Speeches to Its Cultured Despisers, trans. John Oman (Louisville: John Knox, 1994), 88.

Against Suicide: On Meaning and Suicide—Why Suicides Demonstrate that Life Has Meaning

“Romeo killed himself because he could not have Juliet. The meaning of life for him was to possess that woman.”[1]

Vladimir Solovyov

We sometimes hear people lament that life has no inherent meaning. After an anguished display of profound sadness, some such individuals commit suicide. One day they lament; the next day their blood is oozing, filling the voids of an un-vacuumed carpet. What once held life and meaning is now an empty token demonstrating, empirically, that no such meaning exists (or ever existed). This seems to be the train of thought most depressed and suicidal individuals follow: they move from living a life of meaning to living a life of meaninglessness. And then they commit suicide. But is this really how things are? Do people not commit suicide precisely because they have discovered meaning? While this may not initially appear obvious, I think it is.

For years, I’ve read some of the most depressing and suicidal literature in the history of humankind. I’m talking about the writings of that melancholic Dane, Søren Kierkegaard. Moreover, I’ve also read Albert Camus’ The Myth of Sisyphus. I’ve amused myself with the musings of Jean-Paul Sartre, Dostovesky’s characters in Crime and Punishment and The Brothers Karamazov, and Immanuel Kant. Excepting the suicide-inducing characters of Dostoevsky, none of these philosophers have been able to really make an argument against suicide. What I mean to say is that I have not read anything that made me think: “Wow, I’ve never thought that before. I now want to live!” Usually, you read attacks on suicide and come away more depressed than ever.

Allow me to talk about Immanuel Kant’s views. Kant argued that human beings are bound by the categorical imperative. One of its formulations, commonly known as the Kingdom of Ends, articulates the idea that human beings are not “things” and cannot be treated merely as means to an end. By committing suicide, a human individual treats himself as a means to an end (i.e., a thing) and is also, by implication, willing that his action become a universal maxim.

“He who contemplates suicide should ask himself whether his action can be consistent with the idea of humanity as an end in itself. If he destroys himself in order to escape from painful circumstances, he uses a person merely as a mean to maintain a tolerable condition up to the end of life. But a man is not a thing, that is to say, something which can be used merely as means, but must in all his actions be always considered as an end in himself.”[2]

While Kant is right, to an extent, his views make the most sense in a perfect world. In a wretched place such as Earth, it is hard to find many suicidal people (if any) who would find his argument existentially convincing. If I am ever suicidal, do me a favor and please do not read Kant to me!

And so, after years of reading literature on suicide, I’ve recently run into a lone-wolf philosopher who has written something (finally!) meaningful. And I hope that you, too, will share my sympathies. Vladimir Solovyov argued that those who commit suicide actually prove that life has meaning. How so? Well, the person committing suicide is, in retrospect, deeming his life meaningless because of a loss of meaning. The suicidal individual is actually the one who acutely knows and feels what it is like to live a meaningful life; those who are suicidal are acutely aware of their loss of meaning. But in order for an individual to lose meaning, one must have had meaning. A loss presupposes past possession. Given this state of affairs, it doesn’t take a rocket scientist to figure out that suicidal people are some of the most meaning-driven people inhabiting our planet. These are individuals who seek meaning. In fact, they crave meaning so obsessively that they lose sight of meaning’s ever-changing reappearances. Meaning is something that reintroduces itself throughout the course of one’s life.

For the child, meaning (the noun) is identified with sucking on a lollipop. A few years later, meaning takes the shape of another human being—be it a friend, girlfriend, or boyfriend. Years later, meaning takes on yet another form. In old age, meaning can be rocking in a chair reading Antoine de Saint-Exupéry.

In suicide, meaning is suspended; it is prevented from evolving. Meaning becomes static, frozen in time, reduced to a non-evolving entity. It becomes a means to an end. It ceases to be something that changes, something that grows old with you. In essence, meaning becomes crucified.

Those of us who are suicidal have discovered just one form of meaning. Those who commit suicide over the loss of a family member have crystallized meaning in the Other. Those who are suicidal have actually come to the epitome of meaning: they have reached a point in their lives where meaning has become inextricably linked to, and made static in, the Other. This is why Romeo’s suicide makes so much sense. Romeo lived because he had found meaning. He died because he lost that meaning.

And so, we now come to Solovyov’s point:

“Pessimists who are in earnest and commit suicide also involuntarily prove that life has a meaning. I am thinking of conscious and self-possessed suicides, who kill themselves because of disappointment or despair. They supposed that life had a certain meaning which made it worth living, but became convinced that that meaning did not hold good. Unwilling to submit passively and unconsciously—as the theoretical pessimists do—to a different and unknown meaning, they take their own life.”[3]

What, then, is suicide? Suicidal thinking is the acute experience of an individual who has found—and then proceeded to lose—life’s meaning. It is the association of meaning with some (possibly external) object that has taken on an unnecessarily static form. Once the object is frozen in space and time, loss of the object amounts to loss of meaning. If, however, meaning is seen as an ever-changing “thing,” suicidal thinking becomes unnecessary. This is not to say that suicide is never an option; it is to say that many (if not all) suicides only serve to prove that life has meaning.

Written by: Moses Y. Mikheyev

FOOTNOTES:

[1] Vladimir Solovyov, preface to The Justification of the Good: An Essay on Moral Philosophy, trans. Nathalie A. Duddington (Grand Rapids: William B. Eerdmans Publishing Co., 2005), xviii.

[2] Immanuel Kant, Fundamental Principles of the Metaphysics of Ethics, trans. Thomas Kingsmill Abbott, 2nd ed. (New York: Longmans, Green, and Co., 1900), 56-7. Italics original.

[3] Solovyov, preface, xvii.