I am only three hundred pages into David Foster Wallace’s Infinite Jest (1996), which is a strange, idiosyncratic, yet majestic read so far. I am not sure what to make of a book where I skip over several pages at a time and then stop and slowly read over certain passages again and again. I suspect there will be more passages like this to come, but the one below is one I regularly come back to, since it is so applicable to our contemporary, “smart-phone”-charmed kind of life.
It turned out that there was something terribly stressful about visual telephone interfaces that hadn’t been stressful at all about voice-only interfaces. Videophone consumers seemed suddenly to realize that they’d been subject to an insidious but wholly marvelous delusion about conventional voice-only telephony. They’d never noticed it before, the delusion— it’s like it was so emotionally complex that it could be countenanced only in the context of its loss. Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you while also permitting you not to have to pay anything even close to complete attention to her. A traditional aural-only conversation— utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (6²) or 36 little pinholes— let you enter a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet— and this was the retrospectively marvelous part— even as you were dividing your attention between the phone call and all sorts of other idle little fuguelike activities, you were somehow never haunted by the suspicion that the person on the other end’s attention might be similarly divided. During a traditional call, e.g., as you let’s say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan. It was an illusion and the illusion was aural and aurally supported: the phone-line’s other end’s voice was dense, tightly compressed, and vectored right into your ear, enabling you to imagine that the voice’s owner’s attention was similarly compressed and focused… even though your own attention was not, was the thing. This bilateral illusion of unilateral attention was almost infantilely gratifying from an emotional standpoint: you got to believe you were receiving somebody’s complete attention without having to return it. Regarded with the objectivity of hindsight, the illusion appears arational, almost literally fantastic: it would be like being able both to lie and to trust other people at the same time.
Video telephony rendered the fantasy insupportable. Callers now found they had to compose the same sort of earnest, slightly overintense listener’s expression they had to compose for in-person exchanges. Those callers who out of unconscious habit succumbed to fuguelike doodling or pants-crease-adjustment now came off looking rude, absentminded, or childishly self-absorbed. Callers who even more unconsciously blemish-scanned or nostril-explored looked up to find horrified expressions on the video-faces at the other end. All of which resulted in videophonic stress.
Of all the things C.S. Lewis wrote with his fluent pen, my favorite comes from a short interjection about his experience in World War I, an event that is strangely absent from Lewis’ writings. Alan Jacobs, a literary critic and biographer of Lewis, suggests that the following passage is a “rhetorical hand-waving away the horrors of war” and “a critique of the massive literature by his fellow soldiers […]” (The Narnian, p. 74). Indeed, it seemed Lewis was bored by such realism, but that did not mean he could not express it. From Surprised by Joy (p. 196):
But for the rest, the war–the frights, the cold, the smell of H.E. [high-explosive shells], the horribly smashed men still moving like half-crushed beetles, the sitting or standing corpses, the landscape of sheer earth without a blade of grass, the boots worn day and night till they seemed to grow to your feet–all this shows rarely and faintly in memory. It is too cut off from the rest of my experience and seems to have happened to someone else. It is even in a way unimportant.
Jacobs goes on to remark that this is less than fully honest, or at least less than fully self-knowing, since Lewis’ correspondence with his family shows that he suffered from “nerves,” as many a returning soldier did (and still does). What I like about this passage is that Lewis clearly shows he is capable of adding to the great literary history of the War, but would rather not. Other things were more important to him, though what he wrote above has a tantalizing if not terrifying beauty to it.
I’m going to start a series where I post my favorite pieces of writing. These are passages I’ve found myself returning to and reading again and again. Today’s entry comes from Annie Dillard’s Pilgrim at Tinker Creek:
I used to have a cat, an old fighting tom, who would jump through the open window by my bed in the middle of the night and land on my chest. I’d half-awaken. He’d stick his skull under my nose and purr, stinking of urine and blood. Some nights he kneaded my bare chest with his front paws, powerfully, arching his back, as if sharpening his claws, or pummeling a mother for milk. And some mornings I’d wake in the daylight to find my body covered with paw prints in blood; I looked as though I’d been painted with roses.
It was hot, so hot the mirror felt warm. I washed before the mirror in a daze, my twisted summer sleep still hung about me like sea kelp. What blood was this, and what roses? It could have been the rose of union, the blood of murder, or the rose of beauty bare and the blood of some unspeakable sacrifice or birth. The sign on my body could have been an emblem or a stain, the keys to the kingdom or the mark of Cain. I never knew. I never knew as I washed, and the blood streaked, faded, and finally disappeared, whether I’d purified myself or ruined the blood sign of the passover. We wake, if we ever wake at all, to mystery, rumors of death, beauty, violence…. “Seems like we’re just set down here,” a woman said to me recently, “and don’t nobody know why.”
It’s been two weeks since I visited Auschwitz. I am still haunted by its stillness. The mountains of shoes, suitcases, and human hair stick in my memory, as do those horrible hooked spires of concrete, spiked with porcelain and strung with barbed wire. Seeing the reconstructed gas chamber and ovens was bad enough; seeing the barracks where prisoners wasted away was somehow worse–perhaps because they were more “authentic” in some vague way. All of it was ghastly. I often tried to imagine what it was like to sleep in such places, or to be herded through the Sauna, or to work as a member of the Sonderkommando. I couldn’t do it, not because of a failure of imagination, but because of an unwillingness to imagine it at all: my mind’s eye recoiled from looking too deeply into such things. That I recoiled despite my efforts to imagine seems natural, if not proper, to me; just as we snap our hands back quickly from a hot iron, so the mind buffets the will when it tries to imagine hell.
While the story of Auschwitz is long and awful, the story the death camp itself tells is dreadfully simple. Every site of interest has a small sign explaining what you are looking at, and almost all of them say, “Jews were murdered here” (sometimes it was Polish or Soviet prisoners). It’s like a drumbeat: the same assertion over and over without variation. Sometimes questions present themselves, such as “How?” and “When?”, but interest in the answers to those questions tends to diminish.
There is one sign, however, that tells a different story. In response to an escape, the SS selected ten other prisoners to die by starvation. One of them, a Polish army sergeant named Franciszek Gajowniczek, began to cry, “My wife! My children!” At this a Polish priest named Maximilian Kolbe stepped forward and asked to take his place. While starving, the priest sang hymns and led the prisoners in prayer. After two weeks of dehydration and starvation, only Kolbe remained alive; the Nazis finished him off with a lethal injection. This had a profoundly positive effect on the prisoners of the camp and inspired much hope in a very dark place. Amazingly, Gajowniczek survived and was a guest at the Vatican when John Paul II canonized Kolbe in 1982. Kolbe’s prison cell is now a shrine to which many Catholics make pilgrimage every year. To be sure, there were other heroic acts of humanity at Auschwitz, which I read about later. But this is the only one I saw written out in full on the understated signage at the camp.
Seeing Kolbe’s cell was the highlight for me; several times afterwards it made me think of the line from the Psalms: “Yea, though I walk through the valley of the shadow of death, you are with me.”
My thoughts concerning the is-ought fallacy are confused, because I am not sure what the content of the fallacy is supposed to be. If it is simply that one cannot derive an ‘ought’ from an ‘is’, a ready counterexample comes to mind:
- If a person sees a baby drowning and can help, that person ought to help.
- A person sees a baby drowning and can help.
- Therefore, that person ought to help.
There, I’ve done it–I’ve derived an ‘ought’ from an ‘is’. The argument is deductively valid; if the premises are true, the conclusion must be true. But are the premises true? While premise 2 isn’t true of me or anyone I know, it is true of someone. Thus, the soundness of the argument is not threatened by premise 2. What about premise 1? Ah, this is where the is-oughter can press her challenge. She can assert, “You are helping yourself to an ‘ought’ in the consequent of the conditional. What you have to do is give purely descriptive premises and then conclude with an ‘ought’, which is something you cannot do.” Thus, I would have to argue like so:
- A person sees a baby drowning and can help.
- Therefore, that person ought to help.
Now the argument is invalid: it asserts p and concludes q without any premise connecting them, which is a non sequitur. Thus, the is-oughter concludes, one cannot derive an ‘ought’ from an ‘is.’
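To make the contrast explicit, here is a minimal sketch of the two argument forms in my own shorthand (nothing here is the is-oughter’s notation), with D standing for “a person sees a baby drowning and can help” and O for “that person ought to help”:

```latex
% A minimal sketch of the two argument forms discussed above (my own shorthand).
% D = "a person sees a baby drowning and can help"
% O = "that person ought to help"
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\begin{align*}
&\text{Valid (modus ponens):}   && D \to O,\; D \;\vdash\; O \\
&\text{Invalid (non sequitur):} && D \;\nvdash\; O
\end{align*}
\end{document}
```

On this rendering, the is-oughter’s challenge is that the first argument succeeds only because an ‘ought’ was already built into the conditional premise, while from the purely descriptive D alone, O does not follow.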
I demur. Suppose I cannot derive an ‘ought’ from an ‘is’, and that I see a baby drowning and can help. Should I help? I should think so, and so should you. It would be no defense to appeal to the impossibility of deriving an ‘ought’ from an ‘is’ if I responded to the situation by saying, “That’s too bad,” and kept on my way. We know that such a response is wrong if we know anything is wrong (would it be wrong to punish me if my is-ought defense were legitimate? If so, then why?). But if we know this is wrong, then the problem is not with our moral knowledge but with our logical language, which does not have a rule of inference that allows us to move from an ‘is’ to an ‘ought.’ Just because the logic we are using is incomplete does not mean that there is no fact of the matter regarding what I should do.
Therefore, at best, the is-ought problem is one that besets our deontic logical languages. Perhaps there is a language that has a sound rule of inference by which we can derive an ‘ought’ from an ‘is’. The is-oughter cannot assume there isn’t one without begging the question. In any event, it is far from obvious that the is-ought problem besets our moral reasoning in general. We should beware of becoming ethical methodists who require that every empirically conditioned moral claim be justified by some method of derivation. Furthermore, we should make room for our intuitive moral judgments–they cannot be ignored.
I recently saw the movie Selma, and it prompted some thought about the distinction between intending to do something and foreseeing but not intending to do something. This distinction is a controversial one. Some think it makes all the difference and allows for a variety of actions that would normally be condemned, for example diverting a runaway trolley onto the one rather than the five, or bombing civilians in a raid on military targets. Proponents of the Doctrine of Double-Effect affirm this distinction–call this ‘position A.’ Others think it makes no difference; if one foresees a consequence of one’s action and acts anyway, then one intends to bring about that consequence. Utilitarian ethicists like Henry Sidgwick deny this distinction, as does the author of a book I’m reading right now, Ethics Without Intention. Call this ‘position B.’
How does this relate to Selma? Well, here’s my question: did the protesters who marched on Bloody Sunday intend to be violently repressed, or did they merely foresee, but not intend, it? (Yes, I realize only a philosopher could be so irreverently pedantic as to reflect on such an abstract question in the wake of such a solemn event. For what it’s worth, I shed tears while watching the movie and gasped while reading the first-hand accounts of it afterwards–I didn’t think about this question until later.)
At the end of February earlier this year I had the privilege of hearing Peter van Inwagen give the keynote address at the North Carolina Philosophical Society. His paper was entitled “The Problem of Fr** W*ll,” which sounds a lot like “The Problem of Free Will”–more on that in a moment. What follows is taken from his handout, and it is worth thinking hard about. At issue are three theses:
Thesis One: On at least some occasions when a human agent is trying to decide between two or more incompatible courses of action, that agent is able to perform each of them.
Thesis Two: If the bad consequences of a decision are ever the fault of the person who made the decision, then Thesis One is true.
Determinism: the thesis that the past and the laws of nature determine a unique future. Indeterminism: the thesis that the past and the laws of nature do not determine a unique future.
The problem: There are seemingly unanswerable arguments for the conclusion that Thesis One is incompatible with both determinism and indeterminism, and there are seemingly unanswerable arguments for Thesis Two. Since either determinism or indeterminism must be true, the conclusions of these arguments jointly imply that nothing is ever anyone’s fault–and it is evident that it’s simply false that nothing is ever anyone’s fault.
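To see why the theses generate a problem, it may help to lay out the bare structure of the reasoning. The schematization below is mine, not the handout’s: T1 abbreviates Thesis One, F the claim that the bad consequences of a decision are sometimes someone’s fault, and D determinism.

```latex
% My own schematization of the problem's structure (not taken from the handout).
% T1 = Thesis One, F = "bad consequences are sometimes someone's fault",
% D = determinism (so ~D is indeterminism).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
1.\;& \neg(T_1 \wedge D)       && \text{Thesis One is incompatible with determinism} \\
2.\;& \neg(T_1 \wedge \neg D)  && \text{Thesis One is incompatible with indeterminism} \\
3.\;& D \vee \neg D            && \text{either determinism or indeterminism is true} \\
4.\;& \neg T_1                 && \text{from 1, 2, and 3} \\
5.\;& F \rightarrow T_1        && \text{Thesis Two} \\
6.\;& \neg F                   && \text{from 4 and 5: nothing is ever anyone's fault}
\end{align*}
\end{document}
```

The evident falsity of line 6 is what makes this a problem: at least one of the seemingly unanswerable arguments behind lines 1, 2, and 5 must fail, and the difficulty lies in saying which.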