Seager Essays

From the bed in his Oxford suite, Allan Seager thought about the effort it would take to retrieve his cigarettes from a table across the room, and as much as he would’ve liked a smoke, decided that the four steps from bed to table and back were simply not worth it.

The porter, Haines, entered and frowned at the uneaten breakfast and listless young man shivering under a pile of bedclothes. If his charge had not roused by nine, the rules of Oriel College empowered Haines to summon a doctor, which he quickly did. Even so, both were thinking it was a touch of the flu, at worst. The gray-misted English days laid everyone low from time to time.

After an initial examination, the red-faced doctor concurred that it was flu and prescribed a weekend of bed rest, but as he packed his bag to leave, he heard Seager cough. Dismissing flu, he produced slides for the appropriate sputum samples and retired to his lab. He was back in an hour, asking Seager, “Where do you want to go, Switzerland, or the States?”

That was 1932, and at the time, Allan Seager was an accomplished young scholar, an American from the University of Michigan who had entered Oriel College on a Rhodes Scholarship and so impressed his tutor that the tutor thought it likely Seager would be his first pupil in many years to earn a First in his examinations.

A chest X-ray in the doctor’s office confirmed that Seager had tuberculosis and that he would have to leave Oxford for an extended stay in a sanitarium, where either he would get well or he would die, each being about as likely as the other.

By 1935, though, Seager was still alive, and E.J. O’Brien, the original Best American Short Stories editor, had declared that the “apostolic succession of the American short story” ran from Sherwood Anderson, to Ernest Hemingway, to Allan Seager.

Seager’s peers also expressed admiration; Robert Penn Warren, Carl Sandburg, and Sherwood Anderson himself all testified to Seager’s talent and skill. Poet and novelist James Dickey said he owed his career to reading Amos Berry, Seager’s third, and by many accounts, best novel, once remarking, “I doubt if I’d’ve tried to be a poet if it weren’t for Charles Berry [Amos’s son and the novel’s narrator]. There was no call for poetry in my background... But he wanted to try, and he kept on with it. So I did, too.”

Today, while Anderson and Hemingway are in the permanent canon, even at the time of his death in 1968 Seager was not widely known. Toward the end, he’d declare his own superiority to William Faulkner, whom he considered a minor, regional writer, while lamenting his own obscurity and recognizing his likely disappearance. He abandoned a last novel to complete a biography (The Glass House) of fellow Michigander and friend Theodore Roethke. Seager figured that his name might live on as long as Roethke was remembered.

Like all writers, he wanted to be listened to because he thought he had something important to say. The abandoned novel was to be about the dangers of humans living as slaves to automation as seen in Ford’s assembly line and the advancing computer age. In his notes, he remarked on an incident told to him by a female friend who worked for a survey research firm in Ann Arbor, Mich.

“Whenever she goes into the room with the computer, it sulks and spits back cards and (because of this) they have asked her to keep out. I think this is grotesque.”

In those same notes, he declared his intentions for the novel in all caps:

WHY DO I COME BEFORE YOU NOW WITH THIS SHEAF OF PAGES TO TRY TO ENGAGE YOUR ATTENTION? IF YOU PLOW THROUGH THESE PAGES, WHAT WILL HAVE HAPPENED TO YOU BY THE TIME YOU FINISH THEM? (I WOULD LIKE TO TEAR YOU APART, O WELL-EDUCATED MIDDLE CLASS READER. TEAR YOU APART AND LEAVE YOU IN PIECES AND NO SUGGESTION FROM THE KING’S MEN ON HOW TO GET BACK TOGETHER. PURELY DESCTRUCTIVE CRITICISIM.) [sic]

I remember Allan Seager because he was my great uncle—my paternal grandmother, Jane, was his younger sister. But “remember” is the wrong word, as Allan died two years before I was born, and well before that a falling out between the siblings over their father’s post-stroke care drove them permanently apart.

To me, then, he should mean little: the source of my father’s middle name, the writer of a few dusty books kept on the upper shelves of my parents’ house. But like his protagonist Charles Berry (and like James Dickey, apparently), I have little call for the life of the writer (lawyering seems to run in the family), yet I have kept with it for long enough to see the publication of a novel of my own on the near horizon, something I’m not sure I would’ve believed was possible without his example.

While Allan Seager’s name has faded, the first story he ever published is etched permanently into the American consciousness in a way even Hemingway can’t match. Chances are, you have heard or at least are familiar with this story. It concerns two seriously ill men sharing a hospital room, one man with a view out the window, and one man without. The man with the view spends his days regaling the other man with the various goings on, lovers walking arm in arm, children playing, and once even a parade. The man without the view grows slowly jealous of the man and his view and plots to get the bed by the window. If this story sounds at all familiar, you likely already know the ending.

 

By any measure, Allan Seager lived an uncommon life: two-time national champion swimmer at Michigan, Rhodes Scholar, and upon returning to the States, an assistant editor at Vanity Fair, where in addition to providing the magazine with a short story a month during his tenure, he chauffeured such 1930s luminaries as Joe Louis, Walt Disney, and Katharine Hepburn (whom he referred to as “bandy-legged”) to photo shoots. With characteristic confidence, he even once asked Loretta Young for a date, but was quickly shot down.

After leaving Vanity Fair in 1935, he joined the faculty at Michigan, where he remained until his death in 1968. In 1950, James Michener named Seager as one of the top creative writing teachers in the country. Seager’s first novel, Equinox (1943), was a best seller, and he wrote and published four more, along with two collections of stories, a Stendhal translation, and The Glass House (1968). In the ’40s, ’50s, and ’60s, hardly a month would go by without one of his stories appearing in the mainstream outlets: Good Housekeeping, Cosmopolitan, Redbook, Sports Illustrated, Esquire, the Atlantic, the New Yorker, etc. Sometimes more than one would run in the same issue, necessitating the use of “H.W. Fordyce” as a pen name. Better than 80 of his stories were published in his lifetime.


Stephen Connelly, the only scholar to do any extensive writing on Seager (and a former student of his), lays much of the blame for Seager’s ultimate obscurity on plain bad luck. When Equinox was published, the film rights quickly sold, and the novel was headed for a Literary Guild (think: Book of the Month Club) deal, but a wartime paper shortage stopped the sales at 40,000 copies. A farm Seager purchased for his father in Onsted, Mich., with money earned writing a radio serial (Scattergood Baines), was a constant financial drain made worse by his father’s stroke and the wartime drafting of the hired hand. In 1948, his first wife, Barbara, was diagnosed with multiple sclerosis, consuming Seager’s time as he cared for her and his two young daughters. His pursuit of the latest cures for Barbara cost heavy sums, paid for by the scads of slick, mainstream stories, which Seager viewed as well-paid hack work.

If that wasn’t enough, McDowell, Obolensky, the publisher of his final novel, Death of Anger, was heading toward bankruptcy when that book was released, and as Seager was proofing the manuscript for The Glass House, even as he lay dying in the hospital, Beatrice Roethke forced numerous cuts and withheld permission for use of her husband’s poems.

But after this lifetime of work, much of it accomplished and well received, we are primarily left with the story of the two men in the hospital. Seager’s version was called “The Street,” first published in the London Mercury in 1933 and republished in Vanity Fair in 1934. Today, though, the story, or something resembling it, is generally referred to as “The Window,” and you may have seen it in an email forward, or heard it in a church or synagogue, perhaps in a Sunday school class, paying half-attention while eating powdered doughnuts and sipping pale red punch.

The jealousy of the man without the view grows greater and greater, consuming his every waking thought, until he eventually sees his chance. One night, the man near the window begins to cough, then choke, and struggles to ring the bell to call the nurse. In his moment of truth, the man without the view lets his roommate expire, rather than calling for help.

When the dead man is taken away, the man without the view asks if perhaps he could get the bed by the window. The request granted, he prepares to savor his victory: “Slowly, painfully, he propped himself up on one elbow to take his first look at the world outside. Finally, he would have the joy of seeing it all himself. He strained to slowly turn to look out the window beside the bed.”

It faced a blank wall.

A Google search turns up many thousands of web pages that contain versions of the story. Sometimes the blank wall is made of brick, and other times, the window looks out onto an empty courtyard or an alley. Another common variation removes the immoral (non-)act of the roommate, and the man near the window simply dies quietly in his sleep. Often, there are standard religious messages tacked on as an addendum to the tale, aphorisms such as: “There is tremendous happiness in making others happy, despite our own situations.” “Shared grief is half the sorrow, but happiness, when shared, is doubled.” “If you want to feel rich, just count all of the things you have that money can’t buy.” Others sum up the story with a (usually unattributed) quote from Family Circus cartoonist Bill Keane, “Today is a gift, that’s why they call it the present!”

The wide dissemination of the present-day version is likely tied to its inclusion in the book, Laugh Again (1992) by Charles “Chuck” Swindoll, a Texas-based minister, leader of the Insight for Living ministries, and a prolific author of Christian self-help texts. Swindoll uses the story to draw a lesson on the difference between the man near the window who, despite his illness, chooses “joy,” in the form of these imaginative stories, versus the man away from the window who is instead consumed by envy and need. The moral is clear, and the twist ending archetypal enough to wedge the tale firmly in the memory banks upon hearing it.

The story is not Swindoll’s, though, nor does it belong to the man whom Swindoll credits with authorship, G.W. Target. That the story originates with Seager is nearly certain, mostly because he lived it.

It began in 1932, in that Oxford room, when Seager answered that quick-thinking doctor’s question, “Switzerland, or the States?”

“I think I’d better go home,” he said.

 

If Seager had been born 100 or so years earlier, a writer of romantic poetry, and named John Keats, the doctor who came to see him in his Oxford suite would have prescribed a regimen of horseback riding, strict rations of food, and daily doses of antimony. And, of course, the “bleedings,” which medical hindsight tells us was only very occasionally, and accidentally, an effective medical treatment.

The limited knowledge of disease treatment in the 1800s led to Keats being basically tortured into his grave by his well-meaning physician’s penchant for opening veins. Ironically, the poet’s only moments of relief during his decline came when a different doctor decided that Keats was simply suffering from a bad case of sensitive temperament. The starvation diet was ended, moderate exercise prescribed, and Keats recovered long enough to revise some last poems before a relapse that is famously blamed on bad reviews. Near death, Keats (like Seager) felt that his place was not secure, declaring, “I am one whose name is writ in water.”


Fortunately for Keats, subsequent generations rehabilitated his literary reputation. Fortunately for Seager, by 1932, treatment of tuberculosis had at least progressed, via a somewhat strange route, to a kind of “do less harm” approach. Rather than the bloodletting and starvation, Seager was given an artificial pneumothorax (an intentional deflation of one lung, which was thought to ease the infected area) and put on total bed rest at University Hospital in Ann Arbor. After several months there, Seager was eventually sent to the Trudeau Sanatorium in Saranac Lake, N.Y., in the thick of the Adirondacks, for “the cure,” a period of forced inactivity marked by light, graduated exercise in the form of walks, careful attention to diet, and copious amounts of bed rest.

Seager gives a very funny, Cuckoo’s Nest-like recounting of his time at Trudeau in his story “The Cure,” originally published in the Atlantic in 1964 and later collected in his volume A Frieze of Girls, a series of Sedaris-esque “fictional essays.” He tells of his Swiss roommate Karl, a “mechanical dentist” who kept “a few loose teeth scattered on the table in his dressing room” and suffered not only from TB, but also from hyperthyroidism, which meant he “paced up and down ceaselessly and meaninglessly all day like a big cat in the zoo.” As part of the cure, patients slept outside on screened porches in the extreme cold, which created an obvious problem that Seager solved with an early electric blanket, and something known as the “Adirondack Pack” (two pillows crisscrossed over the face, so only the thoroughly greased nose was exposed to the cold). Karl, being “close” with his money, approached the problem through different means:

He had so many wool blankets on his bed that he used a bookmark pinched from the library to tell where to insert himself. About eight o’clock at night he would start to prepare for bed. He would pull on the pants to a sweatsuit over a pair of flannel pajamas, tie the ankles, crumple the New York Times and Herald Tribune and stuff them down the legs until he looked like a tackling dummy; then he would tie the waist and stuff the top, put on a knitted cap, two pair of wool socks, a pair of gauntlets, and he would get into bed and rustle. This is enough to deter, if not prevent sleep entirely.

“The Cure” tells a tale of spirited individuals making sport of defying their possible death sentences, even celebrating the period between Christmas and New Year’s as a “licensed saturnalia”: trees decorated with cigarette foil, haphazard mixing of the men’s and women’s compounds featuring universal temporary amnesia regarding one’s marriage, and, to lubricate the proceedings, the indiscriminate enjoyment of bootleg liquor, which in one case, was unfortunately laced with pyridine, causing widespread but thankfully temporary paralysis among the patients.

The reality of life at a tuberculosis sanitarium was considerably less pleasant. The minimum stay was generally a year, with stints up to three years being fairly common, and with their regimented structure and close supervision of patients, the sanitariums, despite their bucolic surroundings, most closely resembled insane asylums with a dash of the leper colony. Their remote locations, Spartan accommodations, and the unknown nature of the disease’s transmission kept visitors at a minimum, and while walking up to two miles a day was permitted for those who could manage it, the persistent, low-grade fever that accompanies TB hindered sustained concentration and made reading or writing difficult. If a patient was going to read, it was suggested by doctors that he best keep it light, along the lines of romance, lest the brain become overtaxed. Many of the patients with advanced disease were even subjected to thoracoplasty (rib removal), today the alleged province of supermodels but at the time used to restrict the function of a diseased lung and aid in healing. At Trudeau, going in for “the rib” was the last step before leaving the Adirondacks in a pine box.

For Seager, this atmosphere was a strain. A national champion swimmer as an undergrad at Michigan, Seager had set an English-soil record in the 50- and 100-meter freestyle while at Oxford, even as the TB microbes were likely beginning their invasion. He had also read and studied literature for as long as he could remember. During college, he did this mostly in secret while maintaining the façade of champion swimmer and fraternity-scene raconteur. At Oxford, where he could study openly, his devotion to his studies only increased. At Trudeau, thanks to “the cure,” these releases were closed off to him.

It is clear that the ameliorative effects of time allowed for the light tone of “The Cure,” because remarks Seager made during his stay at Trudeau are markedly different, as expressed in a letter to Helen Trumble, an old girlfriend.

Dear Helen,

I would have written more but I was dejected. This place is a madhouse. High caloric food, confinement and the incessant idleness reduce the poor dopes to gibbering. It doesn’t take long. I am being abraded gradually into the worst state of nerves I’ve ever had. There is a neurotic tension always in the air—when anyone leaves the place for good, they are clapped out of the dining room after their last meal. That is the signal for a half dozen women to sob, and after all interviews with the staff, they break down if the report is not favorable. There are hectic outbursts of drunkenness, and as compensation for the strain, more silliness than I ever saw. It’s this that gets under my skin. I don’t worry about my lungs but more about the lethargy caused by systemic changes accompanying the disease. I’ve started a dozen things and let them drop. It always seems better to lie back and sleep. 

As difficult as the time was, in those moments of creeping madness, Allan Seager, the writer, began to take form.

 

The list of literary figures who suffered from tuberculosis is long and distinguished, in addition to Keats: Albert Camus, Percy Bysshe Shelley, Henry David Thoreau, Ralph Waldo Emerson (who lived to the ripe old age of 79 despite contracting it as a youth), George Orwell, Laurence Sterne, Charlotte, Emily, and Anne Brontë, Franz Kafka, H.G. Wells, W. Somerset Maugham, and Walker Percy (who survived his bout and gave up a medical practice for writing). Most of these either were killed or had their lives significantly shortened by the disease.

And of course, there is the great master, Anton Chekhov. In the hospital, knowing death was close, he famously drank a final glass of champagne before setting his head back on the pillow and dying. Chekhov’s experience illustrates the romantic side of the disease, the gradual diminishment of life that, in theory, allows one enough time to savor existence, yet also, due to the disease’s lethality, ensures an ultimate and tragic death. It is central to La Bohème, when Mimi expires in the final scene just after she and Rodolfo have recalled their happiest times. Verdi used it to the same effect in La Traviata, concluding the opera with the beautiful and tragic consumptive, Violetta, dying at her lover’s feet.

While there’s no reason to believe that tuberculosis struck the literary-inclined with any greater frequency than the general populace, there is evidence that struggle with the disease, and the enforced quietude attendant with it, helped to foster a keen eye, a mind better able to focus on the meanings of small gestures, and, in the words of medical historian Thomas Dormandy, an “acutely heightened awareness.” Dormandy saw it in the writings of Keats. Seager, after being released from Trudeau and returning to Oxford to finish his studies, recognized it in himself. Seager wrote again to Helen Trumble while in Paris during a break in his studies:

And so I have come again to Paris. It has proved something or other to me that I could come. (It’s like that now. I spend a lot of time trying to figure out just what sickness has done to me. It may be my old man in me but I think it’s mostly good. Everything looks good to me now, just the surfaces, and pray God always will.) The first few times I was here I looked at everything like hell because I was afraid I mightn’t ever see them again. Now that I’ve come this time, I know I can always come back, but since I lay up there in that bed, just a plain chair and table look different. There’s more tension, significance, and when you look long enough, repose. And at last, baby, I’ve started to write.

On his return to Oxford in the spring of 1933, Seager took his exams, earning a Second, the TB robbing him of his First. He applied for a third year of study, which was granted, but summer stretched in front of him with no money for travel and a need for a fresh artificial pneumothorax every two weeks. So he rented a cheap room above a pub, The Crown, in East Hanney, Berkshire, and spurred by spare time and his brush with death, he indeed began to write.

Seager provides an account of that summer in “The Last Return,” also collected in A Frieze of Girls. He brought no books other than a volume of de Maupassant’s stories in the original French. He slept quite a bit and played pub games: darts, shove ha’penny, and dominoes, each game for a half-pint of beer. He practiced shove ha’penny until he could win about as often as he lost, so the price of the games would not drain his limited funds.

He also struggled to find a voice as a writer:

For some time I had considered myself marked for a writing career but I had done nothing. I finished the Maupassant and it occurred to me to try to write a story as good as one of his. I had the story, based on an incident I had seen in the hospital, but I didn’t know how to begin. Contemplating the writer’s condition beforehand, how he did it seemed quite simple. Now that I was at the brink, it was not…. It seemed to me the right sentence was floating somewhere but to get it was like catching a rabbit with a hat, a quick move and it scuttled away. At last of course I got one and maybe two hundred more to follow it. I rewrote the story so often I knew it by heart and would have been glad to recite it had anyone asked me to.

As recounted in “The Last Return,” by the end of the summer he had an eight-page story that came from 126 pages of drafts. (This may be something of a personal fish tale, with the number of pages necessary to produce the final draft stretching over time. Writing to Trumble closer to the actual creation of the story, he recounted having drafted 60 pages to garner six.) The finished story was called “The Street” and was published in the London Mercury in 1933. He was tired of the story when he finished it, and not entirely pleased with his effort, but he was excited for his new life. Writing to Trumble again, he said, “I did not know what writing was. Not that I do now but I know more than I did. I have not done anything yet. It will take study and work but Christ it is swell to be at it at last.” 

That first story sent him on his way, gaining the attention of E.J. O’Brien, who put the next story Seager finished, “This Town and Salamanca,” into the 1935 edition of Best American Short Stories, and even dedicated the volume to Seager, his latest discovery.

 

“The Street”—unlike “The Window,” the somewhat simplistic tale it has morphed into and that we commonly recall—is subtle, scary, dryly funny in parts, and always finely observed. It is clear that the heightened sensitivity brought about by Seager’s treatment for tuberculosis informs his earliest foray into creation. He was ready to write and had something to write about. Seager’s worldview was informed by the book of Job: “Yet man is born unto trouble, as the sparks fly upward” (5:7). His “purely destructive” criticism was meant as a wake-up call to recognize man’s inherently base nature. “The Street” is his exploration of the darkness within himself. Seager’s original shares plot with its present incarnation, but is very different in important and instructive ways. Rather than the easily grasped moral of the contemporary version, “The Street” tells the more complex tale of an unnamed protagonist and his struggle with the confinement of disease, a struggle that ultimately leads him to madness.

As with the contemporary version, the man near the window spends his days describing various goings on, and eventually these tales irritate and frustrate the other man, causing him to covet the bed with the view. In “The Window,” this envy is strong enough for him to ultimately decide not to call the nurse for assistance as the man near the window chokes on the fluid in his lungs.

In “The Street,” though, the protagonist, while expressing these same feelings of resentment toward his roommate (named Whitaker), manages at first to quiet them:

At first he was astonished at his own baseness. He had always regarded himself as a decent fellow. He had not known that confinement and disease can taint decency. “This is silly. I really ought not to notice it at all. I am thirty years old. People, men, do not…” the naiveté of the process disgusted him but he went through it like a rosary every day for a while and sometimes he was polite to Whitaker afterward. As the days went by, each like the last, he hated Whitaker simply, and did not think about the baseness.           

Eventually, though, as time passes through the summer, the disease begins to wear on the man:

At sundown his fever rose and as the room grew dim, his head ached and the daytime clarity of his mind vanished. When he looked at the pale oblong of the window, it seemed to him a gateway and beyond it were all the joys and brightness he had been forced to forsake. If he could only stand at the gate and look out, but no, there was Whitaker like a demon guarding the way. It was the gateway to life, and in the darkness Whitaker’s gaunt face began to assume the shape and hollows of a skull. He was Death, of course. The reverend sonorities of the church arose in his mind, confusing him. Phrases about Death and Life, solemn and distorted, he remembered from hymns and prayer books. If he could vanquish Death, he would be granted everlasting life.

Only when the illness has allowed him to de-humanize Whitaker, and turn him into the personification of death, does he allow himself to consider the ultimate drastic inaction.

Suddenly one night he saw his course clearly: he would not press the button. When he heard Whitaker begin to twitch and pant, he would not press the button. The nurse would not come and Whitaker would die. Through the steaming days of late summer when the street outside was full of people that he could not see; while Whitaker lay describing with brutal detail these men and women who were well and strong, who could even go on holidays, he stared at the bare gray wall, smiling over his secret. The nurse would find Whitaker when she made her rounds. The doctors would come shaking their heads and clucking. They would wheel him out on a stretcher with a sheet over him. Then he would have the bed by the window. He could look out upon the world again and this would make him well. No, the next time—or the time after—he would not press the button.

Secure in this new plan and falling further from reality due to the persistence and progression of the disease, he starts being polite to Whitaker, and the transition in his mind from Whitaker = Man to Whitaker = Death becomes complete.

There was no more sullen silence before Whitaker’s unending chronicle; he made talk, even put questions with a false vivacity. This was difficult, for when he looked toward the other bed, he felt a horrible disgust. The frantic nights of fever had imprinted the image of Death on Whitaker, and though the sun was in the room, he saw the fleshless sockets and the bone. He always thought of Whitaker as “He” now, and in dealing with such an adversary one must be courteous so that He may not suspect anything. By this time he never doubted he was right. He was even righteous.

In his mind, he begins to equate the bed near the window with a tower from which he could look over the world and make it bloom, but Death bars the way:

Firm on that parapet, he would have the power, somehow, to make this bleak season blossom as the rose. That was it, “blossom as the rose.” He would make it warm and pleasant, not cold like the night outside and never so hot as his head felt now, but warm and green with bright flowers, and the people would be happy always. But he must have the tower so he could watch and see that nothing went wrong. Death barred the way and over Him he must gain the victory.

Finally, one night, the time comes:

Dim against the window like a shadow, he could see Him writhe and struggle for air. It sounded like a dog that had run too far. This was the time. Now He would die. He smiled calmly into the darkness and waited for Him to stop heaving and panting and shaking the bed and die. It was very severe this time, and at the end Whitaker broke into long dry sobs. Presently the sobbing stopped and the room was still except for the spatter of the rain on the glass. He looked over cautiously. At last. He was dead. In the morning when the street shone in the sun, he would go to the tower and keep watch over the city.

With Death defeated, the man requests the bed near the window, but (unlike the current version where he looks out onto a “blank wall”) the protagonist of “The Street” encounters this:

Soberly, without haste, he arranged the pillows, making ready to look out as one who had come into a kingdom. But when he looked, it was not into a sunlit street. There were no trees. Below him was a rear courtyard of the hospital, a blank place, and all day long it was empty.

By remaining entirely in the close third-person point of view of the man without the view, the story does not give the reader an opportunity to see the scene objectively and draw a distinction between the two men, as Rev. Swindoll’s lesson does. The act is not one of jealousy or covetousness. Instead, “The Street” tells us that illness and confronting death can turn a regular man to madness. “The Street” offers no perspective on moral behavior or choice. That any person would be defeated under the same circumstances seems likely, even inevitable.

Seager’s despair during his own illness went deeper than he liked to admit. At his lowest points, he wished for a dose of potassium cyanide to end it. But he didn’t give in to this temptation, and he felt ashamed about it afterward. Neither did he succumb to the madness that overcomes the protagonist of “The Street,” and it is in that decision that we see the split between life and artistry.

 

Today, one would expect a raft of lawsuits chasing plagiarists around the globe, as versions of “The Street” popped up hither and yon, but during his life, Seager had moved on from the story, and even, at times, seemed to express a disgruntled pleasure in the staying power of a story he never thought that much of.

None of the currently circulating versions, save a debunking of the “reality” of the tale at the urban legends website Snopes, identifies Seager as the original author. Seager himself declared that he’d seen plagiarized versions twice in magazines and three times on television. While in a doctor’s waiting room in Brazil (he was there seeking treatment for his wife’s multiple sclerosis), he saw the story in a magazine, done in Portuguese. He’d once even seen it attributed to Chekhov, which no doubt pleased him greatly. In class, when asked for an example of an oral tale that might make a good short story, one of his students told his own story back to him.

A thin volume from 1945, 101 Plots Used and Abused, contains a basic summary of “The Street” listed, strangely, as the 125th and last of the frequently abused plots. James Young, an editor at Collier’s and the compiler of the volume, credits the story to Seager and notes a recent spate of versions, at least six appearing in magazines around the time of his book’s publication. One from October 1943 even appeared in his own magazine. Young notes the superiority of Seager’s original and attributes the run of copies to the outbreak of World War II (most of the versions during this era involved soldiers in the hospital after being wounded). Interestingly, the version that Young cites removes the immoral act of the man without the view failing to summon help, and instead has the man near the window slipping away quietly in his sleep. The patriotic tenor of the times, it seems, would not allow a soldier to be shown acting so selfishly after sacrificing so much in battle.


The vast majority of the story’s bowdlerized versions, particularly those found today, are actually not attributed to anyone. When an author is named (as one was in Rev. Swindoll’s book), credit is most often given to G.W. Target.

Thanks to the good memory of British writer and journalist Quentin Crewe, we are able to trace this part of the tale’s journey. Writing in the London Sunday Mirror on Sept. 11, 1966, Crewe tells of hearing a story on BBC radio the previous Wednesday. The story involved two men in a hospital room, one with a view, and one without. The man without the view slowly grows jealous of the other man, and… well, we all know the end, as did Crewe. He was certain that he’d heard it before. “I remembered hearing it done as a radio play during the war. Was it one of those universal stories which crop up in all forms?”

Crewe called the writer of the episode, George Target, and asked after the origin of the story. Target conceded that it may be one of those universal stories, but said he had thought it up and had based it on an episode of Doctor Kildare, “about a man who could only be kept alive by being told how beautiful the world outside was.”

Unsatisfied, Crewe called the BBC and found the play he initially remembered. It was broadcast on July 20, 1944, and again in altered form in 1960, the same tale both times. Crewe was told that the play was adapted from a story by J.B.A. Seager (Seager used the initials of his full name, John Braithwaite Allan Seager, to seem more “British” in publications there), and that the adapter had died.

Undeterred, Crewe found a “Professor Allan Seager of Michigan University in the United States” and called him on the phone. Seager confirmed that he was the story’s original author and remarked, “I would gladly sell the rights to it for a hundred dollars, as it makes me so mad every time someone pinches it.” Crewe asked him where the story came from.

“It happened to me,” he said. “I was in a TB hospital. I was the guy without the window. The only difference was that I always pressed the button.”

TMN contributing writer John Warner’s first novel, The Funny Man, was recently published by Soho Press. He teaches at the College of Charleston and is co-color commentator for The Morning News Tournament of Books.


Living on the Edge

(Reply to seven essays on Consciousness Explained), Inquiry, 36, March 1993.

Daniel C. Dennett

In a survey of issues in philosophy of mind some years ago, I observed that "it is widely granted these days that dualism is not a serious view to contend with, but rather a cliff over which to push one's opponents." (Dennett, 1978, p.252) That was true enough, and I for one certainly didn't deplore the fact, but this rich array of essays tackling my book amply demonstrates that a cliff examined with care is better than a cliff ignored. And, as I have noted in my discussion of the blind spot and other gaps, you really can't perceive an edge unless you represent both sides of it. So one of the virtues of this gathering of essays is that it permits both friend and foe alike to take a good hard look at dualism, as represented by those who are tempted by it, those who can imagine no alternative to it, and those who, like me, still find it to be--in a word--hopeless.

The seven essays arrange themselves in such a way as to span the cliff edge handily. At one extreme, Clark and Sprigge are well over the edge, hovering, like cartoon characters held aloft by nothing but the strength of their convictions. It would be a crime to disillusion them. In the middle are Foster, and Fellows and O'Hear, utterly unpersuaded by my version of functionalist materialism, and willing to defend dualist (or apparently dualist) positions positively and vigorously, without begging the question against my alternative. Then there are the critics, Lockwood, Seager, and Siewert, whose sympathies lie (in the main) with the others, but who do not commit themselves to any solution to the problems, dualist or otherwise, and concentrate instead on flaws they think they detect in my arguments.

The idea of detecting flaws in my arguments must seem risible to Clark and Sprigge, who sometimes find it sufficient rebuttal simply to paraphrase one of my claims and append an exclamation point to it. But even this is useful; it goes to confirm one of my main claims: some serious thinkers find it impossible even to entertain my hypotheses. Not only do their incredulous dismissals testify to the vigor of the horses I am beating (something about which doubts have been expressed in some quarters), but they also provide independent benchmarks for re-calibrating my responses to the most persistent objections. For instance, my fictional Otto has been assailed as a stooge by some critics, in spite of the fact that his speeches are all, in fact, tightened up versions of actual objections raised against the penultimate draft. Here Otto finds friends galore. Lockwood, bless him, even describes him as my philosophical conscience!

All of the essays provide valuable clarifications and innovations--there is not a workaday or routine exercise in the lot--and I am proud to have provoked such a variety of contributions to our vision of these issues. There is a good deal of useful overlap, with several themes of mine attacked from slightly different angles, and I think the best way of exploiting this is to start with the most radical, over-the-edge (if not over-the-top) opposition, and work my way back to solid ground, occasionally skewing the order to take advantage of converging lines of attack.

Stephen R. L. Clark

Why may I not insist, against Dennett, that I do indeed know that I intend and feel, or that I know it better than I can possibly know the truth of any neurological theory he propounds?

Why not indeed? Feel free. Now what? Clark provides a marvelous bouquet of quotations, ancient and modern, to let readers see how different is the company we keep. I admire the resoluteness with which he issues his obiter dicta. It must be exhilarating to have such an uncomplicated and absolute faith. He recognizes that he is offering no arguments, but says that I don't either--or at best "very few"--and he does go on to vouchsafe that my technique is "not wholly reprehensible," for which I am grateful.

In the main, Clark leans on Searle, and in one of the few passages in his essay that comes to grips with an argument of mine, he rather seriously misrepresents what I say in rebuttal of the Chinese Room. He says:

Dennett simply composes a set of witty conversations, which, he says, might be reproducible by computers responding solely to syntactical cues. He then encourages his readers to believe that such conversations are 'proof' that such (wholly imaginary) computers 'understand' as well as we.

But Searle has stipulated (because his argument requires that he do so) that a computer could indeed be programmed to produce (not "reproduce") such a conversation responding solely to syntactical cues; Searle surely would not quarrel with any of those details, for he has allowed the defender of strong AI carte blanche in composing such stories, by stipulating that the resulting program might even pass the Turing Test. It is he, not I, who introduced wholly imaginary computer programs as the test-case for his argument. I simply point out a few of the less immediately apparent implications of this (obligatory) generosity on Searle's part, and, more to the point here, do not in any way imply--let alone say (as Clark's not wholly reprehensible quotation marks suggest)--that these reflections are 'proof' that computers can understand. What I say is:

it is no longer obvious, I trust, that there is no genuine understanding of the joke going on. Maybe the billions of actions . . . produce genuine understanding in the system after all. If your response to this hypothesis is that you haven't the faintest idea whether there would be genuine understanding in such a complex system, that is already enough to show that Searle's thought experiment depends, illicitly, on your imagining too simple a case, an irrelevant case, and drawing the 'obvious' conclusion from it. (p.438)

I quote myself at some length here just to show how I belabored the point that I was not offering the imagined conversation as a proof of computer understanding. Clark misses this, but then he misses Searle, too. In "Fast Thinking" (Dennett, 1987, ch. 9) I surmised that many of Searle's champions confuse Searle's actual conclusion (which I call S) with a look-alike conclusion (which I call D, and then defend, since there is something--not much, but something--to be said for it). Clark provides a confirming instance of my surmise; his parenthetical insertion of "wholly imaginary" in the passage quoted above makes no sense coming from a defender of Searle's S, but would be an appropriate caveat from one who defended D.

The affinity between Searle and self-professed dualists like Clark has always been an unholy alliance (if I may put it that way). Searle has always insisted that his position is not dualistic at all, that he is a good materialist, that the presence or absence of a soul or spirit has no bearing on the behavioral competence of a physical body so far as he's concerned. Those who think that the presence of "spirit" or "soul" gives us powers to act that no computer could even mimic should thus be indifferent to Searle's strange thesis, but so desperate are they, I suppose, to find champions who will keep evil AI at bay, that they forgive Searle for conceding their chief disagreement to the opposition. This has always posed something of a diplomatic problem for Searle, who hotly denies he is a dualist whenever challenged by us hardheads, but seems to have been less eager to alert his cheering dualist supporters to the embarrassing fact that they have entirely missed his point.

The other interesting feature of Clark's essay is his criticism of Dawkins' memes on what purport to be biological grounds: disanalogies between memes and genes. Dawkins does not make a genotype/phenotype distinction for memes, nor are there identifiable loci for memes. Neither of these is clearly a shortcoming of the meme concept--rather than just a difference--nor does Clark give any reason for supposing these disanalogies couldn't be removed if there were grounds for doing so. In fact, in a commentary (forthcoming a) on a new paper by Dawkins (forthcoming), I note that something like the genotype/phenotype distinction might yield a readily achieved improvement in his concept. Moreover, a functional (not anatomical) notion of locus for memes is clearly definable--whether or not it would prove useful is an interesting question. But that is not what Clark really objects to about memes; what he really objects to is that "It is just this sense of a divine intellect that is missing in meme-theory, and with it any respect for truth." But as Clark must know, this is a question-begging assertion, for some of us view this absence of the divine intellect as a prerequisite for any acceptable theory of meaning and truth, and respect for truth is the source of our conviction.

T. L. S. Sprigge

I am grateful to him for coming to the defense of "what Dennett abusively calls 'figment'," for he thereby renders explicit, for all to see, the covert reasoning that I have otherwise had to impute before criticizing. Here is his argument: First he draws a three-way distinction illustrated by

(1) my image of my friend

(2) my friend as I currently imagine him

(3) my friend as he really is

and goes on to claim that "the important thing here is that one should distinguish components of consciousness from the object these components function to set before me (in ways which differ significantly in perception from in thought and imagination)."

I agree; the actual components of whatever-it-is that is the medium of conscious content must be distinguished from the intentional objects thereby constituted or represented by those vehicles (whatever they are) of content. Sprigge goes on:

Of course if I think in a merely verbal way of a blue cow no component of consciousness will be blue in any sense at all, but if I imagine a blue cow my image, though it is not the blue cow I am imagining, has a certain quality which I am prepared to call 'blue' myself, though pedants may argue about its proper label till the cows, of whatever colour, come home.

So the cow is imaginary but the image of the cow is real, and the image has a certain (real) quality: "But the colour one can identify as a component of one's stream of consciousness is not primarily part of one's intentional world--it only becomes so insofar as it features in one's theory of the world, (as it does for me, but not Dennett)." That is, Sprigge believes there is something real with a certain (real) quality that I do not believe in. Sprigge objects to my name for this stuff ("figment"), but I don't understand how he can also object as follows: "All in all, it is very misleading to represent his opponents as believing in some extra ingredient in the objective world." How is it misleading? Sprigge calls himself a "psychical realist," and has just emphasized that the difference between us is that he believes in the reality of something that I deny the existence of. It must be existence "in the objective world" that is at issue, since I quite agree that figment has subjective reality for him--which just means he believes in it.

We agree that people can have beliefs about the components of their consciousness--he expresses quite a few such general beliefs of his own, as he notes. In such instances, the components themselves are the intentional objects of those (reflective) beliefs. Now the question that must divide us is this: are they merely intentional objects or are they also real? It depends, of course, on what you believe about the components you are thinking of. Whatever you think your own components-of-consciousness are, whether you are right, I claim, is always an open, empirical question, and the answer is never obvious--it only appears to be obvious to those who think they have privileged access to the "inherent" nature of these components.

Consider the room full of Marilyn Monroe pictures. Here there is no doubt at all what the heterophenomenology is: a vast, regular array of identical, high-resolution photos of Marilyn Monroe. That is the object of your (unreflective) consciousness. Now reflect on the conscious experience itself: What components does your consciousness have, in virtue of which this is its object? Are there details among the components or are the components quite minimal? If the intentional objects of the experience at a certain moment include, say, thirty high-resolution images of Marilyn Monroe, does it follow that there are components of your consciousness for each of these thirty images? Notice that you are not in a position of authority to answer this question.

Sprigge speaks of the "medium" in which even abstract thinking is conducted, and I agree that for there to be content, there must be vehicles of content--a medium in which that content is embodied. [Endnote 1] I say the medium (in human beings) is neural processes, and hence the real qualities of these components are the qualities that neural processes have in the normal exercise of their functions. Being blue is not one of them, but encoding-blue-in-virtue-of-such-and-such-functional-properties can be. Sprigge would perhaps object that the medium of consciousness certainly doesn't seem to be neural processes, but if he did, I would suggest that this is because he is himself failing to make the distinction he thinks I ignore: he is mistaking the content for the medium. (This point will become clearer below, I trust, in my discussion of Seager.)

I consider Sprigge to have paid me a backhanded compliment for my arguments against qualia: he proposes to overcome them by jettisoning the standard appeal to what he calls the Humean denial of necessary connections between distinct existences. He says "once one realizes that something can both be a distinct quality of experience with its own inherent nature and also be necessarily related to certain behavioral dispositions one is released from this unpalatable choice." Sprigge is right about one thing: all the qualia arguments in the literature share the assumption that there is a contradiction in any view which says that there is a necessary connection between qualia and behavioral effects on the one hand (functionalism, "logical behaviorism") and that their identity is "intrinsic" or "inherent" on the other. I'm not known for my appeals to the sanctity of philosophical traditions regarding necessity and possibility, but when someone declares that a revolution in metaphysics is the way to evade my objections, I consider myself to have hit a nerve.

John Foster

I am similarly delighted to see that Foster thinks my arguments have secured one of my main conclusions: "Dennett's reasoning is impeccable: There is no way of preserving forms of non-cognitive presentation within a materialist framework." Of course we draw opposite conclusions from then on: I say so much the worse for "non-cognitive presentation" and he says so much the worse for materialism. Foster is not alone in reaching this verdict. One of the main themes of my book is that it is harder to be a good materialist than most casual materialists have thought, and it has been fascinating to me to see how many closet dualists I have driven into the open. This is progress no matter who "wins," for it obliges materialists to take more seriously the issues that have underlain the dualist impulse all along, while also offering dualists a perspective on how they might achieve at least some of the goals they care about without acquiescing in a theoretical cul-de-sac.

Foster shows that I was wise not to claim to have offered a definitive refutation of dualism. I am quite sure that no such refutation is possible, but that is faint praise for dualism. We can imagine lots of non-starters of which that is true, such as the theory that says that everything is just five minutes old but arranged just as if the universe was some billions of years old. Foster himself is prepared to join me in dismissing epiphenomenalism as a non-serious (though irrefutable) theory.

My main objection to dualism is that it is an unnatural and unnecessary stopping point--a way of giving up, not a research program. That is quite enough for me. Foster eventually confronts this objection, and asks why I should want to avoid dualism at all costs. Only because that is my way of keeping the scientific enterprise going. It is a self-imposed constraint: never put "and then a miracle happens" in your theory. Now maybe there are miracles, but they are nothing science should ever posit in the course of business. Temporarily unexplained whatchamacallits are often a brilliant stopgap--Newtonian gravitational attraction as "action at a distance" before Einstein, Mendelian genes before the discovery of the structure of DNA--but positing something which one has reason to believe must be inexplicable is going too far. At one point Foster discusses the prospect of treating the fundamental physical laws as "probabilistic": "We could then think of the interventionist causal role of the non-physical mind as that of selecting between, or at least affecting the probabilities of, these physically possible states." Now if, like Roger Penrose (1989), Foster supposed this "interventionist causal role" was explicable in principle, he would be advocating, like Penrose, a non-conservative, expansionist materialism, but he goes on to deny this, and he notes, correctly, that this would be enough to make the view anathema to me.

This raises Foster's main point: isn't this way of characterizing the difference between unacceptable dualism and tolerable expansionist materialism vacuous or question-begging? Why, Foster asks, should the dualist be required to explain things more deeply than the materialist? I'd pose a more lenient demand: that the dualist offer any articulated, non-vacuous explanation of anything in the realm of psychology or mind-brain puzzles. Since I am simply proposing a constraint on what sort of theory to take seriously, it really doesn't matter to me (except as a matter of communicative convenience) whether the term "dualism" is defined in such a way as to permit varieties of dualism to meet the constraint. Indeed, Nicholas Humphrey (1992) declares that his position is, in a certain sense, a kind of dualism, and yet since it undertakes to meet the demands of objective science, I consider it radical, but eminently worthy of attention now--not a theory to postpone till doomsday. And if Penrose were to declare that his position, too, was really a sort of dualism, and if this understanding of the term caught on, I'd want to shift nomenclature and find some new blanket pejorative for theories that tolerate "and then a miracle happens."

This sheds light on Foster's claim that my "prior rejection of dualism" is "the basis" for my denial of the inner theater.

What enables Dennett to represent his functional approach as correct is that, given the falsity of Cartesian dualism, there is no possibility of finding a 'central conceptualizer and meaner' to be the subject of the irreducible cognitive states and activities which our initial (anti-functionalist) intuitions envisage; and without such a subject, there is no serious rival to an account of cognition along functionalist lines.

Yes, one could say that I diagnose dualism as a sort of false crutch for the imagination; it gives people the illusion that they can understand how there could be a "central conceptualizer and meaner." A case in point: Foster asks

If the mind is a non-physical substance, what is its intrinsic nature? But it seems to me that the dualist has no special problems here. To begin with, I do not myself see why the dualist needs to admit that there is anything more to the nature of the mind than what introspection can reveal.

Recall the challenge I put to Sprigge about the room full of Marilyns. I claimed that we are not authoritative about the "components" of our conscious experience. Is Foster disagreeing? What does introspection reveal to him about the contents of his mind in this instance? Does it include lots of high-resolution Marilyns? I wonder if Foster would endorse the speech I put in Otto's mouth:

You argue very persuasively that there aren't hundreds of high-resolution Marilyns in the brain, and then conclude that there aren't any anywhere! But I argue that since what I see is hundreds of high-resolution Marilyns, then since, as you argue, they aren't anywhere in my brain, they must be somewhere else--in my nonphysical mind! (p.359)

Either way, the dualist "has special problems"; endorsing Otto's reliance on his introspection exposes the vacuity of dualistic "investigation," and denying it leaves dualism with no avenues of exploration.

Foster's account of my view is largely accurate and sympathetic, in spite of his deep opposition to it, but there is one point where he slightly misrepresents it. He is right that I argue against any such thing as "distinctively presentational, non-cognitive awareness," but it is not quite right to say that my view is "that consciousness, whether sensory or introspective, is purely cognitive--a matter of acquiring beliefs or making judgments," since I grant that contentful states of all sorts occur, and have effects that are in their varying degrees and ways constitutive of consciousness. The disruptive effect of pain, for instance, is not "cognitive" in itself; it is surely among the weightiest of the dispositional factors that make something pain. In my discussion of Lockwood, I will review and expand upon the elements of my view that make Foster almost right.

Fellows and O'Hear

It is always reassuring to encounter accurate summaries of one's views by one's critics, and Fellows and O'Hear come up with some expressions that actually improve on the source text. For instance, their version of the Cartesian Theatre is nonpareil:

In the Cartesian Theatre, what seems to me is. If I seem to be a unitary self, then I am. If I seem to be seeing a bent stick or a pink elephant, then I am seeing such things--not in the external world, it is true, but in my private theatre nonetheless.

Their account of how I think heterophenomenology opens up an alternative to this vision is right on target: " . . . the fact that we are suffering from illusions with respect to our mental life does not show that there are illusory objects (qualia, selves) of which we are aware. For Dennett, it shows only that we are inclined to have and affirm false beliefs of various sorts."

They go on to attribute to me the "thesis that there is no more to experience than thinking, or that seeing is believing." This echoes Foster's reading (as just noted) and, again, I will defer my response on this crucial point to the discussion of Seager, and concentrate first on their alternative and its difficulties.

Dennett will say to us that our thought that we are more than zombies is an illusion we suffer from owing to a defective set of metaphors we use to think about the mind.

Exactly right.

We hope to show, by contrast, that Dennett's replacement metaphors leave out something which is essential to us as human beings.

What might that be? " . . . a seeming whose presence to our consciousness makes all the difference between human life and zombiehood or, quite simply, between animate and inanimate existence." Real seeming, in short. They cite Wittgenstein's claim that sensation is "not a something but not a nothing either," and go on to suggest that "the not-nothing which conscious experience is, is not the same thing as the judgments we make about it." They think, then, that I am not Wittgensteinian enough, whereas I think that this is one instance in which I am more Wittgensteinian than St. Ludwig himself, who chickened out in this oft-quoted passage. Or perhaps it is just his followers who have wanted to read more into his uncharacteristically awkward phrase than he meant. I suppose one can read a sort of vitalist subtext into some of Wittgenstein's comments, but that is the dark side of his inimitable chiaroscuro. In any event, Fellows and O'Hear support their suggestion not with further reflections from Wittgenstein, but--in a somewhat jarring juxtaposition--by a novel interpretation of Searle's Chinese Room.

They see exactly the point of my rejection of Searle's Chinese room, and they accurately recount my argument against it, but say it misses the point. Searle's argument has to do "with the ascent from formal or syntactic operations to semantical ones." Their discussion betrays a variety of naive misconceptions about computer science in general and AI in particular, but let's concentrate on the conclusion, not the path: programmes, they say, are intrinsically syntactical and only extrinsically semantical or meaning-bearing; "symbol shuffling by itself does not give any access to the meanings of the symbols." Access to what or to whom? I would say (deliberately adopting Searle's slanted terminology) that symbol shuffling is what makes there be something that could have access to, want access to, need access to, the meanings of symbols. They say "computer programmes need minds to read them, hence minds cannot be computer programmes merely." This just begs the question, of course, but it hints at the bottom line of their objection:

Dennett cannot, then, on pain of circularity, say that it is just another text or set of texts which gives texts meaning. . . . At least some texts will have to have the meaning-conferring properties of selves and agents.

Aha! They would break the threatened circle (or regress) with a Central Meaner. Or, being good Wittgensteinians, if not a Central Meaner, then an animate meaner over and above (somehow) the merely apparent meaner to be found in a zombie or robot. There is some very dubious animism or vitalism hinted at (e.g., in the concession about what a Frankenstein could, in principle, do with "living tissue"). I will forbear trotting out the usual objections to this vitalistic theme, since they are presumably as familiar and unpersuasive to Fellows and O'Hear as they are familiar and conclusive to me. Instead, I will simply highlight my alternative.

Philosophers often maneuver themselves into a position from which they can see only two alternatives: infinite regress versus some sort of "intrinsic" foundation--a Prime Mover of one sort or another. There is always another alternative, which naturalistic philosophers should look on with favor: a finite regress that peters out without marked foundations or thresholds or essences. Here is an easily avoided paradox: every mammal has a mammal for a mother--which implies an infinite genealogy of mammals (which cannot be the case). The solution is not to search for an essence of mammalhood that would permit us in principle to identify the Prime Mammal, but rather to tolerate a finite regress that connects mammals to their non-mammalian ancestors by a sequence that can only be partitioned arbitrarily. The reality of today's mammals is secure without foundations. (For more on this theme in my work, see Dennett, forthcoming b.) In this instance, the solution is to show via a finite regress (or progress, if one works "bottom up") how it can be the case that "intrinsically syntactic" mechanisms ultimately compose systems that deserve to be called semantic engines, capable of "seeing [each other's] outpourings in semantic terms."
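
Since the shape of this alternative is easy to lose in the abstract, here is a minimal toy sketch in Python. Every detail in it--the lineage, the trait score, the thresholds--is invented purely for illustration and comes from nothing in the essays under discussion; the point is only that when a property accumulates gradually, any choice of a "first" bearer is arbitrary, though the endpoint is a bearer by any standard.

# Toy sketch (all numbers invented): "mammalness" accumulates gradually along
# a lineage, so any threshold for picking out the Prime Mammal is arbitrary;
# the last member of the series is nevertheless a mammal by any standard.

def lineage(generations=1000, step=0.001):
    """Yield (generation, trait_score) pairs with a smoothly drifting score."""
    score = 0.0
    for g in range(generations):
        score += step
        yield g, score

def first_over(threshold):
    """Return the first generation whose score exceeds an arbitrary threshold."""
    for g, score in lineage():
        if score > threshold:
            return g

for t in (0.25, 0.50, 0.75):
    print(f"threshold {t}: 'first mammal' at generation {first_over(t)}")
print("final score:", round(max(score for _, score in lineage()), 3))

Different thresholds pick out different "Prime Mammals"; none is privileged, and the reality of the endpoint does not wait on the verdict.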

The last trump played by Fellows and O'Hear is an unflinching defense of the reality of the self: "Each of us can say that unless . . . there was a genuine sense of an 'I' as the centre of my experience, there would be no way to fix or locate a particular set of thoughts or texts as the place from which I operated." What puzzles me about this argument is that it seems quite obvious to me that everything they say about indexicality and confusion about location applies to zombies--we just have to add scare-quotes to avoid begging the question. Zombies are presumably as subject to the disruptive disorders of Multiple Personality Disorder and scatter-brainedness as we are, after all, and when they manage to rise above these afflictions, it is because they have a well-designed "sense of an 'I'." A zombie can "wonder" where he is, "discover" that the hand in the yellow glove is not his own hand after all, and "recognize" himself in a mirror. I don't see that Fellows and O'Hear have offered any reasons for dismissing the solution to the problem of indexicality I proposed: you are what you control. It's as simple as that. Or as I put it: "How come I can tell you all about what [goes] on in my head? Because that is what I am; . . . a knower and reporter of such things in such terms is what is me." (p.410)

Finally, after an excellent summary of my position, they say that I fail to explain thereby "how it is that Otto could be said to be mistaken unless there was an Otto who was, in the sense Dennett wants to rule out, the subject of the deception." Let's consider this objection in more detail. Suppose Otto were a mere zombie. Then he would be mistaken in his pseudo-beliefs (the pseudo-beliefs he "expresses" in the speeches I have given him). If an unconscious zombie can have a "belief", he can have a mistaken "belief". Fellows and O'Hear, like Searle and many others, have already conceded that there would be a certain explanatory robustness and systematicity to scare-quoted talk about the "beliefs" and other pseudo-mental states of zombies. It falls to them therefore to show that there is a further problem with the attribution of particular sorts of beliefs--e.g., indexical beliefs or higher-order beliefs or mistaken beliefs--and I see no grounds being given.

Michael Lockwood

The most strikingly pre-emptive criticism of my theory I have encountered is simple: I have (quite obviously) not explained consciousness at all; I have left out consciousness; I have "side-stepped" the central puzzle (see Endnote 2). I have missed the whole point. Now this would be a rather strange sort of neglect, even by the standards of neuropsychology--to set out to write a book explaining consciousness, and to write a book that actually accomplishes something but nevertheless entirely overlooks the project it putatively set out to cover. If I told you someone had written a book entitled The Arab-Israeli Conflict Solved, and that it somewhat surprisingly neglected to mention the fact that the Arabs and Israelis dispute the right to certain tracts of land in the Middle East, you would probably conclude that the author must be insane (or post-modern, if that means anything different).

But so alien is my explanation of consciousness to many readers that they do not even recognize it as a flawed explanation, or a refuted explanation of consciousness--they don't see that I have even tried to tackle what they consider the nub. Sometimes the confidence with which this view is expressed amuses me. It reminds me of an encounter I just learned about from a colleague. A team of medical educators recently returned to Boston from Mexico, where they had been engaged in a project to teach the principles of birth control to uneducated women in remote areas. Armed with all the latest audio-visual equipment, they had held large groups of these women spellbound with their videotapes of microscopic sperm wriggling their way towards an ovum, computer-animated diagrams of conception, and so forth. After one presentation, one of the spectators was asked her opinion. "It's really very interesting how you people make babies," she replied, "but here we don't do it that way. You see, our men have this milky fluid that comes out of their penises . . . " It is almost embarrassingly obvious to Sprigge and Clarke, for instance, that I could not be talking about consciousness--the consciousness they know so well--and so they are tempted to conclude that I must be a very different sort of animal!

Which is harder to credit: that I would write a book that didn't even try (in spite of its title), or that they (and so many of them!) would fail to see an attempt as an attempt? I could point to those who do see my book as offering not only an explanation but a good one, but the other side will clearly view them as taken in by my slippery rhetoric, lulled to sleep by tricks and examples. Somebody is missing something big; who is missing what?

Faced with such a curious question, I find Lockwood's essay a godsend, for he sees exactly what my attempt at explanation is, and sees that it is such an attempt, and such a radical one that he can scarcely believe that I mean it. But he gives me the benefit of the doubt, thank goodness. His first line of attack concerns the consciousness of animals and infants. This is a frequently voiced objection to my theory of consciousness as a culture-borne virtual machine: isn't my theory refuted by the obvious fact that animals bereft of culture (and, of course, newborn human infants) are conscious? Lockwood, appealing as so many do to Nagel's "what it is like to be" formula, says:

Consciousness in this sense is presumably to be found in all mammals, and probably in all birds, reptiles and amphibians as well.

It is the "presumably" and "probably" that I want us to attend to. Lockwood gives us no hint as to how he would set out to replace these terms with something more definite. I'm not asking for certainty. Birds aren't just probably warm-blooded, and amphibians aren't just presumably air-breathing. Nagel confessed at the outset not to know--or to have any recipe for discovering--where to draw the line as we descend the scale of complexity (or is it the cuddliness scale?). This embarrassment is standardly waved aside by those who find it just obvious that there is something it is like to be a bat or a dog, equally obvious that there is not something it is like to be a brick, and unhelpful at this time to dispute whether it is like anything to be a fish or a spider (to choose a few standard candidates for the median).

Fellows and O'Hear put the same point somewhat more circumspectly:

animals and human infants seem to be conscious perfectly well without the mediation of any culturally acquired 'software'.

I agree; they seem to be. But are they? And what does it mean to say they are or they aren't? It has passed for good philosophical form to invoke mutual agreement here that we know what we're talking about even if we can't explain it yet. I want to challenge that standard methodological assumption. I claim that this question has no clear pre-theoretical meaning, and that since this is so, it is ideally suited to play the deadly role of the "shared" intuition that conceals the solution from us. Maybe there really is a huge difference between us and all other species in this regard; maybe we should consider the idea that there could be unconscious pains (and that animal pain, though real, and--yes--morally important, was unconscious pain); maybe there is a certain amount of generous-minded delusion (which I once called the Beatrix Potter syndrome) in our bland mutual assurance that, as Lockwood puts it, "Pace Descartes, consciousness, thus construed, isn't remotely, on this planet, the monopoly of human beings."

How, though, could we ever explore these "maybes"? We could do so in a constructive, anchored way by first devising a theory that concentrated exclusively on human consciousness--the one variety about which we will brook no "maybes" or "probablys"--and then looking to see which features of that account apply to which animals, and why. There will still be plenty of time to throw out our theory if and when we find it fails to carve nature at the joints, and we might just learn something interesting.

Forget culture, forget language. The mystery begins with the lowliest organism which, when you stick a pin in it, say, doesn't merely react, but actually feels something.

Indeed, that is where the mystery begins if you insist on starting there, with the assumption that you know what you mean by the contrast between merely reacting and actually feeling. In an insightful essay on bats (and whether it is like anything to be a bat), Kathleen Akins (forthcoming) shows that Nagel inadvisedly assumes that a bat must have a point of view. There are many different stories that can be told from the vantage point of the various subsystems that go to make up a bat's nervous system, and they are all quite different. It is tempting, on learning these details, to ask ourselves "and where in the brain does the bat itself reside?" but this is an even more dubious question in the case of the bat than it is in our own case! There are many parallel stories that could be told about what goes on in you and me. What gives one of those stories about us pride of place at any one time is just that it is the story you or I will tell if asked (to put a complicated matter crudely). When we consider a creature that isn't a teller--has no language--what happens to the supposition that one of its stories is privileged? The hypothesis that there is one such story that would tell us (if we could understand it) what it is actually like to be that creature dangles with no evident foundation or source of motivation--except the dubious tradition appealed to by Lockwood, and Fellows and O'Hear.

Bats, like us, have plenty of relatively peripheral neural machinery devoted to "low level processing" of the sorts that are routinely supposed to be entirely unconscious in us. And bats have no machinery analogous to our machinery for issuing public protocols regarding their current subjective circumstances, of course. Do they then have some other "high level" or "central" system that plays a privileged role? Perhaps they do and perhaps they don't. Perhaps there is no role for such a level to play, no room for any system to perform the dimly imagined task of elevating merely unconscious neural processes to consciousness. Lockwood says "probably" all birds are conscious, but maybe some of them--or even all of them--are rather like sleepwalkers, or non-zimbo zombies! The hypothesis is not new. Descartes notoriously held a version of it, but it is Julian Jaynes (1976) who deserves credit for resurrecting it as a serious candidate for further consideration. It may be wrong, but it is not inconceivable--except to those who cling to their traditions as if they were life-rafts.

And what of the "one great blooming, buzzing confusion" of infant consciousness? (James, 1890, p.462). Well, vivid as James's oft-quoted (and misquoted) phrase is--a rival on the philosophy hit parade for Nagel's formula--it manifestly presumes more than any cautious investigator would claim to be able to support. That the inchoate human brain is unorganized to some degree is not in doubt; that this incipient jumble of competing circuits is experienced as anything at all by the infant is the merest presumption. It may be, and then again, it may not. The standard working assumption appealed to by Lockwood and Fellows and O'Hear doesn't let us consider these as open hypotheses, in spite of the considerable scientific grounds for doing so. At least some animals and infants seem to be conscious in just the way we adults are, but when we adopt an investigative strategy that first develops an articulated theory of adult human consciousness, and then attempt to apply it to other candidates (as I do in the last chapter), it turns out that appearances are misleading at best.

In particular, the very idea of there being a dividing line between those creatures "it is like something to be" and those that are mere "automata" begins to look like an artifact of our traditional presumptions. Since in the case of adult human consciousness there is no principled way of distinguishing when or if the mythic light bulb of consciousness is turned on (and shone on this or that item), debating whether it is "probable" that all mammals have it begins to look like wondering whether or not any birds are wise or reptiles have gumption. Of course if you simply will not contemplate the hypothesis that consciousness might turn out not to be a property that sunders the universe in twain, you will be sure that I must have overlooked consciousness altogether, since I entertain and even defend this hypothesis.

Lockwood recognizes that my defense of the scarcely credible hypothesis involves denying "the reality of the appearance itself." Like Siewert, whose views I will turn to next, Lockwood appreciates the pivotal role of the example of blindsight in my campaign against "real seeming." The standard presumption is that blindsight subjects make judgments (well, guesses) in the absence of any qualia, and I use this presumption to build the case that ordinary experience is not all that different.

Dennett's position, in effect, is that it is only in degree that normal sight differs from blindsight. Normal sight carries with it far greater confidence in the corresponding judgments, and is of vastly greater discriminative power; but there is, in the end, no qualitative difference between that and blindsight.

Exactly. Lockwood aptly presents my Marilyns case as a supporting argument, and grants: "Here, then, is a concrete instance of an illusion of phenomenology." It is only the extension of my claim from this example that he cannot accept, because he cannot see how I could explain "the activity of turning the 'spotlight of attention' on to the deliverances of our senses." He says:

So what are we supposed to be doing? Simply generating new judgments and checking the old ones against them? Surely not. Judgments, Otto would insist, are too anaemic, too high-level, too intellectual to do duty for the substance of sensation and perception.

Lockwood's Otto has just echoed the objections of Foster, Sprigge, and Fellows and O'Hear to my suggestion that "seeing is believing." A nice thing about Otto (even when it is somebody else putting the words in his mouth!) is that he actually suffers from the failures of imagination that I only suspect other philosophers of succumbing to. In this instance, Otto has usefully betrayed the source of his error: he is thinking of judgments on the mistaken model of a short, simple sentence you might say to yourself (with conviction) in, oh, less than a minute's worth of silent soliloquy. Such a judgment is pretty thin gruel, compared to the zing of real seemings. (As Lockwood adds, "I hear Otto asking: 'Is an orgasm merely a judgment, or bundle of judgments?'")

What are judgments, then, if they are not to be modeled on sentences expressed to oneself? Haven't I myself on occasion called them propositional episodes? Yes, and I beg to remind my fellow philosophers that propositions, officially, are not the same as sentences in any medium, and as abstractions they come in all "sizes." There is no upper bound on the "amount of content" in a single proposition, so a single, swift, rich "propositional episode" might (for all philosophical theory tells us) have so much content, in its brainish, non-sentential way, that an army of Prousts might fail to express it exhaustively in a library of volumes.

Is it "remotely credible" that "seeing is believing"? Lockwood's Otto is incredulous because he has fallen for some covert (or "sophisticated") version of the following: Seeing is like pictures, and believing is like sentences, so since a picture is worth a thousand words, seeing could not be believing!

If you think that the contrast between "merely verbal" and "imagistic" (Sprigge) secures a distinction between (informational) content and quality, you are dismissing a major theoretical option without trying properly to imagine it. (For more on this theme, see Dennett forthcoming c.)

Charles Siewert

Siewert, in his scrupulous, ingenious essay, shares with the other authors the reluctance to abandon qualia (or, in his terms, "visual quality") and, like Lockwood, he sees my discussion of blindsight as a weak link in my chain of arguments. I have claimed that there is no gradual story that can be coherently told that takes us from actual blindsight to zombiehood. Siewert accepts the challenge:

But now if we can imagine a sort of minor loss of consciousness with conscious-like responsiveness intact in the case of unprompted blindsight, then why not suppose this sort of loss gradually augmented, so that the variety and extent of consciousness diminishes finally to nothing, while the behavior remains that of a conscious human being? If there is no conceptual obstacle to this, we arrive in piecemeal fashion at the notion of a totally unconscious morpho-behavioral homologue to ourselves--the dreaded zombie. . . .

This is just the sort of examination of an intuition pump that I recommend. Can the crucial knobs be turned or not? He sees that the burden of proof here is delicately poised--judgments of conceivability or inconceivability are too easily come by to "count" without something like a supporting demonstration, and that will have to involve a careful survey of possible sources of illusion or confusion.

Let us first catalogue the differences that have to be traversed as we move from actual blindsight to the target of zombiehood (for the moment, just partial zombiehood--visual zombiehood). Actual blindsight subjects need to be prompted. They claim they see nothing (in their blind fields), and moreover, they don't spontaneously volunteer any judgments, or modulate their nonverbal actions on the basis of visual information arising from the blind region. Actual blindsight subjects exhibit sensitivity to a very limited or crude repertoire of contents: they can guess well about the shape of simple stimuli, and very recently (since Consciousness Explained was published) evidence of color-discrimination has been secured by Stoerig and Cowey (1992), but there is still no evidence that blindsight subjects have powers beyond what can be manifest in binary forced choice guessing of particular widely-dispersed properties. (No one has yet shown delicate color discriminations in blindsight--or even the capacity to tell a red cup on a green saucer from a green cup on a red saucer, for instance.)

My contention is that what people have in mind when they talk of "visual" consciousness, "actually seeing" and the like, is nothing over and above some collection or other of these missing talents (discriminatory, regulatory, evaluative, etc.); I don't know where to "draw the line"--I leave that to those who disagree with me--but I should think that any believer in visual properties is going to become embarrassed at some point in the traverse from actual blindsight to partial zombiehood. Let us look at Siewert's path. He asks you to imagine having a blindsight scotoma, but noticing that "you are on occasion struck by the thought, as by a powerful hunch or presentiment, that there was something just present (say, an X) in the area corresponding to your deficit." This thought, though conscious, would not be an instance of a conscious visual experience, however accurate and reliable, he claims, and he supposes that I would say it is a "terrible mistake" to claim to be able to imagine this prospect. But I agree that it is readily imaginable. I agree that if I found myself having such hunches, and grew to rely on them, I would still be unlikely to consider them instances of visual consciousness--but only because they are (as imagined by Siewert) so poor in content: an X or even an X suddenly moving left to right, and currently just about there or even a pink X. The paradigmatic presentiment is a content-sparse propositional episode, while vision is paradigmatically rich.

Siewert finds this line of mine unpersuasive. He even sees as "evasive" what I consider to be an essential move. Let me review the bidding: we're talking about an intuition-pump transition, excellently oriented and posed by Siewert, and I have drawn attention to a feature that is, I claim, doing the dirty work: the tacit assumption that the "amount of content" or whether a discriminative talent is "high-grade" (Siewert's term) makes no difference. This is the knob we must turn this way and that to see what happens. Let's try, slowly.

Can I imagine having a presentiment, lacking all visual quality, but with a full serving of visual content?

(A) It suddenly occurred to me that there was a wad of crumpled paper lying on the floor, shaped remarkably (if viewed from this angle) like a sleeping kitten, except that (it suddenly occurred to me) the sun was glinting off the edges just so, and this led me to have the further hunch that if I squinted, the wad of paper seemed to be exactly the same color as my bedspread over there. . . but of course there was nothing visual about my experience--I'm blind!

What effect does speech (A) have on your intuitions? As the content rises, as the visual competence becomes higher and higher grade, do you find yourself less willing to take the subject's word for what it is like? Perhaps you find yourself tempted to declare that nobody could have presentiments that rich in content without their being somehow based on, or at least accompanied by, visual qualities. That gives away the game, however, since it implies that there couldn't be a visual zombie after all; anyone who could pass all those "behavioral" vision tests would have to have visual qualities "on the inside". Anybody who said (A) to me would arouse my suspicion that they were suffering from some sort of hysterical linguistic amnesia (see Endnote 3). So I have trouble imagining myself asserting (A)--except as a joke. But I can imagine it. That is, I can imagine finding myself in the curious position of wanting to say that my current hunch had all that content (and more--much more than I could express in ordinary conversational time) while at the same time wanting to insist that nevertheless my experience was strangely missing something--something I might want to call visual quality. But when I imagine myself in this circumstance, I find myself hoping that I would also have the alertness to question my own desire to speak that way. "Gosh! Maybe I'm suffering from some strange sort of hysterical semantic amnesia!" After all, some colorblind people are oblivious of their affliction, and I suppose there could be an opposite condition, a sort of visual hypochondria, or what we might call "acute vision nostalgia." ("Oh yes, I can still make visual judgments, color judgments and the like, but, you know, things just don't look the way they used to look! In fact, things don't look to me like anything at all! I've lost all visual seeming--I'm blind, in fact!") What I have a very hard time imagining is what could induce me to think I could choose ("from the inside") between the hypothesis that I really had lost all visual quality (but none of the content), and the hypothesis that I had succumbed to the delusion that other people, no more gifted at visual discernment than I, enjoyed an extra sort of visual quality that I sadly lacked.

If I found myself in the imagined predicament, I might well panic. In a weak moment I might even convert, and give up my own theory. But that is just to say that in fact I not only can imagine that my theory is wrong; I can even imagine myself coming to believe that it is wrong. Big deal. I can also imagine myself having the presence of mind in these bizarre straits to take seriously the hypothesis that my own theory favors: I'm deluding myself about the absence of visual quality. That might even seem obvious to me.

So it is not, as Siewert realizes, a simple question of what he or I can and can't imagine. I have argued against the familiar idea among philosophers that blindsight offers a clean example of visual function without visual quality--a secure first step towards taking the concept of zombies seriously. I don't consider myself to have given a conclusive a priori argument against this idea, but just to have offered a plausible alternative account that explains (I claim) the same primary phenomena--and the secondary phenomena: the tendency of philosophers to overlook my account.

Siewert sees that there is an ominous stability (ominous by his lights) to my position, and he diagnoses its dependence on an epistemological position of mine he calls "third-person absolutism." As one who thinks absolutism of any sort is (almost!) always wrong, I heartily dislike the bloodcurdling connotations of this epithet, but I think he's got my epistemological position clear. Whatever the position is called, it is not a rare one. It is, in fact, the more or less standard or default epistemology assumed by scientists and other "naturalists" when dealing with other phenomena, and for good reason. As he notes, he has yet to work out the details of a defense of an alternative that doesn't slide into solipsism or something equally bad.

While waiting for him to compose a justification for his novel epistemology, with its "distinctive warrant", I propose a pre-emptive strike--at any rate a glancing blow. According to Siewert's neutral epistemology, certain things are conceivable that are not (or not clearly) conceivable according to standard "third-person absolutism". And so . . .? Would this show that these things are actually possible, or would it show that this novel epistemology is too lenient? Should science take these newly conceived possibilities seriously? Why? The "neutrality" of his proposed vantage point midway between traditional first-person authority and traditional third-person objectivity is fragile. At least the old infallibility doctrine had a certain self-supporting chutzpah in its favor.

William Seager

Seager's essay is constructed around the attempt to present me with a dilemma: "either his verificationism must be so strong as to yield highly implausible claims about conscious experience, or it will be too weak to impugn the reality of actual phenomenology." Like others (e.g., some of the commentaries in Behavioral and Brain Sciences on Dennett and Kinsbourne, 1992, and Baars and McGovern, forthcoming), Seager wants to diagnose the central move of my theory as a more or less standard verificationist power play--too strong to command assent. In the months that have intervened since my book went to press, I've composed several corrective passages that put my apparent arch-verificationism in better light. (Those darn Multiple Drafts--there's just no keeping up with them!) They are tailor-made, it turns out, to meet Seager's objections, so, with apologies for self-quotation, I will repeat them below, since I think it is valuable to get all this point and counterpoint together in a single place.

As Seager shows very clearly, in his careful discussion of the color phi case, the difference between his H1 and H2 is that while experience is "generated" in both cases, this happens either before (Orwellian) or after (Stalinesque) the binding of the mid-trajectory color shift. And he notes that my account "does not require consciousness at all"--that is, does not require the "generation of experience" as a separate component in the manner of H1 and H2. He finds this "disturbing", since it seems to him to imply that all is dark inside. But of course! We know that to be true. There is nothing "luminous" (as Purves put it) going on in the brain.

The conviction that this extra, well-lit process must occur is, of course, a persistent symptom of Cartesian Materialism, and Seager's view is illuminated (if I may put it that way) by Lockwood's ingenious dramatization. Lockwood imagines a version of my Multiple Drafts Model that retains the Cartesian Theater, as a stage (or at least a "pool of light") in which an "avant garde" play is performed, complete with inconsistent flashbacks, revisions, and tampered "instant replay" video. This does indeed preserve, as Lockwood claims, all the curious temporal features I used to disparage the Cartesian Theater, without abandoning the presentation process. I welcome this elaboration, for it lays bare the fundamental problem: Lockwood's troupe is not avant garde enough! Why should they bother with the boring and bourgeois ritual of actually presenting the play (with actors in costume, etc.) when all they really have to do is send their real-time script-revisions directly to the critics, libraries, and subscribers? The content is all there, in time to have its apposite effects (their play will seem to have been performed), and you save a fortune on lighting! There would have to be some extra role for presentation in this special medium, and Lockwood offers nothing but "common sense" in favor of the need for such a process or such a medium (see Endnote 4).

Seager tries to fill this gap by developing what he considers an embarrassment, if not a reductio. He formulates

(H3) There is conscious experience

(H4) There is no conscious experience, but (false) memories of conscious experience are being formed continuously . . .

and notes that it follows that I cannot distinguish these. He is right; H3 and H4 are just a different way of stating the apparently rival hypotheses that you are conscious or that you are a zimbo who only mistakenly thinks he's conscious, and my conclusion is indeed that the apparent difference between these hypotheses is an artifact of bad concepts. I should have made this more explicit in the book, but it took a critic, Bruce Mangan (forthcoming), to distill the essence of the point. Consciousness, he proposes, is "a distinct information-bearing medium":

He points out that there are many physically different media of information in our bodies: the ear drum, the saline solution in the cochlea, the basilar membrane, each with its own specific properties, some of which bear on its capacity as an information medium (e.g., the color of the basilar membrane is probably irrelevant, but its elasticity is crucial). "If consciousness is simply one more information-bearing medium among others, we can add it to an already rather long list of media without serious qualms."

But now consider: all the other media he mentions are fungible (replaceable in principle without loss of information-bearing capacity, so long as the relevant physical properties are preserved). As long as we're looking at human "peripherals" such as the lens of the eye, or the retina, or the auditory peripherals on Mangan's list, it is clear that one could well get by with an artificial replacement. So far, this is just shared common sense; I have never encountered a theorist who supposed an artificial lens or even a whole artificial eye was impossible; getting the artificial eye to yield vision just like the vision it replaces might be beyond technological feasibility, but only because of the intricacy or subtlety of the information-bearing properties of the biological medium.

And here is Mangan's hypothesis: when it comes to prosthetic replacements of media, all media are fungible in principle except one: the privileged central medium of consciousness itself, the medium that "counts" because representation in that medium is conscious experience. What a fine expression of Cartesian materialism! I wish I had thought of it myself. Now neurons are, undoubtedly, the basic building blocks of the medium of consciousness, and the question is: are they, too, fungible? The question of whether there could be a conscious silicon-brained robot is really the same question as whether, if your neurons were replaced by an informationally-equivalent medium, you would still be conscious. Now we can see why Mangan, Searle, and others are so exercised by the zombie question: they think of consciousness as a "distinct medium", not a distinct system of content that could be realized in many different media. . . . The alternative hypothesis, which looks pretty good, I think, once these implications are brought out, is that, first appearances to the contrary, consciousness itself is a content-system, not a medium. And that, of course, is why the distinction between a zombie and a really conscious person lapses, since a zombie has (by definition) exactly the same content-systems as the conscious person. (Dennett, forthcoming d)
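
The contrast between a medium and a content-system can be given a homely computational gloss. The following Python sketch is only an illustration of my own (the message and the "media" are invented for the purpose); it shows nothing more than that when what matters is the information borne, any carrier with the right properties serves as well as any other.

# Toy sketch (invented example): the same content realized in three different
# "media". Each medium is fungible: swap one for another and the content
# survives intact.

message = "the truce was signed"

media = {
    "utf8_bytes": message.encode("utf-8"),
    "code_points": [ord(c) for c in message],
    "binary_strings": "/".join(format(ord(c), "b") for c in message),
}

def recover(name, carrier):
    """Read the content back out of whichever medium carries it."""
    if name == "utf8_bytes":
        return carrier.decode("utf-8")
    if name == "code_points":
        return "".join(map(chr, carrier))
    if name == "binary_strings":
        return "".join(chr(int(b, 2)) for b in carrier.split("/"))

assert all(recover(n, c) == message for n, c in media.items())
print("content preserved across:", ", ".join(media))

If consciousness is a content-system rather than a privileged medium, there is no further question about which carrier held the "real" experience, any more than there is a question here about which carrier held the real message.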

So I do not shrink from the apparently embarrassing implication Seager adduces. He goes on, in any case, to offer further arguments against my supposed verificationism. The first concerns time scale. He can see no reason why the difference in time scale between the absent-minded driving case and the cutaneous rabbit case should lead me to describe them in different terms. I say the driving case is best described as "rolling consciousness with swift memory loss", and Seager quite properly asks why we shouldn't conceive of the cutaneous rabbit in just the same (Orwellian) way. There is indeed only a difference in degree--in elapsed time--but for that very reason, of course, also in collateral effects; and my claim is that these collateral effects are just the differences in degree that eventually yield us the only difference that can be made out. Consider the following parallel:

. . . certain sorts of questions about the British Empire have no answers, simply because the British Empire was nothing over and above the various institutions, bureaucracies and individuals that composed it. The question "Exactly when did the British Empire become informed of the truce in the War of 1812?" cannot be answered. The most that can be said is "Sometime between December 24, 1814 and mid-January, 1815." The signing of the truce was one official, intentional act of the Empire, but the later participation by the British forces in the Battle of New Orleans was another, and it was an act performed under the assumption that no truce had been signed. Even if we can give precise times for the various moments at which various officials of the Empire became informed, no one of these moments can be singled out--except arbitrarily--as the time the Empire itself was informed. Similarly, since You are nothing over and above the various subagencies and processes in your nervous system that compose you, the following sort of question is always a trap: "exactly when did I (as opposed to various parts of my brain) become informed (aware, conscious) of some event?" Conscious experience, in our view, is a succession of states constituted by various processes occurring in the brain, and not something over and above these processes that is caused by them. (Dennett and Kinsbourne, 1992b, pp.235-36)

There is nothing other than the various possible, normal or abnormal, collateral effects of various content-determinations that could count towards (or against) any particular verdict regarding the relative timing of consciousness, so when those effects are reduced to near zero, there is nothing left to motivate a verdict.
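
The British Empire parallel can be given the same sort of toy treatment. In the Python sketch below (the sub-agencies and notification times are all invented for illustration), the facts about the parts are fixed and precise, yet the question of when the whole was informed admits only of arbitrary answers.

# Toy sketch (invented names and dates): a "system" that is nothing over and
# above its sub-agencies. Each sub-agency learns of an event at a different
# time; any verdict about when the system itself was informed is a stipulation.

from statistics import median

informed_on_day = {"foreign_office": 2, "admiralty": 9, "colonial_garrison": 38}

criteria = {
    "first sub-agency informed": min(informed_on_day.values()),
    "last sub-agency informed": max(informed_on_day.values()),
    "median sub-agency informed": median(informed_on_day.values()),
}

for name, day in criteria.items():
    print(f"{name}: day {day}")

Each criterion yields a precise date, but singling one out as the moment the Empire itself was informed would be stipulation, not discovery; the parallel claim is that the same holds for the timing of "entry into consciousness" once the collateral effects shrink toward zero.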

This point is even better illustrated in response to Seager's discussion of dreams. He astutely observes that much of the theoretical apparatus of my 1976 paper, "Are Dreams Experiences?" foreshadows the analyses in Consciousness Explained--right down to my apparently outrageous suggestion, way back then, that one might dream a dream backwards but remember it back-to-front, a bit of elbow room for the brain that is not just possible in principle, but (I now claim) necessary in practice. What about forgotten dreams? Is it the case that no test could reveal whether we had them or not? No, there are lots of imaginable tests that could determine whether or not, while you slept, a particular narrative was activated, composed, re-activated, rehearsed, etc.--while remaining entirely inaccessible to waking recollection in the morning. What would be beyond testing is the apparent distinction between this all going on entirely unconsciously and going on in the consciousness of dreams.

So I can appeal to the findings of sleep researchers (of the future--as Seager says, the REM findings are nowhere near enough) to remove "forgotten dreams from the realm of the unverifiable"--but the price, which I for one will gladly pay, is that by "dreams" we have to equivocate (apparently) between conscious dreams and unconscious (zomboid) dreams. I am not sure why he thinks I hold there would be no way of investigating these hypotheses. I have already stipulated that all the various contents in all the narrative threads can be (in principle) identified, and their vehicles traced, timed, and located, so there will be no bar at all to the discovery of what Seager calls episodes of narrative spinning. There will be less reason than ever for calling them conscious, of course, and this was the germ of truth in Norman Malcolm's notorious claims.

This still seems verificationist, of course, but the appearance is misleading, and I now have a new way of clarifying my position, thanks to Lockwood. In a debate with me at Amherst College some months ago, Michael came up with a wonderful phrase (which appears in slightly revised form in his essay in this journal): consciousness, he said (with an air of reminding his audience of the obvious) is "the 'leading edge' of . . . memory." "Edge? Edge?" I replied, "What makes you think there is an edge?" and my response to him on that occasion has since grown into a separate paper, "Is Perception the 'Leading Edge' of Memory?" (forthcoming e). It also provoked me to compose yet another little story, which I have used to stave off this misconception in another reply to critics (Dennett, forthcoming d):

You go to the racetrack and watch three horses, Able, Baker and Charlie, gallop around the track. At pole 97 Able leads by a neck; at pole 98 Baker, at pole 99 Charlie, but then Able takes the lead again, and then Baker and Charlie run ahead neck and neck for a while, and then, eventually, all the horses slow down to a walk and are led off to the stable. You recount all this to a friend, who asks "Who won the race?" and you say, "Well, since there was no finish line, there's no telling. It wasn't a real race, you see, with a finish line. First one horse led and then another, and eventually they all stopped running." The event you witnessed was not a real race, but it was a real event--not some mere illusion or figment of your imagination. Just what kind of an event to call it is perhaps not clear, but whatever it was, it was as real as real can be.

Notice that verificationism has nothing to do with this case. You have simply pointed out to your friend that since there was no finish line, there is no fact of the matter about who "won the race" because there was no race. Your friend has simply attempted to apply an inappropriate concept to the phenomenon in question. That's just a straightforward logical point, and I don't see how anyone could deny it. You certainly don't have to be a verificationist to agree with it. I am making a parallel claim: the events in the brain that contribute to the composition of conscious experiences all have locations and times associated with them, and these can be measured as accurately as technology permits, but if there is no finish line in the brain that marks a divide between preconscious preparation and the real thing--if there is no finish line relative to which pre-experienced editorial revision can be distinguished from post-experienced editorial revision--the question of whether a particular revision is Orwellian or Stalinesque has no meaning.

There would be a finish line if there were, in the brain, a transduction of information into a new medium, but I have argued that there is no such transduction. The functions or competences that together compose what we think of as definitive of consciousness eventually come to apply to some of the various contents that float by in our brains; it is access to these functions, and nothing else, that puts contents into our streams of consciousness (in contrast to our streams of unconsciousness). There is a stream of consciousness, but there is no bridge over the stream!

***

I have tried, in these responses, to repay in kind the respect these critics have paid to my book. Better than ever I appreciate how hard it is to make oneself take seriously views one finds outrageous, how easy it is to be tempted by cheap caricature. These critics have set a good example, time and again coming up with keenly observed and constructively expressed versions of doctrines of which they are deeply skeptical. As one who is all too often deeply disappointed and embarrassed by the way my fellow philosophers snipe at each other, I would like to express my deep satisfaction with the way this encounter has come out.

When I have pounced with glee on telling turns of phrase in my opponents' essays, I hope I have managed to be as fair as they have been with me. My belief is that it is in relatively casual and unguarded choices of expression that we philosophers tend to betray what is really moving us, so opportunistic "pouncing" is an ineliminable part of philosophical method. What is required to keep it from deteriorating into cheap debating tricks and sea-lawyering is, on the side of the pouncer, a proper attention to the principle of charity, and on the side of the pouncee, a willingness to listen, to entertain the other side's points before composing rebuttals--or (wonder of wonders) concessions. It is a pleasure and an honor to count these philosophers as not just the loyal opposition, but as fellow investigators on what must be, in the end, a common project.

Endnotes

1. For more on the "medium" of consciousness, see below, in the discussion of Seager.

2. In his review in Science, Dale Purves (1992) claims I "sidestep" the question of consciousness in my book because what consciousness is is a "luminous and immediate sense of the present, about which we are quite certain." He never attempts to unpack that metaphor of luminosity, and in the end he allows as how "metaphor is not enough"--I quite agree.

3. Or perhaps that variety of temporal lobe epilepsy for which pronounced "philosophical interest" is known to be a defining symptom. See, e.g., Waxman and Geschwind, 1975.

4. He suggests that I am wrong about "filling in" and cites Ramachandran's recent research as bearing on this. I welcome the attention drawn to Ramachandran's work, for in fact it ends up supporting my view, not undermining it. The issues are much too involved to do justice to here, but Churchland and Ramachandran (forthcoming) present the attack in great detail, and I reply in kind in three papers (Dennett, 1992, forthcoming a and c).

References

Akins, K., forthcoming, "What is it Like to be Boring and Myopic?" in B. Dahlbom, ed., Dennett and his Critics: Demystifying Mind, Oxford: Blackwell.

Baars, B., and McGovern, K., forthcoming, "Does Philosophy Help or Hinder Scientific Work on Consciousness?" in Consciousness and Cognition.

Churchland, P. S., and Ramachandran, V. S., forthcoming, "Filling In: Why Dennett is Wrong," in B. Dahlbom, ed., Dennett and his Critics: Demystifying Mind, Oxford: Blackwell.

Dawkins, R., forthcoming, "Viruses of the Mind," in B. Dahlbom, ed., Dennett and his Critics: Demystifying Mind, Oxford: Blackwell.

Dennett, D. C., 1976, "Are Dreams Experiences?" Philosophical Review, April, pp. 151-71.

Dennett, D. C., 1978a "Current Issues in the Philosophy of Mind," American Philosophical Quarterly, October, pp. 249-61.

Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.

Dennett, D. C., 1992, "Filling in vs. Finding out: a ubiquitous confusion in cognitive science," in H. Pick, P. Van den Broek, D. Knill, eds., Cognition: Conceptual and Methodological Issues, Washington, DC: American Psychological Association.

Dennett, D. C., forthcoming a, "Back From the Drawing Board," in B. Dahlbom, ed., Dennett and his Critics: Demystifying Mind, Oxford: Blackwell.

Dennett, D. C., forthcoming b, "Self Portrait," for S. Guttenplan, ed., Companion to the Philosophy of Mind, Oxford: Blackwell.

Dennett, D. C., forthcoming c, "Seeing is Believing--or is it?" in K. Akins, ed., Perception (Vancouver Studies in Cognitive Science, vol 5), Oxford: Oxford Univ. Press.

Dennett, D. C., forthcoming d, "Caveat Emptor" (reply to my critics) in Consciousness and Cognition.

Dennett, D. C., forthcoming e, "Is Perception the 'Leading Edge' of Memory?" in A. Spadafora, ed., Memory and Oblivion, Locarno Conference, Locarno, Switzerland, October, 1992.

Dennett, D. C., and Kinsbourne, M., 1992, "Time and the observer: The Where and When of Consciousness in the Brain," Behavioral and Brain Sciences, 15, pp.183-200.

Dennett, D. C., and Kinsbourne, M., 1992b, "Escape from the Cartesian Theatre" (reply to commentators), Behavioral and Brain Sciences, 15, pp.234-47.

Humphrey, N., 1992, A History of the Mind, London: Chatto & Windus; New York: Simon & Schuster.

James, W., 1890, The Principles of Psychology, Cambridge, MA: Harvard University Press (1983 edition).

Jaynes, J., 1976, The Origin of Consciousness in the Breakdown of the Bicameral Mind, Boston: Houghton Mifflin.

Mangan, B., forthcoming, "Dennett, Consciousness, and the Sorrows of Functionalism," in Consciousness and Cognition.

Penrose, R., 1989, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford: Oxford Univ. Press.

Purves, D., 1992, "Consciousness Redux," Science, 257, pp.1291-2.

Stoerig, P., and Cowey, A., 1992, "Wavelength Discrimination in Blindsight," Brain, 115, pp.425-44.

Waxman, S. G., and Geschwind, N., 1975, "The interictal behavior syndrome of temporal lobe epilepsy," Archives of General Psychiatry, 32, pp.1580-86.
