The Twilight of the American Enlightenment
George Marsden
In retrospect, the striking thing to notice about mainstream liberals' faith is that they believed that their pragmatically based relativistic democratic principles might lead to a pluralistic or inclusive mainstream cultural consensus that, at least ideally, might be resisted only by reactionaries and ideologues on the fringes. By the end of the 1960s, any such hopes, even as just an ideal, had proven fanciful. Rather than America being an ever-broadening consensus society, drawing peoples of
all ethnicities, races, and religions into the mainstream, it became glaringly apparent that the nation was made up of many subcommunities and interest groups, and that, despite many
shared beliefs, some of their fundamental principles were incompatible with those of others. Young and old, white and black, pacifist and patriot, religious and secular, liberal religious and conservative religious, women and men, gay and straight, were all contending with each other, and there was no single set of principles by which to adjudicate the differences.
How was it possible for so many liberal thinkers of the midcentury to retain their faith in what amounted to the enlightenment conclusions of the founders (“liberty and justice for all,” and the like) while dismissing the enlightenment foundations on which those conclusions rested? Why did they not, like Lippmann or conservative thinkers of the day, see that the edifice on which they were building their pluralistic consensus was about to collapse? How could they be both skeptics regarding fixed first principles and believers in the principles of the American way?
The answer is that at the time the outlook seemed to make good sense as a comparative matter. Compared to the alternatives, especially compared to the incredible brutalities of twentieth-century totalitarianism, the prevailing American principles had indeed proved themselves. Unprecedented prosperity, moreover, had validated that the American capitalist system, for all its faults and inequalities, worked; it was even subject to incremental improvements. World War II had generated patriotism, loyalty to the American way, and dedication to freedom, and these shared ideals persisted in the Cold War. Furthermore, if one accepted the premise that natural scientific assumptions and methods provided the closest thing to objectivity that could be obtained, and the corollary that no religious or metaphysical creed could plausibly claim universality, then the nondogmatic relativistic pragmatic method of testing beliefs seemed the best hope for building a unified society.
In an era when World War II and the Cold War had created an unusual sense of unity, it seemed plausible not to worry about first principles. As Yale political scientist Robert Dahl put it in his 1956 book A Preface to Democratic Theory, “the assumptions that made the idea of natural rights intellectually defensible have tended to dissolve in modern times.” Still, that was no great concern, because the “strange hybrid” of the American political system had proven remarkably adaptable in its evolution over time. Or, as Dahl's Harvard counterpart, Louis Hartz, put it: “We have made the Enlightenment work in spite of itself, and surely it is time we ceased to be frightened of the mechanisms we have derived to do so.”[20]
At a time when such was the standard wisdom, Walter Lippmann appeared rather old-fashioned, at least when in the company of other liberals, in seeing the lack of foundations as so fundamental a problem as to demand a collective rebuilding of philosophical first principles. In many respects, the enlightenment still reigned in America, yet it continued to reign only by default. Lippmann's disagreement with his peers was not over whether a unifying consensus based on the founders' enlightenment principles should continue (they all agreed on that) but over whether it could long endure without the foundations on which the founders had built.
Such debates regarding pragmatism versus natural law might seem abstract and theoretical, but their implications were nowhere better illustrated than in the greatest domestic political struggle of the late 1950s and early 1960s: the civil rights movement.
Looking back from the twenty-first century, it is easy to see that there was a lot missing from the inclusive pluralism of the midcentury public intellectuals. The insiders were almost all white males; most were secularists; some had a Protestant background, but more were Jewish; and almost all lived in the Northeast. Catholics were useful because of their urban political power, but they were only barely beginning to gain a voice in national discussions and were not a discernible presence in major universities. Fundamentalists and evangelical Protestants were off the radar of most academics and cultural observers, except as reminders of anti-intellectualism in the nation's hinterlands. Those observers also regarded the South as largely a cultural backwater. Ethnic Protestants, even some
with considerable intellectual traditions, received no more hearing than ethnic Catholics. Hispanic and Asian Americans likewise were not thought of except as among those who would be drawn into the consensus.
African Americans, however, were a glaring absence. Unlike most other outsiders, they were not only largely ignored, they were often excluded, and that exclusion permeated almost every dimension of their lives. Many liberal intellectuals accordingly recognized racial discrimination and lack of “Negro” civil rights as the most flagrant reproach to American democracy. Arthur Schlesinger Jr., for instance, in his 1949 manifesto The Vital Center, declared that “the sin of racial pride still represents the most basic challenge to the American conscience,” and that even though we cannot “transform folkways and eradicate bigotry overnight,” we must “maintain an unrelenting attack on all forms of racial discrimination.”[21]
With the 1954 US Supreme Court decision mandating school integration, and President Eisenhower's use of federal troops to enforce integration in Little Rock, Arkansas, in 1957, everyone had to include racial issues among the top challenges facing the nation. Where one stood on how fast the nation should move on civil rights was a pretty good index of the degree of one's liberalism. In an era when liberals, with their emphasis on incremental changes, often looked like conservatives on many social issues, ending racial injustice was a matter on which they were typically dedicated to advocating substantial social change.[22]
Despite such dedication, one great shortcoming of the approach of consensus liberals to civil rights was that they, like the federal government itself, had no real solution to the problem of southern white intransigence. In general, the liberal political establishment held a view of human nature that was too naïvely optimistic to overcome the entrenched power and deeply held racial prejudices that undergirded southern public segregation. Arthur Schlesinger Jr., who may be taken as prototypical of the most thoughtful of such centrist liberals, partook in this naïveté. Despite being an admirer of Reinhold Niebuhr, Schlesinger did not take to heart the degrees to which human perversity could disrupt the operations of the pragmatic vital center. Thus, Schlesinger was so confident in the incremental problem-solving approach of the American experience that he could declare, “I am certain that history has equipped modern liberalism . . . to construct a society where men will be both free and happy.” His hope, which was typical of the liberalism of the time, was that prejudice would eventually yield to education. Already in 1949 he could claim with ungrounded optimism that “the South on the whole accepts the objectives of the civil rights program as legitimate, even though it may have serious and intelligible reservations about timing and methods.”[23]
In fact, efforts at incremental change only increased the backlash among southern white racists, and it took the African American protest movement to turn the tide. Martin Luther King Jr.'s effective leadership in that movement was built around a combination of the fervor of southern black revivalism and the power of nonviolent resistance. What might not be quite as evident is that the doctrine of nonviolent resistance was based on a realistic view of human nature: that power must be met with power. King recognized that a people without political power could nonetheless mobilize their moral power if they were willing to suffer in the cause of justice. To get that to happen, he drew on a tradition of fervor in the black churches.[24]
It needs to be added that, underlying these essential factors, what gave such widely compelling force to King's leadership and oratory was his bedrock conviction that moral law was built into the universe. In this he was different from most of the liberal proponents of civil rights. His conviction was grounded in his Christian beliefs, which in turn were shaped by the “personalist” theology he had studied at Boston University. Personalism was an idealist philosophy based on the premise that God's person was the center and the source of reality, and hence that human personality had moral significance in that it participated in that most basic aspect of reality. King said that personalism helped him to sustain a faith in a personal God. Integral to that faith was the conviction that God had “placed within the very structure of the universe certain absolute moral laws.”[25]
Everything else that King advocated for the movement followed from this confidence in a moral order. King believed that God was working in history toward bringing justice and his kingdom, although the process was not direct or inevitable, but involved human agency in combating evil. The power of nonresistance was a moral power that was built around the belief that all people have some degree of moral sensibility, and so moral suasion is a real form of power. Further, central to all moral actions must be the recognition that all persons, even one's enemies, are of infinite worth, because they are created in the image of God. Since personality is at the center of reality, history cannot be explained simply by economic forces, but is more basically a matter of personal and moral relationships. The goal of society, King proclaimed, ought to be a “beloved community” in which “brotherhood is a reality.” King blended his progressive idealism with the American political heritage (“let freedom ring”) in such a way as to revive the founding ideals with a latter-day force.[26]
Appeal to a higher moral law was the centerpiece of King's 1963 “Letter from the Birmingham Jail,” in which he admonished moderate white clergy for thinking it “unwise and untimely” to resist unjust laws. For such an audience King invoked St. Augustine to argue that “an unjust law is no law at all,” and St. Thomas Aquinas to say that “an unjust law is a human law that is not rooted in eternal and natural law.” King elaborated his personalist test for what was rooted in eternal or natural law: “Any law that uplifts human personality is just. Any law that degrades human personality is unjust.” By that standard, “all segregation laws are unjust because segregation distorts the soul and damages the personality.”[27]
King's invocation of objective moral law casts light upon the era in a couple of revealing ironies. Progressive observers celebrated King's stance and agreed that the segregation laws of the American South were self-evidently unjust. Yet the whole structure of King's thought and the motivation for his action rested on theistic and higher-law premises that many of those same observers believed to be self-evidently untrue. Secular liberal pragmatists could share in King's moral indignation
even while they lacked his rationale for universalizing such moral claims.
The other irony is that, just as the ideals of universal justice, equality, mutuality, peace, and integrated brotherhood were burning the brightest, they were lighting the torches of identity politics. By the time of King's death in 1968, the ideal of one American, integrated, consensus-based community had already flamed out, even though not everyone was ready to recognize that. Frustrated hopes had already turned portions of the African American community to Black Power and Black Pride. The African American civil rights movement became in some respects a model for other rights movements, particularly women's rights, gay rights, and rights for other minorities, but, although some of the rhetoric of justice and equality was similar, it was now reshaped by the frameworks of identity politics. Whatever the merits of these causes, rather than grounding reforms in a universalized moral order, their outlooks were often frankly shaped by perceptions and experiences unique to their group. American founding ideals, such as those of the self-evidence of rights to freedom and equality, were still often proclaimed as though they were moral absolutes, but they glittered as fragments in the ruins of the dream of shaping a nation on the basis of a universal moral order.
FOUR
The Problem of Authority: The Two Masters
If natural law could not be revived as a shared basis for mainstream moral authority, where might such authority come from? There were, of course, shared American traditions, such as liberty and justice, national loyalty, and equal opportunity, that carried some presumptive weight. But by what standards was one to determine the meanings of these very broad concepts when they conflicted or were matters of dispute? Or, when it came to what might be taught in the universities, or in the public schools, or in the magazines, advice books, or guides to life, what were the most commonly shared cultural authorities?
At all these levels of mainstream American life, from the highest intellectual forums to the most practical everyday advice columns, two such authorities were almost universally celebrated: the authority of the scientific method and the authority of the autonomous individual. If you were in a public setting in the 1950s, two of the things that you might say on which you would likely get the widest possible assent were, one, that one ought to be scientific, and two, that one ought to be true to oneself. But despite the immense acclaim for each of these ideals, there was also a lurking question as to whether these two great authorities, the one objective and the other subjective, were really compatible with each other. The grand hope in the Western world in the eighteenth century was that they would be: that enlightened science would establish principles of individual freedom. But since then, from the romanticism of the nineteenth century through the scientifically augmented totalitarianism of the twentieth, there were many reasons to suppose that they might be in conflict. Such debates were still going on in the mid-twentieth century. Yet, despite such arguments, when it came to the practical aspects of life, the most common and influential cultural attitude was that science and freedom were complementary rather than contradictory.
As one might expect, the points of tension were most sharply defined in the highly intellectual field of philosophy. On the side of freedom and the individual was the vogue of existentialism in midcentury American thought. Existentialism was largely imported from continental Europe, and it had the appeal of offering a frank look at the human predicament. In the late 1950s and early 1960s, existentialism was popular among sophisticated college students, beatniks, and others looking for alternatives to American conformity, complacency, and scientism.
One can quickly gain an appreciation for the appeal of existentialism as an expression of dissent from the mainstream by looking at what became the canonical American summation of the outlook, William Barrett's 1958 volume Irrational Man. Barrett, a professor of philosophy at New York University, summarized existentialism and its critique of Western civilization's dependence on rationality with compelling clarity.
It took the disasters of the twentieth century, Barrett observed, for modern Europeans to recognize that the rational ordering of society and hopes for material progress “had rested, like everything human, upon a void.” The modern person became a stranger to himself: “He saw that his rational and enlightened philosophy could no longer console him with the assurance that it satisfactorily answered the question What is man?” At the heart of existentialism, which Barrett illustrated in the philosophies of Søren Kierkegaard, Friedrich Nietzsche, Martin Heidegger, and Jean-Paul Sartre, was the project of facing the stark reality of one's own finitude, “the impotence of reason when confronted with the depths of existence, the threat of Nothingness and the solitary and unsheltered condition of the individual before this threat.” The emphasis on human finitude had the appeal of countering the “can do” optimism about human abilities so common in most homegrown American outlooks.
Barrett characterized existentialism as “the counter-Enlightenment come at last to philosophical expression,” saying that “it demonstrates that the ideology of the enlightenment is thin, abstract, and therefore dangerous.” The rationality and technological reasoning of the modern post-enlightenment world had not freed people, but detached them from meaningful identities. The “lonely crowd” had been discovered by Kierkegaard long before it was documented by David Riesman. Contrary to the enlightenment, which put the essence of man in his rationality, existentialism dealt with “the whole man,” including such “unpleasant things as death, anxiety, guilt, fear and trembling, and despair.” Modern man had tried to deny these realities or to explain them away through psychoanalysis. “We are still so rooted in the enlightenment – or uprooted in it – that these unpleasant aspects of life are like the Furies for us: hostile forces from which we would escape.” The lesson of the twentieth century was that even “the rationalism of the enlightenment will have to recognize that at the very heart of its light is also darkness.”
Despite this realism regarding the human condition, Barrett's existentialist solution otherwise fit much of the spirit of the time in emphasizing the primacy of the self. The difference from easy American optimism was, as he put it, “if, as the Existentialists hold, an authentic life is not handed to us on a platter, but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its threats and its promises.”
Existentialism represented one pole of philosophy and of midcentury culture and the arts: the pole celebrating individual freedom, self-determination, and even irrationality. Almost all of the rest of professional American philosophy clustered around the other pole, which flew the flag of rationality based on the scientific ideal. William Barrett was especially scathing in characterizing such tendencies among his fellow philosophers. In fact, if one wanted guidance regarding the meaning of life, he suggested, one of the least likely places to find it would be among professional philosophers. The dominant philosophies in American university philosophy departments, he observed, were examples of what had gone wrong in modern intellectual life. “The modern university,” Barrett declared, “is as much an expression of the specialization of the age as is the modern factory.” Modern knowledge had advanced through scientific specialization. Specialists focused on increasingly narrow and technical issues that only other specialists could understand. Philosophers, believing they needed to carve out a place for themselves in this scheme of things, had imitated the scientists in such specialization. Unlike physicists, however, whose retreat into esoteric specialization could eventually result in something as earthshaking as the production of the bomb, “the philosopher has no such explosive effect upon the life of his time.” Rather, philosophers had given up any traditional role of being the sages who helped guide society and instead were finding that they had less and less influence on anyone beyond other philosophers. “Their disputes have become disputes among themselves,” wrote Barrett.[1]
Barrett's complaint was based on the reality that American professional philosophy had come to be dominated by technical analytic philosophy, which indeed illustrated the disconnect between scientific models for knowledge and humanistic goals. These “logical positivists” were attempting to find definitive criteria for all genuine knowledge by carefully analyzing the differences between the language of hard empirical science and the less precise language used regarding ethics, art, or religion. The project of strict language analysis was developed by Bertrand Russell and G. E. Moore at Cambridge University and in the early work of Russell's most brilliant student, Ludwig Wittgenstein, in the early 1920s. One can gain a sense of what was involved by looking at a relatively accessible encapsulation in A. J. Ayer's Language, Truth, and Logic. First published in Great Britain in 1936, Ayer's overview was still widely used as a text in American colleges in the 1950s.[2]
According to Ayer, philosophy was a specialized branch of knowledge that was distinguishable from natural science in that it dealt not with empirical verification, but with the logic of propositions that might be proven true. For statements to be meaningful, they needed to meet one of two criteria: either they were statements that were tautologies, or they were statements that could be empirically verified. If nontautological statements were not, at least in principle, subject to empirical verification, they were, strictly speaking, meaningless. With this breathtaking victory by definition, Ayer could sweep away centuries of metaphysical discussions as “superstitions” and dismiss the possibility that theological statements could make truth claims about God. For instance, a seemingly empirical claim of a personal encounter with a deity told us only about the mental state of the observer; it said nothing about the existence of a transcendent being, because it was a statement that had “no literal significance.” Even an ethical statement, such as, “You acted wrongly in stealing that money,” was a “pseudo-concept” with no factual content, and nothing more than an “emotive” expression of a moral sentiment. Logical positivists were not saying that theological, or ethical, or aesthetic statements were pure gibberish and needed to be entirely abandoned. They were claiming these were just not the sorts of statements that could be used to make true-false claims.[3]
By the postwar era, many of the analytic philosophers, most notably Wittgenstein himself, were repudiating the strictest early logical-positivist criteria as too rigid and as leading to a sort of self-inflicted reductio ad absurdum. Nonetheless, logical positivism had helped to set the agenda of professional philosophy as a narrow specialization dealing with language and logic. Its purpose was to determine the most reliable foundations for a science of knowledge on which other sciences ought to be built. This project has since come to be called “classical foundationalism” by its many critics.[4]
In terms of a wider cultural analysis, one can see the dominance of analytical philosophy in American and British academia as a notable instance of that side of modern culture that was attempting to preserve the enlightenment ideal, an ideal that focused on developing principles and procedures of rationality that ought to command the assent of all open-minded hearers. Logical positivism preserved that ideal of finding common ground, but also pointed to the problem involved: strictly speaking (as analytical philosophers were), such agreement could only be established by severely limiting the range of rational discourse, so much so that there was almost nothing left worth talking about. No wonder, as William Barrett pointed out, that professional philosophy was one of the last places to go if one were searching for the meaning of life.
Furthermore, as Barrett also observed, the differentiation and specialization of modern intellectual life meant that philosophers were not providing foundations for any thought beyond their own discipline. An intelligent generalist, such as Walter Lippmann (or any middlebrow person, for that matter), was not likely to find much guidance from academic philosophers. That was in marked contrast to the situation a generation earlier, when Lippmann had been able to bring the insights of his teacher William James into the public arena. Furthermore, not only did social philosophers not turn to the analytic philosophers for guidance, but also, and more ironically, neither did the practitioners of the sciences themselves. Natural scientists already knew what worked. Moreover, in the social sciences, specialization meant that each discipline was a sovereign domain in which practitioners set their own standards for how best to study the slice of human activity that their specialties considered.
Though not many people were saying it at the time, it was symptomatic of the crisis in the mainstream thought of the day that few people were listening to its most brilliant philosophers. Existentialists did offer insights on personal authenticity, but their following was small. Analytic philosophers searched for scientific-style verification, but they spoke almost exclusively to each other.
If one is looking for the practical philosophies of the day that helped to shape the lives of ordinary people, the place to turn is the field of psychology. There one can find similar tensions between science and the individual, but in far more influential form. As psychology was a science, and one of its principal subjects was individual experience, it was inevitable that it would be a focal point for debates on the pivotal question of the day: How do scientific understandings of human behavior fit with faith in human autonomy and freedom? Western culture had inherited these two grand ideals, but did they support each other? In an era when many people had turned to psychology as a guide to life, that was a practical problem as well.
Although midcentury psychological theories related science to individual autonomy in many different ways, there were two views on the subject that marked opposite ends of the spectrum. These were the views represented first and foremost by B. F. Skinner and Carl Rogers.
B. F. Skinner is especially important to this issue because he was one of the few midcentury practitioners of the social sciences to directly address the question of the relationship between science and individual freedom. Skinner, born in 1904, grew up in a town in central Pennsylvania where he had from an early age challenged conventional authorities, and he was always an independent thinker. In addition to being trained as an experimental behavioral psychologist, he was an inventor in the tradition of American tinkerers. In October 1945, Ladies' Home Journal featured his “baby tender,” a climate-controlled box that he constructed for his own daughter to sleep in. His aim was to provide a safe environment for her that would also eliminate some of the burdens of parenting.[5]
In experimental psychology his most important invention was the “Skinner Box,” which was a mechanical device for automatically providing rewards to animals in order to reinforce their behavior as they were learning a task. Skinner was devoted to the stimulus-response model for understanding all learning, and he believed that the factors shaping human behavior and those shaping animal behavior differed only in complexity, not kind. People adopted behaviors that were positively reinforced, and they learned to avoid behaviors that were associated with unpleasant consequences or were negatively reinforced. As could be demonstrated with white rats in a Skinner Box, positive reinforcement worked better than punishments.