Beer and Circus
Murray Sperber
University president William Greiner explained, “You do [big-time] athletics because … it is certainly a major contribution to the total quality of student life and the visibility of your institution.” “Quality of student life” is often a code word for student partying in conjunction with college sports events, and the Buffalo athletic director suggested as much when he commented, “Not having big-time college athletics at Buffalo meant there was a quality of life element that was missing here” for our students. Also missing at Buffalo during the 1980s and into the 1990s was the usual number of undergraduates—the demographic drop in college enrollment affected UB, as did deep budget cuts by the State of New York. In addition, unlike schools able to recruit a sizable cohort of out-of-state students, Buffalo could not break into double digits in this endeavor. Hence the administrative belief that big-time college sports would solve the university's enrollment problems and that, with a winning team, the Flutie Factor would occur. Applications, including from out of state, had increased at the University of Massachusetts (Amherst campus) in the early and mid-1990s after the Minutemen had excellent runs in the NCAA men's basketball tourney; why couldn't that happen at SUNY-Buffalo? The school was changing its name to the University of Buffalo; why not transform its image?
“Every school wants to believe they will be the one to make it big” in college sports, explains NYU president Jay Oliva, but they mainly end up wasting huge amounts of money on the effort—funds that could be spent on academics. Newcomers also “believe that they can avoid the scandals that have marred Division I-A athletics,” as well as the accompanying bad publicity. But again, according to Oliva and other experts, the best-case scenario almost never happens; for example, UMass endured messy sports scandals during its basketball rise, as well as a negative beer-and-circus reputation as “ZooMass.” Nevertheless, although Buffalo has so far avoided major scandals (minor ones have occurred), it has generated a new kind of adverse publicity—the loser tag. (In addition to its winless football team, its basketball squad went 1–17 and 3–15 in its first years in Division I conference play.)
Moreover, true to newcomer form, Buffalo has racked up major financial losses. It joined the Mid-American Conference (MAC) and had to upgrade its intercollegiate athletic facilities to NCAA Division I-A standards at a multimillion-dollar cost. It also had to increase its athletic department's annual budget to the $10 million range (as opposed to the $3 million average in Division III), and school officials acknowledge that the sea of red ink will expand during the first decade of the twenty-first century. In a small-city sports market featuring the popular NFL Bills and NHL Sabres, and with almost no college sports tradition, marketing the collegiate Bulls is a “hard sell.” A local sportswriter noted: “Fans accustomed to seeing the [Miami] Dolphins won't be too thrilled about Kent State, Toledo, and Central Michigan [of the MAC]. The [UB] team will have to win big to draw … . Over the next few years, he [the head football coach] has to get the program to a high competitive level or risk embarrassing himself before disappointing home crowds.”
 
Why did UB embark upon this risky venture? The university president provided part of the answer when he discussed his hopes of moving his intercollegiate athletics program from the MAC to the Big East, and then to the Big Ten, his school's “peers in research, teaching, and service.” However, if his teams cannot win in the low-wattage MAC, how can they compete in higher-powered conferences? Only someone like this university president, an academic with apparently no knowledge of college sports recruiting, could believe that a new Division I-A team in Buffalo could suddenly snatch blue chip recruits away from Syracuse and Penn State—the dominant football powers in the region—or from Notre Dame, Michigan, and other national programs who regularly pluck high school All-Americans from the area. UB, at the bottom of the football recruiting food chain, can only scavenge the scraps that remain after the majors, as well as many lesser but well-established programs, obtain their fill. The University of Buffalo Bulls seem fated to recruit badly and lose an enormous number of football games.
However, the athletic director suggested another reason for Buffalo's “mission-driven athletics” when he signed on to his president's ambition to join a major athletic group like the Big Ten: “A big conference isn't going to reach down to some undergraduate teaching institution and say, ‘We want you to be with us.'” But “we fit the profile” of Big-time U's, and, like them, Buffalo does not emphasize undergraduate education. Therefore, big-time college sports—and, by implication, beer-and-circus—would make “a huge difference on campus. The students and faculty don't know what they're missing.”
According to The Princeton Review's 1999 and 2000 ratings of colleges and universities, UB undergraduates are definitely missing their professors. In the category “Professors make themselves scarce,” Buffalo came first in the entire United States both years. It also topped all other schools in “Professors [who] suck all life from materials” both years, and ranked third (2000) and fourth (1999) in “Least happy students.” Another directory, the Yale Insider's Guide to the Colleges, 1999 and 2000 editions, explained one source of undergraduate discontent at Buffalo: the enormous lecture classes and the fact that “professors don't usually know students' names or answer many questions in the larger classes.” One UB student commented, “There are many classes where I haven't come within 20 feet of my professor.” Predictably, Buffalo also ranked high in the Princeton Review category “Class discussions rare” (1999), and in a new category for 2000, “Teaching assistants teach too many upper-level courses.”
 
Within this context—the school's neglect of general undergraduate education—Buffalo's move to big-time college sports makes sense. UB's situation is typical of many universities: because they cannot provide their undergraduates with an adequate education but need their tuition dollars, they hope to improve “the quality of student life” on their campuses; in other words, bring on the beer-and-circus.
At Buffalo, because of the losing teams, the move to big-time college sports has failed up to now; nonetheless, at many schools with successful teams, beer-and-circus rules, and the student happiness level rises. Buffalo is using a paradigm that has succeeded elsewhere but, because of the demography of college sports recruiting, probably will never work for this school. Indeed, with consistently losing teams, UB might generate an anti-Flutie effect, an image as a “loser school” with declining enrollments. UB professor William Fischer worries that this “negative halo effect,” along with a decade of state funding cutbacks, will further “degrade” his school.
In contrast, Florida State University has achieved an almost permanent Flutie Factor. With fertile southern high school football fields to harvest, the Seminoles are always near or at the top of the national football polls, as well as the “party school” lists (FSU held its high ranking in the Princeton Review's “Party school” list throughout the 1990s). But FSU also rates very low in quality of undergraduate education. In addition, as a university with research ambitions, Florida State has poured millions into its research and graduate programs. This school is the current national champion in college football, and a prime example of an institution that provides its students with beer-and-circus and not much undergraduate education. If a beer-and-circus poll existed, FSU would be the national champ.
 
The next section of the book details the neglect of undergraduate education and the entrenchment of beer-and-circus. To understand this phenomenon, one must first examine the finances of higher education in the final decades of the twentieth century, and the inability of university leaders to confront the new economic reality, while at the same time pursuing research prestige for their institutions. With this framework in place, the role of beer-and-circus in the contemporary research university becomes clear.
COLLEGE LITE: LESS EDUCATIONALLY FILLING
SHAFT THE UNDERGRADUATES
In an influential early-1960s book, The Uses of the University, Clark Kerr, the president of the University of California system, contrasted the established research universities, for example, Harvard, Yale, and his campus at Berkeley, with newer schools striving for research prestige. He noted that “the mark of a university ‘on the make’ is a mad scramble for football stars and professorial luminaries. The former do little studying and the latter little teaching, and so they form a neat combination of muscle and intellect” that keeps the faculty and the collegiate students happy. In addition, the administrators who create this conjunction between football and faculty stars do well: they bring fame and fortune to their schools and enhance their jobs.
Kerr described the beginnings of a phenomenon that, because of the turmoil in higher education from the mid-1960s to the early 1970s, was temporarily put on hold. But his vision of universities “on the make” and their use of intercollegiate athletics as campus and public entertainment started to come true in the mid-1970s. He also foresaw an “inevitable” side-effect: “a superior [research] faculty results in an inferior concern for undergraduate teaching.”
This section of Beer and Circus focuses on this phenomenon: universities striving for research fame, neglecting undergraduate education, and promoting their college sports franchises.
 
 
Table 1 lists the universities in the 1906 ranking, matched against the order of the top 15 in 1982. These listings demonstrate that a reputation once attained usually keeps on drawing faculty members and resources that sustain the reputation … . Over the nearly 80 years from 1906 to 1982, only three institutions dropped out from those ranked as the top 15—but in each case not very much … and only three were added.
—Clark Kerr, University of California president emeritus
Clark Kerr went from the University of California to head the Carnegie Foundation for the Advancement of Teaching and, in 1991, he published an important article comparing the rankings of the top fifteen research universities in 1906 with those in 1982. Considering the momentous changes in higher education during that time span, his findings were unexpected but, after analysis, were entirely logical: the rich arrived first and stayed on top, and no matter what the rest did, they could never overtake these institutions. In 1906, the early period of university research and graduate schools, Ivy League universities dominated the top-fifteen list, and almost eighty years later, they continued to prevail. Similarly, the first private, non-Ivies that emphasized research and graduate education—Johns Hopkins, Chicago, MIT, and Stanford—were still in the top fifteen, as were the first public universities that embraced research and PhD programs—Berkeley, Michigan, and Wisconsin. Predictably, at the beginning of the twenty-first century, almost all of these schools remain in the top echelon, with only Duke and Cal Tech now consistently joining them.
Kerr titled his article, “The New Race to Be Harvard or Berkeley or Stanford,” and he began, “All 2,400 non-specialized institutions of higher learning in the United States aspire to higher things. These aspirations are particularly intense among the approximately 200 research and other doctorate-granting universities.” He then demonstrated that this race was a fool's errand for almost all participants. Additionally, it had negative side-effects for all schools, including the winners: the emphasis on research devalued undergraduate education, and “the regrettably low status of teaching in higher education provides faculty members less reward from that activity than they expect to gain from heightened research” work.
In his article, Kerr also discussed the phenomenon of “Upward Drift”: those universities, whether they could afford the cost or not, that relentlessly added graduate and doctoral programs in order to compete in the research prestige race. Moreover, administrators of Upward Drift schools chose this course of action during a time of economic difficulties for higher education: in the 1970s and 1980s, with the end of the baby boom, tuition revenue dropped; also, state legislators and taxpayers, disillusioned with most public agencies, drastically cut funding to higher education; and inflation squeezed every school's financial resources. But Upward Drift continued.
With diminished revenue, most schools had to make choices. Only the richest universities could afford to maintain high-powered graduate schools and quality undergraduate education programs. Some small private colleges that had started graduate programs during flush times cut them, concentrating their resources on undergraduate education. Upward Drift universities made the opposite choice: they put scarce dollars into their graduate schools and neglected undergraduate education. A 1990s study explained that the pursuit of research fame and prestige were the “potent drivers of institutional direction and decision-making” at Upward Drift U's. The study also indicated that these schools continued this policy in the 1990s, despite “much talk on campuses about downsizing and concentrating on the core business of undergraduate teaching.”
 
In 1973, Clark Kerr created a classification system for higher education that also provided a way to measure Upward Drift. His top category, “Research Universities I,” consisted of those institutions granting at least fifty PhD's per year, giving a “high priority to research,” and meeting various other criteria. The established research universities dominated the group, but, in the next two decades, a number of schools joined them. Significantly, almost all of the new members of Research Universities I also belonged to NCAA Division I, for example, Arizona State, Florida State, Kansas, Kentucky, Louisiana State, Nebraska, Temple, UConn, UMass, Virginia Tech, and West Virginia. However, even though these schools frequently had top-twenty college sports teams, none of them ever broke into the top fifty on the standard rankings of national universities. But all of these universities changed the nature of their institutions: as the authors of the Upward Drift study indicated, “Despite pressures to emphasize the role of undergraduate education, ambitious institutions” were and are “beguiled by the promise of prestige associated with doctorate-level education.” These universities spent, and continue to spend, enormous sums of money on their graduate departments, and much less proportionally on undergraduate teaching.
Upward Drift also involved schools moving up to “Research Universities II” (fewer doctoral programs than in RU-I, but still committed to graduate education). Among the new arrivals in II were Houston, Mississippi, Ohio U, Rhode Island, South Carolina, Texas Tech, and Wyoming—all members of NCAA Division I but, predictably, trailing their wealthier siblings in that field as well. Upward Drift continued in the lower categories—Doctorate-granting Universities I and II (smaller graduate programs and fewer PhD's per year)—and included many schools near the bottom of NCAA Division I trying to climb the research and athletic polls. Again, none of these universities ever made the top-fifty rankings of national universities, but they all chose to participate in the research game—even though, before the 1970s, some were liberal arts colleges doing a good job of educating undergraduates.
The universities sitting on top of the research polls throughout the twentieth century have always dictated the rules of the game. The result, according to one critic, “is a monolithic status system that pervades all of higher education, a system which places an inappropriate value on so-called ‘pure' research and on the national reputation for the person [the professor] and the institution that this research can bring.” Since the 1970s, the administrators of almost all universities have endorsed this “monolithic status system”—whether suitable to their particular campus or not—believing that research prestige was the way to attract attention to their institution and to improve its standing in the academic world.
For an Upward Drift school to move higher in the prestige polls, it has to pass a more established research institution. But higher-ranked schools are not standing still or drifting downward; in fact, they work hard to improve their positions in the polls. For example, the University of Illinois at Urbana-Champaign, with very tight budgets throughout the 1980s and 1990s, continued to pour millions into its graduate programs and to neglect its undergraduate ones. An editor of the University of Illinois student newspaper described the state of her campus in the late 1980s: “It's clear that all the money is going to research. It seems so blatant when you see the run-down English [and other classroom] buildings and the fancy new research buildings. The U of I is really a research park that allows undergraduates to hang around as long as they don't get in the way.”
 
 
At Rutgers University we have spent the past fifteen years [from the mid-1970s through the 1980s] successfully competing both for talented junior faculty [researchers] and for world-class scholars by promising them minimal teaching schedules. I know of junior colleagues who have been on the faculty roster for two years and have scarcely seen the inside of a classroom.
—Benjamin Barber, Rutgers professor
Schools try to ascend the academic polls by accumulating faculty who possess or will achieve research fame. Rutgers, the main public university in New Jersey, provides an example of a university “on the make” for research prestige. In the 1970s and 1980s, it aggressively tried to move up in the academic research world (it also entered big-time college sports at this time), but, for all of its efforts, as well as some success in faculty hiring, it never managed to break into the top-fifty rankings of national universities in the U.S. News poll (or the top twenty in the sports polls). Moreover, as Rutgers anthropologist Michael Moffatt documented in a 1980s book, general undergraduate education at the school was abysmal and deteriorating.
Professor Barber also related an anecdote about an “Ivy League university, disturbed by the disrepute into which teaching had fallen, [that] recently offered its faculty a teaching prize. The reward? A course off the following year!” Amazingly, other schools offered similar bonuses as part of their teaching awards. These stories, as well as the Rutgers tale, spotlighted the faculty's role in the deterioration of undergraduate education during the era of Upward Drift.
 
Trained in the old and the new graduate programs, most professors come from the ranks of academically inclined undergraduates, and exhibit the traditional professorial distaste for teaching large numbers of collegiates and vocationals. Only the faculty's academic “children” and some rebels were worthy of their time—but not too much of it. In a 1980s study, the Carnegie Foundation determined that at research universities, only 9 percent of the faculty spent more than eleven hours a week teaching undergraduates, whereas 65 percent logged less than ten hours a week in this endeavor, and 26 percent spent zero hours on undergraduate teaching (two decades later, there is even less classroom contact between faculty and undergraduates, particularly between faculty and nonhonors students).
Yet, the Carnegie investigators found that faculty members were busy with their research, a majority devoting more than twenty hours a week to it, and many over forty hours per week. Professors sometimes criticized their school's “publish-or-perish” syndrome, but they participated in it, usually quite willingly. Their language revealed their priorities: faculty referred to their “teaching loads,” as if pedagogy were a burden—at a time when most research universities established two-courses-per-semester as the standard teaching assignment for a faculty member, that is to say, six hours per week in class (however, at least one-third of all professors managed to spend fewer hours in a classroom, sometimes none at all). Faculty also talked about “research opportunities”—those bright, shiny projects and grants to live and die for. Moreover, when professors discussed their “own work,” they never meant their teaching, only their research.
 
In America, because money measures the value of work, universities send clear signals with their pay scales. Before the 1970s, a few star professors received more money and perks than their colleagues; however, most faculty salaries were uniformly low but equitable, with years in rank as the main criterion. Upward Drift and the tight budgets of the 1970s and 1980s created a new pay scale: universities generously rewarded all professors who furthered the institution's research goals, and they gave the rest of the faculty—no matter how excellent their teaching—minimal raises. Similarly, they rewarded “productive faculty,” a.k.a. researchers, with such perks as personal research accounts, extended paid leaves to do research, and fewer, if any, undergraduate courses. Only faculty who became full-time administrators continued to climb the salary ladder, but not with the same speed as the outstanding researchers.
In addition, in promotion and tenure decisions, universities emphasized research achievements and potential to a greater extent than previously; if a candidate was an ordinary researcher but an outstanding teacher, his or her chances for promotion and tenure were slim to none. The research imperative drove the reward system, but American business culture, notably its obsession with quantitative measurements and numbers, influenced the process. University administrators and committees could count a faculty member's publications; however, they could not evaluate teaching in any numerical way (even quantitative student evaluations were and are unreliable because of instructor manipulation and student subjectivity). Most important, research built a faculty member's reputation outside the institution and reflected back upon the school, enhancing its reputation; whereas the fame of even a superb undergraduate teacher rarely extended beyond campus boundaries and made almost no impact on the national ranking of the university.
