The Sabermetric Revolution: Assessing the Growth of Analytics in Baseball


WPCT = f(PAY, PAY²)

The R² from this regression is 0.222, meaning that PAY and PAY² together explain 22.2 percent of the variance in win percentage, or that nonpayroll factors explain 77.8 percent. This result, of course, leaves ample margin for the influence of good scouting, good sabermetrics, good health, good chemistry, good managing, and good luck.
21

The portion of win percentage that is not explained by the payroll factors is known as the residuals. To understand how the residual for each team in each year is generated, consider the Tampa Bay Rays in 2008. The estimated equation of win percentage on payroll is Win Pct = 0.397 + 0.117 PAY − 0.012 PAY². The Rays spent about $51 million that season, which represented about 53 percent of the league average payroll. Using the equation above, this leads to the estimate that the Rays would have a 2008 win percentage of .455, which translates to about 74 wins. Their actual win percentage was .599 (97 wins). In this case, the Rays have a positive residual of .144 in 2008. Another way of looking at it is that .144 was the part of the Rays' win percentage that was not explained by the Rays' payroll.

Our next step is to see whether teams' sabermetric intensity scores are correlated with their residuals in this win percentage regression. Specifically, we estimate the equation,

Residuals = f(SI),

for the entire period, 1985–2011. Our saber-intensity index explains 36.7 percent of the variance of the residuals; that is, SI explains just over one-third of that portion of team success that is not explained by team payroll.

Residual WPCT = −2.845 + 2.844 SI,   R² = 0.367

Viewed differently, for every .01 points that the SI index increases, a team's win percentage increases by .028 points, or by roughly 4.5 games.
22

This result appears to give a ringing endorsement to sabermetrics. As a simple illustration, if a team can hire a corps of three statistical analysts for, say, $200,000 and thereby raise its sabermetric intensity from the average 1.00 to 1.01 (or by one percent), it would be equivalent to signing a player with a WAR of 4.5.
23
Such a player is likely to cost over $15 million. Indeed, on average, baseball teams appear to pay approximately $4 million per win on the free agent market. At today’s prices, hiring a sabermetrician, or two, or three, is a steal.
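The arithmetic behind that comparison can be made explicit; the dollar figures below are the text's own round numbers, not market data:

```python
si_gain = 0.01                # raise SI from 1.00 to 1.01
wpct_gain = 2.844 * si_gain   # slope of the residual regression: about .028
wins_gain = wpct_gain * 162   # about 4.6 wins over a 162-game season

free_agent_cost_per_win = 4_000_000  # roughly $4 million per win on the market
analyst_payroll = 200_000            # three analysts, per the example above

implied_value = wins_gain * free_agent_cost_per_win  # roughly $18 million
```

At these prices, the analysts' implied return is nearly two orders of magnitude larger than their cost.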

An important caveat is that the low-hanging fruit—such as using OBP instead of BA—has already been picked. To be as productive going forward, tomorrow’s sabermetricians will have to be smarter than their predecessors—and they are likely to cost more.
24
Still, they probably won’t be as expensive as Alex Rodriguez per unit of output.

One more word of caution: as in most rapidly growing new fields, lots of people want to get in on the action. Before standards and procedures are developed for credentialing would-be practitioners, this dynamic produces its share of charlatans, selling their supposed sabermetric skills to unwitting GMs. They usually come in the form of outside consultants with simple, yet flawed, statistical models. Sabermetrics doesn't work automatically; the right people have to be employed.

Conclusion

Sabermetrics, among other things, purports to quantify the value of players and their skills. In this chapter, we have attempted to develop a methodology to quantify the value of sabermetrics, or more precisely, the contribution that properly applying sabermetric insights makes to team performance. We do this in the spirit of initiating a line of inquiry, rather than presenting a final appraisal. Our initial findings are instructive and encouraging. Sabermetrics can add significant value.

Several qualifications are in order. First, like others, we have reified sabermetrics, but sabermetrics is not a static thing. It is a process that involves careful analysis of data. Sometimes the data is numerical, and sometimes it is visual. Increasingly, the cutting edge of sabermetric analysis seems to be focused on defense. Many of the new approaches to defense involve video analysis.

Thus, the saber-analyst more and more is using eyesight to observe play on the field—just like the scout. Meanwhile, scouts are assimilating many of the new metrics in their standard toolbox. The result is that much of the work of the sabermetrician and the scout is being integrated. As the functions merge, not only does it become more difficult to assess saber-intensity, but it is likely that a new paradigm for player evaluation is evolving that is multidimensional. Even as this happens, however, there will be a growing need for more sophisticated statistical analysis and programming skills to parse and process the mountains of new data that overwhelm many front offices.

Second, as old market inefficiencies disappear and the search for new ones continues, what really distinguishes one MLB front office from another is intelligence. Intelligent executives are more likely to explore new terrain and less likely to be threatened by innovative, more productive methods. They are also more able to identify what is important and what is superfluous. The consistently high scores on the saber-intensity index for the Atlanta Braves in the 1990s reflect not the explicit adoption of saber techniques, but the clear recognition of the importance of defense and pitching by Stan Kasten and John Schuerholz, the hiring of top scouts, and an effective player development system.
25

While the Oakland A’s reemergence as a competitive team in 2012 may be partly due to a renewed emphasis on saber-savvy metrics, it also reflects front office intelligence. The A’s have been expanding, rather than contracting, their scouting budget in recent years. Billy Beane says that it is the A’s commitment to statistical analysis that led them to increase the scouting budget. Beane elaborates: “What defines a good scout? Finding out information that other people can’t. Getting to know the kid. Getting to know the family. There’s just some things you have to find out in person.”
26

Almost by definition, baseball players who make it to the minor leagues are loaded with physical talent. A player’s character is often the key factor that distinguishes him from others. Nate Silver breaks down a player’s character traits this way: his work ethic, his concentration and focus, his competitiveness and self-confidence, his ability to manage stress, and his humility.
27
If a scout can identify these skills, then he is making a major contribution to team success.

Silver goes on to assert: “If a team’s forecasting is exceptionally good, perhaps it can pay $10 million a year for a player whose real value is $12 million. But if its scouting is really good, it might be paying the same player $400,000.”
28
While Silver’s example may be atypical in its magnitude, the direction of his claim is on the mark.

While Michael Lewis emphasizes the conflict between scouts and sabermetricians, smart baseball executives today know that there is no reason to tie one hand behind their backs. There is no sense in arbitrarily limiting the amount of information you gather. The trick is to parse and process the information effectively—a lesson that all companies have to learn.

APPENDIX

THE EXPECTED RUN MATRIX

At various points in this book we have referred to the Expected Run Matrix. As we indicated in the Preface, there are eight possible configurations of the baserunners in baseball (two possibilities for each of the three bases), and three possibilities for the number of outs in an inning. Thus, at any given point in an inning, the game can be classified as being in exactly one of twenty-four possible states. For example, at the beginning of each inning, the state is: nobody on, nobody out. Later, we might be in the state: runners on second and third, two outs. It is quite useful to associate with each of these twenty-four states, an estimate of the expected number of runs that an average team will score in the remainder of the inning. For the state (nobody on, nobody out), that value is simply the average number of runs scored per inning, which for the years 1985–2011 was 0.510. In contrast, if runners are on second and third with two outs, then the expected number of runs scored in the inning is 0.378. Obviously, these values are approximations of unknown quantities, since in a real game there are dozens of variables for which we haven’t accounted. Nevertheless, knowing the expected run values for each state has proven to be an extremely useful framework for sabermetric analysis.
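The bookkeeping behind the matrix is simple to sketch; the two run values below are the 1985–2011 figures quoted above, and the state encoding is an illustrative choice:

```python
from itertools import product

# Each state is (baserunners, outs): three bases each occupied (1) or empty (0),
# giving 2**3 = 8 configurations, crossed with 0, 1, or 2 outs.
states = [(bases, outs)
          for bases in product((0, 1), repeat=3)  # (first, second, third)
          for outs in (0, 1, 2)]

# A run-expectancy table maps each state to the average number of runs scored
# in the remainder of the inning; two 1985-2011 entries from the text:
run_expectancy = {
    ((0, 0, 0), 0): 0.510,  # nobody on, nobody out
    ((0, 1, 1), 2): 0.378,  # runners on second and third, two outs
}
```

In practice, all twenty-four entries are estimated from play-by-play data for the era of interest.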

It is important to note that the Expected Run Matrix is not static. That is, the values can change considerably across different eras, when, for whatever reason, the run scoring environment is different. In Table 19, we present the Expected Run Matrix for 1982–1985 alongside the corresponding matrix for 1997–1999. Note how increasing the number of outs always decreases the run expectation, while increasing the number of baserunners always increases it.

To illustrate, observe that with a runner on first and nobody out, a successful sacrifice bunt that advances the runner to second would result in a net loss of 0.866 − 0.667 = 0.199 expected runs in the early 1980s, but a larger loss of 0.939 − 0.707 = 0.232 expected runs in the late 1990s. The matrix is thus one methodology for detecting the average incremental impact of different strategies and batter outcomes during the course of a game.
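A minimal sketch of that comparison, using the Table 19 entries quoted in the text:

```python
def strategy_delta(re_before: float, re_after: float) -> float:
    """Change in expected runs from moving between two base-out states."""
    return re_after - re_before

# Sacrifice bunt: (runner on first, 0 out) -> (runner on second, 1 out).
early_1980s = strategy_delta(0.866, 0.667)  # -0.199 expected runs
late_1990s = strategy_delta(0.939, 0.707)   # -0.232 expected runs
```

The negative sign in both eras is why sabermetricians are generally skeptical of the sacrifice bunt as a run-maximizing play.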

Table 19. Expected Run Matrix for 1982–1985 and 1997–1999

MODELING THE EFFECTIVENESS OF SABERMETRIC STATISTICS

In this section we describe our methodology for modeling the extent of effective implementation of sabermetrics in Major League Baseball and the resulting impact on team performance. Our approach has three main components:

1. We build a model for the winning percentage of a team as a function of its payroll.

2. We construct metrics that are designed to indicate the influence of sabermetrics on the team’s composition and performance.

3. We examine the relationship between the residuals from the model in part 1 and the metrics constructed in part 2.

Meaningful associations between the residuals from the first model and the sabermetric intensity metrics provide a way both to detect and to quantify the impact of sabermetrics among clubs.
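Under the assumption that per-team-season arrays of winning percentage, relative payroll, and saber-intensity are available, the three steps can be sketched with ordinary least squares (synthetic data stands in for the real series here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pay = rng.uniform(0.5, 1.5, n)   # payroll relative to league average
si = rng.uniform(0.9, 1.1, n)    # hypothetical saber-intensity scores
wpct = (0.397 + 0.117 * pay - 0.012 * pay**2
        + 0.05 * (si - 1.0) + rng.normal(0, 0.03, n))  # simulated outcomes

# Step 1: regress winning percentage on payroll and payroll squared.
X = np.column_stack([np.ones(n), pay, pay**2])
beta, *_ = np.linalg.lstsq(X, wpct, rcond=None)

# Step 3: regress the step-1 residuals on the saber-intensity metric (step 2).
resid = wpct - X @ beta
Z = np.column_stack([np.ones(n), si])
gamma, *_ = np.linalg.lstsq(Z, resid, rcond=None)
```

With the real data, the slope in gamma corresponds to the 2.844 coefficient reported in the chapter.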
