Jon Sanders, Higher Education Policy Analyst, John Locke Foundation, 200 W. Morgan Street, Suite 200, Raleigh, NC 27601
Darcy Olsen, President and CEO, Goldwater Institute, 500 E. Coronado, Phoenix, AZ 85004
Re: May 15 letter by ASU President Michael Crow
Thank you for forwarding Dr. Crow's criticisms to me. I certainly join him in encouraging debate on these matters. Without question, we need "to discuss the strategy, means, and logic by which investment in higher education will be made." To that end I am interested, as I'm sure you are, in learning more about his "enterprise model of investment."
In his review of my study, Dr. Crow was emphatic that the model I tested was insufficient because it "assumes that state appropriations for higher education provide the only influence over economic growth. No single variable could possibly be the determinant of gross state product." Yet this is precisely the model politicians and lobbyists use in their efforts to win greater appropriations for higher education. The model I tested omits, as Dr. Crow continues, "key variables in the model that are well known to have a profound effect on gross state product." Again, the model I used was the very one suggested by appropriations proponents, who themselves omit key variables in their public fiscal-policy discourse (see, for example, pp. 2-3 of the study).
Furthermore, my study did not purport to put forth a positive model for state economic growth. And, as stated in the study, it is only the "first in a series designed to examine the impact of higher-education spending and investment on Arizona's economy."
Dr. Crow and his colleagues attacked the methodology I used for "incorrectly assum[ing] that all states currently have the same base level of economic development" and for "not control[ling] for important, fixed effects." Yet all the data I used are at the level of the individual in the state, not the state itself. My data for state economic growth are per capita real gross state product (GSP), and my data for higher education funding are per capita real state appropriations. All the effects should be measured at the individual level because an individual who receives a college education increases his earning power. It follows that individuals pursuing and receiving higher education--especially those who are able to do so only through the lower prices offered them by public higher education (which receives the state higher education appropriations)--are able, in the aggregate, to raise per capita GSP. Thus, if public spending on higher education has the economic growth effects claimed by advocates of increased spending, we would expect to see those effects manifested as increases in per capita GSP.
Furthermore, to circumvent the problem of differing base levels of economic development, I tested the model proposed to the public in three ways. Along with testing real GSP per capita against real appropriations per capita, I also tested the one-year change in those variables, as well as the percent by which those variables changed in each year. Both of the latter tests should have at least greatly reduced the effects of differing base levels of economic development.
Regarding the question of summing the coefficients, I did not think that step was necessary because only about half of the coefficients in the three models were significant. The scattershot pattern of which coefficients tested significant seemed counter to the model proposed for us -- as did the oscillating signs of the coefficients. Nevertheless, if you sum only the significant coefficients, then the simple model (Table 1) is negative, and the change and percentage-change models (Tables 2 and 3) are positive.
Dr. Crow says the model should have tested "more than five lagged variables to capture impacts over a longer run"; he also suggests that the discussion was "focused on the first year coefficient" (it was not, as explained in a later paragraph). The number of lagged variables was chosen to allow enough time to test for a lagged effect while still leaving several years' worth of data against which to test the model proposed for us. I have therefore taken the liberty of rerunning the data with 15 lagged variables, which still leaves 4-5 years' worth of data to test (five years for the simple model, four for the change models). The new regression data are included as a postscript to this letter.
Here's what I found: (1) For the simple model, the sum of the coefficients (excluding the intercept) was slightly positive, at 4.6, but none of the coefficients were significant at the 95 percent level of confidence. One was close: it was the nine-year lag, with a coefficient of 70.5. (2) For the change model, the sum of the coefficients (excluding the intercept) was negative, at -70.4. The sum of the significant coefficients was also negative at -62.6. (3) For the percentage change model, the sum of the coefficients (excluding the intercept) was negative, at -0.3. The sum of the significant coefficients was also negative at -0.2. These models excluded the concurrent variable and focused solely on the 15 lagged variables, but I also ran them with the concurrent variable and found no substantial change in the effects.
Contrary to Dr. Crow's charge, the study did not rest on "the [adjusted] R-squared value." Nor was the study focused on the concurrent variable. Further, it did discuss the significance of the individual coefficients (see pages 8 and 16-17), though perhaps I should have made that aspect clearer.
Regarding the purpose of the "reverse model," the reason I reversed the dependent and independent variables is that I was not at all confident that the model as publicly delineated by the higher education lobby had those variables in their proper places. This study does not, as I stated above, put forth a positive model for state economic growth. I used the "reverse model" to question the model proposed. Doing so also raises the question of endogeneity, and in future research Goldwater will test for that feature as well. Of course, I welcome other Arizona research organizations, such as those affiliated with ASU, to conduct their own studies, treating control variables not included in my model. Those studies would be a welcome contribution to the important debate over higher education spending.
As for the statistics of 89,000 and 22,000 jobs, Dr. Crow's criticism centers on a sentence that was mangled during the editing process. On page 4, we wrote that "the Board of Regents found that the state's three public universities had a combined impact on their local communities of more than $4.4 billion, in addition to creating more than 89,000 jobs." The next sentence is imprecise: "Roughly one quarter of those jobs are held by people who live in Arizona." Here we should have quoted the regents directly, as we did in the endnote for the sentence: the Arizona university system "employs more than 22,000 who live and work in communities throughout the state." Dr. Crow, however, turns the poorly worded sentence on page 4 into "an elementary omission and contradiction of fundamental data" that "calls into question the author's ability to undertake credible analysis." In fact, there is no such "omission," since the reader is clearly referred to the endnote, where the statistic is presented correctly.
Then, on page 6, in making a point about higher education budget cuts, we repeated the 89,000 jobs statistic: "According to the regents' own estimates, however, the Arizona university system's contribution accounted for 89,000 jobs, or only four percent of all Arizona jobs in 1999." There is no "contradiction," as Dr. Crow claims, since we used the very same statistic we used the first time. While Dr. Crow believes the purported "contradiction seriously understates the economic impact that is the very subject of the study," I suggest, on the contrary, that we gave the regents the benefit of the doubt with regard to the 89,000 jobs figure. If 89,000 jobs turns out to be an inflated figure, then we actually overestimated the economic impact of the state's universities. On the point of credible analysis and university economic impact studies generally, which I did not treat in my study, I would respectfully refer Dr. Crow to Will Potter's May 9, 2003, article from the Chronicle of Higher Education, "Public Colleges Try to Show Their Value to States, but Not Everyone Is Convinced."
Regarding the difference between appropriations and expenditures, I believe that appropriations is the proper variable to test. First, the public policy concern is appropriations, not expenditures. Second, both the higher education lobby and I assume that money appropriated for the universities will be spent. A debate over expenditures is most welcome, of course, and I assume that expenditure concerns factor heavily in the enterprise model.
As for my choice of studies for the section on "corroboration by other studies," I did not mean to suggest that those were the only studies corroborating mine. As with any study, a review of related literature is never fully comprehensive, and I look forward to examining the five studies cited by Dr. Crow to see what bearing, if any, they have on the proposition tested in my study.
In concluding this letter, may I restate my agreement with Dr. Crow on the need for debate and my interest in learning more about his "enterprise model of investment" for higher education. As you know, every year thousands of government spending initiatives at the federal and state levels masquerade as "investments." But it may be that Dr. Crow has a plan that is significantly different from other plans involving government spending. It is certainly worth exploring, and I am sure that the people of Arizona are eager to learn more.
Jon Sanders, Higher Education Policy Analyst