Karel Mertens and Morten Ravn on Fiscal Policy, Anticipation Effects, Expectations and Crisis
Karel Mertens is an Assistant Professor at Cornell University. Karel's research has been concerned with monetary and fiscal policy. Mertens's RePEc/IDEAS profile. Morten O. Ravn is a Professor of Economics at University College London and a Research Fellow of the Centre for Economic Policy Research, London. Ravn's research has been concerned with fiscal policy, business cycles, and international macroeconomics. Ravn's RePEc/IDEAS profile.
Our recent research has focused upon macroeconomic aspects of fiscal policy, but we have also looked into liquidity traps and theories of expectations-driven crises. We have been particularly interested in the topic of expectations and fiscal policy. In a line of papers we have examined the empirical evidence regarding anticipation effects of fiscal interventions. We have also examined how modern DSGE models can account for the empirical regularities that we uncover regarding the impact of tax policies; we find that DSGE models are powerful labs for thinking about tax policies. Another line of our research examines how to exploit narrative accounts when estimating the impact of fiscal shocks without making extreme assumptions regarding the reliability of the narratives.

We have also looked into the question of the efficacy of fiscal policy instruments in situations where there are binding constraints on the standard monetary policy instrument (the short-term interest rate). We have shown how the models that form the basis for arguments in favor of large spending multipliers and small (even negative) labor income tax multipliers in liquidity traps allow for the existence of another, expectations-driven, liquidity trap. Importantly, in this alternative equilibrium, government spending loses potency while labor income tax changes gain efficacy. We have also shown how financial frictions can lead to very deep recessions following a wave of pessimistic beliefs that drives the short-term nominal interest rate to its lower floor.
2. Estimating anticipation effects of tax policy interventions
Fiscal interventions are often partially known well in advance of their implementation. For that reason, fiscal policy interventions may be associated with anticipation effects, i.e. policies may affect the economy prior to their actual implementation. A partial list of reasons for the presence of such anticipation effects includes: (i) fiscal interventions usually need to pass through democratic institutions, resulting in delays between the formulation of a policy and its implementation; (ii) temporary tax or spending changes introduce anticipation effects through sunsets, as is common for e.g. investment tax credits and consumption taxes (the 2008/09 temporary VAT cut in the UK is a recent example); (iii) major policy interventions may sometimes be phased in; (iv) policies may have been part of election campaigns and therefore anticipated long before their implementation.

Whatever the reason, the presence of such anticipation effects may be an important aspect to take into account when estimating fiscal shocks and their impact upon the economy. If ignored, researchers not only exclude potentially important information but may also mistime shocks. Moreover, fiscal policy shocks provide an interesting lab for examining the empirical relevance of "news driven business cycles," see e.g. Beaudry and Portier (2007). In their seminal piece, Blanchard and Perotti (2002) found anticipation effects to be of little empirical relevance. Their estimation approach exploits the existence of reaction lags to obtain identification of fiscal shocks in a vector autoregression (VAR) framework. In the face of anticipation effects, this approach requires one to assume reaction lags that exceed the anticipation period. Thus, when using quarterly data, if fiscal shocks are anticipated, say, 6 months in advance, reaction lags need to be at least 9 months, an assumption that appears implausible.
For that reason, their analysis of anticipation effects was limited to a one-quarter anticipation horizon, and they found little evidence that fiscal shocks affect the economy ahead of their implementation. Moreover, their estimates of the effects of implemented fiscal policies were very similar regardless of whether they allowed for a one-quarter anticipation period or not.
In practice, fiscal policy interventions may sometimes be known years in advance of their implementation, making the Blanchard and Perotti (2002) approach unattractive. To address this issue, Mertens and Ravn (2010a) apply a technique akin to that of Poterba (1988) in order to estimate anticipation effects. We study the impact of tax liability changes using the US tax narrative provided by Romer and Romer (2008). We focus upon those changes in tax liabilities that these authors deem "exogenous" (an assumption that we test formally in terms of Granger non-causality and fail to reject). For each piece of tax legislation we define an announcement date and an implementation date. When the difference between these two dates exceeds 90 days, we assume that the tax liability change is pre-announced and allow for anticipation effects. Following Poterba (1988), we define the announcement date to be the date at which the tax bill became law, a definition that is conservative but meaningful since it removes uncertainties regarding the policy's implementation. We find that around half of the tax changes were anticipated and that the median anticipation lag is 6 quarters, a lag much longer than that considered by Blanchard and Perotti (2002).
We translate the tax liability changes into average tax rate equivalents by measuring them relative to aggregate output. We discriminate between anticipated and unanticipated tax changes by (a) allowing for differential effects after their implementation and (b) by assuming that anticipated tax changes enter the information set from their announcement.
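As a stylized illustration of this classification scheme (with hypothetical legislation dates, not the actual narrative record), the 90-day rule and the anticipation lag in quarters can be computed as follows:

```python
from datetime import date
from statistics import median

# Hypothetical records in the spirit of a narrative tax account: each tax
# change has the date the bill became law (announcement) and the date the
# liability change took effect (implementation). These are made-up examples.
tax_changes = [
    {"name": "Act A", "announced": date(1964, 2, 26), "implemented": date(1964, 3, 1)},
    {"name": "Act B", "announced": date(1977, 12, 20), "implemented": date(1979, 1, 1)},
    {"name": "Act C", "announced": date(1981, 8, 13), "implemented": date(1983, 7, 1)},
]

def classify(change, threshold_days=90):
    """Label a tax change anticipated if it became law more than 90 days
    before implementation; also return the anticipation lag in quarters."""
    lag_days = (change["implemented"] - change["announced"]).days
    lag_quarters = lag_days / 91.25  # approximate quarter length in days
    kind = "anticipated" if lag_days > threshold_days else "unanticipated"
    return kind, lag_quarters

labels = {c["name"]: classify(c) for c in tax_changes}
anticipated_lags = [q for kind, q in labels.values() if kind == "anticipated"]
print(labels)
print("median anticipation lag (quarters):", median(anticipated_lags))
```

With these invented dates, Act A is unanticipated (a 4-day gap) while Acts B and C are anticipated; in the actual narrative data the analogous computation delivers the 6-quarter median lag reported above.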
We find that a pre-announced tax cut with an anticipation horizon of 6 quarters gives rise to pre-implementation declines in aggregate output, investment and hours worked. The largest pre-implementation drops in these variables occur around a year before the implementation and aggregate investment reacts particularly elastically with a decrease of 4% below trend following a 1% pre-announced cut in the average tax rate. The peak pre-implementation drop in output is estimated to be around 1.5%. In contrast, aggregate consumption is hardly affected by the announcement, a result that is consistent with earlier analysis of aggregate and household level consumption data, see e.g. Parker (1999), Poterba (1988), or Souleles (2002). The finding that investment, output and hours worked drop during the pre-implementation period has two important consequences. First, one cannot conclude that the lack of a consumption response to tax announcements indicates that the private sector is not forward-looking. Secondly, we find that good news (a tax cut) has negative consequences before its implementation, which is not consistent with the news-driven business cycle hypothesis. We also show that the anticipation effects are very small at short anticipation horizons, which renders our results consistent with those of Blanchard and Perotti (2002).
Once a tax cut is implemented, it provides a major stimulus regardless of whether it was pre-announced or not. A 1% cut in taxes leads to an increase in aggregate output just below 2% with the maximum impact taking place around 10 quarters after the implementation of the tax cut. The results are robust to eliminating particular types of tax changes, to the choice of the sample period, and to controlling for other structural shocks such as monetary policy shocks and changes in government spending.
We then ask if tax policy shocks have been an important impulse to US business cycles. For the post World War II sample we find that tax policy shocks have accounted for 25-30% of the in-sample variance of output at the business cycle frequencies. We also find that tax policy shocks were important for particular business cycle episodes. One controversial result is that the early 1980s recession can to a large extent be accounted for by the combination of the implementation of pre-announced tax increases associated with the 1977 Social Security Tax Amendments and the announcement of future tax cuts incorporated in the Economic Recovery Tax Act of 1981. This result holds after controlling for the impact of the Volcker disinflation and for changes in government spending. Thus, we argue that tax policy shocks should be high on the list of macroeconomists' candidates for business cycle impulses.
Another implication of our results is that it might be relevant to control for fiscal shocks when estimating the impact of other structural shocks. In Mertens and Ravn (2011c) we argue that allowing tax changes to have permanent effects on labor productivity may be important for the estimation of permanent neutral technology shocks and their effects. In an application to US time-series data we show that once one controls for taxes, a positive permanent neutral technology shock implies an increase in hours worked and technology shocks matter for business cycles.
3. Anticipation and Vector Autoregressions
The results discussed in the previous section indicate that fiscal news effects are empirically relevant. This has important implications. Suppose that one were to estimate fiscal shocks and their effects using a structural VAR approach. If fiscal news is relevant but not controlled for, a VAR estimator would potentially get the timing of shocks wrong and this could lead to serious problems. In an important contribution, Ramey (forthcoming, 2011) argues that such mistiming accounts for why researchers who have applied SVARs find a positive impact of government spending shocks on private consumption and on real wages. She shows that professional forecasts and narrative accounts can forecast SVAR estimates of government spending shocks. Using information from professional forecasters, she finds instead that government spending shocks lower consumption and real wages.

Building upon Hansen and Sargent (1991), a very insightful paper by Leeper, Walker and Yang (2008) develops a number of key results regarding the impact of fiscal news on VARs. A key point of their analysis is that, if news is not controlled for, fiscal VARs not only mistime the shocks but have non-fundamental moving average representations, which can give rise to non-structural errors and misleading impulse response functions.
The approach in Mertens and Ravn (2010a) addresses this issue directly by including fiscal news in the relationships from which we estimate the impact of fiscal shocks, but this is only possible due to the use of narrative tax data, and such data may often not be available. Mertens and Ravn (2010b) demonstrate that rational expectations models introduce restrictions on the non-fundamental roots of the MA representations that allow one to examine "ex post" the sensitivity of SVAR-based estimates of the impact of fiscal shocks to news shocks. The key property that we exploit is that rational expectations models imply that fiscal news is discounted at a constant rate, which we, along with Ljungqvist and Sargent (2004), denote the anticipation rate. The anticipation rate in the simplest Ramsey model corresponds to the inverse of the unstable root of the characteristic polynomial that determines the law of motion of the capital stock. This parameter turns out to be an input into the Blaschke matrices that Lippi and Reichlin (1994) show can be used to flip the roots of the MA representation of the empirical VAR. To be precise, the Blaschke matrices take as inputs the anticipation rate and the anticipation horizon, and we suggest that one may calibrate the former parameter using economic theory and then compute impulse responses for alternative anticipation horizons.
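The root-flipping idea is easiest to see in a scalar sketch (the papers above work with Blaschke matrices parameterized by the anticipation rate and horizon; this is only the univariate analogue): a non-fundamental MA(1) and the fundamental representation obtained by flipping its root generate identical autocovariances, which is why the data alone cannot distinguish them and additional theoretical input is needed:

```python
# Scalar illustration of Blaschke root flipping: the non-fundamental
# process x_t = e_t + theta * e_{t-1} with |theta| > 1 is observationally
# equivalent to the fundamental process with coefficient 1/theta and
# innovation variance rescaled by theta^2.

theta = 2.0      # MA coefficient exceeding one in absolute value -> non-fundamental
sigma2 = 1.0     # innovation variance of the non-fundamental shock

# Autocovariances of the non-fundamental representation
gamma0_nf = (1 + theta**2) * sigma2
gamma1_nf = theta * sigma2

# Flip the root (theta -> 1/theta) and rescale the variance by theta^2
theta_f = 1 / theta
sigma2_f = theta**2 * sigma2
gamma0_f = (1 + theta_f**2) * sigma2_f
gamma1_f = theta_f * sigma2_f

# Both representations imply identical second moments of the observables
print(gamma0_nf, gamma0_f)
print(gamma1_nf, gamma1_f)
```

Since second moments are all a VAR can recover, the econometrician needs outside information, such as a theory-calibrated anticipation rate, to pick between the two representations.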
We exploit these insights to derive a new structural VAR estimator that we implement using a VECM formulation (we refer to the new estimator as the VECM-BM estimator). We show that standard DSGE models imply anticipation rates that are very close to one unless agents discount future utility heavily or are close to risk neutral. For "standard" calibrations, the anticipation rate is somewhere between 92 and 96% per quarter. In this case the non-fundamentalness of SVARs is not a very serious problem. To put it simply, when the anticipation rate is high, although the econometrician's information set when estimating a VAR is smaller than that of the agent (who has information about future fiscal innovations), current actions incorporate a lot of information about the future innovations. When the anticipation rate is low, the misalignment of information is instead more serious and leads to very misleading VAR-based impulse response function estimates.
We apply the new VECM-BM estimator to quarterly US time series data for the sample period 1954-2006 and estimate the impact of permanent government spending shocks. The identifying assumptions are that government spending is unaffected contemporaneously by other structural shocks, that anticipated and unanticipated shocks are orthogonal, and that their long-run impacts are proportional. Assuming an anticipation horizon of 8 quarters, we find that a permanent increase in government spending raises aggregate output and consumption from the time it is announced, regardless of whether it is anticipated or not. This result is consistent with independent evidence in Fisher and Peters (2010). Thus, we find no evidence that mistiming accounts for the positive consumption response to government spending shocks estimated in the SVAR literature. This may imply that it is too soon to dismiss theories that deliver such positive consumption responses (see e.g. Ravn, Schmitt-Grohe and Uribe, 2006, 2007).
4. Understanding the Effects of Anticipated and Unanticipated Tax Policy Shocks
There is a long tradition in macroeconomics of thinking about the ways in which agents react to anticipated fiscal shocks. An early contribution to this literature is the seminal paper of Hall (1971), and more recent important papers include Auerbach (1989) and Yang (2005). In Mertens and Ravn (2011a) we extend this literature by asking whether a DSGE model can account quantitatively for the impact of anticipated and unanticipated tax policy shocks that we discussed in Section 2 above.

Our benchmark model is a flexible price DSGE model in which a representative household maximizes utility, there are no liquidity constraints and firms are competitive. We introduce features such as variable capital utilization, investment adjustment costs, habit formation, and a distinction between durable and non-durable consumption goods. Tax liability changes derive from changes in average (and marginal) labor and capital income tax rates and from changes in capital depreciation allowances due to changes in capital income tax rates. Exogenous changes in tax rates are either anticipated or unanticipated and we assume an anticipation horizon of 6 quarters (which corresponds to the median anticipation horizon in the US data). Agents living in this economy therefore have information about current and future innovations in tax rates and their information sets evolve in a recursive manner. Interestingly, the information structure implies that agents aggregate tax news according to the remaining anticipation lag, a property that we exploited in Mertens and Ravn (2010a) discussed above. One important consideration is the financing of the changes in tax rates. In our benchmark model we assume that the government holds government expenditure constant and varies lump-sum taxes (or government debt) in response to changes in revenues deriving from factor income taxation.
We estimate a subset of the structural parameters using indirect inference by matching the empirical impulse response functions. The estimator takes into account that the empirical estimates of the impact of tax changes are based on VAR models with a finite set of lagged tax changes. The DSGE model implies a very similar VAR model but with an infinite set of lagged tax changes. The importance of the lagged tax changes depends on a dampening matrix with roots that are determined by the persistence of the tax rate processes. These roots are large in practice, and for that reason we simulate data from our model and estimate the structural parameters by matching the impulse response functions subject to the VAR filter. Our benchmark estimates of the structural parameters are within the range of values estimated in other recent studies. We find, for example, a relatively high habit persistence parameter (of close to 90%) and a Frisch elasticity of labor supply in the neighborhood of one. This value is on the high side relative to estimates from the labor literature but lower than standard estimates in the macro literature.
We find that the DSGE model can account very precisely for the estimated impulse response functions. According to our results, a typical tax liability cut consists of a very persistent drop in labor income tax rates and a more temporary, U-shaped drop in capital income tax rates. As in the data, the model implies that implemented tax cuts give rise to a major boom in the economy with an elastic response of investment and a muted increase in hours worked. The model is also consistent with the empirical finding that a pre-announced tax cut leads to a pre-implementation drop in aggregate output, hours worked and investment.
It would appear a priori that the biggest challenge would be to account for the lack of a positive consumption response to a pre-announced tax cut that we estimate in the US data. The model's ability to account for this feature derives from the importance of substitution effects relative to wealth effects. Wealth effects are small because the cut in income tax rates is debt financed, but substitution effects can be large (and indeed are so in our benchmark results). Another important feature of the model in accounting for the shape of the consumption response is complementarity between consumer durables and consumer non-durables. Habit formation is also an important aspect, but mostly when it comes to accounting for the gradual response of consumption to implemented tax cuts estimated in the US data. We also show that the muted response of hours worked to implemented tax cuts derives from the opposing effects of a temporary increase in after-tax wages and a more persistent (and bell-shaped) increase in the after-tax real return on capital, which initially holds down the labor supply response.
A skeptic's response to these results could be that we minimize the importance of wealth effects by assumption by excluding liquidity constraints and by not allowing changes in tax revenues to affect government spending. We therefore extend our analysis along both these lines. We find that allowing for feedback of tax revenues on government spending improves the fit of the model but changes little in terms of implications. Our estimates imply an elasticity of government spending to tax revenues of just above 20%. Following Galí, Lopez-Salido, and Valles (2007), we combine the introduction of liquidity constraints with a labor market distortion. We find an estimate of the share of liquidity constrained agents of only 15% of the population. The reason for this very low estimate is that a high share of liquidity constrained agents is inconsistent with the elastic response of investment to changes in tax rates. This estimate is much smaller than the calibration of 50% in Galí, Lopez-Salido, and Valles (2007) (see also Canova and Ravn, 2000, for a similar calibration in a model of taxes and the welfare state).
We take away from this analysis the lesson that DSGE models are powerful laboratories for thinking about the macroeconomic impact of tax liability changes. There is still much to be explored such as monetary-fiscal interactions, the importance of fiscal rules, the impact of nominal rigidities etc. but even a quite stylized model appears broadly consistent with the empirical evidence on the impact of tax changes.
As mentioned, a key aspect one needs to consider when formulating fiscal policy models is how fiscal instruments adjust endogenously to changes in output, debt, etc. It is well-known that the impact of government spending shocks depends crucially upon their financing. When distortionary tax rates adjust endogenously to finance spending induced deficits, the distortions may partially undo the stimulating impact of higher government spending that occurs through wealth effects in standard models. The difficulty is that it is challenging to estimate such feedback mechanisms in practice. In Cloyne, Mertens and Ravn (2011) we address these issues by estimating the endogenous responses from narrative data and then feeding them into a DSGE model. The use of narrative data allows us to explore interesting non-linear features, such as the fact that the circumstances under which policy makers adjust instruments in response to, e.g., changes in government debt are to some extent random. We believe that this analysis is very important, especially in the current environment where fiscal retrenchments are taking place and where there are growing concerns about rising levels of government debt.
5. Estimating the Impact of Fiscal Shocks Using Narrative Data
The empirical fiscal policy literature has lived a rather schizophrenic life in which one part of the literature has applied VAR based estimators while another part of the literature has relied upon narrative accounts. The former of these strands assumes that fiscal policy shocks are unobservable but estimable subject to identifying assumptions, typically relating to timing assumptions, calibration of contemporaneous elasticities, or the use of sign restrictions (see Mountford and Uhlig, 2009, for an example of the latter). Another literature instead adopts the narrative approach and relies on the exogeneity of particular tax or spending episodes for identification. This divide would be of little interest were it not the case that key objects of interest such as the implied tax multipliers differ wildly across estimators and appear to be related to methodology. Thus, while Blanchard and Perotti (2002) find tax multipliers typically below one, Romer and Romer (2010) find a multiplier of 3 (but with a long lag). Clearly, such levels of discrepancy in the results are worrying.

Mertens and Ravn (2011d) analyze how these two strands of the literature can be combined, and we find results that allow one to understand why previous estimates have differed so markedly. We assume that narratively identified fiscal policy shocks are noisy signals of the "true" latent policy shocks. This assumption reflects both that narrative accounts carry information but also that measurement error may be a concern. We also assume that the narratively identified fiscal shocks are Granger non-caused by the observables and orthogonal to other structural shocks. We then derive a new narrative fiscal VAR estimator that relies on the use of narratively identified policy innovations as proxies of the structural fiscal shocks. We apply the estimator to US data using the Romer and Romer (2008) narrative tax account as a proxy for the tax shock.
Our identification scheme allows for estimation of both the spending response to tax shocks and of the tax revenue response to contemporaneous changes in aggregate output. The former of these two parameters is customarily assumed to be zero in the SVAR literature, while the latter is calibrated to values that derive from estimates of the elasticity of the tax base to (cyclical) output combined with institutional information regarding the elasticity of tax revenues to the tax base.
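The core of the proxy idea can be sketched with simulated data (the impact coefficients below are invented for illustration; the actual estimator also handles the full VAR dynamics): because the narrative proxy is correlated with the true tax shock but orthogonal to the other structural shocks, covariances of the reduced-form residuals with the proxy recover the relative impact responses even though the proxy is measured with error:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200_000

# Two structural shocks: a tax shock (column 0) and an "other" shock (column 1)
e = rng.standard_normal((T, 2))

# Hypothetical impact matrix mapping structural shocks to reduced-form residuals
H = np.array([[1.0, 0.3],
              [0.8, 1.0]])
u = e @ H.T                      # reduced-form VAR residuals, u_t = H e_t

# Narrative proxy: a noisy measure of the tax shock, orthogonal to the other shock
m = e[:, 0] + 0.5 * rng.standard_normal(T)

# E[u_t m_t] is proportional to the tax column of H, so covariance ratios
# identify the relative impact responses despite the measurement error
cov = u.T @ m / T
impact_ratio = cov[1] / cov[0]   # estimates H[1, 0] / H[0, 0] = 0.8
print(impact_ratio)
```

The key point of the sketch is that the measurement error in the proxy scales both covariances equally and therefore drops out of the ratio; this is what allows identification without assuming the narrative measure is the true shock.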
We find estimates of the impact of tax shocks that are much larger than those of Blanchard and Perotti (2002). According to our results, a one dollar cut in taxes gives rise to an impact increase in output of more than 2 dollars, and the peak response of output is above 3 dollars, occurring around 1.5 years after the tax cut. What accounts for these much larger tax multipliers? Our estimates imply a much higher elasticity of tax revenues to GDP (3.14) than the calibration of Blanchard and Perotti (2002), who assume a value of 2. Since an increase in taxes lowers output, a higher value of this elasticity implies (due to a standard endogeneity bias) a higher tax multiplier. We believe that this is important because the SVAR literature's calibration of this parameter relies upon estimates from studies that may suffer from the exact same endogeneity problem that led the SVAR literature to calibrate these parameters in the first place. Caldara (2010) contains an insightful discussion of these issues in a VAR setting. We provide further evidence on the plausibility of this higher contemporaneous elasticity of tax revenues to output by showing that it implies a much better out-of-sample forecast of tax revenues than standard VAR estimates.
Relative to Romer and Romer (2010), our analysis allows for measurement error in the mapping between the narrative identification of the tax shock and the "true" tax shocks, while Romer and Romer (2010), along with the rest of the literature, assume that the narrative identification delivers the actual tax shocks. Thus, we argue that there may be an attenuation bias at short forecast horizons, and we find higher impact responses of output to tax shocks than Romer and Romer (2010). Finally, we evaluate the precision of the Romer and Romer (2008) narrative tax shock identification by means of an estimate of its statistical reliability, which we find to be close to 70%.
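A minimal simulation (with made-up numbers, not our estimates) illustrates both the statistical reliability measure and the attenuation bias that arises when a noisy narrative measure is treated as the true shock:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000

shock = rng.standard_normal(T)                  # true (latent) tax shock
proxy = shock + 0.6 * rng.standard_normal(T)    # narrative measure with error

# Statistical reliability: the share of proxy variance due to the true shock,
# here 1 / (1 + 0.6**2), roughly 0.74
reliability = np.var(shock) / np.var(proxy)
print("reliability:", reliability)

# Treating the proxy as the true shock attenuates an OLS impact estimate:
# the estimated coefficient shrinks toward zero by the reliability factor
true_beta = 2.0
y = true_beta * shock + rng.standard_normal(T)  # outcome driven by the true shock
beta_ols = np.cov(y, proxy)[0, 1] / np.var(proxy)
print("OLS estimate:", beta_ols)                # close to true_beta * reliability
```

Correcting for this attenuation is what pushes the short-horizon output responses above those obtained when the narrative series is taken at face value.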
We believe that the estimator has a lot of interesting future applications since it allows one to exploit narrative identifications without making extreme assumptions regarding their quality.
6. Fiscal Policy in a Liquidity Trap
The recent downturn in aggregate activity in much of the world economy has triggered a lively discussion about fiscal policy effectiveness. In a line of very influential papers, Christiano, Eichenbaum and Rebelo (2010), Eggertsson (2009) and Woodford (2011) examine the impact of fiscal shocks in sticky price models where monetary policy is described by a rule for the short-term nominal interest rate and where a shock takes the economy to the zero lower bound. In these analyses, a sufficiently large and temporary decline in the "efficient rate of interest" (due to a large increase in desired savings, an increase in the interest rate differential between lending and deposit rates, productivity shocks, etc.) is the source of the liquidity trap (see Eggertsson, 2010). When the shock is so large that the central bank's attempts to cut the real interest rate take the nominal rate to its lower bound, a potentially large drop in output is needed to clear the goods market. These papers argue that the government spending multiplier (and the consumption tax multiplier) can be very large in a liquidity trap when the lower bound on the nominal interest rate is binding (or the interest rate is constant). Moreover, the marginal spending multiplier is very large when the drop in output in the liquidity trap is very large. Labor income taxes instead lose potency in a liquidity trap, and it may even be the case that higher taxes stimulate output in a liquidity trap. These results all refer to cases in which the fiscal stimuli are removed once the liquidity trap terminates. Woodford's analysis shows that expansionary fiscal policies may be optimal in such circumstances.

In Mertens and Ravn (2010c) we build on an insight from the seminal contribution of Benhabib, Schmitt-Grohe and Uribe (2002) that interest rate rules can lead to multiple steady states, one of which is a permanent liquidity trap.
We examine stochastic temporary sunspot equilibria: temporary episodes of expectations-driven liquidity traps. We show that such equilibria exist in the sticky price model when pessimistic beliefs are sufficiently persistent. Intuitively, if agents become pessimistic for a sufficiently long period, their pessimism drives them to increase savings and producers to cut prices, making this liquidity trap equilibrium self-fulfilling. The per-period drop in output is more dramatic the less persistent is the pessimistic state (but the equilibrium ceases to exist if the pessimistic state is too transitory) and also depends on preferences and on the extent of nominal rigidities. One might wonder whether higher inflation targets would eliminate these types of equilibria. We find that higher inflation targets increase the critical level of persistence of pessimistic beliefs for which the self-fulfilling expectational equilibria can exist, but only marginally so, and they come at the cost of more dramatic output losses when liquidity traps do occur.
We examine the potency of fiscal policy in a liquidity trap when it arises due to expectations. We find that a (marginal) government spending stimulus implemented during an expectations-driven liquidity trap is less potent than during normal times. The same is the case for a marginal cut in consumption taxes. A marginal cut in labor income taxes is instead associated with a larger multiplier than in normal times. These results are exactly the opposite of those of the literature referred to above. How come? Consider first the fundamentals-driven liquidity trap case. Suppose agents' valuation of current consumption drops drastically, which leads to an increase in desired savings and a drop in inflation. If the economy ends up with nominal short-term interest rates at their lower bound, output needs to drop significantly to clear the goods market. In such a situation, an increase in government spending stimulates output through demand directly and indirectly through the real interest rate drop implied by the inflationary pressures that occur after an increase in spending. A cut in labor income taxes, in contrast, stimulates even further savings (for intertemporal reasons) and lowers current output even more because of a further fall in inflation. Now consider an expectations-driven liquidity trap. Agents start expecting lower income and inflation, and if these pessimistic beliefs are sufficiently persistent, the economy sets out on a path of self-fulfilling pessimistic expectations. In this scenario, when government spending rises, agents need to be even more pessimistic for the equilibrium to survive, and this gives rise to a considerable amount of crowding out. A cut in labor income taxes is instead very effective because it implies that the self-fulfilling equilibrium can occur with milder drops in output.
One direct implication of these results is that statements about the potency of fiscal policy in a liquidity trap need to be made contingent upon its source. Without such information, caution may be the better option. Another implication is that it is important to reconsider ways in which policies can help avoid expectations-driven liquidity traps.
One reaction to our analysis would be that its implications do not seem consistent with empirical evidence regarding the effectiveness of fiscal policy during the Great Depression. On the other hand, it is hard to find any hard evidence that government spending was particularly effective during Japan's long-lived liquidity trap. Added to this is the recent experience of a country like the UK, where there seems to be little evidence that the spending hike during the early part of the crisis did much to prevent a significant drop in output. Yet we do not know the counterfactuals, so future research will hopefully help settle the score between the competing theories.
7. Liquidity Traps and Credit Channels
One result coming out of the research discussed above is that the output drop in a stochastic sunspot equilibrium can be quite large. This matters because the (per period) output losses that occur in a permanent (expectations-driven) liquidity trap tend to be minor. The finding has led us to further explore the properties of stochastic sunspot equilibria in models with more elaborate financial markets.

Mertens and Ravn (2011b, 2011e) consider stochastic sunspot equilibria in a model with patient consumers and impatient entrepreneurs, housing, and collateral constraints as formulated by Iacoviello (2005). Housing provides utility for households and is an input to production for entrepreneurs. We assume a collateral constraint for entrepreneurs which allows for leverage and relates entrepreneurial real estate debt to the expected resale value of their property portfolios. In a self-fulfilling expectations-driven liquidity trap, the collateral constraint triggers a process of debt deflation and entrepreneurial fire sales of property, and the financial accelerator may be very large. These mechanisms give rise to potentially large drops in output and in property prices.
In Mertens and Ravn (2011e) we decompose the channels through which this amplification process takes place. We show that the financial accelerator is large when (a) housing debt is written in nominal contracts, (b) the borrowing constraint is formulated as a collateral constraint, and (c) the amount of leverage is high. All three of these conditions appear to have been present before the crisis. Mertens and Ravn (2011b) examine in some detail how financial innovation matters for the size of the output loss in a liquidity trap and for the critical persistence of the pessimistic state required for the self-fulfilling equilibria to exist. Higher leverage implies higher house prices and rules out short-lived liquidity traps, but at the same time implies much more dramatic output losses when such equilibria do occur. Therefore, the process of financial innovation that took place during the 1990s and 2000s may not only have spurred house price inflation but may also have sown the seeds of a severe crisis.
Q&A: Jeremy Greenwood on DGE beyond Macroeconomics
Jeremy Greenwood is Professor of Economics at the University of Pennsylvania. His early interests were in economic fluctuations, and he has since strayed into other areas of economics and beyond: premarital sex, female emancipation, the information technology revolution, and more. Greenwood's RePEc/IDEAS entry.
EconomicDynamics: Dynamic General Equilibrium (DGE) theory has found its first applications in the study of economic fluctuations, in particular with Real Business Cycle theory. You have applied DGE to many other areas. Why is it so useful outside of macroeconomics (in a narrow sense)?
Jeremy Greenwood: I remember Finn Kydland presenting "Time to Build and Aggregate Fluctuations" at the University of Rochester sometime in the early 1980s. I was a graduate student at the time, taking a reading course in dynamic programming from James Friedman, the game theorist (emeritus at UNC). We were reading Bellman's book "Dynamic Programming", Denardo's (1967) paper on "Contraction Mappings in the Theory Underlying Dynamic Programming", and Karlin's (1955) paper "The Structure of Dynamic Programming Models". I was the only macro student in this small class. During Finn's seminar, I could feel the goose bumps on my arms: Who knew that you could solve dynamic programming models on the computer?

The place that Kydland and Prescott's (1982) landmark paper would have in economics was unclear at first. Look at McCallum's (1989) discussion of Kydland and Prescott's work. He talks about the meaning of technology shocks, the (ir)relevance of solving central planners' problems, the absence of money in the framework, and the model's inability to mimic the cyclical behavior of prices and wages. The importance for business cycle theory of Kydland and Prescott's landmark paper cannot be overstated. In retrospect, however, its prime achievement was to introduce modern techniques from operations research and numerical analysis into economics more generally.
General equilibrium modeling was waning at the time because it had reached a point of diminishing returns. Higher-level mathematics was no longer yielding interesting new results. Kydland and Prescott signaled that you could put a dynamic stochastic general equilibrium model onto a computer and simulate it to get new and interesting findings. Researchers were no longer limited by pencil-and-paper techniques. A new era had dawned. Of course, other disciplines had reached this point earlier. Aerospace engineers had long realized that they could not hope to discover the properties of a helicopter flying through turbulence using pencil-and-paper techniques. The same is true for astrophysicists trying to model the instant after the big bang. Both use computers. In some quarters in economics there is still a resistance to this. Simulations aren't general enough, people say. Yet these same individuals will take drugs or fly in planes designed on computers. Nobody "proved" these things are safe. Anyway, the idea of simulating dynamic stochastic models has applications in many fields of economics. You can use it in public finance to study the impact of taxation. In fact, Shoven and Whalley (1972) did this in a static setting before Kydland and Prescott. Labor economists can use it to explore topics such as the occupational mobility of workers, as in Kambourov and Manovskii (2009). People in industrial organization employ simulation methodology to model firm dynamics. A classic example here is Hopenhayn and Rogerson (1993). One can use it in models of international trade to explain how shifts in tariffs and other trade costs affect job turnover and the distribution of wages–Cosar, Guner, and Tybout (2010). Questions in finance, such as consumer bankruptcy, have been addressed as well, as in Chatterjee, Corbae, Nakajima and Ríos-Rull (2007).
ED: Your applications are not limited to economics. How can economics, and DGE in particular, teach us something about topics typically covered by other social sciences?
JG: Economics has a lot to say about modeling human behavior, both at the individual and aggregate levels. Take the sexual revolution. In yesteryear, out-of-wedlock births were rare. Since contraception was primitive, one can surmise that premarital sex was not widespread. In the U.S. only 6% of 19-year-old females would have experienced premarital sex in 1900, versus approximately 75% today. Why? Historically, the consequences of a young woman engaging in premarital sex were dire. Most families were at or near a subsistence level of consumption. Unmarried young women often just abandoned foundlings. This is documented in two great books by Fuchs (1984, 1992). Today, contraception is very effective. The odds of not becoming pregnant from a single encounter might have risen from something like 0.961 in 1900 to 0.996 today. This is a big difference. If a girl has sex many times, then to calculate the odds of not becoming pregnant after n encounters, you need to raise these numbers to the power of n. Therefore, the benefit/cost calculation of engaging in premarital sex has changed dramatically. Economists would accordingly expect a rise in premarital sex to occur.

Culture is often taken as a force outside economics. Economics can cast some light on how culture evolves, though. In the past, draconian measures were taken to regulate premarital sex. In New Haven in 1700, about 70% of criminal cases were for "fornication." There were huge incentives for parents to try to mold their offspring's preferences to prevent premarital sex. They worked hard to stigmatize the act. The same was true for churches and states, which had to provide charity for unwed mothers at great financial burden to themselves. This socialization process was very costly for parents, churches and states. The increase in the efficacy of contraception reduced the need for all of this.
Therefore, over time the stigmatization associated with sex has declined as parents, churches and states engaged in less socialization. One message from this is that some component of culture is clearly endogenous.
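The power-of-n arithmetic above is easy to verify. Here is a minimal sketch using the illustrative per-encounter odds quoted in the interview (roughly 0.961 in 1900 versus 0.996 today); the figures are ballpark numbers from the text, not precise estimates:

```python
def p_no_pregnancy(p_safe_per_act: float, n: int) -> float:
    """Odds of not becoming pregnant after n independent encounters."""
    return p_safe_per_act ** n

# Per-act odds quoted in the text; illustrative, not precise estimates.
P_1900, P_TODAY = 0.961, 0.996

for n in (1, 10, 50):
    print(f"n={n:>2}: 1900 -> {p_no_pregnancy(P_1900, n):.3f}, "
          f"today -> {p_no_pregnancy(P_TODAY, n):.3f}")
```

After 50 encounters the 1900 odds of avoiding pregnancy fall below 15%, while today's remain above 80% — the dramatic shift in the benefit/cost calculation the argument turns on.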
ED: Speaking of adolescent sexuality, parents spend considerable effort instilling some rationality in their offspring in this regard. Economic models, and forward-looking expectations models in particular, assume rationality and rational expectations. Do you feel this is appropriate?
JG: Yes, I do. Rational behavior is a central tenet of modern economics. There are no good alternative modeling assumptions. Take sex, drugs and alcohol. You could assume that people engage in risky behavior because they don't understand the hazards. Specifically, they don't appreciate the risks of becoming pregnant or contracting HIV/AIDS, or the impact that alcohol and drugs can have on their health. Public policies aimed at educating people should then have a big impact on people's behavior. An alternative view is that people do understand the consequences of their actions. They enjoy having sex, drinking alcohol, or doing drugs. The risks of becoming pregnant or contracting HIV/AIDS are actually low if you just have a single encounter. So are the odds of becoming addicted to or overdosing on alcohol or drugs. From the perspective of a rational person, the cost of a single night engaging in such behavior might seem low relative to the benefit. So, they do it. The probability of trouble rises quickly with the number of nights involved in such activity. But people decide such things sequentially, one act at a time. The public policy prescriptions when you take the latter approach are unclear. People are doing something they enjoy, they understand the risks, and a certain number of them will have a bad experience. If there were no negative externalities, one view might be to just leave these people alone. If you think insurance markets are incomplete, then perhaps you would provide them with medical or other services. But this encourages risky behavior. If you think there are externalities, perhaps you should tax the activity to dissuade people from engaging in it. Or you could jail people for taking drugs. The latter policy doesn't seem to curtail such behavior much, perhaps because the likelihood of getting caught for a single snort or toke is so small. Enforcement is very expensive too. There is no easy solution.
This is the position society is in.

Philipp Kircher, Michèle Tertilt and I adopt the rational approach to modeling the HIV/AIDS epidemic in Africa. Work by Delavande and Kohler (2009) on the HIV/AIDS epidemic in Malawi suggests that people's subjective expectations about whether or not they have HIV/AIDS, based on their sexual behavior, are rational. In our analysis, people update this belief in a Bayesian fashion, depending on what type of sexual behavior they have been engaging in. The analysis suggests that policies such as circumcising males may backfire. Circumcision reduces the risk of contracting HIV/AIDS for a male, or so some researchers believe. But, as a result, males may engage in more risky behavior. This is bad for females. While at this time it might be unclear just how rational people are in thinking about decisions involving risky behavior, the work done so far illustrates that a model based on rational behavior can account for a number of stylized phenomena concerning the Malawian HIV/AIDS epidemic.
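The belief-updating mechanism can be illustrated with a minimal sketch. This is a simplification, not the calibrated analysis in the paper, and the prevalence and transmission numbers below are hypothetical: a rational agent's probability of being infected rises with each risky encounter, since after the encounter the agent is either already infected or has been newly infected by a positive partner.

```python
def update_belief(belief: float, prevalence: float, transmission: float) -> float:
    """One-step update of the agent's probability of being HIV-positive
    after an additional risky encounter: either already infected, or
    newly infected by an HIV-positive partner."""
    return belief + (1.0 - belief) * prevalence * transmission

# Hypothetical numbers for illustration only (not values from the paper):
# belief after a sequence of risky encounters, starting from zero.
belief = 0.0
for _ in range(20):
    belief = update_belief(belief, prevalence=0.10, transmission=0.02)
print(f"belief of being HIV-positive after 20 risky encounters: {belief:.4f}")
```

Because the belief, and with it the perceived cost of further risky behavior, rises with past behavior, an intervention that lowers the per-encounter transmission risk also slows this updating — one way to see why a policy like circumcision can induce more risk-taking.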
ED: Your research has dealt with issues usually handled by other social sciences. How have non-economists reacted to economists stepping on their toes?
JG: Well, so far most of the hostility has come from people within the economics profession. The work mentioned above spans areas such as economic history, labor economics and macroeconomics. Each area has its own way of doing things. People often aren't so open-minded when you enter their territory. Applied economists frequently are not acquainted with modern theory and sometimes aren't enamored with it either. There is a lot of hostility toward using simulation-based methods. "Engines of Liberation" was rejected at the American Economic Review, the Journal of Political Economy and the Quarterly Journal of Economics before it was accepted at the Review of Economic Studies. Even there it wasn't smooth sailing, but the Editor, Fabrizio Zilibotti, was willing to intervene and manage the process. This took both courage and effort on his part. You rarely see this because it is costly for an editor to do. Here is a link to a hostile referee report from a well-known economic historian that we received from the AER. The referee's second point illustrates that he truly had no conception of how difficult doing this sort of work is. Other times people just don't understand notions that are ubiquitous in macroeconomics, such as sequential optimization or perfect foresight equilibrium, as can be seen from this report on "Social Change: The Sexual Revolution."

Other people in the area relate similar experiences. In a very small way, I now appreciate the difficult time that Lucas, Prescott, Sargent and Wallace had in the late 1960s and 1970s. Given this, I have some concern about young researchers in this area. In the last two years Jim Heckman and Nezih Guner have held separate conferences on family economics, at The University of Chicago and Autònoma de Barcelona, which have brought together researchers in labor economics and macroeconomics. This is great. It promotes understanding across fields.
Exciting new work was presented at these conferences, such as Stefania Albanesi’s research on maternal health and fertility. Satyajit Chatterjee, Lee Ohanian and I have held small conferences at the Philadelphia Fed that encourage this type of work. Hopefully, things will get better. It has been very exciting working on such topics (rather than doing the same old same old) and I would not have given up the experience for anything.
This volume was assembled by Konstantinos Tatsiramos and Klaus Zimmermann to celebrate the awarding of the 2005 IZA Prize in Labor Economics to Dale Mortensen and Christopher Pissarides. After the pair's co-winning of the 2010 Nobel Prize, the book takes on another dimension.
The meat of the book comprises five chapters that are reprints of classics by Mortensen and Pissarides that shaped labor search theory. One chapter discusses matching as a non-cooperative process; the second introduces a model of the short-run dynamics of vacancies and unemployment, which the next chapter applies to the United Kingdom. Chapter 4 uses the now-famous Mortensen-Pissarides matching function to study job creation and job destruction, and the last chapter studies wage distributions in equilibrium. These five chapters are led by an introduction in which the laureates present their view on how flows in the labor market should be modelled, and in particular why equilibrium theory is more coherent than the disequilibrium theory it replaced.