Economic Dynamics Newsletter

Volume 17, Issue 1 (April 2016)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

Jeremy Lise on Heterogeneity and dynamics in the labor market and within the household.

Jeremy Lise is currently a Reader at University College London. He will be joining the University of Minnesota as an Associate Professor in the Fall of 2016. His research interests lie in understanding labor markets and intra-household allocation. Lise’s RePEc/IDEAS profile.

I would like to take this opportunity to discuss some of the questions I am currently interested in and working on.

  1. What is the structure underlying the complex patterns we observe in earnings data? What is the relative importance of ability, effort, luck and frictions in explaining variation in labor market outcomes?
  2. Do frictions in labor markets affect low- and high-skilled labor differently?
  3. How is the allocation of heterogeneous workers to heterogeneous jobs affected by aggregate shocks?
  4. To what extent do cognitive, manual and interpersonal skills have differing returns in the labor market? How do these skills differ in the extent to which they can be learned on the job?
  5. What can economic theory tell us about how resources are shared within households? How might this change over the duration of a marriage? What light does the data shed on this?

My current work draws together and builds on developments in several branches of the literature: equilibrium labor search, matching and sorting; consumption and savings; estimating earnings processes; intra-household allocations; and equilibrium policy evaluation. These areas of the literature draw on a variety of complex data sets, including large-scale panel data, complex survey data, matched employer-employee data based on administrative records, and data generated by randomized control trials, to understand various aspects of labor market and household dynamics. Economic theory provides a framework for measuring, aggregating and interpreting these data. My research combines the development of new theory or modeling approaches, detailed work with micro data, and the application of state-of-the-art empirical methods.

1. Worker heterogeneity, uncertainty and labor market outcomes

In the labor market, we observe large cross-sectional differences across workers in terms of wages and employment rates, even after we condition on a large set of observable characteristics including measures of human capital such as education and experience. Much of this difference is likely due to remaining unmeasured differences in ability across workers. As such, unobserved heterogeneity will play a key role in any research that seeks to understand differences in outcomes. However, these fixed differences across individuals are not sufficient to explain several robust features of the age profile of dispersion in wages and consumption for a cohort of individuals: First, the variance of log wages (or earnings) increases approximately linearly with age for a cohort. Second, the variance of log consumption shows a similar linear increase, although the level and slope are less than for wages.

The observed pattern for wages is consistent with both the accumulation of permanent shocks to human capital as well as heterogeneity across workers in the slope of human capital accumulation. The pattern for consumption strongly suggests that individuals face substantial uncertainty during their working lives (either shocks to human capital or persistent uncertainty about their own human capital). Understanding the sources of this uncertainty is a major goal of my research.
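
To see why permanent shocks generate this pattern, consider a minimal illustration (the notation here is assumed for exposition and is not taken from the papers discussed below). If log wages for individual i at age a follow a random walk,

    w_{i,a} = w_{i,a-1} + \varepsilon_{i,a}, \qquad \varepsilon_{i,a} \sim \text{iid}(0, \sigma^2_{\varepsilon}),

then the cross-sectional variance for a cohort is

    \operatorname{Var}(w_{i,a}) = \operatorname{Var}(w_{i,0}) + a\,\sigma^2_{\varepsilon},

which rises linearly with age. Heterogeneous growth rates, w_{i,a} = w_{i,0} + \beta_i a, also generate a rising profile, with a term a^2 \operatorname{Var}(\beta_i) that is quadratic in age. Wage data alone have a hard time telling the two mechanisms apart, which is why combining data on shocks with data on choices is so informative.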

Separating heterogeneity from uncertainty is difficult. It requires jointly using data that reflect shocks and data that reflect choices, combined with an economic model that provides a theoretical link. In Lise (2013) I developed a model of endogenous job search and savings to provide a link between the observed process of job mobility and job loss and consumption/savings decisions. The process of choosing to search for and move to better jobs, combined with the risk of job loss that resets this process, produces a wage process with strong asymmetries. Workers expect regular and moderate positive changes to wages as they move to better opportunities, while the downside risk (wage loss) becomes increasingly large the higher a worker is on the job ladder. This asymmetry implies optimal savings behavior that produces substantial dispersion in assets and consumption, even across identical workers. The risk of falling off the job ladder when near the top gives workers a strong precautionary incentive to save.

Recently, Arellano et al. (2015) and Guvenen et al. (2015) have highlighted a striking feature of earnings changes in US data: most year-over-year changes are small, the large changes are more often negative than positive (i.e., wage changes exhibit negative skewness and excess kurtosis), and this becomes increasingly true as one conditions on higher and higher levels of previous earnings. Both sets of authors cite the model features of Lise (2013) as potentially useful for understanding this pattern and the consumption/savings behavior it would induce.

This anticipates my current work with Michael Graber (Graber and Lise, 2015), where we jointly model a stochastic process for human capital accumulation, job search, and consumption/savings choices. The model incorporates heterogeneity across workers in fixed productivity, heterogeneity across workers in their ability to acquire human capital on the job, jobs that differ in terms of both productivity and how they facilitate human capital acquisition, and shocks to human capital, as well as the random opportunities to move to better jobs and the job destruction shocks associated with the job ladder. While still very preliminary, our current results suggest that pre-labor-market differences across workers account for almost the entire initial cross-sectional dispersion; shocks to human capital account for almost the entire rise in wage and consumption variances; and the job ladder accounts for the entire profile of conditional skewness and kurtosis of wage changes. We find that the combination of permanent heterogeneity, stochastic human capital accumulation, and the job ladder is necessary for the model to jointly account for these patterns in the data.

2. Frictions and sorting in the labor market

An additional mechanism that has the potential to produce a rising variance of wages as a cohort of workers gains experience comes from the dynamic process of sorting of workers to jobs. To the extent that there are important complementarities in production between the skills of workers and the technology of firms, aggregate output is maximized under positive assortative matching (Becker 1973). Positive assortative matching also maximizes dispersion in wages across workers. If it takes time for the market to attain the fully assortative allocation (either because time and real resources must be devoted to the process or because individuals in the market have to learn about their best match) a cohort of workers may start out mismatched and only gradually become perfectly sorted. As a result, the variance of wages for this cohort will be low at the beginning when there is not much sorting and rise continuously until the variance is maximized with complete sorting.
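
A simple two-by-two example makes Becker’s logic concrete (the production functions here are assumed purely for illustration). With worker skills x_L < x_H, firm technologies y_L < y_H, and match output f(x, y), positive sorting dominates negative sorting whenever

    f(x_H, y_H) + f(x_L, y_L) \;\geq\; f(x_H, y_L) + f(x_L, y_H),

which holds if f is supermodular (complementary inputs); with f(x, y) = xy, for instance, the gap equals (x_H - x_L)(y_H - y_L) > 0. If instead f is additive, say f(x, y) = x + y, the gap is zero and the allocation of workers to jobs is irrelevant for aggregate output, a case that turns out to matter empirically below.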

In a recent paper with Costas Meghir and Jean-Marc Robin (Lise, Meghir and Robin, 2016, RED special issue in honor of Dale Mortensen) we analyze the wage and labor market outcomes for high school and college educated cohorts in the NLSY79 through the lens of a frictional sorting model. We find very different implications by education. We estimate that for low-skilled workers, skill and technology are essentially substitutes. Frictions are substantial and the decentralized allocation of workers to jobs is close to random, but this does not lead to a loss in output, as there are no complementarities in production. In contrast, we find substantial complementarities between skill and technology for the college educated; for this group positive sorting results in higher aggregate output. Here we find substantial positive sorting, and the decentralized allocation is close to second best (a constrained social planner could attain only a very minor improvement), largely due to the speed of worker reallocation.

In Lise et al. (2016) we use the dynamic implications of sorting as a cohort ages to estimate our model using panel data on workers only. There has recently been continuous improvement in the availability and usability of matched employer-employee data (constructed from administrative files), which will be particularly informative in furthering our understanding of the allocation of workers to jobs and sorting in the labor market. My current work with Thibaut Lamadon, Costas Meghir and Jean-Marc Robin (Lamadon et al., 2015) provides an identification proof and develops an estimator to uncover the match-specific production function using such matched employer-employee data. A key contribution of this research is that we show that the model equilibrium and the implied wage and mobility dynamics are essential for identifying and interpreting any parameter estimates. This contrasts starkly with the widely adopted statistical approach (two-sided fixed effects) which, while providing a convenient description of the data, requires identifying assumptions on mobility that are known to be inconsistent with most models of labor market mobility. This project relates closely to work that has been described in this Newsletter by Rasmus Lentz (2009), Jan Eeckhout and Philipp Kircher (2011), and Marcus Hagedorn and Iourii Manovskii (2014).

For a complete understanding of the interaction of heterogeneous workers and firms it is desirable to move beyond a scalar measure of skill. In current work with Fabien Postel-Vinay (Lise and Postel-Vinay, 2015) we develop and estimate a model in which jobs are defined by a vector of skill requirements and workers by a vector of human capital. In particular, we think of jobs (or occupations) as requiring various amounts of cognitive, manual and interpersonal skills, and of workers as differing in the amounts of these skills they currently possess. Our estimates indicate that the market treats these skills very differently. Cognitive skills have high returns and are difficult for workers to acquire on the job. Manual skills have much lower returns, but are easily picked up on the job. Interpersonal skills have moderate returns, and are approximately fixed over a worker’s lifetime. We learn about the degree to which these skills can be acquired on the job by looking at workers who have the same measured skills at labor market entry, but start in different jobs that use these skills with differential intensity. The extent to which these initial jobs differentially affect the types of jobs these workers do 5, 10 or 15 years later is informative about the extent to which skills can be adjusted as a function of the history of jobs the worker has had.

Accommodating correlated shocks in frictional labor market models with two-sided heterogeneity (whether across industries, regions, or at the aggregate level) was generally thought to be intractable, since the state space would then contain time-varying distributions. This has limited the types of questions researchers could address, since it was necessary to assume a stationary environment from the start. Recently, Guido Menzio and Shouyong Shi (2010a,b, 2011) showed that assuming that search is directed rather than random results in a block-recursive equilibrium that removes the distributions from the state space. It turns out that a related result can be proved for a class of random search models. In a recent project with Jean-Marc Robin (Lise and Robin, 2016) we develop an equilibrium model of random on-the-job search with ex-ante heterogeneous workers and firms, aggregate shocks and vacancy creation. The model produces rich dynamics in which the distributions of unemployed workers, vacancies and worker-firm matches evolve stochastically over time. We prove that the match surplus function, which fully characterizes the match value and the mobility decision of workers, does not depend on these distributions. This result means the model is tractable and can be estimated. We illustrate the quantitative implications of the model by fitting it to US aggregate labor market data from 1951 to 2012. The model has rich implications for the cyclical dynamics of the distribution of skills of the unemployed, the distribution of types of vacancies posted, and sorting between heterogeneous workers and firms.
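
To fix ideas, the object at the heart of this result is a surplus recursion of roughly the following form (stylized notation, not the exact specification in the paper): with worker type x, firm type y, aggregate state z, flow match output f(x, y, z), flow home production b(x, z), exogenous destruction rate \delta and discount rate r,

    S(x, y, z) = f(x, y, z) - b(x, z) + \frac{1-\delta}{1+r}\, \mathbb{E}\big[\max\{S(x, y, z'), 0\} \,\big|\, z\big].

Nothing on the right-hand side involves the distributions of unemployed workers, vacancies or matches, so S can be computed from primitives and the aggregate shock process alone; the distributions then evolve in the background given the acceptance and mobility decisions implied by comparisons of S.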

There are four key modeling assumptions that lead to this result: 1) match formation and destruction are efficient, 2) utility is transferable, 3) the value of a vacancy is zero, and 4) firms make state-contingent offers and counteroffers to workers. The first three assumptions are standard. They simply mean that the worker and firm agree that the value of the match is the expected present discounted sum of the output they can produce, and that they should form a match (and remain matched) only if this exceeds the value of home production. The last assumption is exactly the wage determination process proposed by Postel-Vinay and Robin (2002). Firms make take-it-or-leave-it offers when hiring unemployed workers and engage in Bertrand competition with the incumbent firm when hiring employed workers. The implication is that workers are always hired at their reservation value, which is equal to the value of remaining unemployed for those hired from unemployment and equal to the total match value with their current firm for those who are poached. A direct implication is that the value a worker receives when changing jobs does not depend on the type of firm she moves to. Bertrand competition between the poaching and incumbent firms always results in the worker receiving the total value of the match she leaves, independent of the match she moves to. When the worker leaves for another job, the firm is left with a vacancy of zero value. Thus, the value to the current worker-firm pair is the same whether the worker stays and produces or leaves.
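
In stylized notation, the worker’s side of this protocol can be summarized in one line (again illustrative, using the surplus S from the sketch above and writing U(x) for the value of unemployment). A worker of type x hired out of unemployment receives W = U(x); a worker poached from a firm of type y by a firm of type y' with S(x, y') > S(x, y) receives

    W = U(x) + S(x, y),

the full value of the match she leaves, regardless of y'. The poaching firm keeps the remainder S(x, y') - S(x, y), and the incumbent is left with a vacancy worth zero, so the joint value of the original pair is unchanged whether the worker stays or leaves, which is why mobility decisions depend only on surplus comparisons.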

At parameters chosen to match aggregate time series for the US, the model implies that in booms a wider variety of vacancies are posted, unemployed workers find jobs more quickly (although they tend to be further from their ideal job on average), and workers receive alternative offers at an increased rate, reallocating quickly in the direction of the jobs most suitable to their abilities. In contrast, in recessions unemployed workers find jobs more slowly (although they tend to be better matched to the jobs they do accept), and employed workers receive offers less frequently and hence move more slowly toward their ideal job. As a result, workers tend to be more mismatched when transiting from unemployment in a boom, but quickly become well matched through on-the-job search. In recessions they are better matched when transiting from unemployment, but mismatched workers remain so for longer due to fewer job-to-job transitions.

The model we developed in Lise and Robin (2016) operates by comparing values. For the purposes of analyzing the dynamics of allocations it is not necessary to make any additional assumptions about the wage process used to deliver the value to the worker. This has the advantage of being robust to the particular wage determination process one might assume, but the disadvantage of not providing a mapping between model parameters and observed wages. Our current work in progress provides this mapping. With a little additional structure on how wages relate to values, we derive explicit expressions for the dynamics of the joint distribution of wages over worker-firm-type matches. The next step is to use this mapping, along with matched employer-employee data, to directly identify and estimate the underlying primitive heterogeneity across workers and firms, the match production function, and the structure that generates the observed cyclical patterns in the distribution of wages at the match level.

3. Intra-household allocations

In the models of the labor market described above, a worker-firm pair is the key unit of analysis. The skills of workers and the technologies of firms combine to produce output, and local competition determines the transfers from firms to workers. Similarly, in the marriage market, the productivity and preferences of women and men combine to produce marital surplus, and outside options allocate that surplus between spouses. In parallel to my work on the labor market, I have also been interested in better understanding the determinants of time and expenditure allocations within households (Lise and Seitz, 2011). In recent work with Ken Yamada (Lise and Yamada, 2015) we use particularly rich panel data from Japan that measure the consumption expenditures allocated to household public consumption as well as the private consumption expenditures of each individual household member. Additionally, the data measure the time allocated by each household member to the market, home production and leisure. The fact that the data provide a complete description of allocations across individuals within the household, with repeated observations over time on the same households, allows us to directly estimate and test between dynamic models of intra-household allocation. We find that information on relative differences between spouses in the level and growth rate of wages, which is known or predictable at the time of marriage, is strongly predictive of relative consumption and leisure allocations across households in the cross-section. Additionally, we find that new information about wages revealed during marriage predicts changes in within-household allocations in ways that are inconsistent with efficiency in the absence of renegotiation. The data strongly reject the hypothesis that households fully commit to state-contingent allocations at the time of marriage. The results are consistent with a model of limited commitment in which new information about either partner’s market opportunities may require a renegotiation to prevent one of the spouses from being better off single. We are currently exploring further tests between models such as asymmetric information or a complete lack of commitment (period-by-period renegotiation).

Our current agenda, which is in its early stages, involves developing and estimating a dynamic model of household interaction with endogenous human capital and durable public goods (children). Clearly, children are one of the key reasons for household formation. The tradeoff between time used in market production and time used investing in children’s development involves complicated dynamic considerations when spouses cannot fully commit. Time spent with children will in general raise the value of the public good and hence the marital surplus; on the other hand, time spent away from the market may erode a spouse’s human capital, and possibly his or her bargaining position. Given the importance of human capital formation, an open question is the extent to which households are able to attain the efficient level of investment in children.

References

Arellano, M., R. Blundell, and S. Bonhomme (2015): “Earnings and consumption dynamics: a nonlinear panel data framework.” Working paper, CeMMAP.

Becker, G. S. (1973): “A Theory of Marriage: Part I.” Journal of Political Economy, 81(4), 813-846.

Eeckhout, J. and P. Kircher (2011): “Sorting in Macroeconomic Models.” EconomicDynamics Newsletter, 13(1).

Graber, M. and J. Lise (2015): “Labor Market Frictions, Human Capital Accumulation, and Consumption Inequality.” Manuscript.

Guvenen, F., F. Karahan, S. Ozkan, and J. Song (2015): “What Do Data on Millions of U.S. Workers Reveal about Life-Cycle Earnings Risk?” Working paper, NBER.

Hagedorn, M. and I. Manovskii (2014): “Theory Ahead of Identification.” EconomicDynamics Newsletter, 15(1).

Lamadon, T., J. Lise, C. Meghir and J.-M. Robin (2015): “Matching, Sorting, Firm Output and Wages”. Manuscript.

Lentz, R. (2009): “Heterogeneity in the Labor Market.” EconomicDynamics Newsletter, 11(1).

Lise, J. (2013): “On-the-Job Search and Precautionary Savings.” Review of Economic Studies, 80(3): 1086-1113.

Lise, J., C. Meghir and J.-M. Robin (2016): “Matching, Sorting and Wages.” Review of Economic Dynamics, 19(1): 63-87. Special Issue in Honor of Dale Mortensen.

Lise, J. and F. Postel-Vinay (2015): “Multidimensional Skills, Sorting, and Human Capital Accumulation.” Manuscript.

Lise, J. and J.-M. Robin (2016): “The Macro-dynamics of Sorting between Workers and Firms.” Manuscript.

Lise, J. and S. Seitz (2011): “Consumption Inequality and Intra-Household Allocations.” Review of Economic Studies, 78(1): 328-355.

Lise, J. and K. Yamada (2015): “Household Sharing and Commitment: Evidence from Panel Data on Individual Expenditures and Time Use.” Manuscript.

Menzio, G. and S. Shi (2010a): “Block recursive equilibria for stochastic models of search on the job.” Journal of Economic Theory 145(4), 1453-1494.

Menzio, G. and S. Shi (2010b): “Directed search on the job, heterogeneity, and aggregate fluctuations.” American Economic Review: Papers and Proceedings 100(2), 327-32.

Menzio, G. and S. Shi (2011). “Efficient search on the job and the business cycle.” Journal of Political Economy 119(3), 468-510.

Postel-Vinay, F. and J.-M. Robin (2002): “Equilibrium wage dispersion with worker and employer heterogeneity.” Econometrica, 70(6), 2295-2350.


EconomicDynamics Interview: Giancarlo Corsetti on Debt dynamics

Giancarlo Corsetti is Chair in Macroeconomics and Fellow of Clare College at the University of Cambridge. He is interested in open macroeconomics, in particular crises and policies in an international context. Corsetti’s RePEc/IDEAS profile.

EconomicDynamics: To some, it looks like Europe locked itself into a path with unsustainable debt that may even be entirely self-inflicted. What went wrong in the Euro area (EA)?

Giancarlo Corsetti: Consider a comparison between the US and the EA. The aggregate level of public debt in the US is not that different from the EA’s. Yet the US has been able to reduce unemployment after the shock of the global crisis, running deficits and expanding the balance sheet of the Fed, without suffering any tension in the debt market. Initially, the crisis hit regions/states asymmetrically, causing large variations in unemployment and thus in local fiscal conditions. Yet, thanks to a sufficiently developed institutional framework, there was no geographical polarization of risk and borrowing costs. In response to stabilization policies, aggregate recovery and regional convergence in unemployment rates went hand in hand.

Conversely, even though the crisis shock in the EA was initially less geographically concentrated than in the US, an uncoordinated and unconvincing policy response in an institutional vacuum ended up magnifying regional weaknesses in fundamentals, eventually generating a sovereign-risk and ultimately a country-risk crisis.

As of today, aggregate economic activity has not recovered in the EA. The aggregate problem and the internal polarization are two sides of the same coin. In an economy with diverging fiscal, financial and macroeconomic conditions at the regional level, (a) the transmission of monetary policy is profoundly asymmetric: borrowing conditions for public and private agents are quite different across borders and respond differently to policy decisions; (b) fiscal policy has a strong contractionary (procyclical) bias; and, most importantly, (c) a profound divergence in views and interests among national policy makers on how to adjust to shocks has reduced reciprocal trust to a historical minimum. Conflicts create continuing policy uncertainty and delay interventions. If anything is done, it is done too little, too late.

In each country in the EA, the crisis has unique national features, i.e., it is rooted in a specific combination of financial, fiscal, and macroeconomic fragility. But after the emergence of the Greek problem, for the reasons highlighted above, the crisis became largely systemic, with sovereign spreads at times completely driven by common factors. It is well understood that multiple equilibria are possible in economies that lack policy credibility and have high levels of debt (a point stressed by many recent papers, see e.g. Lorenzoni and Werning 2015 or my work with Luca Dedola, among others). The difficulty in policy formulation is that, in practice, weak fundamentals and self-fulfilling expectations are near impossible to separate in a crisis. What happened in the EA after 2010 is best described by a combination of the two, reflecting rising fiscal liabilities and the inability to implement credible policy responses (or credible reform), at both the domestic and the union-wide level.

ED: Is there a general lesson for the literature on sovereign debt crisis?

GC: We typically think of the cost of default as hitting an economy ex post, when a credit event actually occurs (in the case of the EA, this can take the form of a breakup of the union with forced conversion of debt into local currency). The recent EA experience reminds us that the mere possibility of default (or a currency breakup) in some state of the world at some point in the future can already cause large costs in the present, in the form of persistent and deep macroeconomic distress. The problem tends to be mostly attributed to a ‘diabolic loop’ linking sovereigns and banks in a crisis (see e.g. Brunnermeier et al. 2016): as the price of public debt falls in a fiscal crisis, banks’ balance sheets suffer, the supply of credit contracts, financial stability is shaken, and the fiscal outlook further deteriorates, closing the loop. In joint work with Keith Kuester, André Meier, and Gernot Mueller, however, we realized early on that the problem is more pervasive. Even for large non-financial corporations, possibly multinationals that do not depend specifically on any national banking system, financial conditions are strongly correlated with those of the state in which they are headquartered. As a general pattern, private borrowing costs in the EA rose and borrowing conditions deteriorated sharply with sovereign risk premia.

With rising premia, the costs of prospective default may stem from either falling aggregate demand or tightening financial constraints, or a combination of the two. More empirical and theoretical work on this topic is badly needed, for instance, concerning the roots of the country-risk premia (uncertainty about tax regimes, macroeconomic conditions, debt overhang, and general political risks).

ED: You have tried to answer some of these issues.

GC: I have mainly focused on the transmission via aggregate demand, based on a version of the New Keynesian model set up by Curdia and Woodford (2010). In Corsetti et al. (2013, 2014), we show that, with policy rates at the zero lower bound, not only may adverse cyclical shocks be substantially amplified by the implied deterioration of the fiscal outlook, but also, under the same conditions, fiscal policy becomes an unreliable tool. The large multiplier of government spending at the zero lower bound that has been widely discussed in the recent literature may not materialize. In the model, first, the size—in fact, even the sign—of the multiplier appears to be quite sensitive to the extent of nominal distortions and to market expectations concerning the persistence of the cyclical shocks. Second, sovereign risk affects indeterminacy, raising the risk that expectations become unanchored.

To wit: suppose that, in a monetary union with policy rates at the zero lower bound, markets develop an arbitrarily pessimistic view of macroeconomic developments in a country, implying a string of higher deficits and debt in the near future. All else equal, this translates into a deterioration of the country’s fiscal outlook, which in turn raises country risk and the borrowing costs for the private sector. Aggregate demand falls. With some downward nominal rigidities, the ensuing fall in economic activity validates ex post the initial, arbitrary expectations. A similar mechanism may depress output via lower investment and growth (a channel active also in the absence of nominal rigidities). Note that, from the vantage point of market participants, the crisis and downturn appear to be entirely justified by weak fundamentals.
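
A bare-bones sketch may help fix ideas (this is purely illustrative, with assumed notation, and is not the model in Corsetti et al.). Suppose the country risk premium responds to the deficit markets expect, but only above some threshold d^*; the deficit worsens when output falls; and output falls with private borrowing costs:

    \rho = \phi\, d^{e} \ \text{ if } d^{e} > d^{*} \ (\text{else } \rho = 0), \qquad d = \bar{d} - \tau y, \qquad y = -\gamma \rho .

If markets expect d^{e} = \bar{d} \leq d^{*}, the premium is zero and the realized deficit is \bar{d}, confirming the optimism. If instead they expect d^{e} = \bar{d}/(1 - \tau\gamma\phi) > d^{*} (with \tau\gamma\phi < 1), the premium, the output loss and the realized deficit rise exactly enough to confirm the pessimism. Weak fundamentals (a high \bar{d}) and self-fulfilling beliefs thus generate the same observable outcome, which is why they are so hard to tell apart in a crisis.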

Rethinking the evidence from the EA crisis in light of this model, it is quite clear that stemming the above vicious circle required more than procyclical deficit cuts (the first reaction to the crisis), especially when crash budget corrections contribute very little to restoring policy credibility, at both the domestic and the union-wide level.

To illustrate the potential costs of the above mechanisms, contrast quarterly GDP growth in the UK and in the EA crisis countries after the crisis. Take Italy. Before the summer of 2011, when the sovereign risk crisis extended to Italy, GDP in the two countries (setting 2008Q1=100) moved in sync. After 2011, when the sovereign risk crisis hit Italy in full force, the UK remained on a path of low but steady growth, while Italy lost more than 10 percentage points relative to the UK. Between 2011 and 2015, Italian debt rose from below 120 to above 132 percent of GDP. Based on the premise that the crisis was in part self-fulfilling, and reflected the inability of EA policymakers to resolve their differences and conflicts, much of the economic and social cost of this crisis could have been avoided. And of course it could have been much worse without the Outright Monetary Transactions (OMT) programme.

ED: Can policy do anything about it?

GC: In a belief-driven crisis, the first line of defence can be provided by central banks. Surprisingly, until very recently very little work had been devoted to the subject. In recent joint work with Luca Dedola, we have analyzed the conditions under which the central bank can rule out self-fulfilling sovereign default (mind: not fundamental default) via a credible threat to intervene in the government debt market (Corsetti and Dedola 2016). The starting point of our analysis is that central banks can issue liabilities in the form of (possibly interest-bearing) monetary assets that are exposed only to the risk of inflation, not to the risk of default. Hence, when a central bank purchases debt, it effectively swaps default-risky for default-free nominal liabilities, lowering the cost of borrowing. It is by virtue of this mechanism that, when markets coordinate their expectations on the anticipation of (non-fundamental) default, central bank interventions on an appropriate scale can prevent the cost of issuing debt from rising substantially, and thus prevent default from becoming an attractive policy option for fiscal policymakers. Theory here is important to clarify that a “monetary backstop” to government debt need not rely on a (threat of) debt debasement via inflation. Quite the opposite: the credibility of a monetary backstop via interventions in the debt market may be at stake if these foreshadow high future inflation. It turns out that a strong aversion to inflation, shared by policymakers and society, is a key precondition for its success.

Arguably, most central banks in advanced countries, if only implicitly, have provided a monetary backstop to their governments throughout the crisis years. Before 2012, the European Monetary Union lacked the institutional framework required for the ECB to do so. This framework could only come into existence after member states finally agreed on a reform of the fiscal rules, on the creation of the European Stability Mechanism (addressing conditionality), and on a blueprint for banking union. In September 2012, the ECB was eventually in a position to launch its OMT programme (still, amid political objections).

The importance of these developments in 2012 for the integrity of the euro area cannot be overemphasized. Yet they came late and only addressed part of the ongoing problems. Financial and macroeconomic conditions in the problem countries continue to be weak. There has been little or no reversal of internal fragmentation and polarization.

As Europeans are rethinking and debating the institutional future of the euro, there are several questions that require more economic analysis. Ultimately, the responsibility of designing strong fiscal institutions in the EA cannot but remain with national governments. At the same time, it is important to recognize that, logically, debt sustainability also depends on the institutions and regimes of official lending. Europe has moved from IMF-style interventions to an approach lengthening the maturity of the loans and charging concessional rates. We need a theoretical framework to assess the effects and implications of these different approaches—a task that I am currently pursuing in joint work with Aitor Erce and Tim Uy.

References

Brunnermeier, M., L. Garicano, P. R. Lane, M. Pagano, R. Reis, T. Santos, D. Thesmar, S. Van Nieuwerburgh, and D. Vayanos (2016). “The Sovereign-Bank Diabolic Loop and ESBies,” American Economic Review P&P, May, forthcoming.

Corsetti, G. and L. Dedola (2016). “The Mystery of the Printing Press: Self-fulfilling Debt Crises and Monetary Sovereignty,” CEPR Discussion Paper 11089.

Corsetti, G., A. Erce, and T. Uy (2016). “Debt Sustainability and the Terms of Official Lending,” University of Cambridge, in progress.

Corsetti G., K. Kuester, A. Meier, and G. Mueller (2013). “Sovereign Risk, Fiscal Policy, and Macroeconomic Stability,” Economic Journal, February, pages F99-F132.

Corsetti G., K. Kuester, A. Meier, and G. Mueller (2014). “Sovereign risk and belief-driven fluctuations in the euro area,” Journal of Monetary Economics, vol. 61, pages 53-73.

Curdia V. and M. Woodford (2010). “Credit spreads and monetary policy,” Journal of Money, Credit and Banking, vol. 42(s1), pages 3-35.

Lorenzoni G. and I. Werning (2015). “Slow Moving Debt Crises,” mimeo, MIT.


SED: Letter from the President

Dear Friends:

I am getting excited about this year’s meeting in Toulouse. The Toulouse School of Economics hosts our meeting 30 June through 2 July 2016. Christian Hellwig and Franck Portier, who head the Local Organizing Committee, are setting up what I am sure will be a stimulating and enjoyable meeting. The Program Committee, headed by Manuel Amador and Pierre-Olivier Weill, has lined up an incredible set of plenary speakers: Fernando Alvarez, Mariacristina De Nardi, and Jean-Marc Robin.

Manuel and Pierre-Olivier and their committee had a difficult task putting the program together. This year we hit a new record of 1662 submissions. We made some adjustments after last year’s record of 1492 submissions: at Christian and Franck’s suggestion and with their help, we expanded the number of parallel sessions from twelve to thirteen, adding 36 slots in the program for papers; Manuel and Pierre-Olivier reduced the size of the program committee, which frees up slots for contributed, as opposed to invited, papers, because we “pay” each member of the committee by letting him or her organize a session; and we have added a poster session for current and recent Ph.D. students (more on that in a moment). I understand that these adjustments have not been enough to keep many loyal SED members happy, however. With only 468 slots in the program and 1662 paper submissions, the acceptance rate is only 28 percent. Our program committee does not have the resources to screen papers as carefully as our journal does, and we are clearly rejecting some very good submissions. We will reopen the question of expanding the meeting when we meet in Toulouse. One possibility, which Antonio Merlo and I used when we organized the 1999 Meeting in Alghero, would be to add another day or half day to the schedule. If you have any thoughts or suggestions on this issue, you can send them to me at [email protected].

Last year in Warsaw, we discussed adding a poster session for current and recent grad students.  Christian and Franck are initiating this on an experimental basis this year in Toulouse.  This newsletter contains more information on how to apply to present a poster in the session.  If this initiative is successful, we will implement it at future meetings.

The 2017 Meeting will be hosted by the University of Edinburgh on 22–24 June 2017.  The Local Organizing Committee, headed by Sevi Rodriguez Mora, is already arranging what promises to be a fabulous meeting.  Veronica Rappaport and Kim Ruhl have agreed to be the Co-Chairs of the Program Committee.

I look forward to seeing those of you who were lucky enough to have your paper accepted in Toulouse.  I hope that we can find a good way of adding more acceptances for the Edinburgh Meeting.

As I say, I am getting excited about the Toulouse Meeting.  Looking at the conference website, http://economicdynamics.org/2016-sed-meeting/, I see a web page on the gastronomic offerings of Toulouse and the surrounding region.  I intend to sample these offerings extensively.  I hope that you can join me.

Tim Kehoe

President, Society for Economic Dynamics


SED: Poster Session (New!)

 For the 2016 SED meetings in Toulouse, we are planning to add a poster session, in addition to the regular program. The poster session is intended for current PhD students or PostDocs no more than 2 years past their PhD.

Candidates interested in participating in the Poster Session need to be nominated by their PhD advisors or PostDoc mentors. Nominations should include the student’s paper, along with a short nomination letter by the nominating faculty, and be sent by email to Christian Hellwig ([email protected]) and to Manuel Amador and Pierre-Olivier Weill ([email protected]), no later than May 20, 2016, with successful nominations to be announced shortly thereafter.

The selection for the Poster Session is independent of the regular program, so you may be nominated even if your paper was not included in the regular program or you did not submit one.


A History of Macroeconomics from Keynes to Lucas and Beyond by Michel De Vroey

Macroeconomics is going through a period of introspection, and looking at where the current theoretical and methodological body of work comes from can be insightful. Michel De Vroey’s new book is important in this regard. It shows how macroeconomics has gone through several revolutions, and may go through more. It describes how Keynes supplanted the Classics, looking at the important economists of the time through their eyes but with today’s language, and not shying away from going through the models in fair detail. The same exercise is performed for the revolution led by Robert Lucas and his contemporaries. A significant part of the book relates how the field has evolved after the initial impetus from Lucas.

Several underlying themes emerge from this analysis, as economists of the various periods came to grips with empirical phenomena they could not explain with extant theory: first, the notion of equilibrium; second, price setting, especially of wages; third, external versus internal consistency; and fourth, the relationship between data and theory. Broadly, this can be summarized as a struggle between Marshallian and Walrasian approaches, which De Vroey not only describes but also does not hesitate to assess, for example in terms of policy readiness.

A History of Macroeconomics is published by Cambridge University Press.


Bayesian Estimation of DSGE Models by Edward Herbst and Frank Schorfheide

DSGE models are increasingly being estimated, and Bayesian methods are a favorite in this regard. Frank Schorfheide was one of the pioneers of these methods, and he has teamed up with Edward Herbst to deliver a compact and readable manual that should suit everyone who wants to venture into these techniques.

Bayesian estimation is widely used for small- and medium-scale New Keynesian models, so the book starts with the natural building blocks: the Smets-Wouters model on one side and Bayesian inference on the other. The two are then put together to show how a linearized DSGE model can be estimated using various techniques, the popular Random-Walk Metropolis-Hastings algorithm and the so-far little-used Sequential Monte Carlo method. The last part of the book shows how non-linear models can also be dealt with thanks to the SMC method. Code and data are available on the authors’ websites.
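
To give a flavor of the first of those techniques, here is a minimal random-walk Metropolis-Hastings sketch in Python. It targets a toy two-parameter Gaussian posterior rather than an actual DSGE likelihood, and all function names and tuning values are illustrative assumptions, not code from the book.

    import numpy as np

    # Illustrative log posterior: a toy 2-parameter Gaussian standing in for
    # the log prior plus the log likelihood of a linearized DSGE model.
    def log_posterior(theta):
        mu = np.array([0.5, 2.0])
        sigma = np.array([0.2, 0.5])
        return -0.5 * np.sum(((np.asarray(theta) - mu) / sigma) ** 2)

    def rwmh(theta0, n_draws=5000, step=0.1, seed=0):
        """Random-walk Metropolis-Hastings: propose theta + step*N(0, I), accept
        with probability min(1, posterior ratio), otherwise keep the current draw."""
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        logp = log_posterior(theta)
        draws = np.empty((n_draws, theta.size))
        accepted = 0
        for i in range(n_draws):
            proposal = theta + step * rng.standard_normal(theta.shape)
            logp_prop = log_posterior(proposal)
            if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis acceptance step
                theta, logp = proposal, logp_prop
                accepted += 1
            draws[i] = theta
        return draws, accepted / n_draws

    draws, acc_rate = rwmh([0.0, 0.0])
    print("acceptance rate:", acc_rate)
    print("posterior means after burn-in:", draws[1000:].mean(axis=0))

In an actual DSGE application the log posterior would be evaluated by solving the linearized model and running the Kalman filter, and the proposal scale would be tuned to keep the acceptance rate in a reasonable range; the mechanics of the sampler are otherwise the same.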

Bayesian Estimation of DSGE Models is published by Princeton University Press.
