Economic Dynamics Newsletter

Volume 13, Issue 1 (November 2011)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

Jan Eeckhout and Philipp Kircher on Sorting in Macroeconomic Models

Jan Eeckhout is a Professor of Economics at University College London and at Barcelona GSE/UPF. Eeckhout's research has been concerned with labor markets, matching and sorting. Eeckhout's RePEc/IDEAS profile. Philipp Kircher is a Reader of Economics at the London School of Economics. Kircher's research has been concerned with labor markets, matching and sorting. Kircher's RePEc/IDEAS profile.

1. Introduction

We address the notion of skill allocation across firms and across jobs, and how we can introduce the allocation of skills into otherwise standard macro models. Heterogeneity in skills and jobs is without doubt an important component of the labor market. Individuals are born with different innate abilities and non-cognitive skills, they are brought up in diverse households, they have varying educational backgrounds, and their work experience and learning tend to further exacerbate differences between workers. In addition, there are also differences on the demand side: jobs differ in their productivity, the span of control of managers over their workers varies, and firms employ different technologies.

In the presence of two-sided heterogeneity, the key determinant of the observed allocation and wages is whether there are complementarities between worker skills and job characteristics. Without complementarities (for example, when individuals differ only in efficiency units of labor) it does not matter for efficiency where each individual works. Putting it starkly: a CEO would add no more to the value of the economy cleaning offices than orchestrating mergers and acquisitions. In a competitive market she would earn no more in one activity than in the other. Her productivity could be decomposed into an additive worker effect and an additive firm effect: output might be higher in some occupations, such as orchestrating mergers and acquisitions, but that premium is the same for highly trained professionals as for untrained high-school dropouts.

Instead, symptomatic of complementarities in value creation is that equilibrium wages depend on both worker characteristics and firm types in ways that are not easily decomposable. Sorting, i.e., the matching pattern between jobs and workers, is crucial for the efficiency of the market. Efficiency is no longer simply about whether workers are employed; the central question is whether they are employed at the right jobs, and whether the right number of people are employed in each kind of job. A central theme in our research is how such complementarities shape employment patterns and wages, how this changes our modeling and thinking about the labor market, and how one might conceptually measure the importance of complementarities in existing datasets.

The aim of this research is to investigate how complementarity can be embedded into standard macro environments. The models should be sufficiently tractable to yield analytical results, and sufficiently rich to answer interesting macroeconomic questions. We illustrate this approach by considering three recent strands of research on two-sided matching environments: (1) the interplay between firm size and workforce quality, (2) the implications of search frictions for the sorting between firms of different productivity and workers of different quality, and (3) the quest for evidence of sorting in existing datasets. We discuss applications to labor economics, trade, and management as we proceed.

2. Sorting, Span of Control, and Factor Intensity

First, we consider the role of span of control and the size of firms in labor markets with heterogeneously skilled workers. Can we explain, for example, why the high-skilled upper management in firms like Walmart has an enormous span of control over relatively low-skilled workers, while in mom-and-pop retail stores the span of control is small and the skills of both managers and workers are average? Or what are the consequences of information technology that improves the ability to manage many workers, such as monitoring and GPS tracking devices?

Most theories of sorting follow the tradition of Becker's (1973) canonical model of the labor market, where each firm consists of exactly one job. There the firm's choice is about the extensive margin, i.e., which worker to hire. For a given job type, the firm chooses the optimal worker type taking wages as given. To get an intuition for the operation of this market, one key insight is the following: if more productive firms have a higher marginal product from better workers, then in equilibrium these will be the firms that indeed hire the better workers. Such complementarity between firm productivity and worker skill shapes the matching pattern. This simple theory provides interesting links between firm heterogeneity and workers' wages. For example, if the heterogeneity of firms increases, workers' wages become more spread out and increase especially at the top, which has been used to explain the changes in CEO compensation (e.g., Terviö, 2008, and Gabaix and Landier, 2008).

The main drawback of this theory is that it misses the intensive margin that is at the heart of most macroeconomic models: How many workers does the firm employ? How much of its resources should be devoted to each worker in the workforce? Models in the tradition of Lucas (1978) that are used to explain the size distribution of firms address this intensive margin. In these models each firm has one scarce resource, the time of its manager. Managers differ in productivity and can leverage their ability over more or fewer workers, who are assumed to be homogeneous. The key question is how many workers each manager hires. Here the complementarity is between the productivity of the manager and the size of her workforce, i.e., the intensive margin: if it is positive, then more productive managers lead larger teams, explaining the firm-size distribution in plausibly calibrated models. In recent applications, Restuccia and Rogerson (2008) and Hsieh and Klenow (2010), amongst others, argue that such heterogeneity across firms of different workforce size can help to explain differences across countries in capital, TFP, and factor prices.

While both the extensive and intensive margin in isolation have attracted interest, their combination raises interesting issues: Would more productive firms hire more workers, better workers, or both? How are workers with different skills affected? How does this affect managerial compensation? What are the effects of improved information technology that allows the supervision of a larger workforce? And does it depend on the particular industry and country we are considering? The objective is to incorporate a broad notion of heterogeneity on both sides of the market.

In Eeckhout and Kircher (2011b) we extend the idea of span of control to a heterogeneous workforce. Managers simultaneously decide on both margins: the extensive margin of worker skills and the intensive margin of workforce size. The latter determines how much managerial time can be devoted to each of the workers. The output of each worker depends on his own skill, on the quality of the manager, and on the amount of supervision time that he receives. Our goal is to understand the equilibrium assignment, wages and managerial profits, and firm size. This general setup was also proposed in Rosen (1982), but solved only for a functional form that is a special case of our model, that of efficiency units of labor. Our setup also includes as special or limiting cases the functional forms of several existing models in this line of research, such as Sattinger (1975), Garicano (2000), and Van Nieuwerburgh and Weill (2010). We can also adjust the setup to match the features of the Roy model (Heckman and Honoré, 1990). Here we consider the competitive equilibrium outcome of the general model.

The interpretation of our results obviously extends to other settings beyond managers and workers: managerial time can equally be interpreted as the firm's capital, time per worker then represents capital intensity, and differences in managerial skill constitute the technological productivity differences among firms. Our analysis reveals that the equilibrium sorting patterns in these markets can by and large be understood by looking at four key margins. (1) Complementarities in types (extensive margin): Do better managers have a higher marginal product from working with better workers? This is the standard requirement in Becker (1973). (2) Complementarities in quantities (intensive margin): Is the marginal product of additional workers higher when the manager devotes more time to them? This is standard, for example, in CES technologies. (3) Complementarities between manager skill and workforce size, or span of control: Do better managers have a higher marginal product from supervising more workers of a given skill? This is the span-of-control condition that features in Lucas (1978). (4) Complementarities between worker skill and managerial time: Do better workers have a higher marginal product from receiving more supervision time?

In general, all of these margins are present, and the interesting question is how the complementarities trade off against each other. We find that better managers hire better workers if the product of the first two complementarities outweighs the product of the latter two, i.e., if (1) * (2) > (3) * (4). Under standard assumptions on quantities, (2) is always positive. So the left-hand side simply captures the standard "Becker" effect, and if we shut down the intensive margin by means of a Leontief structure we recover exactly Becker's condition ((1) > 0). Yet in the presence of the intensive margin, better managers can be good in two dimensions: they can be good at working with better workers ((1) large) or they can be good at managing many workers ((3) large). As a result, the span-of-control effect can override the standard Beckerian complementarity: good managers have so much span that it is efficient for them to manage many low-skilled workers rather than a few high-skilled ones, despite the Beckerian skill complementarity. This also yields a prediction for the firm size distribution: better managers supervise larger groups if (3)-(4) > 0 under positive sorting (and (3)+(4) > 0 under negative sorting). Managers end up managing larger teams if the span effect (3) outweighs the complementarity between time spent and worker type (4). If the marginal benefit of spending time with each worker is larger than the span-of-control effect, efficient teams are smaller for better managers.
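To fix ideas, the four margins can be collected in a firm-level output function F(x, y, l, r), with x the worker skill, y the manager skill, l the number of workers, and r the managerial time devoted to them; the notation is assumed here for illustration, with subscripts denoting cross-partial derivatives. The verbal condition for positive sorting then reads

\[
\underbrace{F_{xy}}_{(1)} \cdot \underbrace{F_{lr}}_{(2)} \;>\; \underbrace{F_{yl}}_{(3)} \cdot \underbrace{F_{xr}}_{(4)},
\]

which reduces to Becker's condition F_{xy} > 0 once the intensive margin is shut down.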

How does this manifest itself in different industries? In the retail sector, for example, high-profit companies such as Walmart invest in information technology that reduces the need for high skills relative to smaller mom-and-pop stores ((1) small or even negative), because the cash registers and inventories become nearly trivial to operate. They also invest heavily in management and control tools that allow the supervision of many workers ((3) positive and large), because these tools give them centralized information on performance across all registers and inventories. If supervision and training time generates more impact with better workers ((4) positive), then stores like Walmart employ less skilled workers than their mom-and-pop counterparts. This implies negative sorting. Under negative sorting, (3) and (4) work in the same direction to create very large firms at the top: such firms do not need to spend much time with each employee because it is not worthwhile, and they have the tools to supervise many. In that light it might not be surprising that Walmart is the largest employer worldwide. In industries with positive sorting it is much more difficult to form large teams, since it is worthwhile to spend time with each high-skilled employee. Consider management consulting, with strong complementarities between manager and subordinate skill ((1) large) but moderate span-of-control technologies ((3) moderate). This implies positive sorting. Top firms are only larger than bottom firms if their span of control (3) outweighs the benefits from training and interacting with employees (4). Given that the two counteract each other, top consulting firms tend to be only moderately larger than other firms in the industry.

How should we view changes in technology? Skill-biased technological change is usually viewed as a change that strengthens the complementarity between worker skill and firm technology. But much of technological change takes the form of information technology that changes the complementarity between manager skill and the number of workers she supervises. In this model, increases in (3) change the sorting pattern, but above all they spread out the firm size distribution: the gap between the two sides of the second condition widens, and big firms become even bigger relative to small firms.

The exact levels of wages, managerial profits, and the skill assignment are characterized by a simple system of two differential equations. Changes in skills or changes in technology can be analyzed explicitly by studying the response of this system, which gives deeper insights into the consequences of technological change. This is particularly relevant for international trade, where trade changes the availability of factors of production and sometimes introduces new technologies. So far, such changes have been analyzed mainly in settings in which the size of each firm is limited by the extent of demand: typically, firms operate in Dixit-Stiglitz type markets where consumers have a preference for variety (e.g., Costinot, 2009). Our framework is different and adds to those models of trade: output has decreasing returns because of scarce managerial resources, which limits the size of the firm. The advantage is that it can be studied without functional form assumptions, but it can also be integrated into a Dixit-Stiglitz type framework. Finally, the framework is easily extended to allow for unemployment, which makes it possible to study both the compensation and the unemployment of workers of different skills. Again, this might be important in trade settings, where it has been studied recently by Helpman, Itskhoki and Redding (2010); yet in their setting workers are ex-ante identical and earn identical expected payoffs, while in many trade settings we would like to start from a situation where workers of different types exist in the population.

Returning to the discussion of cross-country TFP differences, it will be interesting to see to what extent heterogeneity in firm size alone, as in Restuccia and Rogerson (2008) and Hsieh and Klenow (2010), explains the differences. By introducing firm size into an otherwise standard model of sorting, the debate can be illuminated by taking into account differences in skill distributions across countries, as well as the size distribution across firms. We should mention, though, that the tractability of this line of research relies on the assumption that while workers and supervisors interact, the interaction among workers is limited: they interact only to the extent that more resources devoted to one worker mean that fewer resources (supervision time) are available for another. While substantial work on competitive markets and on combinatorial matching theory has been devoted to capturing complementarities among workers (e.g., Kelso and Crawford (1982), Cole and Prescott (1997), Gul and Stacchetti (1999), Hatfield and Milgrom (2005)), the results are usually confined to existence theorems. The line of work that we follow is more restrictive, but allows clear characterizations of the size and skill level of firms, and of wages and firm profits. Conditions like the one characterizing the equilibrium allocation in our model can help build intuition for the economic mechanisms in these markets.

3. Market Frictions and Sorting

The allocation of heterogeneously skilled workers across different jobs plays a crucial role in markets with frictions. Frictions are non-negligible in the allocation process of many environments. In the labor market, for example, unemployment is considered a natural feature that arises because firms and workers need time to find a suitable match. Sorting models that go beyond competitive market conditions can describe unemployment patterns across worker skills.

Recall that Becker (1973) showed that in a market without frictions, complementarities (or, equivalently, supermodularity of the match output) between firm and worker types lead to positive sorting, where more productive firms hire better skilled workers. The match surplus is supermodular if the marginal contribution of a better worker is higher in a better firm, i.e., if the cross-partial of match output with respect to worker and firm type is positive. For the case of frictions, the best-known analytic result, by Shimer and Smith (2000), is derived in a setting with random search frictions and pairwise matching. Parties that are matched bargain over whether to stay together and produce, or to separate and wait for another meeting, where the future is discounted. They prove that complementarities between worker and firm types (supermodularity of the match surplus) are not sufficient to ensure that more productive firms hire more productive workers. That means there are production technologies under which the better firms have a larger gain from hiring the better workers, but they still tend to hire less able workers.

Intuitively, the reason is the following. In competitive markets firms know they can trade, and their only consideration is which worker they would rather hire. In the presence of frictions, they care not only about whom to hire, but also about whether they can hire at all. For a more productive firm the opportunity cost of being without a worker is higher, so it is more eager to secure a match now rather than wait. This logic extends to matching patterns in the marriage and housing markets, which makes the model widely applicable. Unfortunately, because of the mismatch inherent in random search, the mathematical conditions that ensure positive sorting do not relate directly to the matching frictions, and it is difficult to get an intuition about the forces that operate in this market. Nor are wages or employment patterns easily characterized, because the randomness of the process exposes firms and workers to many trading partners.

In Eeckhout and Kircher (2010) we build on this work and that by Shi (2001). We point out that when there is heterogeneity, the absence of information about prices in the random search model is a strong assumption. Agents are assumed to meet many trading partners that they would have rather avoided, and the transfer price has to be determined at the time of the meeting through bargaining. In contrast, we analyze a world where buyers can observe the type of their trading partner as well as the price the seller posts. Because trade is decentralized, trading frictions still exist, for example due to congestion. In this world, prices guide the trading decisions just like in the Walrasian model of Becker (1973), only now delay remains an equilibrium feature that is taken into account in the price setting.

We address the role of price competition in markets with matching frictions and how it leads to sorting of heterogeneous agents. With frictions there are two aspects of value creation: the match value when two agents actually trade, and the probability of trading, governed by the search technology. We find that positive assortative matching obtains if the complementarities in output creation are larger than the complementarities in finding a trading partner, as measured by the elasticities of substitution in the output and matching functions. The condition has a simple economic interpretation. Complementarities in output creation mean that better firms would like to employ better workers, capturing the forces in Becker (1973). But if a firm does not manage to find a worker it cannot produce, and the productive firms have the most to lose from inactivity. If they can improve their matching prospects by attracting low-skilled workers, they will do so, which creates a tendency against positive sorting, captured through the matching function.

For standard matching functions, our condition is fulfilled if and only if the output function is root-supermodular, i.e., the (square) root of the output function is supermodular. This means that the degree of complementarity needed is less than under random search, but still stronger than in the frictionless environment. To see this, we show that in the presence of random search frictions as in Shimer and Smith (2000), log-supermodularity is necessary for positive assortative matching, while with no frictions at all (Becker, 1973) there is positive assortative matching under mere supermodularity. In the neoclassical world there are no frictions, and all agents are assumed to have full information about prices and types when they decide which type to accept. At the other extreme, Shimer and Smith (2000) assume that there are random search frictions and agents cannot observe prices and types until after they meet. When there is price competition, prices partly mitigate frictions by directing the heterogeneous types to the most adequate market, thus avoiding inefficient meetings with undesirable types.
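In terms of an output function f(x, y), with x the worker type and y the firm type (notation assumed here for illustration), the three benchmarks line up as follows, with the directed-search condition stated for standard matching functions:

\[
\text{no frictions (Becker, 1973):}\ \ \frac{\partial^2 f}{\partial x\,\partial y}\ge 0,
\qquad
\text{directed search:}\ \ \frac{\partial^2 \sqrt{f}}{\partial x\,\partial y}\ge 0,
\qquad
\text{random search:}\ \ \frac{\partial^2 \log f}{\partial x\,\partial y}\ge 0.
\]

For an output function that is positive and increasing in both types these conditions are nested: log-supermodularity implies root-supermodularity, which in turn implies supermodularity, so the complementarity required for positive sorting is strongest under random search and weakest without frictions.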

The economic interpretation of this result is transparent in terms of the fundamentals of the economy, and it prominently features the role of heterogeneity together with matching frictions. In the absence of any complementarities, sorting is not important for the creation of match-value. The key aspect is to get matched at all. Due to congestion, high-type buyers would like to trade where few other buyers attempt to trade. This allows them to secure trade with high probability, and they are willing to pay for this. While sellers know that they might be idle if they attract few buyers on average, some are willing to do this at a high enough price. The low-type sellers are those who find it optimal to provide this trading security, as their opportunity cost of not trading is lowest. This results in negative assortative matching: high-type buyers match with low-type sellers. It follows that sufficient complementarity is needed in order to obtain positive assortative matching.

We can also introduce frictions when firms differ in size, i.e., as above, when there is both an intensive and an extensive margin to the firm's decision. Integrating labor market frictions into the model, we show how unemployment varies across worker types. It follows naturally from the setup that unemployment decreases in the skill type of the worker. By contrast, the frictions that firms face in vacancy creation are ambiguous: larger firms can in general face higher or lower frictions, depending on whether firm size is increasing in type.

4. Using Mismatch to Identify Complementarities

Despite the casual observation that better firms hire better workers, there is unfortunately little or no evidence to support this. Are more skilled workers really more productive in better firms? The amount of effort and resources organizations invest in hiring the "right" person for the job indicates that they are. This is indirect evidence of complementarities in production and implies that the exact allocation is important for efficiency. Yet there is little direct evidence. The most widely cited work concludes that there is no corroboration of complementarity between workers and jobs. In a seminal paper, Abowd, Kramarz and Margolis (1999) analyze the correlation between firm and worker fixed effects from wage regressions. The estimated correlation is meant to provide evidence on whether there is complementarity or substitutability, and if so, how strong it is. The idea is that more productive firms pay higher wages than less productive firms irrespective of the exact worker they hire, so the firm fixed effect recovers the ranking of the firms.

While this appears plausible, it turns out that even in a simple model this logic is flawed. The key ingredient for identifying the presence of complementarities is mismatch. Whether it is due to search frictions or information frictions, the fact that we observe agents in less than optimal jobs generates an inefficient allocation relative to the frictionless outcome, and at the same time it provides sufficient information on the extent of the complementarity. Consider a worker-job pair with mismatch. In the presence of a friction, there is a tradeoff between separating and rematching versus staying in the current match. For given frictions, the larger the mismatch, the larger the incentive to incur the cost of search and rematch.

This has implications for how wages are determined. In Eeckhout and Kircher (2011a) we first show that the wage for a given worker is non-monotonic in the type of his employer. This is because, in a sorting model, wages reflect the opportunity cost of mismatch. The key observation here is that for a given worker there can be mismatch both with too bad a firm and with too good a firm. The surplus of a match is determined by the value of the match after subtracting the outside option of rematching. When matched with too low a firm type, the worker is better off with a higher firm type, and when matched with too high a firm type, the worker is better off matching with a worse firm type. Therefore the match surplus for a given worker is inverted U-shaped in firm type. With transferable utility, this surplus is divided and therefore wages are also inverted U-shaped. In particular, the marginal firm type that is too low and the marginal firm type that is too high must both generate a surplus of zero: continuing the match must generate the same value as separation. This then implies that wages at the two marginal firm types are the same.

The non-monotonicity of wages in firm type implies that the standard procedure of using firm fixed effects and correlating them with worker types is ill-suited. That procedure requires the identifying assumption that wages are monotonic in firm type, which fails when there is mismatch. Because of the non-monotonic effect of firm type on wages, the wage cannot be decomposed into additively separable firm and worker fixed effects. We show analytically that the misspecification is not innocuous: for the most common specifications in the literature, the firm fixed effect bears no direct connection to the true type of the firm.

Instead, a simple algorithm allows us to back out (the absolute value of) the degree of complementarity. The main source of identification is the search behavior of workers, which differs when the degree of complementarity is high and, as a result, sorting is important. First, we extract the cost of search from the range of wages paid. The highest observed wage corresponds to the wage obtained in a frictionless market, and we use it to order the workers and obtain the type distribution. Likewise, we can obtain an ordering of the firms from the level of wages they pay. The difference between the highest and the lowest wage corresponds to the cost of search. Second, given the search cost, the fraction of the firm population that an agent is willing to match with, i.e., the matching set, identifies the strength of the complementarity as expressed by the (absolute value of the) cross-partial of the production function. This is possible because the strength of the cross-partial directly reflects the output loss due to mismatch.
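As a concrete illustration of these two steps, here is a minimal sketch in Python (pandas) applied to hypothetical matched employer-employee data. The column names, the use of the per-worker wage range as the search cost, and in particular the final quadratic-loss mapping from the matching-set width to the cross-partial are simplifying assumptions for exposition, not the exact estimator of Eeckhout and Kircher (2011a).

import pandas as pd

def identify_complementarity(matches: pd.DataFrame) -> dict:
    """matches: one row per observed job spell, with hypothetical columns
    'worker_id', 'firm_id', 'wage'."""

    # Step 1a: order workers by the highest wage they are observed to earn,
    # which proxies for the frictionless wage and hence the worker type.
    worker_rank = matches.groupby("worker_id")["wage"].max().rank(pct=True)

    # Step 1b: order firms by the level of wages they pay.
    firm_rank = matches.groupby("firm_id")["wage"].mean().rank(pct=True)

    # Step 1c: read the cost of search off the range of wages each worker
    # accepts (highest, i.e. frictionless, minus lowest observed wage).
    wage_range = matches.groupby("worker_id")["wage"].agg(lambda w: w.max() - w.min())
    search_cost = wage_range.mean()

    # Step 2: the matching set, i.e. the fraction of the firm-type
    # distribution each worker is willing to match with, measured here as
    # the spread of firm ranks observed per worker.
    firm_rank_of_match = matches["firm_id"].map(firm_rank)
    width = (matches.assign(fr=firm_rank_of_match)
                    .groupby("worker_id")["fr"]
                    .agg(lambda r: r.max() - r.min())
                    .mean())

    # Illustrative mapping (assumed quadratic output loss in mismatch): a
    # worker at the edge of the matching set is just indifferent, so the
    # loss there, roughly 0.5 * |f_xy| * (width / 2)**2, equals the search
    # cost, giving |f_xy| on the order of 2 * search_cost / (width / 2)**2.
    abs_cross_partial = 2 * search_cost / (width / 2) ** 2

    return {"worker_rank": worker_rank, "firm_rank": firm_rank,
            "search_cost": search_cost, "matching_set_width": width,
            "abs_cross_partial": abs_cross_partial}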

The shortcomings of the fixed effects regressions have also been pointed out in other work (Lopes de Melo, 2008; Lise, Meghir and Robin, 2008; Bagger and Lentz, 2008). Their simulations and calibrations of search models with strong complementarities and sorting nonetheless generate small or even negative correlations between the simulated fixed effects of workers and firms. We provide a theoretical foundation for this finding, while Gautier and Teulings (2004, 2006) propose a second-order approximation method to get around the shortcomings of the fixed effect regressions.

Looking forward, for applied work it is desirable to introduce heterogeneity and allocative efficiency into otherwise standard macro models. Too often, models of heterogeneity are augmented representative agent models, for good reasons of course, because modeling heterogeneity is complicated. That makes the quest for simple results and a tractable setup in the context of sufficiently rich economic heterogeneity important, despite the obvious challenges.

5. References

Abowd, John M., Francis Kramarz and David N. Margolis, 1999. “High Wage Workers and High Wage Firms,” Econometrica, Econometric Society, vol. 67(2), pages 251-334.
Bagger, Jesper and Rasmus Lentz, 2008. “An Empirical Model of Wage Dispersion with Sorting,” University of Wisconsin manuscript.
Becker, Gary S., 1973. “A Theory of Marriage: Part I,” Journal of Political Economy, University of Chicago Press, vol. 81(4), pages 813-46.
Cole, Harold L. and Edward C. Prescott, 1997. “Valuation Equilibrium with Clubs,” Journal of Economic Theory, Elsevier, vol. 74(1), pages 19-39.
Costinot, Arnaud, 2009. “An Elementary Theory of Comparative Advantage,” Econometrica, Econometric Society, vol. 77(4), pages 1165-1192.
Eeckhout, Jan and Philipp Kircher, 2010. “Sorting and Decentralized Price Competition,” Econometrica, Econometric Society, vol. 78(2), pages 539-574.
Eeckhout, Jan and Philipp Kircher, 2011a. “Identifying Sorting–In Theory,” Review of Economic Studies, Oxford University Press, vol. 78(3), pages 872-906.
Eeckhout, Jan and Philipp Kircher, 2011b. “Assortative Matching and the Size of the Firm”, manuscript.
Gabaix, Xavier and Augustin Landier, 2008. “Why Has CEO Pay Increased So Much?,” The Quarterly Journal of Economics, MIT Press, vol. 123(1), pages 49-100.
Garicano, Luis, 2000. “Hierarchies and the Organization of Knowledge in Production,” Journal of Political Economy, University of Chicago Press, vol. 108(5), pages 874-904.
Gautier, Pieter A. and Coen N. Teulings, 2004. “The Right Man for the Job,” Review of Economic Studies, Wiley Blackwell, vol. 71(2), pages 553-580.
Gautier, Pieter A. and Coen N. Teulings, 2006. “How Large are Search Frictions?,” Journal of the European Economic Association, MIT Press, vol. 4(6), pages 1193-1225.
Gul, Faruk and Ennio Stacchetti, 1999. “Walrasian Equilibrium with Gross Substitutes,” Journal of Economic Theory, Elsevier, vol. 87(1), pages 95-124.
Hatfield, John William and Paul R. Milgrom, 2005. “Matching with Contracts,” American Economic Review, American Economic Association, vol. 95(4), pages 913-935.
Heckman, James J. and Bo E. Honoré, 1990. “The Empirical Content of the Roy Model,” Econometrica, Econometric Society, vol. 58(5), pages 1121-49.
Helpman, Elhanan, Oleg Itskhoki and Stephen Redding, 2010. “Inequality and Unemployment in a Global Economy,” Econometrica, Econometric Society, vol. 78(4), pages 1239-1283.
Hsieh, Chang-Tai and Peter J. Klenow, 2010. “Development Accounting,” American Economic Journal: Macroeconomics, American Economic Association, vol. 2(1), pages 207-23.
Kelso, Alexander S. Jr and Vincent P. Crawford, 1982. “Job Matching, Coalition Formation, and Gross Substitutes,” Econometrica, Econometric Society, vol. 50(6), pages 1483-1504.
Lise, Jeremy, Costas Meghir, and Jean-Marc Robin, 2008. “Matching, Sorting and Wages”, UCL manuscript.
Lopes de Melo, Rafael, 2008. “Sorting in the Labor Market: Theory and Measurement”, Yale manuscript.
Lucas, Robert E. Jr., 1978. “On the Size Distribution of Business Firms,” Bell Journal of Economics, The RAND Corporation, vol. 9(2), pages 508-523.
Restuccia, Diego and Richard Rogerson, 2008. “Policy Distortions and Aggregate Productivity with Heterogeneous Plants,” Review of Economic Dynamics, Elsevier, vol. 11(4), pages 707-720.
Sattinger, Michael, 1975. “Comparative Advantage and the Distributions of Earnings and Abilities,” Econometrica, Econometric Society, vol. 43(3), pages 455-68.
Shi, Shouyong, 2001. “Frictional Assignment. I. Efficiency,” Journal of Economic Theory, Elsevier, vol. 98(2), pages 232-260.
Shimer, Robert and Lones Smith, 2000. “Assortative Matching and Search,” Econometrica, Econometric Society, vol. 68(2), pages 343-370.
Terviö, Marko, 2008. “The Difference That CEOs Make: An Assignment Model Approach,” American Economic Review, American Economic Association, vol. 98(3), pages 642-68.
Van Nieuwerburgh, Stijn and Pierre-Olivier Weill, 2010. “Why Has House Price Dispersion Gone Up?,” Review of Economic Studies, Wiley Blackwell, vol. 77(4), pages 1567-1606.

Q&A: Gita Gopinath on Sovereign Default

Gita Gopinath is Professor of Economics at Harvard University. She has worked on debt issues, emerging markets and international economics. Gopinath’s RePEc/IDEAS entry.
EconomicDynamics: The current crisis with Greece and possibly other European countries highlights that it has become more difficult to manage high public debt when currency devaluation is not an option. Aside from drastic austerity measures, what are possible policies?
Gita Gopinath: The interaction between high public debt and the inability to devalue has come up frequently in discussions of the Euro crisis. However, there is an important distinction to be made. There are two channels through which a currency devaluation can help a government repay its debt: one, by reducing the real value of the debt owed, and two, by stimulating the economy through adjustments of the terms of trade and therefore raising the government's primary fiscal surpluses. The first channel is relevant to the extent that the debt is denominated in local currency, in which case the real value of debt owed externally is lower. However, in the case of an individual country in the Euro Area like Greece, whose debt is in euros, even an exit from the Euro followed by a currency devaluation would do little to reduce the value of debt owed, as long as the country did not default on its debt contracts by re-denominating its liabilities in local currency.

As for the second channel through which a currency devaluation can help, namely its expansionary effect on economic output, there is a clear substitute in the form of fiscal instruments. In a recent paper, Farhi, Gopinath and Itskhoki (2011) show that "fiscal devaluations" deliver exactly the same real allocations as currency devaluations. Currency devaluations, to the extent that they have expenditure-switching effects, work by deteriorating the terms of trade of the country, that is, by raising the relative price of imported to exported goods. In the absence of a currency adjustment, a combination of an increase in value added taxes (with border adjustment) and a uniform cut in payroll taxes can deliver the same outcomes. An increase in VAT raises the price of imported goods, as foreign firms face a higher tax, and it lowers the price of domestic exports (relative to domestic sales prices), since exports are exempt from VAT. The net effect is a deterioration in the terms of trade equivalent to that following a currency devaluation. To ensure that firms that adjust prices do so similarly across currency and fiscal devaluations, the increase in VAT needs to be accompanied by a cut in payroll taxes. We show that the equivalence of currency and fiscal devaluations is valid in a wide range of environments, with varying degrees of price and wage stickiness and with alternative asset market structures.
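To put the terms-of-trade logic of this answer in symbols (a deliberately stylized one-good illustration that abstracts from price stickiness, from the VAT levied on domestic sales of home goods, and from the full general-equilibrium analysis in Farhi, Gopinath and Itskhoki (2011)): let e be the exchange rate in home currency per unit of foreign currency, P* the foreign producer price of the imported good, and P the home producer price of the exported good. A devaluation of size delta and a border-adjusted VAT increase of rate tau_v move the relative price of imported to exported goods to

\[
\frac{(1+\delta)\,e\,P^{*}}{P}
\qquad\text{versus}\qquad
\frac{(1+\tau_v)\,e\,P^{*}}{P},
\]

so setting tau_v = delta replicates the terms-of-trade deterioration; the uniform payroll-tax cut is what keeps the home producer price P, and firms' incentives to reset it, comparable across the two scenarios.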

The increase in VAT can be viewed as an austerity measure, but it is important to note that when combined with a payroll tax cut its impact on the economy is exactly the same as that following an exchange rate devaluation. In other words, the lack of exchange rate flexibility does not limit the ability of countries in the Euro area to achieve the allocations attainable under a nominal exchange rate devaluation.

ED: In the face of the latest debt developments, central banks have been much more proactive than historically. Do you view this as a good development?
GG: The European Central Bank has certainly been proactive in containing the debt crisis in Europe through direct purchases of troubled sovereign debt. At the same time, it has been cautious in its role of lender of last resort. Its current stance is that it is not willing to monetize the debt of countries at the risk of future inflation. Given that the crisis has spread to Italy and, to a lesser extent, France, it will be interesting to see how the ECB responds.

The bigger question is what a central bank should do. Should it stick to its mandate of targeting inflation, or should it resort to inflating away the debt so as to prevent a default by the country? I do not believe we have the answers here. As recently pointed out by Kocherlakota (2011), if a central bank commits to an inflation target, then this effectively makes the debt of the country real. The country can then be subject to self-fulfilling credit crises along the lines of Calvo (1988) and Cole and Kehoe (2000). If lenders expect that governments will default, they raise the cost of borrowing for governments, who then find it difficult to roll over their debts, and this in turn triggers default. The negative consequences of a default for the economy are potentially large.

In this context, suppose a central bank is willing to inflate away the debt. Does this, first of all, rule out the multiple equilibria described previously? Calvo (1988) evaluates several scenarios, some of which involve multiple equilibria even when default is implicit through inflation. So it is not obvious that having the ability to inflate solves the multiplicity problem. Then there are the relative costs of inflation versus outright default. A point in favor of inflation is that it can be done incrementally, unlike default, which is much more discrete. It is arguable that the costs of a small increase in inflation are low relative to the costs of default. Of course, the level of inflation required can be very high, and this can unanchor inflation expectations, which in turn can have large negative consequences for the economy. So, as I said earlier, it remains to be determined in future research what the virtues are of having a central bank that uses its monetary tools to contain a debt crisis.

ED: While there is always an economic solution to a threat of default, political constraints are very restrictive, as Greece shows. Are we neglecting the political economy aspects?
GG: It is not straightforward even from a purely economic point of view. Should countries default or undertake austerity measures to remain in good credit standing? The answer depends on how costly defaults are, and the empirical evidence on this provides little guidance. In our models we postulate several costs associated with defaults, including loss of access to credit markets (Eaton and Gersovitz (1981), Aguiar and Gopinath (2006)), collateral damage to other reputational contracts, spillovers to banking crises, trade sanctions, etc. See Wright (forthcoming) for a non-technical survey of the sovereign debt and default literature. The empirical evidence on this, as surveyed in Panizza et al. (2009), is, however, not very informative, given the endogeneity of defaults. The ambiguity associated with the optimal response was evident at the start of the debt crisis in Europe, when there was little agreement even among economists about whether Greece should default or not.

Then, as you rightly point out, the political economy aspects add another dimension of complexity. The political constraints have permeated all aspects of the debt crisis. Firstly, the objective function being maximized here is clearly not purely economic but based on the political factors that motivated the formation of the Euro. Secondly, the policy responses have been constrained by politics. For instance, at the start of the crisis it was widely perceived that the reason the Germans and French were interested in bailing out Greece was mainly to protect their banks, which had significant exposure to Greek debt. An alternative, less costly route was to directly bail out German and French banks, but that was viewed as politically impossible to implement. The Euro zone is now back to where it started, with banks needing bailouts, except the crisis has worsened, with the debts of Portugal, Spain and Italy now also facing default pressure. In addition, the political fallout of implementing austerity measures is evident all over Europe.

While there exists a large literature on the political economy of sovereign debt, the focus has largely been on explaining why a country can end up with too much debt. There is more limited work that combines political economy with the possibility of sovereign default. Exceptions are Amador (2008), who examines the role of political economy in sustaining debt in equilibrium, and Aguiar, Amador and Gopinath (2009), who show that the relative impatience of the government, combined with its inability to commit to repayment of debt and to its tax policy, can lead to distortionary investment cycles even in the long run. There is almost no work on the redistributional impact on heterogeneous agents of the decision to default versus undertaking costly austerity measures, something the current crisis has brought to the forefront. This is certainly fertile ground for future research.

References

Aguiar, Mark, Manuel Amador and Gita Gopinath, 2009. “Investment Cycles and Sovereign Debt Overhang,” Review of Economic Studies, Wiley Blackwell, vol. 76(1), pages 1-31.
Aguiar, Mark and Gita Gopinath, 2006. “Defaultable debt, interest rates and the current account,” Journal of International Economics, Elsevier, vol. 69(1), pages 64-83.
Amador, Manuel, 2008. “Sovereign Debt and the Tragedy of the Commons“, Manuscript.
Calvo, Guillermo, 1988. “Servicing the Public Debt: The Role of Expectations,” American Economic Review, American Economic Association, vol. 78(4), pages 647-61.
Cole, Harold and Timothy Kehoe, 2000. “Self-Fulfilling Debt Crises,” Review of Economic Studies, Wiley Blackwell, vol. 67(1), pages 91-116.
Eaton, Jonathan and Mark Gersovitz, 1981. “Debt with Potential Repudiation: Theoretical and Empirical Analysis,” Review of Economic Studies, Wiley Blackwell, vol. 48(2), pages 289-309.
Farhi, Emmanuel, Gita Gopinath and Oleg Itskhoki, 2011. “Fiscal Devaluations“, manuscript.
Kocherlakota, Narayana, 2011. “Central Bank Independence and Sovereign Default“, Speech, September 26, 2011.
Panizza, Ugo, Federico Sturzenegger and Jeromin Zettelmeyer, 2009. “The Economics and Law of Sovereign Debt and Default,” Journal of Economic Literature, American Economic Association, vol. 47(3), pages 651-98.
Wright, Mark, forthcoming. “The Theory of Sovereign Debt and Default“, Encyclopaedia of Financial Globalization.
Dear SED Members and Friends:

The annual SED conference is the defining event of the Society, and I am very pleased to once again report that this year's conference in Ghent, Belgium was a huge success. A clear indicator of the high quality of the conference is the number of submissions, which this year was more than 1200. While the quality of the participants is a key part of this success, the Society is also grateful to those individuals whose hard work made it possible. In particular, I would like to thank the local organizers, Robert Kollmann, Gert Peersman, and Stijn Van Nieuwerburgh, for making sure that everything ran so seamlessly, as well as the two program chairs, Cristina Arellano and Philipp Kircher, for putting together an excellent program.

The 2012 meeting of the SED will be held in Limassol, Cyprus from June 22-24. The team of local organizers consists of Sofronis Clerides, Andros Kourtellos, Alex Michaelides, Chris Pissarides and Marios Zachariadis. They have some very good things in store for us. Paco Buera and Nicola Fuchs-Schündeln have agreed to serve as program chairs, and have already lined up an outstanding slate of plenary speakers: Andy Atkeson, Monika Piazzesi, and Chris Udry.

The 2013 meeting will be held in Seoul, South Korea. This will be the first time that the meetings are held in Asia, and we are very excited about the prospect of bringing the conference to a new continent. The local organizers are Yongsung Chang, Jang-Ok Cho, Sun-Bin Kim, Hyun Song Shin, Kwanho Shin and Tack Yun. I know they will put together something special for us.

I am very happy to report that for the second year in a row the winners of the Nobel Prize in Economics have had a close connection with the SED. Tom Sargent was the inaugural President of the SED, and has long been very active in the Society, having been the program organizer for the initial conference in the “modern era”, held in Minneapolis in 1990, and a plenary speaker in Costa Rica in 2000. Chris Sims has participated in several conferences and was a plenary speaker in Istanbul in 2009. Their contributions to economics have motivated and inspired a great deal of the work that has been presented at the SED over the years.

Finally, since my term as President will end at the conclusion of the 2012 conference, this will be my final installment of the “Letter from the President”. It has been an honor and a pleasure to serve as President of an organization with so much positive energy. It makes the job of President a very easy one. This is made all the more true by the efforts of Ellen McGrattan as Treasurer and Christian Zimmermann as Secretary. I am very grateful to them for all of their help during my term. I would also like to thank Gianluca Violante for his excellent work as Editor of RED. I am very pleased and excited to announce that Ramon Marimon has agreed to serve as the next President of the Society. Ramon has long been an active member of the Society, having been the program chair for the 1995 meetings in Barcelona. I know he will do a great job.

See you all in Cyprus!

Best Regards.

Richard Rogerson, President

Society for Economic Dynamics

Society for Economic Dynamics: Call for Papers, 2012 Meeting

The next annual meeting of the Society for Economic Dynamics will take place in Limassol, Cyprus, from 22-24 June 2012. We are glad to announce keynote speeches by

Andrew Atkeson, UCLA
Monika Piazzesi, Stanford University
Christopher Udry, Yale University

You can now submit your paper for the conference using ConferenceMaker until February 15th, 2012. We are looking forward to many exciting submissions for the academic program!

All information on the conference can be accessed from the SED homepage.

We hope to see you next year in Cyprus. Best wishes,

Paco Buera and Nicola Fuchs-Schündeln
2012 SED Program Chairs

News on Editorial Board Composition

Since my last update two years ago, there have been a number of changes to the Editorial Board. Mark Aguiar, Gadi Barlevy, Michele Boldrin, and Wojciech Olszewski have all stepped down. I am deeply grateful to all of them for their commitment, their professionalism, and the time they devoted to RED. A special thank you, on behalf of the Society, goes to Michele, who served the Review as Associate Editor for 14 years.

Several outstanding colleagues have joined the Board as Associate Editors: Ariel Burstein (UCLA), Jan Eeckhout (UCL), Nir Jaimovich (Duke), Maurizio Mazzocco (UCLA), Virgiliu Midrigan (NYU), and Diego Restuccia (Toronto). Former Associate Editors Marco Bassetto (Chicago Fed) and Martin Schneider (Stanford) have been appointed Editors.

Turnaround Statistics

RED strives to deliver fast and efficient turnaround of manuscripts, without compromising the quality of the refereeing process. Except for desk rejections, virtually all submitted manuscripts receive two referee reports. In 2010, RED received 244 submissions. As of April 2011, 237 of these submissions had already received at least a first decision. The mean processing time from submission to first decision was 10.3 weeks, or 72 days. The table below describes the distribution of first decisions by type (desk reject, reject after review, revise and resubmit, accept).

Distribution of First Decision Times

                     All decisions   Desk rejections   Rejections after review   Revise & resubmit   Accepts
Total                237             104               97                        36                  0
Within 3 months      64%             100%*             41%                       28%                 -
Within 4 months      21%             0%                38%                       36%                 -
Within 5 months      11%             0%                16%                       28%                 -
More than 5 months   4%              0%                5%                        8%                  -

Note: * The average turnaround time for desk rejections was 10 days.

Note that 85 percent of all submissions were dealt with within 4 months, and only 4% of all submissions (roughly 9 papers) took longer than 5 months. Typically, these are difficult decisions, where the Referees are split and the Editor deems it necessary to call upon a third Referee.

Among all the manuscripts with a final disposition in 2010, the acceptance rate was 15%.

Impact Factor

The table below shows the 2-Year ISI Impact Factor (one of the best known and most widely used indicators of a journal's quality) for RED and for a comparison group of journals since 2006. The impact factor of a given journal for year t is the number of citations received in year t by the articles it published in years t-1 and t-2, divided by the number of articles it published in those two years.
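In symbols, with C_t(s) denoting the citations received in year t to articles published in year s and N_s the number of articles published in year s, the standard two-year impact factor is

\[
\text{IF}_t \;=\; \frac{C_t(t-1)+C_t(t-2)}{N_{t-1}+N_{t-2}}.
\]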

2-Year ISI Impact Factor

2010 2009 2008 2007 2006
Review of Economic Dynamics 1.259 0.975 0.954 0.972 0.835
Journal of Economic Growth 2.468 3.083 2.542 2.292 3.248
Journal of Economic Theory 1.112 1.092 1.224 1.353 1.046
Journal of Monetary Economics 1.654 1.755 1.429 1.478 1.379
International Economic Review 1.516 1.030 1.150 0.917 1.031
Journal of Money Credit and Banking 1.150 1.194 1.422 0.700 0.780
Journal of Economic Dynamics and Control 1.117 1.097 0.885 0.703 0.779
Macroeconomic Dynamics 0.763 0.517 0.516 0.453 0.510

The Impact Factor of RED is now, for the first time in our history, above 1, and it continues its steady growth, reflecting the rising quality of the published articles, as well as the fact that the reputation of RED continues to spread well beyond the Society.

Upcoming Special Issues

RED relies predominantly on regular submissions of manuscripts. Throughout our history, we have also published special issues representing the best research at the frontier of topics that are of particular interest to members of the Society. Articles in special issues are usually selected from a broad call for papers, as well as through direct solicitations. They all go through a full refereeing process. Diego Restuccia and Richard Rogerson are editing a special issue on “Misallocation and Productivity,” which is scheduled to appear in the January 2013 issue of RED. Preliminary versions of the selected papers were presented in a mini-conference at the last SED meetings in Ghent.

Gianluca Violante, Managing Editor
Review of Economic Dynamics

Galí’s Unemployment Fluctuations and Stabilization Policies

Unemployment Fluctuations and Stabilization Policies, A New Keynesian Perspective
by Jordi Galí

In business cycle policy analysis, it has become important to understand what makes unemployment fluctuate. One way to generate unemployment in a standard business cycle model is to introduce search frictions à la Mortensen-Pissarides. With this book, based on a series of lectures at the University of Copenhagen, Jordi Galí introduces another way that is applicable to New Keynesian models with staggered wages. It is based on a reinterpretation of the model, so there is no need for new bells and whistles.

In New Keynesian models, the wage mark-up is central in determining fluctuations in employment and output. If one reinterprets wage mark-up shocks as what prevents full employment, then one can infer the unemployment rate from them; the unemployment rate then fluctuates as market power changes, whereas this market power is constant in the Mortensen-Pissarides model.

After setting up the basic model, the book takes a new look at efficiency over the business cycle and redefines the output gap. It then analyzes how monetary policy should be conducted within this model class if the central bank follows a Taylor rule.

Unemployment Fluctuations and Stabilization Policies is published by MIT Press.