Economic Dynamics Newsletter

Volume 11, Issue 1 (November 2009)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

Rasmus Lentz on Heterogeneity in the Labor Market

Rasmus Lentz is Associate Professor of Economics at the University of Wisconsin-Madison. His research interests lie in Labor Economics. Lentz’s RePEc/IDEAS entry.

1. Introduction

In this article, I take the opportunity to describe two projects that I am currently engaged in with Dale T. Mortensen and Jesper Bagger, respectively. They are part of a common research agenda that I view as an exploration of the impact of heterogeneity in the labor market.

I have adopted a view of the labor market where productive resources are allocated to firms subject to frictions. Both workers and firms come in a wide range of productive capabilities. Workers are engaged in a perpetual search for higher wages, and in the process they move between jobs so as to improve their productivity. At any point in time, the empirically observed large number of job-to-job transitions is a (possibly noisy) reflection of the labor market’s reallocation of resources in the direction of greater productivity.

High and low productivity firms co-exist in an uneasy relationship where the high productivity firms are expanding their scale of operations and employment of workers at the expense of the less productive firms, thereby increasing aggregate productivity. The selection into the most productive firms is limited by frictions in the expansion and maintenance of scale; research and development to expand demand for the firm’s output and/or output capacity are costly activities, and so is the effort to hire and retain workers in the labor market.

Wages reflect the productive heterogeneity of both workers and firms and labor market frictions; and, since wages are a primary driver of flows, they also reflect the particular joint distribution of worker and firm types over matches that the market implements. The production function is the key determinant both of the returns to worker and firm heterogeneity and of the allocation of worker and firm types over matches.

The measurement of heterogeneity’s impact on labor market outcomes allows quantification of the returns to, for example, human capital accumulation and job search. It is at the core of the evaluation of policies that affect the labor market’s ability to implement an efficient allocation of workers to firms. Given frictional job search, the labor market may not achieve the efficient match allocation. Policies that affect the strength of frictions can impact aggregate productivity through their impact on allocation. Furthermore, as we emphasize in Lentz and Mortensen (2008a), aggregate labor productivity growth is in part a result of more productive firms making scale investments that crowd out less productive firms. We refer to this channel as the selection effect. In Lentz and Mortensen (2008a), we find that 54% of Danish productivity growth comes from the selection effect. Labor market policy can affect aggregate productivity growth through this channel if it impacts the mechanisms by which the labor market reallocates workers from less to more productive firms. The latter point applies more broadly to any policy that has a disparate impact on the expected profitability of scale investments across firm types.

In my ongoing agenda on firm heterogeneity and productivity with Dale T. Mortensen, we study selection in isolation from allocation. One can well imagine the introduction of allocation considerations into the analysis, although at substantial technical cost. In my work with Jesper Bagger, labor market frictions impact both allocation and selection.

In the following, I first discuss measurement concerns that are common to the two projects. I then discuss my project on firm heterogeneity, productivity, and labor market frictions with Dale T. Mortensen, and finally my project on sorting and wage dispersion with Jesper Bagger.

2. Measurement

The agenda brings with it a measurement challenge that typically becomes a major topic of its own. In both of the projects I discuss below, we are currently in the process of estimation. I use Danish matched employer-employee micro panel data; a similar wealth of data is available for the United States through the US Census. The fundamental observation in the data is a match between a worker ID and a firm ID. Along with this observation follows a record of match-specific observations, such as the start and end dates of the match and wages. The IDs are constant over time, allowing a record of each worker’s match history and the same for each individual firm. In addition to the match core is a record of possibly time-varying worker and firm characteristics. In the case of the Danish data, the entire population of matches is observed from 1980 to date. Needless to say, these are remarkable data, but they remain indirect reflections of the key objects of interest. Hence, I approach the data with the help of explicit model structures. The structure provides a lens through which I view the data. From an estimation point of view, it is a way of stating maintained identifying assumptions.

Estimation is done by indirect inference. If we could estimate by maximum likelihood, we would. However, the models do not produce likelihood expressions that are practically implementable. Furthermore, a maximum likelihood estimation strategy requires constant access to the data throughout the estimation process, and because data access is limited by confidentiality requirements, computation must be done on the servers of the statistical agency that hosts the data. For obvious reasons, statistical agencies typically have computation solutions that are focused on data hosting and access; less attention is paid to raw computation power and clusters for parallel computing. This is not a good environment for numerically intensive model-solving tasks such as those I am facing.

Indirect inference provides a feasible estimation strategy, and it has a few practical advantages as well. First, through the specification of the auxiliary model, it allows a focus on the particular aspects of the data that the model is supposed to speak to. Of course, the freedom in the choice of auxiliary model involves the risk of leaving out relevant information in the data, and so care must be taken in this step. Second, the auxiliary model can be designed so that the statistics involved are not subject to the data confidentiality restrictions and can therefore be extracted from the servers of the statistical agency. Estimation can subsequently be done on the researcher’s preferred computation solution. This is really practical. It also broadens data access to researchers without access to the actual confidential micro data, as long as the specified auxiliary model also provides identification in these other cases.
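To make the procedure concrete, the sketch below shows the shape of an indirect inference loop. It is purely illustrative: `simulate_model` is a toy stand-in for the structural models discussed here, and the auxiliary statistics are hypothetical placeholders for the much richer auxiliary models actually used.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_model(theta, n=5_000, seed=0):
    # Toy stand-in for the structural model: log wages are normal with
    # mean theta[0] and s.d. theta[1]; a job-to-job move occurs with
    # probability theta[2]. The real models are far richer.
    rng = np.random.default_rng(seed)
    return {'logwage': theta[0] + theta[1] * rng.standard_normal(n),
            'j2j': rng.random(n) < theta[2]}

def auxiliary_stats(panel):
    # Auxiliary model: simple moments the structural model must match.
    # These can be designed to clear confidentiality review and leave
    # the statistical agency's servers.
    return np.array([panel['logwage'].mean(),
                     panel['logwage'].std(),
                     panel['j2j'].mean()])

def ii_objective(theta, data_stats, n_sim=10):
    # Average auxiliary statistics over several simulated panels to
    # reduce simulation noise, then form a quadratic distance to the
    # data statistics (identity weighting for simplicity).
    sims = [auxiliary_stats(simulate_model(theta, seed=s))
            for s in range(n_sim)]
    gap = np.mean(sims, axis=0) - data_stats
    return gap @ gap

# data_stats is computed once on the confidential micro data; the
# minimization can then run on any machine.
data_stats = np.array([3.0, 0.5, 0.02])
theta_hat = minimize(ii_objective, x0=np.array([2.5, 0.4, 0.05]),
                     args=(data_stats,), method='Nelder-Mead').x
print('estimated parameters:', theta_hat)
```

In practice, the auxiliary model would include many more statistics, including the reduced-form regressions discussed next.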

Finally, by including existing reduced-form approaches to the question at hand in the auxiliary model, indirect inference provides an easy bridge between the model estimation results and existing reduced-form studies. While the existing studies will typically not be actual reduced forms for the model in question, they nevertheless often contain valuable identifying information. And where they do not help identification, that is an important point as well, since the interest in the reduced form typically comes from the conviction that it identifies key objects of interest.

3. Firm Heterogeneity, Productivity, and the Labor Market

The Danish micro panel data reveal a number of stylized facts: At any point in time, there is great measured labor productivity dispersion across firms. More productive firms pay higher wages; they are larger in terms of output and, to some extent, also in terms of input. It is a general feature of these kinds of data that the relationship between firm output and productivity is robustly positive, but the relationship between productivity and input size can be weak. Labor productivity is persistent but not permanent. Firms tend to be born small and they tend to die small. The distribution of labor force size across firms is heavily skewed, with most firms small and a thick right tail of large firms. Workers tend to move in the direction of higher wages.

In my work with Dale T. Mortensen on firm heterogeneity and productivity, we establish a framework consistent with the data that explicitly connects labor market frictions with the determination of aggregate productivity. The model is a modification of the general equilibrium model of firm dynamics in Lentz and Mortensen (2008a), which builds on Klette and Kortum (2004).

Firms produce intermediary goods. Each intermediary good has a demand that is determined through the aggregation of intermediary goods into a final consumption good. The production of an intermediary good requires labor, and firms differ from each other in their labor productivity. It is assumed that production exhibits constant returns to scale in labor. A firm can expand its scale of operations by undertaking a costly product innovation effort which, according to a stochastic arrival process, yields a new intermediary product, which I will also refer to as a product line. Firms can have multiple product lines.

Labor is obtained from the labor market subject to frictions. Matches are a result of costly search and recruitment effort by both workers and firms. Each product line operates its own hiring process and sets wages according to a Stole and Zwiebel (1996) bargaining mechanism, in which each worker bargains with the firm as if he were the marginal worker. By assumption, worker reallocation between product lines within a firm is subject to the same frictions as those of the overall labor market.

A firm is fully characterized by its labor productivity type and its product portfolio, including the labor force size state of each product line. A firm is born with a single product line as a result of entry effort by a potential entrant. Upon entry, the firm immediately learns its productivity type. A firm exits upon the destruction of its final product. A firm’s type is persistent but need not be permanent.

More productive firms have greater expected profits from scale expansion than less productive firms. Consequently, they choose greater product innovation rates. A product line is destroyed at the same obsolescence rate regardless of its inventor, and so on average more productive firms attain greater scale (number of product lines) than less productive firms.

The differential scale expansion rates across firms are at the core of the selection contribution to productivity. Because more productive firms expand at a greater rate, in steady state they employ a greater share of productive resources than their representation in the birth distribution would suggest. In this case, the selection effect contributes positively to productivity as the more productive firms crowd out the less productive ones. In Lentz and Mortensen (2008a), we estimate a growth version of this model without labor market frictions on Danish firm panel data. We find the selection effect to be a very important source of productivity growth: in the counterfactual where the distribution of productive resources over firm types is set according to the type distribution at birth rather than that in steady state, productivity growth falls to less than half.
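The arithmetic behind "less than half" is immediate, reading the 54% figure above as selection's share s of baseline growth g:

```latex
g_{\text{counterfactual}} = (1 - s)\,g = (1 - 0.54)\,g = 0.46\,g < \tfrac{1}{2}\,g .
```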

The impact of labor market friction on the selection effect turns out to be non-trivial. In the model, firm size is constrained by current product demand and labor market frictions. For some firm types, product demand is the more important constraint; for others, labor market frictions play a greater role. Therefore, a policy that reduces the level of friction will impact the strength of the selection effect through a disparate impact on firm types. As a side note, this is also an example of an environment where it is crucial to correctly model heterogeneity; a representative firm model completely misses this point.

A product line’s labor force size follows a stochastic birth-death process. Workers are added as a result of recruitment activity, and they are lost to exogenous separation and to quits to other firms. Broadly speaking, a more productive firm has a greater return to recruitment than a less productive firm. In combination with a higher match acceptance rate by workers, this gives the more productive firm a greater hiring rate. The more productive firm also tends to lose workers to other firms at a lower rate. Therefore, absent demand constraints, more productive firms will on average be larger. A reduction in labor market friction unambiguously strengthens this pattern, and the impact on the selection effect would be unambiguously positive.
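As a small illustration of such a process, the following sketch simulates one product line's labor force as a continuous-time birth-death chain. The hiring, separation, and quit rates are made-up numbers, not the model's equilibrium objects; the point is only that a higher hiring rate combined with a lower quit rate yields a larger average size.

```python
import numpy as np

def simulate_line(hire_rate, sep_rate, quit_rate, T=1_000.0, seed=0):
    # n(t) rises by one at rate hire_rate (recruitment) and falls by
    # one at rate n * (sep_rate + quit_rate) (exogenous separations
    # plus quits to other firms). Standard Gillespie-style simulation;
    # returns the time-weighted average labor force size.
    rng = np.random.default_rng(seed)
    t, n, time_weighted_size = 0.0, 0, 0.0
    while t < T:
        up, down = hire_rate, n * (sep_rate + quit_rate)
        dt = rng.exponential(1.0 / (up + down))
        time_weighted_size += n * dt
        t += dt
        n += 1 if rng.random() < up / (up + down) else -1
    return time_weighted_size / t

# A more productive line: higher hiring rate, lower quit rate.
print('high type:', simulate_line(hire_rate=2.0, sep_rate=0.05, quit_rate=0.05))
print('low type :', simulate_line(hire_rate=1.0, sep_rate=0.05, quit_rate=0.15))
```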

The interaction with demand constraints complicates matters. If greater productivity does not increase a product line’s frictionless labor demand level much, then it is possible to find an environment where labor force size is primarily demand constrained for high productivity firms and primarily friction constrained at the low productivity end. In such an environment, a friction reduction will do little to labor demand at the high productivity end but will expand labor force size at the low end. This would in isolation weaken the selection effect and could pave the way for the somewhat counterintuitive result that a labor market friction reduction lowers aggregate productivity. I emphasize this complication not because I have any particular reason to believe that it is empirically relevant, nor do I know it to be irrelevant. Rather, I want to highlight that the evaluation of policy instruments and counterfactuals depends crucially on the particular model parameter specification. Hence the obvious value of a fully estimated model.

We are currently working with two versions of the model that differ in the firm’s product pricing mechanism. In one version, product pricing is a result of Bertrand competition between the innovating firm and a competitive fringe that can also produce the product, but at a productivity disadvantage. One interpretation of the competitive fringe is home production within households. We describe this version of the model in detail in Lentz and Mortensen (2008a). In this case, the marginal productivity of a worker within a product line is constant up to the point where product demand is exhausted. As a result, worker reallocation is purely driven by a desire to move up the product line productivity ladder.

In the other version, product pricing is set by monopoly pricing, and the marginal productivity of a worker within a product line is decreasing in the line’s labor force size. In this case, worker reallocation is not just from low to high productivity product lines, but also from well-staffed lines to newly created ones that have yet to staff up. Labor market friction and the degree of substitutability between intermediary products determine the extent to which marginal worker productivity is equalized across product lines.

We are in the process of estimating the model and will subsequently explore, through counterfactuals, the link between labor market policies and aggregate productivity.

4. Sorting, Labor Market Flows and Wages

My project with Jesper Bagger focuses on the measurement of the impact of worker and firm heterogeneity on wages in an environment with labor market frictions and possible sorting. The project is also directly concerned with the measurement of sorting itself.

Worker heterogeneity is modelled as a simple single-dimensional characteristic referred to as skill. Similarly, firms are characterized by a single-dimensional productivity index. For the sake of simplicity, it is assumed that firm production is additively separable across matches. This is clearly an assumption that must be relaxed as the literature moves forward, but for now it allows a relatively simple discussion of the mapping between match production function characteristics and the joint distribution of worker skill and firm productivity over matches. It is assumed that productive heterogeneity is absolute, meaning that for any given firm type, a more skilled worker is more productive than a less skilled worker, and similarly for firm productivity.

There is positive complementarity between worker skill and firm productivity if the match production function is supermodular in skill and productivity. In this case, the sum of output from two matches in which a high skill worker is matched with a high productivity firm and a low skill worker with a low productivity firm exceeds the output sum of the two matches in which the high skill worker is matched with the low productivity firm and the low skill worker with the high productivity firm. There are negative complementarities in production if the production function is submodular; in this case the inequality is reversed, that is, matching opposites produces more than matching likes.
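In symbols, writing f(x, y) for match output with worker skill x and firm productivity y, the positive-complementarity case is a minimal formalization of the example above:

```latex
f(x_H, y_H) + f(x_L, y_L) \;>\; f(x_H, y_L) + f(x_L, y_H)
\qquad \text{for all } x_H > x_L, \; y_H > y_L,
```

with the reverse inequality defining submodularity.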

The studies of the partnership model in Becker (1973), and subsequently with matching frictions in Shimer and Smith (2000), emphasize the connection between production function complementarities and sorting patterns in the equilibrium match distribution. Absent frictions, production function supermodularity (submodularity) induces positive (negative) sorting. Matching frictions complicate matters somewhat: Shimer and Smith (2000) show that log-supermodularity and log-submodularity of the production function are sufficient for positive and negative sorting, respectively. The partnership model takes as given a fixed population of heterogeneous agents who can match with only one agent at a time. In Shimer and Smith (2000), agents cannot search while matched. In their ongoing projects applying the partnership model to the study of wages and sorting, Lise, Meghir and Robin (2008) and de Melo (2008) relax this assumption on the worker side of the market; in his study of replacement hiring and wages, Bobbio (2009) relaxes it on both sides of the market. The partnership model’s assumption of scarcity in matching opportunities is a key source of discriminating behavior: to be accepted, a match opportunity has to compensate the agent for the loss of value from the meeting process while matched. The application of the partnership model to multi-worker firms typically assumes that each position in the firm has its own hiring process that produces meetings that apply only to the position in question.

In Lentz (2010), I set forth an on-the-job search model where sorting can arise as a result of variation in search intensity choices across worker types. I show that if the match production function is supermodular, more skilled workers have relatively greater gains from outside job opportunities; they consequently search harder and, in a stochastic dominance sense, end up matched with more productive firms. That is, positive sorting. In the case where the match production function is submodular, negative sorting obtains.

Unlike in the partnership model, firms are non-discriminatory: they are unconstrained in matching opportunities due to the assumption of constant returns to scale. Each firm has a central hiring process that produces meetings. If a firm decides to match with a worker, it does not reduce the value of the hiring process, because the firm always has room for any additional match opportunity the process produces. If workers were to receive job opportunities at the same rate regardless of skill level and employment state, this environment would produce no sorting regardless of the match production function characteristics; this is exactly the case in Postel-Vinay and Robin (2002). Workers, of course, can only match with one firm at a time, but since they receive job opportunities at the same rate while matched as they do unmatched, they too are non-discriminatory. Allowing workers to choose the amount of resources they dedicate to the creation of meetings, through their choice of search intensity, brings back the possibility of sorting.

The sorting by search intensity model and the partnership model represent two benchmark views of the firm’s role in the determination of sorting in the labor market. In the partnership model, firms are highly discriminatory since, for the purpose of sorting, they are just like single-worker firms. In the sorting by search intensity model, firms are completely non-discriminatory due to the complete absence of match opportunity scarcity. Both views have obvious merit and underscore the importance of a continued push towards a deeper understanding of the firm in labor market research.

In my work with Jesper Bagger, we build an empirical general equilibrium model of sorting and wages based on the sorting by search intensity mechanism. We assume wage bargaining as in Dey and Flinn (2005) and Cahuc, Postel-Vinay and Robin (2006). In this model, workers move up the firm productivity ladder through the offer accumulation process. As the worker accumulates offers, she also accumulates bargaining power, since wages are effectively set through bargaining in which the worker’s outside option is full surplus extraction from the second best job offer received during the employment spell in question. For a given worker-firm match, job separation and worker search intensity are jointly efficient.

The match production function translates worker skill and firm productivity indices into output. It is the production function that determines the productive returns to both worker skill and firm productivity, and it is also a key determinant of allocation patterns. Hence, model estimation can in many ways be thought of as a structural production function estimation. In most employer-employee data sets, the Danish one included, we only have output measures at the firm level, where output is a convolution of the firm productivity effect and the skill effects of all of the firm’s workers. However, the data contain wage observations at the match level. Insofar as wages reflect the characteristics of the match production function, one can use wages for identification of the production function. One notable candidate is the log wage decomposition in Abowd, Kramarz and Margolis (1999), where unobserved individual worker and firm wage effects are identified in addition to the impact of observed worker characteristics. The identification strategy relies on the assumption that log wages are an additive and monotone function of the worker and firm wage effects.
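Schematically, and in the standard notation of that literature rather than anything specific to this project, the decomposition reads

```latex
\log w_{it} = x_{it}'\beta + \theta_i + \psi_{J(i,t)} + \varepsilon_{it},
```

where x_it collects observed worker characteristics, θ_i is the unobserved worker wage effect, ψ_J(i,t) is the wage effect of the firm employing worker i at date t, and ε_it is an error term. The additivity and monotonicity of the right-hand side in the worker and firm effects is precisely the assumption at issue in what follows.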

Both the sorting by search intensity model and the partnership model produce a wage function that relates worker and firm characteristics to average match wage realizations. As it turns out, in contrast to the match production function, the average match wage realization is not a monotone function of worker skill and firm productivity once sorting is allowed. The ongoing sorting and wage projects based on the partnership model find a similar result, although through a substantially different mechanism. Needless to say, this throws quite a lot of sand into the gears of an identification strategy based primarily on something like the Abowd, Kramarz and Margolis (1999) wage decomposition. For example, all of the mentioned projects on sorting and wages emphasize that it is perfectly possible to estimate a negative correlation between worker and firm wage effects in an environment characterized by positive complementarities in production and an associated positive sorting between worker skill and firm productivity in the match distribution.

Eeckhout and Kircher (2009) and de Melo (2008) propose identification strategies for the strength of sorting based on the idea of comparing the variance of worker types within firms to that of the overall population. These approaches are useful advances; however, they do not identify the type of sorting, and the identification of worker types may be quite sensitive to the particular modelling framework at hand. So, more information must be brought to bear. In Bagger and Lentz (2008), we propose an identification strategy that combines the observation of unemployment and employment durations with the observed job flows in and out of firms. The type of sorting is revealed by correlating observed unemployment duration with a measure of a worker’s position in the skill hierarchy: in the model, high skill workers have short durations when there are positive complementarities in production and long durations when the complementarities are negative. The firm productivity hierarchy is identified by observing a firm’s relative inflow of job-to-job transitions against its outflow of job-to-job transitions. This measure stems from an ongoing project with Chris Taber and Rune Vejlin in which job-to-job transitions are viewed as a possibly noisy revelation of a worker’s preferences over the two firms involved. Identification of worker skill is facilitated by the identification of the productivity hierarchy. The use of worker flow and duration data for the purpose of identifying match allocation and heterogeneity is sensible, but the information extracted from flows and durations is typically quite model sensitive. A major challenge moving forward is to formulate identification strategies that are robust across modelling frameworks.
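As a purely hypothetical sketch of the flow-based ranking idea (the column names and the smoothing are my own illustration, not the project's actual implementation), one could rank firms by their poaching inflow relative to their poaching outflow:

```python
import pandas as pd

def rank_firms(j2j: pd.DataFrame) -> pd.Series:
    # j2j: one row per job-to-job transition, with hypothetical
    # columns 'from_firm' and 'to_firm'. A firm that gains many
    # workers through poaching relative to the workers poached away
    # from it sits high in the revealed productivity hierarchy.
    inflow = j2j.groupby('to_firm').size()
    outflow = j2j.groupby('from_firm').size()
    counts = pd.DataFrame({'inflow': inflow, 'outflow': outflow}).fillna(0)
    # Add-one smoothing keeps the ratio finite for small firms.
    score = (counts['inflow'] + 1) / (counts['outflow'] + 1)
    return score.rank(pct=True)

# Toy example: firm C poaches from A and B and loses no one.
toy = pd.DataFrame({'from_firm': ['A', 'B', 'A'],
                    'to_firm':   ['C', 'C', 'B']})
print(rank_firms(toy))
```

The sign of sorting would then be read off the correlation between a worker's unemployment duration and her position in the skill hierarchy, as described above.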

We are currently estimating the model and exploring additional identification strategies.

References

Abowd, John M., Francis Kramarz, and David N. Margolis (1999). “High wage workers and high wage firms,” Econometrica vol. 67(2), pages 251-334.
Becker, Gary S. (1973). “A theory of marriage: Part I,” The Journal of Political Economy vol. 81(4), pages 813-846.
Bobbio, Emmanuele (2009). “Replacement hiring and wages,” Working Paper, University of Wisconsin-Madison.
Cahuc, Pierre, Fabien Postel-Vinay, and Jean-Marc Robin (2006). “Wage bargaining with on-the-job search: Theory and evidence,” Econometrica vol. 74(2), pages 323-364.
de Melo, Rafael Lopes (2008). “Sorting in the labor market: Theory and measurement,” Yale Working Paper.
Dey, Matthew S. and Christopher J. Flinn (2005). “An equilibrium model of health insurance provision and wage determination,” Econometrica vol. 73(2), pages 571-627.
Eeckhout, Jan and Philipp Kircher (2009). “Identifying sorting – in theory,” PIER Working Paper 09-007.
Klette, Tor Jakob and Samuel Kortum (2004). “Innovating firms and aggregate innovation,” Journal of Political Economy vol. 112(5), pages 986-1018.
Lentz, Rasmus (2010). “Sorting by search intensity,” Forthcoming in Journal of Economic Theory.
Lentz, Rasmus and Dale T. Mortensen (2008a). “An empirical model of growth through product innovation,” Econometrica vol. 76(6), pages 1317-1373.
Lentz, Rasmus and Dale T. Mortensen (2008b). “Labor market friction, firm heterogeneity, and aggregate employment and productivity,” Working Paper, University of Wisconsin-Madison.
Lentz, Rasmus, Christopher Taber and Rune Vejlin (2009). “Sources of Wage Inequality,” Working Paper, University of Wisconsin-Madison.
Lise, Jeremy, Costas Meghir, and Jean-Marc Robin (2008). “Matching, sorting, and wages,” University College London Working Paper.
Postel-Vinay, Fabien and Jean-Marc Robin (2002). “Equilibrium wage dispersion with worker and employer heterogeneity,” Econometrica vol. 70(6), pages 2295-2350.
Shimer, Robert and Lones Smith (2000). “Assortative matching and search,” Econometrica vol. 68(2), pages 343-369.
Stole, Lars A. and Jeffrey Zwiebel (1996). “Intra-firm bargaining under non-binding contracts,” Review of Economic Studies vol. 63(3), pages 375-410.

Q&A: Pete Klenow on Price Rigidity

Pete Klenow is Professor of Economics at Stanford University. His research encompasses the measurement of price rigidity and the causes of growth. Klenow’s RePEc/IDEAS entry.
EconomicDynamics: There has been in recent years a rapid expansion of research on price changes at the microeconomic level. What are the main lessons to be learned from this evidence?
Pete Klenow: As with other micro data, heterogeneity jumps out at you in the micro price data. Some prices change constantly (e.g., airfares). Other prices are stuck for a year or more (e.g., movie tickets). And when prices change, they usually do so by big amounts — an order of magnitude more than needed just to keep up with general inflation. So sectors differ in their pricing, and within sectors price changes are idiosyncratic. These are the two most consistent findings from micro price studies in the U.S., Euro Area, and beyond.

It’s not clear which way this heterogeneity cuts for macro price flexibility. On the one hand, the frequent changers could be “waiting” for the stickier prices to fully incorporate a nominal macro shock. On the other hand, some of the stickiest categories don’t really have a business cycle (e.g., medical care). The more flexible categories, such as new cars, tend to be more cyclical.

ED: Macroeconomic research seems to indicate that price rigidities are stronger than at the microeconomic level. How can this be reconciled?
PK: The key is to have micro price changes that do not fully incorporate macro shocks. This can happen if wages or intermediate prices are sticky, or if there is coordination failure among competing sellers who do not synchronize their price changes. But the evidence for such “complementarities” is mixed.

Another route would be some form of sticky information, as advanced by Mankiw and Reis, Sims, or Woodford. Firms may only periodically revise their pricing plans to incorporate macro information, as in Mankiw and Reis. Or they may be too busy paying attention to first order micro and sector shocks to pay much attention to second order macro shocks — that’s Sims’ rational inattention story.

ED: Several OECD countries have recently experienced episodes of deflation. Does this give us new insights into the rigidity of prices?
PK: In the U.S. CPI, at least, the big individual price declines have been concentrated in food and energy. Outside of these categories, the frequency of price increases has actually risen a few percentage points since early 2008 (from about 10% a month to 12% a month). News accounts of steep discounts notwithstanding, sales have not become more common in these data — though it’s possible people are buying greater quantities at sale versus regular prices. These facts would seem to be a challenge for the conventional view of how pricing responds to adverse demand shocks.
ED: Speaking of sales, Eichenbaum, Jaimovich and Rebelo have challenged the measurement of price rigidity by focusing on reference prices excluding sales prices. What is your take on this?
PK: The Eichenbaum, Jaimovich and Rebelo definition of a reference price (the most common price in a quarter) excludes more than just sale prices; it also excludes short-lived regular prices. This difference is quantitatively important: whereas regular (i.e., non-sale) prices in their grocery store chain move every few months, reference prices change about once a year. But excluding sales does a lot too, raising the typical duration between price changes from several weeks to several months.

As recent papers by Nakamura and Steinsson and by Kehoe and Midrigan have emphasized, temporary price discounts are prevalent in food and apparel in the U.S. Sales are much less common in the Euro Area. In their “Billion Prices Project”, Cavallo and Rigobon look at online consumer prices from many countries and find sales are of intermediate importance in a number of Latin American countries.
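For concreteness, here is a minimal sketch of the reference-price construction on a toy weekly price series; the column names are hypothetical, but the definition (the modal price within the quarter) follows Eichenbaum, Jaimovich and Rebelo as described above.

```python
import pandas as pd

def reference_prices(item: pd.DataFrame) -> pd.Series:
    # Reference price: the most common (modal) weekly price within
    # each quarter. This filters out sale prices and short-lived
    # regular prices alike.
    return item.groupby('quarter')['price'].agg(lambda p: p.mode().iloc[0])

def change_frequency(prices: pd.Series) -> float:
    # Share of observations at which the price differs from the
    # previous observation.
    return float((prices != prices.shift()).iloc[1:].mean())

# Toy series: a sale in week 3 and a one-week regular price in week 8
# both vanish at the reference-price level.
item = pd.DataFrame({
    'quarter': [1] * 6 + [2] * 6,
    'price': [2.0, 2.0, 1.5, 2.0, 2.0, 2.0,
              2.2, 2.3, 2.2, 2.2, 2.2, 2.2],
})
print('posted-price change frequency:', change_frequency(item['price']))
print('reference prices by quarter:')
print(reference_prices(item))
```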

I think the jury is still out on whether there is any macro content to sale prices and other short-lived prices. At one extreme, one can imagine the frequency and magnitude of sales responding to unexpected inventory in cyclically important sectors such as autos and apparel. It’s possible most quantities are sold at discounts from list prices in these categories. At the other extreme, sale prices may reflect idiosyncratic price discrimination that is completely unresponsive to the macro environment.

The fact that many sale price episodes end with a return to the previous regular price would seem to limit their macro content — at least for macro shocks that are persistent in levels. This point is stressed by Kehoe and Midrigan, for example.

But in the U.S. CPI, over 40% of sale price episodes give way to new regular prices. Think of clearance prices in apparel and electronics that pave the way for new products with new prices. And most regular price changes are not reversed. For the typical good, two “novel” prices appear each year. This is what Ben Malin and I report in a chapter for the new volume of the Handbook of Monetary Economics.

The bottom line is we really don’t know yet how important short-lived prices are for macro price flexibility.

References

Cavallo, Alberto, 2009. “Scraped Data and Sticky Prices: Frequency, Hazards, and Synchronization,” Unpublished paper, Harvard University.
Eichenbaum, Martin, Nir Jaimovich and Sergio Rebelo, 2008. “Reference Prices and Nominal Rigidities,” NBER Working Paper 13829.
Kehoe, Patrick, and Virgiliu Midrigan, 2008. “Temporary Price Changes and the Real Effects of Monetary Policy,” Staff Report 413, Federal Reserve Bank of Minneapolis.
Klenow, Pete, and Ben Malin, 2010. “Microeconomic Evidence on Price-Setting“, forthcoming, Handbook of Monetary Economics, Elsevier.
Mankiw, Gregory, and Ricardo Reis, 2006. “Pervasive Stickiness,” American Economic Review, American Economic Association, vol. 96(2), pages 164-169, May.
Nakamura, Emi, and Jón Steinsson, 2008. “Five Facts about Prices: A Reevaluation of Menu Cost Models,” The Quarterly Journal of Economics, MIT Press, vol. 123(4), pages 1415-1464.
Sims, Christopher A., 2003. “Implications of rational inattention,” Journal of Monetary Economics, Elsevier, vol. 50(3), pages 665-690, April.
Woodford, Michael, 2008. “Information-Constrained State-Dependent Pricing,” NBER Working Paper 14620.

Dear SED Members and Friends:

Our 2009 summer meeting in Istanbul was a huge success. Our thanks go out to the program chairs, Jesus Fernandez-Villaverde and Martin Schneider, as well as the local organizers, Nezih Guner, Refet Gürkaynak, Selo Imrohoroglu, Gökçe Kolasin and Kamil Yilmaz, for all of their hard work in making it a success. As many of you know, the Istanbul conference coincided with the end of David Levine‘s three-year term as President of the SED. We are all very thankful for his service to the society, which he has left in an even stronger position than it was when he began his term. He oversaw three highly successful meetings (in Prague, Cambridge MA, and Istanbul), initiated a considerable expansion in the size of the summer conference, and took steps to have the SED become an official non-profit organization.

The 2010 meeting of the SED will be held in Montreal, Canada, from July 8-10. Mark Aguiar and Michèle Tertilt have agreed to serve as program chairs, and have already arranged for an outstanding slate of plenary speakers: Susan Athey from Harvard University, Bob Hall from Stanford University, and Ellen McGrattan from the Federal Reserve Bank of Minneapolis. The local coordinators for the conference are Rui Castro and Francisco Ruge-Murcia, and I know they are working hard to ensure another very successful meeting.

The 2011 meetings will be held in Gent, Belgium. The local coordinators are Stijn Van Nieuwerburgh, Gert Peersman and Robert Kollmann. They made a presentation to the Council during the Istanbul meetings, and it looks like we will have a great time in Gent in 2011. While the tradition has been to have the location of the conference alternate between Europe and North America, the more likely pattern in the future is a three-year cycle with two meetings in Europe and one in North America. At this point it looks like the 2012 conference will be held in Cyprus; Alex Michaelides and Chris Pissarides have put together a proposal that looks very impressive.

I also want to take this opportunity to thank some additional individuals for their service to the Society. First, I would like to thank Narayana Kocherlakota for his years of service as Managing Editor of RED. He devoted a great deal of energy to RED, and the journal has done very well under his leadership. I would also like to thank Gianluca Violante, who has taken over from Narayana as Managing Editor of RED; I am confident that RED will continue its upward trajectory under his leadership. Finally, I would like to thank Ellen McGrattan and Christian Zimmermann for their years of hard work as Treasurer and Secretary of the Society, respectively.

See you all in Montreal!

Best Regards,

Richard Rogerson, President
Society for Economic Dynamics

Call for Papers, 2010 Meeting

The 21st annual meeting of the Society for Economic Dynamics will be held July 8-10, 2010 in Montreal, Canada. The plenary speakers are Susan Athey (Harvard), Bob Hall (Stanford), and Ellen McGrattan (Federal Reserve Bank of Minneapolis). The program co-chairs are Mark Aguiar (Rochester) and Michèle Tertilt (Stanford).

The program will be made up of a selection of invited and submitted papers. The Society now welcomes submissions for the Montreal program. Submissions may be from any area in economics. A program committee will select the papers for the conference. The deadline for submissions is February 15, 2010.

The NEP-DGE Blog

NEP-DGE is a mailing list that disseminates, every week, abstracts of and links to new working papers in the field of dynamic general equilibrium theory. Since 1998, it has sent 500 reports covering a total of 4,800 papers, and it currently has 800 subscribers.

NEP-DGE now also has a blog. Each week, a paper is selected from among those announced and put up for discussion. The hope is that this forum for discussion will flourish and serve as an example to other fields (along with two other NEP blogs) that open discussion about research is possible and fruitful. In particular, it could test new ways to approach peer review.