Economic Dynamics Newsletter

Volume 14, Issue 2 (November 2013)

The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.

In this issue

The Macroeconomics of Bargain Hunting, by Greg Kaplan and Guido Menzio

Greg Kaplan is an Assistant Professor of Economics at Princeton University. His research interests are in applied macroeconomics. Guido Menzio is Associate Professor of Economics at the University of Pennsylvania. His focus is on macroeconomic applications of search theory. Kaplan's RePEc/IDEAS profile and Menzio's RePEc/IDEAS profile.

Bargain hunting refers to the activities in which buyers can engage in order to acquire goods and services at lower prices. For example, buyers may acquire goods and services at lower prices by spending more time searching for cheap sellers or by waiting for temporary sales. The goal of our current research is to understand the macroeconomic implications of aggregate changes in the extent of bargain hunting. In particular, we want to understand how aggregate changes in the extent of bargain hunting affect the pricing strategy of sellers, the incentives of sellers to enter or expand their presence in the product market and, in turn, their demand for the inputs used in production and retailing. Given the availability of pricing data for consumption goods, our research focuses on the effect of bargain hunting in the retail market. Yet, the mechanisms highlighted in our research are likely to operate at any other stage of the production chain.

1. Different people, different prices

The first step in our analysis is to identify the types of buyers who pay low prices in the consumption goods market. Aguiar and Hurst (2007) documented that older people pay lower prices than younger people. For example, they showed that people aged 60 to 64 pay roughly 3% less for the same consumption goods than people aged 25 to 29. In Kaplan and Menzio (2013a), we use the Kilts-Nielsen Consumer Panel (KNCP) to document that, among the working-age population, the non-employed pay lower prices than the employed. Indeed, we found that the non-employed pay between 1 and 5% less than the employed for the same consumption goods. The smaller difference obtains when we define goods at the level of the barcode. The larger difference obtains when we define goods by their characteristics, rather than by their barcode.

2. Price dispersion: evidence and theory

The second step in our analysis is to understand why sellers charge different prices for identical goods, as this determines the cause of price differentials between different types of buyers, as well as the effect on sellers of changes in the distribution of buyers’ types. In Kaplan and Menzio (2013a), we use KNCP to measure the extent of price dispersion for identical goods within the same city and during the same quarter, and to decompose price dispersion into three different sources, each related to alternative theories of price dispersion. To illustrate the spirit of our decomposition, consider two bottles of ketchup that sold at different prices in the same market and during the same period of time. First, the two bottles may have sold at different prices because one was sold at an expensive store (i.e. a store where goods are on average expensive) and the other was sold at a cheap store. We refer to this source of price dispersion as the store component. Second, the two bottles of ketchup may have sold at different prices because, although they were sold at equally expensive stores, one was sold at a store where ketchup is expensive (relative to the average price of that store) and the other was sold at a store where ketchup is cheap (relative to the average price of that store). We refer to this source of price dispersion as the store-specific good component. Third, the two bottles of ketchup may have sold at different prices because, although they were sold at the very same store, one was sold at a high price and the other was sold at a low price perhaps because of a temporary sale. We refer to this source of price dispersion as the transaction component.
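
To fix ideas, the decomposition can be written in a stylized way (the notation below is purely illustrative and abstracts from the exact normalizations used in the paper). Let p_ijt denote the normalized (log) price of good i in transaction t at store j within a given market and quarter:

```latex
p_{ijt} \;=\; \underbrace{\bar{p}_{j}}_{\text{store component}}
\;+\; \underbrace{\bigl(\bar{p}_{ij}-\bar{p}_{j}\bigr)}_{\text{store-specific good component}}
\;+\; \underbrace{\bigl(p_{ijt}-\bar{p}_{ij}\bigr)}_{\text{transaction component}},
```

where \bar{p}_{j} is the average price level of store j and \bar{p}_{ij} is the average price of good i at store j. If the three terms are approximately orthogonal, the variance of prices for a given good splits into the sum of the three corresponding variances, which is the basis for the variance shares reported below.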

The decomposition of price dispersion allows us to assess the relative importance of four popular theories of price dispersion: (i) amenities, (ii) heterogeneous monopolists, (iii) intertemporal price discrimination and (iv) search frictions. According to the amenity theory of price dispersion, the product market is perfectly competitive and, yet, identical goods trade at different prices only because they are bundled with different amenities (e.g., location of the store, customer care provided at the store, etc.). For example, a bottle of ketchup will be expensive at a store with a parking lot reserved for its customers and will be cheap at a store without a reserved parking lot. Since amenities are generally specific to a store, rather than to a particular good or transaction, the amenity theory implies that, for any good, most of the dispersion in prices should be accounted for by variation in the expensiveness of the stores at which the good is sold. That is, the store component should account for most of price dispersion.

According to the monopoly theory of price dispersion, identical goods are sold at different prices because they are traded by local monopolists who face different marginal costs or different demand elasticities (see, e.g., Golosov and Lucas 2007). For example, a monopolist who faces a relatively inelastic demand for ketchup will charge a higher price for the same bottle of ketchup than a monopolist who faces a relatively elastic demand. As long as the differences in marginal costs and demand elasticities between stores are correlated across goods, the monopoly theory implies that most of the dispersion in prices for any particular good should be accounted for by variation in the expensiveness of the stores at which the good is sold. That is, the store component should again account for most of price dispersion.

According to the theory of intertemporal price discrimination, identical goods are sold at different prices because local monopolists vary their price over time in order to discriminate between different types of buyers (see, e.g., Conlisk et al. 1984, Sobel 1984 or Albrecht et al. 2013). For example, consider a monopolist facing a constant inflow of low valuation buyers who have a high intertemporal elasticity of substitution for consumption and a flow of high valuation buyers who cannot substitute consumption intertemporally. The monopolist will find it optimal to follow a pricing cycle. In particular, the monopolist will keep the price relatively high for several periods. At this relatively high price, high valuation buyers will purchase the good, while low valuation buyers will wait. Eventually, the number of low valuation buyers will be sufficiently large to induce the monopolist to lower the price for one period and sell to all of them. According to the theory of intertemporal price discrimination, the variation in prices for the same good should be accounted for by variation in the price at which the good is sold at the same store on different days during the same quarter. That is, the transaction component should account for most of price dispersion.
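
A back-of-the-envelope example conveys the logic of such sale cycles (the numbers are ours, purely for illustration, and abstract from discounting and from the full optimality analysis in the papers cited above). Suppose one high-valuation buyer (willing to pay v_H = 10) and one low-valuation buyer (willing to pay v_L = 4) arrive each period, and only the low-valuation buyers are willing to wait for a sale:

```latex
\text{revenue per cycle of length } T:\quad
\underbrace{(T-1)\,v_H}_{\text{regular-price periods}}
\;+\;
\underbrace{(1+T)\,v_L}_{\text{sale period: 1 high type and } T \text{ accumulated low types}},
```

which beats never holding a sale (revenue T v_H per cycle) whenever (1+T) v_L > v_H, that is, once enough low-valuation buyers have accumulated. With these numbers, a sale every other period already pays off: 1 x 10 + 3 x 4 = 22 > 2 x 10 = 20.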

The presence of search frictions in the product market can simultaneously explain why buyers do not arbitrage away price differences and why sellers choose to charge different prices (see, e.g., Burdett and Judd 1983). Consider a market populated by a large number of sellers and buyers. Due to search frictions, an individual buyer cannot purchase from every seller in the market, but only from a subset of sellers. In particular, some buyers are able to purchase from only one seller (uncontested buyers), while other buyers are able to purchase from multiple sellers (contested buyers). In this environment, if all sellers charged the same price, an individual seller could increase its profits by posting a slightly lower price and selling not only to the uncontested buyers it meets, but also to the contested ones. Hence, in equilibrium, identical sellers must randomize over the price of the good and price dispersion obtains. Depending on the pattern of randomization across goods and days, the search theory of price dispersion may generate variation that is accounted for by the store component (if sellers randomize in a way that is strongly correlated across goods), by the store-good component (if sellers randomize independently across goods) and by the transaction component (if sellers randomize independently across goods and days). What distinguishes the search theory of price dispersion from other theories is the fact that it can generate dispersion in the price of a good that is sold at stores that are on average equally expensive.
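
In the canonical Burdett and Judd (1983) environment, this logic yields a sharp characterization (a standard textbook statement of the model, in illustrative notation; the model in Kaplan and Menzio, 2013b, is richer). Let a fraction \alpha_1 of buyers sample one seller and a fraction \alpha_2 = 1 - \alpha_1 sample two, let the buyers' common valuation be v, and normalize the marginal cost to zero. Equal profits across all prices on the support of the equilibrium price distribution F require

```latex
\pi(p) \;=\; p\,\bigl[\alpha_1 + 2\alpha_2\,(1-F(p))\bigr] \;=\; \alpha_1 v
\qquad\Longrightarrow\qquad
F(p) \;=\; 1 - \frac{\alpha_1}{2\alpha_2}\,\frac{v-p}{p},
```

with support [\alpha_1 v/(\alpha_1 + 2\alpha_2), v]. As the share of contested buyers \alpha_2 rises, the price distribution shifts toward lower prices, which is precisely the sense in which more bargain hunting makes the product market more competitive.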

Empirically, we find that the store component accounts for only 10% of the variance of transaction prices for the same good in a given city and quarter. This finding suggests that the amenity and monopoly theories of price dispersion are unlikely to be quantitatively very important. In contrast, the store-good component accounts for 35 to 45% of the variance of prices, while the transaction component accounts for the remaining variance. These findings suggest that search frictions and intertemporal price discrimination are the most likely causes of price dispersion. Importantly, both the search and intertemporal price discrimination theories imply that the types of buyers who pay lower prices (e.g., the old and the unemployed) achieve these lower prices by being more intense bargain hunters, that is, by devoting more time and effort to visiting multiple stores, seeking out temporary sales or finding close substitutes for goods. Both theories imply that an increase in the fraction of bargain hunters will induce sellers to lower their prices without any concurrent change in the costs of producing and retailing goods.

3. Bargain hunting and shopping externalities

In Kaplan and Menzio (2013b), we combine a search-theoretic model of the product market with a search-theoretic model of the labor market to understand the general equilibrium implications of aggregate changes in bargain hunting brought about by changes in the fractions of employed and unemployed buyers. In particular, we model the product market as in Burdett and Judd (1983). The equilibrium of this market determines the extent of price dispersion and, given the difference in search intensity between employed and unemployed buyers, the extent to which unemployed buyers pay lower prices. We model the labor market as in Mortensen and Pissarides (1994). The equilibrium of this market determines the fraction of workers who are unemployed and the difference in income between employed and unemployed workers.

Our main theoretical finding is that changes in the composition of buyers can have such a strong effect on sellers as to generate multiple equilibria. The finding is intuitive. When a firm expands its workforce, it creates external effects on other firms. On the one hand, the expanding firm increases the tightness of the labor market and hence makes it more costly for other firms to hire additional workers. We refer to this effect as the congestion externality of employment. On the other hand, the expanding firm tilts the composition of buyers towards types who have more income to spend and less time to search for low prices (i.e. employed buyers). This increases other firms’ demand and market power, and hence, increases their value from expanding their presence in the product market, which entails hiring additional workers. We refer to these effects as the shopping externalities of employment. If the differences in income and/or shopping time between employed and unemployed buyers are sufficiently large, the shopping externalities dominate the congestion externality, employment decisions of different firms become strategic complements and multiple rational expectations equilibria obtain. Different equilibria are associated with different expectations about future unemployment. Yet, in all equilibria, expectations are rational, in the sense that the realized path of unemployment coincides with the expected one.

Our main quantitative finding is that, when calibrated to the observed differences in shopping behavior between the employed and the unemployed, the economy features multiple rational expectations equilibria. In particular, we calibrate the model economy to match the empirical differences between unemployed and employed workers in shopping time (+25%), prices paid (-2%) and expenditures (-15%), as well as the rates at which workers transition between unemployment and employment. Given these calibration targets, the economy has three steady states: one with an unemployment rate of approximately 5%, one with an unemployment rate of approximately 9% and one with no economic activity. Moreover, for any initial unemployment rate, there are rational expectations equilibria leading to each one of the three steady states. Multiplicity obtains because the firms' value from entering or scaling up their presence in the product market turns out to be fairly sensitive to the unemployment rate. Interestingly, this happens not so much because unemployed buyers spend less than employed buyers, but mainly because unemployed buyers search more than employed buyers. That is, the firms' value from participating in the product market is quite sensitive to the unemployment rate because the unemployment rate has a rather strong effect on the competitiveness of the product market.

The existence of multiple rational expectations equilibria suggests that economic fluctuations may be due not only to changes in fundamentals (i.e., technology, preferences or policy), but also to changes in the agents' expectations about future unemployment. We formalize this idea by developing a version of the calibrated model in which the agents' expectations about long-run unemployment follow a 2-state Markov switching process. In the optimistic state, agents expect to reach the steady state with the lowest unemployment rate (5%). In the pessimistic state, agents expect to reach the steady state with the intermediate unemployment rate (9%). Shocks to the agents' expectations generate fluctuations in unemployment, vacancies and job-finding rates that are large compared to those generated by productivity shocks. Moreover, unlike productivity shocks, shocks to the agents' expectations generate large, procyclical fluctuations in the value of firms and rather small, countercyclical fluctuations in real labor productivity. Interestingly, the response of the economy to a negative expectation shock looks a lot like the behavior of the US economy during the Great Recession and its aftermath. This finding suggests the possibility that the financial crisis may have acted as a coordination device, focusing the agents' expectations about future unemployment on the pessimistic steady state.
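
For readers who want to see the mechanics of the driving process, a two-state Markov regime for expectations is straightforward to simulate. The sketch below is purely illustrative: the transition probabilities and the mapping from regimes to expected long-run unemployment are made-up placeholders, not the calibrated objects in Kaplan and Menzio (2013b).

```python
import numpy as np

# Hypothetical 2-state Markov switching process for expectations about
# long-run unemployment: state 0 = "optimistic" (5% steady state),
# state 1 = "pessimistic" (9% steady state). Probabilities are illustrative.
P = np.array([[0.99, 0.01],   # stay optimistic / switch to pessimistic
              [0.02, 0.98]])  # switch to optimistic / stay pessimistic
expected_u = np.array([0.05, 0.09])   # expected long-run unemployment by state

rng = np.random.default_rng(seed=0)
T = 400                                # number of simulated quarters
states = np.empty(T, dtype=int)
states[0] = 0
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])

path = expected_u[states]              # expectation path fed to agents
print("share of quarters in the pessimistic regime:", (states == 1).mean())
```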

Our theory of multiple equilibria is novel. Unlike Benhabib and Farmer (1994), our theory does not require increasing returns to scale in production. Unlike Diamond (1982), our theory does not require increasing returns in matching. Unlike Heller (1986), our theory does not hinge on demand externalities. Instead, our theory of multiple equilibria builds on two simple mechanisms. The first mechanism links unemployment, search and competition: when unemployment is lower, buyers spend less time searching for low prices and, in doing so, they make the product market less competitive and drive prices up. The second mechanism links revenues, entry and labor demand: when revenues are higher because of either higher demand or higher prices, new firms want to enter the product market, established firms want to scale up their presence in the product market and, since both activities require some labor, labor demand increases.

4. Directions for future research

The fact that buyers can affect the prices they pay for goods and services by engaging in bargain hunting has profound implications for the behavior of the macroeconomy. Our current work shows that, because of the differences in the amount of time spent shopping by employed and unemployed buyers, the unemployment rate has a strong effect on the competitiveness of the product market, on the number and size of sellers and, in turn, on labor demand. Indeed, the effect of the unemployment rate is so strong as to create multiple equilibria and, hence, open the door to non-fundamental shocks. Our current work provides just one example of the effect of bargain hunting on the macroeconomy. For instance, it would be interesting to study the macroeconomic effects of changes in bargain hunting brought about by changes in the age distribution rather than by changes in unemployment. Similarly, it would be interesting to study the macroeconomic effects of aggregate changes in bargain hunting in markets for intermediate goods rather than in the consumption goods market.

References

Mark Aguiar and Erik Hurst, 2007. “Life-Cycle Prices and Production,” American Economic Review, vol. 97(5), pages 1533-1559, December.

James Albrecht, Fabien Postel-Vinay and Susan Vroman, 2013. “An Equilibrium Search Model Of Synchronized Sales,” International Economic Review, vol. 54(2), pages 473-493.

Jess Benhabib and Roger E. A. Farmer, 1994. “Indeterminacy and Increasing Returns,” Journal of Economic Theory, vol. 63(1), pages 19-41, June.

Kenneth Burdett and Kenneth Judd, 1983. “Equilibrium Price Dispersion,” Econometrica, vol. 51(4), pages 955-69, July.

John Conlisk, Eitan Gerstner, and Joel Sobel, 1984. “Cyclic Pricing by a Durable Goods Monopolist,” The Quarterly Journal of Economics, vol. 99(3), pages 489-505, August.

Peter A. Diamond, 1982. “Aggregate Demand Management in Search Equilibrium,” Journal of Political Economy, vol. 90(5), pages 881-94, October.

Mikhail Golosov and Robert E. Lucas Jr., 2007. “Menu Costs and Phillips Curves,” Journal of Political Economy, vol. 115, pages 171-199.

Walter Heller, 1986. “Coordination Failure Under Complete Markets with Applications to Effective Demand.” Equilibrium analysis: Essays in Honor of Kenneth J. Arrow, vol. 2, pages 155-75.

Greg Kaplan and Guido Menzio, 2013a. Deconstructing Price Dispersion. Manuscript, Princeton University and University of Pennsylvania.

Greg Kaplan and Guido Menzio, 2013b. “Shopping Externalities and Self-Fulfilling Unemployment Fluctuations,” NBER Working Paper 18777.

Dale T. Mortensen and Christopher A. Pissarides, 1994. “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, vol. 61(3), pages 397-415, July.

Joel Sobel, 1984. “The Timing of Sales,” Review of Economic Studies, vol. 51(3), pages 353-68, July.

Q&A: James Bullard on policy and the academic world

James Bullard is President and CEO of the Federal Reserve Bank of St. Louis. His research focuses on learning in macroeconomics. Bullard's RePEc/IDEAS entry.

EconomicDynamics: You have talked about how you want to connect the academic world with the policy world. The research world is already working on some of these questions. Do you have any comments on that?
James Bullard: I have been dissatisfied with the notion that has evolved over the last 25 or 30 years that it was okay to allow a certain group of economists to work on really rigorous models and do the hard work of publishing in journals and then have a separate group that did policymaking and worried about policymaking issues. These two groups often did not talk to each other, and I think that that is a mistake. It is something you would not allow in other fields. If you are going to land a man on Mars, you are going to want the very best engineering. You would not say that the people who are going to do the engineering are not going to talk to the people who are strategizing about how to do the mission.

An important part of my agenda is to force discussion between what we know from the research world and the pressing policy problems that we face and try to get the two to interact more. I understand about the benefits of specialization, which is a critical aspect of the world, but still I think it is important that these two groups talk to each other.

ED: Is there a place in policy for the economic models of the “ivory tower”?
JB: I am not one who thinks that the issues discussed in the academic journals are just navel gazing. Those are our core ideas about how the economy works and how to think about the economy. There are no better ideas. That is why they are published in the leading journals. So I do not think you should ignore those. Those ideas should be an integral part of the thinking of any policymaker. I do not think that you should allow policymaking to be based on a sort of second-tier analysis. I think we are too likely to do that in macroeconomics compared to other fields.
ED: Why do you think that is?
JB: I think people have some preconceptions about what they think the best policy is before they ever get down to any analysis about what it might be. I understand people have different opinions, but I see the intellectual marketplace as the battleground where you hash that out.

I do not think the answers are at all obvious. A cursory reading of the literature shows you that there are many, many smart people involved. They have thought hard about the problems that they work on, and they have spent a lot of time even to eke out a little bit of progress on a particular problem. The notion that all those thousands of pages could be summed up in a tweet or something like that is kind of ridiculous. These are difficult issues, and that is why we have a lot of people working on them under some fair amount of pressure to produce results.

Sometimes I hear people talking about macroeconomics, and they think it is simple. It is kind of like non-medical researchers saying, “Oh, if I were involved, I would be able to cure cancer.” Well fine, you go do that and tell me all about it. But the intellectual challenge is every bit as great in macroeconomics as it is in other fields where you have unsolved problems. The economy is a gigantic system with billions of decisions made every day. How are all these decisions being made? How are all these people reacting to the market forces around them and to the changes in the environment around them? How is policy interacting with all those decisions? That is a hugely difficult problem, and the notion that you could summarize that with a simple wave of the hand is silly.

ED: Do you remember the controversy, the blogosphere discussion, that macroeconomics has been wrong for two decades and all that criticism? Do you have any comments on that?
JB: I think the crisis emboldened people that have been in the wilderness for quite a while. They used the opportunity to come out and say, "All the stuff that we were saying that was not getting published anywhere is all of a sudden right."

My characterization of the last 30 years of macroeconomic research is that the Lucas-Prescott-Sargent agenda completely smoked all rivals. They, their co-authors, friends, and students carried the day by insisting on a greatly increased level of rigor, and there was a tremendous amount of just rolling up their sleeves and getting into the hard work of actually writing down more and more difficult problems, solving them, learning from the solution and moving on to the next one. Their victory remade the field and disenfranchised a bunch of people. When the financial crisis came along, some of those people came back into the fray, and that is perfectly okay. But there is still no substitute for heavy technical analysis to get to the bottom of these issues. There are no simple solutions. You really have to roll up your sleeves and get to work.

ED: What about the criticism?
JB: I think one thing about macroeconomics is that because everyone lives in the economy and they talk to other people who live in the economy, they think that they have really good ideas about how this thing works and what we need to do. I do not begrudge people their opinions, but when you start thinking about it, it is a really complicated problem. I love that about macroeconomics because it provides for an outstanding intellectual challenge and great opportunities for improvement and success. I do not mind working on something that is hard.

But everyone does seem to have an opinion. In medicine you do see some of that: people think they know better than the doctors and they think they are going to self-medicate because their theory is the right one, and the doctors do not know what they are doing. Steve Jobs reportedly thought like this when he was sick. But I think you see less of this type of attitude in the medical arena than you do in economics. That is distressing for us macroeconomists, but maybe we can improve that going forward.

ED: What do you think about the criticism of economists not being able to forecast or to see the financial crisis? Do you have any thoughts on that?
JB: One of the main things about becoming a policymaker is the juxtaposition between the role of forecasting and the role of modeling to try to understand how better policy can be made.

In the policy world, there is a very strong notion that if we only knew the state of the economy today, it would be a simple matter to decide what the policy should be. The notion is that we do not know the state of the system today, and it is all very uncertain and very hazy whether the economy is improving or getting worse or what is happening. Because of that, the notion goes, we are not sure what the policy setting should be today. So, the idea is that the state of the system is very hard to discern, but the policy problem itself is often disarmingly simple. What is making the policy problem hard is discerning the state of the system. That kind of thinking is one important focus in the policy world.

In the research world, it is just the opposite. The typical presumption is that one knows the state of the system at a point in time. There is nothing hazy or difficult about inferring the state of the system in most models. However, the policy problem itself is often viewed as really difficult. It might be the solution to a fairly sophisticated optimization problem that carefully weighs the effects of the policy choice on the incentives of households and firms in a general equilibrium context. That kind of attitude is just the opposite of the way the policy world approaches problems. I have been impressed by this juxtaposition since I have been in this job.

Now, forecasting itself I think is overemphasized in the policy world because there probably is an irreducible amount of ambient noise in macroeconomic systems, which means that one cannot really forecast all that well even in the best of circumstances. We could imagine two different economies, the first of which has a very good policy and the second of which has a very poor policy. In both of these economies it may be equally difficult to forecast. Nevertheless, the first economy, by virtue of its much better policy, would enjoy much better outcomes for its citizens than the economy that had the worse policy. The ability to forecast does not really have much to do with the process of adopting and maintaining a good policy.

The idea that the success of macroeconomics should be based on forecasting is a holdover from an earlier era in macroeconomics, which Lucas crushed. He said the goal of our theorizing about the economy is to understand better what the effects of our policy interventions are, not necessarily to improve our ability to forecast the economy on a quarter-to-quarter or year-to-year basis.

What we do want to be able to forecast is the effect of the policy intervention, but in most interesting cases that would be a counterfactual. We cannot just average over past behavior in the economy, which has been based on a previous policy, and then make a coherent prediction about what the new policy is going to bring in terms of consumption and investment and other variables that we care about. It is a different game altogether than the sort of day-to-day forecasting game that goes on in policy circles and financial markets.

Of course it is important to try to have as good a forecast as you can have for the economy. It is just that I would not judge success on, say, the mean square error of the forecast. That may be an irreducible number given the ambient noise in the system.

One very good reason why we may not be able to reduce the amount of forecast variance is that if we did have a good forecast, that good forecast would itself change the behavior of households, businesses, and investors in the economy. Because of that, we may never see as much improvement as you might hope for on the forecasting side. The bottom line is that better forecasting would be welcome but it is not the ultimate objective.

We [central banks] do not really forecast anyway. What we do is we track the economy. Most actual forecasting day to day is really just saying: What is the value of GDP last period or last quarter? What is it this quarter? And what is it going to be next quarter? Beyond that we predict that it will go back to some mean level which is tied down by longer-run expectations. There is not really much in the way of meaningful forecasting about where things are going to go. Not that I would cease to track the economy–I think you should track the economy–but it is not really forecasting in the conventional sense.

The bottom line is that improved policy could deliver better outcomes and possibly dramatically better outcomes even in a world in which the forecastable component of real activity is small.

ED: Can the current crisis be blamed on economic modeling?
JB: No. I think that this is being said by people who did not spend a lot of time reading the literature. If you were involved in the literature as I was during the 1990s and 2000s, what I saw was lots of papers about financial frictions, about how financial markets work and how financial markets interact with the economy. It is not an easy matter to study, but I think we did learn a lot from that literature. It is true that that literature was probably not the favorite during this era, but there was certainly plenty going on. Plenty of people did important work during this period, which I think helped us and informed us during the financial crisis on how to think about these matters and where the most important effects might come from. I think there was and continues to be a good body of work on this. If it is not as satisfactory as one might like it to be, that is because these are tough problems and you can only make so much progress at one time.

Now, we could think about where the tradeoffs might have been. I do think that there was, in the 1990s in particular, a focus on economic growth as maybe the key phenomenon that we wanted to understand in macroeconomics. There was a lot of theorizing about what drives economic growth via the endogenous growth literature. You could argue that something like that stole resources away from people who might have otherwise been studying financial crises or the interaction of financial systems with the real economy, but I would not give up on those researchers who worked on economic growth. I think that was also a great area to work on, and they were right in some sense that in the long run what you really care about is what is driving long-run economic growth in large developed economies and also in developing economies, where tens of millions of people can be pulled out of poverty if the right policies can be put in place.

So to come back later, after the financial crisis, and say, in effect, “Well those guys should not have been working on long-run growth; they should have been working on models of financial crisis,” does not make that much sense to me and I do not think it is a valid or even a coherent criticism of the profession as a whole. In most areas where researchers are working, they have definitely thought it through and they have very good ideas about what they are working on and why it may be important in some big macro sense. They are working on that particular area because they think they can make their best marginal contribution on that particular question.

That brings me to another related point about research on the interaction between financial markets and the real economy. One might feel it is a very important problem and something that really needs to be worked on, but you also might feel as a researcher, “I am not sure how I can make a contribution here.” Maybe some of this occurred during the two decades prior to the financial crisis.

On the whole, at least from my vantage point (monetary theory and related literature) I saw many people working on the intersection between financial markets and the real economy. I thought they did make lots of interesting progress during this period. I do think that the financial crisis itself took people by surprise with its magnitude and ferocity. But I do not think it makes sense to then turn around and say that people were working on the wrong things in the macroeconomic research world.

ED: There is a tension between structural models that are built to understand policy and statistical models that focus on forecasting. Do you see irrevocable differences between these two classes of models?
JB: I do not see irrevocable differences because there is no alternative to structural models. We are trying to get policy advice out of the models; at the end of the day, we are going to have to have a structural model. We have learned a lot about how to handle data and how to use statistical techniques for many purposes in the field, and I think those are great advances. These days you see a lot of estimation of DSGE models, so that is a combination of theorizing with notions of fit to the data. I think those are interesting exercises.

I do not really see this as being two branches of the literature. There is just one branch of the literature. There may be some different techniques that are used in different circumstances. Used properly, you can learn a lot from purely empirical studies because you can simply characterize the data in various ways and then think about how that characterization of the data would match up with different types of models. I see that process as being one that is helpful. But it has to be viewed in the context that ultimately we want to have a full model that will give you clear and sharp policy advice about how to handle the key decisions that have to be made.

ED: What are policy makers now looking for from the academic modelers?
JB: I have argued that the research effort in the U.S. and around the world in economics needs to be upgraded and needs to be taken more seriously in the aftermath of the crisis. I think we are beyond the point where you can ask one person or a couple of smart people to collaborate on a paper and write something down in 30 pages and make a lot of progress that way. At some point the profession is going to have to get a lot more serious about what needs to be done. You need to have bigger, more elaborate models that have many important features in them, and you need to see how those features interact and understand how policy would affect the entire picture.

A lot of what we do in the published literature and in policy analysis is sketch ingenious but small arguments that might be relevant for the big elephant that we cannot really talk about because we do not have a model of the big elephant. So we only talk about aspects of the situation, one aspect at a time. Certainly, being very familiar with research myself and having done it myself, I think that approach makes a great deal of sense. As researchers, we want to focus our attention on problems that can be handled and that one can say something about. That drives a lot of the research. But in the big picture, that is not going to be enough in the medium run or the long run for the nation to get a really clear understanding of how the economy works and how the various policies are affecting the macroeconomic outcomes.

We should think more seriously about building larger, better, more encompassing types of models that put a lot of features together so that we can understand the relative magnitudes of various effects that we might think are going on all at the same time. We should also do this within the DSGE context, in which preferences are well specified and the equilibrium is well defined. Therein lies the conflict: to get to big models that are still going to be consistent with micro foundations is a difficult task. In other sciences you would ask for a billion dollars to get something done and to move the needle on a problem like this. We have not done that in economics. We are way too content with our small sketches that we put in our individual research papers. I do not want to denigrate that approach too much because I grew up with that and I love that in some sense, but at some point we should get more serious about this. One reason why this has not happened is that there were attempts in the past (circa 1970) to try to put together big models, and they failed miserably because they did not have the right conceptual foundations about how you would even go about doing this. Because they failed, I think that has made many feel like, “Well, we are not going to try that again.” But just because it failed in the past does not mean it is always going to fail. We could do much better than we do in putting larger models together that would be more informative about the effects of various policy actions without compromising on our insistence that our models be consistent with microeconomic behavior and the objects that we study are equilibrium outcomes under the assumptions that we want to make about how the world works.

ED: Can you perhaps talk about some cutting edge research? You have made some points on policy based on cutting edge research.
JB: One of the things that struck me in the research agenda of the last decade or more is the work by Jess Benhabib, Stephanie Schmitt-Grohé and Martin Uribe on what you might think of as a liquidity trap steady state equilibrium, which is routinely ignored in most macroeconomic models. But they argue it would be a ubiquitous feature of monetary economies in which policymakers are committed to using Taylor-type rules and in which there is a zero bound on nominal interest rates and a Fisher relation. Those three features are basically in every model. I thought that their analysis could be interpreted as being very general, plus you have a really large economy, the Japanese economy, which seems to have been stuck in this steady state for quite a while.

That is an example of a piece of research that influenced my thinking about how we should attack policy issues in the aftermath of the crisis. I remain disappointed to this day that we have not seen a larger share of the analysis in monetary policy with this steady state as an integral part of the picture. It seems to me that this steady state is very, very real as far as the industrialized nations are concerned. Much of the thinking in the monetary policy world is that "the U.S. should not become Japan." Yet in actual policy papers it is a rarity to see this steady state included.
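
The mechanics behind that steady state can be sketched compactly (a stylized, textbook-style rendering rather than the exact specification in Benhabib, Schmitt-Grohé and Uribe, 2001). Combine a Fisher relation with an active Taylor-type rule subject to the zero lower bound:

```latex
\text{Fisher relation: } 1+i \;=\; (1+r)(1+\pi), \qquad
\text{policy rule: } 1+i \;=\; \max\Bigl\{\,1,\;(1+r)(1+\pi^{*})\Bigl(\tfrac{1+\pi}{1+\pi^{*}}\Bigr)^{\phi}\Bigr\}, \quad \phi>1.
```

In steady state the two relations intersect twice: once at the intended target \pi = \pi^*, and once at a low-inflation (typically deflationary) steady state in which the nominal rate is stuck at zero and \pi \approx -r. The latter is the liquidity trap steady state referred to above.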

That brings up another question about policy generally. Benhabib et al. are all about global analysis. A lot of models that we have are essentially localized models that are studying fluctuations in the neighborhood of a particular steady state. There is a fairly rigorous attempt to characterize the particular dynamics around that particular steady state as the economy is hit by shocks and the policymaker reacts in a particular way. There are also discussions of whether the model so constructed provides an appropriate characterization of the data or not, and so on.

However, whether the local dynamics observed in the data are exactly the way a particular model is describing them or not is probably not such a critical question compared to the possibility that the system may leave the neighborhood altogether. The economy could diverge to some other part of the outcome space which we are not accustomed to exploring because we have not been thinking about it. Departures of this type may be associated with considerably worse outcomes from a welfare perspective.

I have come to feel fairly strongly that a lot of policy advice could be designed and should be designed to prevent that type of an outcome. If the economy is going to stay in a small neighborhood of a given steady state forever, do we really care exactly what the dynamics are within that small neighborhood? The possibility of a major departure from the neighborhood of the steady state equilibrium that one is used to observing gives a different perspective on the nature of ‘good policy.’ We need to know much more about the question: Are we at risk of leaving the neighborhood of the steady state equilibrium that we are familiar with and going to a much worse outcome, and if we are, what can be done to prevent that sort of global dynamic from taking hold in the economy?

I know there has been a lot of good work on robustness issues. Tom Sargent and Lars Hansen have a book on it. There are many others who have also worked on these issues. I think, more than anything, we need perspectives on policy other than just what is exactly the right response to a particular small shock on a particular small neighborhood of the outcome space.

ED: Do you have an example?
JB: I have also been influenced by some recent theoretical studies by Federico Ravenna and Carl Walsh, in part because the New Keynesian literature has had such an important influence on monetary policymakers. A lot of the policy advice has been absorbed from that literature into the policymaking process. I would not say that policymakers follow it exactly, but they certainly are well informed on what the advice would be coming out of that literature.

I thought the Ravenna-Walsh study did a good job of trying to get at the question of unemployment and inflation within this framework that so many people like to refer to, including myself on many occasions. They put a rigorous and state-of-the-art version of unemployment search theory into the New Keynesian framework with an eye toward describing optimal policy in terms of both unemployment and inflation. The answer that they got was possibly surprising. The core policy advice that comes out of the model is still price stability: you really want to maintain inflation close to target, even when you have households in the model that go through spells of unemployment and even though the policymaker is trying to think about how to get the best welfare that you can for the entire population that lives inside the model. The instinct that many might have, that including search-theoretic unemployment in the model explicitly would have to mean that the policymaker would want to "put equal weight" on trying to keep prices stable and trying to mitigate the unemployment friction, turns out to be wrong. Optimal monetary policy is still all about price stability.

I think that is important. We are in an era when unemployment has been much higher than what we have been used to in the U.S. It has been coming down, but it is still quite high compared to historical experience in the last few decades. For that reason many are saying that possibly we should put more weight on unemployment when we are thinking about monetary policy. But this is an example of a very carefully done and rigorous piece of theoretical research which can inform the debate, and the message that it leaves is that putting too much weight on unemployment might be actually counterproductive from the point of view of those that live inside the economy because they are going to have to suffer with more price variability than they would prefer, unemployment spells notwithstanding. I thought it was an interesting perspective on the unemployment/inflation question, which is kind of a timeless issue in the macro literature.

References:

Jess Benhabib, Stephanie Schmitt-Grohé and Martin Uribe, 2001. “The Perils of Taylor Rules,” Journal of Economic Theory, vol. 96(1-2), pages 40-69, January.

James Bullard, 2013. “The Importance of Connecting the Research World with the Policy World,” Federal Reserve Bank of St. Louis The Regional Economist, October.

James Bullard, 2013. “Some Unpleasant Implications for Unemployment Targeters,” presented at the 22nd Annual Hyman P. Minsky Conference in New York, N.Y., April 17.

James Bullard, 2010. “Seven Faces of ‘The Peril’,” Federal Reserve Bank of St. Louis Review, vol. 92(5), pages 339-52, September/October.

James Bullard, 2010. “Panel Discussion: Structural Economic Modeling: Is It Useful in the Policy Process?” presented at the International Research Forum on Monetary Policy in Washington D.C., March 26.

Lars Peter Hansen and Thomas Sargent, 2007. Robustness. Princeton University Press.

Federico Ravenna and Carl Walsh, 2011. “Welfare-Based Optimal Monetary Policy with Unemployment and Sticky Prices: A Linear-Quadratic Framework,” American Economic Journal: Macroeconomics, vol. 3(2), pages 130-62, April.

Dear SED Members and Friends:

I must confess, I was a little nervous about having my first SED meeting, as President, in Seoul. I thought it was a great idea, but nevertheless it was our first meeting outside our Euro-American, self-referential territory. True, after visiting Yonsei University in 2012 things looked very promising, but a little nervousness remained. It couldn't have gone better!

Of course, this success would not have been achieved without the collaboration and work of people I would like to thank. First, the local organizers Yongsung Chang, Jang-Ok Cho, Sun-Bin Kim, Hyun Song Shin, Kwanho Shin, Tack Yun; in particular, Yongsung who smoothly solved any possible problem (some said that he also helped to solve the problem with North Korea, but I wouldn’t go that far). Second, Virgiliu Midrigan and Josep Pijoan-Mas, and their Program Committee, who guaranteed that we can safely repeat our motto: ‘SED meetings are excellent wherever they happen to be’. And third, of course, all the participants, starting with our lively and thoughtful plenary speakers: Hal Cole, Gianluca Violante and Mark Watson.

Having said this, meeting in Toronto in 2014 (26-28 June) — in our (now less) self-referential territory — may seem like an anticlimax. But I do not think so. Not just because Toronto is a great, culturally diverse city; or because, with Matthew Mitchell and Diego Restuccia, we also have a remarkable team of local organizers to guarantee good logistics, service and fun. Not just because Marina Azzimonti and Veronica Guerrieri, and their program committee, will, I am sure, keep our motto alive; but also because this coming year brings the 25th SED Annual Meeting. Therefore, we will be celebrating

SED@25
which could also be a great occasion on which to look back, for a moment, and see how much Economic Dynamics has achieved over these past 25 years; and to look ahead, after that, and talk about how much it can achieve in the years to come.

So, we will Talk and Toast, on this very special occasion, and I hope you will be there too.

Ramon Marimon, President
Society for Economic Dynamics

The Review of Economic Dynamics (RED) is the official journal of the Society for Economic Dynamics. The journal publishes meritorious original contributions to dynamic economics. The scope of the journal is intended to be broad and to reflect the view of the Society for Economic Dynamics that the field of economics is unified by the scientific approach to economics. We publish contributions in any area of economics provided they meet the highest standards of scientific research. In particular, RED publishes original articles on applications of dynamic economic theory to a wide variety of problems in economics. Related measurement and empirical papers are also welcomed.

Editorial Board Composition

Since last year, there have been a number of changes to the Editorial Board composition. First and foremost, Gianluca Violante's tenure as Managing Editor came to an end on July 1, 2013. The new Managing Editor is Matthias Doepke (Northwestern). Gianluca will stay on as Editor and continue to handle all past submissions still under review or revision. In addition, Cristina Arellano (Minnesota), Tim Cogley (NYU), Gino Gancia (CREI), Fatih Guvenen (Minnesota), Matteo Iacoviello (Federal Reserve Board), Andrea Tambalotti (New York Fed), and Pierre-Olivier Weill (UCLA) joined us as Associate Editors. Jonathan Heathcote (Minneapolis Fed) and Vincenzo Quadrini (USC) are now Editors of the Journal.

Special thanks, on behalf of the Society, to Ricardo Lagos, Giorgio Primiceri, and Stephanie Schmitt-Grohé, who have resigned from their positions as Associate Editors after many years of service.

As of October 2013, the Editors of RED are: David Backus (Stern, NYU), Marco Bassetto (Chicago Fed), Jesus Fernandez-Villaverde (Pennsylvania), Jonathan Heathcote (Federal Reserve Bank of Minneapolis), Urban Jermann (Wharton, Pennsylvania), Vincenzo Quadrini (USC), Martin Schneider (Stanford), and Gianluca Violante (NYU).

The Associate Editors are: Cristina Arellano (Minnesota), Ariel Burstein (UCLA), Hector Chade (ASU), Tim Cogley (NYU), Jan Eeckhout (UCL), Gino Gancia (CREI), Fatih Guvenen (Minnesota), Matteo Iacoviello (Federal Reserve Board), Nir Jaimovich (Duke), Tim Kehoe (Minnesota), Maurizio Mazzocco (UCLA), Virgiliu Midrigan (NYU), Matthew Mitchell (Rotman, Toronto), Fabrizio Perri (Minnesota), Diego Restuccia (Toronto), Richard Rogerson (Princeton), Andrea Tambalotti (New York Fed), Pierre-Olivier Weill (UCLA), Steve Williamson (Washington University, St. Louis), and Christian Zimmermann (St. Louis Fed).

Turnaround Statistics

RED strives to deliver fast and efficient turnaround of manuscripts, without compromising the quality of the refereeing process. Besides desk rejections, virtually all submitted manuscripts receive two referee reports. In 2012, RED received 270 submissions (we had 268 submissions in 2011). As of July 2013, all of these submissions had already received at least a first decision. The mean processing time from submission to first decision was 60 days, or roughly 8.5 weeks. The table below describes the distribution of first decisions by type: desk reject, reject after review, and revise and resubmit (which includes both minor and major revisions requested).

Distribution of First Decision Times on 2012 Submissions
                                  All decisions   Desk rejects   Rejects after review   R&R
Total                             270             106            114                    50
Within 3 months                   64%             100%           49%                    50%
3 to 4 months                     19%             0%             30%                    18%
4 to 5 months                     11%             0%             16%                    16%
More than 5 months                6%              0%             5%                     16%
Average days since submission     60              8              97                     93

Note that 83% of all submissions were dealt with within 4 months, and only 6% of all submissions (or roughly 16 papers) took longer than 5 months. Often, these are difficult decisions, where the Referees are split and the Editor deems it necessary to call upon a third Referee. Conditional on receiving an R&R, the average time since manuscript submission was 93 days. Conditional on not being desk rejected, the average turnaround time was 96 days.

Among all the manuscripts with a final disposition in 2012, the acceptance rate was 12%, corresponding to 24 articles. This acceptance rate, comparable to that of other top economics journals, reflects the fact that only submissions of the highest quality are selected for publication in the Review. The continuous rise of the Impact factor for the Review (see the next section) is proof of this commitment to constantly improving our standards of publication.

Impact Factors

The table below shows two citation indexes for RED and for a comparison group of journals. The first, reported since 2008, is the 2-Year ISI Impact Factor (one of the most widely used indicators of a journal's quality). This index is calculated as the number of times articles published in a given journal in years t-1 and t-2 were cited by all indexed journals during year t, divided by the number of articles the journal published in those two years. The second is the IDEAS/RePEc Recursive Discounted Impact Factor, which 1) weighs each citation by the Impact Factor of the citing journal, this Impact Factor being itself computed recursively in the same fashion, and 2) considers all articles ever published in a given journal, but divides each citation by its age in years. The Recursive Discounted Impact Factors are normalized so that the average citation has a weight of 1. In other words, this index gives more weight to citations in good journals and to recent citations.
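
In symbols, the 2-year factor takes the standard form (the recursive discounted factor is described verbally above and computed by RePEc):

```latex
\mathrm{IF}^{\,\text{2-year}}_{t} \;=\; \frac{C_{t}(t-1) + C_{t}(t-2)}{N_{t-1} + N_{t-2}},
```

where C_t(s) is the number of citations received during year t by the journal's articles published in year s, and N_s is the number of articles the journal published in year s.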

                                            2-Year ISI Impact Factor                Recursive Discounted IF
                                            2012    2011    2010    2009    2008    (June 2013)
Review of Economic Dynamics                 1.602   1.358   1.259   0.975   0.954   1.512
AEJ: Macroeconomics                         3.191   3.800   --      --      --      2.205
Journal of Economic Growth                  2.250   2.458   2.468   3.083   2.542   2.546
Journal of Monetary Economics               1.649   1.892   1.654   1.755   1.429   1.725
International Economic Review               1.162   1.559   1.516   1.030   1.150   0.719
Journal of Economic Theory                  1.069   1.235   1.112   1.092   1.224   0.896
Journal of Money, Credit and Banking        1.104   1.093   1.150   1.194   1.422   0.711
Journal of Economic Dynamics and Control    0.807   0.855   1.117   1.097   0.885   0.497
Macroeconomic Dynamics                      0.420   0.452   0.763   0.517   0.516   0.419

As the table above shows, the Impact Factor of RED continues its steady growth. Moreover, the Recursive Discounted Impact Factor shows that articles published in RED are very well cited by articles in the top economics journals.

Upcoming Special Issues

RED relies predominantly on regular submissions of manuscripts. Throughout our history, we have also published special issues representing the frontier of academic research on topics of particular interest to members of the Society. Articles in special issues are usually selected from a broad call for papers, as well as through direct solicitations, and all go through a full refereeing process. Guest Editor Mark Gertler (NYU) and our own Steve Williamson (Washington University, St. Louis) are currently editing a special issue on “Money, Credit, and Financial Frictions,” which is scheduled to appear in the January 2015 issue of RED. A conference will be hosted by the Federal Reserve Bank of St. Louis on December 5-6, where first drafts of the accepted papers will be presented and discussed.

Move to Article-Based Publishing

The move to article-based publishing, which was announced in last year's Editor's report and should have taken place in 2013, was delayed for technical reasons, a decision taken jointly by Elsevier and the Editors of RED at the last Board Meeting in Seoul. Once the technical problems are resolved, the Board of RED will reconsider the transition.

Sincerely,

Gianluca Violante, Managing Editor
Matthias Doepke, Managing Editor
Review of Economic Dynamics

Society for Economic Dynamics: 2014 Meeting Call for Papers

The next annual meeting of the Society for Economic Dynamics will take place in Toronto, Canada, on 26-28 June 2014. It will be the 25th SED annual meeting, so you cannot miss it!

We are glad to announce keynote speeches by

Nicholas Bloom, Lawrence Christiano, and Iván Werning.

You can now submit your paper for the conference using ConferenceMaker until February 15th, 2014. We are looking forward to many exciting submissions for the academic program!

All information on the conference can be accessed from the SED 2014 meeting page.

We hope to see you next year in Toronto. Best wishes,

Marina Azzimonti and Veronica Guerrieri
2014 SED Program Chairs

Book Review: Hansen and Sargent’s Recursive Models of Dynamic Linear Economies

Recursive Models of Dynamic Linear Economies
by Lars Peter Hansen and Thomas Sargent

This book was written twenty years ago, but was never really finished. On the occasion of the Gorman Lectures at University College London, Hansen and Sargent decided to put the finishing touches on a work that was already well-known, if only thanks to its very popular MATLAB programs.

“Recursive Models of Dynamic Linear Economies” is about describing a class of dynamic stochastic models that can be represented in linear-quadratic form. Hansen and Sargent provide general procedures to solve for the equilibria of those models, as well as to estimate them. The book then becomes more concrete with a series of examples, both with representative agents and with heterogeneous consumers, always accompanied by sample MATLAB programs. These chapters are preceded by preliminaries that are useful in other contexts, such as introductions to linear stochastic difference equations, tricks for more efficient computing, elements of general equilibrium, and various statistical representations of stochastic processes. The general theme is that these economies can be represented in recursive form, have competitive equilibria, can be solved using linear optimal control theory, and generate interpretable vector autoregressions. The book will thus prove useful to anybody working with this class of models.
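
To give a flavor of the class of problems the book handles, the recursive solution of a discounted linear-quadratic problem boils down to iterating on a Riccati equation. The snippet below is a minimal, generic Python sketch of that iteration, not one of Hansen and Sargent's own MATLAB programs, and far simpler than the general setup developed in the book.

```python
import numpy as np

def solve_lq(A, B, Q, R, beta=0.95, tol=1e-10, max_iter=10_000):
    """Solve min E[sum_t beta^t (x'Qx + u'Ru)] s.t. x_{t+1} = A x_t + B u_t (+ noise)
    by iterating on the Riccati equation. Returns the value matrix P and the
    feedback matrix F of the optimal rule u_t = -F x_t."""
    P = np.zeros_like(Q)
    for _ in range(max_iter):
        BPB = R + beta * B.T @ P @ B
        BPA = beta * B.T @ P @ A
        P_next = Q + beta * A.T @ P @ A - BPA.T @ np.linalg.solve(BPB, BPA)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    F = np.linalg.solve(R + beta * B.T @ P @ B, beta * B.T @ P @ A)
    return P, F

# Toy example: scalar state and control
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.5]])
P, F = solve_lq(A, B, Q, R)
print("P =", P, " F =", F)
```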

Recursive Models of Dynamic Linear Economies is published by Princeton University Press.