Economic Dynamics Newsletter
Volume 4, Issue 1 (November 2002)
The EconomicDynamics Newsletter is a free supplement to the Review of Economic Dynamics (RED). It is published twice a year in April and November.
In this issue
Robert Shimer on Labor Market Frictions and Business Cycles
I would like to use this opportunity to discuss some of my recent research on the business cycle implications of labor market frictions. For reasons that I will discuss more below, I will frame my discussion in terms of the Mortensen-Pissarides matching model (Pissarides 1985, Mortensen and Pissarides 1994, and Pissarides 2000). This model has been used extensively for policy analysis, for example to examine the role that unemployment insurance and mandatory firing costs play in generating high unemployment rates in Europe (Pissarides 1999). With some exceptions (notably Merz 1995 and Andolfatto 1996), however, there has been little exploration of the model’s ability to match a standard set of business cycle facts. In a recent working paper (Shimer 2002a), I argue that the Mortensen-Pissarides model is quantitatively incapable of generating significant employment fluctuations in response to empirically plausible productivity shocks. That is, the model has almost no amplification mechanism. Despite this, the structure of the model allows us to think about other types of shocks that look to be a much more promising explanation for business cycle fluctuations.
The Mortensen-Pissarides Matching Model
I begin by describing the simplest version of the Mortensen-Pissarides matching model. There are two types of agents, workers and firms, both risk-neutral and infinitely-lived with a common discount rate. Workers may be either employed or unemployed. Employed workers earn an endogenous wage w but may not search for another job. Unemployed workers get a fixed payment b and may find a job. Firms have access to a production technology with constant returns to scale in labor. That is, each employed worker yields a fixed revenue p and must be paid the wage w. To hire new workers, firms must create a vacancy at a per-period cost of c. In other words, a firm’s per-period profits are n(p-w) – c v, where n is the number of employees and v is the number of vacancies. Free entry drives the discounted profits from creating a vacancy to zero.

Rather than modeling the search process explicitly, the Mortensen-Pissarides model reduces it to a black-box “matching function”. Let U denote the fraction of workers who are unemployed and V denote the number of vacancies in the economy. Then the number of matches is a function M(U,V), increasing in both arguments. The standard assumption is that this function has constant returns to scale, which implies that each unemployed worker finds a job with probability M(U,V)/U and each vacancy is filled with probability M(U,V)/V, both functions only of the vacancy-unemployment ratio V/U. The vacancy-unemployment ratio, and hence the rate at which unemployed workers find jobs, is in turn determined endogenously by firms’ collective vacancy decisions.
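To fix ideas, a common parametric example of a constant-returns matching function (an illustration, not a form the model requires) is the Cobb-Douglas specification. Writing θ = V/U for the vacancy-unemployment ratio,

$$ M(U,V) = \mu\, U^{\alpha} V^{1-\alpha}, \qquad \frac{M(U,V)}{U} = \mu\, \theta^{1-\alpha}, \qquad \frac{M(U,V)}{V} = \mu\, \theta^{-\alpha}, $$

with matching efficiency μ > 0 and elasticity α in (0,1), so both transition probabilities indeed depend on U and V only through θ.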
In the simplest version of the Mortensen-Pissarides matching model, the job destruction decision, i.e. the probability with which employed workers become unemployed, is treated as exogenous: all matches end with probability d per period. Mortensen and Pissarides (1994) extend this simple model to endogenize the job destruction decision.
A central feature of this model is that the matched worker and firm are in a bilateral monopoly situation. That is, an employed worker could always leave her job and find another employer; however, because search is time-consuming, workers are impatient, and all jobs are identical, she prefers to work for her current employer. Likewise, a firm could fire an employee and attempt to hire another one, but this will take time and will not yield a better match. There are many wages consistent with the pair agreeing to match, and so the model provides little guidance as to how wages are determined. Pissarides (1985) assumes wages satisfy an axiomatic Nash bargaining solution. A worker’s threat point is unemployment and a firm’s threat point is a vacancy. The two agents split the gains from production in excess of this threat point.
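In the textbook version of the model (Pissarides 2000), the Nash bargaining solution with worker bargaining weight β in (0,1) delivers a simple steady-state wage equation; I state it here as a sketch of the mechanism rather than as the exact specification of the papers discussed below:

$$ w = (1-\beta)\, b + \beta\,(p + c\,\theta). $$

The worker receives her unemployment payment b plus a share β of match output and of the hiring cost c θ that her employment saves the firm. When β is substantial, wages move almost one-for-one with productivity, which foreshadows the weak amplification discussed next.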
From the perspective of a matched worker and firm, wage bargaining is a zero sum game with distributional but not allocational consequences, and so the Nash bargaining assumption might seem innocuous. But from an aggregate perspective, wage bargaining matters. Firms’ expectations of future wages are crucial to their job creation decisions, which balance the up-front cost of creating a vacancy against the expected profits from employing workers. If firms anticipate having to pay high wages in the future, they will be reluctant to create vacancies today, reducing job creation and raising the unemployment rate.
Although the central role that wage bargaining plays in the determination of employment and unemployment rates in the Mortensen-Pissarides model is sometimes seen as a shortcoming, I will argue below that the bilateral monopoly situation is the reason why we can use the model to think about a different type of shock that looks to be a promising explanation for at least some part of business cycle fluctuations. In a reduced form model, these shocks amount essentially to changes in workers’ bargaining power.
Quantitative Behavior
In Shimer (2002a), I examine a stochastic version of this simple model, with shocks driven by a first-order autoregressive process for productivity, p, and the job destruction rate, d. At any point in time, the state of the economy is described by the current level of productivity, the current job destruction rate, and the current unemployment rate. In principle, the curse of dimensionality should make this problem very difficult to handle computationally. But I show that the equilibrium vacancy-unemployment ratio and wage can be expressed as functions only of the first two state variables, productivity and the job destruction rate. Moreover, both functions are easy to compute numerically — and in some special cases, analytically. After computing the vacancy-unemployment ratio at each productivity level and job destruction rate, I simulate a large number of paths and recover the stochastic properties of unemployment, vacancies, and wages in response to these exogenous shocks.

I choose model parameters to match as many macro/labor facts as possible. Due to its simplicity, the model cannot replicate some standard business cycle facts (Cooley and Prescott 1995). For example, there is no investment or capital in this model; and the risk-neutrality assumption implies the intertemporal elasticity of substitution is infinite. But there are a number of other facts that the model potentially can match. One that is particularly important is the cyclical behavior of vacancies and unemployment. The correlation between the detrended time series for the two variables is strongly negative, -0.88 (Abraham and Katz 1986, Blanchard and Diamond 1989), and they have approximately the same standard deviation of the percent deviation from trend. That is, if unemployment is 17 percent below trend (5 percentage points instead of 6 percentage points), vacancies are approximately 17 percent above trend. This means that the vacancy-unemployment ratio, and hence the ease of finding a job, is strongly procyclical. On the other hand, wages and productivity are much less variable and much less correlated with either vacancies or unemployment.
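As a concrete illustration of the simulation step, here is a minimal sketch of the loop described above. All functional forms and parameter values are placeholder assumptions chosen for illustration, not the calibration of Shimer (2002a); in particular, the mapping from productivity to the vacancy-unemployment ratio stands in for the equilibrium object that the paper computes from the model.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, mu = 0.5, 0.45    # matching elasticity and efficiency (assumed)
d = 0.034                # job destruction probability per period (assumed)
rho, sigma = 0.95, 0.01  # AR(1) persistence and innovation s.d. of log productivity
T = 1000

def theta_of_p(p, elasticity=7.0):
    # Placeholder for the equilibrium mapping from the state to the
    # vacancy-unemployment ratio; in the model it comes from the
    # free-entry condition, here it is a stylized power rule.
    return p ** elasticity

log_p = np.zeros(T)
u = np.empty(T)
u[0] = 0.06
for t in range(T - 1):
    log_p[t + 1] = rho * log_p[t] + sigma * rng.standard_normal()
    theta = theta_of_p(np.exp(log_p[t]))
    f = min(mu * theta ** (1 - alpha), 1.0)  # job-finding probability
    u[t + 1] = u[t] + d * (1 - u[t]) - f * u[t]

print(f"mean u = {u.mean():.3f}, s.d. of log u = {np.log(u).std():.3f}")
```

Detrending the simulated series, computing correlations among unemployment, vacancies (θ times u), and wages, and averaging across many paths then delivers the model's counterparts to the business cycle statistics described above.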
I next consider the behavior of the model economy in response to a productivity shock. Qualitatively, this raises the profit from a filled job p – w, encouraging firms to create vacancies. A higher vacancy-unemployment ratio decreases the rate at which vacancies are filled, restoring the zero profit condition. It also makes it easier for workers to find jobs, lowering the unemployment rate. Under reasonable parameter restrictions, vacancies and unemployment move in opposite directions, along a downward sloping “Beveridge curve,” consistent with the previously mentioned fact. But quantitatively, almost all of a productivity shock accrues to workers in the form of higher wages, leaving only a muted response of vacancies and unemployment. Equivalently, it takes an unrealistically large productivity shock to generate reasonable movements in vacancies and unemployment. The model offers little amplification of the underlying shocks.
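The muted response can be read off the steady-state free-entry condition; the deterministic expression below is a standard simplification of the stochastic model, not its exact equilibrium condition:

$$ \frac{c}{q(\theta)} = \frac{p - w}{r + d}, $$

where q(θ) is the probability that a vacancy is filled and r is the discount rate. The left side is the expected cost of filling a vacancy, the right side the expected discounted profit from a filled job. If wages absorb nearly all of an increase in p, the right side barely moves, so θ, and with it the job-finding rate and the unemployment rate, barely responds.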
I also consider the economy’s response to a job destruction shock. This has a direct effect on the unemployment rate because the employment-to-unemployment transition rate increases. It also has an indirect effect: a decline in the expected future duration of jobs discourages vacancy creation. This raises average unemployment duration and further increases the unemployment rate. Moreover, the increase in unemployment duration tends to reduce wages slightly, mitigating the decline in profits. In net, I find a large response of the unemployment rate to a job destruction shock but little movement in the vacancy-unemployment ratio or wages. As a result, vacancies and unemployment are counterfactually positively correlated in response to such shocks, while wages are realistically rigid.
If one wanted to explain only a subset of the data, the model would behave quite well. For example, Blanchard and Diamond (1989), Mortensen and Pissarides (1994), and Cole and Rogerson (1999) find that the model can match the behavior of unemployment and vacancies (as well as some other variables), but do not examine the behavior of wages. Essentially, these papers introduce unrealistically large productivity shocks in order to generate fluctuations. On the other hand, Ramey and Watson (1997) and Pries (2002) assume that job finding rates are constant and exogenous, or equivalently that the vacancy-unemployment ratio is acyclical. Both models generate large unemployment changes associated with only moderate wage fluctuations. Similarly, in the Lucas and Prescott (1974) search model, workers seek production opportunities available in an exogenously determined supply. Models in this framework (e.g. Gomes, Greenwood, and Rebelo 2001) therefore cannot explain why the vacancy-unemployment ratio is procyclical, although they are again capable of matching the cyclical behavior of wages. It is only by looking simultaneously at the behavior of unemployment, vacancies, wages, and productivity that the difficulty of matching the business cycle facts emerges. The lesson to take away from this is that it is important to explore models quantitatively along as many dimensions as possible.
Alternative Wage Setting Assumptions
Wage flexibility, particularly wage flexibility in new jobs, is central to these results. Suppose there was a productivity increase, but firms did not expect wages in new jobs to change. This would amplify the effect on firm entry, since firms would enjoy all of the productivity increase in the form of higher profits. Conversely, if firms anticipated declining wages without an associated change in productivity, this would also lead to an increase in entry and a decline in the unemployment rate. Moreover, quantitatively both of these effects are likely to be big. For example, firms’ economic profits are at least an order of magnitude smaller than their wage bill, so a one percent decline in wages leads to at least a ten percent increase in profits and an associated spurt in job creation; a back-of-the-envelope version of this calculation appears below. (On the other hand, rigidity of wages in old jobs, perhaps due to implicit or explicit wage contracts, has no effect on job creation.)

An assertion that rigid real wages amplify productivity shocks and that wage shocks are an important source of business cycle fluctuations is unsatisfactory. From a theoretical perspective, one would like to know why real wages are rigid in response to productivity shocks and yet sometimes change in the absence of such shocks. From a normative perspective, it is impossible to analyze a change in labor market policies in the absence of a policy-invariant model of wages. The important next step is therefore to develop alternative models of wage determination from first principles, which do not have a strong link between wage and productivity movements.
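To make the leverage concrete, here is the promised back-of-the-envelope calculation, with hypothetical numbers. Writing flow profits as π = p − w,

$$ \frac{\Delta\pi}{\pi} = \frac{\Delta w}{w}\cdot\frac{w}{p-w}, $$

so if wages are 97 percent of revenue and profits 3 percent, a one percent wage decline raises flow profits by roughly 0.97/0.03, about 32 percent, and free entry translates this into a correspondingly large burst of vacancy creation.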
One feature of the labor market that may be important in this regard is asymmetric information. A firm knows more about its productivity than does an employee, while a worker knows more about her outside opportunities than does her employer. For a worker to signal that she has a good outside opportunity is costly. She must leave the firm. Likewise, for a firm to credibly signal that it has low productivity is costly. It must typically lay off some workers or sharply reduce the hours of existing employees. The wage also plays an important role, conveying information to the worker about the firm’s productivity — it is at least willing to pay her wage — and to the firm about the worker’s outside opportunities — she is at least willing to work at that wage.
In Shimer (2002b), I develop a simple model with one-sided asymmetric information. A worker does not know how productive her job is. She is able to make take-it-or-leave-it wage demands, but is reluctant to ask for too high a wage because, if the firm refuses her demand, she is laid off. There are two important determinants of wages in this model. First, workers examine the hazard rate of the productivity distribution. If the hazard rate is large, asking for a higher wage is risky, i.e. it results in a substantial increase in the layoff probability. Second, workers consider how long it will take to get another job offer. If job offers are scarce, workers will be reluctant to risk demanding a high wage. This also feeds back into firm behavior. If firms anticipate that workers will demand high wages, they will create few jobs, making job offers scarcer and suppressing wage demands. In parametric examples, I find that an increase in mean productivity raises wages and reduces unemployment, much as in a model with symmetric information. An increase in the variance of productivity lowers wages and has an ambiguous effect on unemployment, an effect that is absent from models with symmetric information. If recessions are periods of low mean productivity and high variance, as Storesletten, Telmer, and Yaron (2001) suggest, we would observe little variation in wages and significant declines in employment.
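A static version of the worker's problem illustrates the role of the hazard rate; this is a sketch under simplifying assumptions, not the paper's full dynamic model. Suppose match productivity is drawn from a distribution F with density F', the firm accepts any demand w no greater than its productivity, and a rejected worker receives the value of unemployment U. The worker solves

$$ \max_{w}\; \bigl(1 - F(w)\bigr)\, w + F(w)\, U, $$

whose first-order condition can be rearranged to

$$ w - U = \frac{1 - F(w)}{F'(w)} = \frac{1}{h(w)}. $$

The premium the worker demands over her outside option is the inverse of the hazard rate h(w): a higher hazard rate makes demands riskier and lowers wages, while a lower value of unemployment, reflecting scarcer job offers, also depresses wage demands, exactly the two forces described above.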
The wage-setting regime, i.e. workers making take-it-or-leave-it wage demands, is important for these results. Since there is no reason to believe that this is an accurate characterization of wage setting in reality, relaxing this assumption is desirable. Of course, any other wage-setting assumption faces the same criticism. An alternative possibility is to focus on Pareto optimal incentive-compatible mechanisms in an economy with two-sided asymmetric information. Here the tools developed in the endogenous incomplete markets literature (e.g. Spear and Srivastava 1987, Thomas and Worrall 1990, Atkeson and Lucas 1992) are likely to prove useful. It is an open question whether such a model predicts significant employment fluctuations in response to modest exogenous shocks.
References
Andolfatto, David (1996): “Business Cycles and Labor-Market Search,” American Economic Review, 86, 112-132.
Atkeson, Andrew and Robert Lucas (1992): “On Efficient Distribution with Private Information,” Review of Economic Studies, 59, 427-453.
Blanchard, Olivier, and Peter Diamond (1989): “The Beveridge Curve,” Brookings Papers on Economic Activity, 1, 1-60.
Cole, Harold, and Richard Rogerson (1999): “Can the Mortensen-Pissarides Matching Model Match the Business-Cycle Facts?,” International Economic Review, 40, 933-959.
Cooley, Thomas, and Edward Prescott (1995): “Economic Growth and Business Cycles,” in Frontiers of Business Cycle Research, ed. by Thomas Cooley. Princeton University Press, New Jersey.
Gomes, Joao, Jeremy Greenwood, and Sergio Rebelo (2001): “Equilibrium Unemployment,” Journal of Monetary Economics, 48, 109-152.
Lucas, Robert and Edward Prescott (1974): “Equilibrium Search and Unemployment,” Journal of Economic Theory, 7, 188-209.
Merz, Monika (1995): “Search in the Labor Market and the Real Business Cycle,” Journal of Monetary Economics, 36, 269-300.
Mortensen, Dale, and Christopher Pissarides (1994): “Job Creation and Job Destruction in the Theory of Unemployment,” Review of Economic Studies, 61, 397-415.
Pissarides, Christopher (1985): “Short-Run Equilibrium Dynamics of Unemployment, Vacancies, and Real Wages,” American Economic Review, 75, 676-690.
Pissarides, Christopher (1999): “Policy influences on unemployment: The European experience,” Scottish Journal of Political Economy, 46, 389-418.
Pissarides, Christopher (2000): Equilibrium Unemployment Theory, MIT Press, Cambridge, MA, second edition.
Pries, Michael (2002): “Persistence of Employment Fluctuations: A Model of Recurring Job Loss,” Review of Economic Studies, forthcoming.
Ramey, Gary, and Joel Watson (1997): “Contractual Fragility, Job Destruction, and Business Cycles,” Quarterly Journal of Economics, 112, 873-911.
Shimer, Robert (2002a): “The Cyclical Behavior of Equilibrium Unemployment, Vacancies, and Wages: Evidence and Theory,” Mimeo.
Shimer, Robert (2002b): “Wage Setting with Asymmetric Information,” Mimeo.
Spear, Stephen and Sanjay Srivastava (1987): “On Repeated Moral Hazard with Discounting,” Review of Economic Studies, 54, 599-617.
Storesletten, Kjetil, Chris Telmer, and Amir Yaron (2001): “Asset Pricing with Idiosyncratic Risk and Overlapping Generations,” Mimeo.
Thomas, Jonathan and Tim Worrall (1990): “Income Fluctuation and Asymmetric Information: An Example of a Repeated Principal-Agent Problem,” Journal of Economic Theory, 51, 367-390.
Q&A: Boyan Jovanovic on Technology Adoption
- EconomicDynamics: In a recent Review of Economic Dynamics article with Peter Rousseau, you made the bold prediction that consumption should grow at the yearly rate of 7.6% in the 21st century. This is based on a model of learning by doing where growth is essentially fueled by computer technology. Your estimate is based on the assumption that experience can be measured by cumulative sales in hardware and software. How sensitive is your estimate to alternative measures, in particular the introduction of depreciation or obsolescence?
- Boyan Jovanovic: The model has obsolescence of capital in it. New capital devalues the old, and that is why the term g(p) enters the user cost formula in equation (7). But depreciation is indeed zero — it implies the stock of capital is the same as the cumulative number of machines produced and simplifies the algebra. But I do not think that it has much to do with the particular estimate that you report.

The high estimate of 7.6% derives from the fact that a high fraction of equipment is getting cheaper very fast. Much revolves around how big a fraction of the stock of equipment is involved, and whether the price index is accurately measured. We highlight this number partly because the parameter values that imply it also give the model a good fit to the 1970-2001 experience of the U.S. But I simply invite the reader to read the paper on this. Instead, let me now say a couple of things that are not in the paper about why the share of equipment and the price index are both hard to predict.
First, the share of equipment is measured in efficiency units. Even if we knew the growth rate of efficiency units of IT capital, we cannot infer its share in equipment if we do not know the initial share of IT equipment. We need an initial condition, and we may overestimate the importance of IT capital if we assume one that is too large.
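A simple calculation illustrates this sensitivity. The growth rate and initial shares below are hypothetical numbers chosen for illustration, not estimates from the paper:

```python
# If efficiency units of IT capital grow at rate g while the rest of the
# equipment stock is flat, the implied IT share after t years depends
# heavily on the assumed initial share s0. (Hypothetical numbers.)
def it_share(s0, g, t):
    grown = s0 * (1 + g) ** t
    return grown / (grown + (1 - s0))

for s0 in (0.01, 0.05, 0.10):
    print(f"s0 = {s0:.2f} -> share after 20 years: {it_share(s0, 0.15, 20):.2f}")
```

Starting from a 1 percent initial share the stock is still mostly non-IT after twenty years, while starting from 10 percent it is nearly two-thirds IT, so the assumed initial condition drives the conclusion.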
Second, the price decline of computers may be exaggerated, at least in cases when quality cannot be directly measured. Bart Hobijn argues that markups probably decline over the life of a product or the life of a product line, and that new products are introduced with a high markup. If we cannot anchor the quality of a new product accurately relative to the quality of old products, this may then appear as declining prices per unit of quality when, in fact, there may in the long run be no price decline at all. In other words, BLS data may overstate quality change for this reason.
Overall, I do believe that IT will form the basis for more and more products and processes and, since those who know tell us that Moore’s Law will continue at its historical pace for at least 20 more years, it seems clear that the world’s output per head will grow a lot faster in the 21st century than it did in the 20th.
- ED: In work with Jan Eeckhout, you show that knowledge spillovers in production at the firm level do not necessarily lead to technology convergence. Rather, the fact that followers may want to free ride on leaders creates endogenous and permanent inequality across firms. You apply this concept to city growth as well. Would this also apply to countries, and how does your work relate to North-South models of technology diffusion?
- BJ: Yes, I do believe it. But the cross-country TFP numbers seem to say otherwise. Eeckhout’s and my model implies that TFP is higher for followers than for leaders: followers accumulate less measured capital than leaders, but they derive more spillovers from the leaders and therefore appear to be using their capital more efficiently. Evidence on U.S. firms and plants supports this, in that large firms and plants have lower TFP than small ones. But cross-country evidence does not — Hall and Jones report that TFP is positively related to countries’ level of development. Since large firms are found mainly in rich countries, this evidence seems to say the opposite. If we are to believe the cross-country evidence, it says that if we look at the world market for steel, say, the large producers have higher TFP, whereas if we were to look only at U.S. producers, large producers would have lower TFP. I have to believe the U.S. evidence because it comes from a variety of sources and is based on better data than the cross-country evidence. But offhand I do not see what the source of the discrepancy is. At any rate, a referee was adamant that the model does not apply to the issue of development, and we more or less concede this in footnote 4 of the version that will come out in the AER in December.
- ED: With Peter Rousseau, you also work on merger cycles and show that they are essentially linked to major technology innovations. One consequence is that merger activity is also correlated with stock prices. As more and more people think that stock prices have been overvalued recently, would you say too many mergers have occurred? Does history exhibit such merger overshooting with proportionally more ex-post inefficient mergers toward the end of waves?
- BJ: If the buyer is overvalued and the target is not, and if the buyer is using a share swap to buy the target, then the overvaluation argument goes through. If the stock market as a whole is overvalued, and the target is a private company that presumably is not overvalued, then the argument again goes through. But most of the capital that has been acquired in this way has been in public companies. In other words, targets themselves are quoted in the stock market, at least once we weight the targets by their value. Moreover, many targets have been recent IPOs, and on the NASDAQ, which is held to have been the place where firms were overvalued the most. So, if the targets are the ones that were overvalued, then I think that the overvaluation story says not that there have been too many mergers, but too few.
References
Hobijn, Bart, 2001. “Is equipment price deflation a statistical artifact?,” Federal Reserve Bank of New York Staff Report 139.
Jovanovic, Boyan, and Jan Eeckhout, 2002. “Knowledge Spillovers and Inequality”, American Economic Review, forthcoming.
Jovanovic, Boyan, and Peter Rousseau, 2002. “Mergers as Reallocations,” NBER working paper 9279.
Jovanovic, Boyan, and Peter Rousseau, 2002. “Moore’s Law and Learning by Doing,” Review of Economic Dynamics, 5, 346-375.
Letter from the President
It is once again time for my annual invitation to continue your support of the Society for Economic Dynamics by paying the annual membership dues, submitting your research to the Review of Economic Dynamics, and participating in our annual conference. Once again the Society has continued to grow in numbers and importance. Submissions to RED have also increased this past year. Our membership is at an all-time high, and participation in our annual conference set new records.
Our annual conference was held in New York City this past June. It was our most successful conference to date, both in terms of the quality of the program and the quality of the local arrangements. The local arrangements, supervised by Alessandra Fogli, Vincenzo Quadrini, and Felicia Shutter, were spectacular. Narayana Kocherlakota and Fabrizio Perri put together a conference program that was truly impressive.
The 2003 conference will be held June 26-28 in Paris, France. The meetings are being organized by Hubert Kempf and Jean-Olivier Hairault. You can look forward to an interesting conference and a magnificent time in Paris. Lee Ohanian of UCLA and Franck Portier of the Université de Toulouse are organizing the program. The plenary speakers will be George Mailath, Jean Tirole, and Jeremy Greenwood. The call for papers can be found at https://www.minneapolisfed.org/research/events/sed/. Plan on participating – you won’t want to miss it!
This year again the SED sponsored a small research conference jointly with the C.V. Starr Center at New York University. The topic of the conference was “Finance and the Macroeconomy.” It was organized jointly by Sydney Ludvigson, Ellen McGrattan, and John Heaton. We heard nine papers over one and a half days, and these will appear in the spring of 2003 as a special issue of the Review of Economic Dynamics.
Please join again in support of the Society for Economic Dynamics. Information about how to pay your 2003 dues and subscribe to RED is available on the Society’s website.
I look forward to seeing you in June in Paris.
Sincerely,
Thomas F. Cooley, President, Society for Economic Dynamics
Call for Papers, 2003 Meetings
The 2003 meetings of the Society for Economic Dynamics will be held June 26-28, 2003, on the campus of Université Paris 1 in Paris, France. The plenary speakers are Jeremy Greenwood, George Mailath, and Jean Tirole. The program co-chairs are Lee Ohanian and Franck Portier.
The Society for Economic Dynamics solicits papers in all areas of dynamic economics for presentation at the conference. Members and non-members of the society are invited to participate. The deadline for submissions is February 1, 2003. Please use our standardized form, available at https://www.minneapolisfed.org/research/events/sed/, to submit an abstract, and include the name, affiliation, address, and e-mail address of the author interested in presenting the paper. This form is required for all applicants. Submission of the full paper is optional and can be done by providing a URL via the standardized form or by mailing a hard copy to SED Conference, ATTN: Lee Ohanian, Department of Economics, UCLA, 405 Hilgard Avenue, Los Angeles, CA 90024. Fax transmissions will not be considered.
Letter from the Coordinating Editor
The Review of Economic Dynamics, the official journal of the Society for Economic Dynamics, will begin its sixth year of publication in January. Already there is a long list of papers that will be forthcoming in Volume 6, and it looks like this is shaping up to be the highest quality volume yet. Of course, I fully expect that Volume 7 will be even better. I hope your research will be a part of it!
In this letter, I want to let you know about some recent changes to the Editorial Board, as well as some changes in our preferred method of submitting papers for possible publication in RED.
Editorial Board Changes
I am pleased to announce that Ellen McGrattan, of the Federal Reserve Bank of Minneapolis, has joined the group of Editors, which also includes Michele Boldrin, Boyan Jovanovic, Timothy Kehoe, Robert E. Lucas, Jr., Richard Rogerson, and me. In addition, Robert Shimer from Princeton University has agreed to serve as an Associate Editor.
Tom Cooley, who was the founding Coordinating Editor of RED and is currently President of the Society, has stepped down as an editor, but will serve on the journal’s Editorial Advisory Board. Tom deserves most of the credit for establishing the journal and positioning it on the road to success. I know that the journal will continue to benefit from his advice and experience in the years to come.
Submission Procedure
From the beginning, the journal has encouraged electronic submissions. In fact, most of our review process, from soliciting referee reports to notifying authors about editorial decisions, has been handled by email. Still, most of our submissions come to the Editorial Office the old-fashioned way. Although I have planned for some time to make electronic submission the preferred method, the technology for creating truly portable PDF files was not well understood by many RED authors. When I began as Coordinating Editor in the summer of 2000, a very high percentage of the PDF files we received needed to be recreated so that all fonts were embedded.
By now the percentage of faulty PDF files submitted to RED has decreased significantly, so we are ready to make this our preferred method of submission. Hence, you are encouraged to send your submission by email to [email protected] as a PDF file. Unfortunately, we cannot accept DVI files, Word files, or any other electronic format. Melissa Turner, our editorial assistant, will acknowledge your submission and is available to answer questions about its status during the review process. Although electronic submission is now our preferred method, you can still mail us printed copies of your paper if you prefer the traditional route.
If you are unsure of how to create a truly portable PDF file, some instructions are available on the RED website.
Gary Hansen
RED Coordinating Editor
Classical General Equilibrium Theory
Lionel W. McKenzie
Just published by MIT Press, this book covers the main theorems of general equilibrium theory that are usually taught through notes or articles. Thus it offers a serious basis for any student of GE. The core focuses on “Classical” GE, i.e. the theory launched by Arrow, Debreu, Samuelson, and the author, with some of the important extensions: demand theory, tâtonnement, Leontief production, comparative statics, the core, and existence and uniqueness of the competitive equilibrium. While much of this theory also applies to dynamic settings (just define the set of goods appropriately), a chapter covers features specific to time: the von Neumann and Ramsey models as well as turnpike theory.
Obviously, this is rather technical subject matter, and the book is therefore not accessible to everyone. Indeed, the point of the book is to state and prove the theorems rigorously, with a unified language and unified notation. A lot of material is covered, from the fundamentals of demand theory to some frontiers of research, like the use of supermodularity for comparative statics. As a consequence, the writing is dense, yet it does not compromise on completeness or clarity.
Some teachers like to use Debreu’s Theory of Value complemented with some articles. They may want to consider adopting this book, which covers all of this material.
“Classical General Equilibrium Theory” was published by MIT Press in August 2002.
Bruce Smith: Our Colleague and Friend, by Bruce Champ and Stephen Williamson
RePEc/IDEAS entries: Bruce Smith, Bruce Champ, Stephen Williamson.
Bruce Smith’s death on July 9, 2002, at the age of 47, was a great personal loss for us, and a great loss for the economics profession. Our lives and careers would have been very different had it not been for Bruce, and his contributions to monetary economics, macroeconomics, and economic history were enormous.
Bruce was astonishingly prolific. Over the course of a 21-year career, he published 95 papers. Of these, 35 appeared in print during the last five years of his life, at a time when Bruce had to deal with the debilitating effects of treatment for cancer. Bruce refused to let a ravaging disease slow him down, and had much work in the pipeline at the time of his death, including 11 current working papers. How did he do it? In the early 1990s, when Bruce was in the process of moving from the University of Western Ontario to Cornell, the three of us coauthored a paper, and we learned first-hand what lay behind this phenomenal flow of output. Bruce first quickly helped frame the idea of the paper, we sorted out what we thought we wanted to say, and then work began, with everyone working furiously to keep up with Bruce. When the results were in place, Bruce volunteered to write the first draft. Thinking we had several weeks to wait, we took a breather, but three days later Bruce sent us a complete first draft, printed by hand on yellow legal paper. This was a full-length paper of lucid prose, complete with copious footnotes and complete references. At the time this seemed super-human, but if we had known Bruce better we would not have been surprised.
The Minnesota flavor of Bruce’s work is distinctive. Bruce distinguished himself as an undergraduate at the University of Minnesota by taking courses in the graduate program, where he came in contact with Tom Sargent and Neil Wallace, who were clearly formative influences. After graduating with a B.S. in Economics from Minnesota in 1977, Bruce left for M.I.T., graduating with a Ph.D. in Economics in 1981 under the supervision of Stan Fischer. The flavor of M.I.T. is certainly harder to find in Bruce’s work than is the flavor of Minnesota, but M.I.T. is likely where he first became exposed to information economics, which played a key role in much of his research.
Bruce had a self-deprecating sense of humor, and would describe his early research as “exercises in grafting static information models onto overlapping generations models of monetary economies.” However, his work amounted to far more than that. Bruce was a pioneer in studying the role of private information in modern models of money and banking. He examined credit rationing and its implications for monetary and fiscal policy, the effect of deposit interest rate ceilings on the stability of the banking system, and how private information changes our view of the Real Bills Doctrine and the Quantity Theory of Money. Bruce knew a great deal about monetary history, and made key contributions to our knowledge of colonial monetary regimes in North America, pre-Civil War U.S. banking, and banking panics during the National Banking era. One of Bruce’s unusual abilities was in forging innovative links between monetary history and modern economic theory, as when he argued (we think, persuasively) that Peel’s Bank Act could be seen as an attempt to kill sunspot equilibria by way of a legal restriction on private financial intermediaries.
Bruce, in his work with Valerie Bencivenga, was one of the first researchers to show how financial intermediation could matter for economic growth. He went on to do further research with Valerie Bencivenga, John Boyd, and Stacey Schreft, among others, which extensively explored the role of financial factors, intermediation, and policy in the growth process. This work paid careful attention to the older literature on growth and development, modern growth theory, intermediation theory, and empirical facts. Another important element in Bruce’s research was the study of the implications of multiple equilibria in monetary economics, as in some of his sole-authored papers and those written with Costas Azariadis and Jim Bullard, among others. A key idea that permeates Bruce’s papers on this topic is the tension between indeterminacy and efficiency. Bruce argued that it was typical that financial restrictions, while reducing efficiency in some desirable equilibria, could also eliminate less desirable equilibria. Thus, while a financial restriction might make good outcomes less good, it might eliminate the possibility of some bad outcomes as well.
Bruce wrote many papers, and he wrote with many coauthors as well. In fact, Bruce had more coauthors than most researchers have published papers. Bruce clearly enjoyed the interaction and exchange of ideas involved in collaborative work, and that enthusiasm extended to his students, some of whom were also his coauthors. Bruce was extremely generous with his time, and had an extensive network of graduate students and former students to whom he was intensely loyal. That generosity extended to the effort he put into placing his students. He also spent much of his time, particularly in the summers, at Federal Reserve Banks, including the Atlanta Fed, the Cleveland Fed, the Kansas City Fed, the Minneapolis Fed, and the St. Louis Fed. Bruce contributed much to the research efforts of all of these institutions, and was adept at communicating the important role of research to the management of the Federal Reserve System.
As a colleague, Bruce was fun to be around. His knowledge of the literature was extensive, his mind was incredibly quick, he always had words of encouragement, and he had a wonderful sense of humor. It is no secret that Bruce was highly principled, and found it difficult to compromise on positions that he had thought through carefully and was convinced were right. Thus, in an academic environment, Bruce could be difficult to live with at times, but this perhaps had more to do with imperfections in the world than with any imperfection in Bruce’s character.
In “It’s A Wonderful Life,” Jimmy Stewart learns what life would have been like had he not existed. Had Bruce Smith not existed, we are sure that life would have been far less interesting and lively, and that our own work would have suffered for it. Bruce taught us that it is a wonderful life, and his determination and drive in the face of great adversity is an inspiration. Bruce has left a very large pair of shoes that no one can fill, but what he taught us and left behind in his writings will help to fill the void.
Christian Zimmermann and Gary Hansen , Editors.