Suvarna Garge (Editor)

Evolution of microeconomics

Microeconomics is the study of the behaviour of individuals and firms in making decisions about the allocation of limited resources. The modern field of microeconomics arose from the effort of the neoclassical school of thought to put economic ideas into mathematical form.

Traditional marginalism

An early attempt was made by Antoine Augustin Cournot in Researches on the Mathematical Principles of the Theory of Wealth (1838), in which he described a spring-water duopoly that now bears his name. Later, William Stanley Jevons's Theory of Political Economy (1871), Carl Menger's Principles of Economics (1871), and Léon Walras's Elements of Pure Economics: Or the Theory of Social Wealth (1874–77) gave rise to what was called the Marginal Revolution. A common thread in these works was models or arguments characterised by rational economic agents maximising utility under a budget constraint. This arose partly from the need to argue against the labour theory of value associated with classical economists such as Adam Smith, David Ricardo and Karl Marx. Walras went as far as developing the concept of general equilibrium of an economy.
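In modern notation, the consumer problem that the marginalists formalised can be stated (for two goods, as an illustration) as:

```latex
\max_{x_1, x_2} \; u(x_1, x_2)
\quad \text{subject to} \quad p_1 x_1 + p_2 x_2 \le m ,
```

where $u$ is a utility function, $p_1, p_2$ are prices and $m$ is the consumer's income; at an interior optimum the marginal utilities satisfy $\dfrac{\partial u/\partial x_1}{\partial u/\partial x_2} = \dfrac{p_1}{p_2}$.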

Alfred Marshall's textbook, Principles of Economics, was published in 1890 and became the dominant textbook in England for a generation. His main point was that Jevons had gone too far in emphasising utility over costs of production in attempting to explain prices. In an appendix he writes:

"There are few writers of modern times who have approached as near to the brilliant originality of Ricardo as Jevons has done. But he appears to have judged both Ricardo and Mill harshly, and to have attributed to them doctrines narrower and less scientific than those they really held. Also, his desire to emphasise an aspect of value to which they had given insufficient prominence, was probably in some measure accountable for his saying, "Repeated reflection and inquiry have led me to the somewhat novel opinion that value depends entirely upon utility." (Theory, p. 1) This statement seems to be no less one-sided and fragmentary, and much more misleading, than that into which Ricardo often glided with careless brevity, as to the dependence of value on cost of production; but which he never regarded as more than a part of a larger doctrine, the rest of which he had tried to explain."

In the same appendix he further states:

"Perhaps Jevons' antagonism to Ricardo and Mill would have been less if he had not himself fallen into the habit of speaking of relations which really exist only between demand price and value as though they held between utility and value; and if he had emphasised as Cournot had done, and as the use of mathematical forms might have been expected to lead him to do, that fundamental symmetry of the general relations in which demand and supply stand to value, which coexists with striking differences in the details of those relations. We must not indeed forget that, at the time at which he wrote, the demand side of the theory of value had been much neglected; and that he did excellent service by calling attention to it and developing it. There are few thinkers whose claims on our gratitude are as high and as various as those of Jevons: but that must not lead us to accept hastily his criticisms on his great predecessors."

Marshall's way of resolving the controversy was to argue that the demand curve could be derived by aggregating individual consumer demand curves, which were themselves based on the consumer's problem of maximising utility, while the supply curve could be derived by superimposing a representative firm's supply curves for the factors of production; market equilibrium would then be given by the intersection of the demand and supply curves. He also introduced the notion of different market periods, mainly the short run and the long run. This set of ideas gave rise to what economists call perfect competition, now found in the standard microeconomics texts, even though Marshall himself had stated:
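As a minimal sketch of Marshall's demand-and-supply apparatus, assuming hypothetical linear aggregate curves, the equilibrium is simply the price at which quantity demanded equals quantity supplied:

```python
def equilibrium(a, b, c, d):
    """Intersection of linear demand Q_d = a - b*p and supply Q_s = c + d*p."""
    p = (a - c) / (b + d)  # price equating quantity demanded and supplied
    q = a - b * p          # quantity traded at that price
    return p, q

# Hypothetical coefficients for illustration only:
print(equilibrium(a=100, b=2, c=10, d=1))  # (30.0, 40.0)
```

Marshall's short-run versus long-run distinction then amounts to asking which of the coefficients are free to adjust over the period considered.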

"The process of substitution, of which we have been discussing the tendencies, is one form of competition; and it may be well to insist again that we do not assume that competition is perfect. Perfect competition requires a perfect knowledge of the state of the market; and though no great departure from the actual facts of life is involved in assuming this knowledge on the part of dealers when we are considering the course of business in Lombard Street, the Stock Exchange, or in a wholesale Produce Market; it would be an altogether unreasonable assumption to make when we are examining the causes that govern the supply of labour in any of the lower grades of industry. For if a man had sufficient ability to know everything about the market for his labour, he would have too much to remain long in a low grade. The older economists, in constant contact as they were with the actual facts of business life, must have known this well enough; but partly for brevity and simplicity, partly because the term "free competition" had become almost a catchword, partly because they had not sufficiently classified and conditioned their doctrines, they often seemed to imply that they did assume this perfect knowledge."

An early formulation of the concept of production functions is due to Johann Heinrich von Thünen, who presented an exponential version of it. The standard Cobb–Douglas production function found in microeconomics textbooks derives from a collaborative paper by Charles Cobb and Paul Douglas published in 1928, in which they analysed U.S. manufacturing data using this function as the basis of a regression analysis estimating the relationship between inputs (labour and capital) and output (product); the discussion is framed in terms of marginal productivity. The mathematical form of the Cobb–Douglas function can be found in the earlier work of Wicksell, Thünen, and Turgot.
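The function in question takes the form

```latex
Q = A \, L^{\alpha} K^{\beta} ,
```

where $Q$ is output, $L$ labour, $K$ capital, $A$ a productivity constant, and $\alpha, \beta$ the output elasticities of the two inputs; the marginal products are $\partial Q/\partial L = \alpha Q / L$ and $\partial Q/\partial K = \beta Q / K$. In the original 1928 study the exponents were constrained to sum to one (constant returns to scale), with the labour exponent estimated at roughly three-quarters.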

Jacob Viner presented an early procedure for constructing cost curves in his “Cost Curves and Supply Curves” (1931). The paper was an attempt to reconcile two streams of thought on the issue at the time: the idea that supplies of factors of production were given and independent of the rate of remuneration (the Austrian School), or dependent on the rate of remuneration (the English School, that is, the followers of Marshall). Viner argued that “the differences between the two schools would not affect qualitatively the character of the findings,” more specifically, “...that this concern is not of sufficient importance to bring about any change in the prices of the factors as a result of a change in its output.”

In Viner's terminology—now considered standard—the short run is a period long enough to permit any technologically possible change in output that does not alter the scale of the plant, but not long enough to adjust that scale. He arbitrarily assumes that all factors can, for the short run, be classified into two groups: those necessarily fixed in amount, and those freely variable. Scale of plant is the size of the group of factors that are fixed in amount in the short run, and each scale is quantitatively indicated by the amount of output that can be produced at the lowest average cost possible at that scale. Costs associated with the fixed factors are fixed costs; those associated with the variable factors are direct costs. Note that fixed costs are fixed only in their aggregate amount, and vary with output in their amount per unit, while direct costs vary in their aggregate amount as output varies, as well as in their amount per unit. The spreading of overhead is therefore a short-run phenomenon, not to be confused with the long run.
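The per-unit behaviour of the two cost categories can be seen numerically, assuming hypothetical figures:

```python
F = 100.0  # total fixed cost: constant in aggregate over the short run
v = 3.0    # direct (variable) cost per unit, taken as constant here

for q in (10, 20, 50):
    afc = F / q    # average fixed cost falls as output rises
    atc = afc + v  # so average total cost falls: the "spreading of overhead"
    print(q, afc, atc)
```

At outputs 10, 20 and 50 the average fixed cost is 10.0, 5.0 and 2.0 respectively, while total fixed cost stays at 100 throughout.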

He explains that if the law of diminishing returns holds, so that output per unit of the variable factor falls as total output rises, and if the prices of the factors remain constant, then average direct costs increase with output. Also, if atomistic competition prevails—that is, if an individual firm's output does not affect product prices—then the individual firm's short-run supply curve coincides with its short-run marginal cost curve. The short-run supply curve for the industry can then be constructed by summing the abscissas (the quantities) of the individual marginal cost curves. He also explains that:

  • Internal economies of scale are primarily a long-run phenomenon and are due either to reductions in the technical coefficients of production (technical economies: increased productivity through improved organisation or methods of production) or to discounts resulting from larger size (pecuniary economies).
  • Internal diseconomies of scale can be avoided by increasing industry output by increasing the number of plants without increasing the scale of the plant.
  • External economies of scale are also either technical or pecuniary, but in this case are due to the aggregate behaviour of the industry, and refer to the size of output of the industry as a whole.
  • External diseconomies of scale may occur if as industry output rises the unit price of factors and materials rises as well due to increasing competition for inputs with other industries.
  • It should be made clear that these long-run results hold only if producers are rational actors, that is, able to optimise their production so as to reach an optimal scale of plant.
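Viner's horizontal summation of marginal cost curves can be sketched as follows, assuming hypothetical quadratic cost functions so that each firm's marginal cost is linear in output:

```python
def firm_supply(p, c, d):
    """Quantity at which marginal cost MC(q) = c + 2*d*q equals price p
    (zero if the price is below the intercept c)."""
    return max(0.0, (p - c) / (2 * d))

def industry_supply(p, firms):
    """Sum the abscissas (quantities) of the firms' MC curves at price p."""
    return sum(firm_supply(p, c, d) for c, d in firms)

# Two hypothetical firms with cost parameters (c, d):
firms = [(2.0, 0.5), (4.0, 1.0)]
print(industry_supply(10.0, firms))  # 8.0 + 3.0 = 11.0
```

The summation runs over quantities at each price, not over prices at each quantity, which is what "summing abscissas" means.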

Imperfect competition and game theory

In 1929 Harold Hotelling published "Stability in Competition", addressing the problem of instability in the classic Cournot model: Bertrand had criticised it for lacking an equilibrium when prices are the independent variables, and Edgeworth had constructed a dual-monopoly model with correlated demand which also lacked stability. Hotelling proposed that demand typically varies continuously with relative prices, not discontinuously as the latter authors suggested.

Following Sraffa, he argued for "the existence with reference to each seller of groups who will deal with him instead of his competitors in spite of difference in price". He also noticed that traditional models presuming a unique market price only make sense if the commodity is standardised and the market is a point: by analogy with a temperature model in physics, discontinuity in heat transfer (price changes) inside a body (market) would lead to instability. To demonstrate the point he built a model of a market located along a line, with a seller at each end; in this case, profit maximisation by both sellers leads to a stable equilibrium. From this model it also follows that if a seller chooses the location of his store so as to maximise his profit, he will place it as close as possible to his competitor's: "the sharper competition with his rival is offset by the greater number of buyers he has an advantage". He also argues that clustering of stores is wasteful from the point of view of transportation costs and that the public interest would dictate more spatial dispersion.
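The location argument can be illustrated with a deliberately simplified sketch: assume consumers are uniform on [0, 1] and prices are held equal, so each consumer buys from the nearer seller and the market splits at the midpoint between the two locations. Moving toward the rival then always enlarges a seller's share:

```python
def market_shares(a, b):
    """Sellers at positions a <= b on [0, 1]; with equal prices the market
    splits at the midpoint between them."""
    m = (a + b) / 2
    return m, 1 - m  # shares of the seller at a and the seller at b

print(market_shares(0.0, 1.0))   # (0.5, 0.5)
print(market_shares(0.25, 1.0))  # (0.625, 0.375): moving inward raises a's share
```

Hotelling's full model also lets the sellers set prices; the fixed-price simplification above isolates the locational incentive only.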

A new impetus was given to the field around 1933, when Joan Robinson and Edward H. Chamberlin published, respectively, The Economics of Imperfect Competition (1933) and The Theory of Monopolistic Competition (1933), introducing models of imperfect competition. Although the monopoly case had already been expounded in Marshall's Principles of Economics, and Cournot had constructed models of duopoly and monopoly in 1838, a whole new set of models grew out of this new literature. In particular, the monopolistic competition model results in an inefficient equilibrium. Chamberlin defined monopolistic competition as a "...challenge to traditional viewpoint of economics that competition and monopoly are alternatives and that individual prices are to be explained in terms of one or the other." He continues: "By contrast it is held that most economic situations are composite of both competition and monopoly, and that, wherever this is the case, a false view is given by neglecting either one of the two forces and regarding the situation as made up entirely of the other."

Later, some market models were built using game theory, particularly regarding oligopolies. A good example of how microeconomics began to incorporate game theory is the Stackelberg competition model published in 1934, which can be characterised as a dynamic game with a leader and a follower and then solved to find a Nash equilibrium.
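Assuming a hypothetical linear inverse demand P = a − Q and a common unit cost c, the game is solved by backward induction: the follower's best response to the leader's quantity q₁ is q₂ = (a − c − q₁)/2, which the leader anticipates when choosing q₁:

```python
def stackelberg(a, c):
    """Equilibrium quantities for inverse demand P = a - (q1 + q2) and
    common unit cost c, solved by backward induction."""
    q1 = (a - c) / 2       # leader's optimum, anticipating the reaction below
    q2 = (a - c - q1) / 2  # follower's best response to q1
    return q1, q2

print(stackelberg(a=12, c=0))  # (6.0, 3.0): the leader produces twice as much
```

The leader's first-mover advantage shows up directly: substituting the follower's reaction into the leader's profit (a − c − q₁)q₁/2 gives the optimum q₁ = (a − c)/2, double the follower's output.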

William Baumol provided in his 1977 paper the current formal definition of a natural monopoly as “an industry in which multifirm production is more costly than production by a monopoly” (p. 810); mathematically, this is equivalent to subadditivity of the cost function. He then sets out to prove 12 propositions related to strict economies of scale, ray average costs, ray concavity and transray convexity: in particular, strictly declining ray average costs imply strict ray subadditivity, and global economies of scale are sufficient but not necessary for strict ray subadditivity.
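In modern notation, strict subadditivity of the cost function $C$ at output $y$ means that no division of that output among several firms can produce it more cheaply:

```latex
C\!\left(\textstyle\sum_{i=1}^{k} y_i\right) \;<\; \sum_{i=1}^{k} C(y_i)
\quad \text{for every } k \ge 2 \text{ and all } y_1, \dots, y_k \neq 0
\text{ with } \textstyle\sum_{i} y_i = y .
```

A single firm is then the least-cost industry structure at $y$, which is exactly Baumol's condition for natural monopoly.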

In a 1982 paper Baumol defined a contestable market as one where "entry is absolutely free and exit absolutely costless", with freedom of entry in the Stigler sense: the incumbent has no cost discrimination against entrants. He states that a contestable market will never have an economic profit greater than zero when in equilibrium, and the equilibrium will also be efficient. According to Baumol, this equilibrium emerges endogenously due to the nature of contestable markets; that is, the only industry structure that survives in the long run is the one which minimises total costs. This contrasts with the older theory of industry structure, since not only is industry structure not exogenously given, but equilibrium is reached without ad hoc hypotheses on the behaviour of firms, say by using reaction functions in a duopoly. He concludes the paper by commenting that regulators who seek to impede entry and/or exit of firms would do better not to interfere if the market in question resembles a contestable market.

Behavioural economics

Kahneman and Tversky published a paper in 1979 criticising the very idea of the rational economic agent. Its main point is that there is an asymmetry in the psychology of the economic agent which gives a much higher value to losses than to gains. The article is usually regarded as the beginning of behavioural economics and has had consequences particularly for the world of finance. The authors summed up the idea in the abstract as follows:

"...In particular, people underweight outcomes that are merely probable in comparison with outcomes that are obtained with certainty. This tendency, called certainty effect, contributes to risk aversion in choices involving sure gains and to risk seeking in choices involving sure losses. In addition, people generally discard components that are shared by all prospects under consideration. This tendency, called the isolation effect, leads to inconsistent preferences when the same choice is presented in different forms."
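The loss–gain asymmetry can be sketched with a value function of the kind Kahneman and Tversky later estimated in their 1992 follow-up work (the parameter values below come from that paper, not from the 1979 article): concave over gains, convex and steeper over losses:

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain/loss x relative to a reference point:
    concave for gains, convex for losses, with losses weighted lam times
    more heavily (loss aversion)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# A loss looms larger than an equal-sized gain:
print(value(100) + value(-100) < 0)  # True
```

The certainty and isolation effects quoted above concern how probabilities are weighted, which the 1979 paper models with a separate decision-weight function not sketched here.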

Great Recession and executive compensation

More recently, the Great Recession and the ongoing controversy on executive compensation brought the principal–agent problem again to the centre of debate, in particular regarding corporate governance and problems with incentive structures.
