In recent years a branch of economics has developed that applies approaches from statistical physics to economic problems. Some years ago, my colleague at the New School, Anwar Shaikh, introduced me to the pioneering work in this area that seeks to understand income distributions, for example by the physicist V. M. Yakovenko. This expanding literature helps to explain, among other things, why actual income distributions below a certain high income level appear to be best described by a conventional ‘exponential’ distribution, while beyond that level they are better described by a ‘thick-tailed’ Paretian distribution. The resulting composite can exhibit much higher inequality than a purely exponential distribution would, and helps us to understand why underreporting or non-reporting of top-end incomes may be so consequential for empirical estimates of inequality. One tempting way to interpret the two parts of the distribution is that the exponential part is driven by ‘additive’ dynamics deriving from wage fluctuations, and the Paretian part by ‘multiplicative’ dynamics deriving from the return on capital. Only those who own substantial capital assets will have incomes that are meaningfully influenced by the latter (and which, in recent years, have grown and grown). Anwar Shaikh has been building on this idea in various ways in his own important work on inequality.
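The quantitative point can be illustrated with a small simulation (all parameters below are my own illustrative choices, not estimates from any actual income data): a purely exponential distribution of incomes has a Gini coefficient of about 0.5; grafting a Pareto tail onto the top few per cent raises measured inequality; and dropping the top 1% of incomes, as underreporting effectively does, lowers the measured figure again.

```python
import numpy as np

rng = np.random.default_rng(0)

def gini(x):
    """Gini coefficient via the standard sorted-values formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return (2.0 * np.sum(i * x)) / (n * np.sum(x)) - (n + 1.0) / n

n = 200_000
# Exponential 'body': additive, wage-like dynamics (illustrative scale).
wages = rng.exponential(scale=40_000.0, size=n)

# Replace the top ~3% with a Pareto tail (alpha = 1.8, purely illustrative),
# standing in for multiplicative capital-income dynamics.
composite = wages.copy()
k = int(0.03 * n)
tail_start = np.quantile(wages, 0.97)
composite[np.argsort(composite)[-k:]] = tail_start * (1 + rng.pareto(1.8, size=k))

g_exp = gini(wages)       # ~0.5 for an exponential distribution
g_full = gini(composite)  # higher, because of the Pareto tail
g_trunc = gini(np.sort(composite)[: n - n // 100])  # top 1% unreported

print(g_exp, g_full, g_trunc)
```

The gap between the last two numbers is one way of seeing why missing top-end incomes matter so much for empirical inequality estimates.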
More recently, my New School colleague Duncan Foley has introduced me to a more ‘information theoretic’ point of view, according to which the natural way to frame a range of problems in economics is to seek to identify ‘statistical equilibria’ (in which distributions as a whole are stable even if there are fluctuations, and which can result from behaviours of various kinds, not necessarily maximizing ones). [This is unlike the standard concept of economic equilibrium, which requires maximization by agents and that the specific patterns of production or allocation resulting from such maximization be in stasis, denying the possibility of fluctuations.] He and another New School colleague, Paulo dos Santos, as well as our distinguished former students elsewhere (such as Gregor Semieniuk and Ellis Scharfenaker), have argued that the natural way to identify such statistical equilibrium distributions is to ask what outcomes maximize ‘entropy’, suitably conceived. Entropy maximization is taken to be an appropriate principle to apply in the presence of our own ignorance. Duncan Foley in particular has been elaborating the philosophical case for such an approach at length in various writings and presentations.
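The entropy-maximization idea can be made concrete with a small numerical sketch (the income grid and target mean below are arbitrary choices of mine): among all probability distributions on a grid of income levels with a given mean, the one that maximizes entropy has the Gibbs (exponential) form, which is one way of motivating the exponential ‘body’ of observed income distributions.

```python
import numpy as np

def maxent_given_mean(x, target_mean, iters=200):
    """Maximum-entropy distribution on support x subject to a fixed mean.
    The solution has the Gibbs form p_i proportional to exp(-lam * x_i);
    lam is found by bisection, since the mean is decreasing in lam."""
    def p_of(lam):
        e = -lam * x
        w = np.exp(e - e.max())  # shift exponents for numerical stability
        return w / w.sum()
    lo, hi = -1.0, 1.0           # bracket wide enough for this grid
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p_of(mid) @ x > target_mean:
            lo = mid
        else:
            hi = mid
    return p_of(0.5 * (lo + hi))

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

x = np.arange(0, 101, dtype=float)   # income grid (illustrative units)
p = maxent_given_mean(x, 20.0)

# The max-entropy solution is exponential: successive ratios are constant.
ratios = p[1:] / p[:-1]
print(np.allclose(ratios, ratios[0]))

# Any other distribution with the same mean has lower entropy -- e.g. a
# two-point distribution with equal mass at incomes 0 and 40:
q = np.zeros_like(p)
q[0] = q[40] = 0.5
print(entropy(p) > entropy(q))
```

The bisection over the Lagrange multiplier is just a convenient numerical stand-in for the usual analytic derivation; the point is that the exponential shape falls out of the entropy-maximization principle itself, not from any assumption about individual maximizing behaviour.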
Over the last eight months or so, I have engaged in an email conversation with colleagues (and in particular with Duncan Foley) on the justification and limits of such an approach. The correspondence reflects my own limited knowledge of the approach as well as a sincere effort to understand the framework better. It also reflects Duncan Foley’s extraordinary intellectual generosity in engaging with me. The correspondence as a whole, which I make public with the permission of those involved, is available here in pdf form (the exchange as of May 1st, 2017, updated to reflect Duncan’s and Paulo’s latest response to me, and to be periodically updated to reflect further rounds of correspondence). I apologize to the reader for the seemingly complex form in which the various rounds of correspondence appear (earlier rounds are quoted again within the more recent ones), but this is a more or less unadulterated version of what took place. Because of the considerable repetition, the actual length of correspondence one must read through to get a sense of the whole is less than it at first appears. However, it is at times technical. All of the papers mentioned are available via hyperlinks in the text, with the exception of the paper by dos Santos, which is available in an earlier version, here. We make the correspondence available in the interest of broadening and deepening the discussion. The emails are unedited and show various imperfections as a result. There are aspects that I would change if I could (perhaps to use the example of explaining when and why financial crises take place, rather than riots?) but the broad contours of the discussion, and many of the details too, might not greatly vary.
I believe that the champions of the approach being discussed have made a very valuable, and in some respects profound, contribution. However, I remain substantially unconvinced of its value as a general approach in the ‘social sciences’, for reasons that I ultimately try to elaborate in the correspondence. My own view is that models based on a mechanistic image of society or its actors cannot provide such a general methodology, even if they find useful application in specific contexts. (This is of course an objection to standard economics arguably even more than it is an objection to statistical mechanics based approaches.) It is possible that a contextually sensible application of such ideas will lead to valuable forms of theoretical progress. It is also probable that further broadening of the methods used (for example, to take note of the insights of non-equilibrium thermodynamics) can lead to still more useful applications. At the same time, the social sciences as a whole require a supple approach, in which the humanness of the human being finds its place, and in which the role of judgment in matching model to reality is recognized as an inescapable demand. The advocates of the statistical physics inspired approach to the social sciences (Foley, here) agree that there is a role for judgment, but is it to be thick or thin? I believe that this is much more than a distinction without a difference.