
Saturday, August 22, 2009

Insight from Hayek

Hayek posits that in order for a planned economic system to yield a level of production equivalent to a competitive system's, it would have to be similarly able to process information. The technical capabilities, uses, condition, and location of each individual piece of productive equipment would need to be quantified and collected in a single place where that information could be operated upon. This would be a system of differential equations on the order of hundreds of thousands of variables (edit: more like hundreds of billions of variables: "productive equipment" must include every tool, part, machine, natural resource, and human available in the system, each with varying levels of many characteristics), all in constant flux. Setting aside the problems of information collection, which are formidable in themselves, such systems of equations are unsolvable in real time, even on the largest conceivable digital computers. Such problems are NP-complete. (edit: we should also note that since humans are included as productive resources, all of their actions must also necessarily be planned, something undesirable if freedom is a value.)
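
To get a feel for the scale, here is a rough back-of-the-envelope sketch. It assumes, purely for illustration, that one planning pass reduces to solving a single dense linear system (roughly n^3 operations) on a hypothetical petaflop-class machine; the real problem, being nonlinear and dynamic, would be far worse.

```python
# Rough time to centrally "solve the plan" once, assuming (purely for
# illustration) that it reduces to one dense n x n linear solve at ~n^3
# floating point operations. The machine speed is a generous hypothetical.

def years_per_solve(n, flops_per_second=1e15):
    """Approximate years needed for a single dense solve of n variables."""
    return n ** 3 / flops_per_second / (3600 * 24 * 365)

for n in (1e5, 1e9, 1e11):  # hundreds of thousands up to hundreds of billions
    print(f"n = {n:.0e}: ~{years_per_solve(n):.3g} years per solve")
```

At a hundred thousand variables this is about a second; at hundreds of billions it exceeds the age of the universe, before accounting for the fact that the coefficients are in constant flux.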

Note that the stated goal above was merely to emulate what the competitive marketplace already does. How does the competitive marketplace accomplish this apparently impossible feat of data collection and calculation?

The answer must be that the price system itself is an instantiation of that system of differential equations, solved via matrix methods. We've seen from linear algebra how systems of equations can be represented in matrix form, and we've seen from neuroscience and learning theory how neural networks can encode functions and data in their connection weights, and thereby perform matrix transformations that translate inputs into predictions and actions. My contention is that economic networks similarly represent information in the weights and frequencies of transactions, and that firms similarly integrate input information (demand) and turn it into prediction (capacity) and action (supply). I believe this is a novel hypothesis, a consistent elaboration of previous claims of market efficiency.
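
To make the analogy concrete, here's a minimal sketch, not a model of any actual market; the dimensions, weights, and update rule are all invented for illustration. A single layer of "firms" holds a weight matrix that stands in for the strengths and frequencies of their transactions, mapping an observed demand vector into a capacity prediction and a supply action.

```python
import numpy as np

rng = np.random.default_rng(0)

n_goods, n_firms = 4, 3
# W plays the role of transaction weights/frequencies learned over time.
W = rng.uniform(0, 1, size=(n_firms, n_goods))

demand = np.array([10.0, 3.0, 0.0, 7.0])   # input: observed demand per good

# Each firm integrates the demand signals through its weights (a matrix
# transformation), producing a prediction and then an action.
capacity = W @ demand                       # prediction (one value per firm)
supply = np.maximum(capacity, 0)            # action (can't ship negative)

# A crude, hypothetical learning step: weights strengthen toward goods whose
# demand went unmet, analogous to firms reallocating toward underserved trades.
unmet = demand - W.T @ supply
W += 0.01 * np.outer(supply, unmet)
```

The point of the toy is only the structure: the "knowledge" lives in the weights, and the computation is a matrix transformation from demand to supply.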

The system can solve itself in real time because every element of it is both a processing mechanism and a memory mechanism; it's parallel distributed processing. This is also the reason it can be flexible; as information changes, only the affected portions of the network are recalculated. But since all information is constantly changing, the entire network is eternally in flux. This probably means that its solution is never at the absolute maximum; indeed a single maximum almost definitely doesn't exist. If all exogenous change were to cease (a nonsensical idea anyway), the system would probably reach a maximum, but it would still probably be in flux as it could slide among the other possible maxima as well.
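
Here is a toy sketch of that locality; the graph, numbers, and update rule are invented for illustration. After a shock hits one node, only nodes whose inputs actually changed beyond a tolerance are revisited; the rest of the network keeps its cached state.

```python
# A small chain of "markets": each node's price relaxes toward the average
# of its neighbors' prices. Only affected nodes are ever recomputed.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
price = {n: 1.0 for n in neighbors}

def propagate(shocked_node, shock, tol=1e-6):
    price[shocked_node] += shock
    frontier = set(neighbors[shocked_node])   # start with directly affected nodes
    while frontier:
        node = frontier.pop()
        new = sum(price[m] for m in neighbors[node]) / len(neighbors[node])
        if abs(new - price[node]) > tol:      # changed enough to matter?
            price[node] = new
            frontier.update(neighbors[node])  # wake only its neighbors
    return price

print(propagate(0, 0.5))
```

Each node stores its own state (memory) and performs its own update (processing), and the recalculation touches only the portion of the network the shock actually reaches.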

The next question then becomes: why does the system exhibit this maximizing behavior? Given that it's composed of simple computational units that are unaware of the global maximizing goal, why did the network come to instantiate a maximizing function rather than some other function? I imagine that the answer is recursive: the network is maximizing because its components are maximizing; they are maximizing because their subcomponents are maximizing, and so on down to the most basic level possible.
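
One way to see this in miniature, using a hypothetical linear-demand toy rather than a general proof: each firm below repeatedly maximizes only its own profit, knowing nothing of any global objective, yet their interaction settles into a stable aggregate outcome.

```python
# Hypothetical market: price p = a - b * total_quantity, cost c * q^2 per firm.
# Each firm's best response maximizes ONLY its own profit, holding the others
# fixed: maximize q*(a - b*(others + q)) - c*q^2, giving
# q = (a - b*others) / (2*(b + c)).
a, b, c = 100.0, 1.0, 2.0
q = [0.0, 0.0, 0.0]                 # each firm's output

for _ in range(50):                 # iterated local best responses
    for i in range(len(q)):
        others = sum(q) - q[i]
        q[i] = max(0.0, (a - b * others) / (2 * (b + c)))

print([round(x, 2) for x in q], "total:", round(sum(q), 2))
```

No unit here represents the system-wide outcome anywhere; the aggregate behavior is just the fixed point of everyone's local maximizing.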
