In contemplating the connections between computation in neural networks and in economic networks, the question of failure arises.
"Market efficiency" does not mean that prices are always correct, or that every agent is always acting with perfect rationality. It does mean that all information available to the system is incorporated into prices, through the action of agents behaving rationally with their given small piece of information about the environment.
It may not always be clear whether an agent is acting rationally. An agent's information may come from unreliable sources while its processing is rational, and this will result in failure. Likewise, it may have good information but faulty processing, which will also result in failure. Failure should therefore propagate backwards through the network until it reaches an agent in the latter condition: one whose own processing, rather than its inputs, is at fault.
Ideally, this agent would be corrected or removed, and all the agents downstream of it should be able to resume functioning with minimal penalty (unless part of their decision making involves choosing where they get their information, in which case they should be penalized for choosing a poor source). But if the failure propagates back to an entity that is not allowed to fail, what happens? The information that the failure would contribute to the economy is lost.
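The backward propagation described above can be sketched in code. This is a minimal toy model, not anything from the original discussion: the `Agent` class, its fields, and the `trace_failure` helper are all hypothetical names chosen for illustration, and "rational processing" is reduced to a boolean flag.

```python
class Agent:
    """A toy economic agent that relies on upstream sources."""
    def __init__(self, name, sources=None, processing_ok=True):
        self.name = name                    # identifier for reporting
        self.sources = sources or []        # upstream agents this one relies on
        self.processing_ok = processing_ok  # does this agent process rationally?

def trace_failure(agent):
    """Walk upstream from a failing agent until we find one whose own
    processing is faulty. Agents that merely relayed bad information
    are passed over, and the search recurses into their sources."""
    if not agent.processing_ok:
        return agent  # root cause: bad processing, regardless of inputs
    for src in agent.sources:
        culprit = trace_failure(src)
        if culprit is not None:
            return culprit
    return None  # failure not explained by this part of the network

# Example: c relies on b, b relies on a; a's processing is broken,
# so a failure observed at c traces back to a.
a = Agent("a", processing_ok=False)
b = Agent("b", sources=[a])
c = Agent("c", sources=[b])
print(trace_failure(c).name)  # -> "a"
```

In this sketch, removing `a` and rewiring `b` to a sound source would let `b` and `c` resume functioning, which is the "minimal penalty" recovery the paragraph above describes.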
The point is: failure is an important tool for developing economic efficiency. I mention this, of course, because of the current discussion over economic agents that are "too big to fail," and the idea of creating agents that cannot legally be allowed to fail (like social security and the impending health care program). Because they cannot be allowed to fail, they can't experiment with novel ways of incorporating information, which means they cannot develop and contribute to economic efficiency. And since the world is dynamic, their world-model and processing are likely to be wrong eventually (no matter how conservative they are), and so we get stuck with inefficient, broken economic agents that we can't get rid of.
And back to the original point: how do natural neural networks deal with failing elements?