A speaker at the Singularity Summit, Eric Baum, triggered some interesting ideas. His ideas dovetail nicely with the thinking I've been doing on cognitive architectures, and I'll have to read his book to see whether he has already thought of my idea and either included or discredited it.
My hypothesis is that portfolio theory from finance (specifically the Markowitz model) can be used to select the mental agents that remain strong and influential in a mind.
The motivation for my hypothesis is an analogy between the "society of mind" idea and the free market. Allow me to explain:

- Firms in a marketplace take in information and material, transform it according to their model of the world, and send the result out to other firms and individuals.
- Cognitive agents take in information and "material" (e.g., other agents' conclusions about the state of the world), transform it according to their model of the world, and send the result out to other cognitive agents or agencies.
- The "goodness of fit" of a firm's model of the world, and of its role in that world, is measured by its net income, which is a factor in the valuation of its securities.
- There must be some analogous measure of a cognitive agent's "world-model goodness of fit": a profit function in which cost reflects the amount of effort involved in the transformation and "income" reflects the number of requests made to the agent. I'm not aware of any models that have been built on this point, but I suspect something very similar must already exist in artificial neural network models.
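To make the analogy concrete, here is a minimal sketch of what such a profit function might look like. Everything here (the function name, the request-based income proxy, and the effort-based cost proxy) is an illustrative assumption, not an established model:

```python
# Hypothetical "profit function" for a cognitive agent.
# Income is proxied by how often other agents request this agent's output;
# cost is proxied by the computational effort of serving each request.

def agent_profit(requests_served: int, effort_per_request: float,
                 value_per_request: float = 1.0) -> float:
    """Net 'profit' of an agent over some time window."""
    income = requests_served * value_per_request
    cost = requests_served * effort_per_request
    return income - cost
```

An agent serving 100 requests at an effort of 0.3 each would show a "profit" of 70.0; an agent whose effort exceeds the value of its answers runs at a loss and would be a candidate for weakening.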
- When many firms exist together, a market can arise for their securities. The value of the securities is determined by their net income and by expectations about how that income will change in the future, i.e., how well each firm's "world model" will perform in the face of an uncertain future.
- When many cognitive agents exist together, some collection of them must come to dominate and be strengthened; ideally these should be the ones whose output is most valuable for the environment the agents exist in.
- To balance risks in the face of an uncertain future, an investor can assemble a portfolio of firms whose "world models" (e.g., business plans) counterbalance each other. The idea is that if some uncontrollable external event occurs, one firm might benefit from it while another suffers. An example is oil companies and car companies: when the price of oil rises, the former benefit and the latter suffer, so holding both securities tends to balance out the impact of the (locally) uncontrollable rise in oil prices. This balancing is the function of the Markowitz model and of portfolio theory in general: it measures how the values of securities have behaved relative to each other in the past and assumes (a delicate assumption!) that this will usefully predict how they will behave relative to each other in the future. Risk is thus reduced to only the totally unpredictable environmental uncertainties. I suppose one of these unpredictable uncertainties is the possibility that the future world is totally different from the past world, which we know becomes true over a long enough time span. As the rate of change accelerates, this "predictability horizon" gets closer and closer, which is essentially the point of the "singularity" meme.
- To balance risk in the face of an uncertain future, minds (natural or synthetic) ought to be able to do the same thing: judge how well their components have responded to the environment in the past and assemble a collection of them that is maximally robust to uncertainty. I think that if a suitable "profit function" could be found for cognitive agents, the Markowitz model and other methods of finance ought to apply very well to "societies of mind."
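The oil-versus-car balancing act can be sketched with the textbook two-asset minimum-variance formula from Markowitz portfolio theory. The return series below are made-up numbers chosen so the two firms move in opposite directions; only the formula itself is standard:

```python
from statistics import mean

def min_variance_weights(returns_a, returns_b):
    """Markowitz-style minimum-variance weights for two assets,
    estimated from historical return series. This leans on the
    'delicate assumption' that past co-movement predicts future
    co-movement."""
    ma, mb = mean(returns_a), mean(returns_b)
    var_a = mean((r - ma) ** 2 for r in returns_a)
    var_b = mean((r - mb) ** 2 for r in returns_b)
    cov = mean((ra - ma) * (rb - mb) for ra, rb in zip(returns_a, returns_b))
    # Standard closed-form solution for the two-asset minimum-variance portfolio.
    w_a = (var_b - cov) / (var_a + var_b - 2 * cov)
    return w_a, 1 - w_a

# Hypothetical returns: the oil firm gains exactly when the car firm loses.
oil = [0.05, -0.03, 0.04, -0.02]
car = [-0.04, 0.04, -0.03, 0.03]
w_oil, w_car = min_variance_weights(oil, car)  # near 50/50 for these series
```

Because the two series are perfectly anti-correlated, the minimum-variance portfolio splits evenly between them and the oil-price swings cancel; the same machinery would apply to cognitive agents once their "returns" are defined.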
SO! I (like Eric Baum) suggest that we can use market principles to organize synthetic minds. The reverse may also be true: we may be able to use market principles to understand how our own natural minds work. It would be interesting to integrate the three coolest AI ideas I've read about in the last two years. It might look something like this:
1. Have a large number of Hierarchical Temporal Memory (HTM) networks function as the perceptual layer. They would perceive the external world as well as the output of other agents.
2. Let the HTMs feed Copycat-like non-deterministic agents that seek analogies and make transformations. This may be superfluous, since HTMs are supposed to be able to do that already, but there might be an interesting synthesis.
3. Set these agencies up such that other agencies can monitor their output and strengthen or weaken their activation according to how well they perform; this is the "investor" function. There can be arbitrarily many investors, arranged in any configuration of loops with each other, since they are also agents. Finding some way to limit that combinatorial explosion would be key; I suspect a cost function would limit it effectively.
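One plausible reading of the "investor" function in step 3 is a multiplicative reweighting of agent activations by recent performance. The update rule, the score range, and the learning rate below are all my assumptions, sketched only to show the shape of the mechanism:

```python
def reweight(activations, performances, learning_rate=0.5):
    """One 'investor' update: strengthen agents whose recent output
    scored well, weaken the rest, then renormalize so total activation
    (the 'capital' under management) stays fixed. Performances are
    assumed to lie in [-1, 1]."""
    updated = [a * (1 + learning_rate * p)
               for a, p in zip(activations, performances)]
    total = sum(updated)
    return [u / total for u in updated]

weights = [1/3, 1/3, 1/3]          # three agents, equal initial activation
scores = [0.8, 0.0, -0.8]          # this round's performance signals
weights = reweight(weights, scores)  # best performer gains activation share
```

The fixed total activation acts as the cost function hinted at above: an investor cannot strengthen one agent without weakening others, which bounds how many agents can stay influential at once.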
[See here now for a later post on the issue, which addresses questions posed in the comments to this post]