
Wednesday, August 5, 2009

Questions for Dr. Minai and Dr. Bickle

I'm organizing my thoughts on what I'd like to ask my professors; here they are:

Regarding the basics of neural implementation of mathematics:


Is there evidence at the cell level of operations other than vector subtraction being performed in neurons?

What operations, via what cellular mechanisms?

Can the "negative echo" phenomenon observed in saccades be generalized as the inverse of the input, rather than simply the negative? (eg: This would lay the ground work for division through reciprocal multiplication; which would allow for a cellular mechanism of averaging inputs. )

Is "inverse" validly the same concept in addition and multiplication, just applied to different operators? (eg: are negatives and reciprocals fundamentally related by the concept of "inverse"?)

Do the concepts of inversion and transposition have relevance at the cell level?

Are neural representations always row or column vectors, or does the concept of a matrix or a tensor have relevance to neural representations?

Are claims of Bayesian learning trees being implemented in neurons substantiated and/or plausible?


Regarding implementation of vector subtraction to do analogy-making in a simulation:


Suppose that each time we move from point to point in vector-space, both the inverse of the last point and the directional vector for getting there are stored in memory.

When the point and the direction are stored, they are given a high activation value, and this value decays each time we move to a new point.

At each step, all of the previous points and directions are candidates for use in the next action. The next action is determined stochastically by randomly choosing from the contents of the memory, weighted by each of the entries' current activation value.

New target points might come into the system (as if via sensory mechanisms), and they would be given an activation weight just like the previously visited points. Since they will come in with a high activation while all other weights are decaying, they will most likely (but not definitely!) be used in the next step.

If a point (whether newly received or previously visited) is selected via the stochastic process, then the inverse of the current point will be added to the new point, the directional vector between them will be found and stored in memory, and the focus will shift to the new point via the directional vector.

If a previously used directional vector is selected via the stochastic process, the system will move from its current point using the previously determined direction, and it will find a new point and store it in memory.

Each time a point is visited, or a directional vector is used, its activation will increase to its maximum.

The more frequently a vector is traveled or visited, the more slowly its activation will decay and the higher the floor of its activation will become (ie: long-term potentiation).

Points observed by the sensory process will receive extra activation and obtain LTP if they have been previously visited.

Highly activated vectors might have some spontaneous oscillatory behavior; their activation might spike back to a high level without external input (eg: the "return to origin" attentional vector). This would keep the system from straying too far off into irrelevant territory.
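To make the mechanics concrete, here's a minimal Python sketch of the process described above. All of the names (`Trace`, `Memory`, the decay rate, the LTP floor increment) are my own placeholders and the specific numbers are arbitrary; the point is just the shape of the loop: weighted stochastic choice, move, store, reinforce, decay.

```python
import numpy as np

rng = np.random.default_rng(0)

class Trace:
    """A stored point or directional vector, with a decaying activation."""
    def __init__(self, kind, vector):
        self.kind = kind          # "point" or "direction"
        self.vector = np.asarray(vector, dtype=float)
        self.activation = 1.0     # new entries start at maximum activation
        self.floor = 0.0          # raised by repeated use (crude stand-in for LTP)

class Memory:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.traces = []

    def add(self, kind, vector):
        trace = Trace(kind, vector)
        self.traces.append(trace)
        return trace

    def step_decay(self):
        # Activation decays toward the trace's floor each time we move.
        for t in self.traces:
            t.activation = max(t.floor, t.activation * self.decay)

    def choose(self):
        # Stochastic choice weighted by each entry's current activation.
        weights = np.array([t.activation for t in self.traces])
        idx = rng.choice(len(self.traces), p=weights / weights.sum())
        return self.traces[idx]

    def reinforce(self, trace):
        trace.activation = 1.0                      # spike back to maximum
        trace.floor = min(0.5, trace.floor + 0.1)   # frequent use raises the floor

def step(memory, current_point):
    """One move: pick a trace, derive the next point, store what's new."""
    chosen = memory.choose()
    if chosen.kind == "point":
        # Direction found by adding the additive inverse of the current point.
        direction = chosen.vector + (-current_point)
        memory.add("direction", direction)
        new_point = chosen.vector
    else:
        # Reapply a previously used direction from wherever we are now,
        # and store the newly found point.
        new_point = current_point + chosen.vector
        memory.add("point", new_point)
    memory.reinforce(chosen)
    memory.step_decay()
    return new_point
```

A sensory point arriving from outside would be handled the same way: `memory.add("point", v)` gives it full activation while everything else is decaying, so it will most likely (but not certainly) win the next choice.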


Motivation for this approach:
If the relationships between concepts are stored abstractly (as vectors), then they should be able to be applied to other concepts (by acting on points). This could lead to the discovery of concepts (points) that have not yet been observed by the sensory mechanisms. If a concept discovered by this process is later confirmed (or falsified) by observation, the discovery amounts to a hypothesis being tested; a confirmed concept should be regarded as relevant and preserved for future use via LTP. The stochastic process simulates the parallelism of biological neural networks; I'm unaware of a better way of programming this.

Arithmetic operations in neurons

Bickle's "negative echo" mechanism provides a neurological basis for subtraction, and we already know how neurons do addition. How about the other operators? Can we find evidence of multiplication and division in neural systems? I suppose if multiplication was discovered, we might hypothesize the existence of an "reciprocal echo" just like the negative echo, and this would satisfy the definition of division (since dividing by x is the same as multiplying by 1/x (or x^-1)).

More generally: the negative of a number is its "additive inverse," the reciprocal of a number is its "multiplicative inverse." A new question then arises: can Bickle's "negative echo" be re-interpreted more generally as just the inverse of the original stimulus? That would take care of division... assuming multiplication is already taken care of. I hope the use of the word "inverse" in both cases by mathematicians is more deeply seated than just an arbitrary convention.
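For what it's worth, the shared word is not arbitrary: both are inverses with respect to an operation's identity element (adding -x gets you back to 0, multiplying by 1/x gets you back to 1). A quick sanity check with arbitrary vectors (my own illustration, nothing Bickle-specific):

```python
import numpy as np

a = np.array([6.0, 8.0])
x = np.array([2.0, 4.0])

# Subtraction is addition of the additive inverse.
assert np.allclose(a - x, a + (-x))

# Division is multiplication by the multiplicative inverse (the reciprocal).
assert np.allclose(a / x, a * (1.0 / x))

# Each inverse returns you to its operation's identity element.
assert np.allclose(x + (-x), 0.0)        # additive identity: 0
assert np.allclose(x * (1.0 / x), 1.0)   # multiplicative identity: 1
```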

How then might neurons implement multiplication? Maybe through a cascade effect: if an activated neuron causes many other neurons to become activated, that seems pretty similar to multiplication.
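One toy way to read that (my own illustration, assuming unit synaptic weights): if one neuron drives several downstream neurons and their outputs converge again, the converged signal is the original activation multiplied by the fan-out, so multiplication by an integer falls out of nothing but copying and summing.

```python
activation = 3.0
fan_out = 4   # one neuron activates four identical downstream neurons

# If all four converge on a target that simply sums its inputs, the target
# receives fan_out * activation -- multiplication via copying and addition.
converged = sum(activation for _ in range(fan_out))
assert converged == fan_out * activation
```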

Another question would be whether neurons follow the same matrix multiplication rules as our artificial math. That is to say: in order to multiply two vectors in matrix algebra, their inner dimensions must be the same. Thus, a 1x2 row vector must be transposed into a 2x1 column vector if you want to multiply it with another 1x2 row vector. (See this Wikipedia article if this summary is insufficient.) Is the concept of transposition relevant for neurons? What does it mean to transpose a vector when the vector's dimensions represent the degree to which an object possesses a property?
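Here's the dimension rule in NumPy, with arbitrary vectors; this is only the textbook matrix-algebra behavior, not a claim about neurons:

```python
import numpy as np

row = np.array([[1.0, 2.0]])    # 1x2 row vector
other = np.array([[3.0, 4.0]])  # another 1x2 row vector

# row @ other would raise a ValueError: inner dimensions (2 and 1) don't match.

# Transposing the second vector into a 2x1 column makes the inner dimensions
# agree (1x2 @ 2x1) and yields a 1x1 inner product.
inner = row @ other.T
print(inner)         # [[11.]]  i.e. 1*3 + 2*4

# Transposing the first instead gives the 2x2 outer product (2x1 @ 1x2).
outer = row.T @ other
print(outer.shape)   # (2, 2)
```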

Ahhh, and what place does the inverse of a matrix have in this vector-space theory of cognition? An inverse matrix is one such that when it is multiplied by the original matrix, you get the identity matrix (ones on the diagonal and zeros everywhere else). I asked earlier if "inversion" was the general mechanism that could be abstracted from Bickle's "negative echo" phenomenon; is the same principle in operation here? I note that I slipped from thinking of solely row vectors to matrices in general... is this justifiable in biological neurons?
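Concretely, with an arbitrary invertible 2x2 matrix in NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Multiplying a matrix by its inverse recovers the identity matrix:
# ones on the diagonal, zeros everywhere else.
identity = A @ np.linalg.inv(A)
print(np.round(identity))
# [[1. 0.]
#  [0. 1.]]
```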

"Why is any of this interesting," one might ask? Division is interesting because it would allow vector averaging to be implemented in neural networks, which is hypothesized as the mechanism for integrating the output of many smaller networks into a single output, such as an action. Multiplication and division together, I think, are also requirements for more sophisticated learning methods, like Bayesian updating. If this is true, I could draw a direct line of reasoning between Bickle and Hawkins. Mix that in with lessons from Hofstadter and Baum... maybe there's something interesting and new.

In re-reading this post, I realize that I'm focusing solely on neurons doing calculations, and I haven't thought much about their dual-role as memory storage mechanisms... the two roles are deeply intertwined, so I need to make sure I don't artificially separate them.

Some additional comments on Matt's response to my original cash for clunkers complaints:

From Matt:
"This issue, while interesting, is, as you say, a drop in the bucket compared to the changes that would occur through retooling auto factories, research and development, and the private and public reorganization of health care that is imminent."

Matt,

I think the cash for clunkers issue is interesting as a small test case of central government's ability to effect changes it deems desirable. This is especially important considering the imminent public health reorganization, as nationalizing health care will be an incredibly sensitive, complex, and delicate undertaking. What degree of confidence should we have that it will be well-executed, efficient, and non-wasteful? If the cash for clunkers "drop" is a sample of what's in the rest of that "bucket," we can expect "poor results."

To reiterate from my response comment: I'm reacting to this issue because it is small enough and blatant enough to be understandable, whereas the other bailouts and wars are so huge as to be incomprehensible in scope. This is small enough for everyone to understand.