Wednesday, August 5, 2009

Arithmetic operations in neurons

Bickle's "negative echo" mechanism provides a neurological basis for subtraction, and we already know how neurons do addition. What about the other operators? Can we find evidence of multiplication and division in neural systems? If multiplication were discovered, we might hypothesize the existence of a "reciprocal echo" analogous to the negative echo, and this would satisfy the definition of division, since dividing by x is the same as multiplying by 1/x (i.e., x^-1).
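To make the analogy concrete, here is a minimal sketch in Python. The function names (`negative_echo`, `reciprocal_echo`) are my own illustrative labels, not terms from Bickle; the point is just that each inverse reduces one operation to another.

```python
# Illustrative names only -- not Bickle's terminology.

def negative_echo(x):
    return -x          # additive inverse: x + negative_echo(x) == 0

def reciprocal_echo(x):
    return 1.0 / x     # multiplicative inverse: x * reciprocal_echo(x) == 1

# Subtraction reduces to addition of the negative echo:
assert 7 + negative_echo(3) == 7 - 3

# Division reduces to multiplication by the reciprocal echo:
assert 8 * reciprocal_echo(4) == 8 / 4
```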

More generally: the negative of a number is its "additive inverse," the reciprocal of a number is its "multiplicative inverse." A new question then arises: can Bickle's "negative echo" be re-interpreted more generally as just the inverse of the original stimulus? That would take care of division... assuming multiplication is already taken care of. I hope the use of the word "inverse" in both cases by mathematicians is more deeply seated than just an arbitrary convention.

How then might neurons implement multiplication? Maybe through a cascade effect: if an activated neuron causes many other neurons to become activated, that seems pretty similar to multiplication.

Another question would be whether neurons follow the same matrix multiplication rules as our artificial math. That is to say: in order to multiply two vectors in matrix algebra, their inner dimensions must match. Thus, a 1x2 row vector must be transposed into a 2x1 column vector if you want to multiply it with another 1x2 row vector. (See the Wikipedia article on matrix multiplication if this summary is insufficient.) Is the concept of transposition relevant for neurons? What does it mean to transpose a vector when the vector's dimensions represent the degree to which an object possesses a property?
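The inner-dimension rule is easy to demonstrate with NumPy: two row vectors cannot be multiplied directly, but transposing one of them makes the shapes conform and yields the familiar inner product.

```python
import numpy as np

row_a = np.array([[1.0, 2.0]])   # shape (1, 2)
row_b = np.array([[3.0, 4.0]])   # shape (1, 2)

# row_a @ row_b would fail: inner dimensions (2 and 1) don't match.
# Transposing row_b into a (2, 1) column vector makes them conform:
inner = row_a @ row_b.T          # (1,2) @ (2,1) -> (1,1)

assert inner.shape == (1, 1)
assert inner[0, 0] == 1 * 3 + 2 * 4   # the inner product, 11
```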

Ahhh, and what place does the inverse of a matrix have in this vector-space theory of cognition? An inverse matrix is one such that when it is multiplied by the original matrix, you get the identity matrix (ones on the diagonal, zeros everywhere else). I asked earlier if "inversion" was the general mechanism that could be abstracted from Bickle's "negative echo" phenomenon; is the same principle in operation here? I note that I slipped from thinking of solely row vectors to matrices in general... is this justifiable in biological neurons?
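The defining property of the matrix inverse can be checked directly in NumPy, and it mirrors the two inverses above: just as x + (-x) = 0 and x * (1/x) = 1, a matrix times its inverse gives the identity.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

# A times its inverse yields the identity matrix:
# ones on the diagonal, zeros everywhere else.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```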

"Why is any of this interesting?" one might ask. Division is interesting because it would allow vector averaging to be implemented in neural networks, which is hypothesized as the mechanism for integrating the output of many smaller networks into a single output, such as an action. Multiplication and division together, I think, are also requirements for more sophisticated learning methods, like Bayesian updating. If this is true, I could draw a direct line of reasoning between Bickle and Hawkins. Mix that in with lessons from Hofstadter and Baum... maybe there's something interesting and new.
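Both claims can be sketched in a few lines: vector averaging needs addition plus division by a count, and Bayesian updating needs multiplication (prior times likelihood) plus division (normalization). The numbers below are made up purely for illustration.

```python
import numpy as np

# Vector averaging: integrating several networks' output vectors
# into one requires summation and division by the count.
outputs = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]])
average = outputs.sum(axis=0) / len(outputs)
assert np.allclose(average, [2 / 3, 2 / 3])

# Bayesian updating: posterior = prior * likelihood, divided by the
# total evidence -- so it needs both multiplication and division.
prior = np.array([0.5, 0.5])            # two hypotheses, equally likely
likelihood = np.array([0.9, 0.3])       # probability of the data under each
unnormalized = prior * likelihood
posterior = unnormalized / unnormalized.sum()

assert np.isclose(posterior.sum(), 1.0)
assert np.isclose(posterior[0], 0.75)   # 0.45 / (0.45 + 0.15)
```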

In re-reading this post, I realize that I'm focusing solely on neurons doing calculations, and I haven't thought much about their dual-role as memory storage mechanisms... the two roles are deeply intertwined, so I need to make sure I don't artificially separate them.
