
Wednesday, August 5, 2009

Questions for Dr. Minai and Dr. Bickle

I'm organizing my thoughts on what I'd like to ask my professors; here they are:

Regarding the basics of neural implementation of mathematics:


Is there evidence at the cell level of operations other than vector subtraction being performed in neurons?

What operations, via what cellular mechanisms?

Can the "negative echo" phenomenon observed in saccades be generalized as the inverse of the input, rather than simply the negative? (eg: This would lay the ground work for division through reciprocal multiplication; which would allow for a cellular mechanism of averaging inputs. )

Is "inverse" validly the same concept in addition and multiplication, just applied to different operators? (eg: are negatives and reciprocals fundamentally related by the concept of "inverse"?)

Do the concepts of inversion and transposition have relevance at the cell level?

Are neural representations always row or column vectors, or does the concept of a matrix or a tensor have relevance to neural representations?

Are claims of Bayesian learning trees being implemented in neurons substantiated and/or plausible?


Regarding implementation of vector subtraction to do analogy-making in a simulation:


Suppose that each time we move from point to point in vector-space, both the inverse of the last point and the directional vector for getting there are stored in memory.

When the point and the direction are stored, they are given a high activation value, and this value decays each time we move to a new point.

At each step, all of the previous points and directions are candidates for use in the next action. The next action is determined stochastically by randomly choosing from the contents of the memory, weighted by each entry's current activation value.

New target points might come into the system (as if via sensory mechanisms), and they would be given an activation weight just like the previously visited points. Since they will come in with a high activation while all other weights are decaying, they will most likely (but not definitely!) be used in the next step.
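To keep myself honest about the bookkeeping, here's a minimal Python sketch of the memory scheme in the last few paragraphs. All of the names and numbers (VectorMemory, DECAY_RATE, the maximum activation of 1.0) are placeholders I'm inventing for illustration, and multiplicative decay is just one assumption about how the fading could work.

```python
import random
from dataclasses import dataclass

import numpy as np

MAX_ACTIVATION = 1.0   # assumed ceiling for activation values
DECAY_RATE = 0.9       # assumed multiplicative decay per step


@dataclass
class MemoryEntry:
    kind: str              # "point" or "direction"
    vector: np.ndarray
    activation: float = MAX_ACTIVATION


class VectorMemory:
    def __init__(self):
        self.entries = []

    def store(self, kind, vector):
        # New entries start at maximum activation; sensory points enter
        # the same way as previously visited points.
        self.entries.append(MemoryEntry(kind, np.asarray(vector, dtype=float)))

    def decay(self):
        # Every stored point and direction loses activation each step.
        for entry in self.entries:
            entry.activation *= DECAY_RATE

    def choose(self):
        # Stochastic selection weighted by current activation: a freshly
        # sensed point will usually win, but not always.
        weights = [entry.activation for entry in self.entries]
        return random.choices(self.entries, weights=weights, k=1)[0]


memory = VectorMemory()
memory.store("point", [2.0, 5.0])   # as if delivered by a sensory mechanism
memory.decay()
selected = memory.choose()
```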

If a point (whether newly received or previously visited) is selected via the stochastic process, then the inverse of the current point will be added to the selected point to find the directional vector between them; that vector will be stored in memory, and the focus will shift to the selected point along it.

If a previously used directional vector is selected via the stochastic process, the system will move from its current point using the previously determined direction, and it will find a new point and store it in memory.
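Reading "inverse" as the additive inverse (the negative), these two cases are just vector arithmetic. A sketch, with made-up function names:

```python
import numpy as np


def move_to_point(current, target):
    # Adding the inverse of the current point to the target point yields the
    # directional vector between them; the focus shifts to the target.
    direction = target + (-current)   # equivalently, target - current
    return target, direction          # new focus, direction to store in memory


def move_along_direction(current, direction):
    # Reusing a stored direction from the current focus lands on a new point,
    # which would then be stored in memory.
    return current + direction


focus = np.array([1.0, 2.0])
focus, d = move_to_point(focus, np.array([4.0, 6.0]))   # d is [3.0, 4.0]
focus = move_along_direction(focus, d)                   # focus is now [7.0, 10.0]
```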

Each time a point is visited, or a directional vector is used, its activation will increase to its maximum.

The more frequently traveled or visited a vector is, the more slowly its activation will decay and the higher the floor of its activation will become (i.e., long-term potentiation).

Points observed by the sensory process will receive extra activation and obtain LTP if they have been previously visited.

Highly activated vectors might have some spontaneous oscillatory behavior; their activation might spike back to a high level without external input (e.g., the "return to origin" attentional vector). This would keep the system from straying too far into irrelevant territory.
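Here's one way the activation dynamics in the last few paragraphs could be parameterized. The specific numbers (decay rates, floors, the 5% reactivation chance) are pure guesses on my part; the point is only that reuse refreshes activation, repeated use slows decay and raises a floor (my crude stand-in for LTP), sensory re-observation counts extra, and strongly potentiated traces occasionally spike back on their own.

```python
import random
from dataclasses import dataclass

MAX_ACTIVATION = 1.0


@dataclass
class Trace:
    activation: float = MAX_ACTIVATION
    uses: int = 0

    def use(self):
        # Visiting a point or traveling a direction refreshes it fully.
        self.uses += 1
        self.activation = MAX_ACTIVATION

    def reobserve(self):
        # A sensory observation of an already-visited point counts extra
        # toward potentiation.
        self.uses += 2
        self.activation = MAX_ACTIVATION

    def decay(self):
        # More uses -> slower decay and a higher activation floor (crude LTP).
        rate = 0.9 + 0.09 * (1.0 - 1.0 / (1 + self.uses))
        floor = min(0.5, 0.05 * self.uses)
        self.activation = max(floor, self.activation * rate)
        # Heavily potentiated traces may spontaneously reactivate, the way a
        # "return to origin" attentional vector might keep the walk anchored.
        if self.uses >= 5 and random.random() < 0.05:
            self.activation = MAX_ACTIVATION
```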


Motivation for this approach:
If the relationships between concepts are stored abstractly (as vectors), then they should be applicable to other concepts (by acting on points). This could lead to the discovery of concepts (points) that have not yet been observed by the sensory mechanisms. A concept discovered this way amounts to a hypothesis: if observation later confirms that it exists, the hypothesis is confirmed (and if it never turns up, falsified). Either way, the concept has proven relevant and should be preserved for future use via LTP. The stochastic process simulates the parallelism of biological neural networks; I'm unaware of a better way of programming this.
