Upon Dr. Bickle's recommendation, I'm reading Paul Churchland's 1989 book A Neurocomputational Perspective, and lo! more gems! One of the questions I prepared for Dr. Bickle (but haven't yet posed to him) is whether matrix multiplication could have a neurological implementation, and Churchland gives it to me on p 99 of his book. It's so simple!
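The idea, as I understand it, is that a layer of neurons each computing a weighted sum of the same set of inputs just is a matrix-vector multiplication: rows are output cells, columns are input axons, entries are synaptic weights. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical synaptic weights: 3 output neurons, each receiving
# from 4 input axons (rows = output cells, columns = inputs).
W = np.array([[0.2, -0.5, 0.1, 0.7],
              [0.9, 0.3, -0.2, 0.0],
              [-0.4, 0.6, 0.5, -0.1]])

# Input vector: each component is one input neuron's activity level.
x = np.array([1.0, 0.5, -0.3, 0.8])

# Each output neuron sums its weighted inputs; collectively that
# is exactly the matrix-vector product W @ x.
per_neuron_sums = np.array([sum(W[i, j] * x[j] for j in range(4))
                            for i in range(3)])
y = W @ x
assert np.allclose(y, per_neuron_sums)
print(y)  # [ 0.48  1.11 -0.33]
```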
One question I have on Churchland's interpretation: he notes on p 99 that the values of the input vector are coded by the relative change in the frequency of the neuron's firing, as compared with its resting baseline. He later brings up the problematic existence of dedicated excitatory and inhibitory synapses (p 184), and notes that they can't change sign the way artificial neural network synapse weights can. The question: wouldn't a reduction in the relative firing rate of an inhibitory neuron be the equivalent of an increase in the rate of an excitatory neuron? Does that mean that they are interchangeable?
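To make the question concrete, here's a toy numerical sketch (my own construction, not Churchland's): if a value is coded as the deviation of firing rate from baseline, then a below-baseline rate arriving at an inhibitory synapse delivers the same net drive as an above-baseline rate arriving at an excitatory synapse of equal strength.

```python
# Toy rate-coding illustration (my construction): code each input as
# its firing rate minus a resting baseline, weighted at the synapse.
baseline = 10.0  # spikes/s, arbitrary

# Excitatory synapse (weight +0.5), firing 4 spikes/s ABOVE baseline:
excitatory_drive = (+0.5) * (14.0 - baseline)

# Inhibitory synapse (weight -0.5), firing 4 spikes/s BELOW baseline:
inhibitory_drive = (-0.5) * (6.0 - baseline)

# Both deliver the same net drive to the postsynaptic cell.
assert excitatory_drive == inhibitory_drive == 2.0
```

So at least in this simplistic rate-coding picture, the two cases are arithmetically interchangeable; whether real circuits exploit that equivalence is exactly the question for Dr. Bickle.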
In a different vein, I also thought of what might be a neat experiment to help probe the dimensionality of language and what kinds of matrix transformations are going on when people use language. It's pretty simple: just ask subjects to brainstorm single-syllable words and record them in the order they occur (while measuring the time between each word). If the dimensions of each word's concept could be non-arbitrarily determined, one could do a relatively straightforward analysis of what sorts of transformations need to happen to connect a word to the previously given word (or set of words).
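Supposing we had such concept vectors, the simplest version of the analysis might look like this (the words and their coordinates below are arbitrary stand-ins; assigning real, non-arbitrary dimensions is exactly the open problem):

```python
import numpy as np

# Hypothetical concept vectors for a brainstormed word sequence.
# The three dimensions are placeholders, not real semantic axes.
vectors = {
    "cat": np.array([0.9, 0.1, 0.2]),
    "hat": np.array([0.2, 0.8, 0.2]),
    "sun": np.array([0.1, 0.3, 0.9]),
    "run": np.array([0.7, 0.2, 0.6]),
}
sequence = ["cat", "hat", "sun", "run"]

# The crudest "transformation" between successive words is the
# difference vector; its norm is a rough step size in concept space,
# which could then be compared against the inter-word response times.
for prev, nxt in zip(sequence, sequence[1:]):
    step = vectors[nxt] - vectors[prev]
    print(f"{prev} -> {nxt}: step {step}, size {np.linalg.norm(step):.2f}")
```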
Why use monosyllabic words? Partly to reduce the complication, partly because concepts that are more important to survival tend to be represented in smaller words, so presumably you'd get a sample of more deeply salient concepts, and the connections between them could be explored.
I did a small-scale study (n=2; myself and Rick), and the results were interesting. There were quite a few different ways the concepts seemed to get transformed, at the semantic, phonic, and letter levels.
A problem in doing the study for real would be deciding along which dimensions any given word falls; there might not be a non-arbitrary way to do so, since there's no guarantee that each word is represented identically in each person's language space. A different approach might be to map the transformations between concepts and try to impute the space in which they exist from that map... is that a sensible idea? I think that would involve some fancy math that I'm presently unaware of.
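One candidate for that "fancy math", if the transformation map can be boiled down to pairwise dissimilarities between words, might be multidimensional scaling, which recovers a coordinate space from distances alone. A minimal classical-MDS sketch (to show it works, I generate the distances from hidden 2-D points and check that the recovered configuration reproduces them):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Recover point coordinates from a matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dims]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Pretend these are four words' true (hidden) positions in concept space.
true_points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.5]])
D = np.linalg.norm(true_points[:, None] - true_points[None, :], axis=-1)

# MDS sees only the distances, yet recovers a configuration whose
# pairwise distances match the originals (up to rotation/reflection).
coords = classical_mds(D, dims=2)
recovered = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
assert np.allclose(recovered, D, atol=1e-8)
```

Whether inter-word dissimilarities can be measured non-arbitrarily in the first place is, of course, the same open problem as before.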