Stochastic Differential Equation Model for Cerebellar Granule Cell Activity (Saarinen et al)
Along with my efforts to read and think about current research in neuroscience and cognitive psychology, I'm still reading books and novels, as is my norm. I just finished "Next" by Michael Crichton - an easy read, but provocative in its treatment of themes related to the implications of modern biological technologies. After I finished it, I decided to pull out "Gödel, Escher, Bach: An Eternal Golden Braid" by Douglas Hofstadter. I first read it in my first semester of graduate school and found it a very inspiring and challenging text on consciousness. I'm still only in the first few chapters of my re-read, but I'm thinking a lot about one of the major points of the book, related to the work of Gödel showing that formal systems can't be both complete and consistent (IANAM - I'm trusting D.H. and the Wikipedia page on Gödel's Incompleteness Theorems - they seem to match in their interpretations of Gödel's work).
So, again - I Am Not A Mathematician - but it seems that the key to interpreting the implications of Gödel's aforementioned work is that it applies to formal systems that attempt to provide a perfect list of starting information and processing rules that do not contradict one another (hence, complete and consistent). There are a variety of systems - mathematics perhaps the most pure - that we think of as consistent and complete in our superficial, everyday analysis, but that are not ... for example, as given by Hofstadter, language and the liar's paradox ("This statement is false."). Hofstadter gives these paradoxes a new name: "Strange Loops." I hope to write more on the liar's paradox and my thoughts on its relationship to epistemological development - for now, though, I want to stay focused on the implications of Gödel's proof for attempts to model cognition.
In the article linked at the beginning of this post, the authors propose that stochastic - random - modeling of neural cell activity provides a more accurate representation of the in vitro behavior of said cells. The classical model for neural activity - the Hodgkin-Huxley formalism - is deterministic in the sense that it posits that voltage-dependent ion channels (responsible for the 'ON' / 'OFF' status of the neuron) behave predictably. It's been known for some time that they are not perfectly predictable, but models for neural networks have long been built on these easier-to-deal-with deterministic formalisms. The authors of this paper point out that improvements can be had not only in modeling accuracy but also in computational speed when a particular method for representing the random behavior of these voltage-dependent ion channels is built into the larger-scale models. Good stuff for those limiting their research to networks of neurons, but how can this be relevant for others who want to bridge the chasm between brain and mind?
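To make the deterministic/stochastic contrast concrete, here's a minimal sketch of a single Hodgkin-Huxley-style gating variable, with an optional channel-noise term added via a simple Euler-Maruyama step. To be clear: this is my own generic illustration, not Saarinen et al's actual model, and the rate constants, channel count, and noise form are placeholder assumptions on my part.

```python
import math
import random

def simulate_gate(n0=0.3, alpha=0.1, beta=0.125, n_channels=None,
                  dt=0.01, steps=1000, seed=0):
    """Simulate a Hodgkin-Huxley-style gating variable n (fraction of open channels).

    Deterministic ODE:  dn/dt = alpha*(1 - n) - beta*n
    If n_channels is given, a diffusion term scaled by 1/sqrt(N) is added,
    so the channel noise shrinks as the number of channels N grows
    (a common approximation; parameter values here are illustrative only).
    """
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        # Deterministic drift toward the steady state alpha / (alpha + beta)
        drift = alpha * (1.0 - n) - beta * n
        n += drift * dt
        if n_channels is not None:
            # Euler-Maruyama diffusion step approximating channel noise
            sigma = math.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / n_channels)
            n += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n = min(max(n, 0.0), 1.0)  # an open fraction must stay in [0, 1]
    return n

# Deterministic run relaxes smoothly toward alpha/(alpha+beta) ~ 0.44
det = simulate_gate()
# Stochastic runs fluctuate around that value; fewer channels, bigger fluctuations
sto = simulate_gate(n_channels=100)
```

The point of the sketch is simply that the two regimes share the same drift; the stochastic version layers random channel flicker on top, which is (very roughly) the kind of refinement the authors argue improves realism.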
I think the link is provided, with due credit to Hofstadter, by Gödel and now Saarinen (et al). Or perhaps with credit to Gödel and now Saarinen (et al) by way of Hofstadter? As I proceed with actually writing about some of the research I'm familiar with, and continue to read and write about more current work, I believe we'll find that the pursuit to formalize conceptual change continues to rest on deterministic underpinnings, in which the state of one element of the model (E1), in light of its relationship with another element (E2), determines the state of that other element. Deterministic modeling makes sense, given that there is a significant degree of predictability in the behavior of neurons, and even (though to a lesser degree, I'd think) in the behavior of the mind. However, I suggest that the stochastic modeling found beneficial by Saarinen et al will also benefit efforts to construct both static and dynamic models of cognition - the models that will ultimately help us to understand the process of learning (constructing and retaining knowledge). [Those familiar with my old blog on the concept of evolution may have seen this before - I'll be bringing some of those ideas and posts over to this blog in the near future.]
Finally, I do think that all of this can relate back to the classroom and to teaching (which I am beginning to define as 'the process of manipulating environment and available resources to promote student learning'). Many times in the classroom, teachers are confronted with random and seemingly meaningless student questions; a stochastic model for cognition, based on the stochastic behavior of the functional units of the brain, could incorporate the random activation of prior knowledge that is (from the expert perspective) unrelated to the concept or skill at hand. As these stochastic cognitive models (and their implications) are developed and studied, I suspect we'll begin to see meaning in the seemingly meaningless, as perhaps these "random" student ideas and/or questions are critical to the brain-mind's learning process. Perhaps the brain-mind has, in fact, evolved in a way that allows it to transcend the limitations that Gödel proved inherent to formal systems. In other words, and appropriately paradoxically so, it might be true that our very capacity for logical thought rests on the fundamental informality of the workings of the brain.