Cholinergic Neuromodulation Changes Phase Response Curve Shape and Type in Cortical Pyramidal Neurons (Stiefel, Gutkin, and Sejnowski)
One of the most important and fundamental findings of neuroscience is that neuron output is essentially digital: all-or-nothing, on or off, active or resting. A neuron either fires an action potential or it doesn't; there's no such thing as a partial action potential. This digital behavior, along with the differential elicitation of activity in different brain regions by different stimuli, leads to the prevailing hypothesis that the brain is a modular biological computer that processes information through networks of massively interconnected neurons. A major challenge in describing the link between brain tissue behavior and the needs of the animal in the "real world", then, is that many stimuli are analog, varying by small degrees across a sometimes wide range of values: sound volume, light brightness, pressure on skin, etc. To represent (and process and react to) analog phenomena, neurons alter their firing rate - often, a strong stimulus elicits rapid firing of action potentials over a given period of time, whereas a weaker stimulus elicits slower firing. It's also important for the brain to be able to alter these "spike trains" as new stimuli come in and as they are further processed - the way the timing of a neuron's spikes shifts in response to an incoming input, depending on when in the firing cycle that input arrives, is described by the neuron's "phase response curve". The article above demonstrates that certain neurons in the cortex change their phase response curve when exposed to a very prevalent neurotransmitter, acetylcholine. Demonstrating this sensitivity implies that neurotransmitters may be involved in more than just the elicitation of an individual action potential; they may, in fact, influence the behavior of entire neural networks, and therefore exert influence over large-scale cognitive function. This is an incredible finding with significant implications for our models of brain states, mind states, and the relationship between the two.
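To make the "phase response curve" idea a bit more concrete, here is a minimal sketch of how a PRC can be measured in a model neuron. This is my own illustration using a generic leaky integrate-and-fire cell, not the detailed pyramidal-neuron models or parameters from the paper; all numbers are arbitrary.

```python
# Minimal PRC sketch (illustrative only): drive a leaky integrate-and-fire neuron so
# it fires periodically, inject a brief extra current pulse at different phases of
# the firing cycle, and measure how much the next spike is advanced.
import numpy as np

TAU, V_TH, V_RESET, DT = 20.0, 1.0, 0.0, 0.01   # membrane time constant (ms), threshold, reset, time step
I_DRIVE = 1.2                                   # constant drive above threshold -> periodic firing

def time_to_spike(pulse_time=None, pulse_amp=0.1, pulse_dur=1.0):
    """Time (ms) from reset to the next spike, with an optional brief current pulse."""
    v, t = V_RESET, 0.0
    while v < V_TH:
        i = I_DRIVE
        if pulse_time is not None and pulse_time <= t < pulse_time + pulse_dur:
            i += pulse_amp
        v += DT * (i - v) / TAU                 # leaky integration toward the drive
        t += DT
    return t

T0 = time_to_spike()                            # unperturbed firing period
phases = np.linspace(0.05, 0.95, 19)            # where in the cycle the pulse arrives
prc = [(T0 - time_to_spike(pulse_time=p * T0)) / T0 for p in phases]

for p, advance in zip(phases, prc):
    print(f"pulse at phase {p:.2f} -> normalized spike advance {advance:+.3f}")
```

As I read it, the finding in the paper is that acetylcholine changes the neuron's intrinsic properties enough to change the shape - and even the type - of this curve, which in turn changes how the cell can synchronize with the rest of its network.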
Exploring relationships among neuroscience, cognitive psychology, and learning. Sprinkled with education policy, reform, and leadership, STEM education research, and technology.
Friday, December 19, 2008
Saturday, December 13, 2008
Standards, teachers, and educational concentration gradients
I've been thinking a lot lately about the role of standards in education. In my opinion, my school is experiencing some growing pains with regard to our standards-based approach to the curriculum and grading system. Some of these pains, I think, are related to the complexity of the standards-based approach as well as the incremental fine-tuning that's happened over the years since the standards philosophy was first implemented. I've found myself stepping back a bit and trying to get a sense for the "big picture": what was the intent of the standards movement, how has it been implemented, and what has changed - if anything - as a result? Perhaps some reflection on these questions will help us to find the next step, and offer some guidance to other schools that might be in the process of adopting the standards-based framework.
The overall intent of the standards movement was to reform the educational system so that it would produce better-learned students. The audacity of the goal was commendable, but the many-pronged nature of systemic reform created confusion among constituents in the system itself. Perhaps most importantly - and astonishingly - the stakeholders with the most direct influence on student learning - teachers - received little guidance, training, or support. In essence, teachers were criticized for lackluster results due to didactic pedagogy, yet the policy makers were themselves didactic in their approach to teacher professional development (if and when it even occurred). Unfortunately, this trend seems to hold true at my school - subject area standards were articulated and expectations about student learning were raised. However, even though faculty were involved in articulating the standards and setting the expectations, I learned in a faculty meeting this year that not one professional day since that time has been spent on actually helping teachers to improve their practice. We've spent a lot of time making incremental changes to the standards-based policies, but little to none on training for implementing them.
So, what has changed? It might be true that we no longer graduate students who have not met expectations on our standards - but it could also be true that our stated expectations differ from those actually imposed on students (vis-a-vis late work, make-up work, re-testing with smaller sets of questions, isolating rather than integrating subject area standards, etc.). At the same faculty meeting where we noted the lack of professional time spent on developing better pedagogical methods, we also acknowledged that most of our assignments and teaching methods have not changed since adopting the standards-based framework. On a positive note, the standards framework has resulted in a more detailed articulation and more consistent implementation of the curriculum. Furthermore, the framework ensures that students must, at some point in time, demonstrate some greater-than-zero knowledge of all of the standards.
I think that these are significant results for a complex policy implemented almost exclusively from the top-down. Given that this is a systemic reform movement, and given the goal of continued improvement, I think the next step is to build capacity from the bottom-up. For us, it's critical to start spending some time working with teachers to evaluate and improve pedagogical methods. For schools just starting this process, I'd recommend implementing the standards framework from the top-down and bottom-up right from the beginning. If you must choose due to limited time & resources, I would recommend starting from the bottom-up. Systemic reform is bound to fail if the process isn't integral, and being didactic from the top-down sends a message that is loud and clear: do as I say, not as I do. Building consensus between administration and faculty is critical for the top-down part of the reform, and building capacity among students and faculty is critical for the bottom-up part. The keystones in the systemic reform are obvious: teachers.
In my role in my school, I need to be involved in both the top-down and bottom-up aspects of the process. I've had plenty of involvement in the top-down part, but in this reflective process I've realized that I need to be more involved in the bottom-up part, too. In my most recent post, I proposed the idea that a source of power for educational systems to effect change in society stems from the concentration gradient of studies and effort that we create across the school/classroom door. Part of the consensus-building process should acknowledge the perpetual uphill battle that teachers face, and build the shared vision that the uphill journey is worth the effort. Part of the capacity-building process should ensure, as much as possible, that teachers have the energy to put in the effort, and that they are trained to become more and more efficient uphill climbers. Although a clear implication of this model for educational systems is that there will never be a perfect reform that will "fix schools", and that any reform that is implemented will always be a work in progress, it brings into clear focus the critical role of the teacher in creating the power of the educational system.
Labels:
education
Wednesday, December 10, 2008
Standards-based grading and the role of effort
I've only recently discovered the blog What It's Like on the Inside (check it out, it's great reading for anyone interested in standards-based education). A December 6 post about standards-based grading and effort prompted me to leave my first comment - it helped that I happened to have the time and energy at the moment, too! In writing this comment, I really liked the idea that hit me at the end about the "concentration gradient" of studies and effort between schools and society as a source of power for educational systems to effect societal change (much akin to the findings that electrochemical gradients are a source of energy for cells). I figured I'd publish my thoughts here so that I am more likely to build on them in the future.
So, here's my comment on "There is No Spoon". Thanks, "Science Goddess", for some very thought-provoking posts!
"I work at a school with a standards-based approach, grading and curriculum. I'm only in my second year on the job here, but "late work" and "make up work" are issues that really displease quite a number of our faculty. While I support, to some degree, the separation of effort and outcome, I do think that a complete divide between the two is extremist reductionism and a mistake; so much of the success that we've observed in others and experienced personally as adults comes not only from skill (and sometimes not at all), but almost always from hard work. It's frustrating to deal with teenagers who don't value effort, particularly when so much effort is required for good teaching.
I also think it's important to acknowledge that discipline - in this case, a lower grade - can (possibly) be a learning experience. You're right to point out that high school students aren't college students, but it's also right to point out that effort does matter and that a lack of effort does deserve negative consequences. We only allow students to meet expectations at the lowest level on their late / make-up work -- reserving the higher grades for students who do their work on time and with good quality. We're also making time this year to discuss our policies on effort - what we call "academic initiative" standards - to figure out what we can improve, with the goal of getting more students to meet with more success in their classes (on-time and the first time). We're tossing around ideas of linking effort to eligibility for honor roll, co-curricular participation, and academic support structures ... and maybe even making it a part of the summary grade calculation (yes, we grade each standard individually).
But the simple truth - as Roger points out above - is that we need more effort from more of our students to get the outcomes we desire of and for our students. It is, and should be, an uphill battle - the "concentration" of studies and effort in a school setting should be greater within that community than we find in the general public. Establishing such a "gradient" is the critical source of power that enables educational systems to disrupt the equilibrium state of these variables in society. So the question that I wonder about a lot is: how can I support my faculty in their never-ending uphill journey?"
Labels:
education
Tuesday, November 18, 2008
New educational resources from Society for Neuroscience
Recently in my email I received the first edition of an e-Newsletter from the Society for Neuroscience titled "Brain Awareness Headlines". This four-times-a-year publication will strive to provide information - news, resources, etc. - to those interested in neuroscience education. The newsletter seems to stem from SfN's flagship educational vehicle, "Brain Awareness Week", which I helped FA to participate in last year by speaking about the brain and the importance of neuroscience research at a school-wide assembly. To learn more about Brain Awareness Week and some of the educational outreach opportunities that the SfN provides, go to www.sfn.org/baw.
While "Brain Awareness Headlines" is itself a new resuorce, the first edition highlights "NERVE", another new educational resource, and one that is targeted specifically at the K-12 audience. NERVE is an acronym that stands for Neuroscience Education Resources Virtual Encycloportal (nice...), and it serves to organize by theme a variety of digital neuroscience education resources from all around the Internet. Some of the resources are "flat" printouts, while others are interactive and/or animated. The resources included in NERVE can be limited by the intended audience, which includes students at different grade levels as well as teachers.
---
As an aside, it recently occurred to me that I took my first undergraduate course in Neuroscience - Intro to Neuroscience at Brandeis with Dr. Eve Marder - at this time of year a full 10 years ago! In researching this article, I happened to discover that Dr. Marder is the President of the Society for Neuroscience this year -- congratulations! I'm sure I wasn't the best student in my class in Fall '98, but her presentation and discussion of the subject inspired my further study and solidified the brain as a central point of interest throughout my professional and educational development. And, of course - thanks for the chocolate before each test -- it's a tradition that I've passed on to many of my own students - along with the requisite neuroscientific justification ;-)
While "Brain Awareness Headlines" is itself a new resuorce, the first edition highlights "NERVE", another new educational resource, and one that is targeted specifically at the K-12 audience. NERVE is an acronym that stands for Neuroscience Education Resources Virtual Encycloportal (nice...), and it serves to organize by theme a variety of digital neuroscience education resources from all around the Internet. Some of the resources are "flat" printouts, while others are interactive and/or animated. The resources included in NERVE can be limited by the intended audience, which includes students at different grade levels as well as teachers.
---
As an aside, it recently occurred to me that I took my first undergraduate course in Neuroscience - Intro to Neuroscience at Brandeis with Dr. Eve Marder - at this time of year a full 10 years ago! In researching this article, I happened to discover that Dr. Marder is the President of the Society for Neuroscience this year -- congratulations! I'm sure I wasn't the best student in my class in Fall '98, but her presentation and discussion of the subject inspired my further study and solidified the brain as a central point of interest throughout my professional and educational development. And, of course - thanks for the chocolate before each test -- it's a tradition that I've passed on to many of my own students - along with the requisite neuroscientific justification ;-)
Labels:
brain,
education,
professional development
Wednesday, November 12, 2008
Neuro-Education Initiative at Johns Hopkins University
Johns Hopkins University recently announced their Neuro-Education Initiative, a program to develop the links between neuroscience and education. The program will involve a variety of opportunities to share current information, further research, and develop new ideas by establishing a formal collaboration between the Brain Science Institute and the School of Education. The Initiative began this past summer and continues this fall with a series of interdisciplinary lectures. Starting in 2009, the School of Education will offer a 15-credit graduate certificate in Mind, Brain, and Teaching. The certificate program will be coordinated by Mariale Hardiman, a leader from the Baltimore Public School system who has published extensively on improving educational models by employing findings from neuroscience research. Information on the certificate program appears limited at this point, so it may be safest to assume that the program is available only to those who can be in residence at JHU, and not yet through distance learning.
Labels:
professional development
Friday, October 24, 2008
Scale-free learning: the synapse
Synaptic Learning Rules and Sparse Coding in a Model Sensory System (Finelli et al)
The major impetus for the scale-free/fractal, modular, stochastic model for cognition presented in my last post is to investigate the link between the structure and function of the brain and (at least some of) the structure and function of the mind. Although it is not yet specified in the graphical models, my thought is that as we explore the deeper nestings of cognitive resources, we'll get closer and closer to clear links with discrete brain structures like neural networks or even individual neurons. I think it's important to show that learning is not just a mental event, but also a brain event; i.e., when learning occurs it is the result of physical and chemical changes in the brain. Although I'm not necessarily arguing that the mental changes that occur in learning are themselves physical and/or chemical, the modeling of learning at the cognitive scale involves changes in the relationships and patterns of activity among cognitive resources.
The article from Finelli et al linked above uses the locust olfactory system to demonstrate that learning occurs at the level of the synapse (a change in behavior at the synapse). Although the human brain is certainly much more complex than the locust brain, an interesting ability shared by both is processing sensory data in a way that involves fewer and fewer neuron activations as the data moves through higher and higher stages of processing. The authors found that the activity of a very small number of cells at the higher levels of odor processing was sufficient for odor recognition in the locust. Interestingly, the authors also suggest that exposure of the olfactory system to a wide variety of odors helps the system to process new odors more effectively.
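As a rough illustration of what "learning at the synapse" means - strictly my own toy sketch, not the synaptic learning rules modeled by Finelli et al - here is a Hebbian-style weight update in which repeated pairing of pre- and postsynaptic activity strengthens a connection, so the same input later drives the cell more strongly:

```python
# Toy Hebbian plasticity sketch (illustrative only): a single postsynaptic cell with
# many input synapses; weights grow when presynaptic activity coincides with a
# postsynaptic response, so repeated exposure to the same sparse input pattern
# changes the synapses' "behavior".
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 50
w = np.full(n_inputs, 0.1)                              # initial synaptic weights
pattern = (rng.random(n_inputs) < 0.2).astype(float)    # a sparse "odor": ~20% of inputs active
eta, decay = 0.05, 0.001                                # learning rate and slow weight decay

def response(inputs, weights):
    return float(np.dot(weights, inputs))               # simple linear readout of the cell's drive

before = response(pattern, w)
for _ in range(100):                                    # repeated presentations of the same odor
    r = response(pattern, w)
    w += eta * r * pattern - decay * w                  # Hebbian potentiation plus passive decay
    w = np.clip(w, 0.0, 1.0)                            # keep weights bounded
after = response(pattern, w)

print(f"response to the learned pattern before: {before:.2f}, after: {after:.2f}")
```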
I'm not sure that demonstrating learning at the synaptic level has direct practical application in the classroom or other traditional learning settings, beyond helping educators appreciate more deeply that so many different people *can* actually learn the same skills and concepts in similar ways. Still, I do think the implications of this research can improve educational experiences by helping us focus on how the brain processes information. In particular, I think it's interesting and important to note that early exposure to a wide variety of information helps the brain process new information more effectively later on. This suggests that our educational systems, particularly those targeted at young students, should strive to present very broad curricula - perhaps even at the expense of depth - so that later in life those learners can more deeply process the new and more complex information they encounter.
Friday, June 6, 2008
Graphical models for cognition
The following visuals should help to communicate my thoughts on cognitive models based on the coordination class model for concepts with added influence from neuroscience research on neuron and neural network characteristics.
Cognition has components...
Some components of cognition have smaller, nested components...
The components of cognition are linked...
Nested components of cognition are linked...
The activity of and relationships among components of cognition are not fully deterministic...
Your feedback is most welcome. I've touched on some of the ways I think this model (stochastic scale-free coordination class) can be supported by experience in the classroom, such as concepts within concepts and student questions that seem "random". What other aspects of cognition and learning do you think this model can support? What aspects of cognition and learning have only limited, if any, representation within this model?
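For readers who think in code, here is one entirely illustrative way to represent the structure the figures describe - components, nested sub-components, links among them, and activation that spreads stochastically rather than deterministically. The component names are placeholders loosely borrowed from coordination class terminology ("readout strategy", "causal net", "p-prim"); nothing here is a formal model.

```python
# A toy representation (my own sketch, not a formal model) of the structure in the
# figures: cognitive components that may nest smaller components, links among
# components, and activation that spreads probabilistically.
import random

class Component:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []     # nested sub-components
        self.links = []                    # links to other components
        self.active = False

def spread_activation(component, p_spread=0.6, depth=0):
    """Activate a component, then probabilistically activate its links and children."""
    component.active = True
    print("  " * depth + f"activated {component.name}")
    for neighbor in component.links + component.children:
        if not neighbor.active and random.random() < p_spread:
            spread_activation(neighbor, p_spread, depth + 1)

# Build a small nested, linked structure.
readout = Component("readout strategy")
inference_a = Component("causal net: friction", [Component("p-prim: resistance")])
inference_b = Component("causal net: force", [Component("p-prim: pushing harder")])
concept = Component("concept: force", [readout, inference_a, inference_b])
readout.links.append(inference_a)
inference_a.links.append(inference_b)

random.seed(1)
spread_activation(concept)    # different seeds give different activation patterns
```

Running it with different seeds produces different activation patterns over the same structure, which is the sense in which the model's behavior is "not fully deterministic" - and, to my mind, one way to think about student responses that seem "random".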
Sunday, June 1, 2008
Teaching machines to predict brain states
The semantic web gets a boost from functional MRIs (Lee @ Ars technica)
I'm recommending that you take a moment to read the interesting post linked above over at Ars technica on an article recently published in Science. The post discusses the research in the article, which involved training a neural network to create a sense of meaning for a variety of nouns and verbs, and then linking the network's meaning for those words with fMRI brain scans of humans thinking about those words. The network was then challenged to predict the fMRI patterns of brain activation associated with words it had developed meanings for but hadn't already linked with existing fMRI scans - it performed quite well, and continued to perform better than chance even as the prediction task became less closely tied to the original word set.
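For anyone curious what that kind of prediction pipeline looks like in the abstract, here is a tiny synthetic sketch. It is not the paper's code or data: the feature vectors, voxel counts, and the plain least-squares model are all stand-ins for whatever the researchers actually used, and the hold-out scoring only loosely echoes the leave-two-out evaluation described in coverage of the study.

```python
# Synthetic sketch of "word features -> predicted voxel pattern": fit a linear map on
# training words, predict the activation pattern of held-out words, and check whether
# each prediction matches the correct true pattern. Everything here is made up.
import numpy as np

rng = np.random.default_rng(42)
n_words, n_features, n_voxels = 60, 25, 500

word_features = rng.random((n_words, n_features))            # e.g., stand-in co-occurrence features
true_weights = rng.normal(size=(n_features, n_voxels))       # hidden feature-to-voxel mapping
voxel_patterns = word_features @ true_weights + 0.1 * rng.normal(size=(n_words, n_voxels))

train, test = slice(0, 58), slice(58, 60)                    # hold out two words
# Least-squares fit of feature -> voxel weights on the training words.
learned_w, *_ = np.linalg.lstsq(word_features[train], voxel_patterns[train], rcond=None)
predicted = word_features[test] @ learned_w

def correlate(a, b):
    return np.corrcoef(a, b)[0, 1]

# Score by matching each held-out prediction to the nearer of the two true patterns.
for i in range(2):
    scores = [correlate(predicted[i], voxel_patterns[test][j]) for j in range(2)]
    print(f"held-out word {i}: best match is true pattern {int(np.argmax(scores))}")
```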
Labels:
brain
Friday, May 30, 2008
Links! PLOS one articles of interest in May 2008
The following articles caught my eye this month in PLOS one ... they're on my reading list for possible inclusion here on BME. In the meantime, I thought that readers might enjoy checking some of them out.
Orientation Sensitivity at Different Stages of Object Processing: Evidence from Repetition Priming and Naming (Harris et al)
Enhancement of Both Long-Term Depression Induction and Optokinetic Response Adaptation in Mice Lacking Delphilin (Takeuchi et al)
Brain Networks for Integrative Rhythm Formation (Thaut et al)
Linking Social and Vocal Brains: Could Social Segregation Prevent a Proper Development of a Central Auditory Area in a Female Songbird? (Cousillas et al)
Imagine Jane and Identify John: Face Identity Aftereffects Induced by Imagined Faces (Ryu et al)
A Potential Neural Substrate for Processing Functional Classes of Complex Acoustic Signals (George et al)
Comparing the Processing of Music and Language Meaning Using EEG and fMRI Provides Evidence for Similar and Distinct Neural Representations (Steinbeis and Koelsch)
Visual Learning in Multiple-Object Tracking (Makovski et al)
Time Course of the Involvement of the Right Anterior Superior Temporal Gyrus and the Right Fronto-Parietal Operculum in Emotional Prosody Perception (Hoekert et al)
On How Network Architecture Determines the Dominant Patterns of Spontaneous Neural Activity (Galán)
The Encoding of Temporally Irregular and Regular Visual Patterns in the Human Brain (Zeki et al)
Citral Sensing by Transient Receptor Potential Channels in Dorsal Root Ganglion Neurons (Stotz et al)
Long-Term Activity-Dependent Plasticity of Action Potential Propagation Delay and Amplitude in Cortical Networks (Bakkum et al)
Gender Differences in the Mu Rhythm of the Human Mirror-Neuron System (Cheng et al)
Labels:
brain
Monday, May 26, 2008
Cargo cult psychometrics? Setting standards on standards-based standardized tests
This past week I had the opportunity to work for the Maine Department of Education (DOE) with a small, diverse group of educators on the task of setting achievement standards for the Science component of the Maine High School Assessment (MHSA). The intent of the MHSA is to measure student learning relative to the Maine Learning Results (MLR), a body of learning objectives / outcomes / standards that all students in Maine are expected to be proficient in as a result of their high school educational experience. A few years back, the Maine DOE decided to adopt the College Board's SAT as the MHSA instead of continuing with their own Maine Educational Assessment (MEA), which had been developed with the help of Measured Progress, a non-profit firm based in New Hampshire. Aside from offsetting some development costs, the switch to using the SAT as the primary component of the MHSA was undoubtedly influenced by the nice side effect that all students in Maine would be one step closer to college application readiness. However, unlike the MEA, the SAT does not correlate with all standards in the MLR. Up until this year the federal DOE did not require states to report student learning in Science (at least in grades 9-12 ... I'm not sure about earlier grade levels), so Maine had not included any Science questions since switching to the SAT. But because Maine needed to report student learning in Science this year, the state DOE worked with Measured Progress to develop a multiple-choice and free-response augmentation for the MHSA that measures student learning relative to the Science standards from the MLR, in order to comply with federal reporting rules.
Because it had been a few years since Maine students' learning in Science had been assessed, the panel I worked on was tasked with setting achievement standards - expectations - for categorizing overall student performance on the Science augment as "Does Not Meet", "Partially Meets", "Meets", and "Exceeds". We began our work by actually taking the assessment; while I can't discuss any of the test items specifically, I can say that the questions seemed generally well-written and represented a broad and balanced sampling of the Science standards. We were then given a binder containing all the questions in order of difficulty, as determined by Item Response Theory (IRT) analysis. The first step in that analysis is to generate an Item Characteristic Curve (ICC) for each question; we were told the ICCs were of the logistic or "S curve" type, although we didn't see the actual graphs.
IRT is a complicated psychometric analytical framework that I heard of for the first time during this panel - I am still learning about it using the following resources: UIUC tutorial; USF summary; National Cancer Institute's Applied Research pages. We were not taught any of the following specifics on IRT during the panel session. From what I've learned subsequently by reading through the above-linked resources, it appears that the purpose of the ICC is to relate P, the probability of a particular response to a question (in this case, the correct answer), to theta, the strength of the underlying trait of interest (in this case, knowledge of the MLR's science standards). In a logistic ICC, the S-shaped curve is determined by three parameters: "a", the discrimination parameter; "b", the difficulty parameter; and "c", the guessing parameter. What I'm not yet sure of, and perhaps might never be, is whether the rank-order of the questions in our binders was based on some type of integration of the P vs. theta curve for each question, or on the "b" value for each item's ICC - from the way it was described to us, I suspect the latter.
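For concreteness, here is the standard three-parameter logistic (3PL) ICC as I understand it from the resources above. This is textbook IRT, not anything specific to Measured Progress's analysis, which we never saw; the parameter values below are made up.

```python
# Standard 3PL item characteristic curve: P(correct | theta) with discrimination a,
# difficulty b, and guessing floor c. Parameter values here are purely illustrative.
import numpy as np

def icc_3pl(theta, a, b, c):
    """Probability of a correct response at ability level theta under the 3PL model."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

thetas = np.linspace(-3, 3, 7)                        # ability scale
easy_item = icc_3pl(thetas, a=1.5, b=-1.0, c=0.2)     # lower b -> easier item
hard_item = icc_3pl(thetas, a=1.5, b=1.0, c=0.2)      # higher b -> harder item

for t, p_easy, p_hard in zip(thetas, easy_item, hard_item):
    print(f"theta={t:+.1f}  P(easy)={p_easy:.2f}  P(hard)={p_hard:.2f}")
```

If the binders really were ordered by "b", then the ordering reflects the ability level at which a student's chance of a correct answer sits halfway between the guessing floor "c" and 1 - which would fit with how the ranking was described to us.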
Once we had the binders with ordered questions, we were asked to go through each question and to determine, individually, what the question measured and why it was more difficult than the previous question. A multiple choice or free response question can measure a lot of different factors - we were instructed to concentrate on determining which standard(s) from the MLR's Science section were being assessed. So, our analysis of what was measured by each question left out the important factors of wording, inductive reasoning, and deductive reasoning, just to name a few. After finishing our question-by-question analysis and discussing our individual findings as a group, we moved on to another task: describing the Knowledge, Skills, and Abilities (KSAs) that we associated with students at the four different achievement levels (Does Not Meet, Partially Meets, Meets, and Exceeds). Completing this task took quite a while, as there were many different opinions about what kind of KSAs different educators had observed in and/or expected from students at the different achievement levels.
With achievement level KSAs in mind, we then moved into the "bookmarking" task, the results of which would be sent on to the Maine DOE as the panel's recommendation for the cut scores to categorize students within one of the four achievement levels. In the bookmark standard setting procedure, each of us was given three bookmarks - one to place at the cut between Does Not Meet and Partially Meets, another to place at the cut between Partially Meets and Meets, and a final one to place at the cut between Meets and Exceeds. We were instructed to go through the binders with ordered questions and, starting from the easiest question at the very beginning, to place the bookmarks at the transition point where we felt that only the students with the KSAs characteristic of the next-higher achievement level would be able to get the question correct 2/3 of the time. Again, as with IRT, the underpinnings of the bookmark standard setting procedure weren't explained to us in detail, so I've been reading the following sources to learn more about it: Wisconsin DPI - 1; Wisconsin DPI - 2; Lin's work at U. Alberta's CRAME [PDF].
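To make the 2/3 criterion a little more concrete, here is a hedged sketch of how a bookmark placement could translate into a cut score on the theta scale under a response probability of 2/3 ("RP67"), assuming a two-parameter logistic item model. The item parameters are invented, and Measured Progress may well compute the actual cuts differently - we were never shown this step.

```python
# Illustrative only: map a bookmark placed in a difficulty-ordered item booklet to a
# theta cut score at which P(correct) = 2/3 under an assumed 2PL model.
import math

# (difficulty b, discrimination a) for each item, already ordered from easy to hard.
items = [(-1.8, 1.2), (-1.1, 0.9), (-0.4, 1.4), (0.2, 1.0), (0.9, 1.1), (1.6, 0.8)]

def rp67_theta(b, a):
    """Solve 1 / (1 + exp(-a * (theta - b))) = 2/3 for theta."""
    return b + math.log(2) / a

bookmark_index = 3          # hypothetical: bookmark placed on the 4th-easiest item
b, a = items[bookmark_index]
print(f"cut score on the theta scale: {rp67_theta(b, a):.2f}")
```

Each panelist's three bookmarks would then yield three theta cuts, which could be averaged across panelists - at least consistent with the averaging of bookmark placements we saw in the next step.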
And so, we each went through our binders, set our bookmarks, and gave the results to the psychometrician from Measured Progress. Our bookmarks were averaged, and the average placement of the bookmarks was presented back to us for group discussion. We talked about where we placed our bookmarks individually and why we placed them there - some people were much above the average, some much below, and others very near or at the group's average placement. Conversation revealed that some panelists did not fully understand the instructions on where to place the bookmarks (if I recall correctly, most of the confusion was due to the instruction that only students at the next-higher achievement level should be able to get the question correct 2/3 of the time). Conversation also helped many panelists to re-evaluate the placement of their bookmarks based on question characteristics that had not been considered in the first round. We were then given the opportunity to place our bookmarks a second time, and were told that these results (which were not shared with us) would be passed on to the Maine DOE as the panel's recommendation for cut scores for categorizing student achievement.
During one of our breaks on the second day, when we were working on the bookmarking task, another panelist I was talking with asked if I had ever read any Richard Feynman, particularly his essay called "Cargo Cult Science". Although I'd heard of Feynman before, I replied that I hadn't read any of his work - the panelist described it to me as pertaining to the distinction between science and pseudoscience, and shared with me his feeling that our attempts to measure and set standards for student knowledge felt a lot like what Feynman was describing in that essay. At the time, I felt a bit of disagreement - although I know that measuring knowledge of standards via any assessment is bound to have flaws, I don't think it's pseudoscience. I've since read Feynman's essay, and understand more about his distinction between science and pseudoscience, which helps me to understand better my fellow panelist's remark -- I think it is captured by this quote: "In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another."
Feynman's essay goes on to discuss the obligation of scientists to other scientists, as well as to non-scientists. I particularly agree with the responsibility of scientists to "bend over backwards" to demonstrate how they could be wrong, particularly when the complexity of the problem and/or the solution makes it likely that the non-expert / non-scientist will believe the scientist's conclusion simply because they can't understand or evaluate the work on their own. My experience in working on this standard-setting panel provided me with invaluable insight into a complex process whose results have significant implications. Even our panel of experienced science educators struggled to understand the complexity of the standard-setting process that we implemented, and the full underlying complexity of the entire process (i.e., Item Response Theory and the bookmark method) was not explained. Given that there could be significant differences in the KSAs associated with each achievement level depending on the composition of the panel, and given that the underlying complexity of the task is significant, I think it is accurate to label this work as "cargo cult science" because only the results are shared with a broad audience. I don't think that the task of measuring knowledge with "pencil and paper" assessment is inherently pseudoscience - but we ultimately do a disservice to the potential for making education more scientific when the full scope of this type of work is not published.
Thursday, May 15, 2008
Autism-like effects and mitochondrial disorders...?
The Role of Thioredoxin Reductases in Brain Development (Soerensen et al)
A Marked Effect of Electroconvulsive Stimulation on Behavioral Aberration of Mice with Neuron-Specific Mitochondrial DNA Defects (Kasahara et al)
One of my usual projects in the Biology classes that I teach is a "controversial issues" research paper and presentation that requires the student to pick a topic, describe it scientifically, explore multiple perspectives on the topic, and detail their own opinion. A commonly-chosen topic has been the autism-vaccination debate, which is an issue that I'm interested in as well. My father was infected by the polio virus shortly before the Salk vaccine became publicly available - details aside, it left him with life-long disabilities that have increased in impact with age. What's interesting to think about is that despite the significant health consequences, he was lucky in the sense that he survived polio in the first place. As you might reasonably guess, I'm a strong proponent of vaccination - but I also don't discount the possibility that vaccines (overall, and/or specific ingredients) may have negative health consequences for some individuals exposed to them.
So what does this have to do with the brain? Recently in the news I discovered that a US court had ruled that a vaccine had triggered a mitochondrial disorder that caused the manifestation of autism-like symptoms. I've linked to a couple of articles above that help to demonstrate the important role that mitochondria play in the brain - one of the most metabolically-active organs in the body. Research on autism is on-going, but it's clear that it's a brain-based disorder. Furthermore, research is also continuing on mitochondrial disease (also see info from the NIH on mitochondrial myopathy), but it's clear that these diseases have the potential to affect brain function. Finally, many are beginning to research the potential link between mitochondrial disease and autism, as evidenced by this article [PDF link] from a peer-reviewed scientific journal on pediatric medicine and this page with information and links from the CDC.
Labels: brain
Wednesday, May 7, 2008
Measuring academic ability, teacher retention, and student learning
A recent post on Dangerously Irrelevant (cross-posted to LeaderTalk) prompted me to think quite a bit about how academic ability is measured...and how teacher academic ability measurements correlate with measurements of student academic ability. I think the conclusions are built on pretty soft data ... just because there's a correlation between teacher college entry exam scores and teacher attrition doesn't necessarily mean that there's a "brain drain" from the classroom, and just because there's a correlation between those same types of teacher scores and student scores doesn't mean that those are necessarily the most academically-able teachers or students. The large and important point here is that the measurement of academic ability is difficult, and current instruments are limited in scope - so, although we should certainly support the retention of our best teachers, we shouldn't be satisfied with building our arguments on soft data.
The following are some comments that I made on the post at D.I. ... I'm re-posting them here for my own archival purposes and because the topic of measuring academic ability is clearly within the domain of cognitive psychology. As you can tell, I'm responding to other comments, which I don't feel comfortable re-posting here because they're not mine -- please visit the above link to the post at D.I. to read the full article and ensuing discussion -- it's very interesting.
Comment 1:
I'm on-board with the idea that smarter teachers are going to be the best at promoting student learning. But I'd like to echo and add to Orenta's point above ... if we're railing against using standardized tests as such an important measure of current student learning, how can we maintain the integrity of our argument by claiming that the same type of testing is an important and valid measure of teachers' knowledge? Though anecdotal, I know plenty of people whose intelligence / knowledge / learning capacity declined once the structural support of family and high school settings - support that can have a strong positive influence on college entrance exam scores - was gone. Also, I know plenty of people who have developed incredibly as learners both in college and post-college, particularly in the context of teaching others. So, again, I'm all for getting the smartest teachers possible into the classroom, and finding ways to keep them there - but I think it's a bit disingenuous, for multiple reasons, to use college entrance scores on standardized exams to make the point.
Comment 2:
I agree that there's an important and significant role for standardized test scores in the interpretation of student and teacher intelligence. However, I am uncomfortable with your generalized statement - if based only on the citations from Anderson & Carroll, and Guarino et al - that "the percentage of teachers with lower academic ability increases in schools over time. The brightest go elsewhere." and your stated assumption #1 "smart people are less likely to stay in teaching (thus resulting in a concentration of teachers with lower academic ability)." As I said originally, I absolutely support the notion that we should make more effort to retain our brightest teachers; I stand by my claim that scores on standardized tests taken in high school, or even at the end of a college program, by individuals who then become teachers are not the best data to use when making the argument that there is a longitudinal "brain drain" from the classroom. While there may be a correlation between this particular teacher characteristic and student achievement, I hesitate to make the jump into causality, as do Wayne & Youngs: "When statistical methods seem to establish that a particular quality indicator influences student achievement, readers still must draw conclusions cautiously. Theory generates alternative explanations that statistical methods must reject, so a positive finding is only as strong as the theory undergirding the analysis. If the theory is incomplete—or data on the plausible determinants of student achievement are incomplete—the untheorized or unavailable determinants of student achievement could potentially correlate with the teacher quality variable (i.e., correlation between the error term and the teacher quality variable). Thus, student achievement differences that appear connected to teacher qualifications might in truth originate in omitted variables." Further, in the section of their article specific to the review of studies on teacher test scores and student achievement, Wayne & Youngs point out that none of the teacher tests used in those studies are still in use, and emphasize the importance of researching the correlation between student achievement and teacher performance on assessments of their skill beyond standardized tests. My larger point, which I attempted to make by providing anecdotal evidence, is that the teachers who do remain in education for longer periods of time are not necessarily less able to promote student learning, even if there is a correlation that points to their tendency to have scored lower on standardized tests prior to their entry into college and/or the classroom. With that said, I would certainly support hiring policy shifts toward selecting applicants with the highest academic credentials possible, including historical and more recent scores on standardized assessments.
Comment 3:
The older the data, the more inappropriate it is as a measure of academic ability. This is well-accepted with IQ tests, for example - the score is compared with others in the same age-range, not with the entire population. With that reasoning in mind, I think it's a bit provocative to use college entrance exam scores as the only data you show in your post about the "brain drain" from the classroom. In my read of the articles you cite, I interpret the authors as being far more reserved in their interpretation of the available data, noting limitations in data availability and suggesting caution in forming inferences based on it. For example, in the Anderson & Carroll DOE study they show that teachers are more likely to earn a graduate degree than non-teachers, and that teachers with a graduate degree are less likely to leave the profession than those with an undergraduate degree. This, to me, is great support for questioning the veracity of the relationship between college entry exam scores and teacher attrition, and exemplifies the significant personal, formal, and professional learning that occurs in college and within the first few years of experience in teaching (and all "leavers" in that study had at least one year of experience in the profession). I suppose that I'm being picky about this data because I worry about the impact on current and prospective teachers from this type of provocation. I don't think anyone in any profession would want either of these two possibilities: 1 - to think that their academic ability is being reduced to and summarized by a standardized test score (SAT or ACT) they earned in high school; 2 - to think that the longer they stay in the profession, the more "less-academically-able" people they'll be working with. Especially with regard to point #2, although it *may* be true, I think we're obliged to use better data to make such a negative critique. My further problem with the use of this data to demonstrate a "brain drain" from the classroom is that there is *so* *much* *more* to being an effective teacher than the (limited) aspects of academic ability indicated by scores on the SAT, ACT, college selectivity, college GPA, Praxis I, Praxis II, or an IQ test. For example, even beyond subject-area expertise, what about being creative and being able to help others be creative? What about the ability to collaborate and to help others to collaborate? What about technology skills and the ability to help others to use technology? My impression is that we're hoping for these skills to manifest more and more in the classroom over time - but none of them are measured by the above-named instruments ... and we're not citing research in the "brain drain" question that attempts to measure them actively (let alone current academic ability), or to correlate measurements of those important "21st Century Skills" with measures of academic ability (whether old or current). I am an ardent advocate of data-driven policy development, decision making, and education research, but I think we need to be really careful - more so than in this post - when we're using data to make a point that is critical of those who are currently in the teaching profession and those who are about to enter it. 
Certainly this provocative writing has herein inspired productive discussion - but I must admit that I have some concern that using what I would describe as very suspect data to make what may in fact be a valid point may actually serve to exacerbate the very problem that you're writing about and that I think we're all working to prevent: good teachers leaving the classroom.
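Stepping back out of the quoted comments for a moment: the omitted-variables caution from Wayne & Youngs is easy to see in a toy simulation. The sketch below is purely illustrative - every variable and coefficient is invented - but it shows how an unmeasured factor (say, school resources) that influences both which teachers end up where and how students perform can produce a positive correlation between teacher test scores and student achievement even if teacher scores have no direct effect at all.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Plain Pearson correlation, implemented directly to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical schools: an omitted variable (resources) raises both the entry-exam
# scores of the teachers who get hired and the achievement of the students.
resources = [random.gauss(0, 1) for _ in range(1000)]
teacher_scores = [r + random.gauss(0, 1) for r in resources]           # no causal path to students
student_achievement = [2 * r + random.gauss(0, 1) for r in resources]  # driven by resources only

print(round(pearson(teacher_scores, student_achievement), 2))  # clearly positive despite no direct effect
```

The correlation comes out strongly positive even though, by construction, teacher scores do nothing for students - which is exactly the kind of alternative explanation the quoted passage warns us to rule out before drawing conclusions.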
Labels: education
Tuesday, May 6, 2008
Mouse genetics and formation of spatial memory
Mouse Cognition-Related Behavior in the Open-Field: Emergence of Places of Attraction (Dvorkin et al)
The mouse in a maze is a pretty familiar image to many, even if they are only loosely familiar with the formalities of psychological studies. In this article, researchers placed different genetic strains of mice in an open space - not a maze - and tracked their patterns of movement. Interestingly, the researchers determined that a correlation exists between a mouse's genetic strain and its movement behavior, and inferred that perceptual and/or cognitive differences (due to genetics) are the causal factor. The movement behavior investigated was the tendency of the mouse to stop in a particular location in the open space, relative to how many times the mouse had passed through or stopped in that particular location. With various factors (including overall tendency to stop, as well as olfactory influences) controlled for, the researchers discovered that two of the three strains of mice investigated were more likely to stop in a particular location if they had passed through it or stopped there more times (implying that stopping behavior is related to memory of location), whereas the third strain did not show such a relationship between those aspects of movement behavior. The strain of mouse that did not exhibit this movement pattern is of a genetic variety known to cause malfunctions in the hippocampus, an area of the brain known to have a significant role in memory. The strain of mouse that had the greatest tendency to stop in previously-visited locations is reportedly a relatively new genetic strain that is highly similar to wild-type mice. This study provides valuable information supporting the role of genetics in the formation of spatial memory in mice. I would suggest that it also supports the notion that even in different and more complex species such as our own, inherited biological factors may predispose certain individuals to have greater or lesser learning capacity in very specific types of tasks (not just overall).
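For readers curious about what this kind of analysis looks like in practice, here is a minimal sketch under invented data: positions are binned into grid locations, prior visits to each location are counted, and stop rate is tabulated against the prior visit count. The bin size, coordinates, and stop flags are hypothetical illustrations, not values from the paper.

```python
from collections import defaultdict

# Hypothetical (x, y, stopped) samples from an open-field track; all values invented.
track = [(0.4, 1.2, False), (0.5, 1.3, True), (2.1, 0.2, False),
         (0.5, 1.2, False), (0.4, 1.3, True), (2.2, 0.3, False)]

BIN = 0.5  # grid size in arbitrary units

def location(x, y):
    """Discretize a position into a grid cell."""
    return (int(x // BIN), int(y // BIN))

visits = defaultdict(int)                    # location -> number of prior visits
stops_by_prior = defaultdict(lambda: [0, 0]) # prior visit count -> [stops, passes]

for x, y, stopped in track:
    loc = location(x, y)
    stops_by_prior[visits[loc]][0 if stopped else 1] += 1
    visits[loc] += 1

for prior, (stops, passes) in sorted(stops_by_prior.items()):
    total = stops + passes
    print(f"after {prior} prior visits: stop rate {stops / total:.2f} ({total} samples)")
```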
Labels: brain
Wednesday, April 30, 2008
Critical: construction of the cortical connectome
The Human Connectome - A Structural Description of the Human Brain (Sporns et al)
One of the major limitations in the effort to improve our understanding of the brain-mind relationship is the lack of available data on the arrangement and connections of and among neurons in the brain. The authors of this article propose "connectome" as the name for this important data set, and suggest that the data set include details on neuron position (using a common coordinate system), the presence or absence of connection(s) to other neuron(s), and, if those connections are present, information on the type of connection (i.e., excitatory or inhibitory) and the biochemical / biophysical details of the connection. Furthermore, the authors suggest a strategic approach to the development of the connectome: given that one of the primary uses of the connectome will be to establish the link between brain activity and cognitive activity, it would make sense to establish the connectome of the cortex first. The authors further suggest that the connectome should first be described at a larger scale than individual neurons, given the enormous number of neurons in the human brain (approximately 10^11), the even larger number of connections among neurons (approximately 10^13), the plasticity of individual neurons and synapses, and the apparent role of groups of neurons (fibers) in brain function. Although it is difficult to isolate functional groups of fibers, the authors propose that a particular MRI method known as diffusion tensor imaging (DTI) could be useful in developing this initial draft of the connectome.
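To make the proposed data set a bit more concrete, here is a minimal sketch of what a single entry in such a connectome might hold - an identifier, coordinates in a shared reference space, a descriptive scale, and a list of typed, weighted connections. The field names and example values are my own illustration, not a schema from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Literal, Tuple

@dataclass
class Connection:
    target_id: str                                # id of the connected node (region or neuron)
    kind: Literal["excitatory", "inhibitory"]
    weight: float                                 # e.g., an estimate of fiber density or connection strength

@dataclass
class ConnectomeNode:
    node_id: str
    coordinates: Tuple[float, float, float]       # (x, y, z) in a common reference space
    scale: Literal["region", "column", "neuron"]  # the authors argue for starting at the larger scales
    connections: List[Connection] = field(default_factory=list)

# One hypothetical region-scale entry:
v1_left = ConnectomeNode("V1-left", (-10.0, -85.0, 5.0), "region")
v1_left.connections.append(Connection("V2-left", "excitatory", 0.73))
```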
Labels: brain
Monday, April 28, 2008
Development, nutrition, and learning
Compensatory Growth Impairs Adult Cognitive Performance (Fisher et al)
In this study, zebra finches were used to explore the consequences of poor nutrition early in life on cognitive performance later in life. The researchers found that the zebra finches that exhibited the most growth following the end of the period of poor nutrition showed the slowest performance on the learning task. What's interesting is that all birds tested for speed on a learning task as adults were of the same size, even though one group of birds had been subjected to nutritional deprivation. So, the researchers were able to compare the adult size of each bird with its size at the end of the period of poor nutrition to quantify the compensatory growth its body produced when a healthier diet became available. The article cites similar studies comparing the cognitive performance of human babies born at low birth weight, some of whom received a nutritionally-enriched diet and others of whom did not; the results were similar to those found in the zebra finch study. This study provides further support for the well-accepted idea that phenotypes are influenced by environmental variables such as nutrition, and that cognitive phenotypes are not an exception. However, this study adds significant detail on phenotype variability resulting from changes in the environment that take place during critical periods of development, particularly relative to the development of the brain. Although the "big picture" suggestion is certainly not to withhold better nutrition if and when it is available, it is interesting to note that the overall benefit of compensatory growth can cause some specific deficiencies later in life, particularly with regard to cognitive systems. Further research should investigate the consequences of varied timing of nutritional deficits and their impact on learning later in life, but for now it seems clear that we should make every effort to ensure that nutritious food is available from the beginning and throughout a child's life.
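As a rough illustration of the comparison described above, the sketch below computes a relative compensatory-growth index for each bird (adult size versus size at the end of the deprivation period) and checks its correlation with the number of trials needed to learn a task. All numbers are invented; only the shape of the analysis is meant to match.

```python
# Hypothetical per-bird records: (mass at end of deprivation in g, adult mass in g, trials to learn task)
birds = [(9.0, 14.5, 22), (10.5, 14.3, 15), (12.0, 14.6, 11), (8.5, 14.4, 27)]

def pearson(xs, ys):
    """Plain Pearson correlation, kept dependency-free for the sketch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

growth_index = [(adult - early) / early for early, adult, _ in birds]  # relative catch-up growth
trials = [t for _, _, t in birds]

# In the direction of effect reported above, more catch-up growth goes with slower learning (more trials).
print(round(pearson(growth_index, trials), 2))
```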
Friday, April 25, 2008
Plagiarism education, the ontology of ideas, and epistemological resources
Through my experiences in education, I've generally observed that students appear to have a relatively superficial understanding of what plagiarism is and why it's wrong. Because of the general lack of deep understanding of the issue, along with its potential repercussions, I feel that learning about plagiarism is critical to student growth and development. Traditional approaches to plagiarism education tend to be based on the misconceptions model for learning. Perhaps a better approach to promoting student learning about the issues surrounding plagiarism could be based on a cognitive resources conceptual model involving students' personal epistemological resources.
It is my experience that students reach relatively fast and easy agreement that wholesale "copy and paste" without citation is plagiarism and is wrong. Students can see that such actions deny credit to others who have put forth significant effort, and that this type of plagiarism clearly inhibits the teacher's ability to assess their personal understanding of the knowledge or skill at hand. Students do admit that it happens, but tend to blame the behavior on laziness and/or procrastination. However, in situations in which they submit work in a good-faith attempt to complete assignments according to teacher instructions, students are in much greater disagreement with regard to the more abstract forms of plagiarism that involve using ideas and/or structures of ideas without citation. For example, students are often challenged to do research for assignments, and are instructed to express "in their own words" the information they find. Many students feel that an idea that has been re-worded is sufficiently different from the original that citation is not required, while others feel that ideas taken from others must be cited even if expressed in alternate wording. Some students can see extensions of the issue of citation, and question whether the teacher should be cited for ideas transmitted orally in class or otherwise.
Students can become frustrated when confronted with rigorous interpretations of plagiarism that require comprehensive citation. The position of such students tends to be, in essence, that ideas are so fundamentally dependent on other non-original knowledge that they can't be copied (i.e., only the expression of ideas can be copied). In other words, student frustrations about citation rules seem to be influenced by their understanding of the complexity of the "heredity" of ideas. Simply put, these students feel that ideas are either observations available to anyone or are already so dependent on other foundational concepts that they don't deserve qualification as original; in neither case is citation viewed as necessary. I think it's important to acknowledge that there is significant understanding about the origin and nature of ideas underlying the position that new ideas are based on pre-existing ideas. Perhaps the frustration these students experience has to do with uncertainty about how far back citation has to go in order to avoid plagiarism. These are pretty insightful thoughts and completely valid concerns, given that plagiarism and its significant consequences are presented in a very black & white, right & wrong dichotomy. I think these students are putting forth the effort to promote their own learning and to follow their teachers' guidelines, and want a clear set of rules on how to avoid crossing the line and committing plagiarism.
As I've been thinking about student understanding of plagiarism and its fundamental relationship with concepts on the origin and nature of ideas, I've remembered some reading I did in grad school on personal epistemology (Hofer, 2001) and on using epistemological resources in physics education (Hammer & Elby, 2003). My thoughts are still at a very formative stage with regard to using epistemological resources in plagiarism education. However, it seems very clear to me that the students I've encountered do have concepts of knowledge and knowing that are activated when they discuss plagiarism. Furthermore, it also seems clear to me that students could benefit from improvements to plagiarism education that would move it beyond the misconceptions approach and into a cognitive resources approach. Efforts in this direction include the book "Originality, Imitation, and Plagiarism", edited by Eisner and Vicinus and highlighted by Inside Higher Education, which also covered specific plagiarism education methods presented by Hagopian and others at a recent conference on college writing. My hope is that further development of a conceptual model for student thinking on plagiarism, based on epistemological cognitive resources, will advance student and teacher understanding of a topic that is clearly more complex than a misconception of academic integrity.
EDIT: After posting this I decided to do a Google search on "plagiarism and epistemology", wondering if anyone else was thinking on the same track. Although it seems we've come to it from somewhat different paths, Rebecca Moore Howard has much more extensive and refined work on the epistemology of plagiarism; the work linked above was presented at a conference (called "Originality, Imitation, and Plagiarism") at the University of Michigan in 2005. As is clear from her publications list, I've got lots of reading to do - and I'm sure I'll find even more. It's going to be interesting to explore applying these ideas with secondary-level students ... perhaps I'll even find others in secondary-ed who are already on this track, too. I'll keep posting on this, but wanted to edit this post to include this disclaimer to avoid any potential appearance of plagiarism on my own part!
References
Hammer, D., & Elby, A. (2003). Tapping epistemological resources for learning physics. Journal of the Learning Sciences, 12, 53-90.
Hofer, B. (2001). Personal epistemology research: Implications for learning and instruction. Educational Psychology Review, 13(4), 353-382.
Thursday, April 17, 2008
Using the Wiimote to research the relationship between movement and cognition
Exploring Action Dynamics as an Index of Paired-Associate Learning (Dale et al)
The above article extends support for the relationship between movement and cognition (and cites a number of interesting articles that do the same). One unusual and interesting aspect of this study is its methodology, in particular the decision to use Nintendo's Wii Remote (Wiimote) to measure movement during a learning task. Specifically, the aim of the above study was to measure various aspects of short-time-scale movements and investigate their possible correlation with longer-time-scale learning. Although the learning task was relatively simple (association of symbol pairs), the results indicated a strong relationship between movement and cognition. The researchers found that movement patterns changed predictably during the learning task and argue that movement patterns could be used as an index for measuring the success of the learner in the task at hand.
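As a loose illustration of the kind of measure such a setup could yield, the sketch below computes a crude "fluctuation" index - the mean absolute change between successive accelerometer readings - for a few hypothetical trials. Both the samples and the index are my own invention, not the authors' measures.

```python
# Hypothetical per-trial accelerometer magnitude samples (arbitrary units); all values invented.
trials = [
    [1.2, 0.8, 1.5, 0.7, 1.4, 0.6],   # early trial: jittery movement
    [1.0, 0.9, 1.1, 0.8, 1.0, 0.9],   # mid-learning trial
    [0.9, 0.9, 1.0, 0.9, 0.9, 1.0],   # late trial: smoother movement
]

def fluctuation(samples):
    """Mean absolute change between successive samples - one crude index of short-time-scale movement."""
    return sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (len(samples) - 1)

for i, samples in enumerate(trials, start=1):
    print(f"trial {i}: fluctuation index {fluctuation(samples):.2f}")
```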
Labels: mind
Wednesday, April 16, 2008
Movement as a resource for vocal learning
Molecular Mapping of Movement-Associated Areas in the Avian Brain: A Motor Theory for Vocal Learning Origin (Feenders et al)
Certain groups of birds, like humans and a small number of other mammalian species, are capable of learning to use their voices to imitate what they hear. While other studies have determined genetic expression that is unique to this trait, as well as unique regions of the brain that are not present in species that are vocal but not vocal learners, the origin of these brain regions has not been well understood. The authors of this study propose that a major constraint on the evolution of vocal learning in the avian orders that feature this capability is a set of seven brain regions that correlate with specific types of movement when active. The authors suggest that not only do the movement regions of the bird brains have a feed-forward relationship with the vocal regions, but also that the vocal regions have a feed-back relationship with the movement regions. The study wraps up by suggesting a theory that in distantly related animal species the brain-based capacity for vocal learning evolved from specialized brain regions that control movement and perhaps even movement-based learning. This theory requires significant experimentation for validation, but has the potential to provide a powerful explanatory mechanism for the evolution of vocal learning not just in distantly-related bird species, but also in distantly-related mammalian species, as well as between birds and mammals. Furthermore, the authors note the gestural precursors to spoken language in early human development and the benefits that formalization of these gestures (sign language) can provide to learning to speak. In summary, this study provides further neuroscientific support for cognitive resources and suggests, specifically, that movement capacity serves as a resource for complex cognitive development.
Sunday, April 13, 2008
Pick's Disease and Creativity
A Disease That Allowed Torrents of Creativity (Blakeslee) - New York Times
It seems the meme of creativity is rising, as might be expected if we are transitioning into the Imagination Age. As we might also expect, the explanation for creativity continues to be grounded in brain-based research. The above NY Times article ties in well with previous posts on creativity and the right hemisphere, including the post on recent identification of the "Aha!" region of the brain (associative cortex in the right hemisphere's parietal lobe). This article points out an aspect of brain function that has always fascinated me: the idea that the frontal lobe acts to inhibit the activity of other regions of the brain. As we continue to build cognitive models for educational purposes, it's important to keep in mind that relationships between resources shouldn't always be "positive", such that the activity level of one resource always increases the probability of activity of another resource. Cognitive models should be robust enough to include the possibility that resources might decrease the probability of activity of other resources. As such, we can model increases in certain capacities as other aspects of cognition are reduced in activity - whether through endogenous disease such as that mentioned in the article above, or through educational experiences that train students to minimize certain thought patterns in order to maximize others.
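To show what it means, computationally, for one resource to lower rather than raise another's activity, here is a toy activation-spreading sketch with both positive and negative link weights. The resource names, weights, and update rule are invented for illustration only.

```python
# Toy activation-spreading model in which link weights may be negative (inhibitory).
weights = {
    ("frontal_control", "associative_generation"): -0.6,   # inhibitory link
    ("perceptual_input", "associative_generation"): +0.4,  # excitatory link
}

activation = {"frontal_control": 0.8, "perceptual_input": 0.7, "associative_generation": 0.5}

def step(activation, weights, decay=0.1):
    """One update: each node decays, then gains (or loses) activation from its weighted inputs."""
    new = {}
    for node, value in activation.items():
        net = sum(w * activation[src] for (src, dst), w in weights.items() if dst == node)
        new[node] = max(0.0, min(1.0, value * (1 - decay) + net))
    return new

for _ in range(3):
    activation = step(activation, weights)
print(activation)  # start frontal_control lower and associative_generation stays higher
```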
Wednesday, April 9, 2008
Rodent tool use supports cognitive resources hypothesis
Tool-Use Training in a Species of Rodent: The Emergence of an Optimal Motor Strategy and Functional Understanding (Okanoya et al)
In an article that even received a writeup in the New York Times, the authors cited above present their results from training degus (Octodon degus) to use rakes. Specifically, the researchers wanted to implement an experimental design that would easily extend to studying any changes in the degus' brains that resulted from the training. A variety of other animals are able to use tools, though only a few studies have documented changes in brain activity that correlate with such ability. Interestingly, many researchers have hypothesized that proximal phylogenetic relationships with humans (the most complex of tool-using animals) influence intelligence (measured to some degree by the ability to use tools). In comparing their results from working with degus with a similar experiment done with monkeys, Okanoya and the other researchers in the above study propose that socio-ecological factors may be involved. While brain complexity is certainly not unrelated to common ancestry, another interesting proposal here is worthy of direct quotation: "These observations suggest that tool use ... may represent a standardized set of cognitive skills necessary for general implementation." Conceptual change and education researchers working within the domain of cognitive psychology have, for many years, developed the idea of resources - fundamental units of cognition that are used to construct concepts. It would appear from the research on degus that the idea of cognitive resources may be supported by observations of conceptual development in other animal models. Furthermore, and excitingly to me and others hoping to better model the brain-mind relationship, this research further supports the notion that cognitive modeling should be, as much as possible, reflective of what we know about the brain.
Brain, Mind, and Education
What a nice feeling to discover the right niche! I've started a new blog which I think will subsume my previous work here - please join me at Brain, Mind, and Education. As much as possible given my "real life" responsibilities, I'm keeping up with current research in neuroscience, cognitive psychology, and education, and thinking about relationships among the three. I don't anticipate any further posts here, and though I'll leave the material accessible, I'll also be working to bring some aspects of it over to the new blog - I hope you'll join me and hop in on some discussion!
Sunday, April 6, 2008
Into the Imagination Age
Let Computers Compute. It's the Age of the Right Brain. (Rae-Dupree, NY Times)
An interesting article in the technology section of the NY Times this weekend seems to fit well with my last couple of posts on intuitive math skills and hemispheric differentiation in the brain. I'm wondering if we're encountering a 'paradigm shift' of sorts, from the Information Age into what might be called the Imagination Age. Reading the article reminded me of something I remember Steve Jobs saying during my time as a full-time Apple-centric computer consultant after the "Internet bubble" burst in 2000 - 2001, which I paraphrase here as "We'll have to innovate our way out of this." I doubt that many could argue with the success of Jobs' strategy. A few other examples are provided in the article, and I think the implications meld well with educational needs, too: while memorization isn't unimportant, what's most critical is that we teach our students to be creative and collaborative problem solvers. Given that many of our educational institutions are still stuck in the Industrial Age, and transitioning with significant difficulty into the Information Age, how quickly can we expect these systems to adapt to yet another paradigm shift?
Sunday, March 30, 2008
Evolution and cognition: intuitive and symbolic mathematics in primates
Basic Math in Monkeys and College Students (Cantlon and Brannon)
Humans and multiple non-human species have demonstrated a variable capacity for intuitive number sense (approximate quantitative representation, counting, and simple addition). Mathematics has a significant link with verbal and visual symbolic representations (i.e., the Arabic numerals) that have been used by humans to extend mathematical analysis far beyond the most basic intuitive quantitative analysis. Only humans and a much more limited number of non-human species demonstrate any capacity for a "symbolic sense" for even basic mathematics - the (variable) ability to associate verbal or visual symbols with the species' (variable) number sense. Major limiting factors on the results and comparison of results among previous studies have included small sample sizes and methodological differences (particularly when compared to human experiments). Although results should be taken with requisite skepticism, some of the differences found are still clearly in the domain of basic mathematical operations; multiple species can track quantity in a +1 addition experiment, but few non-human species seem to exhibit an intuitive concept of set addition. Furthermore, humans, although more capable than many other species, are clearly limited in their intuitive capacity for set addition. In the study above, the performance of adult humans (n = 14) on intuitive set addition was similar to that of adult female rhesus macaque monkeys (n = 2).
The study used the same method to compare the two species, though sample size remains an issue. The results do seem plausible: humans performed a bit more accurately and quickly, and could more accurately add larger sets. The limitations found in human capacity also seem plausible, with accuracy declining as set sizes increased and as the two candidate quantities became more similar. If my assumption is correct that symbolic sense in mathematics is built upon numerical sense, the results support the inference that learning the rules of symbolic manipulation doesn't necessarily produce a deep understanding of what the results mean; in other words, there is likely a significant difference between the ability to do symbolic math and the ability to understand the meaning of both the process and the result.
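To make that ratio effect concrete, here is a minimal sketch - my own toy illustration, not the authors' analysis - of the standard "scalar variability" idea: represent each quantity as a noisy magnitude whose spread grows with its size, add the noisy estimates, and choose between the correct sum and a foil. The Weber fraction of 0.2 and the specific problems are assumptions for illustration only.

```python
import random

def noisy_magnitude(n, weber=0.2):
    """Return a noisy internal estimate of the quantity n (scalar variability)."""
    return random.gauss(n, weber * n)

def trial(set_a, set_b, correct_sum, foil, weber=0.2):
    """Mentally add two noisy sets, then choose between the correct sum and a foil."""
    estimate = noisy_magnitude(set_a, weber) + noisy_magnitude(set_b, weber)
    # Choose whichever option is closer to the internal estimate.
    return abs(estimate - correct_sum) < abs(estimate - foil)

def accuracy(set_a, set_b, foil, trials=10_000):
    correct_sum = set_a + set_b
    hits = sum(trial(set_a, set_b, correct_sum, foil) for _ in range(trials))
    return hits / trials

# Accuracy drops as the foil gets closer to the true sum (here, 2 + 6 = 8).
for foil in (16, 12, 10, 9):
    print(f"2 + 6 vs. foil {foil}: {accuracy(2, 6, foil):.2f}")
```

The toy model reproduces the qualitative pattern described above: choices are nearly perfect when the two options differ by a large ratio and approach chance as the ratio nears 1.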
There are profound educational implications of the relationship between intuitive mathematics and symbolic mathematics. For one example, I've encountered this many times in teaching about the history of life on Earth and the concept of biological evolution. Current estimates put the age of our planet at about 4.5 billion years, and evidence of life has been found in fossils nearly 4 billion years old. It's easy to overlook what I've just done because it's so commonplace: I represented the nearly incomprehensible age of our planet with 10 squiggly marks on the screen, and the age of life on our planet with only 8 squiggly marks! To really understand evolution, we need to understand huge quantities of time. I have heard a variety of analogies used to help students understand large quantities. However, many of these analogies are rooted in symbolic math and, unsurprisingly, still involve thinking about huge quantities: for example, a mole, about 6.022 x 10^23, is huge.
To build an intuitive understanding of the enormity of "billions" of years, I have students do an activity that specifically constructs symbolic math upon a simple but powerful physical experience. Each student takes a blank piece of paper and a writing implement and, for five minutes, draws lines - "hash marks" - on the page. (Of course, students can take a break if their hand cramps a bit or if they are uncomfortable at all - I try to cheer them on a bit like a sports coach...) After five minutes I have them count up their marks, and then help them think about each mark as representing one full Earth year. We then work through a calculation of how much time it would take them - with no breaks for eating, sleeping, etc. - to make 4 billion marks. The result is usually around 40 years, so I ask them to think about what they'll be like forty years from now - it's enough to get the gears moving in a much more powerful way than simply saying or writing "4 billion".
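For anyone who wants to check the arithmetic behind the activity, here's a quick sketch of the calculation; the 950 marks per five minutes is an assumed example, so plug in your own students' actual counts.

```python
# Back-of-the-envelope check of the hash-mark activity.
# The 950 marks per five minutes is an assumed example; use your students' counts.
marks_per_5_min = 950
marks_per_year = marks_per_5_min / 5 * 60 * 24 * 365.25   # nonstop marking for a year

target_marks = 4_000_000_000          # one mark per year of life on Earth
years_needed = target_marks / marks_per_year
print(f"Nonstop marking time: about {years_needed:.0f} years")
```

At roughly three marks per second, the answer comes out to about 40 years of nonstop marking, which matches the figure students usually reach.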
I think there are some interesting philosophical implications of these studies, too, though I admit my ability to deal with them is superficial at best. I think they come down to the still-open question of whether mathematical logic is created or discovered - i.e., is math a property of the universe or a property of the human mind? Given that the human mind is based in the human brain, and the human brain shares at least some degree of common ancestry with the brains of all species tested (and certainly a great deal with the monkey), perhaps the question of the "universality" of mathematical logic should be refined to whether it is a property of the universe or of the vertebrate mind.
Wednesday, March 26, 2008
From TED - "My Stroke of Insight" by Jill Bolte Taylor
A colleague sent me this video, and in so doing, introduced me to the TED theme "How the Mind Works". I'd been loosely familiar with TED already, but I'm thrilled to see that they're sharing so many ideas from so many incredible people on a subject that I find so interesting and so critical to the future development of education. I have provided a link to the TED theme in my links section, and look forward to accessing more of the resources therein.
I hope that sharing this talk by Jill Bolte Taylor, entitled "My Stroke of Insight", will help others to learn about the large-scale functional differentiation of the human brain, and will promote interest in learning about some of the smaller-scale functional differences that I suspect will become more and more important in facilitating student learning. While I do tend to be a bit skeptical about some of the generalizations in Dr. Taylor's presentation, I think they help to build a foundation for inspiration and critical thinking. In particular, this presentation demonstrates the educational power of personal experience and of sharing it.
Visit the TED page for the transcript of this talk and much more.
Labels: brain
Sunday, March 23, 2008
Neuron behavior - randomness, Godel's theorem, and cognitive modeling
Stochastic Differential Equation Model for Cerebellar Granule Cell Activity (Saarinen et al)
Along with my efforts to read and think about current research in neuroscience and cognitive psychology, I'm still reading books and novels, as is my norm. I just finished "Next" by Michael Crichton - an easy read, but provocative in its treatment of themes related to the implications of modern biological technologies. After I finished it, I decided to pull out "Godel, Escher, Bach: An Eternal Golden Braid" by Douglas Hofstadter. I first read it in my first semester of graduate school and found it to be a very inspiring and challenging text on consciousness. I'm still only in the first few chapters of my re-read, but I'm thinking a lot about one of the major points of the book, related to the work of Godel, showing that sufficiently powerful formal systems can't be both complete and consistent (IANAM - I'm trusting D.H. and the Wikipedia page on Godel's Incompleteness Theorems - they seem to match in their interpretations of Godel's work).
So, again - I Am Not A Mathematician - but it would seem that the key point in interpreting the implications of Godel's work is that it applies to formal systems that attempt to provide a complete set of starting axioms and processing rules that never contradict one another (hence, complete and consistent). There are a variety of systems - mathematics perhaps the most pure - that we think of as consistent and complete in our superficial, everyday analysis, but that are not... for example, as Hofstadter gives, language and the "liar's paradox" ("This statement is false."). Hofstadter gives these paradoxes a new name: "Strange Loops". I hope to write more on the liar's paradox and my thoughts on its relationship to epistemological development - for now, though, I want to stay focused on the implications of Godel's proof for attempts to model cognition.
In the article linked at the beginning of this post, the authors propose that stochastic - random - modeling of neural cell activity provides a more accurate representation of the in vitro behavior of those cells. The classical model for neural activity - the Hodgkin-Huxley formalism - is deterministic in the sense that it posits that voltage-dependent ion channels (responsible for the 'ON' / 'OFF' status of the neuron) behave predictably. It's been known for a while that they are not perfectly predictable, but models of neural networks have long been built on these easier-to-handle deterministic formalisms. The authors of this paper point out that both modeling accuracy and computational speed can improve when a particular method for representing the random behavior of these voltage-dependent ion channels is built into the larger-scale models. Good stuff for those limiting their research to networks of neurons, but how can this be relevant for others who want to transcend the chasm between brain and mind?
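To give a flavor of the distinction - this is a minimal sketch of the general idea, not the Saarinen et al. model itself, and the rate constants and channel count are assumptions for illustration - here is a single Hodgkin-Huxley-style gating variable simulated deterministically and then with a Langevin-style channel-noise term:

```python
import math
import random

# Illustrative (assumed) rate constants for a single gating variable at a fixed
# voltage; in the real Hodgkin-Huxley formalism the rates depend on voltage.
alpha, beta = 0.5, 0.3     # channel opening / closing rates (1/ms)
n_channels = 200           # fewer channels -> larger relative fluctuations
dt = 0.01                  # time step (ms)

def simulate(stochastic, steps=2000, p0=0.1):
    """Evolve the open fraction p of a population of two-state channels."""
    p = p0
    trace = []
    for _ in range(steps):
        drift = alpha * (1 - p) - beta * p              # deterministic HH term
        noise = 0.0
        if stochastic:
            # Diffusion term of a Langevin (channel-noise) approximation.
            var = (alpha * (1 - p) + beta * p) / n_channels
            noise = math.sqrt(var * dt) * random.gauss(0, 1)
        p = min(1.0, max(0.0, p + drift * dt + noise))  # keep p in [0, 1]
        trace.append(p)
    return trace

det = simulate(stochastic=False)
sto = simulate(stochastic=True)
print(f"deterministic open fraction settles at {det[-1]:.3f}; "
      f"stochastic version jitters around {sum(sto[-500:]) / 500:.3f}")
```

The deterministic run gives the same trace every time; the stochastic run fluctuates from repeat to repeat, which is exactly the property the authors argue is worth carrying into larger-scale models.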
I think the link is provided, with due credit to Hofstadter, by Godel and now Saarinen (et al.). Or perhaps with credit to Godel and now Saarinen (et al.) by way of Hofstadter? As I proceed with writing about some of the research I'm familiar with, and continue to read and write about more current work, I believe we'll find that the pursuit to formalize conceptual change continues to rest on deterministic underpinnings, in which the state of one element of the model (E1), through its relationship with another element of the model (E2), determines the state of that other element. Deterministic modeling makes sense, given that there is a significant degree of predictability in the behavior of neurons, and even (though to a lesser degree, I'd think) in the behavior of the mind. However, I suggest that the stochastic modeling found beneficial by Saarinen et al. will also be beneficial to efforts to construct both static and dynamic models of cognition - the models that will ultimately help us understand the process of learning (constructing and retaining knowledge). [Those familiar with my old blog on the concept of evolution may have seen this before - I'll be bringing some of those ideas and posts over to this blog in the near future.]
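As a purely hypothetical illustration of what I mean by deterministic versus stochastic underpinnings - the elements, coupling weight, and noise level below are invented for the sake of the sketch and correspond to no published model - consider two conceptual elements where E1's activation drives E2:

```python
import random

def update_e2(e1, e2, coupling=0.8, noise_sd=0.0):
    """One update step of E2 given E1; noise_sd > 0 makes the update stochastic."""
    drive = coupling * e1 + random.gauss(0, noise_sd)
    # Nudge E2's activation toward the (possibly noisy) drive it receives from E1.
    return e2 + 0.1 * (drive - e2)

e2_det, e2_sto = 0.0, 0.0
for _ in range(100):
    e2_det = update_e2(e1=0.2, e2=e2_det)                  # deterministic rule
    e2_sto = update_e2(e1=0.2, e2=e2_sto, noise_sd=0.5)    # same rule plus noise
print(f"deterministic E2: {e2_det:.2f}   stochastic E2: {e2_sto:.2f}")
```

The deterministic element always lands in the same place; the noisy one occasionally wanders into activations the deterministic rule would never reach - the kind of "random" activation of prior knowledge I describe below.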
Finally, I do think that all of this can relate back to the classroom and to teaching (which I am beginning to define as 'the process of manipulating the environment and available resources to promote student learning'). Many times in the classroom, teachers are confronted with random and seemingly meaningless student questions; a stochastic model of cognition, grounded in the stochastic behavior of the functional units of the brain, could incorporate the random activation of prior knowledge that is (from the expert perspective) unrelated to the concept or skill at hand. As these stochastic cognitive models (and their implications) are developed and studied, I suspect we'll begin to see meaning in the seemingly meaningless, as perhaps these "random" student ideas and questions are critical to the brain-mind's learning process. Perhaps the brain-mind has, in fact, evolved in a way that allows it to transcend the limitations that Godel proved inherent to formal systems. In other words - fittingly, something of a paradox itself - it may be that our very capacity for logical thought rests on the fundamental informality of the workings of the brain.
Monday, March 17, 2008
Locating the light bulb: neural activity correlates to insight
Neural Activity When People Solve Verbal Problems with Insight (Jung-Beeman et al, 2004)
The "Aha!" moment is, perhaps, one of the most tangible and exciting educational experiences for teachers and students alike; the students finally "get it" - whatever concept or skill "it" might be. Using fMRI and scalp EEG, Beeman et al describe that the experience of insight in solving a problem (followed by an "Aha" moment experience) correlates with a particular type of activity in a specific region of the brain called the right anterior superior temporal gyrus. This front and top region of the right hemisphere's temporal lobe is active in the early stages of problem solving, but also experiences a burst of activity approximately 0.3s prior to the "Aha" moment.
The location of this region in the right hemisphere suggests some interesting things we might do to promote insight, such as presenting helpful information and/or potential solutions to the left part of the visual field. Much as our right limbs are controlled by the left hemisphere of the brain, the left part of our visual field is processed by the brain's right hemisphere. The study also points out that individuals vary in their response to solving problems with insight, and even found that one individual's brain responded more strongly during non-insight problem solving. We shouldn't be surprised, then, that different students need different stimuli and different amounts of time to achieve insight, nor that some students may not have the "Aha!" experience in our classrooms. Finally, it's important to keep in mind that the problems studied were word problems, and that the area of the brain found to be active preceding the conscious "Aha!" experience is very close to an area strongly associated with language skills; it's possible that "Aha!" experiences arise from different regions of the brain when problem solving takes on different modalities.
The "Aha!" moment is, perhaps, one of the most tangible and exciting educational experiences for teachers and students alike; the students finally "get it" - whatever concept or skill "it" might be. Using fMRI and scalp EEG, Beeman et al describe that the experience of insight in solving a problem (followed by an "Aha" moment experience) correlates with a particular type of activity in a specific region of the brain called the right anterior superior temporal gyrus. This front and top region of the right hemisphere's temporal lobe is active in the early stages of problem solving, but also experiences a burst of activity approximately 0.3s prior to the "Aha" moment.
The location of this region in the right hemisphere means that there are some interesting things that we can do to promote insight, such as presenting helpful information and/or potential solutions to the left part of the visual field. Much as our right limbs are controlled by the left hemisphere of our brain, the left part of our visual field is processed by the brain's right hemisphere. The study also points out that individuals vary in their response to solving problems with insight, and even found one individual's brain responded more strongly to non-insight problem solving. So, we shouldn't be surprised, then, that different students need different stimuli and time to achieve insight, nor that some students may not achieve the "Aha" experience in our classrooms. Finally, it's important to keep in mind that the problems studied were word problems, and that the area of the brain found to be active preceding the conscious "Aha" experience is very close to an area of the brain found to be strongly associated with language skills; it's possible that "Aha" experiences are caused by different regions of the brain when the problem solving takes on different modalities.
Saturday, March 15, 2008
Research resources
As I'm devoting some limited free time to developing this blog, I'm finding a number of good resources for current research at the overlap of cognitive neuroscience and education research. I have a number of articles I'm parsing through from the Public Library of Science (PLoS) journals, including PLoS Biology, PLoS Computational Biology, and PLoS ONE. I've also discovered the International Mind, Brain, and Education Society, a fairly recently formed group that organizes the journal Mind, Brain, and Education. I'm thinking about joining the Society - it seems right up my alley - but I'm going to have to keep looking for access to their journal, as it doesn't seem to be available through the databases I'm currently using to access subscription-based materials.
This brings up, naturally, the question of open access. I believe that open access to research - especially research that is in any part publicly funded - is critical to current and future development, especially where it deals with issues related to education. So, while I'm not going to stop reading subscription-based journals when I have the opportunity, and reporting on interesting findings cogent to this blog when possible, I will likely continue to focus my reading and reporting on sources that are openly available to as wide an audience as possible.
Labels: professional development
Wednesday, February 13, 2008
Welcome to Brain, Mind, and Education
Hi folks! This is a blog intended to serve as a source for information and an opportunity to discuss research and ideas from neuroscience and cognitive psychology as they do ... or might ... apply to teaching and learning. I'm an academic administrator and classroom teacher (science) at an independent and comprehensive high school in rural central Maine (USA), so my primary focus will be on secondary-level education. My background involves undergraduate and graduate-level study of the life sciences, with particular emphasis on neuroscience, cytomolecular and developmental biology, along with healthy doses of psychology, pedagogy, and educational research. My specific interests include curriculum development, analysis, and refinement along with cognitive systems modeling. I hope that this space will help me to collect and connect a variety of ideas, but I also look forward to the potential for learning from other readers through your comments - please let me know what you think and what you're interested in, too.
Enjoy!
-JP