Framing Evolution
From: Knowing One’s Place: Space and the Brain
Part of the problem of appreciating the full complexity of neuroscience is that even the complexity is, well, pretty darned complex, coming in all sorts of mesmerizing varieties.
There is the obvious, “in your face”, sort of complexity, based upon the sheer volume of possibilities: coming to terms with the vast number of neurons in the brain, grappling with how to go about isolating one specific protein out of thousands, and doggedly following each one down its own particular thousand-fold biochemical pathways.
But then there is a much more subtle sort of complexity, the sort that is not immediately obvious to a non-specialist but upon further reflection presents a vast number of seemingly insuperable obstacles to our understanding.
Take vision. Nothing could be more natural than blithely declaring that sight involves the brain appropriately processing the signals that it gets from our eyes. Which is true, of course, as far as it goes. But dig a little bit deeper and a much more sophisticated picture starts to emerge.
Duke neuroscientist Jennifer Groh has spent the vast majority of her research career doing just that: looking to unravel the subtle, and often overlooked, complexity of how our brains develop an understanding of where we are.
“The photoreceptors in our eyes give us a representation of where visual information is, where objects are in the world; and that frame of reference depends on where the stimuli are with respect to the array of photoreceptors. In other words, it depends on where the stimuli are with respect to your eyes.
“Well, we can move our eyes; and we do. In fact, we move them a lot — about three times per second — and we move them really fast, at a speed of about 500 degrees per second.
“That’s a lot of eye motion that the brain has to deal with, to compensate for. It has to assemble the snapshots that are taken by the photoreceptor array at each of the different positions that your eyes might be looking.”
That’s complicated enough, but that’s only the beginning. Now think about the way the brain integrates a wide range of other sensory input — vision, hearing and touch — each sense depending on completely different mechanisms, and each one having, as Jennifer describes it, its own neurological “frame of reference”.
“If you then extend this problem to include some of the other senses, like the auditory system, it’s important to first recognize that, of course, sounds aren’t affected by how the eyes are moving. The auditory system is using a different frame of reference for figuring out where the sounds are located, which is based on subtle cues that are different across the two ears.
“A sound that’s located on one particular side will arrive in that ear first and will be slightly louder in that ear than the other. The brain has to compare the signals arriving in one ear with the signals arriving in the other to compute the angle that the sound is coming from.
“In general, then, the auditory system is computing sound location based on cues that are fundamentally anchored to the head, while the visual system is computing visual locations based on cues that are fundamentally anchored to the orientation of the eyes. So every time your eyes move, you’re yanking your visual scene around to some new position with respect to your auditory scene.
“I’ve been really interested in how the brain fixes that, how it puts those two signals into a common frame of reference so that you can do things like use lip-reading cues to help you understand what someone is saying.”
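The interaural timing cue Jennifer describes can be made concrete with a little arithmetic. Below is a minimal sketch of the simplest textbook model, in which the interaural time difference (ITD) for a source at azimuth θ is d·sin(θ)/c; the ear separation and speed-of-sound values are rough assumed constants, not figures from the conversation, and real heads (and real brains) use a more elaborate geometry along with loudness cues.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature
EAR_SEPARATION = 0.18    # m; rough adult ear-to-ear distance (assumed value)

def interaural_time_difference(angle_deg):
    """ITD in seconds for a source at angle_deg from straight ahead,
    using the simple straight-line-path model: ITD = d * sin(theta) / c."""
    return EAR_SEPARATION * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

def angle_from_itd(itd_s):
    """Invert the model: recover the azimuth (degrees) the brain would
    infer from a measured arrival-time difference between the two ears."""
    return math.degrees(math.asin(itd_s * SPEED_OF_SOUND / EAR_SEPARATION))

# A sound 30 degrees to one side arrives at the nearer ear only about a
# quarter of a millisecond sooner -- the tiny cue the brain must detect.
itd = interaural_time_difference(30.0)
print(f"ITD at 30 degrees: {itd * 1e6:.0f} microseconds")
print(f"Recovered angle: {angle_from_itd(itd):.1f} degrees")
```

Even at the most extreme angle (90°, directly to one side), this model yields a difference of only about half a millisecond, which gives a sense of the temporal precision the auditory system needs in order to compare the two ears.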
Well, alright, you might say. It’s complicated. Very, very complicated, even — and perhaps unexpectedly so. But after all, when it comes to the brain, lots of things are complicated — speech, language, regulating our emotions, playing the cello. Isn’t this just yet another in a long line of highly complex neural processing?
Perhaps. But then again, maybe how we process sensory information about the world around us is somehow different. Maybe the neural infrastructure responsible for how the brain represents space actually plays a more preeminent role than we might naively think.
In a knowing wink to the decidedly less rigorous side of popular neuroscience, Jennifer begins her captivating book, Making Space: How the Brain Knows Where Things Are, with a little tongue-in-cheek declaration. “Nine-tenths of your brain power is spent figuring out where things are,” she announces boldly, before immediately admitting that she just made that up. Why would she do such a thing?
“As you probably know, there is this popular myth out there that we only use 10% of our brains. None of us actually knows where that number comes from, so it’s kind of a running joke in neuroscience to say, ‘I’m just going to throw a number out there and say this is how much of your brain is involved with this, that, or the other thing.’
“But I’m half serious — or maybe a little bit more than half serious — because, when you look at it, there’s an awful lot of the brain that has been identified as carrying some kind of information that’s relevant to these kinds of processes. There’s a lot of the brain that responds to visual information, there’s a lot that responds to sound, there’s a lot that responds to touch, there’s a lot that’s involved in controlling movements that are essential to understanding how to combine information across these different sensory systems.
“If you were to say that all of those brain structures are really just doing those things, that it’s their job to work on these spatial-processing, sensory, motor-control issues, there wouldn’t be that much left for doing the things that concern, say, what makes us smart. How come most of the brain isn’t involved in, say, language?
“It turns out that, if you look at areas of the brain that seem to be involved in, say, language, or memory, or attention, or planning, or motivation, there’s a lot of overlap between the structures that are implicated in those processes and the structures that are implicated in sensory and motor processing.”
Why should such overlaps exist at all? Why, for example, is there such a well-documented link between memory and space? What could be causing that?
Suffice it to say that nobody knows for sure, but Jennifer speculates that it might be intimately tied with the basic principles of how evolutionary processes work.
“A general problem in evolution is to envision how simple events, like the mutation of an individual gene, can produce an organism that functions better than the other organisms that don’t have that mutation — because usually, when you tweak something, you make it worse. Intermediate states in the course of evolution are often hard to envision.
“One thing that may be happening is that modules in the brain might be duplicated through a fairly simple set of mutations so that you might take a structure that’s working well, and maybe one small change means that you now have two of those. And if you now have two, and the one before was sufficient, you find yourself with an extra that can be used for something that you weren’t originally doing.
“The thought is that perhaps there’s still some history that the duplicated module retains based on where it came from — the circuitry might look similar to what’s present in the original area because perhaps it’s getting some of the same kinds of inputs, but it wouldn’t be doing the same exact things. It would be doing something similar, but related to a different type of input.
“And it may be that spatial processing originally arises as something that’s essential for that first module to do, and that when it gets duplicated in that second module, you still have all this spatial infrastructure, only now you’re going to use it to do things like think and maybe reason about abstract concepts that might easily be equatable to something spatial, but aren’t, in and of themselves, spatial.”
The reason why so much of our neural architecture might seem “space-like”, then, is that nature has used our spatial processing systems as a sort of evolutionary template, somehow leveraging them towards the creation of a multitude of other high-level systems that can be engaged in things like logic and language.
That sounds pretty complicated, too, I must admit.
But that doesn’t mean it isn’t true.
This is Howard Burton’s introduction to the book Knowing One’s Place: Space and the Brain, which is based on an in-depth, filmed conversation between Howard and Jennifer Groh, Professor of Psychology and Neuroscience at Duke University.
The book is broken into chapters and includes questions for discussion at the end of each chapter. The book is also available as part of the 5-part Ideas Roadshow Collection called Conversations About Neuroscience.
Visit the dedicated page for our conversation with Jennifer Groh on Ideas On Film: https://ideas-on-film.com/jennifer-groh/. On our Ideas On Film YouTube channel you can watch a clip from the filmed conversation: https://youtu.be/MYPp85PJpuI.