Facing Facts

Ideas Roadshow
5 min read · Jan 22, 2021


From: Vision and Perception

As with most phenomena, the Ancient Greeks had a theory of vision, sometimes called the “emission theory”. The idea, often attributed to Empedocles, was that light shone out of the human eye and lit up objects in our visual field, making perception possible.

There were, of course, significant problems with this theory. If the light came from our eyes, why can’t we see equally well at night? This led Empedocles to postulate some relationship between these “eye rays” and rays from other sources, such as the sun.

A few hundred years later, Euclid pointed out that, on this view, it was difficult to understand how, simply by opening our eyes under the night sky, we could instantly see the stars, which were presumably a very long way away.

So, like any scientific theory, it had a few outstanding issues. Nevertheless, the “eye ray” theory of vision held sway for centuries. Serious doubts only began to arise through the work of the 10th-century Arab physicist Ibn al-Haytham, who promulgated a competing “intromission theory”, according to which our visual perception of an external object is stimulated by something emanating from the object itself (in this case, light rays).

But whichever way you look at it, a key question remains. Once the core visual information has reached us, how do we process it? Just because light rays from a face, say, reach our retinas, that doesn’t guarantee that we will appropriately visualize the face. And it certainly doesn’t guarantee that we will recognize it.

This question is now duly recognized as a pre-eminent one. But it certainly wasn’t always appreciated as such, even in recent times.

Kalanit Grill-Spector, the Principal Investigator of Stanford’s Vision & Perception Neuroscience Lab, told me, smiling, that only forty years ago MIT computer scientists were so convinced that vision was simply reducible to sufficiently powerful computational algorithms that they assigned the question of how it works to an unfortunate undergraduate as a summer research project.

Howard Burton in conversation with Kalanit Grill-Spector

The reason this was a laughable idea (for everyone other than the poor student, who may well have been unable to appreciate the humour of the situation) is that the neural processes underlying vision turn out to be so sophisticated and well-developed that, by current estimates, they take up some 30% of all brain function. And this massive neuronal effort, ironically, is why we have taken so much of vision’s complexity for granted for so long.

“The reason why vision research is so interesting,” Kalanit enthusiastically related to me, “is that an awful lot of the brain is involved in doing vision. And the reason it looks effortless is that there’s a lot of machinery that is working away without us consciously having to activate it.”

Kalanit specializes in facial recognition. Much of the impetus for her current work began with the discovery of a particular region of the brain along a specific crease or sulcus — she obligingly puts it into non-technical terms by calling it a “dimple” — that seemed remarkably similar across a broad sample of the population. In other words, virtually everyone seemed to possess this special dimple, yet it had gone unnoticed in the anatomical textbooks until she and her postdoc, Kevin Weiner, discovered it.

Having found the dimple, the next logical step was to investigate what particular sort of neural processing it might be involved in.

“It turns out that there is some dedicated hardware in your brain that seems to be involved in processing faces. This is a discovery made by Nancy Kanwisher in 1997 using fMRI. Initially, she thought that there was just one area, a module in the brain — she described it as a blueberry-sized module — that just processes faces.

“What we’ve found over time, from 1997 to today, is that there are, in fact, multiple regions that are organized in a very systematic way. And we’ve recently found that they’re very predictable anatomically — they happen in the same part of every person’s brain.

“We’ve started another collaboration with people who look at the histologies — the anatomical make-up of the brain. You can’t do this sort of work with a living subject, because you have to look at slices of the brain under the microscope and you therefore need post-mortem brains. But they have discovered different regions of the brain that clearly seem to have different hardware.

“We naturally can’t test this hardware directly by mapping function to their data, because they’re using post-mortem brains. But it turns out that the locations where they’ve found this sort of specialized hardware correspond well with a unique anatomical region on the sulcus that we’d recently found — a sort of dimple.

“So this led us to think that perhaps this special hardware in this region might be linked to some specific processing that’s relevant to our perception.”

Exciting stuff. But how on earth might it conceivably be tested? After all, many of the key results in our understanding of these issues necessarily seem to rely on either non-invasive measurements of subjects using fMRI (where no direct testing is possible) or anatomical analysis of post-mortem brains (where feedback is obviously impossible).

But sometimes, you can just get lucky.

“Once in a while, we have the opportunity to directly record from the surface of the brain. And this happens with subjects who are being evaluated for epilepsy surgery. There’s an epilepsy clinic here at Stanford that treats patients who have intractable epilepsy that doesn’t respond to medication. The doctors bring them in for testing to evaluate where the seizures start, in order to determine whether they might safely perform highly localized surgery.

“They come to Stanford for a week or so and have electrodes implanted on the surface of the brain, waiting for a seizure to occur so that the doctors can track it. Sometimes, if the patients are willing to help us, we get to work with them during this waiting period to do additional tests for our research.

“In one particular case, it so happened that the doctor implanted an electrode in exactly the same part of the brain that we were looking at, and we were able to run a small current through the electrode and test things directly. Let me show you what happened…”

Empedocles, needless to say, would have been amazed. And so will you.

This is the introduction written by Howard Burton of the book, Vision and Perception, which is based on an in-depth, filmed conversation between Howard and Kalanit Grill-Spector, Professor in Psychology and the Stanford Neurosciences Institute at Stanford University.

The book is broken into chapters and includes questions for discussion at the end of each chapter. The book is also available as part of the 5-part Ideas Roadshow Collection called Conversations About Neuroscience.

Visit the dedicated page for our conversation with Kalanit Grill-Spector on Ideas On Film: https://ideas-on-film.com/kalanit-grill-spector/. On our Ideas On Film YouTube channel you can watch a clip from the filmed conversation: https://youtu.be/1vpVrr_4Cao.
