Neuroscientists working on the Blue Brain Project at EPFL (Ecole Polytechnique Fédérale de Lausanne) in Switzerland claim they are “reconstructing the brain piece by piece and building a virtual brain in a supercomputer”. This virtual brain, they claim, “will be an exceptional tool giving neuroscientists a new understanding of the brain and a better understanding of neurological diseases.” The computing power needed to build a virtual brain successfully is astronomical, since each “simulated neuron requires the equivalent of a laptop computer” and the human cortex alone has “two million columns, each having in the order of 100,000 neurons each”. The virtual brain not only needs to simulate this enormous number of neurons; it also has to simulate the connections between individual neurons and between groups of neurons (some of which overlap), as well as the computations performed over these sets of neurons. The researchers claim that “Blue Brain is a resounding success”: the generated models “show a behavior already observed in years of neuroscientific experiments” and “will be basic building blocks for larger scale models leading towards a complete virtual brain.” Promising research, perhaps, but since the whole process is “entirely data driven and essentially automatically executed on the supercomputer”, it is unlikely to yield a fruitful scientific understanding of the brain/mind because, as Jerry Fodor puts it, “we’re heavily invested in finding answers to which we don’t know the corresponding questions.”
As many reflective scientists, historians, and philosophers of science will tell you, science is driven by theories, not by data. Take the nervous system of the honeybee, which is understood so well that researchers have constructed a robotic honeybee which, when placed inside a colony, is indistinguishable from a real honeybee (the robotic honeybee can, for example, perform the famous waggle dance in order to instruct other honeybees in the colony about the locations of food sources). However, as Marc Hauser remarks, “We’ve gotten almost nowhere in how the bee’s brain represents the simplicity of the dance language. Although any good biologist, after several hours of observation, can predict accurately where the bee is going, we currently have no understanding of how the brain actually performs that computation.” In other words, prediction or duplication (or simulation, for that matter) is not the same as explanation. If the neural basis of the behaviour of bees (which can be simulated to perfection) remains a mystery, the prospects for understanding vastly more complicated systems such as the human brain are remote at best.
Chomsky makes a similar point when he says:
“It’s all totally meaningless, so I don’t participate in the debate. Humans can be taught to do a fair imitation of the complex bee communication system. That is not of the slightest interest to bee scientists, who are rational, and understand something about science: they are interested in the nature of bees, and it is of no interest if some other organism can be trained to partially mimic some superficial aspects of the waggle dance. And one could of course not get a grant to teach grad students to behave like imperfect bees. When we turn to the study of humans, for some reason irrationality commonly prevails — possibly a reflection of old-fashioned dualism — and it is considered significant that apes (or birds, which tend to do much better) can be trained to mimic some superficial aspects of human language. But the same rational criteria should hold as in the case of bees and graduate students. Possibly training graduate students to mimic the waggle dance could teach us something about human capacity, though it’s unlikely. Similarly, it’s possible that training apes to do things with signs can teach us something about the cognitive capacities of apes. That’s the way the matter is approached by serious scientists, like Anne and David Premack. Others prefer to fool themselves.”
What is needed are better theories to guide the research, not more data. As Fodor quipped in the London Review of Books:
There’s a funny didactic fable of Bernard Shaw’s called, I think, The Little Black Girl in Search of God, in which the eponymous heroine wanders around what was then the intellectual landscape, looking for such wisdom as may be on offer. She runs into Pavlov, who explains to her why he is, rather horribly, drilling holes in the mouths of dogs: it’s to show that expecting food makes them salivate. ‘But we already knew that,’ she says, in some perplexity. ‘Now we know it scientifically,’ Pavlov replies. It may be that some such thought also motivates the current interest in brain localisation. Granted that we always sort of knew that there’s a difference between nouns and verbs, or between thinking about teapots and taking a nap, we didn’t really know it till somebody found them at different places in the brain. Now that somebody has, we know it scientifically.