Theory of predictive brain as important as evolution — an interview with Lars Muckli

Our brains make sense of the world by predicting what we will see and then updating these predictions as the situation demands, according to Lars Muckli, professor of neuroscience at the Centre for Cognitive Neuroimaging in Glasgow, Scotland. He says that this predictive processing framework is as important to brain science as evolution is to biology.

Horizon magazine: You have used advanced brain imaging techniques to come up with a model of how the brain processes vision – and it says that instead of just sorting through what we see, our brains actually anticipate what we will see next. Could you tell us a bit more?

Lars Muckli: ‘We are interested to understand how the brain supports vision. A classical view had been that the brain is responding to visual information in a cascade of hierarchical visual areas with increasing complexity, but a more modern way is to realise that, actually, the brain is not meeting every situation with a clean sheet, but with lots of predictions.’

How does that work?

‘The main purpose of the brain, as we understand it today, is it is basically a prediction machine that is optimising its own predictions of the environment it is navigating through. So, vision starts with an expectation of what is around the corner. Once you turn around the corner, you are then negotiating potential inputs to your predictions – and then responding differently to surprise and to fulfilment of expectations.

‘So that’s what’s called the predictive processing framework, and it’s a proposed unifying theory of the brain. It’s basically creating an internal model of what’s going to happen next.’

Why does this happen?

‘First of all, the outside world is not in our brain so somehow we need to get something into our brain that is a useful description of what’s happening – and that’s a challenge.

‘We become painfully aware of this challenge if we try to simulate this in a computer model – how do we get information about the outside world into a computer model? The brain does that in an unsupervised way. It segments the visual input into object, background, foreground, context, people and so on, and no one ever gives the brain any kind of supervision to do so.

‘To have meaningful models of the world, you need to have something like a supervisor in your brain that says: “This is Object A. This is another object, and you need to find a name for this.” We don’t have a supervisor, but we have something – and that’s the currency of surprise. (The need) to minimise surprise is used as a supervisor.’
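The "surprise as supervisor" idea Muckli describes can be caricatured in a few lines of code: an internal model predicts the next input, and the only teaching signal is the gap between prediction and arrival. This is a toy sketch, not a model of cortical processing; the function name, the 0.3 learning rate, and the signal values are invented purely for illustration.

```python
# Toy sketch of "surprise as supervisor": the model's only teaching
# signal is the gap between what it predicted and what arrived.
# The learning rate (0.3) and the signal are invented for illustration.

def update_prediction(prediction, observation, learning_rate=0.3):
    surprise = observation - prediction           # prediction error
    return prediction + learning_rate * surprise  # nudge toward the input

prediction = 0.0                           # initial expectation
for observation in [1.0, 1.0, 1.0, 1.0]:   # a stable environment
    prediction = update_prediction(prediction, observation)

# Each pass shrinks the surprise: the model settles toward the world
# it keeps encountering (prediction is roughly 0.76 after four steps).
```

No one labels the inputs here; minimizing the error term is what stands in for supervision, which is the point of the passage above.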

[Read more…]

Nabokov’s experiments with time

Michael Wood writes:

Language has many forms of quiet kindness, refusals of stark alternatives. “Never” can mean “not always,” and “impossible” may mean “not now.” Insomnia may mean a shortage of sleep rather than its entire absence, and when Gennady Barabtarlo writes that “Nabokov typically remembered having his dreams at dawn, right before awakening after a sleepless night,” or indeed calls his own book Insomniac Dreams, we are looking not so much at a paradox as a touch of logical leeway. There is no need to go “beyond logic,” as Nabokov says one of the characters does in his story “The Vane Sisters,” but we do often need to bend it a little, ask it to relax.

In October 1964, Nabokov began the experiment that Barabtarlo expertly unfolds for us:

Every morning, immediately upon awakening, he would write down what he could rescue of his dreams. During the following day or two he was on the lookout for anything that seemed to do with the recorded dream.

He continued the record, written in English on index cards now kept in the Berg Collection of the New York Public Library, until the beginning of January 1965. He and his wife, Véra, were living at the Palace Hotel in Montreux, Switzerland. Lolita and his teaching career at Wellesley and Cornell lay in the past. He had published Pale Fire in 1962 and completed his translation of and commentary on Eugene Onegin, which appeared in June 1964. The English version of his early novel The Defense came out in September of that year, and he was working on the Russian translation of Lolita. The novels still to come were Ada, or Ardor (1969), Transparent Things (1972), Look at the Harlequins! (1974), and the fragmentary, posthumous The Original of Laura (2009).

The “experiments with time” of Barabtarlo’s subtitle have several points of reference. There is the book An Experiment with Time, by J.W. Dunne, first published in 1927, with several later editions, which prompted Nabokov’s attempt at a dream record. A card dated October 14, 1964, is headed “An Experiment” and the words “Re Dunne” are written in a corner. “The following checking of dream events,” Nabokov writes,

was undertaken to illustrate the principle of “reverse memory.” The waking event resembling or coinciding with the dream event does so not because the latter is a prophecy but because this would be the kind of dream that one might expect to have after the event.

“Not because the latter is a prophecy” is the voice of Nabokov’s caution, and pretty much contradicts Dunne’s claim. Dunne’s idea is that we routinely dream of the future but deny the experience because we think this can’t happen; to him, precognitive dreams are as normal as memory or anxiety dreams. [Continue reading…]

Don’t miss the latest posts at Attention to the Unseen: Sign up for email updates.

We develop the capacity to reason before we can speak

The Verge reports:

One-year-old babies may not be able to speak, but they are able to think logically, according to new research that shows the earliest known foundation of our ability to reason.

Legendary psychologist Jean Piaget believed that we didn’t have logical reasoning abilities until we were seven, but scientists tracked the eye movements of 48 babies and found that they’re able to reason through the process of elimination. The research was published today in the journal Science.

The type of reasoning in question, process of elimination, is formally called “disjunctive syllogism.” It goes like this: if only A or B can be true, and A is false, then B must be true. So, if the cup is either red or blue, and it is not red, then it is blue. Process of elimination isn’t necessarily the easiest form of reasoning, says Justin Halberda, a psychologist and child development expert at Johns Hopkins University who was not involved in today’s study, but it’s a crucial one for higher thinking. “One of the central pieces that separates human reasoning from all other forms is to negate a premise — you see that if it’s not A, it’s something else,” he says. “That’s quite fancy stuff.” [Continue reading…]
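The disjunctive syllogism described above is simple enough to state as code. A minimal sketch, using the article's red/blue cup illustration; the function name and its generalization to more than two options are my own.

```python
# Process of elimination over mutually exclusive options:
# if only one option can be true, and all but one are ruled out,
# the remaining option must be true.

def eliminate(options, ruled_out):
    """Return the option left after elimination, or None if ambiguous."""
    remaining = [o for o in options if o not in ruled_out]
    return remaining[0] if len(remaining) == 1 else None

# "If the cup is either red or blue, and it is not red, then it is blue."
print(eliminate(["red", "blue"], {"red"}))  # blue
```

The negation step Halberda highlights is the `o not in ruled_out` test: the conclusion follows not from seeing B directly, but from ruling A out.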

Are we smart enough to know how smart animals are?

Frans de Waal asks: are we smart enough to know how smart animals are?

Just as attitudes of superiority within segments of human culture are often expressions of ignorance, humans collectively — especially when subject to the dislocating effects of technological dependence — tend to underestimate the levels of awareness and cognitive skills of creatures who live mostly outside our sight. This tendency translates into presuppositions that need to be challenged by what de Waal calls his “cognitive ripple rule”:

Every cognitive capacity that we discover is going to be older and more widespread than initially thought.

In a review of de Waal’s book, Are We Smart Enough to Know How Smart Animals Are?, Ludwig Huber notes that there are a multitude of illustrations of the fact that brain size does not correlate with cognitive capacities.

Whereas we once thought of humans as having unique capabilities in learning and the use of tools, we now know these attributes place us in a set of species that also includes bees. Our prior assumptions about seemingly robotic behavior in such creatures turn out to have been an expression of our own anthropocentric prejudices.

Huber writes:

Various doctrines of human cognitive superiority are made plausible by a comparison of human beings and the chimpanzees. For questions of evolutionary cognition, this focus is one-sided. Consider the evolution of cooperation in social insects, such as the Matabele ant (Megaponera analis). After a termite attack, these ants provide medical services. Having called for help by means of a chemical signal, injured ants are brought back to the nest. Their increased chance of recovery benefits the entire colony. Red forest ants (Myrmica rubra) have the ability to perform simple arithmetic operations and to convey the results to other ants.

When it comes to adaptations in animals that require sophisticated neural control, evolution offers other spectacular examples. The banded archerfish (Toxotes jaculatrix) is able to spit a stream of water at its prey, compensating for refraction at the boundary between air and water. It can also track the distance of its prey, so that the jet develops its greatest force just before impact. Laboratory experiments show that the banded archerfish spits on target even when the trajectory of its prey varies. Spit hunting is a technique that requires the same timing used in throwing, an activity otherwise regarded as uniquely human. In human beings, the development of throwing has led to an enormous further development of the brain. And the archerfish? The calculations required for its extraordinary hunting technique are based on the interplay of about six neurons. Neural mini-networks could therefore be much more widespread in the animal kingdom than previously thought.

Research on honeybees (Apis mellifera) has brought to light the cognitive capabilities of minibrains. Honeybees have no brains in the real sense. Their neuronal density, however, is among the highest in insects, with roughly 960,000 neurons, far fewer than any vertebrate. Even if the brain size of honeybees is normalized to their body size, their relative brain size is lower than that of most vertebrates. On that basis, insect behavior should be less complex, less flexible, and less modifiable than vertebrate behavior. But honeybees learn quickly how to extract pollen and nectar from a large number of different flowers. They care for their young, organize the distribution of tasks, and, with the help of the waggle dance, they inform each other about the location and quality of distant food and water.

Early research by Karl von Frisch suggested that such abilities cannot be the result of inflexible information processing and rigid behavioral programs. Honeybees learn and they remember. The most recent experimental research has, in confirming this conclusion, created an astonishing picture of the honeybee’s cognitive competence. Their representation of the world does not consist entirely of associative chains. It is far more complex, flexible, and integrative. Honeybees show configural conditioning, biconditional discrimination, context-dependent learning and remembering, and even some forms of concept formation. Bees are able to classify images based on such abstract features as bilateral symmetry and radial symmetry; they can comprehend landscapes in a general way, and spontaneously come to classify new images. They have recently been promoted to the set of species capable of social learning and tool use.

In any case, the much smaller brain of the bee does not appear to be a fundamental limitation for comparable cognitive processes, or at least their performance. Jumping spiders and cephalopods are similarly instructive. The similarities between mammals and bees are astonishing, but they cannot be traced to homologous neurological developments. As long as the animal’s neural architecture remains unknown, we cannot determine the cause of their similarity. [Continue reading…]
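The refraction the archerfish corrects for is ordinary optics, which gives a sense of the computation its handful of neurons must approximate. A back-of-the-envelope sketch using Snell's law with standard textbook refractive indices; nothing here models the fish's actual neural circuit.

```python
import math

# Snell's law at the water-air boundary:
#   n_water * sin(theta_water) = n_air * sin(theta_air)
# Standard refractive indices; angles measured from the vertical.
N_WATER, N_AIR = 1.33, 1.00

def angle_in_air(theta_water_deg):
    """Direction a ray leaving the water takes in air (degrees from vertical)."""
    s = (N_WATER / N_AIR) * math.sin(math.radians(theta_water_deg))
    return math.degrees(math.asin(s))  # valid below the critical angle (~48.8 deg)

# A jet launched 30 degrees off vertical underwater bends to about
# 41.7 degrees in air, so a prey's apparent and true positions differ;
# the fish's aim has to build in that offset.
```

That a correction of this kind can run on roughly six neurons is what makes the example striking.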

Plants, people, and decision-making

Laura Ruggles writes:

Plants are not simply organic, passive automata. We now know that they can sense and integrate information about dozens of different environmental variables, and that they use this knowledge to guide flexible, adaptive behaviour.

For example, plants can recognise whether nearby plants are kin or unrelated, and adjust their foraging strategies accordingly. The flower Impatiens pallida, also known as pale jewelweed, is one of several species that tend to devote a greater share of resources to growing leaves rather than roots when put with strangers – a tactic apparently geared towards competing for sunlight, an imperative that is diminished when you are growing next to your siblings. Plants also mount complex, targeted defences in response to recognising specific predators. The small, flowering Arabidopsis thaliana, also known as thale or mouse-ear cress, can detect the vibrations caused by caterpillars munching on it and so release oils and chemicals to repel the insects.

Plants also communicate with one another and other organisms, such as parasites and microbes, using a variety of channels – including ‘mycorrhizal networks’ of fungus that link up the root systems of multiple plants, like some kind of subterranean internet. Perhaps it’s not really so surprising, then, that plants learn and use memories for prediction and decision-making.

Can plants make decisions?

A lot of people will balk at such a notion for obvious reasons. For instance, the idea of plants as decision-makers suggests the possibility of some plants making good decisions, others bad, and some suffering from indecisiveness.

Isn’t what is being presented as a decision simply the product of a particular constellation of factors? In which case the outcome is determined and no decision is involved.

Maybe, but let’s flip this around and instead of questioning a posited decision-making process inside plants, consider what happens inside humans.

My favorite way of doing this is by attempting to zero in on the moment an action is initiated — the moment, for instance, when one decides to stand up from sitting.

Within the general field of awareness, there will probably be a phase of rumination and some physical precursors of action, but the exact moment in which the action starts — that seems to come out of nowhere. We function more like puppets animated by an invisible puppeteer and then mask our lack of agency with a narrative of purpose, after the fact.

Not sure about the agentless nature of physical action? Then consider this: what’s the next thought that will pop into your head?

Of course, we never actually know what’s going to arrive before it gets delivered. The brain offers no tracking service like Amazon.