A tiny change in brain organization without which humans never could have evolved

Douglas Fox writes:

Suzana Herculano-Houzel spent most of 2003 perfecting a macabre recipe—a formula for brain soup. Sometimes she froze the jiggly tissue in liquid nitrogen, and then she liquefied it in a blender. Other times she soaked it in formaldehyde and then mashed it in detergent, yielding a smooth, pink slurry.

Herculano-Houzel had completed her Ph.D. in neuroscience several years earlier, and in 2002, she had begun working as an assistant professor at the Federal University of Rio de Janeiro in Brazil. She had no real funding, no laboratory of her own—just a few feet of counter space borrowed from a colleague.

“I was interested in questions that could be answered with very little money [and] very little technology,” she recalls. Even so, she had a bold idea. With some effort—and luck—she hoped to accomplish something with her kitchen-blender project that had bedeviled scientists for over a century: to count the number of cells in the brain—not just the human brain, but also the brains of marmosets, macaque monkeys, shrews, giraffes, elephants, and dozens of other mammals.

Her method might have seemed carelessly destructive at first. How could annihilating such a fragile and complex organ provide any useful insights? But 15 years on, the work of Herculano-Houzel and her team has overturned some long-held ideas about the evolution of the human mind. It is helping to reveal the fundamental design principles of brains and the biological basis of intelligence: why some large brains lead to enhanced intelligence while others provide no benefit at all. Her work has unveiled a subtle tweak in brain organization that happened more than 60 million years ago, not long after primates branched off from their rodent-like cousins. It might have been a tiny change—but without it, humans never could have evolved. [Continue reading…]

Octopuses on ecstasy reveal genetic link to evolution of social behaviors in humans

Johns Hopkins University School of Medicine:

By studying the genome of a kind of octopus not known for its friendliness toward its peers, then testing its behavioral reaction to a popular mood-altering drug called MDMA or “ecstasy,” scientists say they have found preliminary evidence of an evolutionary link between the social behaviors of the sea creature and humans, species separated by 500 million years on the evolutionary tree.

A summary of the experiments is published Sept. 20 in Current Biology, and if the findings are validated, the researchers say, they may open opportunities for accurately studying the impact of psychiatric drug therapies in many animals distantly related to people.

“The brains of octopuses are more similar to those of snails than humans, but our studies add to evidence that they can exhibit some of the same behaviors that we can,” says Gül Dölen, M.D., Ph.D., assistant professor of neuroscience at the Johns Hopkins University School of Medicine and the lead investigator conducting the experiments. “What our studies suggest is that certain brain chemicals, or neurotransmitters, that send signals between neurons required for these social behaviors are evolutionarily conserved.”

Octopuses, says Dölen, are well known to be clever creatures. They can trick prey into coming into their clutches, and Dölen says there is some evidence they also learn by observation and have episodic memory. The gelatinous invertebrates (animals without backbones) are also notorious for escaping from their tanks, eating other animals’ food, eluding caretakers and sneaking around.

Most octopuses, however, are asocial animals that avoid others, including other octopuses. Yet because of some of their behaviors, Dölen still thought there might be a link between the genetics that guide social behavior in octopuses and in humans. One place to look was the genes that guide neurotransmitters, the signals that neurons pass between each other to communicate. [Continue reading…]

The digital corruption of the human brain

Maryanne Wolf writes:

Look around on your next plane trip. The iPad is the new pacifier for babies and toddlers. Younger school-aged children read stories on smartphones; older boys don’t read at all, but hunch over video games. Parents and other passengers read on Kindles or skim a flotilla of email and news feeds. Unbeknownst to most of us, an invisible, game-changing transformation links everyone in this picture: the neuronal circuit that underlies the brain’s ability to read is subtly, rapidly changing – a change with implications for everyone from the pre-reading toddler to the expert adult.

As work in the neurosciences indicates, the acquisition of literacy necessitated a new circuit in our species’ brain more than 6,000 years ago. That circuit evolved from a very simple mechanism for decoding basic information, like the number of goats in one’s herd, to the present, highly elaborated reading brain. My research depicts how the present reading brain enables the development of some of our most important intellectual and affective processes: internalized knowledge, analogical reasoning, and inference; perspective-taking and empathy; critical analysis and the generation of insight. Research surfacing in many parts of the world now cautions that each of these essential “deep reading” processes may be under threat as we move into digital-based modes of reading.

This is not a simple, binary issue of print vs digital reading and technological innovation. As MIT scholar Sherry Turkle has written, we do not err as a society when we innovate, but when we ignore what we disrupt or diminish while innovating. In this hinge moment between print and digital cultures, society needs to confront what is diminishing in the expert reading circuit, what our children and older students are not developing, and what we can do about it. [Continue reading…]

Peter Tse: Free will — essence and nature


Brains keep temporary molecular records before making a lasting memory

Like the day’s newspaper, the brain has a temporary way to keep track of events.
TonTonic/Shutterstock.com

By Kelsey Tyssowski, Harvard University

The first dance at my wedding lasted exactly four minutes and 52 seconds, but I’ll probably remember it for decades. Neuroscientists still don’t entirely understand this: How was my brain able to translate this less-than-five-minute experience into a lifelong memory? Part of the puzzle is that there’s a gap between experience and memory: our experiences are fleeting, but it takes hours to form a long-term memory.

In recent work published in the journal Neuron, my colleagues and I figured out how the brain keeps temporary molecular records of transient experiences. Our finding not only helps to explain how the brain bridges the gap between experience and memory. It also allows us to read the brain’s short-term records, raising the possibility that we may one day be able to infer a person’s, or at least a laboratory mouse’s, past experience – what they saw, thought, felt – just by looking at the molecules in their brain.

Electrical pulses carry signals along the branches of neurons.
Santiago Ramón y Cajal, CC BY

[Read more…]

We are more than our brains

Alan Jasanoff writes:

Brains are undoubtedly somewhat computer-like – computers, after all, were invented to perform brain-like functions – but brains are also much more than bundles of wiry neurons and the electrical impulses they are famous for propagating. The function of each neuroelectrical signal is to release a little flood of chemicals that helps to stimulate or suppress brain cells, in much the way that chemicals activate or suppress functions such as glucose production by liver cells or immune responses by white blood cells. Even the brain’s electrical signals themselves are the products of chemicals called ions that move in and out of cells, causing tiny ripples that can spread independently of neurons.

Also distinct from neurons are the relatively passive brain cells called glia (Greek for glue) that are roughly equal in number to the neurons but do not conduct electrical signals in the same way. Recent experiments in mice have shown that manipulating these uncharismatic cells can produce dramatic effects on behaviour. In one experiment, a research group in Japan showed that direct stimulation of glia in a brain region called the cerebellum could cause a behavioural response analogous to changes more commonly evoked by stimulation of neurons. Another remarkable study showed that transplantation of human glial cells into mouse brains boosted the animals’ performance in learning tests, again demonstrating the importance of glia in shaping brain function. Chemicals and glue are as integral to brain function as wiring and electricity. With these moist elements factored in, the brain seems much more like an organic part of the body than the idealised prosthetic many people imagine.

Stereotypes about brain complexity also contribute to the mystique of the brain and its distinction from the body. It has become a cliché to refer to the brain as ‘the most complex thing in the known Universe’. This saying is inspired by the finding that human brains contain something on the order of 100,000,000,000 neurons, each of which makes about 10,000 connections (synapses) to other neurons. The daunting nature of such numbers provides cover for people who argue that neuroscience will never decipher consciousness, or that free will lurks somehow among the billions and billions.
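
Taken at face value, and purely as back-of-the-envelope arithmetic on the figures quoted above, those estimates imply a total connection count of roughly a quadrillion:

$$10^{11}\ \text{neurons} \times 10^{4}\ \text{synapses per neuron} \approx 10^{15}\ \text{synapses}$$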

But the sheer number of cells in the human brain is unlikely to explain its extraordinary capabilities. Human livers have roughly the same number of cells as brains, but certainly don’t generate the same results. Brains themselves vary in size over a considerable range – by around 50 per cent in mass and likely number of brain cells. Radical removal of half of the brain is sometimes performed as a treatment for epilepsy in children. Commenting on a cohort of more than 50 patients who underwent this procedure, a team at Johns Hopkins in Baltimore wrote that they were ‘awed by the apparent retention of memory after removal of half of the brain, either half, and by the retention of the child’s personality and sense of humour’. Clearly not every brain cell is sacred.

If one looks out into the animal kingdom, vast ranges in brain size fail to correlate with apparent cognitive power at all. Some of the most perspicacious animals are the corvids – crows, ravens, and rooks – which have brains less than 1 per cent the size of a human brain, but still perform feats of cognition comparable to chimpanzees and gorillas. Behavioural studies have shown that these birds can make and use tools, and recognise people on the street, feats that even many primates are not known to achieve. Within individual orders, animals with similar characteristics also display huge differences in brain size. Among rodents, for instance, we can find the 80-gram capybara brain with 1.6 billion neurons and the 0.3-gram pygmy mouse brain with probably fewer than 60 million neurons. Despite a greater than 100-fold difference in brain size, these species live in similar habitats, lead similarly social lifestyles, and show no obvious differences in intelligence. Although neuroscience is only beginning to parse brain function even in small animals, such reference points show that it is mistaken to mystify the brain because of its sheer number of components.

Playing up the machine-like qualities of the brain or its unbelievable complexity distances it from the rest of the biological world in terms of its composition. But a related form of brain-body distinction exaggerates how the brain stands apart in terms of its autonomy from body and environment. This flavour of dualism contributes to the cerebral mystique by enhancing the brain’s reputation as a control centre, receptive to bodily and environmental input but still in charge.

Contrary to this idea, our brains themselves are perpetually influenced by torrents of sensory input. The environment shoots many megabytes of sensory data into the brain every second, enough information to disable many computers. The brain has no firewall against this onslaught. Brain-imaging studies show that even subtle sensory stimuli influence regions of the brain, ranging from low-level sensory regions where input enters the brain to parts of the frontal lobe, the high-level brain area that is expanded in humans compared with many other primates.

Many of these stimuli seem to take direct control of us. For instance, when we view illustrations, visual features often seem to grab our eyes and steer our gaze around in spatial patterns that are largely reproducible from person to person. If we see a face, our focus darts reflexively among eyes, nose and mouth, subconsciously taking in key features. When we walk down the street, our minds are similarly manipulated by stimuli in the surroundings – the honk of a car’s horn, the flashing of a neon light, the smell of pizza – each of which guides our thoughts and actions even if we don’t realise that anything has happened.

Even further below our radar are environmental features that act on a slower timescale to influence our mood and emotions. Seasonal low light levels are famous for their correlation with depression, a phenomenon first described by the South African physician Norman Rosenthal soon after he moved from sunny Johannesburg to the grey northeastern United States in the 1970s. Colours in our surroundings also affect us. Although the idea that colours have psychic power evokes New Age mysticism, careful experiments have repeatedly linked cold colours such as blue and green to positive emotional responses, and hot red hues to negative responses. In one example, researchers showed that participants performed worse on IQ tests labelled with red marks than on tests labelled with green or grey; another study found that subjects performed better on computerised creativity tests delivered on a blue background than on a red background.

Signals from within the body influence behaviour just as powerfully as influences from the environment, again usurping the brain’s command and challenging idealised conceptions of its supremacy. [Continue reading…]

In order to remember, it’s necessary to forget

Dalmeet Singh Chawla writes:

Past theories about forgetting mostly emphasized relatively passive processes in which the loss of memories was a consequence of the physical traces of those memories (what some researchers refer to as “engrams”) naturally breaking down or becoming harder to access; those engrams may typically be interconnections between brain cells that prompt them to fire in a certain way. This forgetting process could involve the spontaneous decay of connections between neurons that encode a memory, the random death of those neurons, the failure of systems that would normally help to consolidate and stabilize new memories, or the loss of context cues or other factors that might make it hard to retrieve a memory.

Now, however, researchers are paying much more attention to mechanisms that actively erase or hide those memory engrams.

One form of active forgetting that scientists formally identified in 2017 is called intrinsic forgetting. It involves a certain subset of cells in the brain — which Ronald Davis and Yi Zhong, who wrote the paper that introduced the idea, casually call “forgetting cells” — that degrade the engrams in memory cells. [Continue reading…]

A theory of reality as more than the sum of its parts

Natalie Wolchover writes:

In his 1890 opus, The Principles of Psychology, William James invoked Romeo and Juliet to illustrate what makes conscious beings so different from the particles that make them up.

“Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they,” James wrote. “But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings. … Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet’s lips directly.”

Erik Hoel, a 29-year-old theoretical neuroscientist and writer, quoted the passage in a recent essay in which he laid out his new mathematical explanation of how consciousness and agency arise. The existence of agents — beings with intentions and goal-oriented behavior — has long seemed profoundly at odds with the reductionist assumption that all behavior arises from mechanistic interactions between particles. Agency doesn’t exist among the atoms, and so reductionism suggests agents don’t exist at all: that Romeo’s desires and psychological states are not the real causes of his actions, but merely approximate the unknowably complicated causes and effects between the atoms in his brain and surroundings.

Hoel’s theory, called “causal emergence,” roundly rejects this reductionist assumption.

“Causal emergence is a way of claiming that your agent description is really real,” said Hoel, a postdoctoral researcher at Columbia University who first proposed the idea with Larissa Albantakis and Giulio Tononi of the University of Wisconsin, Madison. “If you just say something like, ‘Oh, my atoms made me do it’ — well, that might not be true. And it might be provably not true.”

Using the mathematical language of information theory, Hoel and his collaborators claim to show that new causes — things that produce effects — can emerge at macroscopic scales. They say coarse-grained macroscopic states of a physical system (such as the psychological state of a brain) can have more causal power over the system’s future than a more detailed, fine-grained description of the system possibly could. Macroscopic states, such as desires or beliefs, “are not just shorthand for the real causes,” explained Simon DeDeo, an information theorist and cognitive scientist at Carnegie Mellon University and the Santa Fe Institute who is not involved in the work, “but it’s actually a description of the real causes, and a more fine-grained description would actually miss those causes.” [Continue reading…]
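
For readers who want to see how “more causal power at the macro scale” can be quantified at all, here is a minimal Python sketch of the kind of effective-information comparison Hoel’s framework relies on. The toy transition matrices, the coarse-graining, and the function name are illustrative assumptions made up for this example, not models taken from the paper.

```python
import numpy as np

def effective_information(tpm):
    """Mutual information between causes and effects when the system is
    'intervened on' uniformly at random over its states (Hoel-style EI)."""
    n = tpm.shape[0]
    p_cause = np.full(n, 1.0 / n)        # uniform intervention distribution
    p_effect = p_cause @ tpm             # distribution over resulting states
    h_effect = -np.sum(p_effect[p_effect > 0] * np.log2(p_effect[p_effect > 0]))
    # average uncertainty about the effect once the cause is known
    h_effect_given_cause = np.mean(
        [-np.sum(row[row > 0] * np.log2(row[row > 0])) for row in tpm]
    )
    return h_effect - h_effect_given_cause

# Micro description: states 0-2 lead to one another at random; state 3 is fixed.
micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Macro description: lump states {0, 1, 2} into A and {3} into B.
macro = np.array([
    [1.0, 0.0],   # A -> A, deterministically
    [0.0, 1.0],   # B -> B, deterministically
])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the coarser description carries more causal information
```

In this toy case the coarse-grained description is deterministic while the fine-grained one is noisy, so the macro level scores higher on the measure, which is the flavor of result the term “causal emergence” refers to.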

Neurons can carry more than one signal at a time

Duke Today:

Back in the early days of telecommunications, engineers devised a clever way to send multiple telephone calls through a single wire at the same time. Called time-division multiplexing, this technique rapidly switches between sending pieces of each message.
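
As a rough illustration of the engineering idea (not of the neural data), here is a minimal Python sketch of time-division multiplexing; the function names and the two toy “calls” are made up for the example.

```python
# Two "calls" share one channel by taking turns in fixed time slots.
def multiplex(call_a, call_b):
    """Interleave samples from two sources onto one channel, slot by slot."""
    channel = []
    for a, b in zip(call_a, call_b):
        channel.extend([a, b])      # even slots carry call A, odd slots call B
    return channel

def demultiplex(channel):
    """Recover each call by reading back only its own time slots."""
    return channel[0::2], channel[1::2]

channel = multiplex([1, 2, 3], [10, 20, 30])
print(channel)               # [1, 10, 2, 20, 3, 30]
print(demultiplex(channel))  # ([1, 2, 3], [10, 20, 30])
```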

New research from Duke University shows that neurons in the brain may be capable of a similar strategy.

In an experiment examining how monkeys respond to sound, a team of neuroscientists and statisticians found that a single neuron can encode information from two different sounds by switching between the signal associated with one sound and the signal associated with the other sound.

“The question we asked is, how do neurons preserve information about two different stimuli in the world at one time?” said Jennifer Groh, professor in the department of psychology and neuroscience, and in the department of neurobiology at Duke.

“We found that there are periods of time when a given neuron responds to one stimulus, and other periods of time where it responds to the other,” Groh said. “They seem to be able to alternate between each one.”

The results may explain how the brain processes complex information from the world around us, and may also provide insight into some of our perceptual and cognitive limitations. The results appeared July 13 in Nature Communications. [Continue reading…]