What are the limits of manipulating nature?

In Scientific American, Neil Savage writes:

Matt Trusheim flips a switch in the darkened laboratory, and an intense green laser illuminates a tiny diamond locked in place beneath a microscope objective. On a computer screen an image appears, a fuzzy green cloud studded with brighter green dots. The glowing dots are color centers in the diamond—tiny defects where two carbon atoms have been replaced by a single atom of tin, shifting the light passing through from one shade of green to another.

Later, that diamond will be chilled to the temperature of liquid helium. By controlling the crystal structure of the diamond on an atom-by-atom level, bringing it to within a few degrees of absolute zero and applying a magnetic field, researchers at the Quantum Photonics Laboratory run by physicist Dirk Englund at the Massachusetts Institute of Technology think they can select the quantum-mechanical properties of photons and electrons with such precision that they can transmit unbreakable secret codes.

Trusheim, a postdoctoral researcher in the lab, is one of many scientists trying to figure out just which atoms embedded in which crystals under what conditions will give them that kind of control. Indeed, scientists around the world are tackling the hard problem of controlling nature at the level of atoms and below, down to electrons or even fractions of electrons. Their aim is to find the knobs that control the fundamental properties of matter and energy and turn those knobs to customize matter and energy, creating ultrapowerful quantum computers or superconductors that work at room temperature.

These scientists face two main challenges. On a technical level, the work is extremely difficult. Some crystals, for instance, must be made to 99.99999999 percent purity in vacuum chambers emptier than space. The more fundamental challenge is that the quantum effects these researchers want to harness—for example, the ability of a particle to be in two states at once, à la Schrödinger’s cat—happen at the level of individual electrons. Up here in the macro world, the magic goes away. Researchers manipulating matter at the smallest scales, therefore, are trying to coax nature into behaving in ways that strain at the limits imposed by fundamental physics. The degree to which they succeed will help determine our scientific understanding and technological capacity in the decades to come. [Continue reading…]

The next big discovery in astronomy? Scientists probably found it years ago – but they don’t know it yet

An artist’s illustration of a black hole “eating” a star.
NASA/JPL-Caltech

By Eileen Meyer, University of Maryland, Baltimore County

Earlier this year, astronomers stumbled upon a fascinating finding: Thousands of black holes likely exist near the center of our galaxy.

The X-ray images that enabled this discovery weren’t from some state-of-the-art new telescope. Nor were they even recently taken – some of the data was collected nearly 20 years ago.

No, the researchers discovered the black holes by digging through old, long-archived data.

Discoveries like this will only become more common, as the era of “big data” changes how science is done. Astronomers are gathering an exponentially greater amount of data every day – so much that it will take years to uncover all the hidden signals buried in the archives.

The evolution of astronomy

Sixty years ago, the typical astronomer worked largely alone or in a small team. They likely had access to a respectably large ground-based optical telescope at their home institution.

Their observations were largely confined to optical wavelengths – more or less what the eye can see. That meant they missed signals from a host of astrophysical sources, which can emit non-visible radiation from very low-frequency radio all the way up to high-energy gamma rays. For the most part, if you wanted to do astronomy, you had to be an academic or eccentric rich person with access to a good telescope.

Old data was stored in the form of photographic plates or published catalogs. But accessing archives from other observatories could be difficult – and it was virtually impossible for amateur astronomers.

Today, there are observatories that cover the entire electromagnetic spectrum. No longer operated by single institutions, these state-of-the-art observatories are usually launched by space agencies and are often joint efforts involving many countries.

With the coming of the digital age, almost all data are publicly available shortly after they are obtained. This makes astronomy very democratic – anyone who wants to can reanalyze almost any data set that makes the news. (You too can look at the Chandra data that led to the discovery of thousands of black holes!)

[Read more…]

The Dreamtime, science and narratives of Indigenous Australia

Lake Mungo and the surrounding Willandra Lakes of NSW formed around 150,000 years ago.
from www.shutterstock.com

David Lambert, Griffith University

This article is an extract from an essay Owning the science: the power of partnerships in First Things First, the 60th edition of Griffith Review.

We’re publishing it as part of our occasional series Zoom Out, where authors explore key ideas in science and technology in the broader context of society and humanity.

Scientific and Indigenous knowledge systems have often been in conflict. In my view, too much is made of these conflicts; they have a lot in common.

For example, Indigenous knowledge typically takes the form of a narrative, usually a spoken story about how the world came to be. In a similar way, evolutionary theories, which aim to explain why particular characters are adapted to certain functions, also take the form of narratives. Both narratives are mostly focused on “origins”.

From a strictly genetic perspective, progress on origins research in Australia has been particularly slow. Early ancient DNA studies were focused on remains from permafrost conditions in Antarctica and cool temperate environments such as northern Europe, including Greenland.

But Australia is very different. Here, human remains are very old, and many are recovered from very hot environments.

While ancient DNA studies have played an important role in informing understanding of the evolution of our species worldwide, little is known about the levels of ancient genomic variation in Australia’s First Peoples – although some progress has been made in recent years. This includes the landmark recovery of genomic sequences from both contemporary and ancient Aboriginal Australian remains.

[Read more…]

Should scientists advocate on the issue of climate change?

Ingfei Chen writes:

I was recently chatting with a friend who specializes in science education when we touched upon a conundrum: Researchers who study climate change grasp the dire need to cut planet-warming carbon emissions that come from burning fossil fuels, yet many of them shrink from voicing their views at public events or to the press. The stock-in-trade of scientists is their objectivity, my friend explained. The worry is that advocating for an agenda may diminish their credibility, hurt their careers, and make them sitting ducks for political attacks online. But what are their moral obligations here? What should they do?

Curious, I did some digging and queried a few experts and, well, the issues here are complicated. You’ve got a decades-old controversy over whether scientists should be advocates, and you’ve got climate change, a diffuse, slow-moving, and highly politicized problem. To be sure, researchers should inform the public about what they’ve learned from their studies and suggest potential ways forward. But views differ on what, exactly, is the best way for them to make the case for action, particularly given the politics involved and what can seem like real reputational risks.

For one perspective, my friend pointed me to the work of philosopher and climate activist Kathleen Dean Moore, an emerita professor at Oregon State University. I looked up Moore’s poetic and poignant 2016 book, “Great Tide Rising: Towards Clarity and Moral Courage in a Time of Planetary Change,” which lays out razor-edged arguments for why climate change is a matter of moral urgency. Given the existential nature of its hazards, letting climate change proceed unchecked is a betrayal of the children who inherit the planetary mess we’ve made. It’s a gross human-rights violation, because poorer nations will suffer the most. The list goes on.

“If you have all the facts” — that is, the scientific consensus on climate change — “and if you have this moral affirmation of our duty, then you know what you ought to do,” Moore told me when I called her. “You know that it’s necessary for the government, everybody to take action.” And climate scientists bear a particular moral responsibility because “they know, more than anybody else, the dangers that we face,” she said. Backed by the authority of their science, “they have powerful voices if they would choose to use them.” [Continue reading…]

Don’t miss the latest posts at Attention to the Unseen: Sign up for email updates.

Colossal cosmic collision alters understanding of early universe

Reuters reports:

Astronomers have detected the early stages of a colossal cosmic collision, observing a pile-up of 14 galaxies 90 percent of the way across the observable universe in a discovery that upends assumptions about the early history of the cosmos.

Researchers said on Wednesday the galactic mega-merger observed 12.4 billion light-years away from Earth occurred 1.4 billion years after the Big Bang that gave rise to the universe. Astronomers call the object a galactic protocluster, a precursor to the type of enormous galaxy clusters that are the largest-known objects in today’s universe.

It marked the first time scientists observed the birth of a galaxy cluster, with at least 14 galaxies crammed into an area only about four times the size of our average-sized Milky Way galaxy.

A protocluster as massive as the one observed here, designated SPT2349-56, should not have existed at that time, according to current notions of the early universe. Scientists had figured this could not happen until several billion years later. [Continue reading…]

Gaia mission releases map of more than a billion stars – here’s what it can teach us

Gaia’s view of our Milky Way and neighbouring galaxies.
ESA/Gaia/DPAC, CC BY-SA

By George Seabroke, UCL

Most of us have looked up at the night sky and wondered how far away the stars are or in what direction they are moving. The truth is, scientists don’t know the exact positions or velocities of the vast majority of the stars in the Milky Way. But now a new tranche of data from the European Space Agency’s Gaia satellite, aiming to map stars in our galaxy in unprecedented detail, has come in to shed light on the issue.

The Gaia Archive opened on April 25, making Gaia’s second data release public. To quote the character Dave Bowman in the sci-fi classic 2001: A Space Odyssey: “It’s full of stars”. In fact, it contains data on the distances to more than 1.3 billion stars.

The Gaia satellite was launched in 2013 and has been scanning the sky with its two telescopes continuously ever since, with the aim of deciphering how our Milky Way galaxy formed and evolved. To do this, it is measuring something called parallax. If you hold a finger at arm’s length and look at it with one eye and then the other, your finger appears to shift position compared to the background. The angular change is called parallax.

Being in space allows Gaia to see similar tiny shifts in star positions. Observations at different locations six months apart (halfway through its orbit around the sun) are akin to looking at your finger with one eye and then the other. When you know the parallax as well as the distance from Gaia to the sun (or the distance from your nose to your eye), you can use simple trigonometry to work out the distance to each star (or your finger).
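The parallax trigonometry described above can be sketched in a few lines. This is only an illustration of the underlying geometry, not Gaia’s actual data-processing pipeline; the sample parallax figure for Proxima Centauri is an approximate published value, not taken from the article.

```python
def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Convert an annual parallax angle (milliarcseconds) to a distance.

    The parsec is defined so that a star showing a parallax of exactly
    1 arcsecond over the Earth-sun baseline (1 AU) lies at 1 parsec,
    which reduces the small-angle trigonometry to a reciprocal:
    d [pc] = 1 / p [arcsec].
    """
    if parallax_mas <= 0:
        raise ValueError("parallax must be positive to infer a distance")
    return 1000.0 / parallax_mas  # mas -> arcsec, then take the reciprocal

PC_TO_LY = 3.26156  # light-years per parsec

# Proxima Centauri, the nearest star, has a parallax of roughly 768 mas:
d_pc = parallax_to_distance_pc(768.0)
print(f"{d_pc:.2f} pc, about {d_pc * PC_TO_LY:.1f} light-years")
```

In practice naively inverting a measured parallax biases the distance when the measurement error is a large fraction of the angle, as it is for Gaia’s faintest, most distant stars, but the reciprocal captures the geometry the article describes.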

[Read more…]

China’s great leap forward in science

Philip Ball writes:

I first met Xiaogang Peng in the summer of 1992 at Jilin University in Changchun, in the remote north-east of China, where he was a postgraduate student in the department of chemistry. He told me that his dream was to get a place at a top American lab. Now, Xiaogang was evidently smart and hard-working – but so, as far as I could see, were most Chinese science students. I wished him well, but couldn’t help thinking he’d set himself a massive challenge.

Fast forward four years to when, as an editor at Nature, I published a paper on nanotechnology from world-leading chemists at the University of California at Berkeley. Among them was Xiaogang. That 1996 paper now appears in a 10-volume compendium of the all-time best Nature papers being published in translation in China.

I watched Xiaogang go on to forge a solid career in the US; in 2005 he became a tenured professor at the University of Arkansas. But when I recently had reason to get in touch with Xiaogang again, I discovered that he had moved back to China and is now at Zhejiang University in Hangzhou – one of the country’s foremost academic institutions.

For Xiaogang, it seems that America was no longer the only land of opportunity. These days, Chinese scientists stand at least as good a chance of making a global impact on science from within China itself.

The economic rise of China has been accompanied by a waxing of its scientific prowess. In January, the United States National Science Foundation reported that the number of scientific publications from China in 2016 outnumbered those from the US for the first time: 426,000 versus 409,000. Sceptics might say that it’s about quality, not quantity. But the patronising old idea that China, like the rest of east Asia, can imitate but not innovate is certainly false now. In several scientific fields, China is starting to set the pace for others to follow. On my tour of Chinese labs in 1992, only those I saw at the flagship Peking University looked comparable to what you might find at a good university in the west. Today the resources available to China’s top scientists are the envy of many of their western counterparts. Whereas once the best Chinese scientists would pack their bags for greener pastures abroad, today it’s common for Chinese postdoctoral researchers to get experience in a leading lab in the west and then head home, where the Chinese government will help them set up a lab that can eclipse those of their western competitors. [Continue reading…]

Burning coal may have caused Earth’s worst mass extinction

Dana Nuccitelli writes:

Recently, geologist Dr Benjamin Burger identified a rock layer in Utah that he believed might have formed during the Permian and subsequent Triassic period that could shed light on the cause of the Great Dying [the Earth’s deadliest mass extinction 252 million years ago].

During the Permian, Earth’s continents were still combined as the single supercontinent Pangaea, and modern-day Utah was on its west coast. Samples from the end-Permian have been collected from rock layers in Asia, near the volcanic eruptions, but Utah was on the other side of Pangaea. Burger’s samples could thus provide a unique perspective on what was happening on the other side of the world from the eruptions. Burger collected and analyzed samples from the rock layer and documented the whole process in a fascinating video.

Burger’s samples painted a grim picture of Earth’s environment at the end of the Permian period. A sharp drop in calcium carbonate levels indicated that the oceans had become acidic. A similar decline in organic content matched up with the immense loss of life in the oceans during this period. The presence of pyrite pointed to an anoxic ocean (without oxygen), meaning the oceans were effectively one massive dead zone.

Bacteria ate the oversupply of dead bodies, producing hydrogen sulfide gas, creating a toxic atmosphere. The hydrogen sulfide oxidized in the atmosphere to form sulfur dioxide, creating acid rain, which killed much of the plant life on Earth. Elevated barium levels in the samples had likely been carried up from the ocean depths by a massive release of methane.

Levels of various metals in the rock samples were critical in identifying the culprit of this mass extinction event. As in end-Permian samples collected from other locations around the world, Burger didn’t find the kinds of rare metals that are associated with asteroid impacts. There simply isn’t evidence that an asteroid struck at the right time to cause the Great Dying.

However, Burger did find high levels of mercury and lead in his samples, coinciding with the end of the Permian period. Mercury has also been identified in end-Permian samples from other sites. Lead and mercury aren’t associated with volcanic ash, but they are a byproduct of burning coal. [Continue reading…]

There’s no scientific basis for race — it’s a made-up label

Elizabeth Kolbert writes:

In the first half of the 19th century, one of America’s most prominent scientists was a doctor named Samuel Morton. Morton lived in Philadelphia, and he collected skulls.

He wasn’t choosy about his suppliers. He accepted skulls scavenged from battlefields and snatched from catacombs. One of his most famous craniums belonged to an Irishman who’d been sent as a convict to Tasmania (and ultimately hanged for killing and eating other convicts). With each skull Morton performed the same procedure: He stuffed it with pepper seeds—later he switched to lead shot—which he then decanted to ascertain the volume of the braincase.

Morton believed that people could be divided into five races and that these represented separate acts of creation. The races had distinct characters, which corresponded to their place in a divinely determined hierarchy. Morton’s “craniometry” showed, he claimed, that whites, or “Caucasians,” were the most intelligent of the races. East Asians—Morton used the term “Mongolian”—though “ingenious” and “susceptible of cultivation,” were one step down. Next came Southeast Asians, followed by Native Americans. Blacks, or “Ethiopians,” were at the bottom. In the decades before the Civil War, Morton’s ideas were quickly taken up by the defenders of slavery.

“He had a lot of influence, particularly in the South,” says Paul Wolff Mitchell, an anthropologist at the University of Pennsylvania who is showing me the skull collection, now housed at the Penn Museum. We’re standing over the braincase of a particularly large-headed Dutchman who helped inflate Morton’s estimate of Caucasian capacities. When Morton died, in 1851, the Charleston Medical Journal in South Carolina praised him for “giving to the negro his true position as an inferior race.”

Today Morton is known as the father of scientific racism. So many of the horrors of the past few centuries can be traced to the idea that one race is inferior to another that a tour of his collection is a haunting experience. To an uncomfortable degree we still live with Morton’s legacy: Racial distinctions continue to shape our politics, our neighborhoods, and our sense of self.

This is the case even though what science actually has to tell us about race is just the opposite of what Morton contended. [Continue reading…]

Science shouldn’t treat the undetectable as taboo

Adam Becker writes:

The Viennese physicist Wolfgang Pauli suffered from a guilty conscience. He’d solved one of the knottiest puzzles in nuclear physics, but at a cost. ‘I have done a terrible thing,’ he admitted to a friend in the winter of 1930. ‘I have postulated a particle that cannot be detected.’

Despite his pantomime of despair, Pauli’s letters reveal that he didn’t really think his new sub-atomic particle would stay unseen. He trusted that experimental equipment would eventually be up to the task of proving him right or wrong, one way or another. Still, he worried he’d strayed too close to transgression. Things that were genuinely unobservable, Pauli believed, were anathema to physics and to science as a whole.

Pauli’s views persist among many scientists today. It’s a basic principle of scientific practice that a new theory shouldn’t invoke the undetectable. Rather, a good explanation should be falsifiable – which means it ought to rely on some hypothetical data that could, in principle, prove the theory wrong. These interlocking standards of falsifiability and observability have proud pedigrees: falsifiability goes back to the mid-20th-century philosopher of science Karl Popper, and observability goes further back than that. Today they’re patrolled by self-appointed guardians, who relish dismissing some of the more fanciful notions in physics, cosmology and quantum mechanics as just so many castles in the sky. The cost of allowing such ideas into science, say the gatekeepers, would be to clear the path for all manner of manifestly unscientific nonsense.

But for a theoretical physicist, designing sky-castles is just part of the job. Spinning new ideas about how the world could be – or in some cases, how the world definitely isn’t – is central to their work. Some structures might be built up with great care over many years, and end up with peculiar names such as inflationary multiverse or superstring theory. Others are fabricated and dismissed casually over the course of a single afternoon, found and lost again by a lone adventurer in the troposphere of thought.

That doesn’t mean it’s just freestyle sky-castle architecture out there at the frontier. The goal of scientific theory-building is to understand the nature of the world with increasing accuracy over time. All that creative energy has to hook back onto reality at some point. But turning ingenuity into fact is much more nuanced than simply announcing that all ideas must meet the inflexible standards of falsifiability and observability. These are not measures of the quality of a scientific theory. They might be neat guidelines or heuristics, but as is usually the case with simple answers, they’re also wrong, or at least only half-right. [Continue reading…]