The dangers of distracted parenting

Erika Christakis writes:

Smartphones have by now been implicated in so many crummy outcomes—car fatalities, sleep disturbances, empathy loss, relationship problems, failure to notice a clown on a unicycle—that it almost seems easier to list the things they don’t mess up than the things they do. Our society may be reaching peak criticism of digital devices.

Even so, emerging research suggests that a key problem remains underappreciated. It involves kids’ development, but it’s probably not what you think. More than screen-obsessed young children, we should be concerned about tuned-out parents.

Yes, parents now have more face time with their children than did almost any parents in history. Despite a dramatic increase in the percentage of women in the workforce, mothers today astoundingly spend more time caring for their children than mothers did in the 1960s. But the engagement between parent and child is increasingly low-quality, even ersatz. Parents are constantly present in their children’s lives physically, but they are less emotionally attuned. To be clear, I’m not unsympathetic to parents in this predicament. My own adult children like to joke that they wouldn’t have survived infancy if I’d had a smartphone in my clutches 25 years ago.

To argue that parents’ use of screens is an underappreciated problem isn’t to discount the direct risks screens pose to children: Substantial evidence suggests that many types of screen time (especially those involving fast-paced or violent imagery) are damaging to young brains. Today’s preschoolers spend more than four hours a day facing a screen. And, since 1970, the average age of onset of “regular” screen use has gone from four years to just four months.

Some of the newer interactive games kids play on phones or tablets may be more benign than watching TV (or YouTube), in that they better mimic children’s natural play behaviors. And, of course, many well-functioning adults survived a mind-numbing childhood spent watching a lot of cognitive garbage. (My mother—unusually for her time—prohibited Speed Racer and Gilligan’s Island on the grounds of insipidness. That I somehow managed to watch every single episode of each show scores of times has never been explained.) Still, no one really disputes the tremendous opportunity costs to young children who are plugged in to a screen: Time spent on devices is time not spent actively exploring the world and relating to other human beings.

Yet for all the talk about children’s screen time, surprisingly little attention is paid to screen use by parents themselves, who now suffer from what the technology expert Linda Stone more than 20 years ago called “continuous partial attention.” This condition is harming not just us, as Stone has argued; it is harming our children. The new parental-interaction style can interrupt an ancient emotional cueing system, whose hallmark is responsive communication, the basis of most human learning. We’re in uncharted territory. [Continue reading…]

Google’s artificial intelligence drone project for the Pentagon provoked a backlash against the company

Gizmodo reports:

Google will not seek another contract for its controversial work providing artificial intelligence to the U.S. Department of Defense for analyzing drone footage after its current contract expires.

Google Cloud CEO Diane Greene announced the decision at a meeting with employees Friday morning, three sources told Gizmodo. The current contract expires in 2019 and there will not be a follow-up contract, Greene said. The meeting, dubbed Weather Report, is a weekly update on Google Cloud’s business.

Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week. A Google spokesperson did not immediately respond to questions about Greene’s comments.

Google’s decision to provide artificial intelligence to the Defense Department for the analysis of drone footage has prompted backlash from Google employees and academics. Thousands of employees have signed a petition asking Google to cancel its contract for the project, nicknamed Project Maven, and dozens of employees have resigned in protest.

Google, meanwhile, defended its work on Project Maven, with senior executives noting that the contract is of relatively little value and that its contribution amounts merely to providing the Defense Department with open-source software.

But internal emails reviewed by Gizmodo show that executives viewed Project Maven as a golden opportunity that would open doors for business with the military and intelligence agencies. The emails also show that Google and its partners worked extensively to develop machine learning algorithms for the Pentagon, with the goal of creating a sophisticated system that could surveil entire cities. [Continue reading…]

The total information awareness we feared the government acquiring, we have freely given to the tech giants

Renee DiResta writes:

“Every purchase you make with a credit card, every magazine subscription you buy and medical prescription you fill, every Web site you visit and e-mail you send or receive, every academic grade you receive, every bank deposit you make, every trip you book and every event you attend—all these transactions and communications will go into … a virtual, centralized grand database,” the New York Times columnist warns.

On the heels of Mark Zuckerberg’s numerous government testimonies and sustained criticism over the Cambridge Analytica scandal, the author of this Times column must be talking about Facebook—right? Or perhaps the web’s broader, ad-based business model?

Not so: The William Safire column, “You Are a Suspect,” was published in the Times in 2002—two years before Facebook was created. And Safire isn’t talking about social networks or digital advertising—he’s discussing Total Information Awareness, a US Defense Advanced Research Projects Agency (Darpa) program that proposed mining vast amounts of Americans’ data to identify potential national security threats. The virtual grand database was to belong to the Department of Defense, which would use it to identify behavior patterns that would help to predict emerging terrorist threats.

Today, we’re voluntarily participating in the dystopian scenario Safire envisioned 16 years ago, with each bit of data handed to companies like Facebook and Google. But in this system, private companies are our information repositories—leaving us to reckon with the consequences of a world that endows corporations with the kind of data once deemed too outrageous for the government. [Continue reading…]

What are the limits of manipulating nature?

In Scientific American, Neil Savage writes:

Matt Trusheim flips a switch in the darkened laboratory, and an intense green laser illuminates a tiny diamond locked in place beneath a microscope objective. On a computer screen an image appears, a fuzzy green cloud studded with brighter green dots. The glowing dots are color centers in the diamond—tiny defects where two carbon atoms have been replaced by a single atom of tin, shifting the light passing through from one shade of green to another.

Later, that diamond will be chilled to the temperature of liquid helium. By controlling the crystal structure of the diamond on an atom-by-atom level, bringing it to within a few degrees of absolute zero and applying a magnetic field, researchers at the Quantum Photonics Laboratory run by physicist Dirk Englund at the Massachusetts Institute of Technology think they can select the quantum-mechanical properties of photons and electrons with such precision that they can transmit unbreakable secret codes.

Trusheim, a postdoctoral researcher in the lab, is one of many scientists trying to figure out just which atoms embedded in which crystals under what conditions will give them that kind of control. Indeed, scientists around the world are tackling the hard problem of controlling nature at the level of atoms and below, down to electrons or even fractions of electrons. Their aim is to find the knobs that control the fundamental properties of matter and energy and turn those knobs to customize matter and energy, creating ultrapowerful quantum computers or superconductors that work at room temperature.

These scientists face two main challenges. On a technical level, the work is extremely difficult. Some crystals, for instance, must be made to 99.99999999 percent purity in vacuum chambers emptier than space. The more fundamental challenge is that the quantum effects these researchers want to harness—for example, the ability of a particle to be in two states at once, à la Schrödinger’s cat—happen at the level of individual electrons. Up here in the macro world, the magic goes away. Researchers manipulating matter at the smallest scales, therefore, are trying to coax nature into behaving in ways that strain at the limits imposed by fundamental physics. The degree to which they succeed will help determine our scientific understanding and technological capacity in the decades to come. [Continue reading…]
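
To get a feel for what a figure like 99.99999999 percent purity means in practice, here is a back-of-envelope sketch in Python. The one-carat (0.2 g) pure-carbon crystal is an assumption chosen for illustration; only the purity figure comes from the excerpt above.

```python
# Rough scale of "99.99999999 percent purity" for a hypothetical
# one-carat (0.2 g) diamond; the carat figure is an assumption.
AVOGADRO = 6.022e23          # atoms per mole
CARBON_MOLAR_MASS = 12.01    # grams per mole

atoms = 0.2 / CARBON_MOLAR_MASS * AVOGADRO   # ~1.0e22 carbon atoms
impurity_fraction = 1e-10                    # ten nines of purity

print(f"{atoms * impurity_fraction:.1e} stray atoms remain")
# -> ~1.0e+12: even at that purity, a trillion out-of-place atoms
```

Which is some indication of how hard "controlling nature at the level of atoms" actually is.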

Human society is unprepared for the rise of artificial intelligence

Henry Kissinger writes:

The internet age in which we already live prefigures some of the questions and issues that AI will only make more acute. The Enlightenment sought to submit traditional verities to a liberated, analytic human reason. The internet’s purpose is to ratify knowledge through the accumulation and manipulation of ever expanding data. Human cognition loses its personal character. Individuals turn into data, and data become regnant.

Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs. In the process, search-engine algorithms acquire the capacity to predict the preferences of individual clients, enabling the algorithms to personalize results and make them available to other parties for political or commercial purposes. Truth becomes relative. Information threatens to overwhelm wisdom.

Inundated via social media with the opinions of multitudes, users are diverted from introspection; in truth many technophiles use the internet to avoid the solitude they dread. All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity.

The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision.

The digital world’s emphasis on speed inhibits reflection; its incentive empowers the radical over the thoughtful; its values are shaped by subgroup consensus, not by introspection. For all its achievements, it runs the risk of turning on itself as its impositions overwhelm its conveniences. [Continue reading…]

Data centers, the factories of the digital age, emit as much CO2 as the airline industry

Yale Environment 360 reports:

The cloud is coming back to Earth with a bump. That ethereal place where we store our data, stream our movies, and email the world has a physical presence – in hundreds of giant data centers that are taking a growing toll on the planet.

Data centers are the factories of the digital age. These mostly windowless, featureless boxes are scattered across the globe – from Las Vegas to Bangalore, and Des Moines to Reykjavik. They run the planet’s digital services. Their construction alone costs around $20 billion a year worldwide.

The biggest, covering a million square feet or more, consume as much power as a city of a million people. In total, they eat up more than 2 percent of the world’s electricity and emit roughly as much CO2 as the airline industry. And with global data traffic more than doubling every four years, they are growing fast.
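
The growth claim is easy to make concrete. A minimal sketch of the compounding implied by “more than doubling every four years” (the doubling period is taken from the excerpt; everything else is illustrative):

```python
def traffic_multiplier(years: float, doubling_years: float = 4.0) -> float:
    """Growth factor if traffic doubles every `doubling_years` years."""
    return 2 ** (years / doubling_years)

for y in (4, 8, 12):
    print(f"after {y:>2} years: x{traffic_multiplier(y):.0f}")
# after 4 years: x2, after 8 years: x4, after 12 years: x8 --
# and that is a lower bound, since the article says "more than doubling"
```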

Yet if there is a data center near you, the chances are you don’t know about it. And you still have no way of knowing which center delivers your Netflix download, nor whether it runs on renewable energy using processors cooled by Arctic air, or runs on coal power and sits in desert heat, cooled by gigantically inefficient banks of refrigerators.

We are often told that the world’s economy is dematerializing – that physical analog stuff is being replaced by digital data, and that this data has minimal ecological footprint. But not so fast. If the global IT industry were a country, only China and the United States would contribute more to climate change, according to a Greenpeace report investigating “the race to build a green internet,” published last year. [Continue reading…]

Bitcoin consumes more energy than Switzerland

Eric Holthaus writes:

Bitcoin’s energy footprint has more than doubled since Grist first wrote about it six months ago.

It’s expected to double again by the end of the year, according to a new peer-reviewed study out Wednesday. And if that happens, bitcoin would be gobbling up 0.5 percent of the world’s electricity, about as much as the Netherlands.

That’s a troubling trajectory, especially for a world that should be working overtime to root out energy waste and fight climate change. By late next year, bitcoin could be consuming more electricity than all the world’s solar panels currently produce — about 1.8 percent of global electricity, according to a simple extrapolation of the study’s predictions. That would effectively erase decades of progress on renewable energy.
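
As a rough check on that “simple extrapolation”: if the footprint reaches 0.5 percent of world electricity at the end of the year and keeps doubling roughly every six months (my reading of the pace the piece describes, not a figure from the study itself), the arithmetic lands close to the 1.8 percent quoted:

```python
# Hedged reproduction of the article's extrapolation. The 0.5 percent
# figure is from the study as reported; the six-month doubling period
# is an assumption inferred from the piece.
share_end_of_year = 0.5   # percent of world electricity
doubling_months = 6

def projected_share(months_ahead: float) -> float:
    return share_end_of_year * 2 ** (months_ahead / doubling_months)

print(f"{projected_share(11):.1f}%")  # ~1.8% by "late next year"
```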

Although the author of the study, Alex de Vries, an economist and data consultant based in the Netherlands, has shared these calculations publicly before, this is the first time that an analysis of bitcoin’s energy appetite has appeared in a peer-reviewed journal.

Bitcoin continues to soar in popularity — mostly as a speculative investment. And like any supercharged speculative investment, it swings wildly. Within the past 18 months, the price of bitcoin has soared ten-fold, crashed by 75 percent, only to double again, all while hedge funds and wealthy libertarians debate the future of the virtual currency.

Beyond its tentative success as a get-rich-quick scheme, bitcoin has an increasingly real-world cost. The process of “mining” for coins requires a globally distributed computer network racing to solve math problems — and also helps keep any individual transaction confidential and tamper-proof. That, in turn, requires an ever-escalating arms race of computing power — and electricity use — which, at the moment, has no end in sight. A single bitcoin transaction is so energy intensive that it could power the average U.S. household for a month. [Continue reading…]
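
The “math problems” in question are brute-force hash puzzles. A toy Python sketch of the idea follows — an illustration of proof-of-work in general, not Bitcoin’s actual consensus code; the block contents and difficulty here are made up:

```python
import hashlib

def mine(block: bytes, difficulty: int) -> int:
    """Find a nonce whose double-SHA-256 digest starts with
    `difficulty` zero hex digits -- pure trial and error."""
    target = "0" * difficulty
    nonce = 0
    while True:
        payload = block + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Every extra required zero multiplies the expected number of guesses
# (and the electricity burned) by 16 -- the "ever-escalating arms race".
print(mine(b"toy block header", 4))
```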

William Morris’ vision of a world free from wage slavery is finally within reach

Vasilis Kostakis and Wolfgang Drechsler write:

At the beginning of the 21st century, a new world is emerging. Not since Marx identified the manufacturing plants of Manchester as the blueprint for the new capitalist society has there been a deeper transformation of the fundamentals of our socioeconomic life. A new commons-based mode of production – enabled by information and communication technology (ICT), what we now call digitisation – redefines how we (can) produce, consume and distribute. This pathway is exemplified by interconnected collaborative initiatives that produce a wide range of artifacts, from encyclopaedias and software to agricultural machines, wind turbines, satellites and prosthetics…

As recently as two decades ago, most people would have thought it absurd to countenance a free and open encyclopaedia, produced by a community of dispersed enthusiasts driven primarily by motives other than profit-maximisation; the idea that this might displace the corporate-organised Encyclopaedia Britannica and Microsoft Encarta would have seemed preposterous. Similarly, very few people would have thought it possible that the top 500 supercomputers and the majority of websites would run on software produced in the same way, or that non-coercive cooperation using globally shared resources could produce artifacts as effectively as industrial capitalism, but more sustainably. And it would have been unimaginable that such things could be created through processes far more pleasant than the working conditions under which such products are typically made.

Commons-based production goes against many of the assumptions of mainstream, standard-textbook economists. Individuals motivated primarily by the desire to maximise profit, together with competition and private property, are the Holy Grail of innovation and progress – more than that, of freedom and liberty themselves. One should never forget these two everlasting ‘truths’ if one wants to understand the economy and the world, we are told. These are the two premises of the free-market economics that have dominated the discourse until today.

So, is GNU/Linux, the free and open-source software that drives those 500 supercomputers, an exception that proves the rule? What about the Apache HTTP Server, the leading software in the web-server market, or Wikipedia? The legal scholar Yochai Benkler at Harvard University was one of the first to observe that such commons-based projects are by now too common to be considered anomalies. Already a decade ago (when smartphones were a novelty), Benkler argued in The Wealth of Networks (2006) that a new mode of production was emerging that would shape how we produce and consume information. He called this mode ‘commons-based peer production’ and claimed that it can deliver better artifacts while promoting another aspect of human nature: social cooperation. Digitisation does not change the human person (in this respect), it just allows her to develop in ways that had previously been blocked, whether by chance or design. [Continue reading…]

The Dreamtime, science and narratives of Indigenous Australia

[Image: Lake Mungo and the surrounding Willandra Lakes of NSW were established around 150,000 years ago. Source: www.shutterstock.com]

David Lambert, Griffith University

This article is an extract from the essay ‘Owning the science: the power of partnerships’, published in First Things First, the 60th edition of Griffith Review.

We’re publishing it as part of our occasional series Zoom Out, where authors explore key ideas in science and technology in the broader context of society and humanity.

Scientific and Indigenous knowledge systems have often been in conflict. In my view, too much is made of these conflicts; they have a lot in common.

For example, Indigenous knowledge typically takes the form of a narrative, usually a spoken story about how the world came to be. In a similar way, evolutionary theories, which aim to explain why particular characters are adapted to certain functions, also take the form of narratives. Both narratives are mostly focused on “origins”.

Read more: Friday essay: when did Australia’s human history begin?

From a strictly genetic perspective, progress on origins research in Australia has been particularly slow. Early ancient DNA studies were focused on remains from permafrost conditions in Antarctica and cool temperate environments such as northern Europe, including Greenland.

But Australia is very different. Here, human remains are very old, and many are recovered from very hot environments.

While ancient DNA studies have played an important role in informing understanding of the evolution of our species worldwide, little is known about the levels of ancient genomic variation in Australia’s First Peoples – although some progress has been made in recent years. This includes the landmark recovery of genomic sequences from both contemporary and ancient Aboriginal Australian remains.

[Read more…]

Palantir knows everything about you

Bloomberg reports:

High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.

Aided by as many as 120 “forward-deployed engineers” from the data mining company Palantir Technologies Inc., which JPMorgan engaged in 2009, Cavicchia’s group vacuumed up emails and browser histories, GPS locations from company-issued smartphones, printer and download activity, and transcripts of digitally recorded phone conversations. Palantir’s software aggregated, searched, sorted, and analyzed these records, surfacing keywords and patterns of behavior that Cavicchia’s team had flagged for potential abuse of corporate assets. Palantir’s algorithm, for example, alerted the insider threat team when an employee started badging into work later than usual, a sign of potential disgruntlement. That would trigger further scrutiny and possibly physical surveillance after hours by bank security personnel.
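
To make the badge-in rule concrete, here is a minimal sketch of the kind of per-employee anomaly flag described — entirely hypothetical code, not Palantir’s software; the two-sigma threshold and sample times are my own choices:

```python
from statistics import mean, stdev

def late_badge_in(history: list[int], today: int, threshold: float = 2.0) -> bool:
    """Flag a badge-in unusually late relative to this employee's own
    history (times given as minutes after midnight)."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > threshold

usual = [505, 512, 498, 520, 507, 515, 501]   # ~8:20-8:40 a.m. arrivals
print(late_badge_in(usual, 610))              # 10:10 a.m. -> True, flagged
```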

Over time, however, Cavicchia himself went rogue. Former JPMorgan colleagues describe the environment as Wall Street meets Apocalypse Now, with Cavicchia as Colonel Kurtz, ensconced upriver in his office suite eight floors above the rest of the bank’s security team. People in the department were shocked that no one from the bank or Palantir set any real limits. They darkly joked that Cavicchia was listening to their calls, reading their emails, watching them come and go. Some planted fake information in their communications to see if Cavicchia would mention it at meetings, which he did.

It all ended when the bank’s senior executives learned that they, too, were being watched, and what began as a promising marriage of masters of big data and global finance descended into a spying scandal. The misadventure, which has never been reported, also marked an ominous turn for Palantir, one of the most richly valued startups in Silicon Valley. An intelligence platform designed for the global War on Terror was weaponized against ordinary Americans at home.

Founded in 2004 by Peter Thiel and some fellow PayPal alumni, Palantir cut its teeth working for the Pentagon and the CIA in Afghanistan and Iraq. The company’s engineers and products don’t do any spying themselves; they’re more like a spy’s brain, collecting and analyzing information that’s fed in from the hands, eyes, nose, and ears. The software combs through disparate data sources—financial documents, airline reservations, cellphone records, social media postings—and searches for connections that human analysts might miss. It then presents the linkages in colorful, easy-to-interpret graphics that look like spider webs. U.S. spies and special forces loved it immediately; they deployed Palantir to synthesize and sort the blizzard of battlefield intelligence. It helped planners avoid roadside bombs, track insurgents for assassination, even hunt down Osama bin Laden. The military success led to federal contracts on the civilian side. The U.S. Department of Health and Human Services uses Palantir to detect Medicare fraud. The FBI uses it in criminal probes. The Department of Homeland Security deploys it to screen air travelers and keep tabs on immigrants.

Police and sheriff’s departments in New York, New Orleans, Chicago, and Los Angeles have also used it, frequently ensnaring in the digital dragnet people who aren’t suspected of committing any crime. People and objects pop up on the Palantir screen inside boxes connected to other boxes by radiating lines labeled with the relationship: “Colleague of,” “Lives with,” “Operator of [cell number],” “Owner of [vehicle],” “Sibling of,” even “Lover of.” If the authorities have a picture, the rest is easy. Tapping databases of driver’s license and ID photos, law enforcement agencies can now identify more than half the population of U.S. adults. [Continue reading…]
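
The boxes-and-lines view is, at heart, a search over a graph with labeled edges. A self-contained sketch of that style of link analysis — invented names and relationships, nothing drawn from Palantir:

```python
from collections import deque

# Toy graph of typed relationships, echoing the labels quoted above.
EDGES = {
    "Alice": [("Bob", "Colleague of"), ("Carol", "Lives with")],
    "Bob": [("Dave", "Sibling of")],
    "Dave": [("555-0199", "Operator of")],
}

def neighbors(node):
    for src, links in EDGES.items():
        for dst, label in links:
            if src == node:
                yield dst, label
            elif dst == node:
                yield src, label   # treat relationships as symmetric

def connect(start, goal):
    """Shortest chain of relationships between two entities (BFS)."""
    queue, seen = deque([[(start, None)]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1][0] == goal:
            return path
        for nxt, label in neighbors(path[-1][0]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [(nxt, label)])
    return None

for node, label in connect("Carol", "555-0199"):
    print(f"--[{label}]--> {node}" if label else node)
# Carol --[Lives with]--> Alice --[Colleague of]--> Bob
#       --[Sibling of]--> Dave --[Operator of]--> 555-0199
```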
