Let’s cultivate our material intelligence

Glenn Adamson writes:

Are you sitting comfortably? If so, how much do you know about the chair that’s holding you off the ground – what it’s made from, and what its production process looked like? Where it was made, and by whom? Or go deeper: how were the materials used to make the chair extracted from the planet? Most people will find it difficult to answer these basic questions. The object cradling your body remains, in many ways, mysterious to you.

Quite probably, you are surrounded by many things of which you know next to nothing – among them, the device on which you are reading these words. Most of us live in a state of general ignorance about our physical surroundings. It’s not our fault; centuries of technological sophistication and global commerce have distanced most of us from making physical things, and even from seeing or knowing how they are made. But the slow and pervasive separation of people from knowledge of the material world brings with it a serious problem.

Until about a century ago, most people knew a great deal about their immediate material world. Fewer and fewer do today, as commodities circulate with ever greater speed over greater distances. Because of the sheer complexity of contemporary production, even the people who do have professional responsibility for making things – the engineers and factory workers and chemists among us – tend to be specialists. Deepened knowledge usually also means narrowed knowledge. This tends to obscure awareness of the extended production chains through which materials, tools, components and packaging are sourced. Nobody – not an assembly-line worker, not a CEO – has a comprehensive vantage point. It is partly a problem of scale: the wider the view becomes, the harder it is to see clearly what’s close at hand.

In effect, we are living in a state of perpetual remote control. As Carl Miller argues in his book The Death of the Gods (2018), algorithms have taken over many day-to-day procedures. These algorithms are themselves driven by algorithms, in a cascade of interconnected calculation. Such automated decision-making is extremely efficient, but it has contributed to a crisis of accountability. If no one understands what is really happening, how can anyone be held responsible? This lack of transparency gives rise to a range of ethical dilemmas, chief among them our inability to address climate change, due in part to prevalent psychological separation from the processes of extraction, manufacture and disposal. [Continue reading…]

The technology industry is run by capitalists pretending to be idealists

Ian Bogost writes:

Businesspeople are in business for the money, won directly through profits and indirectly through the forces of market speculation. And yet, for more than a decade now, the technology industry has persuaded the public, and the Street, that the efforts of firms such as Facebook and Google are conducted first for reasons of social benefit. Their aim, their leaders intone, is to “change the world,” even as it becomes clear that some of the changes in question are detrimental rather than beneficial. Perhaps it was inevitable that these optimistic, tech-industry entreaties of the aughts—make the world more open and connected, don’t be evil, and so on—would be revealed as mere Pollyannaism, naive but ultimately righteous in their ambition.

Facebook’s own public defenses lean in, as it were, to that interpretation. Mark Zuckerberg opened his testimony before Congress earlier this year by insisting that “Facebook is an idealistic and optimistic company.” More recently, as the company has reeled from report after report suggesting that it knew more, and earlier, about how its platform was being used for election meddling, its executives repeated over and over again that it had been “too slow” to respond to Russian election interference and other techniques of manipulation on the platform.

That’s a smart parry, because it implicitly reinforces the righteousness of the company and its mission. It’s not okay to have been slow, the messaging suggests, but it’s understandable given the company’s ambitious, global optimism. “Connecting people” is difficult work, so cut us some slack, the company seems to be saying.

In this context, Sandberg has an impossible task. She was hired to be “the adult in the room,” that much is true. But that epithet has been interpreted too literally and without guile. It was never really meant to involve deepening the sophistication with which Facebook understands “community” or “connecting people.” Her job was to make the company profitable, and she did. [Continue reading…]

Is cyclical time the cure to technology’s ills?

Stephen E. Nash writes:

The world changed dramatically on June 29, 2007. That’s the day when the iPhone first became available to the public.

In the 11 years since, more than 8.5 billion smartphones of all makes and models have been sold worldwide. Smartphone technology has allowed billions of people to enter and participate in a new, cybernetic, and ever more complex and rapid relationship with the world.

Humans have been tumbling headlong into this new digital frontier for a quarter century—since the World Wide Web went public. Until recently, that digital frontier kept pace with Moore’s law, the observation that the number of transistors on a chip (and with it computing power) doubles roughly every two years. With artificial intelligence, virtual reality, social media, and other mind-blowing developments, our technological world gets ever more interesting, changes ever faster, and, at least from my archaeological perspective, becomes ever more daunting. The rapidity of technological change, and by extension our current relationship to time, is undeniably unusual when viewed against the long evolutionary history of our species.

To illustrate what I mean, we can examine the rate of technological change across the epic sweep of humanity, from the moment we first appeared as a species in Africa until today. In so doing, we can gain a better understanding of the relationship between time, technology, and humans. [Continue reading…]

What happened beyond the Western Front

Priya Satia writes:

Baghdad’s fall in 1917 was hailed as “the most triumphant piece of strategy … since war started.” It reinforced the military establishment’s commitment to the “cult of the offensive” and convinced Prime Minister David Lloyd George to make Jerusalem a “Christmas gift” to his people—just when the Battle of Passchendaele, the major 1917 Allied offensive on the Western Front, ended in failure. These campaigns preserved British morale despite the grim news from France. The fall of Jerusalem sparked public euphoria—the bell of Westminster chimed for the first time in three years. Postwar military journals noted a “reversal in the importance of the various campaigns,” since Mesopotamia and Palestine had proved that in future wars, “mobility and power” would again be “correlated.” The high-tech power of armored cars, aircraft, and wireless, combined with cavalry, riverboats, deception, and guerrilla tactics, showed that modern warfare need not be stalemated trench warfare. Educational tours in Iraq praised the “special value” of operations there for military science.

These campaigns seemed to affirm British military prowess and redeem warfare itself as a productive enterprise—in the very cradle of civilization. The Guardian triumphantly called the military operations in the region the greatest “programme of public works … since … Alexander the Great.” Trains, cars, and airplanes were bringing a new “age of miracles” to Baghdad, where lay the “natural junctions” of the world’s airways and railways, “the world’s centre.” Others imagined a “regenerated Babylonia” giving meaning to British war losses. Mesopotamia would supply cotton and wheat, provide fields for European industry, and enlarge “the wealth of a universe wasted by war,” foresaw the powerful British administrator in Iraq, Gertrude Bell. “We’ll fix this land up,” wrote an officer, “and move the wheels of a new humanity.” The press hailed “the regeneration of Palestine” as “one of the few fine and imaginative products of the war” that made “it all [seem] worth while.” These campaigns renewed Victorian idealism despite the cynicism produced on the Western Front. James Mann, a postwar recruit to Iraq, explained to his mother: “If one takes the Civil Service, or the Bar, or Literature, or Politics, or even the Labour movement, what can one do that is constructive? Here on the other hand I am constructing the whole time.”

But these hopes were pipe dreams. The occupying army did build bridges and railways but abandoned many of these projects because of financial stringency and because a violent colonial policing system known as “air control” hijacked the development discourse in the face of a 1920 Iraqi rebellion against the British occupation. Iraq descended into a new kind of colonial hell, where bombing was used for everyday purposes like tax collection.

The Great War institutionalized the British view of the Middle East as a site of exception that permitted tactics considered unethical elsewhere. For Britons, the campaigns in the Middle East gave industrial warfare a new lease on life and produced the tactics that shaped the next war, while inspiring a long history of destructive covert and aerial Western engagement with the Middle East. [Continue reading…]

Britain funds research into autonomous drones that select who they kill, says report

The Guardian reports:

Technologies that could unleash a generation of lethal weapons systems requiring little or no human interaction are being funded by the Ministry of Defence, according to a new report.

The development of autonomous military systems – dubbed “killer robots” by campaigners opposed to them – is deeply contentious. Earlier this year, Google withdrew from the Pentagon’s Project Maven, which uses machine learning to analyse video feeds from drones, after ethical objections from the tech giant’s staff.

The government insists it “does not possess fully autonomous weapons and has no intention of developing them”. But, since 2015, the UK has declined to support proposals put forward at the UN to ban them. Now, using government data, Freedom of Information requests and open-source information, a year-long investigation reveals that the MoD and defence contractors are funding dozens of artificial intelligence programmes for use in conflict. [Continue reading…]

Doctors have become the tools of their tools

Atul Gawande writes:

A 2016 study found that physicians spent about two hours doing computer work for every hour spent face to face with a patient—whatever the brand of medical software. In the examination room, physicians devoted half of their patient time facing the screen to do electronic tasks. And these tasks were spilling over after hours. The University of Wisconsin found that the average workday for its family physicians had grown to eleven and a half hours. The result has been epidemic levels of burnout among clinicians. Forty per cent screen positive for depression, and seven per cent report suicidal thinking—almost double the rate of the general working population.

Something’s gone terribly wrong. Doctors are among the most technology-avid people in society; computerization has simplified tasks in many industries. Yet somehow we’ve reached a point where people in the medical profession actively, viscerally, volubly hate their computers. [Continue reading…]

‘The devil lives in our phones and is wreaking havoc on our children’

The New York Times reports:

The people who are closest to a thing are often the most wary of it. Technologists know how phones really work, and many have decided they don’t want their own children anywhere near them.

A wariness that has been slowly brewing is turning into a regionwide consensus: The benefits of screens as a learning tool are overblown, and the risks for addiction and stunting development seem high. The debate in Silicon Valley now is about how much exposure to phones is O.K.

“Doing no screen time is almost easier than doing a little,” said Kristin Stecher, a former social computing researcher married to a Facebook engineer. “If my kids do get it at all, they just want it more.”

Ms. Stecher, 37, and her husband, Rushabh Doshi, researched screen time and came to a simple conclusion: they wanted almost none of it in their house. Their daughters, ages 5 and 3, have no screen time “budget,” no regular hours they are allowed to be on screens. The only time a screen can be used is during the travel portion of a long car ride (the four-hour drive to Tahoe counts) or during a plane trip.

Recently she has softened this approach. Every Friday evening the family watches one movie.

There is a looming issue Ms. Stecher sees in the future: Her husband, who is 39, loves video games and thinks they can be educational and entertaining. She does not.

“We’ll cross that when we come to it,” said Ms. Stecher, who is due soon with a boy.

Some of the people who built video programs are now horrified by how many places a child can now watch a video.

Asked about limiting screen time for children, Hunter Walk, a venture capitalist who for years directed product for YouTube at Google, sent a photo of a potty training toilet with an iPad attached and wrote: “Hashtag ‘products we didn’t buy.’”

Athena Chavarria, who worked as an executive assistant at Facebook and is now at Mark Zuckerberg’s philanthropic arm, the Chan Zuckerberg Initiative, said: “I am convinced the devil lives in our phones and is wreaking havoc on our children.” [Continue reading…]

An alternative history of Silicon Valley disruption

Nitasha Tiku writes:

A few years after the Great Recession, you couldn’t scroll through Google Reader without seeing the word “disrupt.” TechCrunch named a conference after it, the New York Times named a column after it, investor Marc Andreessen warned that “software disruption” would eat the world; not long after, Peter Thiel, his fellow Facebook board member, called “disrupt” one of his favorite words. (One of the future Trump adviser’s least favorite words? “Politics.”)

The term “disruptive innovation” was coined by Harvard Business School professor Clayton Christensen in the mid-1990s to describe a particular business phenomenon, whereby established companies focus on high-priced products for their existing customers, while disruptors develop simpler, cheaper innovations, introduce the products to a new audience, and eventually displace incumbents. PCs disrupted mainframes, discount stores disrupted department stores, cellphones disrupted landlines, you get the idea.

In Silicon Valley’s telling, however, “disruption” became shorthand for something closer to techno-Darwinism. By imposing the rules of nature on man-made markets, the theory justified almost any act of upheaval. The companies still standing post-disruption must have survived because they were the fittest.

“Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not,” Andreessen wrote in his seminal 2011 essay on software in the Wall Street Journal. “This problem is even worse than it looks because many workers in existing industries will be stranded on the wrong side of software-based disruption and may never be able to work in their fields again.”

Even after the word lost its meaning from overuse, it still suffused our understanding of why the ground beneath our feet felt so shaky. They tried to freak us out and we believed them. Why wouldn’t we? Their products were dazzling, sci-fi magic come to life. They transformed our days, our hours, our interior life. Fear of being stranded on “the wrong side,” in turn, primed us to look to these world-beating companies to understand what comes next.

It is only now, a decade after the financial crisis, that the American public seems to appreciate that what we thought was disruption worked more like extraction—of our data, our attention, our time, our creativity, our content, our DNA, our homes, our cities, our relationships. The tech visionaries’ predictions did not usher us into the future, but rather a future where they are kings. [Continue reading…]

Games are taking over life

Vincent Gabrielle writes:

Deep under the Disneyland Resort Hotel in California, far from the throngs of happy tourists, laundry workers clean thousands of sheets, blankets, towels and comforters every day. Workers feed the heavy linens into hot, automated presses to iron out wrinkles, and load dirty laundry into washers and dryers large enough to sit in. It’s loud, difficult work, but bearable. The workers were protected by union contracts that guaranteed a living wage and affordable healthcare, and many had worked decades at the company. They were mostly happy to work for Disney.

This changed in 2008. The union contracts were up, and Disney wouldn’t renew without adjustments. One of the changes involved how management tracked worker productivity. Before, workers had tracked how many sheets, towels or comforters they washed, dried or folded on paper notes turned in at the end of the day. But Disney was replacing that system with an electronic one that monitored their progress in real time.

Electronic monitoring wasn’t unusual in the hotel business. But Disney took the highly unusual step of displaying the productivity of their workers on scoreboards all over the laundry facilities, says Austin Lynch, director of organising for Unite Here Local 11. According to Lynch, every worker’s name was compared with the names of coworkers, each one colour-coded like traffic signals. If you were keeping up with the goals of management, your name was displayed in green. If you slowed down, your name was in yellow. If you were behind, your name was in red. Managers could see the monitors from their office, and change production targets from their computers. Each laundry machine would also monitor the rate of worker input, and flash red and yellow lights at the workers directly if they slowed down.

‘They had a hard time ignoring it,’ said Beatriz Topete, a union organiser for Unite Here Local 11 at the time. ‘It pushes you mentally to keep working. It doesn’t give you breathing space.’ Topete recalled an incident where she was speaking to workers on the night shift, feeding hand-towels into a laundry machine. Every time the workers slowed down, the machine would flash at them. They told her they felt like they couldn’t stop.

The workers called this ‘the electronic whip’.

While this whip was cracking, the workers sped up. ‘We saw a higher incidence of injuries,’ Topete said. ‘Several people were injured on the job.’ The formerly collegial environment degenerated into a race. The laundry workers competed with each other, and got upset when coworkers couldn’t keep up. People started skipping bathroom breaks. Pregnant workers fell behind. ‘The scoreboard incentivises competition,’ said Topete. ‘Our human competitiveness, whatever makes us like games, whatever keeps us wanting to win, it’s a similar thing that was happening. Even if you didn’t want to.’

The electronic whip is an example of gamification gone awry.

Gamification is the application of game elements to nongame spaces. It is the permeation of ideas and values from the sphere of play and leisure into other social spaces. It’s premised on a seductive idea: if you layer elements of games, such as rules, feedback systems, rewards and videogame-like user interfaces, over reality, it will make any activity motivating, fair and (potentially) fun. ‘We are starving and games are feeding us,’ writes Jane McGonigal in Reality Is Broken (2011). ‘What if we decided to use everything we know about game design to fix what’s wrong with reality?’

Consequently, gamification is everywhere. [Continue reading…]

Thanks to genetic genealogy, anonymity will soon become a thing of the past

The New York Times reports:

The genetic genealogy industry is booming. In recent years, more than 15 million people have offered up their DNA — a cheek swab, some saliva in a test tube — to services such as 23andMe and Ancestry.com in pursuit of answers about their heritage. In exchange for a genetic fingerprint, individuals may find a birth parent, long-lost cousins, perhaps even a link to Oprah or Alexander the Great.

But as these registries of genetic identity grow, it’s becoming harder for individuals to retain any anonymity. Already, 60 percent of Americans of Northern European descent — the primary group using these sites — can be identified through such databases whether or not they’ve joined one themselves, according to a study published today in the journal Science.

Within two or three years, 90 percent of Americans of European descent will be identifiable from their DNA, researchers found. The science-fiction future, in which everyone is known whether or not they want to be, is nigh.

“It’s not the distant future, it’s the near future,” said Yaniv Erlich, the lead author of the study. Dr. Erlich, formerly a genetic-privacy researcher at Columbia University, is the chief science officer of MyHeritage, a genetic ancestry website. [Continue reading…]