AI that writes convincing prose risks mass-producing fake news

MIT Technology Review reports:

Here’s some breaking fake news …

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

That story is, in fact, not only fake, but a troubling example of just how good AI is getting at fooling us.

That’s because it wasn’t written by a person; it was auto-generated by an algorithm fed the words “Russia has declared war on the United States after Donald Trump accidentally …”

The program made the rest of the story up on its own. And it can make up realistic-seeming news reports on any topic you give it. The program was developed by a team at OpenAI, a research institute based in San Francisco.
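The core idea – take a prompt, then repeatedly predict a plausible next word from patterns learned in training text – can be illustrated with a toy sketch. This is not OpenAI’s model, which is vastly larger and more sophisticated; it is a minimal word-bigram generator, with an invented miniature corpus, shown only to make the prompt-continuation mechanism concrete:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word in the corpus, every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, prompt, length=6, seed=0):
    """Continue a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:  # dead end: this word was never seen mid-corpus
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Tiny invented corpus standing in for the web-scale training text.
corpus = ("the president said the missile was fired by accident "
          "and the ministry said the missile posed no threat")
model = train_bigram_model(corpus)
print(generate(model, "the missile"))
```

A real language model replaces the bigram lookup with a neural network trained on billions of words, which is what lets it sustain a coherent fake news story rather than a short word chain.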

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. [Continue reading…]

Screen time has stunted the development of generations of children

The Guardian reports:

A study has linked high levels of screen time with delayed development in children, reigniting the row over the extent to which parents should limit how long their offspring spend with electronic devices.

Researchers in Canada say children who spent more time with screens at two years of age did worse on tests of development at age three than children who had spent little time with devices. A similar result was found when children’s screen time at three years old was compared with their development at five years.

“What is new in this study is that we are studying really young children, so aged 2-5, when brain development is really rapidly progressing and also child development is unfolding so rapidly,” Dr Sheri Madigan, first author of the study from the University of Calgary, told the Guardian. “We are getting at these lasting effects,” she added of the study.

The authors say parents should be cautious about how long children are allowed to spend with devices. [Continue reading…]

The age of surveillance capitalism

John Naughton writes:

We’re living through the most profound transformation in our information environment since Johannes Gutenberg’s invention of printing, circa 1439. And the problem with living through a revolution is that it’s impossible to take the long view of what’s happening. Hindsight is the only exact science in this business, and in that long run we’re all dead. Printing shaped and transformed societies over the next four centuries, but nobody in Mainz (Gutenberg’s home town) in, say, 1495 could have known that his technology would (among other things): fuel the Reformation and undermine the authority of the mighty Catholic church; enable the rise of what we now recognise as modern science; create unheard-of professions and industries; change the shape of our brains; and even recalibrate our conceptions of childhood. And yet printing did all this and more.

Why choose 1495? Because we’re about the same distance into our revolution, the one kicked off by digital technology and networking. And although it’s now gradually dawning on us that this really is a big deal and that epochal social and economic changes are under way, we’re as clueless about where it’s heading and what’s driving it as the citizens of Mainz were in 1495.

That’s not for want of trying, mind. Library shelves groan under the weight of books about what digital technology is doing to us and our world. Lots of scholars are thinking, researching and writing about this stuff. But they’re like the blind men trying to describe the elephant in the old fable: everyone has only a partial view, and nobody has the whole picture. So our contemporary state of awareness is – as Manuel Castells, the great scholar of cyberspace once put it – one of “informed bewilderment”.

Which is why the arrival of Shoshana Zuboff’s new book is such a big event. Many years ago – in 1988, to be precise – as one of the first female professors at Harvard Business School to hold an endowed chair, she published a landmark book, In the Age of the Smart Machine: The Future of Work and Power, which changed the way we thought about the impact of computerisation on organisations and on work. It provided the most insightful account up to that time of how digital technology was changing the work of both managers and workers. And then Zuboff appeared to go quiet, though she was clearly incubating something bigger. The first hint of what was to come was a pair of startling essays – one in an academic journal in 2015, the other in a German newspaper in 2016. What these revealed was that she had come up with a new lens through which to view what Google, Facebook et al were doing – nothing less than spawning a new variant of capitalism. Those essays promised a more comprehensive expansion of this Big Idea.

And now it has arrived – the most ambitious attempt yet to paint the bigger picture and to explain how the effects of digitisation that we are now experiencing as individuals and citizens have come about.

The headline story is that it’s not so much about the nature of digital technology as about a new mutant form of capitalism that has found a way to use tech for its purposes. The name Zuboff has given to the new variant is “surveillance capitalism”. It works by providing free services that billions of people cheerfully use, enabling the providers of those services to monitor the behaviour of those users in astonishing detail – often without their explicit consent.

“Surveillance capitalism,” she writes, “unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behaviour.” [Continue reading…]

Disruption for thee, but not for me

Cory Doctorow writes:

The Silicon Valley gospel of “disruption” has descended into caricature, but, at its core, there are some sound tactics buried beneath the self-serving bullshit. A lot of our systems and institutions are corrupt, bloated, and infested with cream-skimming rentiers who add nothing and take so much.

Take taxis: there is nothing good about the idea that cab drivers and cab passengers meet each other by random chance, with the drivers aimlessly circling traffic-clogged roads while passengers brave the curb lane to frantically wave at them. Add to that the toxic practice of licensing cabs by creating “taxi medallions” that allow businesspeople (like erstwhile Trump bagman Michael Cohen) to corner the market on these licenses and lease them to drivers, creaming off the bulk of the profits in the process, leaving drivers with barely enough to survive.

So enter Uber, an app that allows drivers and passengers to find each other extremely efficiently, that gives drivers realtime intelligence about places where fares are going begging, and which bankrupts the rent-seeking medallion speculators almost overnight.

Of course, Uber also eliminates safety checks for drivers (and allows them to illegally discriminate against people with disabilities, people of color, and other marginalized groups); it used predatory pricing (where each ride is subsidized by deep-pocketed, market-cornering execs) to crush potential competitors, and games the regulatory and tax system.

Uber and its Peter Thiel-backed rival, Lyft, are not good companies. They’re not forces for good. But the system they killed? Also not good.

In 2016, the City of Austin played a game of high-stakes chicken with Uber and Lyft. Austin cab drivers have to get fingerprinted as part of a criminal records check, and Austin wanted Uber and Lyft drivers to go through the same process.

Uber and Lyft violently objected to this. They said it would add a needless barrier to entry that would depress the supply of drivers, and privately, they confessed their fear that giving in to any regulation, anywhere, would open the door to regulation everywhere. They wanted to establish a reputation for being such dirty fighters that no city would even try to put rules on them.

(Notably, Uber and Lyft did not make any arguments about criminal background checks perpetuating America’s racially unjust “justice system” in which people of color are systematically overpoliced and then railroaded into guilty pleas.)

Austin wasn’t intimidated. They enacted the rule, and Uber and Lyft simply exited the city, leaving Austin without any rideshare at all. All the drivers and passengers who’d come to rely on Lyft and Uber were out of luck.

But the drivers were undaunted. They formed a co-operative and in months, they had cloned the Uber app and launched a new business called Ride Austin, which is exactly like Uber: literally the same drivers, driving the same cars, and charging the same prices. But it’s also completely different from Uber: the drivers own this company through a worker-owned co-op. They take home 25% more per ride than they made when they were driving for Uber. Uber and Lyft drivers commute into Austin from as far away as San Antonio just to drive for Ride. That’s how much better driving for a worker co-op is. [Continue reading…]

Why China’s electric-car industry is leaving Detroit, Japan, and Germany in the dust

Jordyn Dahl writes:

After the Cultural Revolution of the 1960s and ’70s crippled China’s economy, the country began to open its markets to the outside world. The aim was to bring in technological know-how from abroad that domestic firms could then assimilate. By the early ’80s, foreign automakers were allowed in on the condition that they form a joint venture with a Chinese partner. These Chinese firms, by working with foreign companies, would eventually gain enough knowledge to function independently.

Or so the theory went. Chinese-produced cars subsequently flooded the market, but they were largely cheap copycats—they looked like foreign-made cars, but the engines weren’t as good. Carmakers in the US and Europe had too much of a head start for China to catch up.

The only way to outdo the rest of the world, then, was to bet on a whole new technology. Enter electric vehicles, which require less mechanical complexity and rely more on electronic prowess. A Chevrolet Bolt’s electric motor contains just 24 moving parts, according to a teardown performed by consulting company UBS. In comparison, a Volkswagen Golf’s combustion engine has 149. Meanwhile, China already had an electronic manufacturing supply chain in place from its years of producing the world’s batteries, phones, and gadgets.

Now the Chinese government is embracing the shift from combustion to electric engines in a way no other country can match. It’s made electric vehicles one of the 10 pillars of Made in China 2025—a state-led plan for the country to become a global leader in high-tech industries—and enacted policies to generate demand. Since 2013, almost 500 electric-vehicle companies have launched in China to meet the government’s mandate and to cash in on subsidies designed to generate supply. [Continue reading…]

Yuval Noah Harari sees a big-data threat to humanity

Steve Paulson interviews historian Yuval Noah Harari:

What’s different about this moment in history?

What’s different is the pace of technological change, especially the twin revolutions of artificial intelligence and bioengineering. They make it possible to hack human beings and other organisms, and then re-engineer them and create new life forms.

How far can this technology go in changing who we are?

Very far. Beyond our imagination. It can change our imagination, too. If your imagination is too limited to think of new possibilities, we can just improve it. For billions of years, all of life was confined to the organic realm. It didn’t matter if you were an amoeba, a Tyrannosaurus rex, a coconut, a cucumber, or a Homo sapiens. You were made of organic compounds and subject to the laws of organic biochemistry. But now we’re about to break out of this limited organic realm and start combining the organic with inorganic bots to create cyborgs.

What worries you about the new cyborgs?

Experiments are already under way to augment the human immune system with an inorganic, bionic system. Millions of tiny nanorobots and sensors monitor what’s happening inside your body. They could discover the beginning of cancer, or some infectious disease, and fight against these dangers for your health. The system can monitor not just what goes wrong. It can monitor your moods, your emotions, your thoughts. That means an external system can get to know you much better than you know yourself. You go to therapy for years to get in touch with your emotions, but this system, whether it belongs to Google or Amazon or the government, can monitor your emotions in ways that neither you nor your therapist can approach in any way.

Are you saying computer algorithms can break down personal data we may not even be aware of?

Yes. Fear and anger and love, like any human emotion, are in the end just biochemical processes. In the same way you can diagnose flu, you can diagnose anger. You might ask somebody, “Why are you angry? What are you angry about?” And they will say, “I’m not angry about anything, what do you want?” But this external system doesn’t need to ask you. It can monitor your heart, your brain, your blood pressure. It can have a scale of anger and it can know you are now a 6.8 on a scale of 1-to-10. Combining this with enormous amounts of data collected on you 24 hours a day can provide the best healthcare in history. It can also be the foundation of the worst dictatorial regimes in history. [Continue reading…]
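The kind of scoring Harari describes – raw physiological readings collapsed into a single number on a 1-to-10 scale – can be sketched as follows. Every signal name, baseline, and weight here is invented for illustration; a real system would learn such a mapping from large amounts of labelled sensor data rather than hand-coded rules:

```python
def anger_score(heart_rate, skin_conductance, voice_pitch):
    """Map raw physiological readings onto a hypothetical 1-10 'anger' scale.

    Baselines and weights are invented for illustration only.
    """
    # Normalise each signal against an assumed resting baseline.
    hr = max(0.0, (heart_rate - 60) / 60)        # resting pulse ~60 bpm
    sc = max(0.0, (skin_conductance - 2) / 10)   # resting ~2 microsiemens
    vp = max(0.0, (voice_pitch - 120) / 120)     # resting ~120 Hz

    # Weighted combination, clipped to the 1-10 range.
    raw = 1 + 9 * (0.5 * hr + 0.3 * sc + 0.2 * vp)
    return round(min(10.0, max(1.0, raw)), 1)

print(anger_score(heart_rate=110, skin_conductance=8, voice_pitch=180))  # → 7.3
```

The unsettling point is not the arithmetic, which is trivial, but who holds the stream of readings feeding it, around the clock.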

Wielding rocks and knives, Arizonans attack self-driving cars

The New York Times reports:

The assailant slipped out of a park around noon one day in October, zeroing in on his target, which was idling at a nearby intersection — a self-driving van operated by Waymo, the driverless-car company spun out of Google.

He carried out his attack with an unidentified sharp object, swiftly slashing one of the tires. The suspect, identified as a white man in his 20s, then melted into the neighborhood on foot.

The slashing was one of nearly two dozen attacks on driverless vehicles over the past two years in Chandler, a city near Phoenix where Waymo started testing its vans in 2017. In ways large and small, the city has had an early look at public misgivings over the rise of artificial intelligence, with city officials hearing complaints about everything from safety to possible job losses. [Continue reading…]

Chinese scientist who claimed to make genetically edited babies is kept under guard

The New York Times reports:

The Chinese scientist who shocked the world by claiming that he had created the first genetically edited babies is sequestered in a small university guesthouse in the southern city of Shenzhen, where he remains under guard by a dozen unidentified men.

The sighting of the scientist, He Jiankui, this week was the first since he appeared at a conference in Hong Kong in late November and defended his actions. For the past few weeks, rumors had swirled about whether Dr. He was under house arrest. His university and the Chinese government, which has put Dr. He under investigation, have been silent about his fate.

Dr. He now lives in a fourth-floor apartment at a university guesthouse, a hotel run by the school for visiting teachers, on the sprawling campus of the Southern University of Science and Technology in Shenzhen’s Nanshan District, where many of China’s best-known tech companies, like Tencent, have their offices.

In November, Dr. He stunned the global scientific community when he claimed to have created the world’s first babies from genetically edited embryos, implanted in a woman who gave birth to twin girls. While he did not provide proof that the gene-edited twins were born, he presented data that suggested he had done what he claimed. [Continue reading…]

Strangers smile less to one another when they have their smartphones, study finds

PsyPost reports:

New research suggests that phones are altering fundamental aspects of social life. According to a study published in Computers in Human Behavior, strangers smile less to one another when they have their smartphones with them.

“Smartphones provide easy access to so much fun and useful content, but we wondered if they may have subtle unanticipated costs for our social behavior in the nondigital world. Smiling is a fundamental human social behavior that serves as a signal of people’s current emotions and motivations,” said study author Kostadin Kushlev, an assistant professor at Georgetown University.

In the study, pairs of strangers were assigned to wait together in a room for 10 minutes either with or without their smartphones. The room was videotaped, and the participants were positioned so that both of their faces were visible to cameras.

Participants with their smartphones were less likely to initiate conversations. Thirty-two participants who had their phones never ended up interacting with the other person in the waiting room. In comparison, just 6 people without their phones never interacted with the stranger. [Continue reading…]

Let’s cultivate our material intelligence

Glenn Adamson writes:

Are you sitting comfortably? If so, how much do you know about the chair that’s holding you off the ground – what it’s made from, and what its production process looked like? Where it was made, and by whom? Or go deeper: how were the materials used to make the chair extracted from the planet? Most people will find it difficult to answer these basic questions. The object cradling your body remains, in many ways, mysterious to you.

Quite probably, you are surrounded by many things of which you know next to nothing – among them, the device on which you are reading these words. Most of us live in a state of general ignorance about our physical surroundings. It’s not our fault; centuries of technological sophistication and global commerce have distanced most of us from making physical things, and even from seeing or knowing how they are made. But the slow and pervasive separation of people from knowledge of the material world brings with it a serious problem.

Until about a century ago, most people knew a great deal about their immediate material world. Fewer and fewer do today, as commodities circulate with ever greater speed over greater distances. Because of the sheer complexity of contemporary production, even the people who do have professional responsibility for making things – the engineers and factory workers and chemists among us – tend to be specialists. Deepened knowledge usually also means narrowed knowledge. This tends to obscure awareness of the extended production chains through which materials, tools, components and packaging are sourced. Nobody – not an assembly-line worker, not a CEO – has a comprehensive vantage point. It is partly a problem of scale: the wider the view, the harder it is to see clearly what’s close at hand.

In effect, we are living in a state of perpetual remote control. As Carl Miller argues in his book The Death of the Gods (2018), algorithms have taken over many day-to-day procedures. These algorithms are themselves driven by algorithms, in a cascade of interconnected calculation. Such automated decision-making is extremely efficient, but it has contributed to a crisis of accountability. If no one understands what is really happening, how can anyone be held responsible? This lack of transparency gives rise to a range of ethical dilemmas, chief among them our inability to address climate change, due in part to prevalent psychological separation from the processes of extraction, manufacture and disposal. [Continue reading…]