Without humans, AI can wreak havoc

Katherine Maher, chief executive and executive director of the Wikimedia Foundation, writes:

Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people. Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I. can put society on autopilot that may not consider our dearest values.

Without humans, A.I. can wreak havoc. A glaring example was Amazon’s A.I.-driven human resources software that was supposed to surface the best job candidates, but ended up being biased against women. Built using past resumes submitted to Amazon, most of which came from men, the program concluded men were preferable to women.

Rather than replacing humans, A.I. is best used to support our capacity for creativity and discernment. Wikipedia is creating A.I. that will flag potentially problematic edits — like a prankster vandalizing a celebrity’s page — to a human who can then step in. The system can also help our volunteer editors evaluate a newly created page or suggest superb pages for featuring. In short, A.I. that is deployed by and for humans can improve the experience of both people consuming information and those producing it. [Continue reading…]

Second Boeing 737 Max crash raises questions about airplane automation

MIT Technology Review reports:

The 737 Max has bigger engines than the original 737, which make it 14% more fuel efficient than the previous generation. As the trade publication Air Current explains, the position and shape of the new engines changed how the aircraft handles, giving the nose a tendency to tip upward in some situations, which could cause the plane to stall. The new “maneuvering characteristics augmentation system” was designed to counteract that tendency.

Did these more efficient engines—and the changes they necessitated to the airplane’s automation systems—compromise the aircraft’s safety? As sociologist Charles Perrow wrote in his classic 1984 book Normal Accidents, new air-safety technologies don’t always make airplanes safer, even if they work just as well as they are supposed to. Instead of improving safety, innovations can allow airlines “to run greater risks in search of increased performance.”

A high-ranking Boeing official told the Wall Street Journal that “the company had decided against disclosing more details to cockpit crews due to concerns about inundating average pilots with too much information—and significantly more technical data—than they needed or could digest.”

But what good is a safety system that’s too intricate for highly trained professional airline pilots to understand? Each new automatic device, Perrow wrote, might solve some problems only to introduce new, more subtle ones. Make the system too complicated, he said, and it’s inevitable that regulators will lose track of which pilots have been told what, and that some pilots will get confused about which procedures to follow. It didn’t, he said, make much sense to blame pilots in cases like this. Pilot error, he said, “is a convenient catch-all.” But it’s the complexity of the system that’s really to blame.

The Lion Air crash—and the news that some pilots may not have been given all the information they needed about the new systems on board—caused an uproar among those who fly the 737 Max. As the Seattle Times reported, one American Airlines pilot wondered: “I’ve been flying the MAX-8 a couple times per month for almost a year now, and I’m sitting here thinking, what the hell else don’t I know about this thing?” [Continue reading…]

Capitalism without competition is not capitalism

A philosopher argues that an AI can’t be creative

Sean Dorrance Kelly writes:

Advances in artificial intelligence have led many to speculate that human beings will soon be replaced by machines in every domain, including that of creativity. Ray Kurzweil, a futurist, predicts that by 2029 we will have produced an AI that can pass for an average educated human being. Nick Bostrom, an Oxford philosopher, is more circumspect. He does not give a date but suggests that philosophers and mathematicians defer work on fundamental questions to “superintelligent” successors, which he defines as having “intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Both believe that once human-level intelligence is produced in machines, there will be a burst of progress—what Kurzweil calls the “singularity” and Bostrom an “intelligence explosion”—in which machines will very quickly supersede us by massive measures in every domain. This will occur, they argue, because superhuman achievement is the same as ordinary human achievement except that all the relevant computations are performed much more quickly, in what Bostrom dubs “speed superintelligence.”

So what about the highest level of human achievement—creative innovation? Are our most creative artists and thinkers about to be massively surpassed by machines?

No.

Human creative achievement, because of the way it is socially embedded, will not succumb to advances in artificial intelligence. To say otherwise is to misunderstand both what human beings are and what our creativity amounts to.

This claim is not absolute: it depends on the norms that we allow to govern our culture and our expectations of technology. Human beings have, in the past, attributed great power and genius even to lifeless totems. It is entirely possible that we will come to treat artificially intelligent machines as so vastly superior to us that we will naturally attribute creativity to them. Should that happen, it will not be because machines have outstripped us. It will be because we will have denigrated ourselves. [Continue reading…]

Technological change is not an inexorable, impersonal force

Steven Poole writes:

When is the future no longer the future? Only a decade ago, air travel seemed to be moving ineluctably towards giant planes, or “superjumbos”. But last week Airbus announced it would cease manufacturing its A380, the world’s fattest passenger jet, as current trends favour smaller and more fuel-efficient craft. Progress changed course. A more vivid reminder of lost dreams will come in a few weeks: 2 March marks the 50th anniversary of the maiden flight of Concorde. Once upon a time, all aviation was going to be supersonic. But sometimes, the future is cancelled.

What if what we think is going to be the future right now is cancelled in its turn? We are supposedly on an unstoppable path towards driverless vehicles, fully automated internet-connected “smart homes”, and godlike artificial intelligence – but, then, we’ve been promised flying cars for half a century, and they are still (allegedly) just around the corner. We live in a time when technological change is portrayed as an inexorable, impersonal force: we’d better learn how to surf the tsunami or drown. But as a society, we always have a choice about which direction we take next. And sometimes we make the wrong decision.

For one thing, history is full of technological marvels that were abandoned for reasons that were only reassessed much later. To most people in the late 19th century, when fleets of electric taxis operated in London and Manhattan, the electric car was clearly going to win out over the filthy petrol-driven alternative. But then vast oil reserves were discovered in America, and the future went into reverse. Until, in the late 20th century, global warming and advances in battery technology made electric cars seem like a good idea again. [Continue reading…]

AI that writes convincing prose risks mass-producing fake news

MIT Technology Review reports:

Here’s some breaking fake news …

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

That story is, in fact, not only fake, but a troubling example of just how good AI is getting at fooling us.

That’s because it wasn’t written by a person; it was auto-generated by an algorithm fed the words “Russia has declared war on the United States after Donald Trump accidentally …”

The program made the rest of the story up on its own. And it can make up realistic-seeming news reports on any topic you give it. The program was developed by a team at OpenAI, a research institute based in San Francisco.

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. [Continue reading…]
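OpenAI’s system is a large neural language model trained on web text, but the basic mechanic the article describes — feed in an opening phrase and the program invents a plausible continuation — can be illustrated with a much simpler toy. The sketch below (a hypothetical bigram Markov chain, not OpenAI’s actual model; the corpus and function names are invented for illustration) shows how a program can continue a prompt by sampling, at each step, a word that has followed the current word somewhere in its training text:

```python
import random
from collections import defaultdict

# Toy illustration only: OpenAI's system is a large neural language model,
# not a Markov chain. This sketch just demonstrates the idea the article
# describes -- give a model an opening phrase, and it extends the text
# using word-to-word patterns learned from a training corpus.

def train_bigram_model(corpus):
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def continue_prompt(model, prompt, length=10, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Tiny invented training corpus, echoing the article's example prompt.
corpus = (
    "russia has declared war on the united states after the missile launch "
    "the united states said it was extremely concerned by the violation"
)
model = train_bigram_model(corpus)
print(continue_prompt(model, "russia has", length=8))
```

A bigram chain produces locally plausible but quickly incoherent text; the qualitative leap with OpenAI’s model is that it conditions each word on a long preceding context, which is what lets it sustain a realistic-seeming news story for whole paragraphs.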

Screen time has stunted the development of generations of children

The Guardian reports:

A study has linked high levels of screen time with delayed development in children, reigniting the row over the extent to which parents should limit how long their offspring spend with electronic devices.

Researchers in Canada say children who spent more time with screens at two years of age did worse on tests of development at age three than children who had spent little time with devices. A similar result was found when children’s screen time at three years old was compared with their development at five years.

“What is new in this study is that we are studying really young children, so aged 2-5, when brain development is really rapidly progressing and also child development is unfolding so rapidly,” Dr Sheri Madigan, first author of the study from the University of Calgary, told the Guardian. “We are getting at these lasting effects,” she added of the study.

The authors say parents should be cautious about how long children are allowed to spend with devices. [Continue reading…]

The age of surveillance capitalism

John Naughton writes:

We’re living through the most profound transformation in our information environment since Johannes Gutenberg’s invention of printing around 1439. And the problem with living through a revolution is that it’s impossible to take the long view of what’s happening. Hindsight is the only exact science in this business, and in that long run we’re all dead. Printing shaped and transformed societies over the next four centuries, but nobody in Mainz (Gutenberg’s home town) in, say, 1495 could have known that his technology would (among other things): fuel the Reformation and undermine the authority of the mighty Catholic church; enable the rise of what we now recognise as modern science; create unheard-of professions and industries; change the shape of our brains; and even recalibrate our conceptions of childhood. And yet printing did all this and more.

Why choose 1495? Because we’re about the same distance into our revolution, the one kicked off by digital technology and networking. And although it’s now gradually dawning on us that this really is a big deal and that epochal social and economic changes are under way, we’re as clueless about where it’s heading and what’s driving it as the citizens of Mainz were in 1495.

That’s not for want of trying, mind. Library shelves groan under the weight of books about what digital technology is doing to us and our world. Lots of scholars are thinking, researching and writing about this stuff. But they’re like the blind men trying to describe the elephant in the old fable: everyone has only a partial view, and nobody has the whole picture. So our contemporary state of awareness is – as Manuel Castells, the great scholar of cyberspace, once put it – one of “informed bewilderment”.

Which is why the arrival of Shoshana Zuboff’s new book is such a big event. Many years ago – in 1988, to be precise – as one of the first female professors at Harvard Business School to hold an endowed chair, she published a landmark book, In the Age of the Smart Machine: The Future of Work and Power, which changed the way we thought about the impact of computerisation on organisations and on work. It provided the most insightful account up to that time of how digital technology was changing the work of both managers and workers. And then Zuboff appeared to go quiet, though she was clearly incubating something bigger. The first hint of what was to come was a pair of startling essays – one in an academic journal in 2015, the other in a German newspaper in 2016. What these revealed was that she had come up with a new lens through which to view what Google, Facebook et al were doing – nothing less than spawning a new variant of capitalism. Those essays promised a more comprehensive expansion of this Big Idea.

And now it has arrived – the most ambitious attempt yet to paint the bigger picture and to explain how the effects of digitisation that we are now experiencing as individuals and citizens have come about.

The headline story is that it’s not so much about the nature of digital technology as about a new mutant form of capitalism that has found a way to use tech for its purposes. The name Zuboff has given to the new variant is “surveillance capitalism”. It works by providing free services that billions of people cheerfully use, enabling the providers of those services to monitor the behaviour of those users in astonishing detail – often without their explicit consent.

“Surveillance capitalism,” she writes, “unilaterally claims human experience as free raw material for translation into behavioural data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioural surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioural futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behaviour.” [Continue reading…]

Disruption for thee, but not for me

Cory Doctorow writes:

The Silicon Valley gospel of “disruption” has descended into caricature, but, at its core, there are some sound tactics buried beneath the self-serving bullshit. A lot of our systems and institutions are corrupt, bloated, and infested with cream-skimming rentiers who add nothing and take so much.

Take taxis: there is nothing good about the idea that cab drivers and cab passengers meet each other by random chance, with the drivers aimlessly circling traffic-clogged roads while passengers brave the curb lane to frantically wave at them. Add to that the toxic practice of licensing cabs by creating “taxi medallions” that allow businesspeople (like erstwhile Trump bagman Michael Cohen) to corner the market on these licenses and lease them to drivers, creaming off the bulk of the profits in the process, leaving drivers with barely enough to survive.

So enter Uber, an app that allows drivers and passengers to find each other extremely efficiently, that gives drivers realtime intelligence about places where fares are going begging, and which bankrupts the rent-seeking medallion speculators almost overnight.

Of course, Uber also eliminates safety checks for drivers (and allows them to illegally discriminate against people with disabilities, people of color, and other marginalized groups); it used predatory pricing (where each ride is subsidized by deep-pocketed, market-cornering execs) to crush potential competitors, and games the regulatory and tax system.

Uber (and its Peter-Thiel-backed rival Lyft) are not good companies. They’re not forces for good. But the system they killed? Also not good.

In 2016, the City of Austin played a game of high-stakes chicken with Uber and Lyft. Austin cab drivers have to get fingerprinted as part of a criminal records check, and Austin wanted Uber and Lyft drivers to go through the same process.

Uber and Lyft violently objected to this. They said it would add a needless barrier to entry that would depress the supply of drivers, and privately, they confessed their fear that giving in to any regulation, anywhere, would open the door to regulation everywhere. They wanted to establish a reputation for being such dirty fighters that no city would even try to put rules on them.

(Notably, Uber and Lyft did not make any arguments about criminal background checks perpetuating America’s racially unjust “justice system” in which people of color are systematically overpoliced and then railroaded into guilty pleas.)

Austin wasn’t intimidated. The city enacted the rule, and Uber and Lyft simply exited the city, leaving Austin without any rideshare at all. All the drivers and passengers who’d come to rely on Lyft and Uber were out of luck.

But the drivers were undaunted. They formed a co-operative and in months, they had cloned the Uber app and launched a new business called Ride Austin, which is exactly like Uber: literally the same drivers, driving the same cars, and charging the same prices. But it’s also completely different from Uber: the drivers own this company through a worker-owned co-op. They take home 25% more per ride than they made when they were driving for Uber. Uber and Lyft drivers commute into Austin from as far away as San Antonio just to drive for Ride. That’s how much better driving for a worker co-op is. [Continue reading…]

Why China’s electric-car industry is leaving Detroit, Japan, and Germany in the dust

Jordyn Dahl writes:

After the Cultural Revolution of the 1960s and ’70s crippled China’s economy, the country began to open its markets to the outside world. The aim was to bring in technological know-how from abroad that domestic firms could then assimilate. By the early ’80s, foreign automakers were allowed in on the condition that they form a joint venture with a Chinese partner. These Chinese firms, by working with foreign companies, would eventually gain enough knowledge to function independently.

Or so the theory went. Chinese-produced cars subsequently flooded the market, but they were largely cheap copycats—they looked like foreign-made cars, but the engines weren’t as good. Carmakers in the US and Europe had too much of a head start for China to catch up.

The only way to outdo the rest of the world, then, was to bet on a whole new technology. Enter electric vehicles, which require less mechanical complexity and rely more on electronic prowess. A Chevrolet Bolt’s electric engine contains just 24 moving parts, according to a teardown performed by consulting company UBS. In comparison, a Volkswagen Golf’s combustion engine has 149. Meanwhile, China already had an electronic manufacturing supply chain in place from its years of producing the world’s batteries, phones, and gadgets.

Now the Chinese government is embracing the shift from combustion to electric engines in a way no other country can match. It’s made electric vehicles one of the 10 pillars of Made in China 2025—a state-led plan for the country to become a global leader in high-tech industries—and enacted policies to generate demand. Since 2013, almost 500 electric-vehicle companies have launched in China to meet the government’s mandate and to cash in on subsidies designed to generate supply. [Continue reading…]