Google made $4.7 billion from the news industry in 2018, study says

The New York Times reports:

$4,700,000,000.

It’s more than the combined ticket sales of the last two “Avengers” movies. It’s more than what virtually any professional sports team is worth. And it’s the amount that Google made from the work of news publishers in 2018 via search and Google News, according to a study to be released on Monday by the News Media Alliance.

The journalists who create that content deserve a cut of that $4.7 billion, said David Chavern, the president and chief executive of the alliance, which represents more than 2,000 newspapers across the country, including The New York Times.

“They make money off this arrangement,” Mr. Chavern said, “and there needs to be a better outcome for news publishers.”

That $4.7 billion is nearly as much as the $5.1 billion brought in by the United States news industry as a whole from digital advertising last year — and the News Media Alliance cautioned that its estimate for Google’s income was conservative. For one thing, it does not count the value of the personal data the company collects on consumers every time they click on an article like this one. [Continue reading…]

Bellingcat and how open source reinvented investigative journalism

Muhammad Idrees Ahmad writes:

It’s a brief window into a doomed soul. Clinging to his mother’s back, the child looks twice into the camera held by the man about to kill him. The natural curiosity of a child that fear has failed to extinguish. The smartphone captures the casual cruelty with which both mother and child are killed. Nearby, another mother and daughter are executed. One killer continues to pump bullets into the lifeless bodies with a glee that seems excessive even to his accomplices. “That’s enough, Tsanga!” one shouts. “That’s enough.”

In July 2018, when the video of the killings started circulating on social media, it was clear what had happened. The mothers and children were defenseless, they weren’t resisting, and they were killed with intent. Other facts, however, were less clear: Where did this happen, when was the video recorded, who were the killers, and why did they kill? The fact that the killers had filmed the crime suggested that they were confident in their impunity. Only precise answers to journalism’s enduring questions—what, where, when, who, and why—could revoke it.

This is the task that investigators at the BBC’s Africa Eye unit undertook over the next few months—an investigation for which they have just won a Peabody Award. Africa Eye was able to geolocate the site, matching topographical features from the video to satellite maps; establish the time, using shadows as sundials; confirm the killers’ identities, by cross-referencing social media profiles with government records; and establish that the executions were part of Cameroon’s counter-terrorism operations against the extremist group Boko Haram. In February 2019, Africa Eye’s findings led the US to withdraw $17 million in funding for the Cameroonian Army and the European Parliament to pass a strong resolution condemning “torture, forced disappearances, extrajudicial killings perpetrated by governmental forces.”
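To make the “shadows as sundials” step concrete: once a site has been geolocated, the solar elevation implied by an object’s shadow narrows the possible times of day. Below is a rough sketch of that calculation, assuming the open-source pysolar library; the coordinates, shadow measurement, and date are all made up for illustration, not taken from the Africa Eye investigation.

```python
# Sketch: given a location and the ratio of an object's height to its
# shadow length in a video frame, find the times of day when the sun
# would sit at the implied elevation. Uses the third-party pysolar package.
from datetime import datetime, timedelta, timezone
from math import atan, degrees

from pysolar.solar import get_altitude  # solar elevation angle in degrees

LAT, LON = 10.9, 13.9        # hypothetical coordinates in northern Cameroon
HEIGHT_TO_SHADOW = 1.25      # object height / shadow length, measured in the frame

# A vertical object of height h casting a shadow of length s implies
# a solar elevation of atan(h / s).
observed_elevation = degrees(atan(HEIGHT_TO_SHADOW))

def candidate_times(day, tolerance_deg=1.0):
    """Times on `day` (UTC) when the sun's elevation matches the shadow."""
    matches = []
    t = datetime(day.year, day.month, day.day, 5, 0, tzinfo=timezone.utc)
    end = t + timedelta(hours=14)
    while t < end:
        if abs(get_altitude(LAT, LON, t) - observed_elevation) < tolerance_deg:
            matches.append(t)
        t += timedelta(minutes=5)
    return matches

print(candidate_times(datetime(2015, 3, 20)))  # hypothetical candidate date
```

Real investigations combine several such clues, shadow direction, vegetation, weather records and the like, rather than relying on a single measurement.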

The investigation was a triumph of journalism. The smartphone that had captured the victims’ last moments had turned from voyeur to witness. [Continue reading…]

How an aging, digitally semi-literate population is reshaping the internet and politics

BuzzFeed reports:

Although many older Americans have, like the rest of us, embraced the tools and playthings of the technology industry, a growing body of research shows they have disproportionately fallen prey to the dangers of internet misinformation and risk being further polarized by their online habits. While that matters much to them, it’s also a massive challenge for society given the outsize role older generations play in civic life, and demographic changes that are increasing their power and influence.

People 65 and older will soon make up the largest single age group in the United States, and will remain that way for decades to come, according to the US Census. This massive demographic shift is occurring when this age group is moving online and onto Facebook in droves, deeply struggling with digital literacy, and being targeted by a wide range of online bad actors who try to feed them fake news, infect their devices with malware, and steal their money in scams. Yet older people are largely being left out of what has become something of a golden age for digital literacy efforts.

Since the 2016 election, funding for digital literacy programs has skyrocketed. Apple just announced a major donation to the News Literacy Project and two related initiatives, and Facebook partners with similar organizations. But they primarily focus on younger demographics, even as the next presidential election grows closer. [Continue reading…]

A new age of warfare: How Internet mercenaries do battle for authoritarian governments

The New York Times reports:

The man in charge of Saudi Arabia’s ruthless campaign to stifle dissent went searching for ways to spy on people he saw as threats to the kingdom. He knew where to go: a secretive Israeli company offering technology developed by former intelligence operatives.

It was late 2017 and Saud al-Qahtani — then a top adviser to Saudi Arabia’s powerful crown prince — was tracking Saudi dissidents around the world, part of his extensive surveillance efforts that ultimately led to the killing of the journalist Jamal Khashoggi. In messages exchanged with employees from the company, NSO Group, Mr. al-Qahtani spoke of grand plans to use its surveillance tools throughout the Middle East and Europe, like Turkey and Qatar or France and Britain.

The Saudi government’s reliance on a firm from Israel, an adversary for decades, offers a glimpse of a new age of digital warfare governed by few rules and of a growing economy, now valued at $12 billion, of spies for hire.

Today even the smallest countries can buy digital espionage services, enabling them to conduct sophisticated operations like electronic eavesdropping or influence campaigns that were once the preserve of major powers like the United States and Russia. Corporations that want to scrutinize competitors’ secrets, or a wealthy individual with a beef against a rival, can also command intelligence operations for a price, akin to purchasing off-the-shelf elements of the National Security Agency or the Mossad. [Continue reading…]

How social media’s business model helped the New Zealand massacre go viral

The Washington Post reports:

The ability of Internet users to spread a video of Friday’s slaughter in New Zealand marked a triumph — however appalling — of human ingenuity over computerized systems designed to block troubling images of violence and hate.

People celebrating the mosque attacks that left 50 people dead were able to keep posting and reposting videos on Facebook, YouTube and Twitter despite the websites’ use of largely automated systems powered by artificial intelligence to block them. Clips of the attack stayed up for many hours and, in some cases, days.
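Part of why the clips kept resurfacing is that automated takedown generally works by fingerprinting known footage, and even small edits (a crop, a re-encode, an overlaid caption) change the fingerprint. Here is a toy sketch of one common kind of fingerprint, a perceptual “difference hash,” assuming the Pillow imaging library and hypothetical frame files; it is an illustration, not any platform’s actual pipeline.

```python
# Sketch: near-identical frames hash to nearby values; edited copies drift
# apart in Hamming distance and can slip past a strict match threshold.
from PIL import Image

def dhash(image, hash_size=8):
    """64-bit difference hash: compares adjacent pixels of a shrunken grayscale frame."""
    gray = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(gray.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

reference = dhash(Image.open("known_bad_frame.png"))   # hypothetical reference frame
candidate = dhash(Image.open("reuploaded_frame.png"))  # hypothetical re-upload
print("match" if hamming(reference, candidate) <= 10 else "no match")
```

Matching within a small Hamming distance catches straight re-uploads, but heavier edits push the distance past any workable threshold, which is roughly the cat-and-mouse game the article describes.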

This failure has highlighted Silicon Valley’s struggles to police platforms that are massively lucrative yet also persistently vulnerable to outside manipulation despite years of promises to do better.

Friday’s uncontrolled spread of horrific videos — a propaganda coup for those espousing hateful ideologies — also raised questions about whether social media can be made safer without undermining business models that rely on the speed and volume of content uploaded by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.

“It’s an uncontrollable digital Frankenstein,” said Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology. [Continue reading…]

Companies use your data to make money. California thinks you should get paid

CNN reported in February:

People give massive amounts of their personal data to companies for free every day. Some economists, academics and activists think they should be paid for their contributions.

Called data dividends, or sometimes digital or technology dividends, the somewhat obscure idea got a boost on Feb 12 from an unexpected source: California’s new governor, Gavin Newsom.

“California’s consumers should … be able to share in the wealth that is created from their data. And so I’ve asked my team to develop a proposal for a new data dividend for Californians, because we recognize that your data has value and it belongs to you,” said Newsom during his annual State of the State speech.

The concept is based in part on an existing model in Alaska where residents receive payment for their share of the state’s oil-royalties fund dividend each fall. The payouts, which can vary from hundreds of dollars to a couple thousand dollars per person, have become a regular part of the state’s economy. [Continue reading…]

Without humans, AI can wreak havoc

Katherine Maher, chief executive and executive director of the Wikimedia Foundation, writes:

Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people. Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I. can put society on autopilot that may not consider our dearest values.

Without humans, A.I. can wreak havoc. A glaring example was Amazon’s A.I.-driven human resources software that was supposed to surface the best job candidates, but ended up being biased against women. Built using past resumes submitted to Amazon, most of which came from men, the program concluded men were preferable to women.

Rather than replacing humans, A.I. is best used to support our capacity for creativity and discernment. Wikipedia is creating A.I. that will flag potentially problematic edits — like a prankster vandalizing a celebrity’s page — to a human who can then step in. The system can also help our volunteer editors evaluate a newly created page or suggest superb pages for featuring. In short, A.I. that is deployed by and for humans can improve the experience of both people consuming information and those producing it. [Continue reading…]
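Wikimedia’s edit-quality scoring service, ORES, is the public face of the flagging work Maher describes. A minimal sketch of how a review tool might query it, assuming the v3 scoring endpoint and its usual JSON shape; the revision IDs below are hypothetical.

```python
# Sketch: score revisions with the "damaging" model and flag the risky ones
# for a human editor, rather than reverting automatically.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

def flag_for_review(rev_ids, threshold=0.7):
    """Return (revision id, score) pairs whose 'damaging' probability exceeds the threshold."""
    resp = requests.get(ORES_URL, params={
        "models": "damaging",
        "revids": "|".join(str(r) for r in rev_ids),
    })
    resp.raise_for_status()
    scores = resp.json()["enwiki"]["scores"]
    flagged = []
    for rev_id in rev_ids:
        prob = scores[str(rev_id)]["damaging"]["score"]["probability"]["true"]
        if prob >= threshold:
            flagged.append((rev_id, prob))
    return flagged  # hand these to a human editor rather than auto-reverting

print(flag_for_review([123456789, 987654321]))  # hypothetical revision IDs
```

Keeping the final decision with a human editor is exactly the division of labor Maher argues for: the model triages, people judge.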

U.S. Cyber Command operation disrupted Internet access of Russian troll factory on day of 2018 midterms

The Washington Post reports:

The U.S. military blocked Internet access to an infamous Russian entity seeking to sow discord among Americans during the 2018 midterms, several U.S. officials said, a warning that the Kremlin’s operations against the United States are not cost-free.

The strike on the Internet Research Agency in St. Petersburg, a company underwritten by an oligarch close to President Vladimir Putin, was part of the first offensive cyber campaign against Russia designed to thwart attempts to interfere with a U.S. election, the officials said.

“They basically took the IRA offline,” according to one individual familiar with the matter who, like others, spoke on the condition of anonymity to discuss classified information. “They shut them down.”

The operation marked the first muscle-flexing by U.S. Cyber Command, with intelligence from the National Security Agency, under new authorities it was granted by President Trump and Congress last year to bolster offensive capabilities.

Whether the impact of the St. Petersburg action will be long-lasting remains to be seen. Russia’s tactics are evolving, and some analysts were skeptical the strike would deter the Russian troll factory or Putin, who, according to U.S. intelligence officials, ordered an “influence” campaign in 2016 to undermine faith in U.S. democracy. U.S. officials have also assessed that the Internet Research Agency works on behalf of the Kremlin.

“Such an operation would be more of a pinprick that is more annoying than deterring in the long run,” said Thomas Rid, a strategic-studies professor at Johns Hopkins University who was not briefed on the details. [Continue reading…]

AI that writes convincing prose risks mass-producing fake news

MIT Technology Review reports:

Here’s some breaking fake news …

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

That story is, in fact, not only fake, but a troubling example of just how good AI is getting at fooling us.

That’s because it wasn’t written by a person; it was auto-generated by an algorithm fed the words “Russia has declared war on the United States after Donald Trump accidentally …”

The program made the rest of the story up on its own. And it can make up realistic-seeming news reports on any topic you give it. The program was developed by a team at OpenAI, a research institute based in San Francisco.

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. [Continue reading…]
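The technique behind the fake story is standard autoregressive language modeling: condition on a prompt, then repeatedly sample the next word. A rough sketch using the small GPT-2 weights OpenAI did release publicly, via the Hugging Face transformers library; the larger model behind the quoted example was withheld at the time, so the output here will differ.

```python
# Sketch: generate a continuation of the article's seed sentence with GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("Russia has declared war on the United States after "
          "Donald Trump accidentally")
inputs = tokenizer(prompt, return_tensors="pt")

# Sample token by token; top-k sampling keeps the text fluent without
# always picking the single most likely word.
output = model.generate(**inputs, max_length=120, do_sample=True, top_k=40,
                        temperature=0.8, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```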

Targeted advertising is ruining the internet and breaking the world

Nathalie Maréchal writes:

In his testimony to the US Senate last spring, Facebook CEO Mark Zuckerberg emphasized that his company doesn’t sell user data, as if to reassure policymakers and the public. But the reality—that Facebook, Google, Twitter, and other social media companies sell access to our attention—is just as concerning. Actual user information may not change hands, but the advertising business model drives company decision making in ways that are ultimately toxic to society. As sociologist Zeynep Tufekci put it in her 2017 TED talk, “we’re building a dystopia just to make people click on ads.”

Social media companies are advertising companies. This has never been a secret, of course. Google pioneered the targeted advertising business model in the late 90s, and Sheryl Sandberg brought the practice to Facebook in 2008 when she joined the company as chief operating officer. The cash was flowing in, and companies around Silicon Valley and beyond adopted the same basic strategy: first, grow the user base as quickly as possible without worrying about revenue; second, collect as much data as possible about the users; third, monetize that information by performing big data analytics in order to show users advertising that is narrowly tailored to their demographics and revealed interests; fourth, profit.
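To make the third step less abstract, here is a toy sketch of interest-based ad selection. Real systems run real-time auctions over far richer behavioral profiles; every name and number below is invented.

```python
# Sketch: score each ad against a user's "revealed interests" and show the best match.
from collections import Counter

# Clicks per topic, standing in for a user's revealed interests.
user_interests = Counter({"cats": 5, "halloween": 2, "travel": 1})

# Each ad is described by the topics it targets and how heavily.
ads = {
    "pet_food_promo":   Counter({"cats": 3, "dogs": 3}),
    "costume_retailer": Counter({"halloween": 4, "crafts": 1}),
    "airline_sale":     Counter({"travel": 5}),
}

def score(profile, ad_topics):
    """Dot product of interest weights: higher means a closer match."""
    return sum(profile[topic] * weight for topic, weight in ad_topics.items())

best_ad = max(ads, key=lambda name: score(user_interests, ads[name]))
print(best_ad)  # -> 'pet_food_promo' for this made-up profile
```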

For a while this seemed like a win-win: people around the world could watch cat videos, see pictures of each other’s babies in Halloween costumes, connect with family, friends, and colleagues around the globe, and more. In return, companies would show them ads that were actually relevant to them. Contextual advertising had supported the print and broadcast media for decades, so this was the logical next step. What could possibly go wrong?

Plenty, as it turns out. From today’s vantage point, the Arab Spring stands out as an iconic cautionary tale of techno-utopianism gone wrong. Sure, would-be revolutionaries, reformers, and human rights defenders were among the first to master the power of what we used to call “Web 2.0,” but authorities caught on quickly and used the new tools to crack down on threats to their grasp on power. Similarly, the 2008 Obama campaign was the first to harness online advertising to reach the right voters with the right message with near-surgical precision, but 10 years later the same techniques are propelling right-wing authoritarians to power in the US, the Philippines, and Brazil, and being used to fan the flames of xenophobia, racial hatred, and even genocide around the world—perhaps most devastatingly in Myanmar. How on Earth did we get here? [Continue reading…]