How an aging, digitally semi-literate population is reshaping the internet and politics

BuzzFeed reports:

Although many older Americans have, like the rest of us, embraced the tools and playthings of the technology industry, a growing body of research shows they have disproportionately fallen prey to the dangers of internet misinformation and risk being further polarized by their online habits. While that matters much to them, it’s also a massive challenge for society given the outsize role older generations play in civic life, and demographic changes that are increasing their power and influence.

People 65 and older will soon make up the largest single age group in the United States, and will remain that way for decades to come, according to the US Census. This massive demographic shift is occurring when this age group is moving online and onto Facebook in droves, deeply struggling with digital literacy, and being targeted by a wide range of online bad actors who try to feed them fake news, infect their devices with malware, and steal their money in scams. Yet older people are largely being left out of what has become something of a golden age for digital literacy efforts.

Since the 2016 election, funding for digital literacy programs has skyrocketed. Apple just announced a major donation to the News Literacy Project and two related initiatives, and Facebook partners with similar organizations. But they primarily focus on younger demographics, even as the next presidential election grows closer. [Continue reading…]

A new age of warfare: How Internet mercenaries do battle for authoritarian governments

The New York Times reports:

The man in charge of Saudi Arabia’s ruthless campaign to stifle dissent went searching for ways to spy on people he saw as threats to the kingdom. He knew where to go: a secretive Israeli company offering technology developed by former intelligence operatives.

It was late 2017 and Saud al-Qahtani — then a top adviser to Saudi Arabia’s powerful crown prince — was tracking Saudi dissidents around the world, part of his extensive surveillance efforts that ultimately led to the killing of the journalist Jamal Khashoggi. In messages exchanged with employees from the company, NSO Group, Mr. al-Qahtani spoke of grand plans to use its surveillance tools throughout the Middle East and Europe, like Turkey and Qatar or France and Britain.

The Saudi government’s reliance on a firm from Israel, an adversary for decades, offers a glimpse of a new age of digital warfare governed by few rules and of a growing economy, now valued at $12 billion, of spies for hire.

Today even the smallest countries can buy digital espionage services, enabling them to conduct sophisticated operations like electronic eavesdropping or influence campaigns that were once the preserve of major powers like the United States and Russia. Corporations that want to scrutinize competitors’ secrets, or a wealthy individual with a beef against a rival, can also command intelligence operations for a price, akin to purchasing off-the-shelf elements of the National Security Agency or the Mossad. [Continue reading…]

How social media’s business model helped the New Zealand massacre go viral

The Washington Post reports:

The ability of Internet users to spread a video of Friday’s slaughter in New Zealand marked a triumph — however appalling — of human ingenuity over computerized systems designed to block troubling images of violence and hate.

People celebrating the mosque attacks that left 50 people dead were able to keep posting and reposting videos on Facebook, YouTube and Twitter despite the websites’ use of largely automated systems powered by artificial intelligence to block them. Clips of the attack stayed up for many hours and, in some cases, days.

This failure has highlighted Silicon Valley’s struggles to police platforms that are massively lucrative yet also persistently vulnerable to outside manipulation despite years of promises to do better.

Friday’s uncontrolled spread of horrific videos — a propaganda coup for those espousing hateful ideologies — also raised questions about whether social media can be made safer without undermining business models that rely on the speed and volume of content uploaded by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.

“It’s an uncontrollable digital Frankenstein,” said Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology. [Continue reading…]

Companies use your data to make money. California thinks you should get paid

CNN reported in February:

People give massive amounts of their personal data to companies for free every day. Some economists, academics and activists think they should be paid for their contributions.

Called data dividends, or sometimes digital or technology dividends, the somewhat obscure idea got a boost on Feb. 12 from an unexpected source: California’s new governor, Gavin Newsom.

“California’s consumers should … be able to share in the wealth that is created from their data. And so I’ve asked my team to develop a proposal for a new data dividend for Californians, because we recognize that your data has value and it belongs to you,” said Newsom during his annual State of the State speech.

The concept is based in part on an existing model in Alaska, where residents receive payment for their share of the state’s oil-royalties fund dividend each fall. The payouts, which can vary from hundreds of dollars to a couple thousand dollars per person, have become a regular part of the state’s economy. [Continue reading…]

Without humans, AI can wreak havoc

Katherine Maher, chief executive and executive director of the Wikimedia Foundation, writes:

Too often, artificial intelligence is presented as an all-powerful solution to our problems, a scalable replacement for people. Companies are automating nearly every aspect of their social interfaces, from creating to moderating to personalizing content. At its worst, A.I. can put society on an autopilot that may not consider our dearest values.

Without humans, A.I. can wreak havoc. A glaring example was Amazon’s A.I.-driven human resources software that was supposed to surface the best job candidates, but ended up being biased against women. Built using past resumes submitted to Amazon, most of which came from men, the program concluded men were preferable to women.

Rather than replacing humans, A.I. is best used to support our capacity for creativity and discernment. Wikipedia is creating A.I. that will flag potentially problematic edits — like a prankster vandalizing a celebrity’s page — to a human who can then step in. The system can also help our volunteer editors evaluate a newly created page or suggest superb pages for featuring. In short, A.I. that is deployed by and for humans can improve the experience of both people consuming information and those producing it. [Continue reading…]
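
The excerpt doesn’t detail Wikipedia’s actual tooling, but the human-in-the-loop pattern it describes is easy to illustrate. Below is a hypothetical sketch, not the foundation’s system: a classifier scores each edit for likely damage, and only edits the model is confident about are surfaced to a human reviewer; nothing is reverted automatically. The feature names and synthetic training data are invented for the example.

```python
# Hypothetical human-in-the-loop triage sketch (not Wikipedia's actual system):
# a classifier scores edits for likely damage, and anything above a confidence
# threshold is routed to a human reviewer rather than acted on automatically.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented per-edit signals:
# chars_added, chars_removed, is_anonymous (0/1), profanity_hits, account_age_minutes
X = rng.random((500, 5)) * [2000, 2000, 1, 5, 100000]
X[:, 2] = (X[:, 2] > 0.5).astype(float)  # make the anonymity column binary

# Synthetic labels: anonymous edits with many profanity hits tend to be "damaging"
y = (0.4 * (X[:, 3] > 2) + 0.4 * X[:, 2] + 0.4 * rng.random(500) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def triage(edit_features, review_threshold=0.7):
    """Route an edit to a human only when the model is fairly confident it is
    damaging; everything else is left alone so people, not the model, decide."""
    p_damaging = model.predict_proba([edit_features])[0][1]
    return ("flag_for_human_review" if p_damaging >= review_threshold else "no_action",
            round(p_damaging, 2))

# A large anonymous deletion with several profanity hits gets surfaced to a reviewer;
# a small edit from an established account does not.
print(triage([10, 1800, 1, 4, 30]))
print(triage([40, 5, 0, 0, 90000]))
```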

U.S. Cyber Command operation disrupted Internet access of Russian troll factory on day of 2018 midterms

The Washington Post reports:

The U.S. military blocked Internet access to an infamous Russian entity seeking to sow discord among Americans during the 2018 midterms, several U.S. officials said, a warning that the Kremlin’s operations against the United States are not cost-free.

The strike on the Internet Research Agency in St. Petersburg, a company underwritten by an oligarch close to President Vladimir Putin, was part of the first offensive cyber campaign against Russia designed to thwart attempts to interfere with a U.S. election, the officials said.

“They basically took the IRA offline,” according to one individual familiar with the matter who, like others, spoke on the condition of anonymity to discuss classified information. “They shut them down.”

The operation marked the first muscle-flexing by U.S. Cyber Command, with intelligence from the National Security Agency, under new authorities it was granted by President Trump and Congress last year to bolster offensive capabilities.

Whether the impact of the St. Petersburg action will be long-lasting remains to be seen. Russia’s tactics are evolving, and some analysts were skeptical the strike would deter the Russian troll factory or Putin, who, according to U.S. intelligence officials, ordered an “influence” campaign in 2016 to undermine faith in U.S. democracy. U.S. officials have also assessed that the Internet Research Agency works on behalf of the Kremlin.

“Such an operation would be more of a pinprick that is more annoying than deterring in the long run,” said Thomas Rid, a strategic-studies professor at Johns Hopkins University who was not briefed on the details. [Continue reading…]

AI that writes convincing prose risks mass-producing fake news

MIT Technology Review reports:

Here’s some breaking fake news …

Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.

Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.

The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.

That story is, in fact, not only fake, but a troubling example of just how good AI is getting at fooling us.

That’s because it wasn’t written by a person; it was auto-generated by an algorithm fed the words “Russia has declared war on the United States after Donald Trump accidentally …”

The program made the rest of the story up on its own. And it can make up realistic-seeming news reports on any topic you give it. The program was developed by a team at OpenAI, a research institute based in San Francisco.

The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. [Continue reading…]
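
For illustration, here is a minimal sketch of the prompt-completion mechanic the article describes, using the Hugging Face transformers library and the publicly released small “gpt2” checkpoint. This setup is an assumption for demonstration purposes, not the larger model behind OpenAI’s demo, so its output will be noticeably cruder.

```python
# Hedged sketch: prompt completion with the publicly released small "gpt2"
# checkpoint via the Hugging Face transformers library. This is NOT the exact
# model or tooling OpenAI used in the demo described above.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = ("Russia has declared war on the United States "
          "after Donald Trump accidentally ")

# The model continues the prompt token by token, sampling from its learned
# distribution over likely next words.
result = generator(prompt, max_length=120, num_return_sequences=1)
print(result[0]["generated_text"])
```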

Targeted advertising is ruining the internet and breaking the world

Nathalie Maréchal writes:

In his testimony to the US Senate last spring, Facebook CEO Mark Zuckerberg emphasized that his company doesn’t sell user data, as if to reassure policymakers and the public. But the reality—that Facebook, Google, Twitter, and other social media companies sell access to our attention—is just as concerning. Actual user information may not change hands, but the advertising business model drives company decision making in ways that are ultimately toxic to society. As sociologist Zeynep Tufekci put it in her 2017 TED talk, “we’re building a dystopia just to make people click on ads.”

Social media companies are advertising companies. This has never been a secret, of course. Google pioneered the targeted advertising business model in the late 90s, and Sheryl Sandberg brought the practice to Facebook in 2008 when she joined the company as chief operating officer. The cash was flowing in, and companies around Silicon Valley and beyond adopted the same basic strategy: first, grow the user base as quickly as possible without worrying about revenue; second, collect as much data as possible about the users; third, monetize that information by performing big data analytics in order to show users advertising that is narrowly tailored to their demographics and revealed interests; fourth, profit.

For a while this seemed like a win-win: people around the world could watch cat videos, see pictures of each other’s babies in Halloween costumes, connect with family, friends, and colleagues around the globe, and more. In return, companies would show them ads that were actually relevant to them. Contextual advertising had supported the print and broadcast media for decades, so this was the logical next step. What could possibly go wrong?

Plenty, as it turns out. From today’s vantage point, the Arab Spring stands out as an iconic cautionary tale of techno-utopianism gone wrong. Sure, would-be revolutionaries, reformers, and human rights defenders were among the first to master the power of what we used to call “Web 2.0,” but authorities caught on quickly and used the new tools to crack down on threats to their grasp on power. Similarly, the 2008 Obama campaign was the first to harness online advertising to reach the right voters with the right message with near-surgical precision, but 10 years later the same techniques are propelling right-wing authoritarians to power in the US, the Philippines, and Brazil, and being used to fan the flames of xenophobia, racial hatred, and even genocide around the world—perhaps most devastatingly in Myanmar. How on Earth did we get here? [Continue reading…]

U.S. joins Russia, North Korea in refusing to sign cybersecurity pact

Caroline Orr reports:

More than 50 countries signed onto a historic cybersecurity pact Monday as part of the Paris Peace Forum, marking an important step forward in the global fight against cyberwarfare and criminal activity on the internet.

In addition to the governments that pledged to work together to combat malicious online activities, at least 150 tech companies and 90 charitable organizations and universities also signed onto the agreement.

However, there were a few notable absences from the list of signatories. Among the countries that declined to pledge support for the global pact were the repressive regimes of Russia, China, and North Korea — and the United States.

The agreement, known as the “Paris Call for Trust and Security in Cyberspace,” represents the largest and most coordinated effort to date to create a set of international laws and norms for cyberwarfare and security — akin to a Geneva Convention for the digital world. [Continue reading…]

Bitcoin: Are we really going to burn up the world for libertarian nerdbucks?

Eric Holthaus writes:

The continued growth of power-hungry Bitcoin could lock in catastrophic climate change, according to a new study.

The cryptocurrency’s growth, should it follow the adoption path of other widely used technologies (like credit cards and air conditioning), would alone be enough to push the planet to 2 degrees Celsius of warming, the red line the world agreed to in the 2015 Paris climate accord.

Bitcoin essentially converts electricity into cash, via incredibly complex math problems designed to eliminate the need for government-sponsored currencies. It’s made a lot of bros rich over the past few years, but it’s also raised some significant concerns about the ethics of sucking up excess energy on a finite planet.

The libertarian nerdbucks account for only a tiny fraction (0.033 percent) of global transactions right now, but their rapid growth and already sizable energy usage are worrisome. This latest study, from researchers at the University of Hawaii-Manoa, adds to the pile of evidence that Bitcoin needs to cut down dramatically on energy use — or risk taking down our chances for a clean energy future with it. [Continue reading…]
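
The “incredibly complex math problems” Holthaus refers to are Bitcoin’s proof-of-work: miners race to find a nonce whose hashed block falls below a network-set target, and the only available strategy is brute-force guessing, which is why more mining power means more electricity burned per block. The toy sketch below illustrates the idea; it is not Bitcoin’s real block format or difficulty.

```python
# Toy proof-of-work sketch: find a nonce so that the double-SHA-256 digest of
# (block_data + nonce) has a given number of leading zero bits. Bitcoin's real
# scheme hashes an 80-byte block header against a dynamically adjusted target,
# but the energy argument is the same: the only strategy is brute force, so
# more mining power means more electricity consumed per block found.
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> tuple[int, str]:
    """Brute-force a nonce until the digest has `difficulty_bits` leading zero
    bits. Expected work doubles with every additional bit of difficulty."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = block_data + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

# Toy difficulty: 20 leading zero bits takes roughly a million hash attempts.
# Bitcoin's network difficulty is many orders of magnitude higher, which is
# where the electricity goes.
nonce, digest = mine(b"example block", difficulty_bits=20)
print(f"nonce={nonce} hash={digest}")
```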