Zuckerberg gaslights Congress

Kevin Poulsen writes:

Facebook was warned five years ago that the “reverse-lookup” feature in its search engine could be used to harvest names, profiles, and phone numbers for virtually all its users. But the company ignored the red flags until last week, after the mass harvesting had already taken place.

In prepared testimony to Congress released Monday, Mark Zuckerberg acknowledged that malefactors had used the reverse-lookup “to link people’s public Facebook information to a phone number” (PDF). “When we found out about the abuse, we shut this feature down,” he wrote. He said that Facebook only discovered the incidents two weeks ago.

Zuckerberg is set to testify at a joint hearing before the Senate’s Judiciary and Commerce committees on Tuesday, and then return to Capitol Hill on Wednesday to appear before the House Energy and Commerce Committee. This will be the first time Facebook’s billionaire founder and CEO has ever appeared before Congress. Last fall the company’s vice president and general counsel Colin Stretch appeared at the hearings probing Russia’s election interference campaign.

The hearings are a response to last month’s revelations that Cambridge Analytica, a U.K.-based consulting firm that worked for the Trump campaign, harvested data on as many as 87 million Facebook users without their knowledge.

Facebook revealed the separate reverse-lookup data spill while responding to the Cambridge Analytica controversy.

The issue was that Facebook allowed users to find anyone on the site by entering either their phone number or email address. In 2010, computer science researchers in Greece showed how spammers could use that feature to validate address lists and “craft personalized phishing emails that are far more efficient than traditional techniques by using personal information publicly available in social networks” (PDF).

But Zuckerberg’s written testimony reveals for the first time that it was phone number lookups that were used in the large-scale scraping. That’s a more potent weapon for bulk harvesting, because a data miner can programmatically cycle through every possible phone number to get a complete corpus. With some exceptions—custom privacy settings or accounts with no phone number attached—sequential mining would yield every Facebook profile. [Continue reading…]
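The mechanics of sequential mining are simple to illustrate. The sketch below is hypothetical: `reverse_lookup` is a stub standing in for the now-disabled Facebook feature, and `candidate_numbers` merely enumerates the finite local-number space under one US area code.

```python
# Hypothetical sketch of why sequential phone-number mining is exhaustive:
# iterate the entire numbering space and query a reverse-lookup endpoint
# for each candidate. reverse_lookup is a stub, not a real API.

def candidate_numbers(area_code: str):
    """Yield every possible 7-digit local number under one US area code."""
    for n in range(10_000_000):
        yield f"+1{area_code}{n:07d}"

def harvest(numbers, reverse_lookup):
    """Map each number that resolves to a profile; skip the rest."""
    results = {}
    for number in numbers:
        profile = reverse_lookup(number)
        if profile is not None:
            results[number] = profile
    return results
```

Because the number space is finite, the loop terminates with a complete corpus; only rate limiting or disabling the lookup entirely, as Facebook ultimately did, interrupts it.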

 

Don’t miss the latest posts at Attention to the Unseen: Sign up for email updates.

Two-thirds of tweets linking to popular websites come from bots, not humans

Pew Research Center reports:

The role of so-called social media “bots” – automated accounts capable of posting content or interacting with other users with no direct human involvement – has been the subject of much scrutiny and attention in recent years. These accounts can play a valuable part in the social media ecosystem by answering questions about a variety of topics in real time or providing automated updates about news stories or events. At the same time, they can also be used to attempt to alter perceptions of political discourse on social media, spread misinformation, or manipulate online rating and review systems. As social media has attained an increasingly prominent position in the overall news and information environment, bots have been swept up in the broader debate over Americans’ changing news habits, the tenor of online discourse and the prevalence of “fake news” online.

In the context of these ongoing arguments over the role and nature of bots, Pew Research Center set out to better understand how many of the links being shared on Twitter – most of which refer to a site outside the platform itself – are being promoted by bots rather than humans. To do this, the Center used a list of 2,315 of the most popular websites and examined the roughly 1.2 million tweets (sent by English-language users) that included links to those sites during a roughly six-week period in summer 2017. The results illustrate the pervasive role that automated accounts play in disseminating links to a wide range of prominent websites on Twitter.

Among the key findings of this research:

  • Of all tweeted links to popular websites, 66% are shared by accounts with characteristics common among automated “bots,” rather than human users.
  • Among popular news and current event websites, 66% of tweeted links are made by suspected bots – identical to the overall average. The share of bot-created tweeted links is even higher among certain kinds of news sites. For example, an estimated 89% of tweeted links to popular aggregation sites that compile stories from around the web are posted by bots.
  • A relatively small number of highly active bots are responsible for a significant share of links to prominent news and media sites. This analysis finds that the 500 most-active suspected bot accounts are responsible for 22% of the tweeted links to popular news and current events sites over the period in which this study was conducted. By comparison, the 500 most-active human users are responsible for a much smaller share (an estimated 6%) of tweeted links to these outlets.
  • The study does not find evidence that automated accounts currently have a liberal or conservative “political bias” in their overall link-sharing behavior. This emerges from an analysis of the subset of news sites that contain politically oriented material. Suspected bots share roughly 41% of links to political sites shared primarily by conservatives and 44% of links to political sites shared primarily by liberals – a difference that is not statistically significant. By contrast, suspected bots share 57% to 66% of links from news and current events sites shared primarily by an ideologically mixed or centrist human audience.
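The claim above that the 41% vs. 44% split is "not statistically significant" can be checked with a standard two-proportion z-test. The counts below are invented for illustration, since this summary does not report the underlying sample sizes.

```python
# Two-sided two-proportion z-test, implemented from scratch with the
# standard pooled-variance formula. The counts used are hypothetical.
from math import sqrt, erf

def two_proportion_z(count1, n1, count2, n2):
    """Return (z statistic, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = count1 / n1, count2 / n2
    pooled = (count1 + count2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# With 1,000 hypothetical links per side, 41% vs. 44% cannot be
# distinguished at the usual 5% level.
z, p = two_proportion_z(410, 1000, 440, 1000)
```

Under these assumed counts the p-value lands well above 0.05, consistent with Pew's conclusion; the real test depends on the actual link counts, which the summary does not give.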

[Continue reading…]

Regulating the invisible ecosystem where thousands of firms possess data on billions of people

Jonathan Zittrain writes:

Currently there is no way for us to retract information that previously seemed harmless to share. Once tied to our identities, data about us can be part of our permanent record in the hands of whoever has it — and whomever they share it with, voluntarily or otherwise. The Cambridge Analytica data set from Facebook is itself but a lake within an ocean, a clarifying example of a pervasive but invisible ecosystem where thousands of firms possess billions of data points across hundreds of millions of people — and are able to do lots with it under the public radar.

Several years ago Facebook started to limit what apps could scrape from friends’ profiles even with permission, but the basic configuration of user consent as a bulwark against abuse hasn’t changed. Consent just doesn’t work. It’s asking too much of us to meaningfully respond to dialogue boxes with fine print as we try to work or enjoy ourselves online — and even that is with the naïve assumption that the promises on which our consent was premised will be kept.

There are several technical and legal advances that could make a difference.

On the policy front, we should look to how the law treats professionals with specialized skills who get to know clients’ troubles and secrets intimately. For example, doctors and lawyers draw lots of sensitive information from, and wield a lot of power over, their patients and clients. There’s not only an ethical trust relationship there but also a legal one: that of a “fiduciary,” which at its core means that the professionals are obliged to place their clients’ interests ahead of their own.

The legal scholar Jack Balkin has convincingly argued that companies like Facebook and Twitter are in a similar relationship of knowledge about, and power over, their users — and thus should be considered “information fiduciaries.”

Doctors don’t ask patients whether they’d consent to poison over a cure; they recommend what they genuinely believe to be in the patients’ interests. Too often a question of “This app would like to access data about you, O.K.?” is really to ask, “This app would like to abuse your personal data, O.K.?” Users should be respected by protecting them from requests made in bad faith. [Continue reading…]


Facebook sent a doctor on a secret mission to ask hospitals to share patient data

CNBC reports:

Facebook has asked several major U.S. hospitals to share anonymized data about their patients, such as illnesses and prescription info, for a proposed research project. Facebook was intending to match it up with user data it had collected, and help the hospitals figure out which patients might need special care or treatment.

The proposal never went past the planning phases and has been put on pause after the Cambridge Analytica data leak scandal raised public concerns over how Facebook and others collect and use detailed information about Facebook users.

“This work has not progressed past the planning phase, and we have not received, shared, or analyzed anyone’s data,” a Facebook spokesperson told CNBC.

But as recently as last month, the company was talking to several health organizations, including Stanford Medical School and American College of Cardiology, about signing the data-sharing agreement. [Continue reading…]

Kalev Leetaru writes:

Over the past year I have repeatedly asked Facebook for its stance on bulk harvesting and research use of its users’ data. Last February I asked the company if it had comment on the mass harvesting of data by commercial enterprises for political purposes and whether it had any policies prohibiting the use of personality quizzes or other apps that bulk harvested profiles. In June I asked it, in light of all of the ways Facebook itself was conducting research on its users, whether it might consider offering users the right to opt-out of having their personal data exploited by Facebook for research. In September, in the aftermath of the controversial “gaydar” study that claimed to be able to estimate someone’s sexual orientation from their photo and used a large volume of harvested Facebook data, I asked whether the work’s mass harvesting of profile photos was of concern to the company. Just last month I asked whether Facebook was planning to request that large holders of data harvested from the platform delete their archives or whether it planned to request that bulk Facebook datasets available for download be restricted to university researchers and exclude commercial researchers. Not to mention countless other requests for comment about various Facebook research uses of private user data. In every case the company’s response was silence.

If Facebook was so concerned about bulk harvesting and use of its users’ data, it certainly would seem that the company would have taken every opportunity to state that bulk harvesting, archival and commercial exploitation of private user data was something it was concerned about. It could comment that it was working to identify bulk harvesting, to request that companies and universities delete those archives or that it was asking that universities restrict access to the large harvested datasets they make available for download, limiting them to academic and not commercial uses. Instead, radio silence until the company lost control of the privacy narrative and suddenly decided now was the time to say it was shocked by how its data was being harvested and would take steps to rein it in. [Continue reading…]


Mark Zuckerberg’s exceptional ability to retract his own words

TechCrunch reports:

You can’t remove Facebook messages from the inboxes of people you sent them to, but Facebook did that for Mark Zuckerberg and other executives. Three sources confirm to TechCrunch that old Facebook messages they received from Zuckerberg have disappeared from their Facebook inboxes, while their own replies to him conspicuously remain. An email receipt of a Facebook message from 2010 reviewed by TechCrunch proves Zuckerberg sent people messages that no longer appear in their Facebook chat logs or in the files available from Facebook’s Download Your Information tool.

When asked by TechCrunch about the situation, Facebook claimed it was done for corporate security in this statement:

“After Sony Pictures’ emails were hacked in 2014 we made a number of changes to protect our executives’ communications. These included limiting the retention period for Mark’s messages in Messenger. We did so in full compliance with our legal obligations to preserve messages.”

However, Facebook never publicly disclosed the removal of messages from users’ inboxes, nor privately informed the recipients. That raises the question of whether this was a breach of user trust. When asked that question directly over Messenger, Zuckerberg declined to provide a statement. [Continue reading…]


Robert Mercer backed a secretive group that worked with Facebook, Google to target anti-Muslim ads at swing voters

Center for Responsive Politics reports:

As the final weeks of the 2016 elections ticked down, voters in swing states like Nevada and North Carolina began seeing eerie promotional travel ads as they scrolled through their Facebook feeds or clicked through Google sites.

In one, a woman with a French accent cheerfully welcomes visitors to the “Islamic State of France,” where “under Sharia law, you can enjoy everything the Islamic State of France has to offer, as long as you follow the rules.”

The video has a Man in the High Castle feel. Iconic French tourist sites are both familiar and transformed — the Eiffel Tower is capped with a star and crescent and the spires of Notre-Dame are replaced with the domed qubba of a mosque.

The Mona Lisa is shown looking, the ad says, “as a woman should,” covered in a burka.

If it wasn’t already clear that the ad was meant to stoke viewers’ fears of imminent Muslim conquest, the video is interspersed with violent imagery. Three missiles are seen flying through the sky as the video opens. Blindfolded men are shown kneeling with guns pointed at their heads, and children are shown training with weapons “to defend the caliphate.” [Continue reading…]

Facebook said the personal data of most of its 2 billion users has been collected and shared with outsiders

The Washington Post reports:

Facebook said Wednesday that most of its 2 billion users likely have had their public profiles scraped by outsiders without the users’ explicit permission, dramatically raising the stakes in a privacy controversy that has dogged the company for weeks, spurred investigations in the United States and Europe, and sent the company’s stock price tumbling.

The acknowledgment was part of a broader disclosure by Facebook on Wednesday about the ways in which various levels of user data have been taken by everyone from malicious actors to ordinary app developers.

“We’re an idealistic and optimistic company, and for the first decade, we were really focused on all the good that connecting people brings,” Chief Executive Mark Zuckerberg said on a call with reporters Wednesday afternoon. “But it’s clear now that we didn’t focus enough on preventing abuse and thinking about how people could use these tools for harm as well.”

As part of the disclosure, Facebook for the first time detailed the scale of the improper data collection for Cambridge Analytica, a political data consultancy hired by President Trump and other Republican candidates in the last two federal election cycles. The political consultancy gained access to Facebook information on up to 87 million users, 71 million of whom are Americans, Facebook said. Cambridge Analytica obtained the data to build “psychographic” profiles that would help deliver targeted messages intended to shape voter behavior in a wide range of U.S. elections. [Continue reading…]

As Malaysia moves to ban ‘fake news,’ worries about who decides the truth

The New York Times reports:

In highway billboards and radio announcements, the government of Malaysia is warning of a new enemy: “fake news.”

On Monday, the lower house of Parliament passed a bill outlawing fake news, the first measure of its kind in the world. The proposal, which allows for up to six years in prison for publishing or circulating misleading information, is expected to pass the Senate this week and to come into effect soon after.

The legislation would punish not only those who are behind fake news but also anyone who maliciously spreads such material. Online service providers would be responsible for third-party content, and anyone could lodge a complaint. As long as Malaysia or Malaysians are affected, fake news generated outside the country is also subject to prosecution.

What qualifies as fake news, however, is ill defined. Ultimately, the government would be given broad latitude to decide what constitutes fact in Malaysia.

“Fake news has become a global phenomenon, but Malaysia is at the tip of the spear in trying to fight it with an anti-fake news law,” said Fadhlullah Suhaimi Abdul Malek, a senior official with the Malaysian Communications and Multimedia Commission. “When the American president made ‘fake news’ into a buzzword, the world woke up.”

But members of Malaysia’s political opposition say the legislation is intended to stifle free speech ahead of elections that are widely seen as a referendum on Prime Minister Najib Razak, who has been tainted by a scandal involving billions of dollars that were diverted from Malaysia’s state investment fund.

“Instead of a proper investigation into what happened, we have a ministry of truth being created,” said Nurul Izzah Anwar, a lawmaker from the People’s Justice Party and the daughter of the jailed opposition leader Anwar Ibrahim. [Continue reading…]

Russian bots are tweeting their support of embattled Fox News host Laura Ingraham

The Washington Post reports:

Embattled Fox News host Laura Ingraham has found some unlikely allies: Russian bots.

Russian-linked Twitter accounts have rallied around the conservative talk-show host, who has come under fire for attacking the young survivors of the Parkland, Fla., school shooting. According to the website Hamilton 68, which tracks the spread of Russian propaganda on Twitter, the hashtag #IstandwithLaura jumped 2,800 percent in 48 hours this weekend. On Saturday night, it was the top trending hashtag among Russian campaigners.

The website botcheck.me, which tracks 1,500 “political propaganda bots,” found that @ingrahamangle, @davidhogg111 and @foxnews were among the top six Twitter handles tweeted by Russia-linked accounts this weekend. “David Hogg” and “Laura Ingraham” were the top two-word phrases being shared.

Wading into controversy is a key strategy for Russian propaganda bots, which seize on divisive issues online to sow discord in the United States. Since the Feb. 14 Parkland shooting, which claimed 17 lives, Russian bots have flooded Twitter with false information about the massacre. [Continue reading…]

Are today’s teenagers smarter and better than we think?

Tara Parker-Pope writes:

Today’s teenagers have been raised on cellphones and social media. Should we worry about them or just get out of their way?

A recent wave of student protests around the country has provided a close-up view of Generation Z in action, and many adults have been surprised. While there has been much hand-wringing about this cohort, also called iGen or the Post-Millennials, the stereotype of a disengaged, entitled and social-media-addicted generation doesn’t match the poised, media-savvy and inclusive young people leading the protests and gracing magazine covers.

There’s 18-year-old Emma González, whose shaved head, impassioned speeches and torn jeans have made her the iconic face of the #NeverAgain movement, which developed after the 17 shooting deaths in February at Marjory Stoneman Douglas High School in Parkland, Fla. Naomi Wadler, just 11, became an overnight sensation after confidently telling a national television audience she represented “African-American girls whose stories don’t make the front page of every national newspaper.” David Hogg, a high school senior at Stoneman Douglas, has weathered numerous personal attacks with the disciplined calm of a seasoned politician.

Sure, these kids could be outliers. But plenty of adolescent researchers believe they are not.

“I think we must contemplate that technology is having the exact opposite effect than we perceived,” said Julie Lythcott-Haims, the former dean of freshmen at Stanford University and author of “How to Raise an Adult.” “We see the negatives of not going outside, can’t look people in the eye, don’t have to go through the effort of making a phone call. There are ways we see the deficiencies that social media has offered, but there are obviously tremendous upsides and positives as well.” [Continue reading…]
