Category Archives: social media
This past week, I wrote about Mike Hudack’s frustrated Facebook update about Vox and the general state of the media on Facebook, along with many others, and then posted an edited version on Tumblr, which then hit Mediagazer, the Pew Research Center’s daily briefing, and the Nieman Lab’s weekly digest of the week in news. It all felt a bit meta and unexpected for a short piece of quick analysis. What follows is an edited version of that initial update.
Before reacting to Hudack’s update, I’d found and shared a great feature on the passage of The DATA Act over on Facebook, after reading Matt Yglesias’s reply to Hudack, an advertising product manager at Facebook. That’s not uncommon: I discover great posts, analysis, research and even new data on Facebook frequently in 2014, both shared by friends and family and on various lists I’ve built. I’ve found that a lot of important news will find me, but not all of it, so I intentionally use other methods to discover it, from Twitter to RSS to Google News to reading print magazines and newspapers, listening to NPR and watching the PBS Newshour. I think about social media and the news differently than the average person, though, and I use Facebook and Twitter differently than other folks, too, sharing public updates across multiple platforms much more frequently than the average user. That means you should take the following with a grain of salt or two.
I’m sympathetic to his frustration: I’ve followed and written about the DATA Act for three years, during which time I saw negligible mainstream coverage of it, much like the current lack of coverage regarding the bipartisan FOIA Reform Act, which passed the House of Representatives unanimously this spring, despite the miserable state of Freedom of Information Act compliance in the federal government.
Vox’s jeans story, Yglesias points out, has been shared four times as much on Facebook as the one about how a bill became law in 2014, which suggests that what’s popular on the world’s biggest social network is a result of decisions its users are making, not the media sites that originate the stories. Reasonable people may differ on this point.
I’m on the media producer side of this equation, given my work, which makes me much more sympathetic to Vox’s side of the debate, along with the situation that faces many other media outlets. To Hudack’s point: yes, there’s a lot of dreck in the vast number of media outlets publishing today, from cable to broadcast to online. There’s also fantastic work from a number of outlets that Hudack didn’t list, many of which can be found attached to Pulitzer prizes and nominated for data journalism awards.
Here’s what Atlantic Media senior editor Alexis Madrigal said about it:
“My perception is that Facebook is *the* major factor in almost every trend you identified. I’m not saying this as a hater, but if you asked most people in media why we do these stories, they’d say, ‘They work on Facebook.’ And your own CEO has even provided an explanation for the phenomenon with his famed quote, ‘A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.’ This is not to say we (the (digital) media) don’t have our own pathologies, but Google and Facebook’s social and algorithmic influence dominate the ecology of our world.”
Like Google, Facebook can send vast amounts of traffic and readers to content producers, which creates a natural incentive to learn how to get the attention of those readers, create incentives for them to click and share, and how to game those systems as well, from search engine optimization (SEO) to social media optimization (SMO). (On the latter count, the reasons people *share* stories can differ from the reasons they *read* them, and the rate at which they share may diverge as a result.)
In both cases, however, a powerful, inscrutable, closely held algorithm is showing stories to people when they visit the platforms. On Google.com, the algorithm shows you links in response to a directed search. If you’re not anonymized, Google will personalize those results.
On Facebook’s newsfeed, the default environment that users browse every day, they’re now likely to see a mix of ads, lists, updates from brands and pages they’ve liked, and updates from close friends.
Unless Facebook users take specific steps to create a list of friends and family, they won’t find the clean, chronological stream of updates from friends and family *to* friends and family that they enjoyed back in 2007.
Today, even if we enjoy and benefit from interaction on the platforms, we’re very much living in Facebook’s world, on its terms.
If a director of advertising products for Facebook wants there to be better journalism online, in general, here’s a suggestion: as Facebook builds more mobile products like Paper and develops its online product more, it could also consider partnerships with news organizations on content and revenue. That might make some publishers uncomfortable or cause them to balk, but others would experiment. (It sounds like Liz Heron might already be exploring some of those possibilities.)
My colleague at the Tow Center, Andy Carvin, commenting on my initial Facebook post, suggested that Hudack’s career and perspective shouldn’t be viewed only through the prism of Facebook:
Andy Carvin: Mike isn’t director of product at fb. He actually works on ad products for fb. And I know where his frustration is coming from – he founded blip.tv, which became just another content site after he sold it, but prior to that was one of the Net’s first bastions of citizen journalism. He’s also been posting for months about the sorry state of online reporting about Ukraine and other international crises. So I totally get where he’s coming from. Even if fb is driving a lot of content providers to lowest common denominator content, it seems unfair to put this on his shoulders. And ultimately, it’s still the content providers who choose to publish stuff they think will get the most eyeballs, whether via fb or any other vector.
That’s a fair point, and I’m glad he added that context. There’s research from Pew’s Project for Excellence in Journalism for those who want to dig more.
That said, if Facebook and its leaders wanted to do more to support investigative journalism that isn’t driven by advertising considerations and shareability on social media, the company and/or newly wealthy senior staff might consider investing a portion of the billions in revenue that Facebook is making annually in improving the supply of it.
Specifically, they might support whatever comes after the newspapers that have traditionally housed the investigative journalists who create it. For instance, they could donate revenue to the foundations that have already been investing in news startups, platforms and education (The Knight Foundation News Challenge comes to mind, but there are others, from Sloan to Ford to Gates to Bloomberg to CIMA, which has published a global strategy to support investigative journalism) or establish Facebook scholarships and build out a charitable arm focused on the media, akin to Google.org. The total doesn’t have to be much, relative to the annual revenues, but even tens of millions of dollars annually would make a difference to a lot of outlets and startups.
A sword-wielding elf, spotted in Portland, Oregon, by a local smartphone-wielding human, told police that he was “battling Morgoth,” who apparently had made his way back through the Door of Night and returned to Middle Earth in the form of a red BMW.
Morgoth is the evil higher being whose fall from grace as Melkor in J. R. R. Tolkien’s mythical universe parallels that of Satan in John Milton’s “Paradise Lost.” Sauron, whom the general public knows from “The Lord of the Rings” movie epics, was one of Morgoth’s chief lieutenants.
The fact that the young man in Oregon was wearing chain mail is a sign that he might just know what he was talking about: high elves in Tolkien’s universe wore mail, unlike the lightly armored wood elves in the Dungeons and Dragons universe and subsequent worlds.
In this case, however, it appears that he was a different sort of “high elf” — the man admitted to officers that he’d taken LSD before his epic battle with the Beamer — and that he was wielding a machete, not an ancient elven blade forged in Gondolin.
According to KPTV, after treatment and release from a local hospital, the young human has been charged with criminal mischief, disorderly conduct and menacing as a result of the elfscapade.
When a fake femme fatale can dupe the IT guys at a government agency, you could be spear-phished, too.
If this all sounds familiar, you might be thinking of “Robin Sage,” when another fictitious femme fatale fooled security analysts, defense contractors and members of the military and intelligence agencies around the DC area.
Everything is new again.
[Image Credit: Wikipedia]
On further reflection, Facebook’s announcement regarding upgraded search could be the biggest tech news today.
Why? Well, Facebook graph search for posts and updates will make it MUCH easier to discover fresh content on the network relevant to a given person, place or thing, both for journalists and regular users.
Right now, search just turns up profiles and pages, not posts.
Combined with a “business graph,” locations and secure payment systems, such a search engine could become useful to a billion Facebook users quickly.
Over time, searches will generate a huge amount of interest data and potentially a new source of revenue, if Facebook adapts Google’s model of selling ads next to results.
Search on Twitter, Tumblr, Google+ and the mobile social networks to come could well evolve similarly, if not at the same massive scale.
Agree? Disagree? Thoughts? Have links to better and/or relevant analysis? Please share in the comments.
Update: Commenting on Google+, open standards advocate Chris Messina agreed that this is notable news, although how big “depends on coverage for normal searches (which would determine search quality perception) and the relative impact of the corpus being mostly ACL’d content.”
Still, wrote Messina, “it’s a big deal, especially if Facebook can annotate that data with intent/verb-based apps. For example, query: ‘restaurants in New York City that my friends like and I haven’t been to.’ I’d expect to see apps I use in the results, like OpenTable or Foursquare.”
He also raised a wrinkle I hadn’t considered: “That’s another aspect of this that becomes big for developers (at some point) — search as a personalized app platform.”
Tomorrow, President Barack Obama will be answering questions about housing during a live event with Zillow. Today, President Obama went directly to Instagram to ask the American people for questions about housing.
In some ways, this is old hat. The source for the questions, after all, is the same as it has been many times over the past five years: social media. As I commented on Tumblr, five years into this administration, it would be easy to let these sorts of new media milestones at the White House go unremarked. That would be a mistake.
The novelty in the event tomorrow lies in two factors:
1) The White House is encouraging people to ask the president questions using the #AskObamaHousing hashtag on Twitter, Zillow’s Facebook page or with their own “instavideo” on Instagram.
As of Tuesday at 5:50 PM ET, there were only around a dozen videos tagged with #AskObamaHousing on Instagram, so if you have a good one, the odds are (relatively) decent that it will be posed. (Twitter, by contrast, is much livelier.)
Such informal, atomized mobile videos are now a growing part of the landscape for government and technology, particularly in an age when the people formerly known as the audience have more options to tune in or tune out of broadcast programming. If the White House is looking to engage younger Americans in a conversation about housing, Instagram is an obvious place to turn.
Today, politicians and government officials need to go where the People are. Delivering effective answers to their questions regarding affordable housing in a tough economy will be harder, however, than filming a 15-second short.
Under increased scrutiny, Twitter will be extending the ability to report tweets to all of its hundreds of millions of active users around the world.
A statement from Twitter, emailed to the BBC and GigaOm, urged users to report abusive behavior and violations of the relevant policy and Twitter Rules using an online form and shared plans to “bring the functionality to other platforms, including Android and the web.” Twitter hasn’t shared timelines for that extension yet, but aggrieved users in Britain and beyond should gain the ability to flag tweets with a couple of taps eventually.
Twitter users have been able to report violations and abuse for years, with decisions made by the service’s Safety team as tickets or law enforcement requests come in. Twitter’s Safety team, headed by Del Harvey (@delbius), has been quietly, professionally handling the ugly side of the service for many years.
Adding reporting to individual tweets, however, is a relatively new change that was not announced on the Twitter blog or through the @Safety or @Support accounts.
Here are the relevant details from Twitter’s FAQ:
You can report Tweets that are in violation of the Twitter Rules or our Terms of Service. This includes spam, harassment, impersonation, copyright, or trademark violations. You can report any Tweet on Twitter, including Tweets in your home timeline, the Connect or Discover tabs, or in Twitter Search.
To report a Tweet:
- Navigate to the Tweet you’d like to report.
- Tap the ••• icon to bring up the off-screen menu.
- Select Report Tweet and then one of the options below.
- Select Submit (or Next if reporting abuse; see below for details) or Cancel to complete the report or block the user.
Spam: this is the best option for reporting users who are using spam tactics. Please reference the Twitter Rules for information about some common spam techniques, which include mass creation of accounts for abusive purposes, following a large number of users in a short time, and sending large numbers of unsolicited @replies.
Compromised: if you think the user’s account has been compromised, and they are no longer in control of their account, select this option, and we will follow up with them to reset their password and/or take other appropriate actions.
Abusive: for other types of violations, including harassment, copyright or trademark violations, and impersonation, select this option. When you select “Next”, you’ll be taken to a form where you can complete and submit your report to Twitter.
Block account: instead of reporting a user, you can select this option to block the user. If you block a user, they will not be allowed to follow you or add you to lists, and you won’t see any interactions with the user in your Connect tab.
Twitter has successfully scaled the ability to flag media to all of its users. They’ve kept the Fail Whale from surfacing by vastly increasing the capacity of the service to handle billions of tweets and surges in use during major events. They’ve already rolled out tweet reporting to Twitter for iPhone users. Now, they’ll simplify reporting of abusive tweets for everyone.
There may be hidden tradeoffs in adding this function, as Staci Kramer pointed out on Twitter: previously available options, like “tweet link,” “mail link” and “read later” aren’t in the new version of Twitter’s iOS app.
— Staci D Kramer (@sdkstl) July 28, 2013
What may prove more difficult than adding this function to other official apps and the Web, however, will be adding the human capacity to adjudicate decisions to suspend or restore accounts.
Twitter will be doing it under increasing scrutiny and a fresh wave of critics who are taking the company to task for being slow to respond to threats and abuse. More than 18,000 people have signed a petition at Change.org demanding that Twitter provide an abuse reporting button. The petition was filed after a stream of rape threats was directed at Caroline Criado-Perez on Twitter for 48 hours.
— Xeni Jardin (@xeni) July 27, 2013
Criado-Perez, a freelance journalist and self-described feminist campaigner, was in the public eye because of her successful efforts to keep pictures of women on paper money. She began receiving abusive tweets on the day that the Bank of England announced that author Jane Austen would appear on its newly designed £10 note.
The signatories on the petition were asking for a function that already exists for the millions of Twitter users that access the service on an iPhone, as the head of the social networking service’s United Kingdom office tweeted earlier today, responding to heated criticism in the British press.
To mollify critics and offer users a better experience, Twitter staff will need to proactively detect waves of abuse, aided by algorithms and adjudication systems, and make judgments about whether tweets break its stated policies or represent threats that must be reported to law enforcement.
“I don’t know what proportion of posts are abusive, nor do I know the volume of complaints handled by Twitter staff and their response time, which are obvious factors in how and when abuse reports are handled,” commented veteran journalist Saleem Khan. “If there’s a problem with complaint-handling, Twitter needs to examine its processes and staffing. That said, if abuse and/or non-responsiveness by staff are perceived to be a problem, then it is a problem.”
To state the obvious, this will be an ongoing headache for Twitter.
— Andy Carvin (@acarvin) July 27, 2013
Creating systems that offer fair, efficient moderation and adjudication of reports is a conundrum that code alone may not be able to solve. That challenge is extended by the presence of organized campaigns of humans and bots that game governance systems by flagging users en masse as spammers, leading to suspensions.
— Zeynep Tufekci (@zeynep) July 27, 2013
That may well mean that Twitter, like other social networks with millions of users, will need to expand its safety team and train the rest of its public-facing employees to act as ad hoc ombudsmen and women, as aggrieved users inevitably turn their ire upon staff using the network. They’re well positioned to do so, perhaps better than any other social network, but the service is inevitably going to face tough decisions as it operates in countries that do not have legal protections for freedom of expression or the press.
As Rebecca MacKinnon, Ethan Zuckerman and others have highlighted, what we think of as the new public square online is owned and operated by private companies that are setting the terms and conditions for expression and behavior on them. Giving users the capacity to report abuse, fraud or copyright infringement is a natural feature for any major website or service, but it comes with new headaches. If Twitter is to go public, however, it will need to develop more mature processes for being a platform for the public.
“The question remains,” commented Khan: “What rights and powers do we delegate to private, for-profit, unregulated platforms that increasingly mediate the majority of our discourse, and where is the line that we draw in that deal?”
Editor’s Note: I sent Twitter a series of questions regarding the new reporting function on Sunday morning. On Sunday night, Twitter declined to comment further than the statement they have released. On Monday afternoon, Twitter CEO Dick Costolo responded to tweeted queries. Following are the questions I posed over email. If you have answers, feel free to comment or contact me.
When was this added? Was there an official blog post or tweets from staff, @safety and @support about it?
@digiphile to be clear, this is already live in iOS and has been in process on android, etc. It's something i've spoken about regularly
— dick costolo (@dickc) July 29, 2013
What’s the timeline for it rolling out to all users? Will Twitter for Windows and BlackBerry get it?
Will it be added to the API, so that TweetBot and TweetDeck users, along with other clients, can use it after updates?
Will Twitter increase staffing at Safety and Support to handle an increase in reports? To what levels?
Will there be designated ombudsmen or women?
Will there be any transparency into the number of tickets received regarding abuse or someone’s status in the queue?
Will Twitter release aggregate data of abuse (or spam) flagging? How will Twitter deal with false positives or organized/automated campaigns to flag users or tweets?
Will there be any consequences for users that repeatedly abuse the ability to flag people or tweets for abuse?
@digiphile Ideally, yes. the challenge is that abuse of the "report abuse" function quickly evolves to be distributed across a group
— dick costolo (@dickc) July 29, 2013
On August 3, Twitter responded with an update to its rules to help address abusive behavior, including extra staff to handle abuse reports.
“It comes down to this: people deserve to feel safe on Twitter,” said Twitter’s UK lead Tony Wang and Del Harvey, senior director for trust and safety, in a blog post. “We want people to feel safe on Twitter, and we want the Twitter Rules to send a clear message to anyone who thought that such behaviour was, or could ever be, acceptable.”
The updated rules apply globally. “As described in the blog post, this was a clarification of existing rules — we discussed harassment in our help center in connection with abuse, but this makes it explicit in the rules as well,” said Twitter communication lead Jim Prosser, reached by email.
Wang also tweeted an apology to the women who have been targeted by abuse on Twitter.
“I personally apologize to the women who have experienced abuse on Twitter and for what they have gone through,” he said. “The abuse they’ve received is simply not acceptable. It’s not acceptable in the real world, and it’s not acceptable on Twitter.”
So yes, there are limits to free speech on Twitter.
What are they? Well, that’s the sticky wicket. The updated rules now include a section that Harvey said already existed. Twitter “actually always had that as a note on our abusive behavior policy page; we just added it directly to the rules,” she tweeted.
Targeted Abuse: You may not engage in targeted abuse or harassment. Some of the factors that we take into account when determining what conduct is considered to be targeted abuse or harassment are:
- if you are sending messages to a user from multiple accounts;
- if the sole purpose of your account is to send abusive messages to others;
- if the reported behavior is one-sided or includes threats.
This was “no real addition, just [a] clarification,” tweeted Harvey. Twitter “just added the explicit callout to our preexisting policy under the abuse & spam section.”
There is no functional difference in how Twitter’s Safety team will now assess abuse reports, she further clarified.
“We’ve been working on making the reporting process easier for users & clarifying our policies.”