Category Archives: research

When speech becomes text, what happens to writing?

downey

I successfully put down the baby for her late morning nap half an hour ago. After quietly running around in sock feet trying to get things done while she was out cold, I sat down to answer email and messages. As I entered this post into WordPress, she awoke again.

It’s not easy to respond quickly and at volume using one hand or thumb, though I’ve gotten much better at both over the past five months with a baby daughter.

Over that time, I’ve been struck by how good the voice recognition in iOS on my iPhone has become. I’ve been able to successfully dictate a rough draft of a long article into the email interface and respond to any number of inbound inquiries that way.

That said, neither the soft keyboard nor voice-to-text on the device is yet a substitute for the keyboard on my 15″ MacBook Pro when I want to write at length.

It’s mostly a matter of numbers: I can still type away at more than eighty words per minute on the full-size keyboard, far faster than I can produce accurate text through any method on my smartphone.

Capturing and sharing anything other than text on the powerful device, however, has become trivially easy, from images to video to audio recordings.

The process of “writing” has long since escaped the boundaries of tabulas, slate and papyrus, moving from pens and paper to explode onto typewriters, personal computers and tablets.

Today, I’m thinking about how the bards of our age will be able to reclaim the oldest form of storytelling — the spoken word — and apply it in a new context.

As we enter the next decade of rapidly improving gestural and tactile interfaces for connected mobile devices, I wonder how long it will be until the generations that preceded me can leave decades of experience with keyboards behind and simply speak naturally to connected devices to share what they’re thinking or seeing with family, friends and coworkers.

Economist Paul Krugman seemed to be thinking about something similar this morning, in a blog post on “techno-optimism”, when he commented on the differences between economic and technological stagnation:

…I know it doesn’t show in the productivity numbers yet, but anyone who tracks technology has a strong sense that something big has been happening the past few years, that seemingly intractable problems — like speech recognition, adequate translation, self-driving cars, etc. — are suddenly becoming tractable. Basically, smart machines are getting much better at interacting with the natural environment in all its complexity. And that suggests that ~~Skynet will soon kill us all~~ a real transformative leap is somewhere over the horizon, maybe not this decade, but this generation.

Still, what do I know? But Brynjolfsson and McAfee have a new book — not yet out, but I have a manuscript — making this point with many examples and a lot of analysis.

There remain big questions about how the benefits of this technological surge, if it’s coming, will be distributed. But I think this kind of thing has to be taken into account when we try to imagine the future; I’m a great Gordon admirer, but his techniques necessarily involve extrapolating from the past, and aren’t well suited to picking up what could be a major inflection point.

That future feels much closer this morning.

[Image Credit: Navneet Alang, "Sci-Fi Fantasies, Real-Life Disappointments"]

6 Comments

Filed under blogging, journalism, research, scifi, technology

In defense of Twitter’s role as a social media watchdog

Mike Rosenwald is concerned that overzealous critics will make Twitter boring.


Rosenwald, who has distinguished himself in articles and excellent enterprise reporting at the Washington Post, appears to have strayed into a well-trodden cul de sac of social media criticism.

Writing in the Post, he quotes from a series of sources and highlights a couple of Twitter users to arrive at a grand thesis: online mobs taking tweets out of context could chill speech. Rosenwald’s point was amplified by Politico chief economic correspondent Ben White, whose tweet is embedded below:

When I went to grab the embed code for the tweet above, however, I found something curious: I couldn’t generate it. Why? After I strongly but politely challenged White’s point twice on Twitter, he’d blocked me.

Here’s what I said: I am disappointed that the democratization of publishing and speech continues to be resented by the press. Celebrities, media and politicians will be criticized online by the public for inaccuracy and bias. It’s not 1950 anymore. And for that, a journalist blocked me.

Irony aside, I wish White hadn’t taken the nuclear option. I’m no absolutist: when George Packer slammed Twitter 3 years ago, I suggested that he take another look at what was happening there:

Twitter, like so many other things, is what you make of it. Some might go to a cocktail party and talk about fashion, who kissed whom, where the next hot bar is or any number of other superficial topics. Others might home in on politics, news, technology, media, art, philosophy or any of the other subjects that the New Yorker covers. If you search and listen, it’s not hard to find others sharing news and opinion that’s relevant to your own interests.

Using intelligent filters for information, it’s quite easy to subscribe and digest them at leisure. And it’s as easy as unfollowing someone to winnow out “babble” or a steady stream of mundanity. The impression that one is forced to listen to pabulum, as if obligated to sit through a dreary dinner party or interminable plane ride next to a boring boor, is far from the reality of the actual experience of Twitter or elsewhere.

Packer clearly read my post but didn’t link or reply to it.

Given his public persona, I suspect Rosenwald will be much more open to criticism than Packer or White have proven to be, although I see he hasn’t waded into the vitriolic comments on his story at the Washington Post, which slam Twitter or the article — or both. Here’s what I’ve seen other journalists and Twitter users tweet about the piece:

For my part, I tend to lean towards more speech, not less. Twitter has given millions of people a voice around the world, including the capacity to scrutinize the tweets of members of the media for inaccuracy, bias or ignorance.

That’s not to say that a networked public can’t turn into an online mob and engage in vigilantism, but the causality that Politico chief White House correspondent Mike Allen trumpeted regarding Twitter use in yesterday’s Playbook was painful to read on Saturday morning.

Twitter makes people online vigilantes? Come on. Facebook, Twitter, Tumblr, Google+ and other social media platforms have taken nearly all of the friction out of commenting on public affairs, but it’s up to people to decide what to do with them.

As we’ve seen during natural disasters and revolutions across the Middle East and North Africa, including protests in Turkey this weekend, an increasingly networked public is now acting as reporters and sensors wherever and whenever they are connected, creating an ad hoc system of accountability for governments and filling the gaps where mainstream media outlets are censored or fear to tread.

That emergence still strikes me as positive, on balance, and while I acknowledge the point that White and the sources that Rosenwald quotes make about the potential for self-censorship, I vastly prefer the communications systems of today to the one-to-many broadcasts from last century. If you feel differently, comments — and Twitter — are open.

7 Comments

Filed under article, blogging, journalism, microsharing, research, social media, technology, Twitter

It’s time for a national conversation on gun violence in the United States

“Our hearts are broken today,” said President Obama, wiping tears from his eyes this afternoon.

I heard his comments on the radio, driving back to DC. I teared up, too. I’ve been mostly reading and listening today, not writing or reporting. I’m thankful I was not responsible for covering breaking news at a media outlet or on the ground in Connecticut, trying to sift fact from fiction or interview bereaved parents or photograph traumatized children.

I can write now with certainty that 27 people were killed by a gunman in Newtown, Connecticut, including 18 children in an elementary school. It’s one of the worst shootings in our nation’s history.

My Facebook feed is full of people offering prayers, voicing anger and frustration, and, happily, sharing pictures of their own children. One of my friends announced the birth of his first child. Amidst grieving, new life and joy.

As the reality of this tragedy settles in, this moment may still be too raw to decide exactly what the way forward should be. In the wake of dozens of mass shootings in the past several years, there’s more interest in doing something to prevent them.

What, exactly, we should do to prevent more mass killings should be up for debate, but losing 18 children like this is unbearable. What science says about gun control and killings is not clear, though the literature should inform the debate.

If today is not the time to have that national conversation, many people would like to know when. A new White House epetition asks the President to set a time and place to debate gun policy. Another asks the White House to immediately address gun control through legislation. As difficult as it may be to navigate the politics of gun control and the 2nd Amendment, that time may have come. That conversation should be balanced by one about mass shootings and mental illness, which is the other significant factor in these events.

In his remarks this afternoon, laden with the emotion that so many of his fellow citizens were feeling, President Obama said that “…we’re going to have to come together and take meaningful action to prevent more tragedies like this, regardless of the politics.”

As a country, we need to be able to have a national conversation about what to do next that does not vilify those on the other side of the debate.

I hope our Congress, our Supreme Court, our President and my fellow citizens are ready to work towards preventing more days like today in the year ahead.

The White House epetition to introduce legislation on gun control gained more than 197,000 signatures after its introduction. It was one of the fastest-growing White House epetitions to date. By the end of the weekend, it became the most popular epetition in the nation’s history. (Another epetition subsequently passed it in popularity.)

RESPONSE: “We hear you”

On the evening of December 20, President Obama responded to 32 different epetitions related to gun violence in a video posted on YouTube. It was the first direct response to a White House epetition by a President of the United States.

Earlier in the day, Vice President Joe Biden held the first meeting of a task force formed by the White House to look for ways to reduce gun violence in schools. On December 21st, the National Rifle Association called for armed guards in schools to deter violence.

7 Comments

Filed under journalism, personal, research, video

Notes on Dr. Atul Gawande’s talk at the Health Data Palooza [LIVEBLOG]



[Editor's note: these are live, rough notes from my iPad, and should not in any way represent a 100% accurate transcription. I missed far too much. Caveat lector. My comments are in brackets. You can find Dr. @Atul_Gawande's bio & writing at the New Yorker. Update: I've posted video of Dr. Gawande below.]

GAWANDE: A fascinating part [of the Health Data Palooza]: the idea that we’re putting together people from government, healthcare systems, people from outside who have knowledge about data and tools. This is quite different from the normal models: regulatory or laissez faire.

We have a healthcare system that’s fundamentally broken. The most common complaint from patients seems to be that there is no one you can count on. If you’re paying, you have no sense that there is anyone who can help keep costs under control.

Where to start to fix that? We have recognized that there is enormous variation in cost, depending on where you go. There is enormous variation in quality, depending on where you go. The two don’t have anything to do with one another. So there’s hope. [Good news.]

Some of the best places to get care are the least expensive. [In healthcare,] positive deviants are the ones that look the most like systems.

Examples from war and lessons in lethality

GAWANDE: Start by looking at performance of doctors in war and their teams. In the war in Iraq/Afghanistan, lethality below 10%. That doesn’t reflect intensity of conflict but improvement in care.

How did we do it? It was not the discovery of new tech that transformed survival but the ability to use existing tech far better, in a system that works.

1) Kevlar. Got soldiers to wear it, operationally.
2) Speed to operating table. Improved forward mobile operating theaters.

We achieved the best survival rates in history. How? They changed the way they did surgery. Looked at data, realized they needed to stop bleeding and stop contamination under resource-restricted conditions. No X-rays; needed to learn 19th-century techniques for finding fractures by feel.

They adopted “damage control surgery”: do what you can during 2 hours, ship, and add a note: here’s what I did, here’s what’s needed. That helped stimulate development of a simple EHR. Average time from being wounded on the battlefield to getting care in the field, in Baghdad, in Germany, and in the US is less than 3 days. Less than 36 hours for some.

Soldiers can find better care for some conditions in Iraq than in a US city, with fewer resources. How? Alignment of finances, incentives. They weren’t “fee for service soldiers.” Everyone is on the same team: focused on saving life, maintaining health.

Within 48 hours of the wounding or death of a soldier, data is posted on the DoD website. [DCAS: Defense Casualty Analysis System] The public accesses that data, but doctors & nurses access it the most.

One more example: soldiers not wearing protective eyewear. They called them ‘Granny goggles.’ The DoD contracted with a designer, made cool ones. Now wearing. Needed data and research to understand. [Stories matter.]

Accountable Care Organizations (ACOs) have done financial alignment. They’re committing to doing more than one project at a time. They’re committed to better health within an environment.

To get there, folks in war needed data useful to frontline decision makers. [The same is true at home.]

3 Comments

Filed under government 2.0, research, technology

Visualizing conversations on Twitter about #SOPA

Kickstarter data dude Fred Berenson visualized conversations around SOPA on Twitter: View visualization

@digiphile snapshot

His data crunching strongly implies that I’ve been a “supernode” on this story. I’m not surprised, given how closely I’ve been following how the Web is changing Washington — or vice versa.
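
How does a “supernode” fall out of data like this? Here’s a minimal sketch — not Berenson’s actual method — assuming a list of who-retweeted-whom pairs: build a directed graph and rank accounts by in-degree. The edge list below is invented for illustration.

```python
# A minimal sketch (not Berenson's actual method): treat each retweet or
# mention as a directed edge and rank accounts by in-degree. Accounts
# with outsized in-degree are the "supernodes" of the conversation.
import networkx as nx

# Invented (retweeter, original_author) pairs standing in for #SOPA data.
edges = [
    ("alice", "digiphile"), ("bob", "digiphile"),
    ("carol", "digiphile"), ("bob", "alice"),
]

g = nx.DiGraph()
g.add_edges_from(edges)

# In-degree approximates attention received within the conversation.
supernodes = sorted(g.in_degree(), key=lambda pair: pair[1], reverse=True)
print(supernodes[:3])  # e.g. [('digiphile', 3), ('alice', 1), ...]
```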

Leave a comment

Filed under art, article, journalism, microsharing, research, social bookmarking, social media, technology, Twitter

Yahoo Research: 50% of tweets consumed are generated by 20,000 elite users

New research from Yahoo Research on Twitter found that 50% of tweets consumed are generated by 20K elite users. Based upon the more than 37,000 tweets I’ve posted over four years of tweeting, it’s a virtual lock that I’m one of them. Of particular interest was the “significant homophily” that the researchers found within categories. I’ve tried hard to escape that effect after reading Ethan Zuckerman’s post on homophily, serendipity and xenophilia nearly three years ago.

FULL PAPER: Twitter flow

Abstract:

We study several longstanding questions in media communications research, in the context of the microblogging service Twitter, regarding the production, flow, and consumption of information. To do so, we exploit a recently introduced feature of Twitter—known as Twitter lists—to distinguish between elite users, by which we mean specifically celebrities, bloggers, and representatives of media outlets and other formal organizations, and ordinary users. Based on this classification, we find a striking concentration of attention on Twitter—roughly 50% of tweets consumed are generated by just 20K elite users—where the media produces the most information, but celebrities are the most followed. We also find significant homophily within categories: celebrities listen to celebrities, while bloggers listen to bloggers etc; however, bloggers in general rebroadcast more information than the other categories. Next we re-examine the classical “two-step flow” theory of communications, finding considerable support for it on Twitter, but also some interesting differences. Third, we find that URLs broadcast by different categories of users or containing different types of content exhibit systematically different lifespans. And finally, we examine the attention paid by the different user categories to different news topics.
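
The headline measurement is easy to reproduce in miniature. A hedged sketch with invented consumption counts: sort authors by how often their tweets are consumed, then count how many it takes to cover half of all consumption.

```python
# Hypothetical per-author counts of tweet consumption (all numbers
# invented for illustration, not the paper's data).
consumed_by_author = {
    "celebrity_a": 900, "media_b": 500, "org_c": 300,
    "blogger_d": 200, "ordinary_e": 100,
}

total = sum(consumed_by_author.values())
running, elite = 0, []
# Walk authors from most- to least-consumed until we cover 50% of reads.
for author, count in sorted(consumed_by_author.items(),
                            key=lambda kv: kv[1], reverse=True):
    running += count
    elite.append(author)
    if running >= total / 2:
        break

print(f"{len(elite)} of {len(consumed_by_author)} authors generate "
      f"half of consumed tweets: {elite}")
```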

Leave a comment

Filed under blogging, research, social bookmarking, social media, technology

Why don’t more tweets get @replies or retweets?

As Jennifer Van Grove wrote at Mashable yesterday, “research shows that 71% of all tweets produce no reaction — in replies or retweets — which suggests an overwhelming majority of our tweets fall on deaf ears.”

Sysomos, maker of social media analysis tools, looked at 1.2 billion tweets over a two-month period to analyze what happens after we publish our tweets to Twitter. Its research shows that 71% of all tweets produce no reaction — in the form of replies or retweets — which suggests that an overwhelming majority of our tweets fall on deaf ears.

Sysomos findings also highlight that retweets are especially hard to come by — only 6% of all tweets produce a retweet (the other 23% solicit replies).

I’ll admit, this doesn’t shock me, based upon my experience over the years.

Many of my tweets are retweeted, but then, I have above-average reach at @digiphile and engaged followers.

I know I’m an outlier in many respects there, and that the community that I follow and interact with likely is as well.

This research backs that anecdotal observation up: people are consuming information rather than actively interacting with it. But my own experience doesn’t jibe with that greater truth, and that’s why I chimed in, even though I know it may expose me to more of my friend Jack Loftus‘ withering snark. (If you don’t read him at Gizmodo, you’re missing out.)

Why don’t people @reply more?

So what’s going on? I have a couple of theories. The first is that @replies are much like comments. Most people don’t make either. Even though social networking has shifted many, many more people into a content production role through making status updates to Twitter, Facebook, Foursquare (and now perhaps LinkedIn), the 90-9-1 rule or 1% rule still appears to hold across most of the social Web. Participation inequality is not a new phenomenon.

The scope of that online history suggests that the behaviors of yesteryear aren’t completely subsumed by the explosion of a more social Web. Twitter and Facebook do appear to have diminished long-form blogging activity and comments on posts, as netizens have moved their meta commentary to external social networks. And even there, recent Forrester research suggests that social networking users are creating less content.

In other words, it’s not that Facebook or Twitter sucks, it’s that human behavior is at issue.

It’s not that Twitter or its employees or developers per se are at fault, though you can see where, for example, Quora or Vark are expressly designed to create question and answer threads.

It’s that, for better or worse, the culture of the people using Twitter is expressed in how they use it, including the choice to reply, RT or otherwise engage.

If the service is going to grow into an “information utility” and become a meaningful venue with respect to citizen engagement with government, the evolution of #NewTwitter may need to add better mechanisms to encourage that interaction.

So is Twitter useful?

As Tom Webster pointed out at his blog [Hat tip to @Ed]:

As a researcher, if I were writing this headline, I would have written it thusly: “Nearly 3 in 10 Tweets Provoke A Reaction.”

I follow about 3,000 people on Twitter. If we assume that this lot posts five tweets per week (a conservative figure), that’s 15,000 tweets I could see in a given week, were I to never peel my eyes away from Tweetdeck. The Sysomos data suggests that of those 15,000 tweets, 4,350 were replied to or at least retweeted. See, I think that’s actually a big number.

In other words, 29% of tweets do get a response. That’s better than direct mail or email marketing, as far as I know. I don’t expect a response from every tweet, though I’ve been guilty of that expectation in past years. That’s why I often ask the same question more than once now, or tweet stories again, or why I’ll syndicate a given post, video or picture into multiple networks.
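
For what it’s worth, Webster’s arithmetic checks out. A quick sketch of it, using his stated assumptions and the Sysomos figures quoted above:

```python
# Tom Webster's back-of-the-envelope math, using his stated assumptions
# and the Sysomos figures quoted above.
following = 3000          # accounts he follows
tweets_per_week = 5       # conservative tweets per account per week
reaction_rate = 0.29      # 6% retweeted + 23% replied = 29% get a reaction

weekly_tweets = following * tweets_per_week      # 15,000
with_reaction = weekly_tweets * reaction_rate    # 4,350
print(f"{weekly_tweets:,} tweets/week; ~{with_reaction:,.0f} draw a reaction")
```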

I continue to find Twitter a useful tool for my profession. While inbound Web traffic from Twitter is negligible when compared to Google, Facebook, StumbleUpon or even Fark, I’ve found it useful for sourcing, sentiment analysis, Q&A, a directory, a direct line to officials and executives, and of course for distributing my writing. Twitter may not be essential in the same sense that a cellphone, camera, notebook and an Internet connection are in my work, but I’ve found it to be a valuable complement to those tools. I’ve definitely sourced stories and gathered advice or recommendations through crowdsourcing questions there, with far less effort than more traditional means.

15 Comments

Filed under blogging, government 2.0, microsharing, research, social bookmarking, social media, technology, Twitter

Priceless: Futurama lampoons the eyePhone and “Twitcher”

As a huge fan of Matt Groening, a long-time Apple customer and a serious Twitter user, I found a recent episode of Futurama, “Attack of the Killer App,” to be crackling good satire. Excerpt below:

Yes, I know this all blew up back in July. I saw it tonight, and it made me laugh. The episode pokes fun at Apple and iPhone customers in all sorts of ways, along with viral video and Internet culture. As Engadget pointed out, Futurama critiqued modern gadget and social media obsession using 50s technology. The folks over at EdibleApple.com also highlighted that this is far from the first time Futurama has satirized Apple:

Futurama’s focus on Apple is, of course, nothing new. Series co-founders David X. Cohen and Matt Groening are both big Apple nerds. We previously chronicled Futurama’s subtle and comical use of Apple and Mac references over here.

The viral Twitworm that creates many zombies is one of the best pop references to botnets and IT security I’ve seen recently, too. And there was one more (seriously geeky) detail that Engadget, Edible Apple and Mashable missed:

“When did the Internet become about losing your privacy?” asks Fry.

“August 6, 1991,” replies Bender.

Why? That was the day Tim Berners-Lee posted “a short summary of the WorldWideWeb project” online. The real world has never been the same since.

WorldWideWeb – Executive Summary

The WWW project merges the techniques of information retrieval and hypertext to make an easy but powerful global information system.

The project started with the philosophy that much academic information should be freely available to anyone. It aims to allow information sharing within internationally dispersed teams, and the dissemination of information by support groups.

Reader view

The WWW world consists of documents, and links. Indexes are special documents which, rather than being read, may be searched. The result of such a search is another (“virtual”) document containing links to the documents found. A simple protocol (“HTTP”) is used to allow a browser program to request a keyword search by a remote information server.

The web contains documents in many formats. Those documents which are hypertext, (real or virtual) contain links to other documents, or places within documents. All documents, whether real, virtual or indexes, look similar to the reader and are contained within the same addressing scheme. To follow a link, a reader clicks with a mouse (or types in a number if he or she has no mouse). To search an index, a reader gives keywords (or other search criteria). These are the only operations necessary to access the entire world of data.
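
As an aside for the technically inclined, the “simple protocol” the summary mentions really was simple: an early HTTP exchange was one request line over a TCP socket. A sketch, with a placeholder host rather than a live 1991 server:

```python
# An HTTP/0.9-style request: open a TCP connection, send one GET line,
# read raw HTML back. The host below is a placeholder, not a real
# 1991-era server.
import socket

host, port = "example.com", 80
with socket.create_connection((host, port)) as sock:
    sock.sendall(b"GET /TheProject.html\r\n")
    print(sock.recv(4096).decode(errors="replace"))
```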

UPDATE: Ok, ok, sharp-eyed readers: The AVClub totally got that Bender reference.

2 Comments

Filed under research, scifi, social media, technology, Twitter, video

Why including women matters for the future of technology and society


The "Women of ENIAC." For their history, read "Programming the ENIAC."

Some issues trigger a deeper response than others within communities. In the technology world, the education, opportunities and inclusion of women holds unusual resonance.

In the U.S., as Nick Kristof wrote, “schoolgirls are leaving boys behind in the dust.” After graduation, the narrative evolves further. As Claire Cain Miller wrote in the New York Times on Friday, “women now outnumber men at elite colleges, law schools, medical schools and in the overall work force. Yet a stark imbalance of the sexes persists in the high-tech world, where change typically happens at breakneck speed.”

Why the disparity in the world of Silicon Valley startups, venture capital and high technology? Why are so few women in Silicon Valley?

At least some of the issue runs deep, far back into the educational system. As Miller writes:

That attitude is prevalent among young women. Girls begin to turn away from math and science in elementary school, because of discouragement from parents, underresourced teachers and their own lack of interest and exposure, according to a recent report by the Anita Borg Institute for Women and Technology and the Computer Science Teachers Association.

Just 1 percent of girls taking the SAT in 2009 said they wanted to major in computer or information sciences, compared with 5 percent of boys, according to the College Board.

Only 18 percent of college students graduating with computer science degrees in 2008 were women, down from 37 percent in 1985, according to the National Center for Women and Information Technology.

So what can be done? How could including women in FOO Camp or unconferences, or making a list of women in tech, matter?

As computer scientist Hilary Mason tweeted tonight, “We don’t need affirmative action. We need meaningful culture change and support.”

Based upon the research a colleague gathered tonight, some actions could make an important difference in three ways:

(1) It’s good for men. Inclusion of women and minorities reduces stereotypes, and promotes second-order reflection on latent stereotypes, by providing real, first-hand experience. (Mahzarin R. Banaji and Curtis D. Hardin, Automatic Stereotyping, 7(3) Psychol. Sci. 136-41 (May 1996).)

This leads to better, more accurate evaluation of people’s work, because when people unconsciously use stereotypes, they mis-evaluate work. For example, women’s presence in high-level orchestras basically doubled once auditions started to be done gender-blind, focusing only on the music. (Claudia Goldin and Cecilia Rouse, Orchestrating Impartiality: The Impact of “Blind” Auditions on Female Musicians, 90(4) American Econ. Rev. 715-41 (2000).)

(2) It’s good for women. The absence of women (or very low numbers of women) signals to women that they aren’t welcome or don’t belong, which can in turn cause them to leave the field or choose not to enter it in the first place. (William T. Bielby, Minimizing Workplace Gender and Racial Bias, 29(1) Contemporary Soc. 120-29 (2000))

Research also suggests that when women are invited to the table, they have more energy free to do good work, instead of using half their energy just breaking down the door. Reducing cognitive load on subjects who have to work to overcome stereotypes is not a minor factor.

(3) It’s good for business & technology. Whatever the vertical, the entire industry benefits when the best work is being created and presented. As Miller writes:

Analysts say it makes a difference when women are in the garages where tech start-ups are founded or the boardrooms where they are funded. Studies have found that teams with both women and men are more profitable and innovative. Mixed-gender teams have produced information technology patents that are cited 26 percent to 42 percent more often than the norm, according to the National Center for Women and Information Technology.

In a study analyzing the relationship between the composition of corporate boards and financial performance, Catalyst, a research organization on women and business, found a greater return on investment, equity and sales in I.T. companies that have directors who are women.

The number of senior women doing major research and running labs in traditionally male-dominated fields like physics also offers insight into how efforts to include women can lead to merit-based selection across the broadest set of the best candidates. For instance, consider Lisa Randall, one of the most cited theoretical physicists of the last half-decade. Or Marissa Mayer, a senior Google exec who, as Miller wrote, many women she interviewed cited as “someone who gives them hope.”

Where to learn more

I don’t believe that most people are consciously biased, nor that they intend to be biased. Research into implicit bias suggests, however, that the most pervasive forms of bias are unconscious. Those biases can have tremendous effects on how we evaluate others, mostly to our own detriment – but also to our communities and industries.

Does the issue of women in tech matter to the bottom line? Miller’s reporting, quoted above, suggests that it does: mixed-gender teams produce more-cited patents, and Catalyst found greater returns at I.T. companies with women directors.

Fortunately, there are a growing number of conferences, groups and networks that celebrate and honor women in technology.

O’Reilly Community also features an excellent series of essays on women in tech. For the fascinating story of how women were involved in “hacking” the world’s first programmable computer (pictured at the top of this post), read ENIACprogrammers.org. And the recent Ada Lovelace Day listed dozens of inspirational women who are innovators, inventors and educators.

Finally, Nick Kristof has done the world a mitzvah by writing eloquently about women’s rights in his most recent book, “Half the Sky.” Learn more at HalfTheSkyMovement.org.

28 Comments

Filed under blogging, research, technology

Transparency Camp 2010: Government, Transparency, Open Data and Coffee

Some unconferences are codathons. Others focus on citizen engagement and Congress.

This weekend’s Transparency Camp in Washington, D.C. brought together technologists, journalists, developers and advocates for open data and open government for discussions, case studies, workshops and even, as Micah Sifry put it, some secular colloquy.

Transparency Camp came at a time of immense ferment in Washington and the country beyond. A historic healthcare reform bill had just been signed into law, including an overhaul of student loans. Midterm elections in Congress loom at the end of the year. And the nation’s economy continues towards an uncertain future, perhaps a jobless recovery, after the Great Recession.

The Sunlight Foundation’s engagement director, Jake Brewer, kicked off the morning by asking how much had changed around government transparency since the last Transparency Camp. Make sure to read David “Oso” Sasaki’s notes from Transparency Camp for a superb narrative of his Saturday. (Sasaki is the Director of Rising Voices, a global citizen media outreach initiative of Global Voices Online.)

There has been no shortage of transparency wins over that time, as the video embedded below attests. Projects like Earmarkwatch.org, OpenCongress.org or Punch Clock Map all show the potential for the Web to enable government transparency.

In 2010, there are more reasons to believe government transparency and open government will advance more rapidly. As the co-founder of the Sunlight Foundation, Ellen Miller, pointed out in her introduction, there are significant legislative efforts underway around transparency. The Public Online Information Act (POIA), HR 4858, introduced by Rep. Steve Israel, embraces a new formula for transparency: “public equals online.” And an omnibus ethics bill, HR 4983, would “amend the Ethics in Government Act of 1978, the Rules of the House of Representatives, the Lobbying Disclosure Act of 1995, and the Federal Funding Accountability and Transparency Act of 2006 to improve access to information in the legislative and executive branches.”

In looking at the role of this unconference in that context, the Director of Sunlight Labs, Clay Johnson, posed three big challenges for Transparency Camp:

  1. An Open Data Playbook. Clay described that as “an instruction manual for people inside government to teach them how to open their data.”
  2. A list of all jurisdictions and elected officials around the country.
  3. A data exchange format for data catalogs, following the model Google set with GTFS. (A sketch of what such a format might look like follows this list.)
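
To make that third challenge concrete, here is a purely hypothetical sketch of what one record in such an exchange format might look like. Every field name below is invented by analogy with GTFS; none of it comes from a real specification.

```python
# Hypothetical catalog record for a shared government data exchange
# format, by analogy with GTFS. All field names and values are invented.
catalog_entry = {
    "id": "dc-crime-incidents",
    "title": "Crime Incidents",
    "jurisdiction": "Washington, DC",
    "publisher": "Metropolitan Police Department",
    "distribution": [
        {"format": "text/csv", "url": "https://example.gov/crime.csv"},
    ],
    "update_frequency": "daily",
    "license": "public-domain",
}
```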

The success or failure of Transparency Camp can’t be measured by those metrics alone, however, although whether Johnson’s challenges are met by the community is absolutely part of the story of this weekend.


Identity and Government

Another excellent session at Transparency Camp came from Heather West and Kaliya Hamlin, aka @IdentityWoman. I had considerable context for their talk, given my coverage of OpenID and the Open Identity Exchange (OIX) and trust frameworks, specifically regarding the OIX trust framework as used for citizen-to-government authentication.

A key element of OIX, as Hamlin pointed out, was the standardization of online privacy principles promulgated through IDManagement.gov. Another important part of the identity picture is Microsoft’s release of part of the intellectual property for its U-Prove ID tokens under the Open Specification Promise, as detailed at credentica.com.

The Open Government Directive, Datasets and Data.gov

When the “three words” from the unconference were synthesized into a “Wordle” for Transparency Camp, four words emerged as the most powerful themes:

Open, government, transparency and, most of all, data.

The Open Government Directive (OGD) was a significant moment in American history, in terms of putting the data of government operations into a format and venue where developers could access and parse it: data.gov.

Now that the resource is up, however, there are outstanding concerns about data quality, frequency and, most pertinently, utility. Andrew McLaughlin, the “Deputy Chief Nerd @ the White House” (aka deputy US chief technology officer), suggested that to get reluctant agencies to embrace data sharing, the focus should be on “high-reward,” not “high-value,” datasets.

When asked if new guidance was needed, since “high-value datasets” for Data.gov are written into the OGD, McLaughlin responded that “some agencies will use a citizen-utility metric for prioritizing scarce resources. Others will focus on datasets that are rapidly doable, to help overcome resistance and ease culture change. Both ways of defining ‘high-value’ make sense.” The Venn diagram above illustrates how that might look.

McLaughlin also acknowledged a feature request for data.gov and apps.gov from the Transparency Camp community: more and better metadata, like data quality qualifiers or FISMA compliance status.
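
To illustrate that request, and purely as a guess at the shape such metadata might take, qualifiers like those could hang off each record in a data catalog, extending the hypothetical entry sketched earlier in this post:

```python
# Invented metadata qualifiers of the kind campers asked for; neither
# the field names nor the values come from data.gov itself.
dataset_metadata = {
    "data_quality": "agency-certified",  # hypothetical vocabulary
    "fisma_compliance": "moderate",      # hypothetical vocabulary
    "last_updated": "2010-03-27",
}
```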

In Code We Trust: Open Government in New York

My favorite session of the day was a case study of open government featuring the New York Senate. With a nod to Lawrence Lessig, Noel Hidalgo, Sheldon Rampton and Mark Headd showed precisely how law could be turned into code. I livestreamed “In Code We Trust” on uStream. After poor transparency ratings, a broad swath of changes to the New York state senate websites was implemented over the past year. New York was the first state senate to adopt Creative Commons for its intellectual property.

[Photo Credit: Sheldon Rampton, by Noel Hidalgo]

The New York state senate is integrating open government with social media (see @NYSenate), live video, YouTube and code, at Github.com/NYSenateCIO. I saw Mark Headd, a developer, look up a bill using the New York Senate API with an application on his smartphone. That API is behind a law browser for New York state legislation. The In Code We Trust Transparency Camp session is archived at uStream.
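
For developers curious what such a lookup involves, here is a hedged sketch of a bill request. The endpoint and response fields below are hypothetical placeholders, not the documented New York Senate API; see Github.com/NYSenateCIO for the real interface.

```python
# Hypothetical bill lookup: the URL pattern and JSON fields below are
# placeholders, not the documented New York Senate API.
import json
import urllib.request

bill_number = "S1234"  # hypothetical bill number
url = f"https://open.example-nysenate.gov/api/bills/{bill_number}"

with urllib.request.urlopen(url) as response:
    bill = json.load(response)

print(bill.get("title"), "-", bill.get("status"))
```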

Health Information Technology

One of the basic principles of an unconference is the “law of two feet”: if you don’t like a session, you move. You own your own experience. Given that I livestreamed parts of Transparency Camp, I also “voted with my feed,” moving my window to the Internet along with my body. After a session on open government descended into a somewhat unproductive discussion about open policy, I moved over to the healthcare information technology (HIT) session, which I recorded in part. Given the billions of dollars that will be flowing into healthcare IT over the next few years as provisions of the Recovery Act are implemented, this was an important discussion.

Brian Behlendorf, a notable open source technologist, led the session. There’s now an Office of the National Coordinator for Health IT to direct action, available on the Web at HealthIT.gov or on Twitter at @ONC_HealthIT. As Andrew McLaughlin noted, Brian Ahier maintains a great blog on health IT, including details on how the healthcare reform bill affects HIT.

Local Government and the Digital Divide

Another excellent session featured discussions about how transparency is coming to people closer to home.

Literally.

OpenMuni.org provides some perspective on that effort. The Ideascale model of crowdsourced recommendations for better efficiency and governance has been applied to local government, at least in beta, at Localocracy. The first pilot has been put into action in Amherst, Massachusetts.

The local government session at Transparency Camp was also fortunate to have the D.C. CTO, Bryan Sivak, and staff from @octolabs present.

Sivak defined his role as integral to enabling better services online, like the city resource request center at 311.dc.gov, finding efficiencies for government through IT, and bringing more citizens the benefits of connectivity. He illuminated a yawning gap in Internet use, observing that “DC has a huge issue with the digital divide. In Wards 5, 7 and 8, 36% of the people are connected.”

One of the stories of the digital divide in D.C. is told at InternetForEveryone.com. The importance of offering technological resources to those without access at home was evidenced by recent research showing that nearly one third of the United States population uses public library computers for Internet access.

Bryan Sivak is now looking for feedback on how to use technology better in the District, elements of which are evidenced at track.dc.gov.

Odds, Ends, Resources and Takeaways

I was reminded of a great travel resource, FlyOnTime.us, and learned about a new one for Washington, ParkItDC.com.

I wish the former existed for Amtrak.

I learned about data and visualizations of local campaign spending at FollowTheMoney.org and government transparency at OpenSecrets.org.

Most of all, I was reminded of how many brilliant, passionate and engaged people are working to improve government transparency and efficiency through technology, collaboration and advocacy.

The Flickr pool features many of the faces.

I look forward to learning more from others about what happened on day two of Transparency Camp.

Update: The Sunlight Foundation posted a video of Transparency Camp attendees on April 1.

2 Comments

Filed under article, photography, research, technology