William Beutler on Wikipedia

Posts Tagged ‘Mark Graham’

What You Missed at Wikimania 2017

on August 18, 2017 at 4:39 pm

N.B. At the end of this post I’ve embedded a Spotify playlist for the delightful 2006 album “Trompe-l’oeil” by the Francophone Montreal indie rock band Malajube. It’s what I was listening to as I arrived at Montréal–Pierre Elliott Trudeau International Airport last week, and I think it would make a nice soundtrack for reading this post.

♦     ♦     ♦

Wikimania 2017, the thirteenth annual global meeting of Wikipedia editors and the larger Wikimedia movement, was held in Montreal last weekend. For the fifth time overall, and the first time in two years, I was there. I’ve covered previous Wikimanias, sometimes glancingly and sometimes day-by-day; this time I’ll do something a little different as well.

One nice thing about a conference for a project focused on the internet: many of the presentations can be found on the internet! Some but not all were recorded and streamed; some but not all have slides available to revisit. The second half of this post is a roundup of presentations I attended, or wished I attended, with media available so you can follow up at your own pace.

But first, a note on a major theme of the conference: “Wikimedia 2030”, invoked implicitly even when not named. A draft “strategic direction” document circulated by stapled printout from the conference start, and was later addressed directly in a presentation by Wikimedia Foundation executive director Katherine Maher and board chair Christophe Henner. It’s available to read here, and I recommend it as a straightforward, clearly described (if detail-deficient) summary of how Wikimedians understand their project, and where its most dedicated members want to take it.

Draft strategic direction at Wikimania 2017

As one would expect, the memo acknowledges the many types of contributors and contributions, brought together by a belief in the power of freely shared knowledge, and a commitment to helping organize it. It also focuses on developing infrastructure, building relationships, and strengthening networks. One thing it doesn’t talk much about is Wikipedia, which might be surprising to some. After all, Wikipedia is arguably more important to the movement than the iPhone is to Apple: Wikipedia receives 97.5% of all WMF site traffic, while the iPhone accounts for “only” 70% of Apple’s revenues.

I don’t wish to belabor the Apple analogy much, because there are too many divergences to be useful in a global analysis, but both were revolutionary within their markets, upset competitors, created a whole new participatory ecosystem in their wake, and each grew exponentially until they didn’t. Now the stewards of each are looking beyond the cash cow for new areas of growth. For Apple, it’s cloud-based Services revenue. For the WMF, it’s not quite as easily summarized. But the answer is also partly about building in the cloud, at least figuratively. Although both Wikipedia and the iPhone will remain the most publicly visible manifestations of each organization for the foreseeable future, the leadership of each is focused on what other services they enable, and how they can even make the core product more valuable.

I see two main themes in the memo, about how the Wikimedia movement can better develop that broad ecosystem beyond Wikimedia’s existing base, and how it can improve its underlying systems within movement technology and governance. The former is too big a subject to grapple with here, and I’ll share just a single thought about the latter.

One thing the document concerns itself with at least as much as with Wikipedia is “data structures”—and this nods to Wikidata, which has been the new hotness for a while, but whose centrality to the larger project is becoming clearer all the time. Take just one easily overlooked line, about how most Wikimedia content is “long-text, unstructured articles”. You know, those lo-fi Wikipedia entries that remain so enduringly popular. They lack structure now, but they might not always. Imagine a future where Wikidata provides information not just to infoboxes (although that is a tricky subject) but also to boring old Wikipedia itself. Forget “red links”: every plain text noun in the whole project may be connected to its “Q number”. Using AI and machine learning, entire concepts can be quickly linked in a way that once required many lifetimes.
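To make the Q-number idea concrete, here is a minimal sketch of how a plain-text noun can be tied to a Wikidata identifier. It uses the real `wbsearchentities` action of the Wikidata API, but only builds the request URL and parses an abbreviated sample response; the disambiguation a real entity linker would need is left out.

```python
import json
from urllib.parse import urlencode

# Wikidata's public API offers wbsearchentities, which maps a plain-text
# label to candidate items. This sketch builds the request URL and parses
# an abbreviated sample reply; picking among multiple candidates is the
# hard part a real linker would have to solve.

def search_url(label, language="en"):
    """Build a wbsearchentities request URL for a plain-text label."""
    params = {
        "action": "wbsearchentities",
        "search": label,
        "language": language,
        "format": "json",
    }
    return "https://www.wikidata.org/w/api.php?" + urlencode(params)

# Abbreviated from a real API reply: Montreal is item Q340.
sample_response = json.loads(
    '{"search": [{"id": "Q340", "label": "Montreal",'
    ' "description": "city in Quebec, Canada"}]}'
)

def best_q_number(response):
    # Take the top-ranked match; candidates come back ranked.
    matches = response.get("search", [])
    return matches[0]["id"] if matches else None

print(best_q_number(sample_response))  # prints Q340
```

Run at scale over article text, a lookup like this (plus machine disambiguation) is exactly the “linking entire concepts” step described above.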

At present, Wikipedia is the closest thing we have to the “sum of all human knowledge” but in the future, it may only be the default user interface. Now more than ever, the real action is happening behind the scenes.

♦     ♦     ♦

Birth of Bias: implicit bias’ permanence on Wikipedia

Wikipedia is a project by and for human beings, and necessarily carries the implicit biases of those human beings, whether they’re mindful of the fact or not. This presentation, offered by San Francisco State visiting scholar Jackie Koerner, focused on how to recognize this and think about what to do about it. Slides are accessible by clicking on the image below, and notes from the presentation are here.

Koerner Implicit Bias Wikimania 2017

♦     ♦     ♦

Readership metrics: Trends and stories from our global traffic data

How much do people around the world look at Wikipedia? How much do they look at it on desktop vs. mobile devices? How have things changed over time? All of this and more is found in this presentation from Tilman Bayer, accessible by clicking through the image below.

Readership metrics. Trends and stories from our global traffic data (Wikimania 2017 presentation)

♦     ♦     ♦

The Internet Archive and Wikimedia – Common Knowledge Goals

The Internet Archive is not a Wikimedia project, but it is a fellow nonprofit with a similar outlook and a complementary mission, and over time the synergy between the two institutions has grown. Every serious Wikimedian should know about the Internet Archive. I didn’t attend the presentation by Wendy Hanamura and Mark Graham, but there’s a lot to be gleaned from the slides embedded below, and session notes here.

♦     ♦     ♦

State of Video in the Wikimedia Movement

You don’t watch a lot of video on Wikipedia, do you? It’s not for lack of interest on the part of Wikipedians. It’s for lack of media availability under appropriate licenses, technology and infrastructure to deliver it, and even community agreement about what kinds of videos would help Wikipedia’s mission. It’s an issue Andrew Lih has focused on for several years, and his slides are highly readable on the subject.

♦     ♦     ♦

The Keilana Effect: Visualizing the closing coverage gaps with ORES

As covered in this blog’s roundup of 2016’s biggest Wikipedia stories, one of Wikipedia’s more recent mini-celebrities is a twentysomething medical student named Emily Temple-Wood, who goes by the nom de wiki Keilana. Her response to each instance of gender-based harassment she experienced on the internet was to create a new Wikipedia biography of a woman scientist. But it’s not just an inspiring story greenlit by countless news editors in the last couple of years: WikiProject Women Scientists, founded by Temple-Wood and Rosie Stephenson-Goodknight, dramatically transformed the number and quality of articles in this subject area, taking them from slightly lagging the average article to dramatically outpacing it. Aaron Halfaker, a research scientist at the Wikimedia Foundation, crunched the numbers using the newish machine-learning article quality evaluation tool ORES. Halfaker presented his findings, with Temple-Wood onstage to add context, on Wikimania’s final day. It was more than a victory lap; the question they asked was: can it be done again? Only Wikipedia’s contributors can answer that.
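For the curious, ORES scores were available over a public REST API at the time, predicting article quality on the familiar Stub/Start/C/B/GA/FA scale. The sketch below builds a request URL and collapses class probabilities into a single number, similar in spirit to the aggregate quality measure in Halfaker’s analysis; the model name reflects the 2017-era service, and the sample probabilities are invented for illustration.

```python
from urllib.parse import urlencode

# ORES served quality predictions over a REST API; this builds a request
# URL and turns class probabilities into one score. The endpoint shape
# and "wp10" model name are as the service stood around 2017; the sample
# probabilities below are invented, not real measurements.

QUALITY_CLASSES = ["Stub", "Start", "C", "B", "GA", "FA"]

def scores_url(wiki, rev_ids, model="wp10"):
    params = {"models": model, "revids": "|".join(str(r) for r in rev_ids)}
    return f"https://ores.wikimedia.org/v3/scores/{wiki}/?" + urlencode(params)

# Invented sample: an article ORES thinks is most likely B-class.
sample_probabilities = {
    "Stub": 0.02, "Start": 0.08, "C": 0.20, "B": 0.45, "GA": 0.20, "FA": 0.05,
}

def weighted_quality(probabilities):
    # Weighted sum over the ordered scale (Stub=0 ... FA=5), similar in
    # spirit to the aggregate used in the Keilana Effect paper.
    return sum(QUALITY_CLASSES.index(c) * p for c, p in probabilities.items())

print(round(weighted_quality(sample_probabilities), 2))  # prints 2.88
```

Tracking that weighted score over every revision in a subject area is what lets a “coverage gap” be plotted closing over time.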

The slides can be accessed by clicking through the image below, notes taken live can be found here, and for the academically inclined, you can also read Halfaker’s research paper: Interpolating Quality Dynamics in Wikipedia and Demonstrating the Keilana Effect.

Keilana Effect (Wikimania 2017)

That was fun! Let’s do this again next year.

Update: Looking for more slides and notes? There’s an “All Session Notes” page on the Wikimania site for your edification.

♦     ♦     ♦

The Agony and Ecstasy of Wikidata

on April 12, 2012 at 8:31 am

Although Wikipedia is by far the best-known of the Wikimedia collaborative projects, it is just one of many. Just this last week, Wikimedia Deutschland announced its latest contribution: Wikidata (also @Wikidata, and see this interview in the Wikipedia Signpost). Still under development, its temporary homepage announces:

Wikidata aims to create a free knowledge base about the world that can be read and edited by humans and machines alike. It will provide data in all the languages of the Wikimedia projects, and allow for the central access to data in a similar vein as Wikimedia Commons does for multimedia files. Wikidata is proposed as a new Wikimedia hosted and maintained project.

Possible Wikidata logo

One of a few Wikidata logos under consideration.

Upon its announcement, I tweeted my initial impression, that it sounded like Wikipedia’s answer to Wolfram Alpha, the commercial “answer engine” created by Stephen Wolfram in 2009. It seems to partly be that but also more, and its apparent ambition—not to mention the speculation surrounding it—is causing a stir.

Already touted by TechCrunch as “Wikipedia’s next big thing” (incorrectly identifying Wikipedia as its primary driver, I pedantically note), Wikidata will create a central database for the countless numbers, statistics and figures currently found in Wikipedia’s articles. The centralized collection of data will allow for quick updates and uniformity of statistical information across Wikipedia.
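The single-source-of-truth argument can be illustrated with a toy sketch. These structures are purely hypothetical, not Wikidata’s actual data model: two language “articles” render the same centrally stored figure, so one update propagates everywhere with no manual editing.

```python
# A toy illustration of centralized data: both "articles" read the
# population figure from one central record, so updating it once updates
# every rendering. These structures are hypothetical, not Wikidata's real
# data model, and the figures are illustrative. (Q64 is Berlin's actual
# Wikidata item number.)

central_data = {"Q64": {"label": "Berlin", "population": 3_500_000}}

def render_en(qid):
    item = central_data[qid]
    return f"{item['label']} has a population of {item['population']:,}."

def render_fr(qid):
    item = central_data[qid]
    return f"{item['label']} compte {item['population']:,} habitants."

print(render_en("Q64"))
print(render_fr("Q64"))

# A new census figure is applied once, centrally...
central_data["Q64"]["population"] = 3_645_000

# ...and every language version picks it up automatically.
print(render_en("Q64"))
print(render_fr("Q64"))
```

Contrast this with the status quo described below, where the same change means hand-editing every article in every language.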

Currently, when new information replaces old, as when census surveys, election results and quarterly reports are published, Wikipedians must manually update the old data in every article where it appears, across every language. Wikidata would make possible a quick, computer-led update to replace all out-of-date information. Additionally, it is expected that Wikidata will let visitors search and access information in a less labor-intensive way. As TechCrunch suggests:

Wikidata will also enable users to ask different types of questions, like which of the world’s ten largest cities have a female mayor?, for example. Queries like this are today answered by user-created Wikipedia Lists – that is, manually created structured answers. Wikidata, on the [other] hand, will be able to create these lists automatically.
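TechCrunch’s example turned out to be exactly the kind of question answerable with SPARQL against the Wikidata Query Service, which launched a few years after this post. A sketch, building the query and request URL in Python; the identifiers are real Wikidata properties and items, though the query itself is mine, not an official example.

```python
from urllib.parse import urlencode

# TechCrunch's example as a SPARQL query against the Wikidata Query
# Service (which postdates this post). The identifiers are real:
# P31 = instance of, P279 = subclass of, Q515 = city,
# P6 = head of government, P21 = sex or gender, Q6581072 = female,
# P1082 = population.
query = """
SELECT ?cityLabel ?mayorLabel WHERE {
  ?city wdt:P31/wdt:P279* wd:Q515 .
  ?city wdt:P6 ?mayor .
  ?mayor wdt:P21 wd:Q6581072 .
  ?city wdt:P1082 ?population .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY DESC(?population)
LIMIT 10
"""

# The query service accepts the query as a URL parameter.
url = "https://query.wikidata.org/sparql?" + urlencode(
    {"query": query, "format": "json"}
)
print(url[:60] + "...")
```

The `ORDER BY DESC(?population) LIMIT 10` clause is what replaces the hand-maintained list: the “ten largest” part is computed, not curated.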

This project, which is funded by the Allen Institute for Artificial Intelligence, the Gordon and Betty Moore Foundation, and Google, is expected to take about a year to develop, but the blogosphere is already buzzing.

It’s probably fair to say that the overall response has been very positive. In a long post summarizing Wikidata’s aims, Yahoo! Labs researcher Nicolas Torzec identifies himself as one who excitedly awaits the changes Wikidata promises:

By providing and integrating Wikipedia with one common source of structured data that anyone can edit and use, Wikidata should enable higher consistency and quality within Wikipedia articles, increase the availability of information in and across Wikipedias, and decrease the maintenance effort for the editors working on Wikipedia. At the same time, it will also enable new types of Wikipedia pages and applications, including dynamically-generated timelines, maps, and charts; automatically-generated lists and aggregates; semantic search; light question & answering; etc. And because all these data will be available as Open Data in a machine-readable form, they will also benefit thrid-party [sic] knowledge-based projects at large Web companies such as Google, Bing, Facebook and Yahoo!, as well as at smaller Web startups…

Asked for comment by CNet, Andrew Lih, author of The Wikipedia Revolution, called it a “logical progression” for Wikipedia, even as he worries that Wikidata will drive away Wikipedians who are less tech-savvy, as it complicates the way in which information is recorded.

Also cautious is SEO blogger Pat Marcello, who warns that human error is still a very real possibility. She writes:

Wikidata is going to be just like Wikipedia in that it will be UGC (user-generated content) in many instances. So, how reliable will it be? I mean, when I write something — anything from a blog post to a book, I want the data I use in that work to be 100% accurate. I fear that just as with Wikipedia, the information you get may not be 100%, and with the volume of data they plan to include, there’s no way to vette [sic] all of the information.

Fair enough, but of course the upside is that corrections can be easily made. If one already uses Wikipedia, this tradeoff is very familiar.

The most critical voice so far is Mark Graham, an English geographer (and a fellow participant in the January 2010 WikiWars conference) who published “The Problem with Wikidata” on The Atlantic’s website this week:

This is a highly significant and hugely important change to the ways that Wikipedia works. Until now, the Wikipedia community has never attempted any sort of consistency across all languages. …

It is important that different communities are able to create and reproduce different truths and worldviews. And while certain truths are universal (Tokyo is described as a capital city in every language version that includes an article about Japan), others are more messy and unclear (e.g. should the population of Israel include occupied and contested territories?).

The reason that Wikidata marks such a significant moment in Wikipedia’s history is the fact that it eliminates some of the scope for culturally contingent representations of places, processes, people, and events. However, even more concerning is the fact that this sort of congealed and structured knowledge is unlikely to reflect the opinions and beliefs of traditionally marginalized groups.

The comments on the article are interesting, with some voices sharing Graham’s concerns, while others argue his concerns are overstated:

While there are exceptions, most of the information (and bias) in Wikipedia articles is contained within the prose and will be unaffected by Wikidata. … It’s quite possible that Wikidata will initially provide a lopsided database with a heavy emphasis on the developed world. But Wikipedia’s increasing focus on globalization and the tremendous potential of the open editing model make it one of the best candidates for mitigating that factor within the Semantic Web.

Wikimedia and Wikipedia’s slant toward the North, the West, and English speakers are well-covered in Wikipedia’s own list of its systemic biases, and Wikidata can’t help but face the same challenges. Meanwhile, another commenter argued:

The sky is falling! Or not, take your pick. Other commenters have made more informed posts than this, but does Wikidata’s existence force Wikipedia to use it? Probably not. … But if Wikidata has a graph of the Israel boundary–even multiple graphs–I suppose that the various Wikipedia authors could use one, or several, or none and make their own…which might get edited by someone else.

Under the canny (partial) title of “Who Will Be Mostly Right … ?” on the blog Data Liberate, Richard Wallis writes:

I share some of [Graham’s] concerns, but also draw comfort from some of the things Denny said in Berlin – “WikiData will not define the truth, it will collect the references to the data…. WikiData created articles on a topic will point to the relevant Wikipedia articles in all languages.” They obviously intend to capture facts described in different languages, the question is will they also preserve the local differences in assertion. In a world where we still can not totally agree on the height of our tallest mountain, we must be able to take account of and report differences of opinion.

Evidence that those behind Wikidata have anticipated a response similar to Graham’s can be found on the blog Too Big to Know, where technologist David Weinberger shared a snippet of an IRC chat he had with a Wikimedian:

[11:29] hi. I’m very interested in wikidata and am trying to write a brief blog post, and have a n00b question.
[11:29] go ahead!
[11:30] When there’s disagreement about a fact, will there be a discussion page where the differences can be worked through in public?
[11:30] two-fold answer
[11:30] 1. there will be a discussion page, yes
[11:31] 2. every fact can always have references accompanying it. so it is not about “does berlin really have 3.5 mio people” but about “does source X say that berlin has 3.5 mio people”
[11:31] wikidata is not about truth
[11:31] but about referenceable facts

The compiled phrase “Wikidata is not about truth, but about referenceable facts” is an intentional echo of Wikipedia’s oft-debated but longstanding allegiance to “verifiability, not truth”. Unsurprisingly, this familiar debate is playing itself out around Wikidata already.
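The distinction drawn in that IRC exchange can be sketched as a data structure: a claim stores not one “true” value but any number of sourced values, so disagreeing sources coexist. This is a loose simplification of what became Wikidata’s statement-and-reference model; the field names are hypothetical, and “source X”/“source Y” echo the exchange above.

```python
# A simplified sketch of "referenceable facts": each claim records who
# says it, so "does Berlin really have 3.5 mio people" becomes "does
# source X say that Berlin has 3.5 mio people". Field names are
# hypothetical, not Wikidata's actual schema; the second figure is
# invented for contrast. (Q64 is Berlin's real item number.)

claims = {
    ("Q64", "population"): [
        {"value": 3_500_000, "reference": "source X"},
        {"value": 3_400_000, "reference": "source Y"},
    ],
}

def sourced_values(qid, prop):
    """Return every (value, reference) pair recorded for a claim."""
    return [(c["value"], c["reference"]) for c in claims.get((qid, prop), [])]

for value, ref in sourced_values("Q64", "population"):
    print(f"{ref} gives {value:,}")
```

Storing the disagreement, rather than resolving it, is what lets “verifiability, not truth” carry over from Wikipedia prose to structured data.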

Thanks for research assistance to Morgan Wehling.