Visualizing Wikibase connections, using wikibase.world queries

This entry is part 1 of 3 in the series Wikibase ecosystem

Over the past week I have spent some time writing code to run a little bot on the wikibase.world project, aimed at expanding the number of Wikibases collected there and automating the collection of data that lends itself to automation.

In that week the bot has imported 650 Wikibase installs, bringing the total to 784, of which 755 are active.

I mainly wanted to do this to try to visualize “federation”, or rather the links that currently exist between Wikibases, hence creating P55 (links to Wikibase) and P56 (linked from Wikibase).

251 Wikibases seem to link to each other, and Wikidata is very clearly at the centre of that web.
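To give a flavour of the data behind that visualization, a query along these lines against the wikibase.world query service returns every pair of connected Wikibases. Note this is only a sketch: the prop/direct prefix URI below is my assumption about how the wikibase.world RDF is laid out, and may need adjusting.

```sparql
# Every pair of Wikibases connected by P55 (links to Wikibase).
# The prefix URI is an assumption about the wikibase.world RDF layout.
PREFIX wwt: <https://wikibase.world/prop/direct/>

SELECT ?source ?target WHERE {
  ?source wwt:P55 ?target .
}
```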

Read more

COVID-19 Wikipedia pageview spikes, 2019-2022

Back in 2019, at the start of the COVID-19 outbreak, Wikipedia saw large spikes in page views on COVID-19-related topics as people hunted for information.

I briefly looked at some of the spikes in March 2020 using the easy-to-use pageview tool for Wikimedia sites. But the problem with viewing the spikes through this tool is that you can only look at 10 pages at a time on a single site, when in reality you’d want to look at many pages relating to a topic, across multiple sites at once.

I wrote a notebook to do just this, submitted it for privacy review, and I am finally getting around to putting some of those moving parts and visualizations in public view.

Methodology

It certainly isn’t perfect, but this representation of the spikes is much more accurate than looking at a single Wikipedia or a set of hand-selected pages.

  1. Find statements on Wikidata that relate to COVID-19 items
  2. Find Wikipedia site links for these items (steps 1 and 2 are sketched in the query below)
  3. Find previous names of these pages if they have been moved
  4. Look up pageviews for all titles in the pageview_hourly dataset
  5. Compile into a gigantic table and make some graphs using plotly
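As a rough sketch of steps 1 and 2, a single query against the Wikidata Query Service can do both at once. The seed item used here, Q84263196 (COVID-19), is one obvious anchor point; the actual notebook may well have used a wider set of seed items.

```sparql
# Steps 1 & 2: items with any statement pointing at COVID-19
# (Q84263196), plus the Wikipedia sitelinks for each of them.
SELECT ?item ?article WHERE {
  ?item ?statement wd:Q84263196 .       # any statement referencing COVID-19
  ?article schema:about ?item ;         # sitelinks for each matched item
           schema:isPartOf [ wikibase:wikiGroup "wikipedia" ] .
}
```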

I’ll come onto the details later, but first for the…

Graphics

All graphics generally show an initial peak in the run-up to the WHO declaring a public health emergency of international concern (30 Jan 2020), and another peak starting prior to the WHO declaring a pandemic (11 Mar 2020).

Be sure to have a look at the interactive views of each diagram to really see the details.

COVID-19 related Wikimedia pageviews (interactive view)

Read more

Wikidata ontological tree of Trains

While working on my recent WikiCrowd project I ended up looking at the ontological trees of both Wikidata entities and Wikimedia Commons categories.

In this post, I’ll look at some of the ontology mappings that happen between projects, some of the SPARQL that can help you use this ontology in tools, and also some tools to help you explore this complex tree.

I’m using trains as I think they are fairly easy for most folks to relate to, and also don’t have a massively complex tree.

Commons & Wikidata mapping

Depicts questions in WikiCrowd are entirely generated from these Wikimedia Commons categories, such as Category:Trains & Category:Steam locomotives. These are then mapped to items on Wikidata such as Q870 (train) & Q171043 (steam locomotive).

Wikimedia Commons categories quite often contain infoboxes on the right-hand side that link to a variety of resources for the thing the category covers. Quite often a Wikidata item ID is present; this is the case for the categories above.

Likewise, on Wikidata, statements for P373 (Commons category) will often exist for entities that are depicted on Commons.
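As a minimal sketch of how these two mappings can be used together in SPARQL, the query below walks down the subclass tree from Q870 (train) and pulls the P373 value for each class where one exists:

```sparql
# Train (Q870) and everything below it in the subclass of (P279) tree,
# with the mapped Commons category (P373) where one exists.
SELECT ?class ?classLabel ?commonsCategory WHERE {
  ?class wdt:P279* wd:Q870 .
  OPTIONAL { ?class wdt:P373 ?commonsCategory . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```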

Read more

How can I get data on all the dams in the world? Use Wikidata

During my first week at Newspeak House, while I was explaining Wikidata and Wikibase to some folks on the terrace, the topic of dams came up in a discussion of an old project someone had worked on. Back in the day, collecting information about dams would have taken quite some effort, compiling data from a bunch of different sources to try to get a complete worldwide view of the topic. Perhaps it is easier with Wikidata now?

Below is a very brief walkthrough of topic discovery and exploration using various Wikidata features and the SPARQL query service.

A typical known Dam

In order to get an idea of the data space for the topic within Wikidata, I start with a dam that I already know about: the Three Gorges Dam (Q12514). Using this example I can see how dams are typically described.
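From there, the jump to all of the dams in the world is a single query. A minimal sketch, assuming Q12323 (dam) is the class that items like Q12514 are instances of:

```sparql
# All items that are (transitively) instances of dam (Q12323),
# with coordinates where stated.
SELECT ?dam ?damLabel ?coords WHERE {
  ?dam wdt:P31/wdt:P279* wd:Q12323 .
  OPTIONAL { ?dam wdt:P625 ?coords . }  # P625: coordinate location
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```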

Read more

Using OpenRefine with Wikidata for the first time

I have long known about OpenRefine (previously Google Refine), a tool for working with data, manipulating and cleaning it. As of version 3.0 (May 2018), OpenRefine includes a Wikidata extension, allowing for reconciliation against Wikidata and also editing Wikidata directly (as far as I understand it). You can find some documentation on this topic on Wikidata itself.

This post serves as a summary of my initial experiences with OpenRefine, including some very basic reconciliation from a Wikidata Query Service SPARQL query, and making edits on Wikidata.

In order to follow along you should already know a little about what Wikidata is.
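To give an idea of the kind of starting point I mean, a query like the one below produces a small table of labels that can be exported as CSV, loaded into OpenRefine, and reconciled back against Wikidata. The class used here, Q171043 (steam locomotive), is just an illustrative choice, not necessarily the data I actually used.

```sparql
# A small table of labels to export as CSV and reconcile in OpenRefine.
# Q171043 (steam locomotive) is only an illustrative class.
SELECT ?itemLabel WHERE {
  ?item wdt:P31 wd:Q171043 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 100
```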

Starting OpenRefine

I tried out OpenRefine in two different setups, both of which were easy to get running by following the installation docs: on my actual machine, and in a VM. For the VM I also had to use the -i option to make the service listen on a different IP: refine -i 172.23.111.140

Read more

Your own Wikidata Query Service, with no limits

This entry is part 1 of 3 in the series Your own Wikidata Query Service

The Wikidata Query Service allows anyone to use SPARQL to query the continuously evolving data contained within the Wikidata project, currently standing at nearly 65 million data items (concepts) and over 7,000 properties, which translates to roughly 8.4 billion triples.

Screenshot of the Wikidata Query Service home page including the example query which returns all Cats on Wikidata.
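For reference, that example query is about as small as SPARQL gets:

```sparql
# The default WDQS example: every item that is an
# instance of (P31) house cat (Q146).
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```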

You can find a great write-up introducing SPARQL, Wikidata, the query service and what it can do here. But this post will assume that you already know all of that.

EDIT 2022: You may also want to read https://addshore.com/2022/10/a-wikidata-query-service-jnl-file-for-public-use/

Guide

Here we will focus on creating a copy of the query service using data from one of the regular TTL data dumps and the query service docker image provided by the wikibase-docker git repo, which is supported by WMDE.

Read more

Geospatial search for Wikidata Query Service

Geospatial search is up and running for the Wikidata Query Service! This allows you to search for items with coordinates that are located within a certain radius or within a bounding box.

Alongside the map that can be used to display results in the query service, this really is a great tool for quickly visualizing coverage.
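A radius search runs through the wikibase:around service; here is a minimal sketch, using an arbitrary point (roughly central London) as the centre:

```sparql
# Items with coordinates (P625) within 10 km of a given point.
SELECT ?place ?location WHERE {
  SERVICE wikibase:around {
    ?place wdt:P625 ?location .
    bd:serviceParam wikibase:center "Point(-0.1276 51.5072)"^^geo:wktLiteral .
    bd:serviceParam wikibase:radius "10" .  # radius in kilometres
  }
}
```

A bounding box search looks much the same, with wikibase:box and two corner parameters instead of a centre and radius.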

Read more

Building applications around Wikidata (a beer example)

Wikidata provides free and open access to entities representing real-world concepts. Of course, Wikidata is not meant to contain every kind of data; beer reviews or product reviews, for example, would probably never make it into Wikidata items. However, creating an app powered by Wikidata & Wikibase to contain beer reviews should be rather easy.
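As a sketch of the Wikidata side of such an app, a query like this pulls the beers Wikidata does know about, which the review data could then hang off. Q44 (beer) is the class I'm assuming here, and P495 (country of origin) is just a handy extra column:

```sparql
# Anything that is (transitively) an instance of beer (Q44),
# with country of origin (P495) where stated.
SELECT ?beer ?beerLabel ?countryLabel WHERE {
  ?beer wdt:P31/wdt:P279* wd:Q44 .
  OPTIONAL { ?beer wdt:P495 ?country . }
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```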

Read more