The Wikidata query service is a public SPARQL endpoint for querying all of the data contained within Wikidata. In a previous blog post I walked through how to set up a complete copy of this query service. One of the steps in this process is the munge step, which performs some pre-processing on the RDF dump that comes directly from Wikidata.
This post walks through using the new Hadoop-based munge step with the latest Wikidata TTL dump on Google Cloud's Dataproc service. This cuts the munge time down from 1-2 days to just 2 hours using an 8-worker cluster. Even faster times can be expected with more workers, all the way down to ~20 minutes.
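For context, the Dataproc side of this boils down to a couple of commands. A rough sketch of the shape of it, assuming the dump and output live in a Cloud Storage bucket (the cluster name, region, bucket and jar paths here are placeholders, not the exact invocation from the walkthrough):

```bash
# Create an 8-worker cluster (the sizing behind the ~2 hour run above).
gcloud dataproc clusters create munge-cluster \
    --region=europe-north1 \
    --num-workers=8

# Submit the munge job to the cluster; the jar path and arguments are
# illustrative placeholders only.
gcloud dataproc jobs submit hadoop \
    --cluster=munge-cluster \
    --region=europe-north1 \
    --jar=gs://my-bucket/munge-job.jar \
    -- --input gs://my-bucket/latest-all.ttl.gz --output gs://my-bucket/munged/
```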
During my first week at Newspeak House, while explaining Wikidata and Wikibase to some folks on the terrace, the topic of dams came up in a discussion of an old project that someone had worked on. Back in the day, collecting information about dams would have been quite an effort, compiling data from a bunch of different sources to try to get a complete worldwide view of the topic. Perhaps it is easier with Wikidata now?
Below is a very brief walkthrough of topic discovery and exploration using various Wikidata features and the SPARQL query service.
A typical known Dam
In order to get an idea of the data space for the topic within Wikidata, I start with a dam that I already know about: the Three Gorges Dam (Q12514). Using this example I can see how dams are typically described.
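For example, a query along these lines on the query service lists every direct statement on the item with labels, which quickly shows which properties a dam tends to use (the wd:, wikibase: and bd: prefixes are predefined on the endpoint):

```sparql
# List all direct (truthy) statements on the Three Gorges Dam (Q12514),
# with English labels for each property and value.
SELECT ?propertyLabel ?valueLabel WHERE {
  wd:Q12514 ?directClaim ?value .
  ?property wikibase:directClaim ?directClaim .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
```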
I searched around for a while, looking at various lists of tors on Dartmoor. From a variety of sources I slowly compiled what seemed to be a fairly complete list into a Google Sheet. This list included some initial names and rough OS map grid coordinates (P613).
In order to load the data into OpenRefine I exported the sheet as a CSV and dragged it into OpenRefine using the same process as detailed in my previous post.
I have long known about OpenRefine (previously Google Refine), a tool for working with data: manipulating and cleaning it. As of version 3.0 (May 2018), OpenRefine includes a Wikidata extension, allowing for extra reconciliation and also editing of Wikidata directly (as far as I understand it). You can find some documentation on this topic on Wikidata itself.
This post serves as a summary of my initial experiences with OpenRefine, including some very basic reconciliation from a Wikidata Query Service SPARQL query, and making edits on Wikidata.
In order to follow along you should already know a little about what Wikidata is.
I tried out OpenRefine in two different setups, both of which were easy to get running by following the installation docs: one on my actual machine and one in a VM. For the VM I also had to use the -i option to make the service listen on a different IP: `refine -i 172.23.111.140`
I have owned Minecraft: Java Edition for several years, but despite being a software developer, I had never looked into creating a mod, until now! This is certainly a different topic compared with my regular blog posts, but as always, I hope it will help someone somewhere.
I stumbled upon a video by one of the fastest-growing Minecraft YouTube channels (Dream) in which he quickly demonstrates creating some mods from suggestions in the comments. My journey started there, along with the fact that I could see an org.bukkit.event.Listener class being imported.
This post should serve as a guide that works today, and I also now have a template Bukkit mod on GitHub that you may find useful, as all of the Bukkit templates that I found were years out of date. However, perhaps I should have been looking for Spigot templates! Figuring all of this out only took an hour or so, and at the end of it I was able to create a mod that left me with a world which you can see below.
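To give a flavour of what the Bukkit API looks like, here is a minimal sketch of a plugin main class with an event listener (ExamplePlugin and the join message are hypothetical, not the mod from the post):

```java
package com.example.exampleplugin;

import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.player.PlayerJoinEvent;
import org.bukkit.plugin.java.JavaPlugin;

// Hypothetical example plugin, not the mod built in this post.
public class ExamplePlugin extends JavaPlugin implements Listener {

    @Override
    public void onEnable() {
        // Register this class so its @EventHandler methods receive events.
        getServer().getPluginManager().registerEvents(this, this);
    }

    @EventHandler
    public void onPlayerJoin(PlayerJoinEvent event) {
        // Runs whenever a player joins the server.
        event.getPlayer().sendMessage("Hello from my first mod!");
    }
}
```

A real plugin also needs a plugin.yml in the jar declaring the plugin name, version and main class, so that the server knows what to load.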
The new extension provides the user with an extra button on the ad interests page that will automatically go through and click all of the remove buttons for all of the interest tabs that appear on the top bar. The UI isn’t the best, but it is functional!
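Under the hood the approach is simple DOM automation. A heavily simplified sketch of the idea (the real extension's selectors differ, and Facebook's markup changes often, so matching on button text like this is purely illustrative):

```javascript
// Click every "Remove" button currently on the ad interests page.
// The selector and button text are illustrative; the real page markup differs.
function removeAllInterests() {
  document.querySelectorAll('button').forEach((button) => {
    if (button.textContent.trim() === 'Remove') {
      button.click();
    }
  });
}

removeAllInterests();
```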
Since then mediawiki-docker-dev has evolved into less of a testing system and more of a development environment, allowing the use of a master/replica DB setup, easily swappable PHP versions, debugging and more. The project on GitHub currently has 40 stars and 38 forks, and has seen 17 people contribute back.
Over the past couple of years, developer productivity and development environments have been a big discussion area. The Wikimedia Technical Conference in 2019 had Developer Productivity as its main topic. There have also been efforts in a few directions trying to figure out what works best for the majority of people. These include local-charts (a Kubernetes-based environment) and MediaWiki-Docker (a simple docker-compose based environment).
After downloading it from the store and booting it up, I realized that only PowerShell, cmd and WSL are listed by default (and also Azure, which I don't really care about).
Clicking around the UI a little, I found a settings menu item that opens a JSON configuration file in Notepad. This configuration file defines the behaviour of the terminal, including the profiles that can be loaded.
After a bit of searching and documentation reading, I came up with a profile which I now use for my Git Bash installation (I hope it can help you too).
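For reference, a Git Bash profile for Windows Terminal generally looks something like this (a sketch assuming a default Git for Windows install location; the GUID is just a unique placeholder you can generate yourself):

```json
{
    "guid": "{00000000-0000-0000-0000-000000000001}",
    "name": "Git Bash",
    "commandline": "\"C:\\Program Files\\Git\\bin\\bash.exe\" --login -i",
    "icon": "C:\\Program Files\\Git\\mingw64\\share\\git\\git-for-windows.ico",
    "startingDirectory": "%USERPROFILE%"
}
```

This object goes into the profiles list of the settings JSON file; note that backslashes in the paths have to be escaped.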
For a while I have been running a Wikibase query service update script for WBStack, which is a Java application on a Kubernetes cluster. Part of that journey has included the updater using all available memory, hitting the Kubernetes memory limit and being OOM killed. The title of this post is a little verbose, but I wanted to include all of the keywords that might help people find answers to the memory issues that I was running into.
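The short version of the fix is to make the JVM heap fit inside the container limit rather than letting the JVM size itself from the node's memory. A sketch of the idea (an illustrative manifest fragment, not WBStack's actual deployment; -XX:MaxRAMPercentage needs Java 10+ or 8u191+):

```yaml
containers:
  - name: query-service-updater      # hypothetical name
    image: example/updater:latest    # hypothetical image
    env:
      - name: JAVA_TOOL_OPTIONS
        # Cap the heap at ~75% of the container limit, leaving headroom
        # for off-heap memory (metaspace, threads, direct buffers).
        value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"
```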
WBStack is now in its 7th month, with 76 user accounts which have created 226 MediaWiki sites running Wikibase, of which 145 are currently online (81 sites have been deleted). 295,000 edits have now been made in total, an increase of 95,000 in the last month, which roughly equates to 2 edits a minute for the month.