2025 Year Review

This entry is part 7 of 7 in the series Year Reviews

Well, I’m still not back on a boat as I was in 2023… Where to start…

Lightbug

I haven’t written a whole lot about Lightbug on this blog yet, but it’s still been a fun year of new developments at Lightbug, and I think it’s time to share some of them with pictures ;)

We released a small handheld RTK device, with a programmable ESP32 onboard, giving high-precision positioning in a nice small package.

I’ve enjoyed seeing how folks have been using these devices, from tracking lane changes in cars, to finding accurate path positions, to tracking things around race courses.

You can find the documentation to read through on the docs site, a fancy-looking marketing booklet on the website, or some code examples for the programmable ESP32, also on the docs site.

Hopefully this year I’ll get to the point of writing my GPS, RTK, phone etc. comparison blog post, looking at the tracks recorded by a bunch of different devices to compare accuracy.

Given my open source / open data interests, I do wonder if this will end up being useful for the OpenStreetMap community.

We also developed and worked on the ZCard device, though this has primarily remained in-house, or for shows, conferences and workshops. So much so that there isn’t even a picture of one on the Lightbug website yet, but here is one sitting on one of our t-shirts back at MWC earlier this year, where we had a demo application running on it, allowing basic interactions from a web page.

Think of it kind of like a Flipper Zero in a way, but running the same hardware and firmware stack as the rest of the Lightbug devices, at a fraction of the price, focused on developers. Buttons, lights, an eink screen, but more importantly cellular connectivity (GSM LTE Cat 1), LoRa, WiFi, Bluetooth, GPS and more.

The primary processor and high-level SDKs take care of the complexities of connectivity, power management and eink screen rendering, and give you a high-level API for interacting with the device in many ways, such as drawing on the screen, communicating over LoRa, or connecting to a server to send and receive data.
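
To make that more concrete, here is a minimal illustrative sketch in Python; every name in it (the zcard module, Device, screen, lora, cloud, and the endpoint) is a hypothetical stand-in I’ve invented for illustration, not the real Lightbug SDK API.

```python
# Hypothetical sketch only: "zcard" and everything hanging off it are
# invented names standing in for the real Lightbug SDK.
import zcard  # hypothetical SDK module

device = zcard.Device()

# The SDK hides the eink refresh details; we just say what to draw.
device.screen.draw_text(0, 0, "Hello from ZCard")
device.screen.flush()

# Connectivity is similarly abstracted: one call to send over LoRa...
device.lora.send(b"ping")

# ...and a managed server connection, with the SDK dealing with the
# cellular modem and power management underneath.
conn = device.cloud.connect("example.lightbug.dev")  # hypothetical endpoint
conn.send({"status": "demo"})
reply = conn.receive()
```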

Wikimedia

Meanwhile, in my non-work Wikimedia volunteer time, I had the privilege of attending both Wikimania 2025 in Nairobi and the Wikimedia Hackathon 2025 in Istanbul, Turkey.

Read more

Easy fix for UK-blocked Imgur images on a WordPress site

If you are like me, then I expect you have at some point used https://imgur.com/ to upload a few images, and also used them on your WordPress site. But at some point in 2025, Imgur blocked access to images for users in the UK. Instead of seeing the image, you just see something like the below, which says “Content not viewable in your region” with a little Imgur logo in the bottom corner.

You can read more in their help article.

I could go through and permanently replace all of these links in my blog, but I really can’t be bothered, and instead went looking for a less permanent solution.

While looking around I found the Imgur unblocker Chrome extension, which makes use of a content proxy from DuckDuckGo to rewrite requests and load the content no matter your geography. The extension is on GitHub too, and the single action it performs is so hilariously simple that I figured this might be a nice solution for my WordPress install too.

I found a WordPress plugin called Real-Time Find and Replace that says:

This plugin allows you to dynamically (i.e. at the time when a page is generated) replace code and text from themes and other plugins with code and text of your choosing before a page is delivered to a user’s browser.

Which sounds like exactly what I wanted, and after a quick install, the replacement was rather easy to set up, and seems to cover all cases for me.
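
For reference, the rewrite boils down to a single find/replace pair. Here is a minimal Python sketch of the same substitution; the DuckDuckGo proxy endpoint is the one the extension appears to use, so treat the exact URL as an assumption to verify rather than gospel.

```python
# The same rewrite the plugin performs at page-generation time,
# expressed as a plain string substitution.
FIND = "https://i.imgur.com/"
REPLACE = "https://external-content.duckduckgo.com/iu/?u=https://i.imgur.com/"

def unblock_imgur(html: str) -> str:
    # Imgur image paths are plain alphanumerics plus an extension,
    # so appending them unencoded after the proxied prefix works.
    return html.replace(FIND, REPLACE)

print(unblock_imgur('<img src="https://i.imgur.com/abc123.jpg">'))
```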

Read more

Editing Grokipedia, a first look

As a long-time editor and developer in the Wikipedia and Wikimedia space, I’m of course sceptical about what Grokipedia is trying to be, and whether it stands any chance of success. It may struggle to deliver the resilience, transparency, and community processes that keep projects like Wikipedia thriving, and in the early weeks the untouchable AI-generated content was certainly not going to work moving forward.

However, in the last week or so editing became an option, hidden behind Grok as a safeguard against abuse.

I thought I’d have a look at trying to edit a few areas of content to see what the experience is like, and capture some of the good and bad points.

In no particular order…

Broken link formatting

A fix attempt

The Donald Trump article has some broken formatting, which looks like an incorrectly parsed or formatted Markdown link that is now just showing in the HTML of the page. For posterity, I captured a copy of this version of the page on archive.ph, but here is a snapshot of how it appears.

Read more

How much is “Wikibase Suite” (and deploy) used

This entry is part 3 of 3 in the series Wikibase ecosystem

At the start of this year, I spent some time visualizing the Wikibase ecosystem by making use of the data that has been collected on wikibase.world. As part of that, I tried to focus in on Wikibase Suite, trying to determine how many possible installations there were making use of the container images, and/or the newer Wikibase Suite Deploy thing.

I came to the number 33, based on the fact that there were this many sites online on wikibase.world with an exact match to the MediaWiki versions that have been released as part of the Wikibase Suite container images. If anything, this would be an overestimate, given that these versions would also be installed by some sites not using the images, so the likely number would be closer to ~16…

And of the 33 sites that might possibly be using “suite”, as they are at least on the same version, probably 50% are installed via other means, so the “suite” installations probably account for ~16 of the wikibases on wikibase.world at a guesstimate, with ~50 using other methods and 711 using wikibase.cloud.
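
The counting itself is nothing clever. A sketch of the logic, with made-up example data standing in for the real wikibase.world results:

```python
# Sketch of the matching logic; the version set and site list are
# illustrative placeholders, not the real Suite releases or survey data.
SUITE_MEDIAWIKI_VERSIONS = {"1.39.1", "1.40.0"}

sites = [
    ("wikibase.example.org", "1.39.1"),
    ("data.example.net", "1.38.4"),
    ("items.example.com", "1.40.0"),
]

# Any site on a Suite-released MediaWiki version *might* be a Suite
# install, which is why this count is an upper bound, not a measurement.
possible_suite = [domain for domain, version in sites
                  if version in SUITE_MEDIAWIKI_VERSIONS]
print(len(possible_suite), "sites match a Suite-released MediaWiki version")
```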

9th December 2025 Edit

Apparently this post attracted some attention, and I want to make it clear that “Wikibase suite” here specifically means the packaging up of the container images / docker images into some single installable reusable magic component, and the support system around it.

I personally believe the container images themselves are a great asset, and the idea of a suite of recommended extensions and applications that should be delivered alongside a Wikibase is also an asset that the ecosystem does need.

Also, looking at the “Wikibase Suite Team” board for “Sprint 9 (Nov 25 – Dec 9)”, federation is a key topic currently being worked on, and tasks like T404547 [Self-Hosting Ops] Define metric for ease of self-hosting show that the team is moving, or has moved, away from only thinking about the magic packaged layer.

Updated count

10 months on from this first look, while visiting the Wikimedia Germany offices, I found the need once again to try to come up with concrete numbers for the users of Wikibase Suite, where my general motive would be to convince WMDE that resources are better spent in other places, such as supporting the underlying software, not just this fancy wrapper on top.

And with this new analysis, my revised number is roughly 18, of which 9 are possibly active, and 2 are likely lost to bot spam. You can find the list via this rather lengthy wikibase.world query.

Read more

Slop in, craft out?

Earlier today, I sent this absolutely perfectly crafted piece of slop into GitHub Copilot…

Right, but i want thje patche sot be / and /* always

And as I already expected, having used these LLM-based coding agents and assistants continually throughout their evolution, the resulting change was exactly what I wanted, despite the poor instructions.

Now, I’m sure there is actually some difference, and likely this depends on the relevance of the typoed areas, and how often such typos might also appear in training data.

Why is this, you might ask?

Read more

Wikidata, instance of and subclass of through time (P31 & P279)

Last month I looked at all Wikimedia Commons revisions and managed to generate some data and graphs for the usage of depicts statements since they were introduced on the project.

This month, I have applied the same analysis to Wikidata, but looking at instance of and subclass of statements on items. A slightly bigger data set, but essentially the same process.

This will enable easy updating of various pie charts that have been published over the years.

In future, this could be easily adapted to show per-Wikipedia-project graphs, such as those currently at Wikidata:Statistics/Wikipedia.

Method

The details of the method can be seen in code in my previous post about depicts statements, and this mostly stays the same.

In words:

  • Look at every revision of Wikidata ever
  • Parse the JSON to determine what values there are for P31 and P279 for each revision
  • Find the latest revision of each item in each given month, and thus find the state of all items in that month
  • Plot the data by number of items that are P31 or P279 of each value item
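
As a rough sketch of the extraction step, assuming the current Wikidata entity JSON shape (claims keyed by property ID, with item values under mainsnak.datavalue; very old revisions use earlier serialization formats and would need extra handling):

```python
import json

def p31_p279_values(revision_json: str) -> dict[str, list[str]]:
    """Extract the P31 and P279 target item IDs from one revision's
    entity JSON."""
    entity = json.loads(revision_json)
    values: dict[str, list[str]] = {"P31": [], "P279": []}
    for prop in values:
        for claim in entity.get("claims", {}).get(prop, []):
            value = claim.get("mainsnak", {}).get("datavalue", {}).get("value", {})
            # Item values come through as {"entity-type": "item", "id": "Q..."}
            if isinstance(value, dict) and "id" in value:
                values[prop].append(value["id"])
    return values
```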

There are some minor defects in this logic currently that could be cleaned up in future iterations:

  • Deleted items will continue being counted, as I don’t account for the point at which items are deleted
  • Things will be double counted in this data, as one item may have multiple P31 and P279 values, and I don’t try to join these into higher-level concepts at all

We make an OTHER and UNALLOCATED count as part of the final data summarization. OTHER accounts for things that have not made it into the top 20 items by count, and UNALLOCATED means that we didn’t have a P31 or P279 value in the latest revision.
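
That summarization step looks roughly like this, given a count of items per target value item (the top-20 cutoff matching the description above):

```python
from collections import Counter

def summarize(counts: Counter, unallocated: int, top_n: int = 20) -> dict[str, int]:
    """Keep the top N value items by count, fold the rest into OTHER,
    and record items with no P31/P279 value as UNALLOCATED."""
    summary = dict(counts.most_common(top_n))
    summary["OTHER"] = sum(counts.values()) - sum(summary.values())
    summary["UNALLOCATED"] = unallocated
    return summary
```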

2025

For August 2025 (or at least part way through it), this is the current state of Wikidata per the above method.

You can now find a PNG of this pie chart on Wikimedia Commons https://commons.wikimedia.org/wiki/File:Wikidata_P31_%26_P279_analysis_August_2025.png
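
If you want to reproduce that kind of chart yourself from the summarized data, a matplotlib sketch looks something like the below; the numbers here are placeholders, not the real August 2025 figures.

```python
import matplotlib.pyplot as plt

# Placeholder counts per value item; the real data comes from the
# summarization step described above.
summary = {
    "Q5 (human)": 9_000_000,
    "Q16521 (taxon)": 3_000_000,
    "OTHER": 5_000_000,
    "UNALLOCATED": 2_000_000,
}

plt.figure(figsize=(8, 8))
plt.pie(list(summary.values()), labels=list(summary.keys()), autopct="%1.1f%%")
plt.title("Wikidata P31 & P279 analysis, August 2025 (placeholder data)")
plt.savefig("wikidata_p31_p279_pie.png")
```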

Read more

What was Wikibase “Federated Properties”

The “Federated Properties” feature allows / allowed a local Wikibase instance to access and utilise properties directly from a remote Wikibase, primarily Wikidata. Its primary purpose is to enable partial federation between a local Wikibase and Wikidata, broadening the base of available data without needing to create a property set from scratch.

I’m split between using the present and past tense here, as all of this code still exists within the Wikibase extension, however no one has used it since 2022, and it certainly doesn’t seem to be on the short or medium term (or maybe even long term) roadmaps.

This overview comes from the Wikibase – Federated Properties Phabricator project, which I’ll quote the whole of here for posterity.

Federated Properties v2 (2021)
An initiative to give users the ability to access remote properties from their local Wikibase and use them in combination with custom local properties. The primary use case is enabling partial federation between a Wikibase and Wikidata. This version of the feature will allow you to:

  • Opt-in to use Wikidata’s properties in addition to your own custom local properties
  • Create and view statements about local entities that contain both local and federated properties
  • Query your Wikibase using both local and federated properties

Federated Properties v1 (2020-2021)
An initiative to give users the ability to access remote properties from their local Wikibase (no local properties were possible in this MVP). This version was launched in the Wikibase Spring Release in May 2021.

As far as I remember, the project died with v2, and I don’t even recall if v2 really saw the light of day outside WMDE internal testing and/or hidden testing on wikibase.cloud.

Read more

Wikimedia Commons Depicts statements over time

Wikimedia Commons now uses Structured Data on Commons (SDC) to make media information multilingual and machine-readable. A core part of SDC is the ‘depicts’ statement (P180), which identifies items clearly visible in a file. Depicts statements are crucial for MediaSearch, enabling it to find relevant results in any language by using Wikidata labels, as well as having a more precise definition and structure than the existing category structures.

SDC functionalities began to roll out in 2019. Multilingual file captions were introduced early that year, enabling broader accessibility, followed by the ability to add depicts statements directly on file pages and through the UploadWizard.

Although there are numbers floating around showing a general increase in usage of structured data on Commons, there didn’t seem to be any concrete numbers around the growth in use of depicts statements.

I was particularly interested in this, as my tool WikiCrowd is steadily becoming a more and more efficient way of adding these statements en masse. So I decided to see what data I could come up with.
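
The extraction side is much the same as for items, with one wrinkle: as far as I recall the serialization, MediaInfo entities keep their claims under a “statements” key rather than “claims”. A minimal sketch:

```python
import json

def depicts_ids(mediainfo_json: str) -> list[str]:
    """Pull the depicts (P180) target item IDs out of a MediaInfo
    entity's JSON (statements live under "statements", not "claims")."""
    entity = json.loads(mediainfo_json)
    ids = []
    for statement in entity.get("statements", {}).get("P180", []):
        value = statement.get("mainsnak", {}).get("datavalue", {}).get("value", {})
        if isinstance(value, dict) and "id" in value:
            ids.append(value["id"])
    return ids
```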

Read more

Easy WSL Windows path switching alias

I have been primarily developing on WSL for some years now, and still love the combination in terms of all-around flexibility. When primarily working on Linux-based or Linux-focused applications, everything is lovely! However, I’m spending more time straying into the land of hardware, USB devices, and custom IDEs and debug interfaces that are …

Read more

AI Code assistant experience comparison (golang-kata-1)

If you’re reading this and thinking about trying an IDE-integrated coding agent, or thinking about switching, maybe stick around, have a read, and watch some of the videos. There is at least 6 hours’ worth of experience wrapped up in this 20 minute read!

I’m watching a thread on the GitHub community forums, where people are discussing how GitHub Copilot has potentially gone slightly downhill. And in some ways I agree, so I thought I’d spend a little bit more time looking at the alternatives, and how they behave.

This post tries to compare 9 different setups, and will primarily look at the differences in presentation within the VS Code IDE that each of these different coding assistants has: how the default user interactions work, how the tasks are broken down and presented to the user, and generally what the user experience is like between the different assistants.

I’ll try to flag up some other useful information along the way, such as time comparisons, amount of human interaction needed, and overall satisfaction with what the thing is doing, and if this all presents itself nicely in this post, I might find myself writing more in the future…

However, I will not be looking at cost, setup, resource usage or what’s happening with my data along the way…

Assistant, LLM combinations

| Assistant | Model | Main tasks @ | Tests @ | Second app @ |
|---|---|---|---|---|
| Github Copilot | GPT 4o | ~ 5:00 | ~ 24:45 | ~ 32 |
| Github Copilot | GPT 4.1 | ~ 15:00 | ~ 17:40 | ~ 35 |
| Github Copilot | Claude Sonnet 4 | ~ 17:00 (inc tests) | ~ 17:00 | ~ 28 |
| Gemini Code Assistant | Gemini Something ? | ~ 11:20 | ~ 14:30 | ~ 25 |
| AmazonQ | Claude Sonnet 4 | ~ 7:20 | ~ 15:50 | ~ 28 |
| Roocode | GPT 4.1 (via Github Copilot) | ~ 5:30 | ~ 10:00 | ~ 18 |
| Roocode | Claude Sonnet 4 (via Anthropic) | ~ 15:30 | ~ 20:00 | ~ 37 |
| Claude Code | Claude Sonnet 4 | ~ 9:30 | ~ 17:40 | ~ 24 |
| Claude Code | Claude Opus 4 | ~ 10:00 | N/A | N/A |

I have set up this post, and the code problem, in such a way that I should be able to easily add more combinations and comparisons in the future, and directly compare the performance back to this post. Ideally, at some stage I’d try some other models via Ollama, and also some other pay-per-request LLM APIs…

Read more