Wikimedia Commons Depicts statements over time

Wikimedia Commons now uses Structured Data on Commons (SDC) to make media information multilingual and machine-readable. A core part of SDC is the ‘depicts’ statement (P180), which identifies items clearly visible in a file. Depicts statements are crucial for MediaSearch, enabling it to find relevant results in any language by using Wikidata labels, as well as providing a more precise definition and structure than the existing category system.
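As a rough illustration of the structure involved, a depicts statement is stored in a file's structured data as a Wikibase statement on property P180. A minimal sketch of pulling the depicted item IDs out of that JSON might look like this (the sample payload here is hand-written for illustration, not live API output):

```python
# Sketch: extract depicted Wikidata item IDs (P180) from the
# structured-data JSON of a Commons file. `sample_entity` is an
# invented sample in the Wikibase entity format.
sample_entity = {
    "statements": {
        "P180": [
            {"mainsnak": {"snaktype": "value", "property": "P180",
                          "datavalue": {"type": "wikibase-entityid",
                                        "value": {"id": "Q146"}}}},
            # Statements without a concrete value ("somevalue") carry no item ID.
            {"mainsnak": {"snaktype": "somevalue", "property": "P180"}},
        ]
    }
}

def depicted_items(entity):
    """Return the item IDs from every P180 statement that has a value."""
    items = []
    for statement in entity.get("statements", {}).get("P180", []):
        snak = statement["mainsnak"]
        if snak.get("snaktype") == "value":
            items.append(snak["datavalue"]["value"]["id"])
    return items

print(depicted_items(sample_entity))  # -> ['Q146']
```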

SDC functionalities began to roll out in 2019. Multilingual file captions were introduced early that year, enabling broader accessibility, followed by the ability to add depicts statements directly on file pages and through the UploadWizard.

Although there are figures floating around showing a general increase in the use of structured data on Commons, there didn’t seem to be any concrete numbers on the growth in use of depicts statements.

I was particularly interested in this, as my tool WikiCrowd is steadily becoming a more and more efficient way of adding these statements en masse. So I decided to see what data I could come up with.

Read more

Easy WSL Windows path switching alias

I have been primarily developing on WSL for some years now, and still love the combination for its all-round flexibility. When working on Linux-based or Linux-focused applications, everything is lovely! However, I’m spending more time straying into the land of hardware, USB devices, and custom IDEs and debug interfaces that are … Read more
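The teaser doesn’t show the alias itself, but the mapping such an alias (or WSL’s own `wslpath` utility) performs, from a Windows path to its `/mnt` mount point, can be sketched as follows. The function name is my own invention, and inside WSL the bundled `wslpath -u` tool handles this properly (UNC paths, custom mount roots, and so on):

```python
def win_to_wsl(path: str) -> str:
    """Map a Windows path like 'C:\\Users\\alice' to '/mnt/c/Users/alice'.

    Illustrative sketch only: real WSL setups should prefer `wslpath -u`.
    """
    drive, sep, rest = path.partition(":")
    if not sep:
        # No drive letter: just normalise the separators.
        return path.replace("\\", "/")
    # Lowercase the drive letter, swap backslashes for forward slashes.
    return "/mnt/" + drive.lower() + rest.replace("\\", "/")

print(win_to_wsl(r"C:\Users\alice\Documents"))  # -> /mnt/c/Users/alice/Documents
```

A shell alias would then just wrap this conversion in a `cd`.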

AI Code assistant experience comparison (golang-kata-1)

If you’re reading this, and thinking about trying an IDE-integrated coding agent, or thinking about switching, maybe stick around, have a read and watch some of the videos. There are at least 6 hours’ worth of experience wrapped up in this 20-minute read!

I’m watching a thread on the GitHub community forums, where people are discussing how GitHub Copilot has potentially gone slightly downhill. And in some ways I agree, so I thought I’d spend a little bit more time looking at the alternatives, and how they behave.

This post compares 9 different setups, looking primarily at the differences in how each of these coding assistants presents itself within the VS Code IDE: how the default user interactions work, how tasks are broken down and presented to the user, and generally what the user experience is like across these different assistants.

I’ll try to flag up some other useful information along the way, such as time comparisons, amount of human interaction needed, and overall satisfaction with what the thing is doing, and if this all presents itself nicely in this post, I might find myself writing more in the future…

However, I will not be looking at cost, setup, resource usage or what’s happening with my data along the way…

Assistant, LLM combinations

Assistant               Model                             Main tasks            Tests     Second app
GitHub Copilot          GPT 4o                            ~ 5:00                ~ 24:45   ~ 32
GitHub Copilot          GPT 4.1                           ~ 15:00               ~ 17:40   ~ 35
GitHub Copilot          Claude Sonnet 4                   ~ 17:00 (inc tests)   ~ 17:00   ~ 28
Gemini Code Assistant   Gemini Something ?                ~ 11:20               ~ 14:30   ~ 25
AmazonQ                 Claude Sonnet 4                   ~ 7:20                ~ 15:50   ~ 28
Roocode                 GPT 4.1 (via GitHub Copilot)      ~ 5:30                ~ 10:00   ~ 18
Roocode                 Claude Sonnet 4 (via Anthropic)   ~ 15:30               ~ 20:00   ~ 37
Claude Code             Claude Sonnet 4                   ~ 9:30                ~ 17:40   ~ 24
Claude Code             Claude Opus 4                     ~ 10:00               N/A       N/A

I have set up this post, and the code problem, in such a way that I should be able to easily add more combinations and comparisons in the future, and compare their performance directly back to this post. Ideally, at some stage I’d try some other models via Ollama, and also some other pay-per-request LLM APIs…

Read more