Late to “AI” assisted development?

Earlier this week, someone asked me whether they were perhaps late to making use of AI-assisted development. They had dived into it over the past 2 months (using GitHub Copilot) and, as a small team, were already seeing large gains in terms of leverage of their time. I thought for a second and responded that they could have seen comparably worthwhile gains roughly a year ago. In this post, I'm going to look back over the past few years and try to figure out what the timeline has actually looked like.

My own memory of this is vague and not very certain: roughly speaking, pre-COVID I don't remember much AI being used in software development, and after COVID we were in the AI era? The first place I personally remember using assisted development was via the initial GitHub Copilot auto-completions in VS Code, which were of questionable usefulness at the time but still showed promise. Along the way I'll likely cover the first version of Claude Code, Gemini entering the scene, and, within GitHub Copilot, the advancements from completions, to ask & edit, to agent, and finally autopilot and cloud agents.

2017 – 2022: The Transformer era

And although there are other notable mentions, such as BERT from Google in 2018 and CodeBERT in 2020, most of the above came far before most people started looking at or using AI for coding, and that includes me. I first started using models during development with the introduction of GitHub Copilot and its auto-completions within VS Code.

GitHub Copilot Technical Preview (June 2021+)

My email invitation to the GitHub Copilot Technical Preview came in on the 8th July 2021, and the public announcement on the GitHub blog can still be found, dated 29th June 2021.

Read more

AI Code assistant experience comparison (golang-kata-1)

This entry is part 1 of 2 in the series Golang AI kata comparison

If you’re reading this and thinking about trying an IDE-integrated coding agent, or thinking about switching, maybe stick around, have a read and watch some of the videos. There are at least 6 hours’ worth of experience wrapped up in this 20 minute read!

I’ve been watching a thread on the GitHub community forums, where people are discussing how GitHub Copilot has potentially gone slightly downhill. In some ways I agree, so I thought I’d spend a little more time looking at the alternatives and how they behave.

This post compares 9 different setups, looking primarily at how each of these coding assistants presents itself within the VS Code IDE: how the default user interactions work, how tasks are broken down and presented to the user, and generally what the user experience is like across the different assistants.

I’ll try to flag up some other useful information along the way, such as time comparisons, the amount of human interaction needed, and overall satisfaction with what the thing is doing. If this all presents itself nicely in this post, I might find myself writing more in the future…

However, I will not be looking at cost, setup, resource usage or what’s happening with my data along the way…

Assistant, LLM combinations

| Assistant            | Model                            | Main tasks          | Tests   | Second app |
|----------------------|----------------------------------|---------------------|---------|------------|
| GitHub Copilot       | GPT 4o                           | ~ 5:00              | ~ 24:45 | ~ 32       |
| GitHub Copilot       | GPT 4.1                          | ~ 15:00             | ~ 17:40 | ~ 35       |
| GitHub Copilot       | Claude Sonnet 4                  | ~ 17:00 (inc tests) | ~ 17:00 | ~ 28       |
| Gemini Code Assist   | Gemini Something ?               | ~ 11:20             | ~ 14:30 | ~ 25       |
| Amazon Q             | Claude Sonnet 4                  | ~ 7:20              | ~ 15:50 | ~ 28       |
| Roocode              | GPT 4.1 (via GitHub Copilot)     | ~ 5:30              | ~ 10:00 | ~ 18       |
| Roocode              | Claude Sonnet 4 (via Anthropic)  | ~ 15:30             | ~ 20:00 | ~ 37       |
| Claude Code          | Claude Sonnet 4                  | ~ 9:30              | ~ 17:40 | ~ 24       |
| Claude Code          | Claude Opus 4                    | ~ 10:00             | N/A     | N/A        |

I have set up this post, and the code problem, in such a way that I should be able to easily add more combinations and comparisons in the future and directly compare their performance back to this post. Ideally, at some stage I’d try some other models via Ollama, and also some other pay-per-request LLM APIs…

Read more