From AI cloud agents to local flows

Mar 11, 2026 · 5 min read

It is hard to believe that it’s been 7 months since I wrote “Two weeks with the GitHub Coding Agent”. That post documented my findings after two weeks of working with GitHub Copilot. This post is a quick follow-up, continuing to document my journey with AI agents in my engineering flow.

What has changed?

I’ve completely stopped using the hosted agentic flows from GitHub. I’ve been running an engineering community session at work recently, and oddly found that most engineers had also stopped using hosted flows - although we couldn’t quite articulate why. Let me try to do so here.

I’d boil this down to feedback loops. For over 20 years we’ve wanted to have “quicker feedback loops”. If you engage with AI agents on your machine, the feedback loop is there in your terminal. With hosted web application flows, there’s just more friction.

If I want to have a back-and-forth with an agent about a feature or bug, it’s like a chat conversation in Slack, but in the terminal. With the Copilot hosted flows (I cannot comment on other vendors), it all feels so much slower. I have to keep switching to my browser, whereas locally I just flick between tmux windows.

The upside of hosted flows is still the team-based aspect, e.g. we can share the PR and give feedback to the AI agent as a team. This isn’t really achievable locally (as far as I’m aware).

What is my current flow?

I’m using opencode as my CLI tool of choice. It’s vendor-agnostic, which is right up my street. The AI world is moving so quickly that I’m not sure I want to keep changing the tools I use, just the models. opencode is open source, and currently has wide support for the models I want to use. I’ve found the way it formats output to be clear and concise, and its build and plan modes are intuitive.

I’m finding that I am using git worktrees more and more. I’ve been aware of them previously, but it wasn’t until Safia kept mentioning them on BlueSky that I really started taking note. It turns out that they are perfect for AI agent work, as you can work on different branches in parallel without cloning the repo multiple times.
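If you haven’t tried worktrees, they take about three commands to get going. Here’s a self-contained sketch in a throwaway repo (the demo paths and branch name are made up); in a real project you’d run the `git worktree` commands from your existing clone.

```shell
# Throwaway repo purely so the demo is self-contained
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "you@example.com"   # identity just for the demo repo
git config user.name "You"
git commit -q --allow-empty -m "initial commit"

# The interesting bit: a second working copy on a new branch,
# in a sibling directory, without cloning the repo again
git worktree add ../demo-feature-x -b feature-x

# Each worktree is a full checkout on its own branch
git worktree list

# Tidy up once the branch is merged (the branch itself survives)
git worktree remove ../demo-feature-x
```

Each worktree shares the same object store, so they’re cheap to create and throw away.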

I’m a tmux user, so I have a bash alias that fires up a tmux window with a certain pane layout. I’ve coupled that to the creation of git worktrees for projects too. I have a pane for opencode, a pane for neovim, and then a pane left for anything I need to do on the CLI.
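The alias itself is nothing fancy. Here’s a sketch of roughly what mine does, with made-up names (`wtmux`, the layout, and the pane contents are just illustrative). It assumes you’re already inside a tmux session, inside the repo you want a worktree for, and that `pane-base-index` is 0.

```shell
# Hypothetical helper: create a git worktree for a branch and open a
# tmux window with a three-pane layout ready for agent work.
wtmux() {
  local branch="$1"
  local dir
  dir="../$(basename "$PWD")-${branch}"

  git worktree add "$dir" -b "$branch" || return 1

  # New window named after the branch; every pane starts in the worktree
  tmux new-window -c "$dir" -n "$branch"
  tmux split-window -h -c "$dir"   # right pane: neovim
  tmux split-window -v -c "$dir"   # bottom-right pane: spare CLI
  tmux select-pane -t 0
  tmux send-keys 'opencode' C-m    # launch opencode in the left pane
}
```

Then something like `wtmux fix-login-bug` gives you a fresh branch, a fresh directory, and the panes ready to go.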

I’ve found myself “pairing” with AI agents more to plan out work, investigate bugs, and find new ways of solving issues. It’s common for me to be pairing with an agent whilst also delivering some value for product work. This can all happen in the worktrees, whilst I can still do other work on the main branch.

The main addition to my flow is Skills. opencode supports the same Skills as Claude. Skills are documents written in Markdown that define a task and how to complete it. They are a bit like a playbook for the agent. You can have them do anything you like, but they are particularly useful for things that involve multiple steps, or require some sort of specific formatting.
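For a concrete (if invented) example, a Skill is just a Markdown file with a small frontmatter header; the `name` and `description` fields follow the Claude Skills convention, and the task itself is made up for illustration.

```markdown
---
name: changelog-entry
description: Add a changelog entry for the current branch's changes
---

# Changelog entry

1. Run `git log main..HEAD --oneline` to see what this branch changes.
2. Summarise the changes in one sentence, in the imperative mood.
3. Append that sentence as a bullet under the "Unreleased" heading in
   CHANGELOG.md, matching the existing formatting.
```

Multiple steps, a fixed output format: exactly the sort of thing Skills are good at.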

The best thing I’ve found about Skills is how easy they are to create. My flow normally looks like this.

I generally advocate that Skills use CLI tools over MCP servers. I don’t want to be running a stack of local MCP servers when a CLI is more than capable! They simply consume too many resources on my machine. MCPs also feel separate from my flow. If I use a Jira CLI, for example, I want the Skills to use the same tools I do. I don’t want to set up something specifically for an AI agent when there’s no need.

What have I learned?

Quite simply, the tech is evolving so much, you have to keep trying things.

Be daft, be playful, and see what sticks. I certainly feel like I try to embed AI flows into the way I want to work, rather than changing the way I work for AI flows. A local CLI flow is how I work. Having conversations and instructing AI agents to do things in the browser isn’t really my vibe.

When I originally thought about this second post in the series, I was going to show how the stats differed since my last post. But you know what? It’s just not realistic. At this point AI is so embedded in my flows, most of the work has had AI involvement in some way. Maybe this is another thing I prefer about local flows. It’s my work, and I happen to use AI tools. Hosted flows feel like it’s AI work, and I review it. Subtly different.

I’m looking forward to seeing how AI agents evolve and how that impacts my flow.

