
Posts

Showing posts from February, 2026

When AI Feels Like Extra Work

I had a realization: I barely opened my LLM this week, and for the first time in months, I felt like I was falling behind. In the past few months I have been surrounded by the narrative that AI is the Great Accelerator. Our leadership team asks about it, our engineers are experimenting with it, and my LinkedIn feed is a constant stream of people highlighting how they’ve been using AI to automate their entire existence. And yet this week I didn’t have any new AI wins. I did a lot of things the old way - tough 1-1 conversations, frustrating planning meetings, dealing with misalignments, and reacting to new fire drills. What strikes me is that I have a distinct feeling of AI-related guilt. I did use AI - it helped me with my 1-1s, it helped me pull together an annual review for someone - it probably saved me 3 or 4 hours this week alone. But I still found myself wondering: Should I be prompting this? Am I falling behind because I’m doing the heavy lifting myself? Is my refusal ...

Playing a Chess Simul: Are AI Agents Exhausting Our Best People?

We were promised that AI would be the "Easy Button" for engineering. Instead, I’m seeing something else: my highest-performing leads and managers are reporting a level of fatigue that feels different from standard burnout. They aren’t just tired; they’re depleted. I recently heard a brilliant analogy for this: using AI agents is like a Grandmaster playing 20 games of chess simultaneously. On paper, the Grandmaster is "20x more productive". But look closer at the board. They aren't just playing; they are context-switching at terminal velocity. The "Reviewer's Tax": When an engineer writes code, they are in "Flow State." They build the mental model, piece by piece. It’s hard work, but it’s linear. When that same engineer uses three different AI agents to generate a PR, they shift from Creator to Auditor. They are no longer building the house; they are the site inspector walking through a house built in five minutes by a hyperactive robot....

The Cohesion Crisis: A Prediction on Feature Sprawl

Ever since becoming a leader, my North Star has been velocity. Each day I’ve tried to optimize sprints, CI/CD pipelines, and stand-ups to solve one problem: how do we ship faster? Now, I’m staring at a world where AI might actually give us what we asked for. But as I watch our teams start to lean into these tools, I’m beginning to suspect we’re heading toward a different kind of disaster. My prediction: we are about to enter the era of products plagued by feature sprawl. The "Cost of Yes" is bottoming out. Historically, engineering friction was a natural filter for product quality. Because code was expensive and slow to write, we had to be ruthless about what made the cut. We debated every setting, every UI button, and every API endpoint because we only had so many hours in a sprint. But I’m seeing that friction start to evaporate. When AI makes it trivial to spin up a new module or a "quick" feature add-on, the organizational impulse won't be to simplify - it will be t...

The New Unit of Planning: Headcount vs Tokens

  As I work through another planning cycle, the ritual remains the same: the spreadsheet opens, the roadmap is prioritized, and we start the negotiation for more "heads." In my world, and likely yours, the engineer has always been the fundamental unit of progress. If we want to move faster, we hire more of them. But this year, the math feels... off. I’m asking myself: do I really need a new hire for this, or do I just need a larger token budget? We are moving away from a world of Fixed Labor and into a world of Variable Compute. When you hire a Senior Engineer, you’re buying a long-term asset. You’re also buying a 6-month onboarding lag, management overhead, and a permanent line item on the P&L. When you "hire" tokens, you’re buying instant, fractional capacity. If your engineers are telling you they can automate 30% of the "toil" using a custom-tuned model, the traditional argument for that extra engineer disappears. We are moving from mere manageme...

Are Engineering Managers Obsolete in an AI World?

So far in my career, our value as leaders was measured by how well we manage the machine - optimizing for velocity, smoothing out team dynamics, and ensuring predictable delivery of business goals. Now, the engine of that machine is changing. For the past few months, a part of me has been feeling some existential dread: with AI advancing so quickly, the fundamental engine of the machine is being rebuilt underneath us. A lot of the discussion has been on 10x or 100x engineers, but what is this going to mean for Engineering Managers? Do we still need the role? If we do, what does an Engineering Manager of the future look like? The future - maybe I mean now… I think it's too early to declare the death of the manager, but I am ready to place a few definitive bets on changes that are coming. Firstly, Engineering Managers who don’t stay on top of AI developments will become obsolete. The 'wait and see' approach has become a 'wait and become obsolete' strategy. Within a performance cycle or two I p...

Prompting AI Adoption for Skeptical Engineers

We’ve all seen the headlines. "AI will replace the junior engineer." "The 10x engineer is now a 100x engineer." In leadership meetings, almost every conversation comes back to velocity and how we could unlock efficiency with AI. But when you talk with the team, the reality can be different. Your most senior engineers - the ones who built the core of your system - are looking at Copilot or Cursor with a squint. Some see it as a toy; others see it as a threat to the craft; a few see it as a glorified Clippy that hallucinates half-baked solutions into their carefully maintained repos. As Engineering Managers and Directors, we are currently in a delicate spot. We are being asked by the business to "leverage AI," but we are being told by some of our best people that it’s "not quite there yet". How do we advocate for AI without sounding like a marketing brochure? How do we move the skeptics without losing their trust? 1. Validate the Skepticism (Because it...

The "Just-in-Time" Manager and the 1:1 Gemini Gem

  We’ve all been there. The calendar notification pings. You finish your previous meeting (which ran over by three minutes), and as the Zoom window for your next 1:1 pops up, you’re frantically tabbing through Google Docs and Slack. You’re trying to remember: What did we talk about last time? Did that project go okay? Wait, did they mention a vacation or a baby? You’re "prepping" while the meeting is already happening. You’re physically present, but mentally you’re a detective trying to solve the mystery of your own calendar while still switching off from the last meeting. It’s a common trap for managers, especially as their org grows and they add skip-level 1-1s, or even skip-skip-level 1-1s. Our schedules are a mosaic of context-switching, and unfortunately, the deep, reflective prep our direct reports deserve is often the first thing to get squeezed out as we deal with incidents or other emergencies. From "Zero Prep" to "Instant Context" R...

Prompting the org

For a decade, my job as an Engineering Leader was about clarity and predictability. I optimized for the team's velocity, coached managers on team dynamics, helped engineers maximize their impact, and tried to build a "machine" where human output was the primary engine. Then the engine changed. We’ve all seen the headlines about 10x productivity and AI-native workflows. My team is early in the adoption curve. We’re still figuring out which tools are signal and which are noise. Honestly? We’re still learning how to work in a world where the "how" (the code) is becoming a commodity, and the "what" (the intent) is everything. A lot has been written about what this means for engineers, but what does this mean for engineering managers, directors, VPs and executives? As we integrate AI, I’m realizing my role is shifting from managing the output to refining the input - what I call the 'Org Prompt’. In AI, a system prompt sets the ...