
When AI Feels Like Extra Work

I had a realization this week: I barely opened my LLM, and for the first time in months, I felt like I was falling behind.

In the past few months I have been surrounded by the narrative that AI is the Great Accelerator. Our leadership team asks about it, our engineers are experimenting with it, and my LinkedIn feed is a constant stream of people highlighting how they’ve been using AI to automate their entire existence.

And yet this week I didn’t have any new AI wins. I did a lot of things the old way - tough 1-1 conversations, frustrating planning meetings, dealing with misalignments, and reacting to new fire drills. What strikes me is that I have a distinct feeling of AI-related guilt. I did use AI - it helped me with my 1-1s, and it helped me pull together an annual review for someone - it probably saved me 3 or 4 hours this week alone. But I still found myself wondering: Should I be prompting this? Am I falling behind because I’m doing the heavy lifting myself? Is my refusal to use a tool a sign of "dinosaur syndrome," or is it actually a preservation of quality?



The Friction of "Efficiency"

Leading a top-tier org is like driving an F1 car at 200 mph: we know that pulling into the 'AI pit stop' to swap our manual tires for automated ones will make us faster in the long run. But when you’re defending a lead in your business, every second in the pits feels like a risk. We’re often choosing to stay on fading, manual tires just to keep our position on the track, fearing that the time it takes to 'upgrade' will cost us the race.

When I have 20 minutes between back-to-back meetings to solve a crisis, I have two choices:

  1. Use my 20 years of experience to pattern-match and write the solution. (Time: 15 mins).

  2. Contextualize the problem, sanitize the sensitive data, craft the prompt, review the output, realize it missed the cultural nuance of my specific org, and then edit it. (Time: 25 mins).

Right now, for the most complex parts of our jobs - the "human" parts - the AI path often feels like a prompting tax, not a shortcut.

Knowing When Not to Prompt

The conversation for Engineering Managers shouldn't just be about how to use AI but about when to use AI. If I use AI to write a performance review or a sensitive email to a struggling peer, I might save ten minutes, but I risk losing the "soul" of the message. If I use it to brainstorm technical architecture, I might get a standard industry answer, but miss the brilliance that my senior staff would provide.

However, the guilt persists because we know that the "AI Path" only gets faster if we keep walking it. Every time I choose the old way because I’m in a rush, I am technically delaying my own evolution.

This week, I’m reflecting on where that line sits.

  • Are we avoiding AI because the tools aren't ready for the nuance we need?

  • Or are we avoiding it because we are addicted to the "ego-hit" of solving hard problems ourselves?

  • Or are we simply opting out of necessity during high-pressure weeks?

I’d love to hear if others are feeling this guilt. Have you had a "Low-AI" week lately? Did it feel like a failure, or a return to craft?

