
From Reporting Grind to Leadership Time: My AI Workflow Evolution

A few weeks ago, I had one of those leadership moments that felt productive on paper and pointless in reality.

I had spent over an hour stitching together Jira updates, GitHub activity, and status comments from different docs so I could write a leadership report. I had "done the work." I had the update. But I had also burned the exact time I needed for coaching conversations, planning decisions, and follow-through with my team.


That tension has been following me all year: am I here to produce updates, or to produce outcomes?


Back in February, I wrote about using a Gemini Gem to prep for 1:1s and skip-levels in minutes instead of scrambling for context in real time (The "Just-in-Time" Manager and the 1:1 Gemini Gem). That was my starting point, and it was a meaningful one: it helped me eliminate hollow conversations and recover mental bandwidth.


But between the start of March and now, I hit a wall.



The One-Shot Illusion

My early assumption was simple: if I wrote a better prompt, one tool would do everything.


I kept trying to one-shot entire workflows. "Read all this data, understand all the context, infer all the dependencies, produce a perfect output." I spent too much time iterating on mega-prompts that were always close but never truly right. Every pass felt like I was 90% there, and 90% is exactly where frustration lives.


What changed was not just the tools. It was my operating model.


I stopped asking AI to be a single super-assistant and started treating it like a toolkit. I still need to architect. I still need to decide what "good" looks like. I still need to connect the dots. The tools are accelerators, not substitutes for judgment.


That mindset shift has been bigger than any specific feature release.

From One Tool to a Workflow Stack

Today I use Gemini, Cursor, and Claude Code differently, not interchangeably.


I am no longer looking for one perfect prompt. I am building small, composable flows that each solve a concrete problem:


  • Read Jira and GitHub data, then generate structured reports that save hours of manual gathering.

  • Read Google Docs, summarize them, and cross-reference linked Jira and GitHub tickets so I can quickly spot misalignment.

  • Read spreadsheet trackers and flag where manual status is out of sync with the actual program tracker (for example, spreadsheet says "To Do" while system-of-record says complete).
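To make the last flow concrete, here is a minimal sketch of the reconciliation idea. It assumes both the spreadsheet tracker and the system-of-record have already been exported into simple status maps keyed by ticket ID; the ticket IDs, field names, and `find_status_mismatches` helper are all hypothetical, not the actual scripts I run.

```python
def find_status_mismatches(tracker, system_of_record):
    """Return (ticket_id, tracker_status, actual_status) for every ticket
    where the manually maintained tracker disagrees with the system-of-record."""
    mismatches = []
    for ticket_id, tracker_status in tracker.items():
        actual = system_of_record.get(ticket_id)
        # Only flag tickets present in both sources whose statuses differ.
        if actual is not None and tracker_status.lower() != actual.lower():
            mismatches.append((ticket_id, tracker_status, actual))
    return mismatches


# Example: spreadsheet still says "To Do" while the system-of-record says "Done".
tracker = {"PROJ-101": "To Do", "PROJ-102": "In Progress"}
system_of_record = {"PROJ-101": "Done", "PROJ-102": "In Progress"}

print(find_status_mismatches(tracker, system_of_record))
# → [('PROJ-101', 'To Do', 'Done')]
```

The value is not the comparison itself but that the flagged list replaces a status-chasing conversation: instead of asking "is this done?", you ask "the tracker and Jira disagree on PROJ-101; which one is right?"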


This is where "vibe coding" has become practical for me. I can quickly stand up scripts and utilities for operational work that used to sit in the "nice to have" pile forever. Not polished products. Useful tools.


And those tools compound.


A small reporting helper turns into a weekly rhythm. A documentation utility turns into cleaner onboarding. A reconciliation script turns into fewer status debates. None of these are individually revolutionary, but together they change how the team runs.

The Leadership Tension I Did Not Expect

As this has become more effective, I have felt two tensions more strongly.


First: individual leverage vs team systems.


If AI makes me personally faster, what do I do with the reclaimed time? Do I just absorb more work? Or do I reinvest that time into better team mechanisms, better decision quality, better coaching, and better planning? Speed without reinvestment is just a faster treadmill.


Second: manager as builder vs manager as builder-of-builders.


Should I be writing these operational tools myself? Or should I stay out of the details and only shape the system through the team? I do not think this is a binary choice anymore. I think leaders need enough hands-on fluency to understand what is now possible, what is risky, and what support the team actually needs.


If I cannot use the tools, how do I coach on the tools?


If I cannot evaluate AI-assisted workflows, how do I set standards for quality, governance, and reliability?


I am increasingly convinced that hands-on experimentation is no longer optional leadership hygiene. Not because every manager should become a full-time builder, but because the management system itself is being reshaped by these capabilities.

What I Got Wrong (and What Finally Worked)

The biggest mistake I made was expecting completeness before value.


I wanted end-to-end perfection. I got long iteration cycles and inconsistent outcomes. The more I pushed for "do everything," the less value I captured.


What worked was reducing scope aggressively:


  • Automate one painful step.

  • Verify quality.

  • Keep the human decision in the loop.

  • Add the next step only after the first one reliably saves time.


That sounds obvious. It did not feel obvious while I was chasing perfect outputs.


The practical lesson for me: do not optimize for a flawless artifact; optimize for faster, better decisions with clearer context.


When I frame it that way, the answer to "is this automation good enough?" becomes much easier.

The Bet I Am Making Now

My current bet is simple: leaders should experiment across multiple AI tools and compose them intentionally.


Using one tool deeply taught me fundamentals. Using multiple tools made me effective.


I still use all three. I expect that mix to keep changing. The point is not tool loyalty. The point is leadership leverage.


If you are in a similar spot, here is the challenge I would offer:


Pick one recurring reporting or documentation workflow you dislike. Break it into three steps. Automate only the first step this week. Measure what time and attention you actually get back. Then decide where to reinvest that leadership time.


Because that is the real question under all of this, right?


Not "Can AI do this task?"


But "If AI can remove the reporting grind, what higher-value leadership work am I now willing to do with the time?"

