
Prompting the Org

For a decade, my job as an Engineering Leader was about clarity and predictability. I optimized for the team's velocity, coached managers on team dynamics, helped engineers maximize their impact, and tried to build a "machine" where human output was the primary engine.

Then the engine changed.

We’ve all seen the headlines about 10x productivity and AI-native workflows. My team is early in the adoption curve. We’re still figuring out which tools are signal and which are noise. Honestly? We’re still learning how to work in a world where the "how" (the code) is becoming a commodity, and the "what" (the intent) is everything.

A lot has been written about what this means for engineers, but what does it mean for engineering managers, directors, VPs, and executives? As we integrate AI, I’m quickly realizing that my role is shifting from managing the output to refining the input - what I call the 'Org Prompt'.

In AI, a system prompt sets the rules, the persona, and the guardrails. In an engineering organization, our culture, our metrics, and our shared goals are the "prompt." If we’re shipping mediocre features at 10x speed, the bug isn't in the LLM - it’s in our Org Prompt.
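To make the analogy concrete, here's a minimal sketch - plain Python, not any particular vendor's API, with names that are purely illustrative - of the same three ingredients expressed at both levels:

  # Illustrative only: the point is the shape, not the specific values.

  # A system prompt: persona, rules, and guardrails for a model.
  llm_system_prompt = {
      "role": "system",
      "content": (
          "You are a careful senior engineer. "   # persona
          "Prefer small, reviewable changes. "    # rules
          "Never merge code without tests."       # guardrails
      ),
  }

  # The 'Org Prompt': the same three ingredients, expressed as culture,
  # metrics, and shared goals instead of tokens.
  org_prompt = {
      "persona": "teams that own outcomes, not just tickets",
      "rules": ["measure user impact, not lines of code shipped"],
      "guardrails": ["10x speed on the wrong feature is still the wrong feature"],
  }

Same structure, different substrate. Get either one wrong and the output looks busy but misses the point.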

I’m starting this blog, Prompting the Org, to document how I’m learning to navigate this shift. This isn’t a playbook written from the finish line; it’s a field journal from the front lines. Some weeks, the 'Org Prompt' will work; other weeks, we’ll just be debugging a messy output.

Whether you’re a VP, a first-time EM, or an engineer trying to figure out your place in this new world, I want this to be a place where we discuss the real-world friction of this transition:

  • How do we coach managers and engineers when their workflows are suddenly augmented by AI?

  • How do we define 'Seniority' when technical execution is no longer the primary bottleneck?

  • How do we stay technical as leaders when the "tech" is changing every three weeks?

  • How do we navigate the 'trust gap' when engineers worry that AI efficiency today means smaller teams tomorrow?


I don't have the definitive playbook yet. My goal is to find it alongside you. We're going to experiment, break some processes, and hopefully refine our Org Prompt until we're building at our full potential.

Let’s start debugging. I’d love to hear from you—where are you seeing the most 'friction' in your team's adoption of AI right now?
