When managers code with AI: navigating the context gap
Beyond the hype and fear: what we're actually learning about authority, context, and AI in engineering teams
I’ve been watching a pattern emerge across engineering organizations. Managers discover AI coding assistants like Claude Code or GitHub Copilot and decide to contribute more actively to their team’s codebase. Sometimes it works brilliantly. Sometimes it creates unexpected turbulence. The difference? It usually comes down to context, approach, and something I call “authority amplification.”
This isn’t about gatekeeping or whether managers should code or not. It’s about understanding how the combination of positional power, AI assistance, and missing context can either empower teams or inadvertently constrain them. After observing various teams navigate this transition (with notably mixed results), I’ve noticed three critical dynamics at play.
The authority amplification dynamic
When someone in a position of authority contributes code in the same project as their team, their work carries extra weight; this is just organizational physics. Amy Edmondson’s research on psychological safety at Harvard Business School demonstrates that hierarchy naturally influences how teams engage with ideas and decisions. Add AI assistance to this mix, and we get an interesting multiplier effect: managers can now contribute at unprecedented speed, potentially overwhelming the team’s ability to engage thoughtfully (read: critically review PRs) with those contributions.
In his book The Five Dysfunctions of a Team, Patrick Lencioni describes how an absence of trust manifests when team members are reluctant to be vulnerable within the group. When a manager starts contributing AI-assisted code without first establishing context and trust around this new way of working, it can trigger this dysfunction.
I’ve observed teams where managers successfully navigated this (more on that later), but I’ve also seen cases where it went sideways. In one organization, after their leader started actively contributing with AI assistance, the team’s engagement in code reviews noticeably decreased. Not because the code was problematic, but because challenging code that came from both their boss AND was AI-generated felt like questioning two sources of authority simultaneously.
Here’s what makes this particularly tricky: the manager often doesn’t realize this is happening. They see faster contribution as modeling good AI adoption. The team sees it as a signal that their deliberative approach is too slow. It’s a classic case of misaligned interpretations of the same behavior.
The context and knowledge challenge
What we mean by “architectural drift”
Let me define a term that’s important here: architectural drift. This is when a codebase gradually moves away from its intended design patterns and principles through accumulated changes that individually seem reasonable but collectively compromise the system’s coherence. Common causes include imposed deadlines that leave no time to clean up or refactor, shortcuts taken to ship fixes for a high-profile customer, and so on. Think of it like linguistic drift in natural languages: small changes that eventually result in something quite different from the original.
AI coding assistants, powerful as they are, can’t access the narrative knowledge (Dave Snowden’s term from the Cynefin framework) that lives in a team’s collective experience.
The institutional memory problem
Every codebase carries what we might call “scar tissue”; patterns and micro-decisions that emerged from specific incidents, failures, or hard-won insights. A seemingly over-engineered error handling system might be the result of production incidents that nearly lost a major customer. That “unnecessarily complex” service boundary might prevent the kind of cascading failures the team experienced two years ago.
When someone contributes without this context (whether they’re using AI or not), they risk unwinding these deliberate decisions. The AI amplifies this risk because it enables rapid changes across multiple areas before the team can share the relevant history.
The growth and ownership dimension
Gene Kim, in The DevOps Handbook, describes the concept of “improvement kata”; small, daily practices through which teams gradually build capability. There’s something important that happens when developers wrestle with problems: they build mental models, discover edge cases, and develop intuition about the system.
When managers provide AI-assisted solutions to problems the team is working through, even with the best intentions, it can short-circuit this learning process. But (and this is crucial) this isn’t always negative. Sometimes, showing a sophisticated solution can accelerate learning. Sometimes it stifles it. The difference often lies in how it’s introduced.
When it works well
I’ve seen several cases where managers successfully used AI tools while coding alongside their teams. The pattern that works typically involves:
Starting as a learner: One leader I know spent weeks pairing with different team members before contributing any AI-assisted code. “I needed to understand not just what our code does, but why it exists in its current form,” he explained. He used AI to help him understand the existing patterns faster, not to immediately change them.
Maintaining domain boundaries: Another manager uses AI assistance exclusively for the authentication system he’s remained close to. He doesn’t venture into areas where he lacks recent context. This respects both his expertise and his team’s ownership.
Transparent experimentation: Several successful managers create separate sandbox projects or clearly marked experimental branches where they explore AI capabilities, sharing learnings without directly impacting production code paths.
When it struggles
The pattern that tends to create challenges looks different:
Speed over context: Jumping in quickly to “help” without understanding the system’s evolution
Breadth over depth: Using AI to contribute across many areas simultaneously
Solution over process: Providing answers rather than participating in problem-solving
Finding the productive middle ground
The Dreyfus model of skill acquisition (which describes how people progress from novice to expert through pattern recognition built on experience) offers an interesting lens here. AI can simulate expertise through pattern matching, but it can’t simulate the specific patterns of your team’s journey. The question isn’t whether managers should use AI when coding; it’s how to combine AI assistance with deep context and collaborative discovery.
Practical approaches that work
Based on what I’ve observed working well:
For managers who want to model AI adoption while coding:
Pair with team members first; let them drive while you observe how AI suggestions interact with your codebase
Start with non-critical paths like tooling, documentation, or test improvements
Share your learning process openly, including when AI suggestions don’t fit your context
Ask more questions than you answer, even when AI provides quick solutions
For teams navigating this transition:
Establish explicit norms about AI-assisted contributions
Create safe spaces to discuss the impact of hierarchy on code review
Consider “context pairing” where team members share system history before managers contribute
Celebrate both rapid solutions AND deliberate problem-solving
For organizations thinking systemically:
Recognize that AI amplifies existing dynamics (both positive and negative)
Invest in psychological safety before accelerating contribution velocity
Consider the difference between modeling tool use and modeling learning
Measure team engagement and growth, not just contribution metrics
… or do what I do: choose personal projects outside of your daily workstream
The nuanced reality
Here’s what I think we’re learning: the intersection of management, AI assistance, and code contribution isn’t inherently good or bad. It’s a powerful combination that can accelerate teams or accidentally constrain them. The determining factors seem to be:
Context depth: How well does the contributor understand the system’s history and design decisions?
Power awareness: Is the manager conscious of how their position affects team dynamics, and how do they manage that?
Learning orientation: Is the focus on solving problems or building capability?
Team maturity: Can the team effectively push back when needed?
Some teams thrive with actively coding managers using AI. These tend to be teams with strong psychological safety, clear ownership boundaries, and managers who approach coding as learners rather than heroes. Other teams struggle, particularly when the manager’s contributions come without context or when the team lacks the safety to challenge ideas from authority.
Moving forward thoughtfully
The opportunity here isn’t to keep managers from coding or from using AI tools; quite the opposite. It’s to be intentional about how these powerful combinations interact with team dynamics. If we want teams that grow, innovate, and maintain ownership of their work, we need to think carefully about context, power, and learning.
The most successful teams I’ve observed aren’t the ones where managers contribute the most code or use AI most effectively. They’re the ones where everyone (including managers) focuses on building collective capability while respecting the deep, contextual knowledge that no AI can replace. Judgement can’t be fully delegated to AI.
What patterns are you seeing in your organization? How are you navigating the intersection of hierarchy, AI assistance, and team dynamics? I’m particularly interested in cases where you’ve found unexpected benefits or creative approaches to making this work.

The architectural drift concept is something I've been calling "prompt drift" and it's the same underlying problem seen from a different angle. You're looking at it through the manager-team dynamic, which adds the authority amplification layer. I've been seeing it play out across entire teams regardless of seniority.
Every developer has their own AI habits. Different prompts, different tools, different coding patterns coming out. The codebase slowly fragments in ways no linter catches because the inconsistencies are conceptual, not syntactic. Coming to this a bit late but the problem has only gotten worse.
I covered a potential fix using OpenCode agents to encode team conventions directly into the agent config https://blog.devgenius.io/your-senior-devs-dont-scale-your-opencode-agents-can-e2ecf2d04548 so that regardless of who's driving the AI, the output stays consistent with the team's patterns. It doesn't solve the psychological safety issue you raised but it does reduce the surface area for accidental drift.