Scaling an org with agentic engineering
A few months ago, several people on our engineering team told me they were afraid for their jobs. Not in a vague, abstract way — they meant it concretely. They saw what the new generation of agentic coding tools could do and wondered where that left them.
I don’t think the fear is irrational. But I do think it’s pointed at the wrong thing. The question isn’t whether AI can write code — it’s who learns to direct it effectively, and what infrastructure makes that direction safe. That’s what we’re building toward at Sunday.
What follows is our current thinking. It’s a work in progress, not a finished playbook.
The shift: prompting as programming
The most important thing to understand about agentic coding tools is that the primary skill they require is not writing code — it’s writing precise, well-structured instructions. Prompting is the new programming, at least at the orchestration layer.
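To make "precise, well-structured instructions" concrete, here is a hypothetical contrast between a vague request and one an agent can act on reliably. Neither is a real Sunday prompt; the feature, paths, and numbers are invented for illustration.

```python
# Two ways to ask an agent for the same change. Both prompts are
# hypothetical; the endpoint, paths, and limits are invented.

VAGUE = "Add rate limiting to the API."

PRECISE = """\
Add per-key rate limiting to the public REST API.
- Limit: 100 requests per minute per API key; return HTTP 429 above it.
- Implement as middleware under src/api/middleware/; do not modify handlers.
- Include a Retry-After header and a unit test covering the 429 path.
"""
```

The second prompt reads like a spec, and that's the point: the craft moves from syntax to specification.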
That shift is a genuine leveling of the playing field. Someone with deep domain knowledge but limited coding experience can now express a working solution, provided they can describe it clearly. The distance between “I know what we need” and “I can ship what we need” is compressing.
But compression doesn’t mean elimination. There are things that require real engineering judgment — understanding how an agent’s output interacts with a production system, knowing when a generated solution is subtly wrong, maintaining a codebase that agents didn’t architect. Those skills still matter. They matter more, not less, because the blast radius of a confident, fast-moving mistake is larger when the code is being written autonomously.
So the goal isn’t to make engineers less necessary. It’s to make everyone more capable of contributing to the pipeline from idea to shipped product — while investing seriously in the infrastructure that makes autonomous contribution safe.
How we’re rolling this out
We’re thinking about this in four phases, running roughly from engineers directing agents hands-on today toward guarded access for less technical contributors across the org. We’re actively in the first two; the later phases are directional.
What makes this work: skills and guardrails
The phase model only holds together if we invest in two things in parallel.
The first is skills — codified, agreed-upon prompting patterns for common tasks. At Sunday, a “skill” is essentially a document that tells an agent how to do a specific class of work correctly: what inputs to expect, what constraints to respect, what the output should look like. When we agree on a skill as a team, we’re doing something more valuable than writing a prompt — we’re encoding institutional knowledge about how we build software. That knowledge compounds.
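To make that concrete, here is a minimal sketch of a skill’s contract expressed as a data structure. The shape and the example are assumptions for illustration, not Sunday’s actual skill format, but they show the three parts a skill pins down.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    """One codified prompting pattern: what the agent gets,
    what it must respect, and what 'done' looks like."""
    name: str
    inputs: list[str]        # what the agent should expect to be given
    constraints: list[str]   # hard rules the output must respect
    output_shape: str        # the form a correct result takes

# Hypothetical example: a skill for writing a database migration.
add_migration = Skill(
    name="add-a-database-migration",
    inputs=[
        "the target table and schema change, in plain language",
        "the current schema file",
    ],
    constraints=[
        "never drop or rename a column in the same migration that adds one",
        "every migration ships with a tested rollback",
    ],
    output_shape="one migration file plus its rollback, under migrations/",
)
```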
Skills aren’t just for agents. They’re the canonical artifact that replaces scattered interpretations of how we work. Before skills, everyone had their own mental model of how to do a given task — some accurate, some outdated, most somewhere in between. A skill forces alignment. It’s the difference between “here’s how I’d prompt for this” and “here’s how Sunday does this.” That’s valuable independent of AI. The agent utility is almost a bonus.
The second is guardrails. These are the technical and process constraints that make it safe for less technical contributors to direct agents. Automated tests that catch bad output. Review workflows that route agent contributions through the right eyes. Clear boundaries on what an agent is permitted to touch.
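As a sketch of the “clear boundaries” guardrail: a CI check that fails when an agent-authored change touches files outside an allowed area. The paths, and the convention of running this only on agent-labeled changes, are assumptions for illustration.

```python
import subprocess
import sys

# Paths an agent-authored change is allowed to touch. These boundaries
# are hypothetical; the real allowlist would be owned by the team.
ALLOWED_PREFIXES = ("src/features/", "tests/", "docs/")

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed on this branch relative to the base branch."""
    result = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

def main() -> int:
    violations = [
        path for path in changed_files()
        if not path.startswith(ALLOWED_PREFIXES)
    ]
    if violations:
        print("Agent-authored change touches restricted paths:")
        for path in violations:
            print(f"  {path}")
        return 1  # fail the job and hand off to human review
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same pattern generalizes: tests that assert properties of agent output, and review routing that puts the riskiest diffs in front of the most experienced eyes.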
The guardrails aren’t restrictions on what’s possible. They’re what makes it possible to expand access without accumulating hidden risk.
This is the answer to the job-fear question: engineers who invest in building this infrastructure become the people who make it safe for the whole org to move faster. That’s not a diminished role. It’s a more leveraged one.
What we don’t know yet
How do you maintain code quality and architectural coherence when contributors don’t have a mental model of the full system? What does code review look like when the reviewer didn’t write any of the code being reviewed? How do you build organizational trust in agent output without requiring every contributor to understand how agents work under the hood? And — the question I genuinely don’t have an answer to — where does the responsibility live when an autonomous agent ships something that breaks things?
Phase 4 is the version of this where those questions are answered well enough to act on. We’re not there. But working through phases 1 and 2 seriously is how we get there.