AI-assisted building workflow
A practical, repeatable AI workflow I use to plan, build, debug, and ship — plus generate assets (audio/images) — without turning everything into an over-engineered ‘AI project’.
Personal workflow / tooling · 2025–present · Builder (PM mindset + hands-on dev)
Goals
- Move from ‘ideas’ to ‘shippable increments’ faster (with less cognitive load)
- Keep quality high: readable code, consistent routing/SEO, safe staging behaviour
- Use AI for asset creation (audio/images) where it makes sense — especially for the tower defence game
- Capture decisions as I go so future-me (or an interviewer) can follow the story
- Generate case-study-friendly output continuously instead of overthinking it
Constraints
- Solo delivery and limited time windows
- Avoid AI-driven spaghetti: changes must be reviewable and incremental
- No made-up metrics — keep outcomes honest and qualitative
- Tooling needs to be repeatable (not a one-off ‘magic prompt’ situation)
Problem
I wanted to use AI in a way that actually helps me ship: faster iteration, clearer decisions, better documentation, and less time spent staring at the same bug. The risk (and it’s a real one) is letting AI turn a project into a tangled mess: inconsistent code, unclear decisions, and ‘it works but I don’t know why’. So the goal became: keep the speed, keep the standards.
Approach
1) Start with a tiny, boring brief (so the scope can't explode)
- I write a short 'what good looks like' paragraph.
- I list hard constraints (e.g., no trailing slashes, staging noindex, no analytics leakage).
- I define the acceptance checklist.
2) Use ChatGPT for clarity and sequencing (PM brain on)
- Turn the brief into phases: inventory → guardrails → implementation → polish → verification.
- Identify risks early (SEO regressions, broken routes, consent/analytics behaviour, staging mistakes).
- Produce prompts that are precise enough for Codex to execute.
3) Use Codex for implementation passes (hands-on dev brain on)
- Codex reads the repo, reuses existing patterns, and applies targeted changes.
- Work happens in small commits/patches: one concern per pass.
- After each pass: build/typecheck + sanity checks.
4) Use AI for assets (tower defence)
- ElevenLabs: quick sound variants (shots, UI clicks, impacts, ambience) to get the game feeling 'real' early.
- Image generation (where applicable): placeholders and concepts for UI/background/tiles so I can test readability, mood, and layout before custom art.
5) Use Whispr Flow for speaking drafts (get ideas out of my head)
- I talk through the idea, and Whispr turns it into a draft.
- Then I tighten it into a case-study narrative: problem → approach → results → next steps.
6) Treat documentation as output, not admin
- Every milestone becomes a short case study section.
- I'm intentionally 'shipping content locally' first, then refining and publishing later.
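To make a hard constraint like 'no trailing slashes' reviewable rather than re-checked by hand after every Codex pass, I prefer encoding it once as a tiny helper. A minimal TypeScript sketch; the function name and shape are illustrative, not lifted from the actual repo:

```typescript
// Hypothetical helper enforcing the "no trailing slashes" routing constraint.
// Canonicalising paths in one place keeps AI-generated route changes consistent.

function canonicalPath(path: string): string {
  // The root path stays as-is; everywhere else, trailing slashes are stripped.
  if (path === "/") return path;
  return path.replace(/\/+$/, "");
}

console.log(canonicalPath("/blog/")); // "/blog"
console.log(canonicalPath("/"));      // "/"
```

A helper like this also doubles as an acceptance check: the verification pass can assert every generated route equals its canonical form.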
Implementation highlights
- A repeatable prompt pattern: brief → constraints → phased plan → Codex implementation → verification checklist
- Environment gating as a habit (staging stays noindex and doesn’t leak analytics)
- AI-assisted refactors that are incremental and reviewable (no big-bang rewrites)
- Audio/image placeholders early to validate gameplay feel and UI clarity for the tower defence project
- Voice-to-draft flow (Whispr) to reduce friction when writing case studies
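The environment-gating habit boils down to two small checks. A sketch in TypeScript, assuming an APP_ENV-style environment value; the names are mine, not the project's:

```typescript
// Illustrative environment gating: staging must stay noindex and must not
// send analytics. The env values and function names are assumptions.

type AppEnv = "production" | "staging" | "development";

// Anything that isn't production is kept out of search indexes.
function robotsMeta(env: AppEnv): string {
  return env === "production" ? "index, follow" : "noindex, nofollow";
}

// Analytics loads only in production, and only with consent, so staging
// traffic never leaks into real data.
function analyticsEnabled(env: AppEnv, hasConsent: boolean): boolean {
  return env === "production" && hasConsent;
}

console.log(robotsMeta("staging"));             // "noindex, nofollow"
console.log(analyticsEnabled("staging", true)); // false
```

Keeping both decisions behind one gate means a Codex pass can't accidentally re-enable indexing or analytics on staging without the change being obvious in review.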
Challenges
- Resisting the urge to let AI ‘keep going’ past the brief (scope creep in disguise)
- Making sure I still understand and own the decisions (AI can’t be the architect)
- Keeping generated assets consistent with the game’s style direction (placeholders can drift)
- Knowing when to stop polishing and just ship the draft locally
Results
- More consistent momentum: less time stuck deciding, more time shipping
- Cleaner, more maintainable outcomes because work happens in small reviewable passes
- Faster prototyping for tower defence via early audio/visual placeholders
- A growing library of case studies that I can refine over time instead of starting from zero
What I'd do next
- Standardise a simple ‘prompt pack’ template I can reuse across projects
- Create a lightweight internal checklist for: SEO, routing, staging, consent, analytics, accessibility
- Turn the tower defence build into its own case study (learning journey + technical decisions)
- Add a small ‘AI usage policy’ note for myself (what AI can touch vs what stays manual)
How AI helped
This case study is basically the meta one: ChatGPT helps me think, Codex helps me build, ElevenLabs helps me make prototypes feel real, and Whispr Flow helps me get words on the page quickly. The key is that I’m using AI to remove friction — not to outsource ownership. I keep the scope tight, I review everything, and I treat each output as a draft I can refine over time.