Slide 1: Title
Speaker notes: Introduce yourself briefly. Research faculty at Georgia Tech, background in NLP and distributed systems. Today's talk: how AI fits into the engineering lifecycle you already believe in — not as a replacement for judgment, but as the mechanism that enforces the discipline you've always practiced.
Slide 2: The Lifecycle Hasn't Changed
Speaker notes: Start with common ground. Everyone in this room believes in this lifecycle. The challenge has never been knowing the right process — it's been consistently following it. Under deadline pressure, corners get cut. Architecture decisions get made implicitly. Tests get written "later." The lifecycle is aspirational for most teams most of the time.
Slide 3: The Risk — Vibe Coding
Speaker notes: The danger of vibe coding isn't bad syntax. AI syntax is fine. The danger is skipping the decisions that matter — architecture, decomposition, test strategy. When you let a model choose your architecture and error handling, you haven't saved time. You've accumulated technical debt at machine speed. You just don't know it yet because the code compiles.
Slide 4: The Thesis
Speaker notes: This is the core claim. AI doesn't lower the bar. It encodes the bar you already believe in and enforces it mechanically, every time, without fatigue or deadline pressure. The engineers who benefit most from AI aren't the ones who prompt hardest — they're the ones who have written down what "good" looks like and refuse to accept anything less.
Slide 5: Decide — Architecture, Design & Planning
Speaker notes: These three phases — architecture, design, planning — are where human judgment matters most and where corners get cut under pressure. AI makes each one easier, not harder. It can explore a codebase, draft an ADR, decompose an epic, and propose a plan in minutes. But the decisions — which approach, which tradeoffs, what's in scope — those are yours. The plan approval gate is critical: the developer explicitly says "proceed" before any code is written. This is the separation of judgment from execution in practice.
Slide 6: Build — Implementation & TDD
Speaker notes: TDD isn't optional in this workflow. The failing test is the specification — it encodes what the code is supposed to do. AI writes the test first, then the implementation, then refactors. This matters because AI-generated code is a statistical approximation of what you asked for. The test is the invariant that tells you whether the approximation is correct. Without the test, you're trusting the approximation; with it, you're verifying against an encoded invariant. Tests bridge the gap between human intent and AI output.
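If the audience wants to see the test-first step concretely, a minimal sketch, assuming a Python project with pytest; the module and function names here are hypothetical:

```python
# tests/test_discounts.py -- written and agreed on before any implementation exists.
# The failing test IS the specification: it encodes what the code is supposed to do.
import pytest

from pricing.discounts import apply_discount  # hypothetical module; does not exist yet


def test_discount_reduces_price_by_percentage():
    # 100.00 at a 10% discount should come out to 90.00
    assert apply_discount(price=100.00, percent=10) == pytest.approx(90.00)


def test_discount_rejects_negative_percentage():
    # Invalid input is part of the specification, not an afterthought
    with pytest.raises(ValueError):
        apply_discount(price=100.00, percent=-5)
```

Only after these fail does the implementation get written; the tests then stay in place as the invariant the refactor step is checked against.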
Slide 7: Ship — Verification & Delivery
Speaker notes: This is what "non-negotiable" means in practice. The same quality bar applies whether the code was written by hand, by a junior dev, or by AI. Format and lint auto-fix. Type check and test failures go back to the AI to fix and retry. The commit is blocked until all four pass. No exceptions. Then delivery closes the loop: conventional commits make the log readable, and PRs link back through the full traceability chain. Six months later, the archaeology is trivial.
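For anyone who asks what the gate looks like mechanically, a minimal sketch of a commit-gate script, assuming a Python project with ruff, mypy, and pytest on the path (substitute your own formatter, linter, type checker, and test runner):

```python
#!/usr/bin/env python3
"""Commit gate: format, lint, type check, tests. The commit is blocked unless all four pass."""
import subprocess
import sys

CHECKS = [
    ("format", ["ruff", "format", "."]),        # auto-fixes formatting in place
    ("lint", ["ruff", "check", "--fix", "."]),  # auto-fixes what it safely can
    ("type check", ["mypy", "src"]),            # failures go back to the AI to fix and retry
    ("tests", ["pytest", "-q"]),                # failures go back to the AI to fix and retry
]


def main() -> int:
    for name, cmd in CHECKS:
        print(f"== {name} ==")
        if subprocess.run(cmd).returncode != 0:
            print(f"BLOCKED: {name} failed; commit not allowed", file=sys.stderr)
            return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired in as a pre-commit hook or CI step, this is the "no exceptions" part: nothing lands until the script exits zero.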
Slide 8: The Full Lifecycle
Speaker notes: This is the full pipeline. Note the feedback loop from verification back to implementation — failures become new tasks, not blockers. Most features verify on the first pass. Complex changes sometimes take 2-3 iterations. The system handles both. Also note: you enter wherever makes sense. Not everything needs an ADR. A small bug fix can start at planning. The lifecycle is a directed graph, not a rigid waterfall.
Slide 10: Encoding Your Standards
Speaker notes: This is the most generalizable takeaway. You don't need a custom system. You need a file that describes what "good" looks like in your codebase. Language conventions, architecture rules, quality thresholds. Put it where the AI reads it. The same file works across different AI tools — because the leverage is in the standards, not the tool.
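If people ask what that file looks like, a minimal sketch, assuming a Python codebase; the filename, location, and specific rules are placeholders, not a prescription:

```markdown
# CONVENTIONS.md (hypothetical name; put it wherever your AI tooling reads project context)

## Language
- Python 3.12, type hints on all public functions
- Prefer early returns over nested conditionals

## Architecture
- Repository pattern for all data access; no raw SQL in handlers
- New modules follow the layout of the reference module in src/orders/

## Quality bar
- Every change ships with tests
- Format, lint, type check, and tests must pass before commit
```

The same content works regardless of which assistant reads it, which is the point: the leverage is in the standards, not the tool.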
Slide 11: Reference Code — Solving the Cold Start
Speaker notes: This is the practical complement to the standards file. Standards are rules — "use early returns," "repository pattern for data access." Reference code is examples — here's what that actually looks like in our stack. The cold start problem is real. On a brand new project, AI has nothing to pattern-match against and the output is generic. The fix is simple: pull in a repo that represents how you build, or seed the project with one well-structured module. AI extrapolates from examples far better than it interprets abstract rules. The combination — standards for constraints, reference code for patterns — is what produces consistent, high-quality output from day one.
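And if someone asks what seeding the project with "one well-structured module" means, a minimal sketch of a reference module in the style those standards describe (repository pattern, early returns); all names here are hypothetical:

```python
# src/orders/repository.py -- a small, well-structured module the AI can pattern-match against.
from dataclasses import dataclass


@dataclass(frozen=True)
class Order:
    order_id: str
    total_cents: int


class OrderRepository:
    """Repository pattern: all data access for orders goes through this class."""

    def __init__(self) -> None:
        self._orders: dict[str, Order] = {}

    def add(self, order: Order) -> None:
        # Early-return style: validate and bail out first, happy path last.
        if order.order_id in self._orders:
            raise ValueError(f"duplicate order id: {order.order_id}")
        self._orders[order.order_id] = order

    def get(self, order_id: str) -> Order | None:
        return self._orders.get(order_id)
```

The code itself is unremarkable, and that's the point: one concrete example of "how we build" gives the model something to extrapolate from that abstract rules alone don't.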
Slide 12: AI Amplifies What's Already There
Speaker notes: This is the slide for skeptics. AI is an amplifier, not a corrective. If your engineering culture is disciplined, AI makes it faster. If it's sloppy, AI makes it sloppier — at machine speed. The question for every team isn't "should we use AI?" It's "do we have the discipline to use it well?" If the answer is no, fix the discipline first. Then the AI becomes leverage instead of liability.
Slide 13: Actionable Takeaways
Speaker notes: Make this practical. Everyone should leave with something they can do immediately. Items 1 and 2 are the minimum viable version — write down what "good" looks like, and give AI an example of what it looks like in practice. The rest follows naturally. The speed stat is real — 400 lines of well-structured, well-tested implementation in 20 minutes — but only because the standards, reference code, and gates were already in place.
Slide 14: Close
Speaker notes: Close with the thesis restated. The lifecycle hasn't changed — it's the same phases we've always agreed on. AI is the mechanism that makes it enforceable, consistently, without fatigue or shortcuts. The discipline is yours. The enforcement is automated. Point people to the blog and course for deeper dives. Open for questions.