Developer Induction Program  ·  11 Modules

AI-Assisted
Full-Stack
Induction

Learn to design, build, test, and deploy production-ready systems. This program builds disciplined engineering thinking — not shortcuts.

Full-Stack Thinking · Spec Formation · API-First Design · TDD Across the Stack · PR Review · AI Collaboration · Deployment Awareness
01

What is
full-stack thinking?

Full-stack engineering is not about knowing all the tools. It is about understanding how systems fit together — and where they break.

🖥
Frontend
React / HTML
⚙️
API Layer
REST / GraphQL
🔧
Service
Business Logic
🗄️
Database
SQL / NoSQL
🏗️
Systems over features
Every feature exists within a system. Before touching code, ask: where does this fit? Who calls it? What does it call?
📋
Contracts between layers
API contracts, data models, and interface definitions are the agreements layers make with each other. Break a contract, break the system.
🔍
Debugging mindset
Follow the data: UI → network request → API handler → service → DB → response. Reproduce before you fix. Understand before you patch.
Watch: Full-Stack Thinking Explained
02

System lifecycle
overview

Every production system follows a lifecycle. Understanding each phase prevents costly rework.

01
💬
Problem
02
📝
Spec
03
⚙️
API Design
04
💻
Implement
05
🧪
Testing
06
🚀
Deploy
07
📊
Monitor

01 · Problem Definition

Before any code is written, the problem must be clearly stated. Who is the user? What is the pain? What does success look like? This step produces a Problem Statement that every subsequent decision is tested against. If you can't explain the problem in two sentences, you don't understand it well enough to build it.

Watch: System Lifecycle Walkthrough
03

Learning workflow

Seven steps from reading an unknown codebase to shipping a validated pull request. Follow this sequence — in order.

Step 1 of 7
Codebase Reverse Engineering
Before writing a single line, understand what already exists. Map the architecture: what are the entry points? What are the data models? Where is business logic concentrated? Read tests before reading implementation — they describe intended behaviour.
💡 Try with AI
"I'm reading a new codebase. Here is the directory structure and main entry file. Help me map the data flow from the API endpoint to the database layer. Point out any anti-patterns you notice."
Watch: Reading Codebases Like a Pro
Step 2 of 7
Spec Formation
A spec is not a to-do list. It is a precise description of what the system must do, under what constraints, and what constitutes success. Good specs prevent you from building the wrong thing correctly. Define inputs, outputs, edge cases, and validation rules before opening your editor.
💡 Try with AI
"Here is a business requirement: [paste requirement]. Convert this into a technical spec with: 1) Inputs and types, 2) Expected outputs, 3) Validation rules, 4) Edge cases, 5) Error states. Be strict."
Watch: Business Logic to Technical Spec
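As one possible illustration, a spec can be captured directly in code as typed inputs and named validation rules. The `CreateTaskRequest` shape below is a hypothetical example, not a required format for this program:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical spec for a "create task" feature, expressed as code.
# Inputs: title (required, 1-200 chars), due_date (optional, ISO 8601).
# Error state: ValueError naming the violated rule.

@dataclass
class CreateTaskRequest:
    title: str
    due_date: Optional[str] = None  # ISO 8601, e.g. "2025-01-31"

def validate(req: CreateTaskRequest) -> None:
    # Validation rule from the spec: title must be 1-200 chars after trimming.
    if not isinstance(req.title, str) or not 1 <= len(req.title.strip()) <= 200:
        raise ValueError("title must be 1-200 characters")

validate(CreateTaskRequest(title="Write the spec first"))  # passes
```

Writing the spec this precisely before opening your editor is what makes the edge cases visible early.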
Step 3 of 7
API-First Design
Design your API contract before writing any backend or frontend code. The API is the boundary between layers. Define endpoints, request shapes, response shapes, HTTP status codes, and error formats explicitly. API-first means consumers and producers can be developed in parallel.
💡 Try with AI
"Design a REST API for [feature]. Include: endpoint paths, HTTP methods, request body schema, response schema for success and error cases, HTTP status codes, and validation constraints. Output as OpenAPI YAML."
Watch: Designing APIs Correctly
Step 4 of 7
TDD Across the Stack
Test-Driven Development is a design discipline, not just a testing strategy. Write the failing test first — it forces you to define what "done" looks like before you start building. Then write the minimal code to make it pass. Refactor. Repeat. The cycle is Red → Green → Refactor.
💡 Try with AI
"Here is a function signature: [paste signature]. Generate comprehensive test cases including: happy path, boundary values, invalid inputs, and edge cases. Use pytest. Do not implement the function — only the tests."
Watch: TDD for Backend Systems
Step 5 of 7
Implementation
Implement to pass tests, not to impress. Prefer clarity over cleverness. Small, focused functions. Strong naming. Explicit validation. Handle errors at the boundary, not deep in the call stack. Separate concerns: the function that validates should not be the function that persists.
python · create_task
# ✅ Clear separation of concerns
def create_task(request_json):
    title = request_json.get("title")
    if not title or not isinstance(title, str):
        return {"error": "INVALID_INPUT", "field": "title"}, 400
    task_id = save_task(title.strip())
    return {"id": task_id, "status": "created"}, 201
💡 Try with AI
"Here is my implementation: [paste code]. Review it against these failing tests: [paste tests]. Identify: 1) why tests fail, 2) what the minimal fix is, 3) any other bugs you see. Do not rewrite the whole thing."
Watch: Clean Code in Practice
Step 6 of 7
AI Collaboration
AI is a collaborator, not an oracle. Use it to accelerate tasks you already understand — not to shortcut understanding. The moment you accept AI output you cannot explain or verify, you have created a liability. Always: understand before accepting, verify against tests, document what you used AI for.
💡 Try with AI
"I used AI to generate this code: [paste code]. Critique it as a senior engineer would: check for edge cases, validation gaps, naming clarity, and potential runtime errors. Explain each issue — don't just fix it."
Watch: Using AI Effectively for Development
Step 7 of 7
PR Validation Loop
A pull request is not a delivery mechanism — it is a review artifact. It must tell a complete story: what changed, why, what was tested, and what could break. Before requesting a review, review your own PR as if you are the most demanding engineer on the team.
💡 Try with AI
"Review this pull request diff as a senior backend engineer: [paste diff]. Check for: missing input validation, API contract consistency, error handling completeness, test coverage gaps, and performance red flags. Be specific about line numbers."
Watch: How to Review Code Like a Senior Engineer
04

Full-Stack
GitHub template

Every project starts with structure. This template enforces separation of concerns and forces you to document intent before coding.

repo structure
my-project/
├── frontend/          # UI layer
│   ├── src/
│   └── public/
├── backend/           # API + business logic
│   ├── api/
│   ├── services/
│   └── models/
├── tests/             # All test suites
│   ├── unit/
│   ├── integration/
│   └── e2e/
├── docs/              # Required documentation
│   ├── problem-statement.md
│   ├── system-design.md
│   ├── api-contracts.md
│   ├── data-models.md
│   ├── test-strategy.md
│   └── ai-usage-log.md
└── README.md
README must include
Problem Statement · System Design overview · API Contracts · Data Models · Test Strategy · AI Usage Log (mandatory)
Watch: How to Structure Your Repo
05

API design
in practice

An API is a promise. Define the contract precisely. A consumer should know exactly what to send and exactly what to expect — for success and failure.

contract · POST /api/tasks
# Request Body:
{
  "title": string,    # required, 1-200 chars
  "dueDate": string,  # optional, ISO 8601
  "status": string    # enum: pending|in_progress
}

# Success Response (201):
{ "id": "uuid", "status": "created" }

# Error Response (400):
{ "error": "INVALID_INPUT", "field": "title", "detail": "title is required" }
python · implementation
def create_task(request_json):
    title = request_json.get("title")
    if not title or not isinstance(title, str):
        return {"error": "INVALID_INPUT", "field": "title",
                "detail": "title is required"}, 400
    task_id = save_task(title.strip())
    return {"id": task_id, "status": "created"}, 201
Watch: Designing APIs Correctly
06

TDD in practice
full stack

Follow Red → Green → Refactor. Every function starts with a failing test. This is not optional.

Red — Write the Failing Test
Define expected behaviour first
Write a test that must fail. If it passes immediately, you don't understand the requirement. The test defines the contract before the implementation exists.
python · test · red
import pytest

def test_calculate_total_price():
    items = [{"price": 100, "quantity": 2}]
    result = calculate_total_price(items, 0.1)
    assert result == 180

def test_invalid_price_raises():
    items = [{"price": -10, "quantity": 1}]
    with pytest.raises(ValueError):
        calculate_total_price(items, 0)
Green — Make It Pass
Minimal implementation only
Write the smallest amount of code that makes the tests pass. No more. No gold-plating. This step is about correctness, not quality.
python · implementation · green
def calculate_total_price(items, discount):
    if not 0 <= discount <= 1:
        raise ValueError("Invalid discount")
    total = 0
    for item in items:
        if item["price"] < 0 or item["quantity"] <= 0:
            raise ValueError("Invalid values")
        total += item["price"] * item["quantity"]
    return int(total * (1 - discount))
Refactor — Clean Up
Improve without changing behaviour
With tests green, improve the code: extract named functions, improve variable names, reduce duplication. Tests must stay green throughout. This is where craft happens.
Refactoring rules
Never change behaviour during refactoring. Run tests after every change. If a test breaks, undo the last change immediately — don't push forward.
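Applied to `calculate_total_price` above, one possible refactor pass extracts the per-item validation into a named helper and replaces the manual accumulator with a comprehension. Behaviour is unchanged, so the Red-phase tests stay green throughout:

```python
def _validate_line_item(item):
    # Extracted helper: the rule now has a name.
    if item["price"] < 0 or item["quantity"] <= 0:
        raise ValueError("Invalid values")

def calculate_total_price(items, discount):
    if not 0 <= discount <= 1:
        raise ValueError("Invalid discount")
    for item in items:
        _validate_line_item(item)
    subtotal = sum(item["price"] * item["quantity"] for item in items)
    return int(subtotal * (1 - discount))

# Same behaviour as the Green-phase version: the tests still pass.
assert calculate_total_price([{"price": 100, "quantity": 2}], 0.1) == 180
```

This is one shape, not the only correct one; the refactoring rules above are the constraint, not the specific helpers you extract.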
Watch: TDD for Backend Systems
07

PR review
simulation

Learn to identify the most common classes of issues in real pull requests. Every issue below represents a bug that ships to production.

python · code under review
VALID_STATUSES = ["pending", "in_progress", "completed"]

def update_task_status(task_id, status):
    task = fetch_task(task_id)           # ① no null check
    task["status"] = status              # ② no validation
    update_task(task)
    return task                          # ③ no HTTP code

def process_order(order):
    total = 0
    for item in order["items"]:          # ④ KeyError risk
        total += item["price"] * item["qty"]
    if order.get("discount"):
        total -= total * order["discount"]  # ⑤ no range check
    return total
🔴
① Missing null check on fetch_task(). If task_id doesn't exist, fetch_task() returns None. Calling None["status"] raises TypeError in production.
🔴
② Status not validated against VALID_STATUSES. Any arbitrary string can be written to status. The constant is defined but never used.
🟡
③ API contract mismatch — no HTTP status code. Returning a raw dict without a status code violates the REST contract. Success should be 200, not-found 404.
🟡
④ KeyError if order["items"] is absent. Direct key access on unvalidated input. Use .get() with a default, or validate at the boundary.
🔵
⑤ Discount has no range validation. A discount of 1.5 produces a negative total. Enforce 0 ≤ discount ≤ 1 before applying.
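One way the reviewed code could be corrected, addressing all five findings. The in-memory `_TASKS` store stands in for the real persistence layer, which the original snippet never shows; it is an assumption made purely so the sketch is self-contained:

```python
VALID_STATUSES = ["pending", "in_progress", "completed"]

# Illustration-only stand-ins for the persistence layer.
_TASKS = {"t1": {"id": "t1", "status": "pending"}}

def fetch_task(task_id):
    return _TASKS.get(task_id)

def update_task(task):
    _TASKS[task["id"]] = task

def update_task_status(task_id, status):
    task = fetch_task(task_id)
    if task is None:                          # ① null check
        return {"error": "NOT_FOUND"}, 404
    if status not in VALID_STATUSES:          # ② validate against the constant
        return {"error": "INVALID_STATUS"}, 400
    task["status"] = status
    update_task(task)
    return task, 200                          # ③ explicit HTTP status code

def process_order(order):
    total = 0
    for item in order.get("items", []):       # ④ no KeyError on missing key
        total += item["price"] * item["qty"]
    discount = order.get("discount", 0)
    if not 0 <= discount <= 1:                # ⑤ range check before applying
        raise ValueError("discount out of range")
    return total - total * discount
```

Note how every fix lives at the boundary of the function, where the invalid input first arrives.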
Watch: How to Review Code Like a Senior Engineer
08

AI usage
guidelines

Using AI tools effectively requires discipline. These are engineering standards for this program — not suggestions.

✓ Do

  • Use AI to generate test cases you then review
  • Ask AI to explain code you don't understand
  • Use AI to identify edge cases in your spec
  • Ask AI to critique your own implementation
  • Use AI to generate boilerplate you verify
  • Document every AI prompt in ai-usage-log.md
  • Use AI to suggest refactoring — then judge each suggestion

✗ Don't

  • Accept output you cannot explain line by line
  • Skip tests because AI generated the implementation
  • Copy-paste without understanding what it does
  • Use AI to skip the spec-formation step
  • Assume AI output is correct without verification
  • Hide AI usage — always disclose in your PR
  • Let AI make architectural decisions for you
Watch: Using AI Effectively for Development
09

Prompt library

High-quality prompts produce high-quality output. Use these as starting points and refine them for your specific context.

"Design a REST API for [describe feature]. Include: endpoint paths, HTTP methods, request body schema, success and error response shapes, HTTP status codes, and validation rules. Output as OpenAPI 3.0 YAML."
"Review this API contract for inconsistencies: [paste contract]. Check: HTTP verb correctness, idempotency of PUT vs POST, error format consistency, missing status codes, and REST violations."
"Given this data model: [paste model]. Generate the complete REST API spec for CRUD operations including pagination, filtering, and sort parameters. Document which fields are required vs optional."
"Generate comprehensive pytest test cases for this function: [paste function signature]. Include: happy path, boundary values, null/empty inputs, invalid types, and business rule violations. Do NOT implement the function — tests only."
"Review this test suite for quality: [paste tests]. Identify: missing edge cases, weak assertions, test interdependencies, missing negative tests, and cases that don't verify the contract."
"Given this API endpoint: [paste spec]. Write integration tests that verify: correct status codes, response shape validation, validation rejection cases, and database state after mutation operations."
"Refactor this function for clarity and maintainability: [paste code]. Constraints: do not change behaviour, do not reduce test coverage, prefer small focused functions, improve naming. Explain each change."
"Identify code smells in this module: [paste module]. Reference: long methods, deep nesting, magic numbers, repeated logic, unclear naming. For each smell, explain the problem and suggest the refactoring pattern."
"I have a bug: [describe symptom]. Here is the relevant code: [paste code]. Here is the error: [paste]. Trace the execution path and identify the root cause. Do not give me the fix first — explain what is wrong and why."
"This test is failing: [paste test + error]. Here is the implementation: [paste]. Identify why the test fails without changing the test. If the test itself is wrong, explain why."
"Review this pull request diff as a strict senior engineer: [paste diff]. Check for: missing validation, API contract violations, error handling gaps, performance anti-patterns, missing tests, unclear naming. Give line-specific comments."
"Evaluate this PR description: [paste PR body]. Does it explain: what changed, why it changed, what was tested, what could break? Identify what's missing and suggest improvements."
10

Deployment
awareness

Code that can't be deployed isn't finished. Every engineer must understand the path from commit to production, even if they don't own the pipeline.

💻
commit
🔀
PR merge
🏗️
CI build
🧪
test suite
🎯
staging
🚀
production
📊
monitor
python · environment config
import os
DB_HOST = os.getenv("DB_HOST")
if not DB_HOST:
    raise RuntimeError("Missing DB config")

# Never hardcode credentials.
# Fail fast at startup — not silently at runtime.
What to monitor
Error rate and 5xx response rate · Latency (p50, p95, p99) · Structured logs with request IDs · Alerts on critical paths
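"Structured logs with request IDs" can be sketched with the standard library alone. The field names below (`request_id`, `event`, `status_code`, `latency_ms`) are assumptions for illustration, not a prescribed schema:

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api")

def log_event(request_id, event, **fields):
    # One JSON object per line: easy to parse, filter, and alert on.
    record = {"request_id": request_id, "event": event, **fields}
    line = json.dumps(record)
    logger.info(line)
    return line

request_id = str(uuid.uuid4())  # generated once per request, at the boundary
log_event(request_id, "task.created", status_code=201, latency_ms=12)
```

Carrying the same request ID through every layer is what lets you follow a single request across the UI → API → service → DB trace described in Module 01.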
Watch: From Code to Production
11

Final submission
checklist

Every item must be satisfied before your project is considered complete.

📁 Documentation
🧪 Testing
⚙️ Implementation
📋 Pull Request
Watch: Before You Submit Your Project