Developer Field Guide  ·  8 Concepts

From Code
to Cloud

Everything you need before your first production deployment — version control, containers, networking, and Google Cloud.

Git & GitHub · Good Commits · Docker · Networking · Google Cloud (GCP) · CI/CD · Cloud Build · Pre-Deploy Checklist
01

Why this matters
before you touch Deploy

Writing code is only the beginning. Getting it running reliably — for real users, on real servers — requires a short but critical sequence.

💻 Your Laptop (it works here)
→ 🗂️ Git / GitHub (version control)
→ 📦 Docker (containerize)
→ 🌐 Networking (ports, DNS)
→ ⚙️ CI/CD (auto-checks)
→ ☁️ GCP (live for users)
🔒 Nothing is ever lost
Every change is tracked and reversible. Travel back to any point in history.
🤝 Teams don't collide
Multiple developers work simultaneously without overwriting each other.
📦 "Works on my machine" — solved
Containers pack your app with everything it needs to run identically everywhere.
🚀 Deploy with confidence
Automated pipelines catch bugs on your branch, not in production.
02

Git & GitHub
in plain English

Git is the tool on your computer. GitHub is the website that stores your code online. They are not the same thing.

| Term | Think of it as… | What it actually does |
| --- | --- | --- |
| repository | A project folder with superpowers | Tracks every change ever made to every file inside it |
| commit | A save point with a sticky note | Snapshots your changes and records what you did and why |
| branch | A parallel universe to experiment in | Work on features without touching the "official" code |
| push | Upload to GitHub | Sends your local commits to GitHub so others can see them |
| pull | Download the latest version | Gets teammates' newest commits from GitHub to your machine |
| pull request | A proposal + code review | Asks the team to review & approve your branch before merging |
| merge | Accept and combine changes | Folds your approved branch back into the main codebase |
🚫 Rule #1 — Never commit directly to main
main is your live, deployed app. Always create a new branch, get it reviewed, then merge. Committing directly to main is like editing a document thousands of people are reading in real time.
Step 1 of 4
Create a repository and a branch
Always start by creating a new branch. Think of it as a safe sandbox โ€” your changes live here until reviewed and approved.
bash
# Create a new project and initialise Git
$ git init my-app && cd my-app
Initialized empty Git repository in ~/my-app/.git/

# Create and switch to a new feature branch
$ git checkout -b feature/add-login
Switched to a new branch 'feature/add-login'
Step 2 of 4
Stage your changes and commit
Staging (git add) is like putting items in a box. Committing (git commit) is sealing the box and labelling it clearly.
bash
# Check what files have changed
$ git status
modified:   app.py
new file:   templates/login.html

# Stage all changes
$ git add .

# Commit with a clear, purposeful message
$ git commit -m "Add login form with email validation"
✓ [feature/add-login 3f2a1c9] Add login form with email validation
Step 3 of 4
Push your branch and open a Pull Request
Pushing uploads your branch to GitHub. A Pull Request is your formal proposal for the team to review before the code goes anywhere near production.
bash
$ git push origin feature/add-login
✓ Branch pushed to GitHub.

# GitHub shows you a direct link to open the PR:
remote: Create a pull request:
remote: https://github.com/your-org/my-app/pull/new/feature/add-login
Step 4 of 4
Get approved and merge
✅ A good PR description answers three questions
What changed?  ·  Why?  ·  How can someone test it?

You don't need to understand every line of code in a PR. Focus on the description, the approach, and whether it makes sense. That's a valuable review.
Watch: Git & GitHub for Beginners
03

Good commits
tell a story

A commit message is a note to your future self — and every developer who comes after you. Write accordingly.

โŒ Useless
fix stuff
update
asdfgh
changes
wip
oops fixed it
✅ Useful
Add JWT authentication to the API
Fix crash when cart is empty at checkout
Remove deprecated Stripe v2 API calls
Update README with GCP setup steps
Increase login timeout from 30s to 5min
Refactor DB queries to reduce N+1 problem
๐Ÿ“ The formula
Start with a present-tense verb: Add ยท Fix ยท Remove ยท Update ยท Refactor ยท Improve ยท Rename ยท Move. Say what, then optionally why. Stay under 72 characters.
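The formula above is easy to check mechanically. A minimal sketch of a local guard, assuming the verb list and 72-character limit from the formula (these are conventions from this guide, not rules Git itself enforces):

```python
# Hypothetical checker mirroring the commit-message formula above
ALLOWED_VERBS = {"Add", "Fix", "Remove", "Update", "Refactor", "Improve", "Rename", "Move"}

def check_commit_message(subject: str) -> list[str]:
    """Return a list of problems with a commit subject line (empty list = OK)."""
    problems = []
    first_word = subject.split(" ")[0] if subject else ""
    if first_word not in ALLOWED_VERBS:
        problems.append(f"start with an imperative verb, not {first_word!r}")
    if len(subject) > 72:
        problems.append(f"subject is {len(subject)} chars (keep it under 72)")
    return problems
```

For example, `check_commit_message("fix stuff")` flags the lowercase verb, while "Add login form with email validation" passes cleanly.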

🚫 Things to never commit to Git
API keys & passwords · .env files · large binary files · compiled artifacts (node_modules/, __pycache__/, dist/).
Create a .gitignore file the moment you initialise a repository.
.gitignore (Python)
# Never commit these
.env
__pycache__/
*.pyc
venv/
.DS_Store
*.log
Watch: Writing Good Commit Messages
📚 Additional Resources
04

Docker: your app
in a self-contained box

A container bundles your code, runtime, libraries, and config into one portable unit — running identically on every machine and every cloud.

📁 Your Code + 📚 Libraries + ⚙️ Runtime + 🔧 Config = 📦 Container (runs anywhere)
| Term | Analogy | Meaning |
| --- | --- | --- |
| Dockerfile | A recipe | Instructions for building an image |
| Image | A pre-baked cake | The built, immutable artifact — ready to run |
| Container | The cake being served | A running instance of an image |
| Artifact Registry | A secure bakery shelf | GCP's private storage for your Docker images |
Step 1 of 4
Write your Dockerfile
Dockerfile
# Start from an official base image
FROM python:3.11-slim

# Set working directory inside the container
WORKDIR /app

# Copy dependency list first (Docker caches this layer)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your application code
COPY . .

# Port the app listens on
EXPOSE 8080

# Command to start the app
CMD ["python", "app.py"]
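The CMD above assumes an app.py in your project. A minimal hypothetical sketch of such a file, honouring the exposed port (Cloud Run and similar hosts inject the port to listen on via the PORT environment variable; 8080 is the local fallback matching the EXPOSE line):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_port(default: int = 8080) -> int:
    # Cloud Run passes the port via $PORT; fall back to the
    # Dockerfile's EXPOSEd 8080 when running locally.
    return int(os.environ.get("PORT", default))

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

def main():
    # Bind to 0.0.0.0 so traffic from outside the container is accepted
    HTTPServer(("0.0.0.0", get_port()), HealthHandler).serve_forever()
```

Binding to 0.0.0.0 (not 127.0.0.1) matters inside containers: the loopback address is only reachable from inside the container itself.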
Step 2 of 4
Build the image
bash
$ docker build -t my-app:v1.0.0 .
[1/5] FROM python:3.11-slim ...
✓ Successfully built 3e8a2c9f1b2d
✓ Successfully tagged my-app:v1.0.0
⚠️ Never tag as :latest in production
Tag with a version number or Git commit hash. :latest makes it impossible to track which version is deployed.
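One way to make versioned tags habitual is to derive them from the commit hash in your build scripts. A small sketch (the 12-character prefix is a common convention for readability, not a Docker requirement):

```python
def image_tag(image: str, commit_sha: str) -> str:
    """Build an immutable image tag from a Git commit hash (never :latest)."""
    if not commit_sha:
        raise ValueError("refusing to tag without a commit hash")
    # A 12-char prefix is short enough to read, long enough to be unique
    return f"{image}:{commit_sha[:12]}"
```

For example, `image_tag("my-app", "3f2a1c9e77ab41d0...")` yields `my-app:3f2a1c9e77ab`, which maps one-to-one back to the commit that built it.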
Step 3 of 4
Run and test it locally
bash
# Map port 8080 on your laptop → port 8080 inside container
$ docker run -p 8080:8080 my-app:v1.0.0

# Visit http://localhost:8080 to verify

# View logs from a running container
$ docker logs $(docker ps -q)
Step 4 of 4
Push to GCP Artifact Registry
bash
# Authenticate Docker with GCP
$ gcloud auth configure-docker asia-south1-docker.pkg.dev

# Tag for Artifact Registry
$ docker tag my-app:v1.0.0 \
  asia-south1-docker.pkg.dev/my-project/my-repo/my-app:v1.0.0

# Push
$ docker push asia-south1-docker.pkg.dev/my-project/my-repo/my-app:v1.0.0
✓ Image pushed to Artifact Registry.
Watch: Docker Tutorial for Beginners
📚 Additional Resources
04b

Docker Compose:
running multi-container apps

Most real apps aren't a single container — they're a web server, an API, and a database running together. Docker Compose defines and starts all of them with one command.

💡 The core idea
Instead of running docker run three times and manually wiring containers together, you describe your entire app stack in a single docker-compose.yml file. Compose handles the networking automatically.
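From inside the app, that automatic wiring just looks like a connection string whose hostname is a service name. A hedged sketch of reading it (the variable name matches the compose convention in this chapter; the default URL and credentials are purely illustrative):

```python
import os
from urllib.parse import urlparse

def database_config(default: str = "postgresql://postgres:dev@db:5432/myapp") -> dict:
    """Read DATABASE_URL and split it into connection parts.

    On Compose's private network the hostname is simply the service
    name ("db"); no IP addresses or manual links are needed.
    """
    url = urlparse(os.environ.get("DATABASE_URL", default))
    return {
        "host": url.hostname,          # "db" = the Compose service name
        "port": url.port,              # 5432
        "dbname": url.path.lstrip("/"),
        "user": url.username,
    }
```

The same code runs unchanged in production, where DATABASE_URL points at Cloud SQL instead of a local container.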
🌐 Frontend (:3000)
↓
⚙️ Backend API (:5000)
↓
🗄️ Database (:5432)

All defined in: docker-compose.yml

The config file
Anatomy of a docker-compose.yml
Each entry under services is a container. Compose wires them together on a shared private network automatically — they reference each other by service name.
docker-compose.yml
services:

  # Your application
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      # host "db" is the service name below; Compose resolves it
      - DATABASE_URL=postgresql://postgres:use-secret-manager-in-prod@db:5432/myapp
    depends_on:
      - db

  # The database — no Dockerfile needed, uses a public image
  db:
    image: postgres:15
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_PASSWORD=use-secret-manager-in-prod
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
Local development
The daily Compose workflow
Three commands cover 90% of your day-to-day usage. Compose rebuilds only what changed, so subsequent starts are fast.
bash
# Start all containers (add -d to run in background)
$ docker compose up --build
✓ Container web started on :5000
✓ Container db started on :5432

# Stop everything
$ docker compose down

# View logs from all containers at once
$ docker compose logs -f

# Run a one-off command inside a container (e.g. DB migration)
$ docker compose run web python manage.py migrate
In deployments
Compose in production โ€” what to know
Compose is excellent for local development. In production on GCP, the same concepts apply but the tooling shifts.
| Context | Use Compose? | GCP equivalent |
| --- | --- | --- |
| Local development | ✅ Yes | — |
| Single-server staging | ⚠️ Possible | Compute Engine + Compose |
| Production (scaled) | ❌ No | Cloud Run or GKE |
โš ๏ธ Don't put database passwords in docker-compose.yml
Use a .env file locally (and add it to .gitignore). In production, always use GCP Secret Manager โ€” never hardcode credentials in any config file.
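A .env file is just KEY=VALUE lines. Libraries like python-dotenv handle this for you; if you want to see what they do, a minimal parser sketch (handles blank lines, comments, and simple quoting only, nothing fancier):

```python
import os

def load_env_file(path: str = ".env") -> dict[str, str]:
    """Parse KEY=VALUE lines into a dict and export them to os.environ."""
    values: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(values)
    return values
```

Because the file never leaves your machine (it is in .gitignore), the secrets in it never reach GitHub or your container image.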
Watch: Docker Compose Tutorial
05

Networking basics
every developer must know

Your app runs in the cloud, but users reach it through a web of networks, ports, and addresses. Here's the minimum you need to understand.

User (Browser · Mobile App · API Client)
↓
DNS (myapp.com → 34.102.12.45)
↓
Load Balancer (HTTPS :443 · TLS termination · routes traffic)
↓
Firewall / VPC (allow :80, :443 only · block all others · private network)
↓
App (container :8080 · Cloud Run)
| Concept | What it is | Why it matters |
| --- | --- | --- |
| IP Address | A server's numeric address, e.g. 34.102.12.45 | How machines find each other on a network |
| Port | A numbered "door" on a server, e.g. :8080 | One server runs many services — ports tell traffic where to go |
| DNS | Translates myapp.com into an IP address | Humans use names; machines use numbers. DNS bridges them. |
| HTTP / HTTPS | The protocol browsers use to request pages | HTTPS is encrypted. Always use HTTPS in production. |
| Firewall | Rules that allow or block traffic to your server | Prevents unauthorised access — only open ports you need |
| VPC | A private network inside GCP for your team only | Your databases and internal services are hidden from the public internet |
| Load Balancer | Distributes traffic across multiple servers | Prevents any one server from being overwhelmed; enables scaling |
💡 Common port numbers to memorise
:80 HTTP  ·  :443 HTTPS  ·  :5432 PostgreSQL  ·  :3306 MySQL  ·  :6379 Redis  ·  :8080 app default

Your Cloud Run container listens on :8080. GCP's load balancer handles the public :443 and forwards traffic to it automatically.
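The division of labour between names and ports is visible in any URL. A small sketch that splits one up (the default ports per scheme come from the URL standard, which is why browsers let you omit :443):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def destination(url: str) -> tuple[str, int]:
    """Return (hostname, port): DNS resolves the name, the port picks the service."""
    parts = urlsplit(url)
    # If the URL names no port, the scheme implies one
    port = parts.port or DEFAULT_PORTS[parts.scheme]
    return parts.hostname, port
```

So `destination("https://myapp.com/login")` gives `("myapp.com", 443)`, while a local dev URL like `http://localhost:8080` gives `("localhost", 8080)`.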
โš ๏ธ The most common networking mistake
Opening your database port (:5432) to the entire internet (0.0.0.0/0) in firewall rules. Databases must never be publicly accessible โ€” keep them inside your VPC.
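This mistake is easy to catch mechanically before it reaches the cloud. A sketch that audits a list of firewall rules (the rule shape here is a simplified stand-in for illustration, not the real GCP firewall API):

```python
DB_PORTS = {5432, 3306, 6379}  # PostgreSQL, MySQL, Redis

def risky_rules(rules: list[dict]) -> list[dict]:
    """Flag rules that open a database port to the whole internet."""
    return [
        r for r in rules
        # 0.0.0.0/0 means "everyone"; combined with a DB port it is a leak
        if "0.0.0.0/0" in r["source_ranges"] and DB_PORTS & set(r["ports"])
    ]
```

A rule allowing :443 from everywhere passes (that's your public HTTPS traffic); a rule allowing :5432 from everywhere gets flagged, while the same port restricted to your VPC range does not.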
Watch: Computer Networking Fundamentals
06

Google Cloud
service by service

GCP has hundreds of services. Here are the eight you will actually use deploying your first application.

| Service | Category | What it does | Start here if… |
| --- | --- | --- | --- |
| Cloud Run | Compute | Runs your Docker container without managing servers. Scales to zero when idle. | Building a web app or API ✓ |
| GKE | Compute | Kubernetes cluster for many containers at scale. | Multiple services, complex scaling |
| Artifact Registry | Storage | Private storage for your Docker images inside GCP. | Any Docker-based deploy |
| Cloud SQL | Database | Managed PostgreSQL / MySQL. No server admin required. | App needs a relational DB |
| Cloud Build | CI/CD | Runs build, test, and deploy pipelines on every push. | Automating GCP deployments |
| Cloud Load Balancing | Networking | Routes HTTPS traffic to containers. Handles TLS certificates. | Custom domains, high availability |
| Secret Manager | Security | Stores API keys and passwords securely in GCP. | Any app with secrets — all apps |
| Cloud Logging | Observability | Aggregates logs from all services in one place. | Debugging production issues |
✅ Recommended starter stack
Cloud Run + Artifact Registry + Cloud SQL + Secret Manager + Cloud Build.
This five-service combination covers 90% of what a new production app needs.

Three environments — a non-negotiable rule

๐Ÿง‘โ€๐Ÿ’ป
Development
Your local machine. Break things freely. Never touches production data.
๐Ÿงช
Staging
A mirror of production in GCP. Every change is tested here first.
๐ŸŒ
Production
Live. Real users. Only receives main merges that passed staging.
Watch: Google Cloud Platform Overview
📚 Additional Resources
06b

Code quality:
catch problems before CI does

Linters analyse your code automatically — catching bugs, enforcing style, and flagging security issues before a single line reaches your teammates or your cloud.

💡 Why lint before committing
A linting error caught on your laptop takes seconds to fix. The same error caught in a CI pipeline means a failed build, a re-push, and a wasted pipeline run. Shift the check left — run the linter locally first.
📝 Write Code (locally)
→ 🔍 Run Linter (Ruff / Biome)
→ 🔧 Fix Issues (locally)
→ ✅ Commit (clean code)
→ ⚙️ CI passes (first time)
Python
Ruff — linter and formatter in one
Ruff replaces Flake8, isort, and pyupgrade with a single tool written in Rust. It runs roughly 100x faster than the tools it replaces and is configured from pyproject.toml.
bash
# Install
$ pip install ruff

# Check for lint errors across the whole project
$ ruff check .
app.py:12:5: F401 'os' imported but unused
app.py:34:1: E501 Line too long (92 > 88)

# Auto-fix everything that can be fixed safely
$ ruff check . --fix

# Format code (replaces Black)
$ ruff format .
✓ 6 files reformatted
pyproject.toml
[tool.ruff]
line-length = 88
target-version = "py311"

[tool.ruff.lint]
# E/W = pycodestyle, F = pyflakes, I = isort, S = security
select = ["E", "F", "I", "S", "W"]
ignore = ["E501"]
โš ๏ธ Enable the S (security) ruleset
The S ruleset flags common security issues โ€” hardcoded passwords, use of eval(), unsafe deserialization. It is not enabled by default. Add "S" to your select list in pyproject.toml.
JavaScript / TypeScript
Biome — linter and formatter for JS/TS
Biome replaces ESLint and Prettier with a single tool. It is configured from biome.json and runs significantly faster than the tools it replaces.
bash
# Install as a dev dependency
$ npm install --save-dev @biomejs/biome

# Initialise config file
$ npx biome init

# Check for lint and format issues
$ npx biome check .
src/api.ts:8:3 lint/suspicious/noConsole ━━━━
✖ Don't use console.log in production code

# Auto-fix safe issues
$ npx biome check --write .
✓ 12 files fixed
biome.json
{
  "$schema": "https://biomejs.dev/schemas/1.9.0/schema.json",
  "linter": {
    "enabled": true,
    "rules": { "recommended": true }
  },
  "formatter": {
    "enabled": true,
    "indentStyle": "space",
    "lineWidth": 100
  }
}
Automation
Running linters automatically in CI
Adding linting to your GitHub Actions workflow ensures every pull request is checked before it can be merged. A PR with lint errors is blocked automatically โ€” no manual review needed to catch them.
.github/workflows/lint.yml
name: Lint
on: [pull_request]

jobs:
  lint-python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.11' }
      - name: Ruff check
        run: pip install ruff && ruff check .

  lint-js:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '20' }
      - name: Biome check
        run: npm ci && npx biome ci .
✅ biome ci vs biome check
Use npx biome ci . (not check) in pipelines — it exits with a non-zero code on any issue, which tells GitHub Actions to fail the build. The check command is for local use.
Watch: Ruff — Python Linting & Formatting
07

CI/CD: let the machine
check your work

Continuous Integration runs automated checks on every push. Continuous Deployment ships to GCP automatically when all checks pass. You focus on writing code.

📝 git push (you)
→ 🔬 Run Tests (pytest)
→ 🔍 Lint & Scan (ruff)
→ 📦 Docker Build (Cloud Build)
→ 🚀 Deploy (Cloud Run)

GCP's native CI/CD tool is Cloud Build. It reads a cloudbuild.yaml file in your repository and triggers automatically on every push to main.

cloudbuild.yaml
steps:

  # 1. Run tests
  - name: 'python:3.11'
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && pytest']

  # 2. Build Docker image, tagged with the Git commit hash
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - build
      - -t
      - asia-south1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA
      - .

  # 3. Push to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'asia-south1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA']

  # 4. Deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - run
      - deploy
      - my-app
      - --image=asia-south1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA
      - --region=asia-south1
      - --platform=managed
💡 Why $COMMIT_SHA matters
The image is tagged with the exact Git commit hash. This means you always know precisely which code is running in production — and can roll back to any previous commit instantly.
Watch: CI/CD Pipeline Explained
08

Before you hit
Deploy

Run through this before every deployment.

๐Ÿ“ Version Control
๐Ÿ“ฆ Docker / Container
๐ŸŒ Networking & Security
๐Ÿ” Code Quality
โ˜๏ธ GCP Deployment
๐ŸŽ‰   All checks passed โ€” you're ready to ship.
Watch: Deploying to Production
📚 Going deeper

What comes
next

You've covered the full journey from code to cloud. The next step is learning how to work effectively with AI tools as part of your development workflow.

✅ You're ready for this if…
You've worked through all eight chapters here and feel comfortable with Git, Docker, and the basics of deploying to GCP. The AI Induction tutorial picks up from here.
🎓 AI-Assisted Developer Induction
How to work with AI tools effectively, responsibly, and as part of a real development team. Covers prompting, code review, limitations, and team workflows.
Start the induction ↗