Beyond vibe coding — how to build production-grade systems block by block, track every decision, and turn AI Studio into a disciplined engineering environment.
This section is the practical walkthrough this guide builds toward. Everything in sections 01–12 is reference material. This section is the sequence. Follow it start to finish for any new project — including converting an existing static site into a real application with auth, a CMS layer, and gated content.
The example project used here is realistic and complete: converting a static HTML site on Firebase into a structured application with a markdown-driven public layer, a gated notes/intelligence section on Cloud Run, and Lemon Squeezy payment integration. This is not hypothetical — it is the exact architecture described in the product plan and the exact project you will execute on danielflugger.com.
Complete the introductory GDG codelab first. You need to know how to create a project, use the Build tab, and do a one-click Cloud Run deploy. This walkthrough picks up exactly where the codelab ends.
Open a text editor, not AI Studio. Create a folder for your project. Create three files: DASHBOARD.md, EVOLUTION.md, and Design.md. Use the minimal templates from Sections 03, 04, and 05. Fill in only what you know right now — leave everything else as [DECIDE]. Commit these three files to a new GitHub repo before writing a single line of code.
New GitHub repo. Clone locally. Create DASHBOARD.md, EVOLUTION.md, Design.md from the templates in this guide. Commit: chore: project scaffolding — dashboards and evolution log
One paragraph: what you are building, who it is for, what "done" means, what it explicitly does NOT do. This takes 10 minutes and saves 10 hours. Do not skip it. Log it as your first EVOLUTION.md entry.
Before touching AI Studio, sketch your data model. For a site with gated content: User (id, lemon_squeezy_license, tier), Post (slug, title, body_md, tier, published_at), LogEntry (slug, title, body_md, published_at — public). Draw the table/model diagram on paper or in a note. You will give this to AI Studio as an image in step 5.
Go to your GCP Console. Copy your Project ID into DASHBOARD.md. Choose your region. List only the services you have already provisioned or will provision in the first block. Leave everything else blank with [ADD WHEN PROVISIONED]. Commit.
Open AI Studio. Create a new project. Connect it to your GitHub repo. Paste your System Instructions from the template in Section 06. Upload your three dashboard files as context. Then — and only then — write your first prompt.
# Upload: DASHBOARD.md, EVOLUTION.md, your hand-drawn data model photo
"Read DASHBOARD.md and EVOLUTION.md first.
Generate the Pydantic v2 data models for this project only —
no routing, no database connections, no UI. Three models:
User, Post, LogEntry.
Rules:
- Strict Pydantic v2 validation
- No optional fields unless explicitly noted
- Add a docstring to every model explaining its purpose
- Put all three models in models.py only
- Do not create any other files"
When the models are generated, review them carefully before accepting. Run the verification prompt from Section 10: ask the model to list every assumption it made. Adjust. When the models look correct, create a checkpoint, then commit to GitHub: feat(data): Pydantic v2 models — User, Post, LogEntry [arch-001]. Log the decision in EVOLUTION.md.
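A sketch of what the accepted models.py might look like. Field names follow the hand-drawn data model from the earlier step; the strict-validation config is one plausible reading of the rules in the prompt, not verified AI Studio output:

```python
from datetime import datetime

from pydantic import BaseModel, ConfigDict


class User(BaseModel):
    """A purchaser, identified by their Lemon Squeezy license."""
    model_config = ConfigDict(strict=True, extra="forbid")
    id: str
    lemon_squeezy_license: str
    tier: str


class Post(BaseModel):
    """A gated notes/intelligence post served from Cloud Run."""
    model_config = ConfigDict(strict=True, extra="forbid")
    slug: str
    title: str
    body_md: str
    tier: str
    published_at: datetime


class LogEntry(BaseModel):
    """A public log entry, rendered to static HTML on Firebase."""
    model_config = ConfigDict(strict=True, extra="forbid")
    slug: str
    title: str
    body_md: str
    published_at: datetime
```

Strict mode plus `extra="forbid"` makes the models reject type coercion and unexpected fields, which is exactly the kind of constraint worth verifying when you review the generated code.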
Pick the single most important data operation in your project and build only that. For a site with gated content, it is license validation — the function that checks whether a Lemon Squeezy license key is valid before serving gated content. Everything else waits.
"Add a license validation function in auth.py only.
The function: validate_license(license_key: str) -> bool
- Makes a POST request to the Lemon Squeezy license validation API
- Returns True if the license is active and valid
- Returns False for all other states
- Logs the API call at INFO level with the response status (not the key itself)
- Includes a dry_run parameter that returns True without calling the API
- Raises LicenseValidationError (define it in auth.py) on network failure
Do not create a route, a UI, or any other file.
Add the Lemon Squeezy API endpoint to config.py — not hardcoded."
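A hedged sketch of the auth.py this prompt might produce. The Lemon Squeezy endpoint URL and the `valid` response field are assumptions to verify against the Lemon Squeezy API docs, and in the real file the endpoint constant belongs in config.py, as the prompt requires:

```python
import logging

import requests

log = logging.getLogger(__name__)

# Assumed endpoint; in the real project this constant lives in config.py.
LEMON_SQUEEZY_VALIDATE_URL = "https://api.lemonsqueezy.com/v1/licenses/validate"


class LicenseValidationError(Exception):
    """Raised when the license API cannot be reached."""


def validate_license(license_key: str, dry_run: bool = False) -> bool:
    """Return True only if the Lemon Squeezy license is active and valid."""
    if dry_run:
        # Test path: never hits the network, never alters live data.
        return True
    try:
        resp = requests.post(
            LEMON_SQUEEZY_VALIDATE_URL,
            data={"license_key": license_key},
            timeout=10,
        )
    except requests.RequestException as exc:
        raise LicenseValidationError(str(exc)) from exc
    # Log the status only, never the key itself.
    log.info("license validation response: %s", resp.status_code)
    if resp.status_code != 200:
        return False
    # The "valid" field is an assumption about the response shape.
    return bool(resp.json().get("valid"))
```

The dry_run path is what makes the next block testable without live API calls.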
After every prompt that generates code, run the verification prompt before accepting: "List every file you modified, every new dependency added, and every assumption made." Make this reflexive. It takes 10 seconds and catches the silent changes that become debugging sessions later.
You now have models and a license validation function. The next block connects them: a FastAPI route that accepts a license key, validates it, and returns the appropriate content tier. One route, one file.
"Add one FastAPI route in routes/content.py only.
Route: GET /content/{slug}
- Accepts header: X-License-Key: str
- Calls validate_license() from auth.py
- If valid: queries the database for the Post with matching slug
(use a stub function get_post_by_slug(slug) in db.py —
return a hardcoded Post object for now, real DB comes next block)
- If invalid: return 403 with message 'Valid license required'
- If post not found: return 404
- Return Post as JSON using the Pydantic model from models.py
Do not add the database connection yet — use the stub.
Do not add authentication middleware — just the header check for now."
The stub pattern is the key technique for newcomers. You build the route shape with fake data, verify the logic is correct, then replace the stub with real database calls in the next block. This keeps each block independently testable and prevents the common failure mode of building everything at once and not knowing where the error is.
Only now do you add the Cloud SQL connection. One prompt, one file, replacing one stub.
"Replace the get_post_by_slug stub in db.py with a real Cloud SQL (PostgreSQL) query using asyncpg.
Requirements from DASHBOARD.md:
- Cloud SQL instance: [from your dashboard]
- Use Cloud SQL Python Connector for authentication
- Do not hardcode connection strings — read from environment variables
- IAM permission needed: roles/cloudsql.client — confirm this is in DASHBOARD.md
- Connection should use connection pooling (min 2, max 10)
- Add a get_db_pool() function that initializes once at startup
- The get_post_by_slug function: returns Post | None
Update DASHBOARD.md § Active Services to add Cloud SQL.
Do not touch any other file."
The public log (markdown files → static HTML → Firebase) is a separate layer from the gated content. It requires a GitHub Action, not Cloud Run. Prompt AI Studio to generate the GitHub Action workflow file.
"Create .github/workflows/deploy-log.yml only. This workflow:
- Triggers on push to main when any file in /log/*.md changes
- Uses a Python script (scripts/build_log.py) to convert all /log/*.md files to /public/log/*.html
- Each HTML file uses a minimal template that matches the style tokens in DASHBOARD.md § Colors and § Typography
- Deploys the /public directory to Firebase Hosting
- Requires secret: FIREBASE_SERVICE_ACCOUNT (already configured)
Also create scripts/build_log.py with the conversion logic.
Use python-markdown library.
Keep the HTML template minimal — nav, article, footer. No JavaScript required."
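A sketch of the scripts/build_log.py conversion logic, assuming a title-from-first-heading convention and a deliberately bare HTML template (the real template would pull the style tokens from DASHBOARD.md):

```python
from pathlib import Path

import markdown  # python-markdown library

TEMPLATE = """<!doctype html>
<html><head><meta charset="utf-8"><title>{title}</title></head>
<body><nav><a href="/log/">Log</a></nav>
<article>{body}</article>
<footer></footer></body></html>"""


def render_entry(md_text: str) -> str:
    """Convert one markdown log entry into a minimal HTML page."""
    # Assumption: the first line of each entry is its "# Title" heading.
    first_line = md_text.strip().splitlines()[0]
    title = first_line.lstrip("# ").strip() or "Log entry"
    body = markdown.markdown(md_text)
    return TEMPLATE.format(title=title, body=body)


def build_all(src: Path = Path("log"), dst: Path = Path("public/log")) -> int:
    """Render every /log/*.md into /public/log/*.html; return the count."""
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for md_file in sorted(src.glob("*.md")):
        html = render_entry(md_file.read_text(encoding="utf-8"))
        (dst / f"{md_file.stem}.html").write_text(html, encoding="utf-8")
        count += 1
    return count


if __name__ == "__main__":
    print(f"Built {build_all()} log entries")
```

The GitHub Action then just runs this script and deploys /public to Firebase Hosting.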
At this point you have: data models, license validation, a content route with real database, and a public log pipeline. Deploy the FastAPI app to Cloud Run using the production flags from Section 11 — not the one-click deploy. Test every endpoint against the real Cloud Run URL, not the AI Studio preview. The latency, error handling, and IAM behavior will be different. Fix what breaks. Log it in EVOLUTION.md.
Only when the logic layer is deployed and tested do you build the UI. In AI Studio's Build tab, your System Instructions already have the style guide. Your first UI prompt is simple:
"Build a minimal HTML/CSS/JS frontend for the gated content layer.
It needs three views only:
1. /log — public list of log entries, links to individual entries
2. /notes — license key input form + content view (calls GET /content/{slug})
3. /notes/{slug} — single gated post view
Use only the design tokens in DASHBOARD.md § Colors and § Typography.
No JavaScript frameworks — vanilla JS for the license key form only.
No new colors, fonts, or spacing values.
Start with view 1 only. Do not build views 2 and 3 yet."
If you're building anything with streaming responses from Gemini, test it on Cloud Run — not the preview. Prototype with non-streaming, switch to streaming at deploy time. Log this in EVOLUTION.md § What Was Tried and Abandoned the first time it happens.
Your System Instructions say to ask first, but the model will still sometimes add a new import silently. The verification prompt catches this. Run it after every significant code generation — not just when you're suspicious.
If you make a quick fix directly in GitHub, it will not sync back to AI Studio. Make all code changes inside AI Studio, commit out to GitHub. If you need to restore from GitHub, export the code and start a new AI Studio project.
The AI Studio preview is warm and fast. Your Cloud Run deployment with min-instances=0 will have a cold start on the first request after inactivity. For interactive tools, set min-instances=1 and log the cost implication in EVOLUTION.md.
The AI Studio prototype assumes ambient permissions; Cloud Run requires explicit IAM. List the exact roles your service account needs in DASHBOARD.md before deploying. The most common missing role: roles/cloudsql.client for Cloud SQL connections.
If the model cannot fix something in two rounds, it will not fix it in five. Revert to the last checkpoint. Document the failure in EVOLUTION.md § What Was Tried and Abandoned. Approach from a different angle. This is the discipline that separates fast builders from people who spend a day arguing with a broken codebase.
The introductory codelab covers the mechanics. This guide starts where it ends. If you've watched the pre-recorded session, you know how to get an app running. The question this guide answers is different: how do you use AI Studio as a disciplined engineering environment when you don't know what the project is going to become?
Google AI Studio's differentiating advantages over competing platforms aren't obvious from the homepage. Privacy is one — your apps are private by default, unlike most free-tier vibe coding tools. Native Gemini access is another — no API key required for the free tier models, and the Gemini Developer API is notably faster and more code-precise than earlier versions, meaning the generated code tends to be leaner and closer to what you actually need.
But the real advantage for serious builders is the combination of AI Studio's rapid iteration loop with a disciplined project infrastructure you build around it. Most guides teach you how to use the tool. This one teaches you how to engineer with it.
Technical leads, GCP builders, and founders who have shipped real systems and want to use AI Studio as a production prototyping environment — not just a demo tool. Intermediate to advanced. You should have completed the introductory codelab before reading this.
The single most useful thing you can do before writing your first prompt to the AI Studio coding agent is create two reference artifacts: a Style Guide Dashboard and a GCP Stack Reference Card. Both live as files in your project. Both get referenced in your System Instructions. Both make every subsequent prompt more precise.
This pattern solves the most common AI Studio failure mode: the model generating inconsistent UI, wrong service names, incorrect IAM patterns, or drift from your design system across a long build session. When the model can read your reference card, it stops hallucinating your stack.
Two sections: Style Guide and GCP Stack. Keep each section to one screen — the goal is fast reference, not documentation.
One line: "Before any UI or infrastructure change, check DASHBOARD.md for design tokens and approved service names."
The dashboard is a living document, not a spec. When you add a service or change a color, update the card. Commit the update to GitHub.
One file means one read operation for the model. When the coding agent needs to understand both your UI constraints and your infrastructure, having it in one place reduces the chance it checks one and ignores the other. Keep it short enough to fit in a single context window read.
The style guide section of your dashboard is a minimal design token reference. Not a full design system — just the set of values the AI needs to generate consistent UI without asking you. Colors, typography, spacing scale, component patterns you've approved. Here's what a production-ready style dashboard looks like:
The "Do Not Use" section is underrated. Explicitly telling the model what not to generate is as important as telling it what to generate. The AI will default to whatever is most common in its training data — usually Inter, rounded corners, and purple gradients. Block it explicitly.
When you don't know what the project is going to be yet, start with a minimal dashboard and add tokens as you make decisions. Each new decision becomes a committed line in the dashboard. The dashboard evolves with the project. By the time you're shipping, it's a complete record of every design decision you made and why — which is exactly what a new team member or a future version of you needs.
```markdown
# STYLE GUIDE — [Project Name]
# Update this file when design decisions are made. Commit every change.

## Colors
--background: #ffffff
--surface:    #f8f7f4
--ink:        #111827
--muted:      #6b7280
--accent:     [DECIDE: blue #2563eb OR green #059669]
--rule:       #e5e7eb

## Typography
Body:     DM Sans 300/400/500
Headings: Playfair Display 400/600
Code:     DM Mono 400/500

## Spacing
Base unit:         4px
Component padding: 16px
Section spacing:   64px

## DECISIONS PENDING
# - Accent color (above)
# - Mobile nav pattern
# - Table or card layout for data views
```
Add this to your System Instructions: "The style guide is in DASHBOARD.md under ## Colors and ## Typography. Never introduce a color, font, or spacing value not listed there. When you need something not in the style guide, ask before adding it."
The second section of your dashboard is a quick-reference card for your Google Cloud architecture. Its purpose is not documentation — it's to prevent the model from inventing service names, using deprecated APIs, or connecting services in ways that don't match your actual IAM setup.
Keep it to the services actually in use. Do not list aspirational services. The model reads this as ground truth.
AI Studio will sometimes substitute model names it thinks are equivalent. Add to System Instructions: "Never change the model name specified in DASHBOARD.md. If you think a different model is better, flag it — do not change it silently." Model substitutions break cost estimates, latency assumptions, and capability contracts.
List the minimum IAM roles your service account has. When the coding agent knows it can't use owner-level permissions, it writes code that works within the actual constraint rather than code that requires you to elevate permissions at deploy time. This is the single biggest source of friction between AI Studio prototypes and production deployments — the prototype assumed permissions the prod environment doesn't have.
```markdown
## GCP Stack
Project: your-project-id
Region:  us-central1

### Active Services
# Only list what is actually provisioned
Compute:  Cloud Run (serverless)
AI:       Vertex AI Gemini — gemini-2.0-flash-001
Storage:  [ADD WHEN PROVISIONED]
Database: [ADD WHEN PROVISIONED]

### IAM — Service Account Permissions
roles/run.invoker
roles/aiplatform.user
# Add roles here as they are granted — never assume owner

### Do Not Use
# List services/patterns to avoid for this project
```
The project dashboard is your reference. The Project Evolution MD is your log. It answers the question that every developer eventually asks: why did I make that decision three weeks ago?
Most documentation captures what was built. The Project Evolution MD captures what was decided, what was tried and abandoned, what changed direction and why. It is a running record of the project's thinking, not just its state. This is the artifact that makes a project recoverable — by you, by a future collaborator, by an AI coding agent that needs to understand the context of the current codebase.
Five sections, maintained in order. Never delete old entries — append only. The log is the proof of process.
```markdown
# PROJECT EVOLUTION — [Project Name]
# Append-only. Never delete. Each entry: date · decision · reasoning · alternatives considered.

## 01. Project Brief
What we are building: [One paragraph. Written at project start. Do not update — see ## Pivots]
Who it is for: [Specific audience — not "developers", but "GCP engineers at Series B companies"]
Definition of done: [What does "shipped" mean for this project?]
Non-goals: [Explicitly what this does NOT do — critical for keeping scope]

## 02. Architecture Decisions
# Format: [DATE] DECISION — reasoning — what we considered and rejected
[2026-03-15] Database: Cloud SQL over Firestore
  Reasoning: Need PostGIS spatial queries. Firestore has no spatial index.
  Alternatives considered: Firestore (rejected: no ST_DWithin), AlloyDB (rejected: cost at this scale)
  Cost implication: ~$30/month vs $0 Firestore free tier. Accepted.
[2026-03-18] Auth: Lemon Squeezy JWT over custom session tokens
  Reasoning: No time to build auth infrastructure. LS license tokens are sufficient for V1.
  Revisit at: 500 users or when enterprise SSO is required.

## 03. What Was Tried and Abandoned
# This section is as important as ## 02. Prevents re-trying failed approaches.
[2026-03-16] Tried: Streaming responses from Vertex AI in AI Studio Build tab
  Result: Streaming breaks the AI Studio preview iframe. Works fine in Cloud Run.
  Do not attempt streaming in AI Studio Build — prototype with non-streaming, switch at deploy.

## 04. Pivots
# When the brief changes, document it here. What changed, why, what it affects.
[2026-03-20] Pivot: From B2C dashboard to B2B API
  Original brief: consumer-facing dashboard. New direction: headless API with documentation site.
  Trigger: First three user conversations revealed buyers want to integrate, not use a UI.
  Affects: Remove all UI components. Focus on OpenAPI spec and rate limiting.

## 05. Open Questions
# Unresolved decisions. Move to ## 02 when decided.
[ ] Should the rate limiting be per-user or per-organization?
[ ] Does the Cloud Run instance need GPU for inference or CPU-only?
[ ] What is the right batch size for BigQuery ingestion at 10k records/hour?
```
The evolution MD fails when it becomes a chore. Keep entries short — three to five lines per decision. Date every entry. Use the exact format above so it's parseable. The discipline is in the habit, not the length. A 30-second entry after every significant decision compounds into an invaluable project record.
Give the coding agent access to EVOLUTION.md and add to System Instructions: "Before suggesting an approach, check ## What Was Tried and Abandoned to verify we have not already tried it. If it is listed there, do not suggest it again." This alone eliminates a significant class of repeated mistakes in long build sessions.
Your Git commit history and the Evolution MD are complementary. The commit log shows what changed. The Evolution MD shows why. Together they make the project's reasoning fully recoverable. When you export your project from AI Studio to GitHub, every milestone commit should correspond to an entry in the Evolution MD.
| Git Commit Message | Corresponding Evolution MD Entry |
|---|---|
| feat: add PostGIS spatial query layer | [DATE] Database: Cloud SQL over Firestore — reasoning logged |
| refactor: remove streaming, use batch | [DATE] Tried: streaming in AI Studio — failed, documented |
| chore: pivot to headless API | [DATE] Pivot: B2C dashboard → B2B API — trigger and impact logged |
| fix: correct IAM roles for Cloud Run invoker | [DATE] Auth pattern: service account permissions — roles listed in DASHBOARD.md |
The System Instructions field in AI Studio is the most underused enterprise feature on the platform. Most developers leave it blank or write a single sentence. Treat it instead as the onboarding documentation you would write for a new junior engineer joining your project. They need to know how you work, what the stack is, how to communicate, and what they absolutely must not do.
```markdown
## Project Context
You are a senior software engineer on [Project Name].
Stack: [FastAPI / Cloud Run / BigQuery / Vertex AI Gemini].
Read DASHBOARD.md before any code change. It contains style tokens and approved GCP services.
Read EVOLUTION.md before suggesting an approach — check ## What Was Tried and Abandoned first.

## Code Standards
- One file per feature. Never create a monolithic app.tsx or main.py.
- Add docstrings to every function: what it does, inputs, outputs, raises.
- Start every file with a 3–5 line comment: what this feature does and its use cases.
- Maintain Design.md at project root — update it when any feature changes.
- Group all configurable values (model names, bucket names, endpoints) in config.py or config.ts.
- Log all function calls at INFO level with parameters. Log all LLM calls with model, prompt, config, and output (strip inline binary data).
- Always create a dry-run / test path that does not alter live data.

## Model Names (DO NOT CHANGE)
- Gemini: gemini-2.0-flash-001
- Embeddings: text-embedding-004
If you believe a different model is better, flag it with a comment — do not substitute silently.

## Communication Protocol
- If a request is ambiguous, ask one clarifying question before writing code.
- If you are about to add a new dependency not in requirements.txt, ask first.
- If you cannot implement something within the IAM permissions in DASHBOARD.md, say so.
- Do not add features not explicitly requested. Scope creep starts here.
- When you make a tradeoff, document it inline with a # TRADEOFF: comment.

## What Not To Do
- Do not use App Engine, Cloud Functions, or Firestore (see DASHBOARD.md § Do Not Use).
- Do not introduce colors or fonts not in the style guide.
- Do not require owner-level IAM permissions.
- Do not use rounded-xl, box-shadow on cards, or purple/gradient accents.
- Do not change working code without being asked. Only fix what is broken.
```
The "What Not To Do" section is the highest-value addition most developers skip. Every line there represents a mistake that either happened in a previous session or a failure mode you've seen in production. It's a boundary layer, not a limitation.
Treat System Instructions as a living document. When the model makes a mistake you have to correct repeatedly, add a line preventing it. When you join a new project phase, update the stack section. Version-control your System Instructions by pasting a copy into your EVOLUTION.md at each major project milestone.
The most common mistake in AI Studio — even by experienced developers — is prompting for the complete application before the architecture is clear. The result is a monolith that works in the demo and breaks under the first real requirement. The block-by-block approach inverts this.
"The discipline is not in the prompt. It is in what you refuse to prompt for until the previous block is solid."
Before any UI or API, define the data model. Ask AI Studio to generate the Pydantic model or TypeScript interface only. No routing, no UI, no database connections. Verify the shape is correct for your use case.
Write one function that does one thing against one GCP service. A BigQuery query that returns rows. A Vertex AI call that returns an embedding. Test it in isolation. Commit it. Document the result in EVOLUTION.md.
Connect two blocks. The data model to the database write. The embedding to the vector search. Not three connections at once — one. When it works, commit. When it doesn't, revert (not debug endlessly).
Build the UI as the final layer over working logic. An AI Studio prototype with a beautiful UI and broken business logic has negative value — it looks done when it isn't. Verify the logic layer first, always.
When a block is solid, deploy it to Cloud Run as its own endpoint. Get a real URL. Test against real infrastructure. The latency and error profile of Cloud Run is different from the AI Studio preview — find out early.
AI Studio gives you checkpoints and GitHub sync. The rule: if the model cannot fix a problem in two rounds, revert to the last checkpoint. Do not negotiate with a broken codebase. Copy the broken approach into EVOLUTION.md under ## What Was Tried and Abandoned, note why it failed, and approach from a different direction. This discipline saves more time than any prompting technique.
AI Studio is the right tool for rapid block prototyping. Switch to Claude Code or a local environment when: you need fine-grained IAM configuration, complex Cloud Build pipelines, multi-repo coordination, or anything requiring a local filesystem. The tools are complementary — AI Studio for the iteration loop, Claude Code for the integration and deployment layer.
The Gemini Developer API (not the Vertex AI Gemini endpoint) is meaningfully different in practice: faster response times, more precise code generation with less boilerplate, and a cleaner Python client. For prototyping in AI Studio and for lightweight production workloads, it is often the better choice. For enterprise workloads requiring VPC, CMEK, or Vertex AI feature store integration, stay on the Vertex AI endpoint.
| Scenario | Use Gemini Developer API | Use Vertex AI Gemini |
|---|---|---|
| AI Studio prototyping | ✓ Native, no config | — |
| Fast iteration, light workloads | ✓ Lower latency | — |
| Enterprise VPC / private network | — | ✓ Required |
| CMEK / data residency compliance | — | ✓ Required |
| Vertex AI Feature Store integration | — | ✓ Required |
| RAG with Vertex AI Search | — | ✓ Required |
| Multi-model pipeline (mix providers) | ✓ Cleaner client | — |
One of the most underused features in AI Studio: the Get Code button exports your exact prompt configuration — model, system instructions, temperature, top-p — as executable Python, JavaScript, or cURL. This is not a template. It is the production code for that call, ready to drop into a FastAPI endpoint or Cloud Function.
```python
import logging

import google.generativeai as genai

from config import GEMINI_MODEL, SYSTEM_INSTRUCTION  # centralized config

log = logging.getLogger(__name__)


def generate_response(user_prompt: str, temperature: float = 0.2) -> str:
    """
    Generate a response using the Gemini Developer API.

    Args:
        user_prompt: The user's input string.
        temperature: Controls output randomness. Lower = more precise.

    Returns:
        str: Model response text.

    Raises:
        google.api_core.exceptions.GoogleAPICallError: On API failure.
    """
    log.info("generate_response called",
             extra={"prompt_len": len(user_prompt), "temp": temperature})
    model = genai.GenerativeModel(
        model_name=GEMINI_MODEL,  # from config — never hardcode
        system_instruction=SYSTEM_INSTRUCTION,
        generation_config=genai.GenerationConfig(
            temperature=temperature,
            top_p=0.95,
            max_output_tokens=2048,
        ),
    )
    response = model.generate_content(user_prompt)
    log.info("generate_response complete",
             extra={"output_len": len(response.text)})
    return response.text
```
Note the pattern: centralized config, docstring on every function, logging of all LLM calls, no hardcoded model names. These are not stylistic preferences — they are the difference between a prototype that works once and a system that is maintainable.
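For reference, a minimal config.py matching that pattern; the names mirror the snippet above and the defaults are illustrative:

```python
# Hypothetical config.py: every tunable value lives in one place,
# read from the environment where it is deployment-specific.
import os

# Model name comes from DASHBOARD.md; override per environment via env var.
GEMINI_MODEL = os.environ.get("GEMINI_MODEL", "gemini-2.0-flash-001")

# Kept here so prompts and endpoint code never hardcode the instruction.
SYSTEM_INSTRUCTION = os.environ.get(
    "SYSTEM_INSTRUCTION",
    "You are a senior software engineer on this project.",
)
```

When the model name changes, only this file and DASHBOARD.md need an edit.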
AI Studio's built-in checkpoints and GitHub sync are complementary tools. Understanding when to use each determines whether your project history is a recoverable asset or a tangle of partial saves.
Use checkpoints aggressively — before any significant prompt, before a refactor, before testing a major integration. They are free and fast. The rule: if you would be annoyed to lose the last 20 minutes of work, create a checkpoint first. Note the one critical limitation: checkpoints capture code state, not database state. Do not revert to a checkpoint that predates a schema migration or database write.
Commit to GitHub at meaningful milestones, not continuously. The goal is a clean commit history where each commit corresponds to a working block. Suggested commit points:
AI Studio → GitHub sync is currently one-way. You can commit from AI Studio to GitHub, but changes made directly in GitHub will not sync back to AI Studio. Treat GitHub as your milestone archive and AI Studio as your active working environment. If you need to restore from a GitHub commit, export the code and create a new AI Studio project from it.
```text
# Format: type(scope): description [EVOLUTION.md ref]

feat(data):      add PostGIS spatial query layer [arch-001]
feat(api):       BigQuery ingestion endpoint, dry-run included [arch-002]
refactor(ai):    switch to batch inference, remove streaming [abandon-001]
fix(iam):        correct Cloud Run invoker permissions [arch-003]
pivot(scope):    remove UI layer, headless API only [pivot-001]
docs(dashboard): update GCP stack — add Cloud SQL [arch-004]

# The [ref] maps to your EVOLUTION.md entries
# Anyone reading the git log can find the reasoning in the MD file
```
The following patterns apply specifically to AI Studio's Build tab coding agent — not the chat interface. They are ordered by impact.
State constraints before the request. The model uses constraints to scope its output — constraints stated after the request are often partially ignored.
❌ Weak: "Add a user authentication system using Firebase Auth."

✓ Constraint-first: "Add user authentication. Constraints: no Firebase (see DASHBOARD.md § Do Not Use). Use Cloud Run + Lemon Squeezy JWT validation only. Touch only auth.py and middleware.py — no other files. Do not add new dependencies without asking."
For UI feedback, take a screenshot, draw directly on it (AI Studio's Annotate App feature), and combine it with a one-line text description. Visual annotation dramatically reduces the clarification loop for layout and spacing changes. Write the text description anyway — the combination of annotated image and text outperforms either alone.
After any significant code generation, before accepting the changes, run this prompt:
"Before I accept these changes: list every file you modified, every new dependency you added, and every assumption you made about the existing codebase. Flag anything that is a # TRADEOFF or a # TODO."
This surfaces silent changes — files the model edited that you didn't ask it to touch, dependencies added without asking, assumptions made about data shape. Catching these before acceptance saves significant debugging time.
Issue this at project start, in System Instructions, and repeat it whenever the codebase shows signs of monolith growth. One file per feature is the rule. When the coding agent generates a 600-line main.py, it has violated this rule and the file needs to be split before continuing. Monoliths in AI Studio are almost impossible to maintain across sessions — the model loses track of what's in the file and begins generating contradictory code.
Upload architecture diagrams, data model ERDs, or whiteboard photos alongside your prompt. The model interprets these accurately and uses them to validate its generated code against your intended structure. A photo of a hand-drawn ER diagram is a legitimate and effective input — don't wait until you have a polished diagram.
| Task Type | Temperature | Reasoning |
|---|---|---|
| SQL / API / data models | 0.1 – 0.2 | Precision required, no creative variation |
| Business logic / algorithms | 0.2 – 0.4 | Mostly deterministic with some flexibility |
| UI layout / component design | 0.5 – 0.7 | Multiple valid solutions, creativity appropriate |
| Documentation / naming | 0.6 – 0.8 | Variety in phrasing is useful |
| Brainstorming / architecture | 0.8 – 1.0 | Exploring solution space, not committing |
AI Studio's one-click Cloud Run deploy is fast and useful for sharing prototypes. For anything approaching production, you need to understand what it's doing and where it makes decisions you need to override.
```shell
# Production deploy flags:
# - dedicated service account with minimum permissions
# - explicit resource limits — never use defaults in production
# - authentication required — drop --no-allow-unauthenticated only if intentional
# - concurrency matched to your application's thread safety
# - secrets via Secret Manager — never hardcoded in the image
gcloud run deploy [SERVICE_NAME] \
  --image gcr.io/[PROJECT_ID]/[IMAGE] \
  --region us-central1 \
  --platform managed \
  --service-account [SA_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
  --memory 512Mi \
  --cpu 1 \
  --min-instances 0 \
  --max-instances 10 \
  --no-allow-unauthenticated \
  --concurrency 80 \
  --set-env-vars "PROJECT_ID=[PROJECT_ID],REGION=us-central1" \
  --set-secrets "GEMINI_API_KEY=gemini-api-key:latest"
```
The AI Studio preview runs in a warm, low-latency environment. Cold start on a Cloud Run instance with min-instances=0 adds 2–8 seconds to the first request after inactivity. For interactive applications this is unacceptable. Test your application's cold start latency early — set min-instances=1 for latency-sensitive workloads and document the cost implication in EVOLUTION.md.
Every tool in this guide — the dashboard, the evolution MD, the system instructions, the block-by-block methodology — serves a single underlying principle: your build process should be as verifiable as your output.
The EVOLUTION.md commit history is not just project management. It is a provenance record. Every decision logged, every abandoned approach documented, every pivot explained — this is the evidence layer that makes your work defensible, transferable, and recoverable. In a world where AI can generate the code, the thing that carries your name is the reasoning underneath it.
The infrastructure described in this guide is deliberately portable. Markdown files. Git commits. Standard GCP services. No platform lock-in, no proprietary format. The dashboards and logs work identically whether you're using AI Studio, Claude Code, a local terminal, or any combination. That portability is intentional — your project knowledge should be sovereign, not trapped in a tool's export format.
The VERA framework formalizes this verification approach with cryptographic session certificates and a formal maturity model for AI governance. If the verification layer resonates, read the framework and The Proof Economy essay for the full context.
"You do not build the proof layer to protect something you have stopped caring about. You build it, and in the building, you remember why you cared in the first place."
— from The Proof Economy