
Vibescoping 101: Guide to LLM-assisted product scoping

Practical guide with prompts, templates, and a full walkthrough of our AI-assisted scoping workflow.

Introduction

Building software in a startup context involves far more than writing code. It requires uncovering structure from ambiguity, aligning with user needs, and navigating the interplay between design, tools, and delivery. At the heart of this process lies the scope, and defining it demands both creativity and structure.

In the process of building an MVP of our new product, we leveraged LLMs like Gemini 2.5 Pro as collaborative partners, acting as both companion and champion across key stages of product planning:

  1. We began with initial scoping from scattered pieces of information across various channels and contexts. The output was a scope structure relevant to our needs.
  2. We created a database model as a data foundation for our MVP.
  3. We ideated and defined API route-level user stories. This was somewhat unorthodox, but it enabled us to think about granular details proactively and automated much of the API spec creation process.
  4. Using the user stories as input, we drafted an API specification using OpenAPI standards.
  5. Finally, using all previous knowledge, we defined tickets using a relevant ticket structure and plugged them into Linear.
👤
While AI helped accelerate the process, every step was human-led and included deep review. The LLMs served as collaborators, helping us think more clearly, not think for us.

Tools we used

  1. Notion: Served as our central hub for documentation, from early drafts to final scoping using nested pages, structured tables, and embedded code blocks.
  2. Gemini Assistant: To use Gemini 2.5 Pro, we opted for the assistant interface. Relevant documents were uploaded, or data was copy-pasted into the prompts.
  3. Excalidraw: To design and collaborate on diagrams.
  4. Linear MCP: To automate the creation of parent and child tickets. We used it with the Cursor AI agent for convenience, but it can work with any MCP-enabled agent.

1. Input collection

While direct inputs shape the scope, indirect upstream constraints often introduce unknown variables that can derail planning if left unchecked.

Therefore, we categorised our information sources and fed them into the scoping process as preliminary data. These included:

  • Direct inputs: Technical documentation, architecture diagrams and notes, sequence diagrams, and insights from previous experiments leading up to the MVP.
  • Indirect inputs: Technical stack selection and relevant constraints, documentation of libraries/frameworks, and API specs of third-party tools.

Once collected, this information was fed into our iterative scoping process, described in the sections below.

2. Initial scoping

Step 1: Manual drafting

Once inputs were collected, we created a rough scope: an unstructured brain dump that helped us get everything into one place.

🧠
This document had everything: endpoint ideas, partial data model diagrams, architectural assumptions, and notes about tooling and tech stack. But it wasn’t yet something we could plan or delegate from.

Step 2: Structuring with Gemini

We then used Gemini 2.5 Pro (in canvas mode) to analyze and structure this rough scope.

Prompt example for structuring

Propose the best structure for the WIP scope following SE best practices. The resulting scope should be lightweight, but well organised. Do not fill out sections yet.

Analyse the SDK functionality and suggested endpoints, look at the WIP data model and data engine, and the API endpoints that we need for the backend.
Note that the SDK authenticates with an API key and the frontend authenticates with a JWT.

System design:
1. Backend with FastAPI and Neon DB (PostgreSQL). Same backend for data-engine and expert annotation service (UI)
2. Frontend with TypeScript React, Tailwind and Vite - There is a PoC in place so it only needs adaptation. Leave it as a placeholder at the moment
3. SDK - Python and TypeScript. Will be generated using Stainless from OpenAPI specs. Keep this in mind.
4. Authentication with Clerk

For context, the ultimate goal is to refine the scope of the MVP that I am building for my startup before onboarding my team to start creating tickets and planning the execution.

I have attached the work-in-progress document.

Gemini output: structured MVP scope template

Gemini provided a well-structured scope outline that broke things into 12 logical sections.

Here’s a condensed version:

# MVP Scope Document: [Your Startup Name]

## 1. Introduction & MVP Goals
## 2. System Overview & Architecture
## 3. Data Model
## 4. Backend Scope (FastAPI & Neon DB)
## 5. Frontend Scope (React, TypeScript, Vite)
## 6. SDK Scope (Python & TypeScript)
## 7. Authentication Strategy
## 8. MVP Feature List Summary
## 9. Out of Scope for MVP
## 10. Assumptions
## 11. Dependencies
## 12. Success Metrics for MVP

We then instructed Gemini to begin filling out one section at a time, starting with context and MVP goals. This enabled fast iteration with commenting across our team.

3. Data model curation

Once we had the scope in place, we moved to one of the most foundational elements: the data model.

We took an LLM-assisted, iterative approach where the AI played two roles:

  • Champion: Leading the iteration, surfacing inconsistencies, and proposing refinements.
  • Companion: Acting as a smart assistant for reviews, suggesting improvements based on constraints.

We visualized the data model using Mermaid ER diagrams for clarity and translation accuracy between human and machine.

🖇️
This step was closely tied to our architecture and sequence diagrams. Each refinement loop helped align the entities with the relevant features and APIs.

Here is an illustrative example of the data model using Mermaid:

erDiagram
    user {
        UUID id PK
        UUID org_id FK "Nullable"
        VARCHAR email
        VARCHAR first_name
        VARCHAR last_name
        BOOLEAN is_active
        TIMESTAMP created_at
        TIMESTAMP updated_at
    }

    api_key {
        UUID id PK
        UUID user_id FK "ON DELETE CASCADE"
        VARCHAR hashed_key
        VARCHAR key_name
        TIMESTAMP created_at
        TIMESTAMP last_used_at
    }

    project {
        UUID id PK
        VARCHAR name
        TEXT description
        UUID user_id FK "ON DELETE RESTRICT"
        BOOLEAN is_active
        TIMESTAMP created_at
        TIMESTAMP updated_at
    }

    job {
        UUID id PK
        VARCHAR status "e.g., PENDING, IN_PROGRESS, COMPLETED"
        UUID project_id FK "ON DELETE CASCADE"
        TIMESTAMP start_time
        TIMESTAMP end_time "Nullable"
        TIMESTAMP created_at
        TIMESTAMP updated_at
    }
    
    %% Relationships
    user ||--o{ api_key : "owns"
    user ||--o{ project : "owns"

    project ||--o{ job : "has"
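
To ground the diagram in backend terms, below is a minimal sketch of how two of these entities could be declared as SQLAlchemy models for the FastAPI/Neon PostgreSQL backend. The table and column names mirror the diagram; the base class, defaults, and relationship names are illustrative assumptions, not our actual implementation.

# Illustrative only: a possible SQLAlchemy mapping of the `user` and `api_key`
# entities from the ER diagram above. Column names mirror the diagram;
# defaults and the relationship wiring are assumptions, not our production code.
import uuid
from datetime import datetime

from sqlalchemy import Boolean, Column, DateTime, ForeignKey, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class User(Base):
    __tablename__ = "user"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    org_id = Column(UUID(as_uuid=True), nullable=True)
    email = Column(String, nullable=False, unique=True)
    first_name = Column(String)
    last_name = Column(String)
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)

    # user ||--o{ api_key : "owns"
    api_keys = relationship("ApiKey", back_populates="user", cascade="all, delete-orphan")


class ApiKey(Base):
    __tablename__ = "api_key"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    user_id = Column(UUID(as_uuid=True), ForeignKey("user.id", ondelete="CASCADE"), nullable=False)
    hashed_key = Column(String, nullable=False)
    key_name = Column(String)
    created_at = Column(DateTime, default=datetime.utcnow)
    last_used_at = Column(DateTime, nullable=True)

    user = relationship("User", back_populates="api_keys")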

4. User stories

To guide the API design without prematurely thinking in technical terms, we started with route-level user stories.

  • We imagined scenarios from a user’s perspective, from signing up to managing projects and jobs.
  • We deliberately avoided thinking about REST or GraphQL at this point, focusing purely on user needs and data interactions.
  • The stories helped us explore edge cases and revealed gaps in our data model.

This step was primarily human-led, with LLMs assisting as suggesters and evaluators during reviews. It kept us in a user-first perspective, mapping the user’s journey from sign-up to goal achievement at the API level.

We found that owning this process disambiguated our mental model of the MVP and made it easier to evaluate the other LLM-championed steps.

Template: user story

**As a user, I should/want/need …**

- Resources involved: From the database, for example Projects, User, Job…
- Operation: UPDATE, CREATE, DELETE, READ
- User ROLE: USER, VALIDATOR
- Additional considerations/constraints:
    - Any considerations or constraints this requirement might have
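
For illustration, here is a hypothetical story written against this template; the specifics are invented for this example rather than taken from our actual backlog:

**As a user, I want to create a new project so that I can group related jobs under it.**

- Resources involved: Project, User
- Operation: CREATE
- User ROLE: USER
- Additional considerations/constraints:
    - The project must be linked to the authenticated user as its owner
    - Project names should be validated for length and uniqueness per user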

5. API specification

After locking down the user stories, we tasked Gemini 2.5 Pro to convert them into concrete API endpoints using a predefined template.

Template: API endpoint

**Endpoint: …**

- **Purpose:**
    - [Clearly describe what this endpoint does from a functional perspective. What goal does it achieve for the user or system? e.g., "Retrieves the profile information for a specific user."]
- **HTTP Method:** [`GET`, `POST`, `PUT`, `PATCH`, `DELETE`, etc.]
- **Path:** [`/api/v{version_number}/resource/path...` e.g., `/api/v1/users/{userId}`]
- **Parameters:**
    - **Path Parameters:**
        - `{parameterName}` (type, required/optional): [Description, e.g., `{userId}` (string, required): The unique identifier of the user.]
        - *(Add more if needed)*
    - **Query Parameters:**
        - `parameterName` (type, optional, default: value): [Description, e.g., `includeDetails` (boolean, optional, default: false): If true, includes extended user details.]
        - *(Add more if needed)*
    - **Request Body:**
        - [Describe the expected structure of the JSON (or other format) body. List key fields and their types/requirements. e.g., JSON object containing: `email` (string, required), `password` (string, required)]
        - *(Only applicable for methods like POST, PUT, PATCH)*
- **Response (Success):**
    - **Status Code:** [`200 OK`, `201 Created`, `204 No Content`, etc.]
    - **Body:** [Describe the structure of the response body on success. e.g., "A JSON object representing the user profile including `userId`, `name`, `email`, `joinDate`."]
- **Response (Error):**
    - **Status Codes:** [List potential error codes, e.g., `400 Bad Request`, `401 Unauthorized`, `403 Forbidden`, `404 Not Found`, `500 Internal Server Error`]
    - **Body (Optional):** [Briefly describe the typical error response format, if standardized. e.g., "JSON object with `error` field containing a descriptive message."]
- **Authentication/Authorization:**
    - [Describe requirements. e.g., "Requires authenticated user session/token.", "Requires admin privileges.", "Publicly accessible."]
- **Notes/Considerations:**
    - [List any open questions, potential performance issues, rate limiting needs, dependencies, future enhancements, or other relevant points. e.g., "Need to define validation rules for input fields.", "Consider pagination for list endpoints.", "Database indexing strategy?"]

Prompt for Gemini

You are an expert at the OpenAPI spec and HTTP RESTful APIs. You are tasked with converting user stories that include information such as the database resources (tables) involved, the database operation (CREATE, READ, UPDATE, DELETE), and considerations into endpoint specifications using the provided endpoint format.

Consider the OpenAPI spec when thinking of endpoint paths and the parameters and the HTTP operation. Use appropriate HTTP status codes where necessary.

### **Instructions and Required Files**

Your task is to generate detailed API endpoint specifications by processing the user stories found in the attached file: `user_stories.txt`.

When generating the specification, you must adhere to the following file-based context:
1.  **Template:** Your output must strictly follow the structure defined in the attached `Endpoint Template.md`.
2.  **Example:** You must refer to the attached `Endpoint Example.md` as a guide for the expected style, tone, and level of detail.

### **Required Output Format**

Your final response must be structured in two parts as shown below. You must include the `<endpoint>` and `<reasoning>` tags in your output.

<endpoint>
[Your generated endpoint specification based on the template]
</endpoint>
<reasoning>
[Your reasoning for the design choices]
</reasoning>


We processed the user stories in batches, allowing Gemini to create high-quality API specs while keeping our hands on the wheel for clarification, validation, and iteration.
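
Because the backend is FastAPI, each reviewed endpoint spec maps almost one-to-one onto a route definition, and FastAPI can emit the corresponding OpenAPI document that downstream tooling (such as the Stainless SDK generation mentioned earlier) consumes. The sketch below is illustrative and assumes a hypothetical `POST /api/v1/projects` endpoint, not our actual spec:

# Illustrative sketch: one hypothetical endpoint written the way a reviewed
# spec would translate into FastAPI. Persistence and auth are omitted.
from uuid import UUID, uuid4

from fastapi import FastAPI, status
from pydantic import BaseModel

app = FastAPI(title="MVP API", version="1.0.0")


class ProjectCreate(BaseModel):
    name: str
    description: str | None = None


class ProjectRead(BaseModel):
    id: UUID
    name: str
    description: str | None = None


@app.post(
    "/api/v1/projects",
    response_model=ProjectRead,
    status_code=status.HTTP_201_CREATED,
    summary="Create a project",
)
def create_project(payload: ProjectCreate) -> ProjectRead:
    # A real handler would persist the project and attach it to the
    # authenticated user; here we just echo the input back.
    return ProjectRead(id=uuid4(), name=payload.name, description=payload.description)

# app.openapi() returns the generated OpenAPI document, which can be exported
# and fed into SDK generators such as Stainless.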

6. Alignment

With the core planning components in place (scope, model, stories, and API specs), we initiated a final consistency pass using Claude 3.7 Sonnet and Gemini 2.5 Pro. The goal was to ensure coherence across documents and surface any mismatches between design and execution layers, such as:

  • Gaps between user stories and the API spec
  • Unclear data fields or missing relationships in the model
  • Scope sections that didn’t tie cleanly into tickets or code

Prompt example: alignment

Enclosed is a scope for an MVP project that will be used to {project_description}.
Please review the scope and the API specs and note down any gaps or unclear items.

Suggestions for evaluation:
1. Refer to the data model with questions pertaining to the resources.
2. Compare the API spec requests and response to the resources in the data model.
3. Use the scope document to gauge missing APIs in the API Spec.
4. Use the API spec to assess missing fields in the data model.

Output your suggestions as a list of amendments and give appropriate reasoning for them.

7. From scope to Linear issues

With everything validated, it was time to translate the scope into executable tickets for our team. This was a stepwise process with several refinement loops using the Gemini assistant to arrive at a structure that made the most sense for execution. The steps:

  1. Milestone creation
  2. Milestone-based ticket creation
  3. Ticket hierarchies with parent and child tickets

Milestone → ticket hierarchy

We used a naming convention for clarity:

M1
  M1.1
    M1.1.1
    M1.1.2
  M1.2
    ...

First, we used the scope and the API spec as input to create milestones by prompting the LLM:

Prompt example: milestone creation

You are a software project manager tasked with creating appropriate milestones for the enclosed project scope and API spec.
Use the following information to create appropriate milestones for the project.

Team composition:
1. T, AI Engineer
2. M, SE Engineer: Full-stack, Infrastructure
3. K, SE Engineer: Full-stack, Infrastructure, Networking
4. B, AI Engineer

Guidelines for milestone creation:
1. Logically divide the scope by thinking of software deliverables for each milestone.
2. Structure relevant pieces of the technical stack together, e.g. authentication and access control in the client and backend application.
3. Consider team composition and execution ability.

Output structure:
M1: Milestone 1
M2: Milestone 2
...


Next, in the same context, we prompted the assistant to generate the appropriate tickets for each milestone:


Output example: ticket creation

M1: Foundational setup & Project management
- M1.1: Repo Setup & Tooling
- M1.2: Configure Linting & Formatting Tools
- M1.3: Implement Initial DB Connection Logic
- M1.4: Setup Basic Dev Environment Config
- M1.5: Basic API Key management
- M1.6: Authentication
...


Subsequently, we asked the assistant (again, in the same context) to re-organize the tickets into a parent-child configuration, grouping logical units of work together.

Prompt example: ticket hierarchy

Restructure the tickets already created into a parent-child configuration. Structure logical units of work together, accounting for the team composition.
Structure the tickets as such:

M1: Milestone 1
- M1.1: Parent ticket title
- M1.1.1: Child ticket title
...

Output example: ticket hierarchy

M1: Foundational setup & Project management
- M1.1: Repo Setup & Tooling
- M1.1.1: Configure Linting & Formatting Tools
- M1.1.2: Implement Initial DB Connection Logic
- M1.1.3: Setup Basic Dev Environment Config
- M1.2: Basic API Key management
- M1.3: Authentication
...
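
Because the hierarchy is encoded in the dotted `M1.1.1`-style identifiers, the assistant's flat output is easy to turn into a nested structure before importing it anywhere. A small illustrative helper (the function and output shape are ours, not part of any tool):

# Illustrative helper: parse "M1.1.1: Title" lines into id/parent/title records,
# using the dotted numbering to infer each ticket's parent.
def parse_ticket_lines(lines):
    tickets = []
    for raw in lines:
        raw = raw.strip().lstrip("- ").strip()
        if not raw or not raw.startswith("M"):
            continue
        ident, _, title = raw.partition(":")
        ident = ident.strip()
        parent = ident.rsplit(".", 1)[0] if "." in ident else None
        tickets.append({"id": ident, "parent": parent, "title": title.strip()})
    return tickets


example = """
M1: Foundational setup & Project management
- M1.1: Repo Setup & Tooling
- M1.1.1: Configure Linting & Formatting Tools
""".splitlines()

for ticket in parse_ticket_lines(example):
    print(ticket)
# {'id': 'M1', 'parent': None, 'title': 'Foundational setup & Project management'}
# {'id': 'M1.1', 'parent': 'M1', 'title': 'Repo Setup & Tooling'}
# {'id': 'M1.1.1', 'parent': 'M1.1', 'title': 'Configure Linting & Formatting Tools'}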


Next, in a separate context, we asked Gemini 2.5 Pro to act as an AI Project Manager and evaluate ticket hierarchies for missing information.

Prompt example: evaluating tickets

Help me Project Manage this project.

Cross-check the scope and the tickets in this milestone to make sure we are not missing anything.


Finally, we populated the tickets with relevant information using the Gemini assistant in a new context. Each ticket had:

  1. Summary
  2. Description
  3. Acceptance criteria
  4. Docs & resources

This allowed seamless import into Linear and fast onboarding for developers.
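
We drove the import through the Linear MCP with the Cursor agent, but the same result can be scripted directly against Linear's GraphQL API. A minimal sketch, assuming a personal API key and a known team ID; the `issueCreate` mutation is Linear's, while the surrounding helper is illustrative:

# Illustrative sketch of creating a parent/child pair of issues via Linear's
# GraphQL API. We actually used the Linear MCP with an AI agent; this is an
# alternative if you prefer plain scripting. Requires LINEAR_API_KEY in the env.
import os

import requests

LINEAR_URL = "https://api.linear.app/graphql"
HEADERS = {"Authorization": os.environ["LINEAR_API_KEY"], "Content-Type": "application/json"}

MUTATION = """
mutation CreateIssue($input: IssueCreateInput!) {
  issueCreate(input: $input) {
    success
    issue { id identifier title }
  }
}
"""


def create_issue(title, description, team_id, parent_id=None):
    variables = {"input": {"title": title, "description": description, "teamId": team_id}}
    if parent_id:
        variables["input"]["parentId"] = parent_id
    resp = requests.post(LINEAR_URL, json={"query": MUTATION, "variables": variables}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["data"]["issueCreate"]["issue"]


# Example: create a parent ticket, then a child under it.
team_id = "YOUR_TEAM_ID"  # placeholder
parent = create_issue("M1.1: Repo Setup & Tooling", "Parent ticket for repo setup.", team_id)
create_issue(
    "M1.1.1: Configure Linting & Formatting Tools",
    "Child ticket under repo setup.",
    team_id,
    parent_id=parent["id"],
)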

Example ticket structure

**Summary**

Add a brief one-line overview of the task to be completed.

**Description**

Add a description of the task, including additional context and requirements. Highlight the expected outcome or impact of completing this task, ensuring anyone reading can quickly grasp its purpose and importance.

**Acceptance criteria**

Define what "done" means for this item; use a checklist format.

**Docs & resources**

Link any relevant resources, e.g. Figma files or research notes (remove if not needed).

Conclusion

The LLMs weren’t making decisions for us – they acted more like thought partners. They helped us spot gaps, clean up our thinking, and move faster without cutting corners.

What we ended up with is also a solid foundation for coding agents. A detailed API spec, a clear data model, and a well-scoped plan give enough context to automate big parts of the build, accelerating not only scoping speed and quality but delivery as well.

In the end, the biggest value wasn’t speed or structure alone – it was shared clarity. By moving from scope → model → stories → APIs in a structured but flexible way, we gave the whole team a common mental model before writing any code. Everyone now knows what we’re building, why it matters, and how it fits together.

May 28, 2025

Minaam Shahid

Tiina Vaahtio

Aaro Isosaari
