Powered by ThinkStart.ca
ShopAI.ca 2026 Demo Update

From game-tape review to a runnable sports analysis platform in 60 minutes.

This ShopAI.ca demo walks from project creation to exportable code across requirements, use cases, data models, architecture, and deployment. The experience looks simple on the surface because the system is carrying project context, compliance, and artifact history underneath every generation.

The example project is a Canadian sports game-tape analysis platform with PIPEDA, role-based access, and WCAG 2.1 AA requirements. The page shows both what Sarah sees in the product and how the private admin workspace keeps the output consistent, exportable, and ready for coaching, analyst, and development teams.

The magic is not one giant prompt. It is context-aware routing that adapts the output to the active view, the existing artifacts, and the project's compliance needs.

How the Demo Opens

Create the project once. Carry the context everywhere.

The first screen defines the project metadata that the generation system will reuse across every later stage.

What Sarah sees

Project setup with compliance built in.

Sarah creates a game-tape analysis project, sets the industry, description, tech stack, and access requirements, then clicks create once.

  • Project name, industry, and description define the base context
  • Tech stack informs language and infrastructure defaults
  • PIPEDA, role-based access, and WCAG follow the project into generation

Every later prompt inherits this setup instead of starting from scratch.
First Screen
Welcome to ShopAI.ca

Create your first project

[Project Name]
  "Game Tape Analysis Platform"

[Industry]
  "Sports / Video Analysis"

[Description]
  "Review game tape, tag possessions,
   detect breakdowns, generate coach notes"

[Tech Stack]
  "Node.js + React + PostgreSQL + Python"

[Compliance]
  [x] PIPEDA (Canada)
  [x] Role-based access
  [x] WCAG 2.1 AA (Accessibility)

          [Create Project]
              
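Conceptually, the setup above becomes a single context object that every later stage reuses. A minimal Python sketch of that idea; all class and field names here are assumptions for illustration, not ShopAI.ca's actual API:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ProjectContext:
    """Immutable project metadata reused by every generation stage."""
    name: str
    industry: str
    description: str
    tech_stack: tuple[str, ...]
    compliance: tuple[str, ...] = field(default_factory=tuple)

    def prompt_preamble(self) -> str:
        """Render the context as a preamble prepended to each model prompt."""
        return (
            f"Project: {self.name} ({self.industry})\n"
            f"Stack: {', '.join(self.tech_stack)}\n"
            f"Compliance: {', '.join(self.compliance) or 'none'}"
        )


ctx = ProjectContext(
    name="Game Tape Analysis Platform",
    industry="Sports / Video Analysis",
    description="Review game tape, tag possessions, detect breakdowns, generate coach notes",
    tech_stack=("Node.js", "React", "PostgreSQL", "Python"),
    compliance=("PIPEDA", "Role-based access", "WCAG 2.1 AA"),
)
print(ctx.prompt_preamble())
```

Because the object is frozen, no later stage can silently mutate the compliance settings chosen on the first screen.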

Demo Stages

Five stages. One context thread.

The same project keeps evolving, but the output format changes by view so the user never has to restate the entire system.
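That view-by-view switch can be pictured as a dispatch table: the same prompt reaches a different generator depending on which view is active. A hypothetical sketch; the generator names and output shapes are illustrative, not ShopAI.ca internals:

```python
# Dispatch the same prompt to a different generator depending on the view.
def gen_requirement(prompt: str) -> dict:
    return {"type": "requirement", "source": prompt}

def gen_use_case(prompt: str) -> dict:
    return {"type": "use_case_diagram", "format": "plantuml", "source": prompt}

def gen_data_model(prompt: str) -> dict:
    return {"type": "er_model", "exports": ["ddl", "migrations"], "source": prompt}

# The active view, not the prompt text, selects the generator.
ROUTES = {
    "requirements": gen_requirement,
    "use_cases": gen_use_case,
    "data_models": gen_data_model,
}

def generate(view: str, prompt: str) -> dict:
    """Route the prompt to the generator registered for the current view."""
    return ROUTES[view](prompt)

same_prompt = "tag defensive breakdowns from game tape"
print(generate("requirements", same_prompt)["type"])
print(generate("use_cases", same_prompt)["type"])
```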

0-5 min

Project setup

Create the project and lock in industry, tech stack, and Canadian compliance requirements.

Project context created
5-15 min

Requirements

Generate functional requirements from short prompts like breakdown tagging, clip search, or coach notes.

FR artifacts with acceptance criteria
15-30 min

Use cases

Switch context and generate PlantUML use case diagrams and linked use case specs for analysts, coordinators, and coaches.

Actors, flows, and exceptions
30-45 min

Data models

Infer entities, relationships, ER diagrams, DDL, and migrations for games, clips, players, possessions, and notes.

Schema plus migrations
45-55 min

Deployment

Generate architecture, Terraform, checklists, and rollout guidance for video ingestion and secure analyst workspaces.

IaC and deployment model
55-60 min

Export

Produce runnable code, docs, migrations, CI/CD files, and optional GitHub push.

Ready-to-build package

Stage 1

Requirements generation with live streaming feedback.

Sarah types a short capability and gets a structured requirement with acceptance criteria, priority, and linked operational constraints.

What Sarah sees

Functional requirements in minutes.

She types "tag defensive breakdowns from game tape" and the system produces FR-001 with acceptance criteria, priority, and links to related non-functional requirements. A second prompt like "search clips by possession, player, and play type" creates FR-002 immediately after.

  • Streaming progress updates while the model is working
  • Consistent format across every generated requirement
  • Requirement IDs and traceability from the start
Prompt and result FR-001
Prompt
"tag defensive breakdowns from game tape"

Result
FR-001: Defensive Breakdown Tagging

Description
System tags each defensive breakdown clip in sequence.

Acceptance Criteria
- Tag quarter, clock, possession, and lineup
- Mark missed rotations, late closeouts, and coverage gaps
- Attach a coaching note to each flagged clip
- Allow analyst review before save

Related prompt
"search clips by possession, player, and play type"
              

Stage 2

The same kind of prompt creates a different output when the view changes.

Once Sarah clicks into Use Cases, the system switches context and the same natural-language style prompt becomes diagram generation.

What Sarah sees

Use case diagrams and specs.

Sarah types "video analyst reviewing third-down breakdowns and exporting clips" and gets a PlantUML use case diagram, detected actors, related functional requirements, and a structured use case spec for flows like Review Clip and Export Cutup.

  • Actors and relationships are inferred from the phrase
  • Requirements already generated are linked automatically
  • Normal flows and exceptions stay aligned to the same system
Prompt and result Use Cases
Prompt
"video analyst reviewing third-down breakdowns and exporting clips"

Result
Actors
- Analyst
- Coordinator
- Coach

Use Cases
- Review clip
- Tag breakdown
- Export cutup
- Share coach notes

Artifacts Found
- FR-001 linked
- FR-002 linked
- New actors detected automatically
              
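The PlantUML output the demo describes can be pictured as a simple rendering step over the detected actors and use cases. A sketch that assumes the detection has already happened upstream:

```python
def plantuml_use_cases(actors: list[str], use_cases: list[str],
                       links: list[tuple[str, str]]) -> str:
    """Render actors, use cases, and actor->use-case links as PlantUML source."""
    lines = ["@startuml"]
    lines += [f"actor {a}" for a in actors]
    lines += [f'usecase "{u}" as UC{i}' for i, u in enumerate(use_cases, 1)]
    index = {u: f"UC{i}" for i, u in enumerate(use_cases, 1)}
    lines += [f"{actor} --> {index[uc]}" for actor, uc in links]
    lines.append("@enduml")
    return "\n".join(lines)

diagram = plantuml_use_cases(
    actors=["Analyst", "Coordinator", "Coach"],
    use_cases=["Review clip", "Tag breakdown", "Export cutup",
               "Share coach notes"],
    links=[("Analyst", "Review clip"), ("Analyst", "Export cutup"),
           ("Coach", "Share coach notes")],
)
print(diagram)
```

The string this produces is valid PlantUML source, so it can be handed straight to a renderer.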

Stage 3

Data models are inferred from the requirements and use cases already created.

The system moves from screen-level flows into the entities, relationships, and schema required to make the application real.

What Sarah sees

ER diagrams, DDL, and migrations.

Sarah asks what entities are needed for games, clips, players, possessions, tags, and coaching notes. The app infers Game, Clip, Player, Possession, Tag, and Staff Note, then generates the conceptual ER view, the logical model, and the physical PostgreSQL schema.

  • Requirements and use cases become data-model inputs
  • Compliance adds audit fields and retention awareness
  • Exports can include SQL migrations immediately after
Prompt and result Data Model
Prompt
"what entities do we need for games, clips, players, possessions, tags, and notes?"

Result
Core Entities
- Game
- Possession
- Clip
- Player
- Tag
- Staff Note

Relationships
- Game -> Possessions -> Clips
- Clip -> Tags
- Clip -> Staff Notes
- Player -> Tagged events

Exports
- Conceptual ER view
- Schema outline
- Migration-ready table set
              

Stage 4

Architecture and deployment move to batch when the user does not need every token live.

For infrastructure work, ShopAI.ca can shift from live streaming to background generation for diagrams, Terraform, and checklists in parallel.

What Sarah sees

Architecture, Terraform, and deployment guidance.

Sarah asks for an AWS production deployment that ingests game tape, generates clips, and serves secure analyst workspaces. The result includes an architecture diagram, Terraform, a deployment checklist, scaling notes, and configuration with PIPEDA compliance noted clearly.

  • Architecture diagrams are paired with runnable IaC
  • Background processing supports non-urgent work without blocking the user
  • Users can keep moving while generation runs asynchronously
Prompt and result Deployment
Prompt
"aws architecture for game tape ingestion, clip generation, and analyst review"

Result
- Secure upload for raw game footage
- Processing workers for clip extraction
- Analyst workspace for review and tagging
- Role-based access to team footage
- Searchable clip library
- Export-ready deployment checklist
              
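The background generation described above can be sketched with a thread pool: the three artifact jobs run in parallel while the caller stays free. Here `generate_artifact` is a stand-in for the real (slow) model call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def generate_artifact(kind: str) -> str:
    """Stand-in for a slow model call that produces one deployment artifact."""
    time.sleep(0.05)  # simulate model latency
    return f"{kind} ready"

jobs = ["architecture diagram", "terraform", "deployment checklist"]

# Submit all three jobs at once; collect results as each one finishes.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = {pool.submit(generate_artifact, j): j for j in jobs}
    results = {futures[f]: f.result() for f in as_completed(futures)}

for job in jobs:
    print(results[job])
```

With three workers the wall-clock time is roughly one job's latency rather than three, which is the point of moving non-urgent work off the live stream.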

Stage 5

Export turns the model outputs into a runnable package.

At the end of the flow, the user chooses how far the system should go: code only, infrastructure only, or a full repository with docs and deployment assets.

What Sarah sees

Full codebase, ZIP, and optional GitHub repo.

Sarah exports a package that includes application code, infrastructure, migrations, docs, CI/CD workflows, and a README. From there, the repo can be cloned, run locally, and deployed to services like Vercel, Railway, or AWS.

  • Runnable codebase instead of static documentation only
  • Infrastructure and database assets stay in sync with the app
  • GitHub push creates a handoff that engineering teams can keep building from
Prompt and result Export
Export Selection
- Frontend
- Services
- Data model
- Infrastructure
- Documentation
- GitHub repository

Result Package
game-tape-analysis-v1.zip

Contents
- analyst-workspace/
- services/
- video-workers/
- schema/
- infrastructure/
- docs/
- .github/
              
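The export step amounts to writing the generated files into a versioned archive. A minimal sketch using Python's standard `zipfile` module; the paths and contents are placeholders for generated artifacts:

```python
import io
import zipfile

def export_zip(files: dict[str, str]) -> bytes:
    """Write each path/content pair into an in-memory ZIP archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, content in files.items():
            zf.writestr(path, content)
    return buf.getvalue()

package = export_zip({
    "schema/001_init.sql": "CREATE TABLE games (id SERIAL PRIMARY KEY);",
    "docs/README.md": "# Game Tape Analysis Platform",
    ".github/workflows/ci.yml": "name: ci",
})

# Inspect the archive the way a downloader would.
names = zipfile.ZipFile(io.BytesIO(package)).namelist()
print(names)
```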

Admin Login

The private admin workspace is where routing, artifacts, and exports live.

This is the only place the internal mechanics are exposed. The demo pages stay focused on prompts and results.

Projects

Project library

See every active game, analysis project, and workspace in one admin list.

Prompts

Prompt history

Review prior prompts, saved outputs, and the sequence of generated artifacts.

Artifacts

Artifact library

Manage requirements, use cases, data models, exports, and revisions from one place.

Access

Role controls

Set who can view game tape, who can edit notes, and who can export packages.

Exports

Export controls

Choose what gets packaged for teams, environments, or repositories.

Audit

Admin trail

Track changes, prompt history, and review status without exposing the build details on the demo page.

Differentiators

Why this demo works as a product story.

The pitch is not just speed. It is structured output, reusable context, and a shorter path from idea to deployable system.

Traditional approach vs. ShopAI.ca approach

Traditional: Architect spends days or weeks gathering requirements and translating them across teams.
ShopAI.ca: The system generates requirements, diagrams, schema, and deployment assets inside one context thread.

Traditional: Back-and-forth review cycles slow output and lose consistency between artifacts.
ShopAI.ca: Each artifact is validated against prior work and linked automatically as the project grows.

Traditional: Manual delivery makes each new project slower to repeat and harder to standardize.
ShopAI.ca: The generation system is reusable across teams, projects, industries, and compliance frameworks.

Traditional: Infrastructure and code handoff often happens after architecture, creating rework.
ShopAI.ca: Infrastructure, code, docs, and migrations are export options from the same source context.

Core product promise

One guided conversation

Move from idea to architecture and exportable implementation without restarting the design process at every step.

Canadian angle

Compliance from the first screen

PIPEDA, role-based access, and accessibility requirements are set as part of the project, not bolted on at the end.

Operational value

Faster handoff to engineering

Generated artifacts are structured, exportable, and ready for teams to review, build, and deploy.

Call to action

Use the 2026 demo to show how ShopAI.ca thinks, not just what it generates.

This page works as a walkthrough for technical buyers who need to understand both the user experience and the architecture behind it.

Book the Demo

Walk through the full 60-minute flow with a live project scenario.

Talk to ShopAI.ca

Use this demo as the entry point for a more tailored product or sector-specific workflow.

2026 Demo Positioning

ShopAI.ca

Context-aware AI system design for Canadian businesses.

What the page proves

The value is not just generation speed. It is the ability to keep requirements, diagrams, schema, infrastructure, and code aligned while the user keeps moving.
