
Deployment Guide

This comprehensive guide covers deploying both the NeuraScale Console (frontend) and Neural Engine (backend) across different environments.

Deployment Overview

NeuraScale has two main components with separate deployment strategies:

  1. Console (Frontend) - Next.js app deployed on Vercel ✓ Available
  2. Neural Engine (Backend) - Python API deployed on Google Cloud Platform 🚀 Coming Soon

Deployment Status

| Environment | Frontend | Backend | Infrastructure |
| --- | --- | --- | --- |
| Local Development | ✓ Available | ✓ Available | Docker Compose |
| Preview (Vercel) | ✓ Available | - | Automatic PR deployments |
| Staging | ✓ Available | 🚀 Coming Soon | Cloud Run |
| Production | ✓ Available | 📅 Planned | GKE with Helm |

Architecture Overview

  • Development: local Docker Compose setup for rapid development
  • Staging: GCP Cloud Run for testing and validation
  • Production: GKE with Helm charts for high availability

Local Development

For rapid development and testing, use Docker Compose to run all services locally.

Prerequisites

Ensure you have:

  • Docker & Docker Compose
  • Python 3.12.11
  • Node.js 18+
  • Git

Clone and Setup

git clone https://github.com/identity-wael/neurascale.git
cd neurascale

# Setup virtual environments
./scripts/dev-tools/setup-venvs.sh

Start Infrastructure Services

# Start all infrastructure services
docker-compose up -d

# Verify services are running
docker-compose ps

Services Started:

  • TimescaleDB: Time-series neural data storage
  • Redis: Caching and real-time features
  • PostgreSQL: Application metadata
  • MCP Server: AI assistant integration
  • Pub/Sub Emulator: Message queue for testing
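Once the containers are up, a quick way to confirm that the infrastructure services are actually listening is to probe their TCP ports. The helper below is an illustrative sketch, not part of the repository, and the port numbers are assumed Docker defaults that may differ from your docker-compose.yml:

```python
import socket

def probe_ports(ports: dict, host: str = "localhost", timeout: float = 1.0) -> dict:
    """Return a map of service name -> whether its TCP port accepts connections."""
    status = {}
    for name, port in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            status[name] = sock.connect_ex((host, port)) == 0
    return status

# Hypothetical default ports for the services listed above
services = {"postgresql": 5432, "timescaledb": 5433, "redis": 6379}
print(probe_ports(services))
```

Any service reported `False` here is worth checking with `docker-compose logs <service>` before moving on.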

Start Neural Engine

cd neural-engine
source venv/bin/activate
python -m src.main

The Neural Engine will be available at: http://localhost:8000 

Start Console

cd console
pnpm install
pnpm run dev

The Console will be available at: http://localhost:3000 

Verify Installation

# Health check
curl http://localhost:8000/health

# Create test device
curl -X POST http://localhost:8000/api/v1/devices \
  -H "Content-Type: application/json" \
  -d '{"device_id": "test-001", "device_type": "synthetic"}'

# Start streaming
curl -X POST http://localhost:8000/api/v1/devices/test-001/stream/start

Console Deployment (Vercel)

The NeuraScale Console is deployed on Vercel for optimal performance and developer experience.

Prerequisites

  • Vercel account (free tier works)
  • Firebase project configured
  • NeonDB database created
  • Environment variables ready

Install Vercel CLI

npm i -g vercel

# Or using pnpm
pnpm add -g vercel

Configure Environment Variables

Create a .env.production file in the console directory:

# Firebase Configuration
NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-auth-domain
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your-storage-bucket
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=your-sender-id
NEXT_PUBLIC_FIREBASE_APP_ID=your-app-id

# NeonDB Configuration
DATABASE_URL=postgresql://user:password@host/neurascale?sslmode=require
DIRECT_URL=postgresql://user:password@host/neurascale?sslmode=require

# Neural Engine API
NEXT_PUBLIC_API_URL=https://api.neurascale.io
NEXT_PUBLIC_WS_URL=wss://api.neurascale.io

# Firebase Admin (Server-side)
FIREBASE_ADMIN_PROJECT_ID=your-project-id
FIREBASE_ADMIN_CLIENT_EMAIL=your-client-email
FIREBASE_ADMIN_PRIVATE_KEY="your-private-key-here"

Deploy to Vercel

Using Vercel CLI

cd console

# First deployment
vercel

# Follow prompts:
# - Set up and deploy: Y
# - Which scope: Select your account
# - Link to existing project: N
# - Project name: neurascale-console
# - Directory: ./
# - Override settings: N

# Production deployment
vercel --prod

Configure Domain

# Add custom domain
vercel domains add neurascale.io

# Set as production domain
vercel alias neurascale-console.vercel.app neurascale.io

Environment Configuration

Set environment variables in Vercel dashboard:

  1. Go to Project Settings → Environment Variables
  2. Add all variables from .env.production
  3. Configure per environment:
    • Production: For main branch
    • Preview: For all other branches
    • Development: For local development
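Before pasting variables into the dashboard, it can help to check that no server-side secret accidentally carries the `NEXT_PUBLIC_` prefix, since Next.js inlines those values into the browser bundle. The helper below is a hypothetical sanity check, not part of the repository:

```python
def split_env(text: str):
    """Split dotenv content into (public, server) variable maps.

    In Next.js, any variable prefixed NEXT_PUBLIC_ is exposed to the
    browser, so secrets must never use that prefix.
    """
    public, server = {}, {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        (public if key.startswith("NEXT_PUBLIC_") else server)[key] = value
    return public, server

sample = """
# Firebase
NEXT_PUBLIC_FIREBASE_API_KEY=abc
DATABASE_URL=postgresql://user:pw@host/db
"""
pub, srv = split_env(sample)
print(sorted(pub), sorted(srv))
```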

Edge Functions

Configure API routes for server-side operations:

// app/api/auth/route.ts
import { NextRequest, NextResponse } from 'next/server'
import { initializeApp, cert } from 'firebase-admin/app'
import { getAuth } from 'firebase-admin/auth'

const app = initializeApp({
  credential: cert({
    projectId: process.env.FIREBASE_ADMIN_PROJECT_ID,
    clientEmail: process.env.FIREBASE_ADMIN_CLIENT_EMAIL,
    privateKey: process.env.FIREBASE_ADMIN_PRIVATE_KEY?.replace(/\\n/g, '\n'),
  }),
})

export async function POST(request: NextRequest) {
  const { idToken } = await request.json()
  try {
    const decodedToken = await getAuth(app).verifyIdToken(idToken)
    return NextResponse.json({ uid: decodedToken.uid })
  } catch (error) {
    return NextResponse.json({ error: 'Invalid token' }, { status: 401 })
  }
}

Monitoring

Vercel provides built-in monitoring:

  1. Analytics: Core Web Vitals and performance metrics
  2. Functions: Edge function logs and performance
  3. Errors: Real-time error tracking

Enable in Project Settings → Analytics

Backend Deployment (GCP)

Multi-Project Architecture

NeuraScale uses a four-project GCP architecture with centralized services and identity management:

| Project | ID | Purpose | Region |
| --- | --- | --- | --- |
| Central Services | neurascale (Project ID: 555656387124) | Identity federation, Terraform state, shared services | US |
| Development | development-neurascale | Feature development | northamerica-northeast1 |
| Staging | staging-neurascale | Integration testing | northamerica-northeast1 |
| Production | production-neurascale | Live environment | northamerica-northeast1 |

Central Services Project: The neurascale project serves as the hub for:

  • Identity Federation: Workload Identity Federation for GitHub Actions
  • Terraform State: Centralized state bucket (neurascale-terraform-state)
  • Cross-Project IAM: Service accounts and permissions management
  • Shared Resources: Domain verification, organization-level policies

Identity Federation Architecture

Setting Up Workload Identity Federation

# Variables
PROJECT_ID="neurascale"
PROJECT_NUMBER="555656387124"
POOL_NAME="github-actions-pool"
PROVIDER_NAME="github-actions-provider"
SA_NAME="github-actions"

# Create workload identity pool
gcloud iam workload-identity-pools create $POOL_NAME \
  --project=$PROJECT_ID \
  --location="global" \
  --display-name="GitHub Actions Pool"

# Create workload identity provider
gcloud iam workload-identity-pools providers create-oidc $PROVIDER_NAME \
  --project=$PROJECT_ID \
  --location="global" \
  --workload-identity-pool=$POOL_NAME \
  --display-name="GitHub Actions Provider" \
  --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository" \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-condition="assertion.repository_owner=='identity-wael'"

# Create service account
gcloud iam service-accounts create $SA_NAME \
  --project=$PROJECT_ID \
  --display-name="GitHub Actions Service Account"

# Allow workload identity to impersonate service account
gcloud iam service-accounts add-iam-policy-binding \
  ${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com \
  --project=$PROJECT_ID \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/${POOL_NAME}/attribute.repository/identity-wael/neurascale"

# Grant permissions to other projects
for ENV_PROJECT in development-neurascale staging-neurascale production-neurascale; do
  gcloud projects add-iam-policy-binding $ENV_PROJECT \
    --member="serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/editor"
done

Infrastructure Components

The Terraform modules deploy the following GCP services:

Data Storage & Processing:

  • Cloud SQL PostgreSQL: Application metadata, session data, user management
  • BigQuery: Analytics data warehouse with partitioned tables for neural sessions
  • Artifact Registry: Docker images for all services
  • Cloud Storage: Function source code, backups, and exports

Real-time Processing:

  • Pub/Sub Topics: Separate topics for each signal type (EEG, EMG, LFP, etc.)
  • Cloud Functions: Event-driven processing (Note: Bigtable integration planned)
  • Dead Letter Queues: Failed message handling
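Pub/Sub enforces dead-lettering server-side (a subscription's retry limit and dead-letter topic), but the control flow is easy to state. The sketch below only illustrates the retry-then-dead-letter semantics; it is not the production code:

```python
def deliver(message, handler, max_attempts=5):
    """Attempt delivery up to max_attempts times.

    Returns (result, dead_lettered). Mirrors the retry/dead-letter
    semantics described above; real Pub/Sub enforces this server-side
    via the subscription's dead-letter configuration.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(message), False
        except Exception:
            if attempt == max_attempts:
                return None, True  # retries exhausted: dead-letter it

# A handler that always fails ends up dead-lettered:
result, dead = deliver({"signal": "eeg"}, lambda m: 1 / 0, max_attempts=3)
print(result, dead)
```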

Deployment Targets:

  • Cloud Run: MCP server and API services
  • GKE (Production only): High-availability neural engine deployment
  • Cloud Functions: Signal processing pipelines

Security & Operations:

  • Secret Manager: API keys and sensitive configuration
  • Cloud KMS: Encryption keys for data at rest
  • VPC: Private networking with service connections
  • Redis (Memorystore): Caching layer

Staging Deployment

Prerequisites

# Ensure you're authenticated
gcloud auth login
gcloud auth application-default login

# Set staging project
gcloud config set project staging-neurascale

Deploy Infrastructure with Terraform

cd neural-engine/terraform

# Initialize with staging backend configuration
terraform init -backend-config=backend-configs/staging.hcl

# Review staging variables
cat environments/staging.tfvars

# Plan deployment
terraform plan -var-file="environments/staging.tfvars"

# Apply infrastructure
terraform apply -var-file="environments/staging.tfvars"

Key Staging Configuration:

# From staging.tfvars
project_id  = "staging-neurascale"
environment = "staging"
region      = "northamerica-northeast1"

# Reduced resources for cost optimization
db_tier            = "db-g1-small"
redis_tier         = "BASIC"
enable_gke_cluster = false # Cloud Run only in staging

Build and Push Images to Artifact Registry

# Configure Docker for Artifact Registry
gcloud auth configure-docker northamerica-northeast1-docker.pkg.dev

# Build Neural Engine image
cd neural-engine
docker build -t northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/neural-engine:latest .

# Push to Artifact Registry
docker push northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/neural-engine:latest

# Build and push MCP Server
cd ../mcp-server
docker build -t northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/mcp-server:latest .
docker push northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/mcp-server:latest

Deploy to Cloud Run

# Deploy Neural Engine API
gcloud run deploy neural-engine \
  --image northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/neural-engine:latest \
  --platform managed \
  --region northamerica-northeast1 \
  --service-account neural-ingestion-stag@staging-neurascale.iam.gserviceaccount.com \
  --set-env-vars ENV=staging,PROJECT_ID=staging-neurascale \
  --memory 2Gi \
  --cpu 2 \
  --min-instances 0 \
  --max-instances 10 \
  --no-allow-unauthenticated

# Deploy MCP Server (if enabled)
gcloud run deploy mcp-server \
  --image northamerica-northeast1-docker.pkg.dev/staging-neurascale/neural-engine-staging/mcp-server:latest \
  --platform managed \
  --region northamerica-northeast1 \
  --service-account neural-ingestion-stag@staging-neurascale.iam.gserviceaccount.com \
  --memory 512Mi \
  --cpu 1 \
  --min-instances 0 \
  --max-instances 3

Configure Database and Secrets

# Get Cloud SQL instance connection name
INSTANCE_CONNECTION=$(gcloud sql instances describe staging-neural-postgres \
  --format='value(connectionName)')

# Update Cloud Run service with Cloud SQL proxy
gcloud run services update neural-engine \
  --add-cloudsql-instances=$INSTANCE_CONNECTION \
  --region northamerica-northeast1

# Create secrets for database credentials
echo -n "postgresql://neural_user:PASSWORD@localhost/neural_db?host=/cloudsql/$INSTANCE_CONNECTION" | \
  gcloud secrets create db-connection-string --data-file=-

# Grant access to secrets
gcloud secrets add-iam-policy-binding db-connection-string \
  --member="serviceAccount:neural-ingestion-stag@staging-neurascale.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# Update Cloud Run to use secrets
gcloud run services update neural-engine \
  --update-secrets=DATABASE_URL=db-connection-string:latest \
  --region northamerica-northeast1

Deploy Cloud Functions for Signal Processing

# Deploy function for each signal type
for SIGNAL in eeg emg lfp spikes ecog accelerometer; do
  gcloud functions deploy process-${SIGNAL}-staging \
    --gen2 \
    --runtime python312 \
    --region northamerica-northeast1 \
    --source neural-engine/functions/signal_processor \
    --entry-point process_signal \
    --trigger-topic neural-data-${SIGNAL}-staging \
    --service-account neural-ingestion-stag@staging-neurascale.iam.gserviceaccount.com \
    --set-env-vars SIGNAL_TYPE=${SIGNAL},ENV=staging \
    --memory 512MB \
    --timeout 60s
done

Verify Deployment

# Get Cloud Run service URL
SERVICE_URL=$(gcloud run services describe neural-engine \
  --region=northamerica-northeast1 \
  --format='value(status.url)')

# Get auth token
AUTH_TOKEN=$(gcloud auth print-identity-token)

# Health check
curl -H "Authorization: Bearer $AUTH_TOKEN" $SERVICE_URL/health

# Check BigQuery datasets
bq ls --project_id=staging-neurascale

# Check Pub/Sub topics
gcloud pubsub topics list --project=staging-neurascale

# Verify Cloud SQL
gcloud sql instances describe staging-neural-postgres

Production Deployment

Production uses GKE for the Neural Engine with enhanced security and high availability.

Setup Production Environment

# Set production project
gcloud config set project production-neurascale

# Verify APIs are enabled (should be done via Terraform)
gcloud services list --enabled

Deploy Infrastructure with Terraform

cd neural-engine/terraform

# Initialize for production with approval workflow
terraform init -backend-config=backend-configs/production.hcl

# Review production configuration
cat environments/production.tfvars

# Plan deployment (save plan for review)
terraform plan -var-file="environments/production.tfvars" -out=production.tfplan

# Apply after approval
terraform apply production.tfplan

Production Configuration Highlights:

# From production.tfvars
project_id  = "production-neurascale"
environment = "production"
region      = "northamerica-northeast1"

# GKE Configuration
enable_gke_cluster       = true
gke_general_machine_type = "n2-standard-4"
gke_neural_machine_type  = "n2-highmem-8"
enable_gpu_pool          = true
gpu_type                 = "nvidia-tesla-t4"

# High Availability
enable_db_high_availability = true
db_tier                     = "db-n1-standard-4"
redis_tier                  = "STANDARD_HA"

# Enhanced Security
enable_enhanced_security    = true
enable_kms_encryption       = true
enable_binary_authorization = true
enable_vpc_service_controls = true

# Data Retention
data_retention_days = 2555 # 7 years for HIPAA

Production Infrastructure Created:

  • GKE Cluster: Auto-scaling node pools with GPU support
  • Cloud SQL PostgreSQL: HA configuration with automated backups
  • BigQuery: Partitioned tables for analytics with 7-year retention
  • Redis (Memorystore): High-availability caching
  • Cloud KMS: Customer-managed encryption keys
  • VPC Service Controls: Security perimeter for data protection
  • Artifact Registry: Production image repository

Connect to GKE Cluster

# Get cluster credentials
gcloud container clusters get-credentials neural-engine-prod \
  --region northamerica-northeast1

# Verify connection
kubectl get nodes

Deploy with Helm

cd kubernetes/helm

# Add dependencies
helm dependency update neural-engine/

# Install Neural Engine
helm install neural-engine ./neural-engine/ \
  --namespace neural-engine \
  --create-namespace \
  --values neural-engine/values-production.yaml \
  --set image.tag=$IMAGE_TAG \
  --set database.host=$DB_HOST \
  --set bigtable.project=$GCP_PROJECT

Configure Production Secrets

# Create Kubernetes secrets
kubectl create secret generic neural-engine-secrets \
  --from-literal=db-password=$DB_PASSWORD \
  --from-literal=jwt-secret=$JWT_SECRET \
  --from-literal=api-key=$API_KEY \
  --namespace neural-engine

# Create TLS certificate secret
kubectl create secret tls neural-engine-tls \
  --cert=tls.crt \
  --key=tls.key \
  --namespace neural-engine

Setup Monitoring

# Deploy monitoring stack
kubectl apply -f infrastructure/k8s/monitoring/

# Port forward to Grafana
kubectl port-forward svc/grafana 3000:3000 -n monitoring

# Access Grafana at http://localhost:3000
# Default: admin/admin

Configure Auto-scaling

# Apply HPA (Horizontal Pod Autoscaler)
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: neural-engine-hpa
  namespace: neural-engine
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: neural-engine
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
EOF

Verify Production Deployment

# Check deployment status
kubectl get deployments -n neural-engine

# Check pods
kubectl get pods -n neural-engine

# Check services
kubectl get services -n neural-engine

# Check ingress
kubectl get ingress -n neural-engine

# View logs
kubectl logs -f deployment/neural-engine -n neural-engine

# Test external access
curl https://api.neurascale.io/health

Pre-commit Hooks and Quality Checks

NeuraScale enforces code quality through comprehensive pre-commit hooks:

Pre-commit Configuration

# .pre-commit-config.yaml highlights
repos:
  # General code quality
  - repo: https://github.com/pre-commit/pre-commit-hooks
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
        args: ['--maxkb=1000']
      - id: detect-private-key

  # Security scanning
  - repo: https://github.com/gitleaks/gitleaks
    hooks:
      - id: gitleaks

  # Frontend formatting
  - repo: https://github.com/pre-commit/mirrors-prettier
    hooks:
      - id: prettier
        files: \.(js|jsx|ts|tsx|json|css|scss|md)$

  # Python formatting (neural-engine)
  - repo: local
    hooks:
      - id: black-neural-engine
        name: black
        entry: scripts/dev-tools/run-black.sh
        language: system
        types: [python]
        files: ^neural-engine/.*\.py$

  # Terraform formatting
  - repo: https://github.com/terraform-linters/tflint
    hooks:
      - id: terraform-fmt
        files: \.tf$

Local Development Setup

# Install pre-commit
pip install pre-commit

# Install hooks
pre-commit install

# Run manually on all files
pre-commit run --all-files

# Skip hooks temporarily (not recommended)
git commit --no-verify -m "Emergency fix"

Python Version: The black formatter requires Python 3.12.11. The run-black.sh script ensures it runs in the correct virtual environment.

CI/CD Pipeline

Build Process

The build process creates Docker images and pushes them to Artifact Registry:

Artifact Registry Structure

northamerica-northeast1-docker.pkg.dev/
├── development-neurascale/
│   └── neural-engine-development/
│       ├── neural-engine:latest
│       ├── neural-engine:v1.2.3
│       └── mcp-server:latest
├── staging-neurascale/
│   └── neural-engine-staging/
│       ├── neural-engine:latest
│       ├── neural-engine:v1.2.3
│       └── mcp-server:latest
└── production-neurascale/
    └── neural-engine-production/
        ├── neural-engine:v1.2.3  # No latest tag in prod
        └── mcp-server:v1.2.3
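The convention shown in the tree (a mutable `:latest` tag in development and staging, pinned version tags only in production) can be captured in a small helper. This is an illustrative sketch of the naming scheme, not repository code:

```python
def image_ref(project: str, env: str, service: str, version: str,
              region: str = "northamerica-northeast1") -> list:
    """Return the tags to push for an image, following the convention above:
    every environment gets the version tag, but only non-production
    environments also get a mutable :latest tag."""
    base = f"{region}-docker.pkg.dev/{project}/neural-engine-{env}/{service}"
    tags = [f"{base}:{version}"]
    if env != "production":
        tags.append(f"{base}:latest")
    return tags

print(image_ref("staging-neurascale", "staging", "neural-engine", "v1.2.3"))
print(image_ref("production-neurascale", "production", "neural-engine", "v1.2.3"))
```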

Deployment Pipeline

Cloud Run Deployment

Used for API services and MCP server:

# Build and push image
docker build -t $REGION-docker.pkg.dev/$PROJECT/neural-engine-$ENV/service:$VERSION .
docker push $REGION-docker.pkg.dev/$PROJECT/neural-engine-$ENV/service:$VERSION

# Deploy new revision
gcloud run deploy service-name \
  --image $REGION-docker.pkg.dev/$PROJECT/neural-engine-$ENV/service:$VERSION \
  --region $REGION \
  --platform managed \
  --revision-suffix v${VERSION//./-}

# Gradual rollout
gcloud run services update-traffic service-name \
  --to-revisions LATEST=10 \
  --region $REGION
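A gradual rollout is just a sequence of `--to-revisions` flag values that shift traffic from the old revision to the new one. The sketch below generates such a sequence; the revision names and step percentages are illustrative, not fixed by the project:

```python
def canary_steps(old: str, new: str, steps=(10, 50, 100)) -> list:
    """Build --to-revisions values for a staged Cloud Run rollout.

    At each step the new revision takes `pct` percent of traffic and the
    old revision keeps the remainder.
    """
    flags = []
    for pct in steps:
        if pct >= 100:
            flags.append(f"{new}=100")
        else:
            flags.append(f"{new}={pct},{old}={100 - pct}")
    return flags

for flag in canary_steps("neural-engine-00042-xyz", "neural-engine-00043-abc"):
    print(f"gcloud run services update-traffic neural-engine --to-revisions {flag}")
```

Between steps you would watch error rates and latency, and roll back by re-routing 100% to the old revision if anything regresses.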

GitHub Actions Workflow (Planned)

# .github/workflows/deploy.yml
name: Deploy NeuraScale

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  REGION: northamerica-northeast1
  STATE_BUCKET: neurascale-terraform-state

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python 3.12.11
        uses: actions/setup-python@v4
        with:
          python-version: '3.12.11'
      - name: Run tests
        run: |
          cd neural-engine
          pip install -r requirements-dev.txt
          pytest tests/

  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v1
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - name: Deploy to environment
        run: |
          # Logic to deploy based on branch
          if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            ENV=production
          elif [[ "${{ github.ref }}" == "refs/heads/develop" ]]; then
            ENV=staging
          else
            ENV=development
          fi
          # Deploy using appropriate method
          ./scripts/deploy.sh $ENV

Deployment Environments

Environment Configuration

Development Environment

| Resource | Configuration | Purpose |
| --- | --- | --- |
| Compute | Docker Compose | Local development |
| Database | PostgreSQL container | Quick setup |
| Time-series | TimescaleDB container | Local testing |
| Cache | Redis container | Fast iteration |
| Message Queue | Pub/Sub emulator | Offline development |
| Storage | Local filesystem | No cloud costs |

# docker-compose.yml
version: '3.8'
services:
  postgresql:
    image: postgres:15
    environment:
      POSTGRES_DB: neurascale_dev
      POSTGRES_USER: neurascale
      POSTGRES_PASSWORD: dev_password
    ports:
      - "5432:5432"

  timescaledb:
    image: timescale/timescaledb:latest-pg15
    environment:
      POSTGRES_DB: timeseries_dev
      POSTGRES_USER: timescale
      POSTGRES_PASSWORD: dev_password
    ports:
      - "5433:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

Data Infrastructure

Cloud SQL PostgreSQL

NeuraScale uses Cloud SQL PostgreSQL for application metadata, session management, and user data:

-- Core tables structure
CREATE SCHEMA IF NOT EXISTS neural;

-- Device registry
CREATE TABLE neural.devices (
    device_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    device_type VARCHAR(50) NOT NULL,
    device_name VARCHAR(255) NOT NULL,
    manufacturer VARCHAR(100),
    model VARCHAR(100),
    serial_number VARCHAR(255) UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Sessions table
CREATE TABLE neural.sessions (
    session_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    device_id UUID REFERENCES neural.devices(device_id),
    patient_id VARCHAR(255), -- De-identified
    start_time TIMESTAMP NOT NULL,
    end_time TIMESTAMP,
    duration_seconds INTEGER,
    channel_count INTEGER,
    sampling_rate FLOAT,
    data_quality_score FLOAT,
    metadata JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Annotations
CREATE TABLE neural.annotations (
    annotation_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    session_id UUID REFERENCES neural.sessions(session_id),
    timestamp TIMESTAMP NOT NULL,
    annotation_type VARCHAR(50),
    description TEXT,
    metadata JSONB,
    created_by VARCHAR(255),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Indexes for performance
CREATE INDEX idx_sessions_patient ON neural.sessions(patient_id);
CREATE INDEX idx_sessions_device ON neural.sessions(device_id);
CREATE INDEX idx_sessions_time ON neural.sessions(start_time, end_time);
CREATE INDEX idx_annotations_session ON neural.annotations(session_id);

BigQuery Analytics

BigQuery serves as the data warehouse for analytics and ML training:

-- Dataset: {environment}_neural_analytics

-- Partitioned sessions table
CREATE TABLE `project.dataset.neural_sessions`
PARTITION BY DATE(session_timestamp)
CLUSTER BY patient_id, device_type, signal_type
AS
SELECT
  session_id,
  session_timestamp,
  patient_id,
  device_type,
  signal_type,
  duration_seconds,
  channel_count,
  sampling_rate,
  data_quality_score,
  annotations
FROM source_data;

-- Aggregated metrics table (hourly partitioned)
CREATE TABLE `project.dataset.neural_metrics`
PARTITION BY TIMESTAMP_TRUNC(timestamp, HOUR)
CLUSTER BY session_id, metric_type
AS
SELECT
  timestamp,
  session_id,
  metric_type,
  channel_id,
  value,
  frequency_band,
  metadata
FROM metrics_stream;

-- ML training dataset
CREATE TABLE `project.dataset.ml_training_data` (
  sample_id STRING,
  features ARRAY<FLOAT64>,
  label STRING,
  split STRING, -- train/validation/test
  created_timestamp TIMESTAMP
);

Data Flow Architecture

Note on Bigtable: While Bigtable is referenced in the documentation as a future option for high-throughput time-series data, the current implementation uses Cloud SQL for metadata and BigQuery for analytics. Bigtable integration is planned for scenarios requiring sub-millisecond latency on massive datasets.

CI/CD Pipeline

Complete GitHub Actions Workflow

# .github/workflows/deploy.yml
name: Deploy NeuraScale

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  # Test Console (Frontend)
  test-console:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 9
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'pnpm'
          cache-dependency-path: console/pnpm-lock.yaml
      - name: Install dependencies
        run: |
          cd console
          pnpm install
      - name: Run tests
        run: |
          cd console
          pnpm test
          pnpm lint
          pnpm type-check
      - name: Build
        run: |
          cd console
          pnpm build

  # Test Neural Engine (Backend)
  test-engine:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Python 3.12.11
        uses: actions/setup-python@v4
        with:
          python-version: '3.12.11'
      - name: Install dependencies
        run: |
          cd neural-engine
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Run tests
        run: |
          cd neural-engine
          pytest tests/ --cov=src --cov-report=xml
          black . --check
          flake8 .
          mypy .
      - name: Upload coverage
        uses: codecov/codecov-action@v3

  # Deploy Console to Vercel
  deploy-console:
    needs: test-console
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 9
      - name: Deploy to Vercel (Preview)
        if: github.ref != 'refs/heads/main'
        run: |
          cd console
          npx vercel --token=${{ secrets.VERCEL_TOKEN }} \
            --scope=${{ secrets.VERCEL_SCOPE }}
      - name: Deploy to Vercel (Production)
        if: github.ref == 'refs/heads/main'
        run: |
          cd console
          npx vercel --prod --token=${{ secrets.VERCEL_TOKEN }} \
            --scope=${{ secrets.VERCEL_SCOPE }}

  # Deploy Neural Engine to GCP using Workload Identity Federation
  deploy-engine-staging:
    if: github.ref == 'refs/heads/develop'
    needs: test-engine
    runs-on: ubuntu-latest
    permissions:
      contents: 'read'
      id-token: 'write' # Required for Workload Identity Federation
    steps:
      - uses: actions/checkout@v4
      - name: Authenticate to Google Cloud with Workload Identity
        uses: google-github-actions/auth@v1
        with:
          workload_identity_provider: 'projects/555656387124/locations/global/workloadIdentityPools/github-actions-pool/providers/github-actions-provider'
          service_account: 'github-actions@neurascale.iam.gserviceaccount.com'
      - name: Deploy to Cloud Run
        run: |
          cd neural-engine
          gcloud run deploy neural-engine \
            --source . \
            --platform managed \
            --region northamerica-northeast1 \
            --project staging-neurascale \
            --set-env-vars "ENV=staging"

  deploy-engine-production:
    if: github.ref == 'refs/heads/main'
    needs: test-engine
    runs-on: ubuntu-latest
    environment: production
    permissions:
      contents: 'read'
      id-token: 'write' # Required for Workload Identity Federation
    steps:
      - uses: actions/checkout@v4
      - name: Authenticate to Google Cloud with Workload Identity
        uses: google-github-actions/auth@v1
        with:
          workload_identity_provider: 'projects/555656387124/locations/global/workloadIdentityPools/github-actions-pool/providers/github-actions-provider'
          service_account: 'github-actions@neurascale.iam.gserviceaccount.com'
      - name: Setup Helm
        uses: azure/setup-helm@v3
        with:
          version: '3.12.0'
      - name: Deploy to GKE
        run: |
          gcloud container clusters get-credentials neural-engine-prod \
            --region northamerica-northeast1 \
            --project production-neurascale
          helm upgrade --install neural-engine \
            kubernetes/helm/neural-engine/ \
            --namespace neural-engine \
            --create-namespace \
            --values kubernetes/helm/neural-engine/values-production.yaml \
            --set image.tag=${{ github.sha }}

Vercel GitHub Integration

For automatic deployments, connect your GitHub repository to Vercel:

  1. Go to vercel.com/new 
  2. Import your repository
  3. Configure:
    • Root Directory: console
    • Framework Preset: Next.js
    • Build Command: pnpm build
    • Install Command: pnpm install

Environment Secrets

Configure these secrets in GitHub:

# Vercel Deployment
VERCEL_TOKEN: your-vercel-token
VERCEL_SCOPE: your-team-or-username

# GCP Deployment
GCP_SA_KEY_STAGING: staging-service-account-key
GCP_SA_KEY_PRODUCTION: production-service-account-key

# Firebase Admin (for API routes)
FIREBASE_ADMIN_KEY: firebase-admin-sdk-key

Rollback Procedures

Cloud Run Rollback

# List revisions
gcloud run revisions list --service=neural-engine

# Rollback to previous revision
gcloud run services update-traffic neural-engine \
  --to-revisions=neural-engine-00042-xyz=100

# Zero-downtime rollback
gcloud run services update-traffic neural-engine \
  --to-revisions=neural-engine-00042-xyz=50,neural-engine-00043-abc=50

# Complete rollback after verification
gcloud run services update-traffic neural-engine \
  --to-revisions=neural-engine-00042-xyz=100

GKE Rollback

# Check rollout history
kubectl rollout history deployment/neural-engine -n neural-engine

# Rollback to previous version
kubectl rollout undo deployment/neural-engine -n neural-engine

# Rollback to specific revision
kubectl rollout undo deployment/neural-engine \
  --to-revision=2 -n neural-engine

# Monitor rollback status
kubectl rollout status deployment/neural-engine -n neural-engine

Helm Rollback

# List releases
helm list -n neural-engine

# Check release history
helm history neural-engine -n neural-engine

# Rollback to previous release
helm rollback neural-engine -n neural-engine

# Rollback to specific revision
helm rollback neural-engine 2 -n neural-engine

Monitoring and Observability

Health Checks

# Application health
curl https://api.neurascale.io/health

# Database connectivity
curl https://api.neurascale.io/health/database

# External dependencies
curl https://api.neurascale.io/health/dependencies

# Kubernetes liveness probe
kubectl describe pod neural-engine-xxx -n neural-engine

Key Metrics to Monitor

| Metric | Threshold | Alert |
| --- | --- | --- |
| Response Time | p99 < 500ms | Critical |
| Error Rate | < 1% | Warning |
| CPU Utilization | < 80% | Warning |
| Memory Usage | < 85% | Warning |
| Disk Usage | < 90% | Critical |
| Active Connections | < 1000 | Info |
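The thresholds in the table can be encoded as a small evaluator that maps metric samples to alert severities. Threshold values mirror the table; the metric key names and severity routing below are illustrative, not the project's actual alerting config:

```python
# Thresholds mirror the table above; key names are illustrative.
THRESHOLDS = {
    "p99_response_ms": (500, "critical"),
    "error_rate_pct": (1, "warning"),
    "cpu_pct": (80, "warning"),
    "memory_pct": (85, "warning"),
    "disk_pct": (90, "critical"),
    "active_connections": (1000, "info"),
}

def evaluate(metrics: dict) -> list:
    """Return (metric, severity) for every metric at or above its threshold."""
    return [
        (name, THRESHOLDS[name][1])
        for name, value in metrics.items()
        if name in THRESHOLDS and value >= THRESHOLDS[name][0]
    ]

print(evaluate({"p99_response_ms": 620, "cpu_pct": 45, "disk_pct": 92}))
```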

Log Aggregation

# View Cloud Run logs
gcloud logging read "resource.type=cloud_run_revision" \
  --project=production-neurascale \
  --format="value(textPayload)"

# View GKE logs
kubectl logs -f deployment/neural-engine -n neural-engine

# Search logs in Cloud Logging
gcloud logging read 'resource.type="k8s_container" resource.labels.namespace_name="neural-engine"' \
  --format="value(textPayload)"

For detailed monitoring setup, see our Monitoring Guide with Grafana dashboards and alerting rules.

Security Considerations

Network Security

  • Private GKE cluster with authorized networks
  • VPC firewall rules restricting access
  • Cloud Armor for DDoS protection
  • TLS termination at load balancer

Data Security

  • Encryption at rest using Cloud KMS
  • Encryption in transit with TLS 1.3
  • Secret management with Secret Manager
  • IAM roles with least privilege

Compliance

  • HIPAA compliance for medical data
  • SOC 2 Type II certification
  • Audit logging for all data access
  • Data retention policies

Disaster Recovery

Backup Strategy

# Database backups (automated)
gcloud sql backups list --instance=neurascale-prod

# Manual backup
gcloud sql backups create --instance=neurascale-prod

# Bigtable backups
gcloud bigtable backups create backup-20240115 \
  --instance=neurascale-prod \
  --cluster=neurascale-cluster \
  --table=neural_data

# Application data backup
gsutil -m cp -r gs://neurascale-prod-data \
  gs://neurascale-backup/$(date +%Y%m%d)

Recovery Procedures

  1. Database Recovery

    # Restore from backup
    gcloud sql backups restore BACKUP_ID \
      --restore-instance=neurascale-prod

  2. Application Recovery

    # Redeploy last known good version
    helm rollback neural-engine -n neural-engine

  3. Data Recovery

    # Restore from backup
    gsutil -m cp -r gs://neurascale-backup/20240115 \
      gs://neurascale-prod-data

Troubleshooting

Common Issues

Issue: Deployment fails with resource limits

Solution:

# Check resource quotas
gcloud compute project-info describe --project=production-neurascale

# Request quota increase if needed
gcloud alpha compute quotas update

Issue: Cannot connect to Cloud SQL

Solution:

# Check Cloud SQL status
gcloud sql instances describe neurascale-prod

# Verify network connectivity
gcloud sql instances describe neurascale-prod \
  --format="get(ipAddresses[0].ipAddress)"

# Check IAM permissions
gcloud projects get-iam-policy production-neurascale

Issue: Pods fail to start in GKE

Solution:

# Check pod status
kubectl describe pod neural-engine-xxx -n neural-engine

# Check events
kubectl get events -n neural-engine --sort-by='.lastTimestamp'

# Check resource limits
kubectl top pods -n neural-engine

Getting Help

Ready to deploy NeuraScale? Start with local development and work your way up to production!
