
Get Started

This guide will help you get up and running with NeuraScale in minutes.

Prerequisites

Before you begin, ensure you have the following installed:

Core Requirements

For Console Development

For GCP Deployment

Python version must be exactly 3.12.11. Other versions may cause compatibility issues.
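Because the pin is strict, it can help to verify the interpreter programmatically before installing anything. A minimal sketch (the `version_ok` helper is illustrative, not part of the repository):

```python
import sys

# Version the NeuraScale tooling pins to (see scripts/dev-tools/setup-venvs.sh).
REQUIRED = (3, 12, 11)

def version_ok(info=sys.version_info) -> bool:
    """True only when the interpreter matches the pinned 3.12.11 exactly."""
    return tuple(info)[:3] == REQUIRED

# Report the running interpreter and whether it matches the pin.
print("this interpreter:", sys.version.split()[0], "ok:", version_ok())
```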

GCP Setup

Install Google Cloud SDK

# Using Homebrew
brew install --cask google-cloud-sdk

# Or download from https://cloud.google.com/sdk/docs/install

Authenticate with GCP

# Login to GCP
gcloud auth login

# Set default project (use your environment)
gcloud config set project development-neurascale

# Configure application default credentials
gcloud auth application-default login

Install Terraform

# Using Homebrew
brew install terraform

# Verify installation
terraform --version

Configure GCP Permissions

Ensure your GCP account has the following roles:

  • roles/owner or roles/editor for development
  • roles/iam.serviceAccountUser for service account impersonation
  • roles/storage.admin for Terraform state management

For production deployments, use the service account:

gcloud iam service-accounts keys create ~/key.json \
  --iam-account=github-actions@neurascale.iam.gserviceaccount.com

export GOOGLE_APPLICATION_CREDENTIALS=~/key.json

Firebase Setup

Install Firebase CLI

npm install -g firebase-tools

# Verify installation
firebase --version

Initialize Firebase Project

# Login to Firebase
firebase login

# Initialize Firebase in the console directory
cd console
firebase init

Select the following services:

  • Authentication - For user management
  • Hosting - For Vercel integration (optional)
  • Functions - For server-side auth operations

Configure Firebase Authentication

  1. Go to Firebase Console 
  2. Select your project or create a new one
  3. Navigate to Authentication → Sign-in method
  4. Enable the following providers:
    • Email/Password
    • Google
    • GitHub (optional)

Set up Environment Variables

Create .env.local in the console directory:

# Firebase Configuration
NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-auth-domain
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your-storage-bucket
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=your-sender-id
NEXT_PUBLIC_FIREBASE_APP_ID=your-app-id

# Firebase Admin SDK (for API routes)
FIREBASE_ADMIN_PROJECT_ID=your-project-id
FIREBASE_ADMIN_CLIENT_EMAIL=your-client-email
FIREBASE_ADMIN_PRIVATE_KEY=your-private-key
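A quick way to catch a missing or empty variable before the console fails at runtime is to validate `.env.local` against the template above. A sketch, assuming the simple `KEY=value` format shown (the helper functions are illustrative):

```python
# Keys the console's .env.local is expected to define (from the template above).
REQUIRED_KEYS = [
    "NEXT_PUBLIC_FIREBASE_API_KEY",
    "NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN",
    "NEXT_PUBLIC_FIREBASE_PROJECT_ID",
    "NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET",
    "NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID",
    "NEXT_PUBLIC_FIREBASE_APP_ID",
    "FIREBASE_ADMIN_PROJECT_ID",
    "FIREBASE_ADMIN_CLIENT_EMAIL",
    "FIREBASE_ADMIN_PRIVATE_KEY",
]

def parse_env(text: str) -> dict:
    """Parse KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(env: dict) -> list:
    """Return required keys that are absent or left empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]
```

Run it against the file with `missing_keys(parse_env(open("console/.env.local").read()))`; an empty list means all keys are set.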

NeonDB Setup

Create Neon Account

  1. Sign up at neon.tech 
  2. Create a new project for NeuraScale
  3. Choose your region (preferably same as GCP region)

Configure Database Connection

Get your connection string from the Neon dashboard and add to .env.local:

# NeonDB Configuration
DATABASE_URL=postgresql://user:password@host/neurascale?sslmode=require
DIRECT_URL=postgresql://user:password@host/neurascale?sslmode=require
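A malformed connection string (or a missing `sslmode=require`) is a common source of confusing connection errors. A small sanity check, using only the URL shape shown above (the `check_neon_url` helper is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def check_neon_url(url: str) -> list:
    """Return a list of problems with a Neon connection string (empty = OK)."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("postgresql", "postgres"):
        problems.append("scheme should be postgresql://")
    if not parsed.hostname:
        problems.append("missing host")
    # Neon enforces TLS, so the query string must carry sslmode=require.
    query = parse_qs(parsed.query)
    if query.get("sslmode") != ["require"]:
        problems.append("missing sslmode=require")
    return problems
```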

Initialize Database Schema

cd console

# Install Prisma CLI
pnpm add -D prisma @prisma/client

# Initialize Prisma
npx prisma init

# Push schema to database
npx prisma db push

# Generate Prisma client
npx prisma generate

Set up Database Branching (Optional)

For development workflows:

# Install Neon CLI
npm install -g neonctl

# Authenticate
neonctl auth

# Create a development branch
neonctl branches create --name dev --project-id your-project-id

Quick Start

Clone the Repository

git clone https://github.com/identity-wael/neurascale.git
cd neurascale

Set Up Virtual Environments

Run our automated setup script to configure Python environments:

./scripts/dev-tools/setup-venvs.sh

This script will:

  • Verify Python 3.12.11 is available
  • Create virtual environments for all components
  • Install required dependencies

Start Infrastructure Services

Choose your deployment method:

Local Development

Launch the required infrastructure services locally:

docker-compose up -d

This starts:

  • TimescaleDB for time-series data
  • Redis for caching and real-time features
  • PostgreSQL for application data
  • MCP Server for AI assistant integration
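For orientation, the services above map onto a compose file roughly like the following. This is an illustrative sketch only; the service names, images, and port mappings are assumptions, so check the repository's `docker-compose.yml` for the authoritative definitions:

```yaml
# Illustrative sketch; see the repository's docker-compose.yml for the real file.
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg15
    ports:
      - "5433:5432"
  redis:
    image: redis:7
    ports:
      - "6379:6379"
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: neurascale
    ports:
      - "5432:5432"
```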

GCP Deployment

For production GCP deployment, we use a multi-environment Terraform setup:

cd neural-engine/terraform

# Initialize with environment-specific backend
terraform init -backend-config=backend-configs/development.hcl

# Plan with environment variables
terraform plan -var-file="environments/development.tfvars"

# Apply infrastructure
terraform apply -var-file="environments/development.tfvars"

GCP Resources Created:

  • Bigtable: For high-performance time-series neural data storage
  • Pub/Sub: Topics and subscriptions for each signal type (EEG, EMG, etc.)
  • Cloud Run: MCP Server deployment
  • Artifact Registry: Docker image storage
  • Cloud SQL: PostgreSQL for metadata and session data
  • Secret Manager: API keys and sensitive configuration
  • GKE: Kubernetes cluster for Neural Engine (production only)

Start the Neural Engine

cd neural-engine
source venv/bin/activate
python -m src.main

The Neural Engine API will be available at: http://localhost:8000 

Start the Console

The NeuraScale Console provides a web interface for device management and data visualization.

cd console

# Install dependencies
pnpm install

# Run database migrations
npx prisma migrate dev

# Start development server
pnpm dev

The console will be available at: http://localhost:3000 

Console Features:

  • Real-time device monitoring
  • EEG data visualization
  • Session recording and playback
  • User authentication via Firebase
  • Experiment management

Test Your Installation

Create a synthetic device to verify everything is working:

# Create a test device
curl -X POST http://localhost:8000/api/v1/devices \
  -H "Content-Type: application/json" \
  -d '{"device_id": "test-device", "device_type": "synthetic"}'

# Start streaming data
curl -X POST http://localhost:8000/api/v1/devices/test-device/stream/start

You should see synthetic neural data streaming in the logs.
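The same two calls can be made from Python. The endpoint paths below are taken from the curl commands above; the helper functions themselves are illustrative. Building the requests separately from sending them makes the payloads easy to inspect before anything hits the network:

```python
import json
from urllib import request

API = "http://localhost:8000/api/v1"  # default local Neural Engine address

def create_device_request(device_id: str, device_type: str) -> request.Request:
    """Build the POST that registers a device (same call as the curl above)."""
    body = json.dumps({"device_id": device_id, "device_type": device_type})
    return request.Request(
        f"{API}/devices",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def start_stream_request(device_id: str) -> request.Request:
    """Build the POST that starts streaming for an existing device."""
    return request.Request(f"{API}/devices/{device_id}/stream/start", method="POST")
```

To actually send one, pass it to `urllib.request.urlopen()` while the engine is running.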

Deployment Environments

NeuraScale uses a multi-environment GCP setup:

| Environment | Project ID              | Branch      | Purpose                            |
|-------------|-------------------------|-------------|------------------------------------|
| Development | development-neurascale  | develop     | Feature development and testing    |
| Staging     | staging-neurascale      | PR branches | Integration testing and validation |
| Production  | production-neurascale   | main        | Live production environment        |
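The branch-to-project mapping in the table can be expressed as a small lookup, which is handy in deployment scripts. The mapping comes straight from the table; the helper itself is illustrative:

```python
# Branch -> GCP project, per the environment table above.
ENVIRONMENTS = {
    "develop": "development-neurascale",
    "main": "production-neurascale",
}

def project_for_branch(branch: str) -> str:
    """Resolve the GCP project a branch deploys to; PR branches go to staging."""
    return ENVIRONMENTS.get(branch, "staging-neurascale")
```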

Each environment has:

  • Isolated GCP project with separate billing
  • Environment-specific Terraform state in GCS
  • Automated deployment via GitHub Actions
  • Cross-project IAM for service accounts

Project Structure

neurascale/
├── neural-engine/              # Core neural data processing engine
│   ├── src/
│   │   ├── api/                # FastAPI endpoints
│   │   ├── devices/            # Device interfaces (OpenBCI, Emotiv, etc.)
│   │   ├── processing/         # Signal processing pipelines
│   │   ├── classification/     # Real-time ML models
│   │   └── neural_ledger/      # HIPAA-compliant audit trail
│   ├── terraform/              # GCP infrastructure as code
│   │   ├── modules/            # Reusable Terraform modules
│   │   │   ├── neural-ingestion/ # Pub/Sub, Bigtable, Cloud Functions
│   │   │   ├── mcp-server/       # Cloud Run MCP deployment
│   │   │   ├── networking/       # VPC and service connections
│   │   │   ├── gke/              # Kubernetes cluster config
│   │   │   └── database/         # Cloud SQL, Redis, BigQuery
│   │   ├── environments/       # Environment-specific configs
│   │   └── backend-configs/    # Terraform state backends
│   └── tests/                  # Test suite
├── console/                    # NeuraScale Console UI
├── docs-nextra/                # Documentation (you are here!)
├── infrastructure/             # Kubernetes & deployment configs
│   ├── cross-project-iam/      # IAM setup across GCP projects
│   └── k8s/                    # Kubernetes manifests
├── kubernetes/                 # Helm charts
│   └── helm/neural-engine/     # Neural Engine Helm chart
├── mcp-server/                 # Model Context Protocol server
└── scripts/                    # Development tools and utilities

Next Steps

Now that you have NeuraScale running, explore:

Connect a Device

Learn how to connect real BCI devices

Device Integration →

Build Your First App

Create a simple BCI application

Tutorial →

API Reference

Explore the complete API documentation

API Docs →

Architecture Deep Dive

Understand the system architecture

Architecture →

Common Commands

# Activate virtual environment
source neural-engine/venv/bin/activate

# Start the engine
python -m src.main

# Run tests
pytest tests/

# Format code
black .

# Lint code
flake8 .

# Type checking
mypy .

Troubleshooting

If you see “Port 3000/8000 is already in use”:

# Find and kill the process
lsof -ti:3000 | xargs kill -9

# Or use a different port
PORT=3001 pnpm dev
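If you would rather check a port from Python before killing anything, a short socket probe works on any platform (the `port_in_use` helper is illustrative):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds, i.e. a listener exists.
        return sock.connect_ex((host, port)) == 0
```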

If you encounter Python version errors:

# Check your Python version
python --version  # Must be exactly 3.12.11

# Run the setup script to fix
./scripts/dev-tools/setup-venvs.sh

For more troubleshooting help, see our Troubleshooting Guide or ask in GitHub Discussions.

Getting Help

Ready to build something amazing? Let’s bridge mind and world together!
