Get Started
This guide will help you get up and running with NeuraScale in minutes.
Prerequisites
Before you begin, ensure you have the following installed:
Core Requirements
- Python 3.12.11 (exact version required) - Download
- Node.js 18.x or higher - Download
- pnpm 9.x or higher - Installation guide
- Docker & Docker Compose - Download
- Git - Download
For Console Development
- Firebase CLI - Installation guide
- Vercel CLI - Installation guide
- Neon CLI (optional) - Installation guide
For GCP Deployment
- Google Cloud SDK - Installation guide
- Terraform 1.5.x or higher - Download
- kubectl - Installation guide
- Helm 3.x - Installation guide
Python version must be exactly 3.12.11. Other versions may cause compatibility issues.
GCP Setup
Install Google Cloud SDK
macOS
# Using Homebrew
brew install --cask google-cloud-sdk
# Or download from https://cloud.google.com/sdk/docs/install
Authenticate with GCP
# Login to GCP
gcloud auth login
# Set default project (use your environment)
gcloud config set project development-neurascale
# Configure application default credentials
gcloud auth application-default login
Install Terraform
macOS
# Using Homebrew
brew install terraform
# Verify installation
terraform --version
Configure GCP Permissions
Ensure your GCP account has the following roles:
- `roles/owner` or `roles/editor` for development
- `roles/iam.serviceAccountUser` for service account impersonation
- `roles/storage.admin` for Terraform state management
For production deployments, use the service account:
gcloud iam service-accounts keys create ~/key.json \
--iam-account=github-actions@neurascale.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=~/key.json
Firebase Setup
Install Firebase CLI
npm
npm install -g firebase-tools
# Verify installation
firebase --version
Initialize Firebase Project
# Login to Firebase
firebase login
# Initialize Firebase in the console directory
cd console
firebase init
Select the following services:
- Authentication - For user management
- Hosting - For Vercel integration (optional)
- Functions - For server-side auth operations
Configure Firebase Authentication
- Go to Firebase Console
- Select your project or create a new one
- Navigate to Authentication → Sign-in method
- Enable the following providers:
- Email/Password
- GitHub (optional)
Set up Environment Variables
Create `.env.local` in the console directory:
# Firebase Configuration
NEXT_PUBLIC_FIREBASE_API_KEY=your-api-key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-auth-domain
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your-storage-bucket
NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=your-sender-id
NEXT_PUBLIC_FIREBASE_APP_ID=your-app-id
# Firebase Admin SDK (for API routes)
FIREBASE_ADMIN_PROJECT_ID=your-project-id
FIREBASE_ADMIN_CLIENT_EMAIL=your-client-email
FIREBASE_ADMIN_PRIVATE_KEY=your-private-key
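A missing variable in `.env.local` is a common source of Firebase initialization errors. As a quick sanity check, a small Python sketch like the following can flag undefined variables before you start the dev server (the variable names mirror the list above; everything else is illustrative):

```python
# Verify that a .env.local file defines every Firebase variable the
# console expects. Lines starting with "#" and blank lines are ignored.
REQUIRED_VARS = {
    "NEXT_PUBLIC_FIREBASE_API_KEY",
    "NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN",
    "NEXT_PUBLIC_FIREBASE_PROJECT_ID",
    "NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET",
    "NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID",
    "NEXT_PUBLIC_FIREBASE_APP_ID",
    "FIREBASE_ADMIN_PROJECT_ID",
    "FIREBASE_ADMIN_CLIENT_EMAIL",
    "FIREBASE_ADMIN_PRIVATE_KEY",
}

def missing_vars(env_text: str) -> set[str]:
    """Return the required variables not defined in the given .env contents."""
    defined = set()
    for line in env_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            defined.add(line.split("=", 1)[0].strip())
    return REQUIRED_VARS - defined
```

Run it against the file contents (`missing_vars(Path("console/.env.local").read_text())`) and fix anything it reports before starting the console.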
NeonDB Setup
Create Neon Account
- Sign up at neon.tech
- Create a new project for NeuraScale
- Choose your region (preferably same as GCP region)
Configure Database Connection
Get your connection string from the Neon dashboard and add it to `.env.local`:
# NeonDB Configuration
DATABASE_URL=postgresql://user:password@host/neurascale?sslmode=require
DIRECT_URL=postgresql://user:password@host/neurascale?sslmode=require
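A malformed connection string (or a missing `sslmode=require`, which Neon needs) tends to surface only later, as an opaque Prisma error. A minimal sketch using only the standard library can validate the URL shape up front:

```python
from urllib.parse import urlparse, parse_qs

def check_neon_url(url: str) -> None:
    """Sanity-check a Neon connection string before running migrations."""
    parsed = urlparse(url)
    assert parsed.scheme == "postgresql", "expected a postgresql:// URL"
    assert parsed.hostname, "connection string is missing a host"
    params = parse_qs(parsed.query)
    assert params.get("sslmode") == ["require"], "Neon requires sslmode=require"
```

Call `check_neon_url` on both `DATABASE_URL` and `DIRECT_URL`; a failed assertion tells you which part of the string to fix.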
Initialize Database Schema
cd console
# Install Prisma CLI
pnpm add -D prisma @prisma/client
# Initialize Prisma
npx prisma init
# Push schema to database
npx prisma db push
# Generate Prisma client
npx prisma generate
Set up Database Branching (Optional)
For development workflows:
# Install Neon CLI
npm install -g neonctl
# Authenticate
neonctl auth
# Create a development branch
neonctl branches create --name dev --project-id your-project-id
Quick Start
Clone the Repository
git clone https://github.com/identity-wael/neurascale.git
cd neurascale
Set Up Virtual Environments
Run our automated setup script to configure Python environments:
./scripts/dev-tools/setup-venvs.sh
This script will:
- Verify Python 3.12.11 is available
- Create virtual environments for all components
- Install required dependencies
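In essence, the script automates the steps below. This is a hedged sketch of that flow using the standard `venv` module; the actual logic in `scripts/dev-tools/setup-venvs.sh` may differ:

```python
import sys
import venv
from pathlib import Path

REQUIRED = (3, 12, 11)  # the pinned version from Prerequisites

def version_matches(required=REQUIRED, actual=None) -> bool:
    """True when the (given or running) interpreter matches the pin."""
    return tuple(actual or sys.version_info[:3]) == tuple(required)

def create_component_venv(component_dir: str) -> Path:
    """Create <component>/venv with pip, refusing on a version mismatch."""
    if not version_matches():
        raise RuntimeError(f"Python {'.'.join(map(str, REQUIRED))} is required")
    venv_path = Path(component_dir) / "venv"
    venv.EnvBuilder(with_pip=True, clear=True).create(venv_path)
    return venv_path
```

The strict version check is why the script fails fast on the wrong interpreter rather than producing an environment with subtle dependency issues.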
Start Infrastructure Services
Choose your deployment method:
Local Development
Launch the required infrastructure services locally:
docker-compose up -d
This starts:
- TimescaleDB for time-series data
- Redis for caching and real-time features
- PostgreSQL for application data
- MCP Server for AI assistant integration
GCP Deployment
For production GCP deployment, we use a multi-environment Terraform setup:
cd neural-engine/terraform
# Initialize with environment-specific backend
terraform init -backend-config=backend-configs/development.hcl
# Plan with environment variables
terraform plan -var-file="environments/development.tfvars"
# Apply infrastructure
terraform apply -var-file="environments/development.tfvars"
GCP Resources Created:
- Bigtable: For high-performance time-series neural data storage
- Pub/Sub: Topics and subscriptions for each signal type (EEG, EMG, etc.)
- Cloud Run: MCP Server deployment
- Artifact Registry: Docker image storage
- Cloud SQL: PostgreSQL for metadata and session data
- Secret Manager: API keys and sensitive configuration
- GKE: Kubernetes cluster for Neural Engine (production only)
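Since Pub/Sub topics are created per signal type, client code can derive topic names programmatically. The `<env>-neural-<signal>` pattern below is an illustrative assumption; the authoritative names are defined in the `neural-ingestion` Terraform module:

```python
# Hypothetical per-signal-type Pub/Sub topic naming. Check the
# neural-ingestion Terraform module for the actual convention.
SIGNAL_TYPES = ["eeg", "emg", "ecg", "accelerometer"]

def topic_names(environment: str, signals=SIGNAL_TYPES) -> list[str]:
    """Derive one topic name per signal type for a given environment."""
    return [f"{environment}-neural-{signal}" for signal in signals]
```

Generating names from one convention keeps Terraform, publishers, and subscribers in agreement as new signal types are added.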
Start the Neural Engine
Development
cd neural-engine
source venv/bin/activate
python -m src.main
The Neural Engine API will be available at: http://localhost:8000
Start the Console
The NeuraScale Console provides a web interface for device management and data visualization.
Local Development
cd console
# Install dependencies
pnpm install
# Run database migrations
npx prisma migrate dev
# Start development server
pnpm dev
The console will be available at: http://localhost:3000
Console Features:
- Real-time device monitoring
- EEG data visualization
- Session recording and playback
- User authentication via Firebase
- Experiment management
Test Your Installation
Create a synthetic device to verify everything is working:
# Create a test device
curl -X POST http://localhost:8000/api/v1/devices \
-H "Content-Type: application/json" \
-d '{"device_id": "test-device", "device_type": "synthetic"}'
# Start streaming data
curl -X POST http://localhost:8000/api/v1/devices/test-device/stream/start
You should see synthetic neural data streaming in the logs.
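The same check can be scripted from Python using only the standard library. The endpoint paths mirror the curl examples above; the response format is not assumed here:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"

def create_device_request(device_id: str, device_type: str = "synthetic"):
    """Build the POST request that registers a device with the engine."""
    body = json.dumps({"device_id": device_id, "device_type": device_type})
    return urllib.request.Request(
        f"{BASE_URL}/devices",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To execute against a running engine:
# with urllib.request.urlopen(create_device_request("test-device")) as resp:
#     print(resp.status)
```

This is handy as the seed of an integration test: build the request, send it, then hit the `/stream/start` endpoint and watch the logs.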
Deployment Environments
NeuraScale uses a multi-environment GCP setup:
| Environment | Project ID | Branch | Purpose |
| --- | --- | --- | --- |
| Development | development-neurascale | develop | Feature development and testing |
| Staging | staging-neurascale | PR branches | Integration testing and validation |
| Production | production-neurascale | main | Live production environment |
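The branch-to-project mapping in the table is enforced by GitHub Actions; a helper like the following (illustrative, not the actual CI code) captures the convention:

```python
# Sketch of the branch-to-project mapping from the table above.
ENVIRONMENTS = {
    "develop": "development-neurascale",
    "main": "production-neurascale",
}

def project_for_branch(branch: str) -> str:
    """PR branches deploy to staging; develop and main map directly."""
    return ENVIRONMENTS.get(branch, "staging-neurascale")
```

Note the default: any branch that is not `develop` or `main` (i.e. a PR branch) lands in staging, which keeps experimental work out of production.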
Each environment has:
- Isolated GCP project with separate billing
- Environment-specific Terraform state in GCS
- Automated deployment via GitHub Actions
- Cross-project IAM for service accounts
Project Structure
neurascale/
├── neural-engine/ # Core neural data processing engine
│ ├── src/
│ │ ├── api/ # FastAPI endpoints
│ │ ├── devices/ # Device interfaces (OpenBCI, Emotiv, etc.)
│ │ ├── processing/ # Signal processing pipelines
│ │ ├── classification/ # Real-time ML models
│ │ └── neural_ledger/ # HIPAA-compliant audit trail
│ ├── terraform/ # GCP infrastructure as code
│ │ ├── modules/ # Reusable Terraform modules
│ │ │ ├── neural-ingestion/ # Pub/Sub, Bigtable, Cloud Functions
│ │ │ ├── mcp-server/ # Cloud Run MCP deployment
│ │ │ ├── networking/ # VPC and service connections
│ │ │ ├── gke/ # Kubernetes cluster config
│ │ │ └── database/ # Cloud SQL, Redis, BigQuery
│ │ ├── environments/ # Environment-specific configs
│ │ └── backend-configs/# Terraform state backends
│ └── tests/ # Test suite
├── console/ # NeuraScale Console UI
├── docs-nextra/ # Documentation (you are here!)
├── infrastructure/ # Kubernetes & deployment configs
│ ├── cross-project-iam/ # IAM setup across GCP projects
│ └── k8s/ # Kubernetes manifests
├── kubernetes/ # Helm charts
│ └── helm/neural-engine/# Neural Engine Helm chart
├── mcp-server/ # Model Context Protocol server
└── scripts/ # Development tools and utilities
Next Steps
Now that you have NeuraScale running, explore the rest of the documentation, including the Neural Engine, device integrations, and deployment guides.
Common Commands
Neural Engine
# Activate virtual environment
source neural-engine/venv/bin/activate
# Start the engine
python -m src.main
# Run tests
pytest tests/
# Format code
black .
# Lint code
flake8 .
# Type checking
mypy .
Troubleshooting
If you see “Port 3000/8000 is already in use”:
# Find and kill the process
lsof -ti:3000 | xargs kill -9
# Or use a different port
PORT=3001 pnpm dev
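Instead of guessing at a free port, you can ask the OS for one. A small Python sketch:

```python
import socket

def find_free_port() -> int:
    """Bind to port 0 so the kernel assigns an unused ephemeral port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    print(find_free_port())  # pass this value via PORT to the dev server
```

For example: `PORT=$(python find_port.py) pnpm dev` (where `find_port.py` is wherever you save this snippet).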
If you encounter Python version errors:
# Check your Python version
python --version
# Must be exactly 3.12.11
# Run the setup script to fix
./scripts/dev-tools/setup-venvs.sh
For more troubleshooting help, see our Troubleshooting Guide or ask in GitHub Discussions.
Getting Help
- Documentation: docs.neurascale.io
- GitHub Discussions: Ask questions
- Issue Tracker: Report bugs
- Email Support: support@neurascale.io
Ready to build something amazing? Let’s bridge mind and world together!