
Troubleshooting & FAQ

This guide helps you diagnose and resolve common issues when working with NeuraScale, from device connection problems to performance optimization.

Quick Diagnostics

System Health Check

```bash
# Check API connectivity
curl -H "Authorization: Bearer YOUR_TOKEN" \
  https://api.neurascale.io/v2/health

# Check device availability
curl -H "Authorization: Bearer YOUR_TOKEN" \
  https://api.neurascale.io/v2/devices

# Check system status
curl https://status.neurascale.io/api/v2/status.json
```

Performance Monitoring

```python
import time

from neurascale import NeuraScaleClient

async def health_check():
    client = NeuraScaleClient(api_key="YOUR_API_KEY")

    start_time = time.time()
    status = await client.health.check()
    latency = (time.time() - start_time) * 1000

    print(f"API Health: {status}")
    print(f"Response Time: {latency:.1f}ms")

    # Check device connectivity
    devices = await client.devices.list()
    print(f"Available Devices: {len(devices)}")
    for device in devices:
        print(f"  {device.id}: {device.status}")
```

Common Issues

Device Connection Problems

OpenBCI Connection Issues

Problem: Device not detected

Error: Device 'openbci_001' not found

Solutions:

  1. Check USB Connection

    ```bash
    # Linux: list USB devices
    lsusb | grep -i openbci

    # macOS: list USB devices (lsusb is not installed by default)
    system_profiler SPUSBDataType | grep -i openbci

    # Windows: open Device Manager and look for
    # "USB Serial Device" or "OpenBCI"
    ```
  2. Verify Permissions (Linux/macOS)

    ```bash
    # Add user to dialout group (Linux), then log out and back in
    sudo usermod -a -G dialout $USER

    # Set permissions on the serial device (macOS)
    sudo chmod 666 /dev/cu.usbserial-*
    ```
  3. Update Firmware

    ```python
    from neurascale.devices import OpenBCIDevice

    device = OpenBCIDevice('openbci_001')
    await device.update_firmware()
    ```

Problem: Poor signal quality

Warning: Signal quality below threshold (0.3)

Solutions:

  1. Improve Electrode Contact

    • Clean skin with alcohol
    • Apply conductive gel
    • Check electrode impedance
  2. Reduce Environmental Noise

    ```python
    # Configure noise reduction
    config = {
        "filters": {
            "notch": 60,       # US power line frequency (use 50 in Europe)
            "highpass": 0.5,
            "lowpass": 100
        },
        "noise_reduction": {
            "enable_adaptive": True,
            "reference_channels": [1, 2]
        }
    }
    ```

Problem: Data dropouts

Error: Packet loss detected (5.2%)

Solutions:

  1. Check Bluetooth Range (Ganglion)

    • Stay within 10 feet (3 m) of the receiver
    • Minimize interference from WiFi
  2. USB Buffer Issues (Cyton)

    ```python
    # Increase buffer size
    device_config = {
        "buffer_size": 8192,
        "sample_rate": 250,
        "timeout": 5000
    }
    ```

API and Network Issues

Authentication Problems

Problem: Token expired

```json
{
  "error": {
    "code": "TOKEN_EXPIRED",
    "message": "JWT token has expired"
  }
}
```

Solution: Implement automatic token refresh

```python
from datetime import datetime, timedelta

from neurascale import NeuraScaleClient

class AuthenticatedClient:
    def __init__(self, email, password):
        self.email = email
        self.password = password
        self.token = None
        self.token_expiry = None
        self.client = None

    async def ensure_authenticated(self):
        if not self.token or datetime.now() > self.token_expiry:
            await self.refresh_token()

    async def refresh_token(self):
        # Get a new token; authenticate() performs the
        # email/password login (implementation omitted)
        auth_response = await self.authenticate()
        self.token = auth_response['access_token']
        self.token_expiry = datetime.now() + timedelta(
            seconds=auth_response['expires_in']
        )

        # Recreate the client with the fresh token
        self.client = NeuraScaleClient(api_key=self.token)

    async def api_call(self, method, *args, **kwargs):
        await self.ensure_authenticated()
        return await getattr(self.client, method)(*args, **kwargs)
```

Rate Limiting

Problem: Too many requests

```json
{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit of 1000 requests/hour exceeded"
  }
}
```

Solution: Implement exponential backoff

```python
import asyncio
import random
from functools import wraps

def with_retry(max_retries=3, base_delay=1):
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return await func(*args, **kwargs)
                except RateLimitError:
                    if attempt == max_retries - 1:
                        raise
                    # Exponential backoff with jitter
                    delay = base_delay * (2 ** attempt) + random.uniform(0, 1)
                    await asyncio.sleep(delay)
            return None
        return wrapper
    return decorator

@with_retry(max_retries=5, base_delay=2)
async def make_api_call():
    # Assumes an initialized client, as in the examples above
    return await client.devices.list()
```
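To sanity-check retry settings before deploying, it helps to see the sleep schedule they produce. The helper below is an illustrative sketch (not part of the SDK) that reproduces the deterministic part of the backoff above, with jitter optional:

```python
import random

def backoff_delays(max_retries, base_delay, jitter=False):
    """Delays (seconds) slept between attempts by the retry pattern above."""
    delays = []
    for attempt in range(max_retries - 1):  # no sleep after the final attempt
        delay = base_delay * (2 ** attempt)
        if jitter:
            delay += random.uniform(0, 1)
        delays.append(delay)
    return delays

print(backoff_delays(5, 2))  # [2, 4, 8, 16]
```

With `max_retries=5, base_delay=2`, the worst case waits roughly 30 seconds in total, which tells you whether callers need longer timeouts.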

Connection Timeouts

Problem: Slow API responses

TimeoutError: Request timed out after 30 seconds

Solutions:

  1. Increase Timeout Values

    ```python
    client = NeuraScaleClient(
        api_key="YOUR_KEY",
        timeout=60.0,    # 60 second timeout
        max_retries=3
    )
    ```
  2. Use Connection Pooling

    ```python
    import aiohttp

    connector = aiohttp.TCPConnector(
        limit=100,            # Connection pool size
        limit_per_host=30,
        keepalive_timeout=30
    )

    client = NeuraScaleClient(
        api_key="YOUR_KEY",
        connector=connector
    )
    ```

Data Processing Issues

Memory Usage

Problem: High memory consumption

MemoryError: Unable to allocate array with shape (1000000, 64)

Solutions:

  1. Process Data in Chunks

    ```python
    async def process_large_dataset(session_id, chunk_size=10000):
        total_samples = await client.data.count(session_id)

        for offset in range(0, total_samples, chunk_size):
            chunk = await client.data.query(
                session_id=session_id,
                limit=chunk_size,
                offset=offset
            )

            # Process chunk
            processed = await process_chunk(chunk)

            # Save intermediate results
            await save_results(processed, offset)

            # Clear memory
            del chunk, processed
    ```
  2. Use Memory Mapping

    ```python
    import numpy as np

    # Memory-mapped file for large datasets
    data = np.memmap(
        'neural_data.dat',
        dtype='float32',
        mode='r',
        shape=(1000000, 64)
    )
    ```
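To confirm that the chunked approach in step 1 covers every sample exactly once, the chunk boundaries can be computed up front. This is a plain-Python sketch, independent of the SDK:

```python
def chunk_bounds(total_samples, chunk_size):
    """(offset, limit) pairs covering total_samples without gaps or overlap."""
    return [
        (offset, min(chunk_size, total_samples - offset))
        for offset in range(0, total_samples, chunk_size)
    ]

print(chunk_bounds(25_000, 10_000))
# [(0, 10000), (10000, 10000), (20000, 5000)]
```

Note that the last chunk is usually short; processing code should not assume every chunk has `chunk_size` samples.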

Performance Optimization

Problem: Slow real-time processing

Warning: Processing latency exceeds target (120ms > 100ms)

Solutions:

  1. Optimize Processing Pipeline

    ```python
    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    class OptimizedProcessor:
        def __init__(self):
            self.executor = ThreadPoolExecutor(max_workers=4)
            self.buffer = asyncio.Queue(maxsize=100)

        async def process_stream(self, stream):
            # Producer: receive data
            receive_task = asyncio.create_task(self.receive_data(stream))

            # Consumer: process data
            process_task = asyncio.create_task(self.process_data())

            await asyncio.gather(receive_task, process_task)

        async def process_data(self):
            while True:
                packet = await self.buffer.get()

                # Offload CPU-intensive work to the thread pool
                loop = asyncio.get_event_loop()
                result = await loop.run_in_executor(
                    self.executor,
                    self.cpu_intensive_processing,
                    packet
                )

                await self.handle_result(result)
    ```
  2. Use Vectorized Operations

    ```python
    from scipy import signal

    # Vectorized filtering (faster than a per-channel loop)
    def apply_filters_vectorized(data, fs=250):
        # Design filters once
        b_notch, a_notch = signal.iirnotch(60, 60 / 10, fs)
        b_bp, a_bp = signal.butter(4, [0.5, 100], btype='band', fs=fs)

        # Apply to all channels at once
        filtered = signal.filtfilt(b_notch, a_notch, data, axis=1)
        filtered = signal.filtfilt(b_bp, a_bp, filtered, axis=1)
        return filtered
    ```

Frequently Asked Questions

General Questions

Q: What sampling rates does NeuraScale support?

A: NeuraScale supports a wide range of sampling rates depending on the device:

  • EEG: 250–2000 Hz (most common: 250, 500, 1000 Hz)
  • LFP: 1000–2000 Hz
  • Spikes: up to 30 kHz
  • EMG/ECG: 250–1000 Hz
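When picking a rate, the Nyquist criterion is the usual guide: sample at more than twice the highest frequency you need to resolve, with some margin for filter roll-off. A small illustrative helper (not part of the SDK; the 2.5× margin is a common engineering convention, not a NeuraScale requirement):

```python
def min_sampling_rate(highest_freq_hz, margin=2.5):
    """Suggested sampling rate for a given highest frequency of interest.

    The strict Nyquist factor is 2; a margin of 2.5 leaves headroom
    for anti-aliasing filter roll-off.
    """
    return highest_freq_hz * margin

# Resolving EEG content up to 100 Hz suggests 250 Hz,
# matching the most common EEG rate listed above
print(min_sampling_rate(100))  # 250.0
```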

Q: How many channels can I use simultaneously?

A: Channel limits depend on your subscription:

  • Starter: Up to 32 channels
  • Professional: Up to 256 channels
  • Enterprise: 10,000+ channels
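Channel count is also the main driver of bandwidth and storage. As a rough back-of-the-envelope estimate (assuming 4-byte float samples; the actual wire and storage formats may differ):

```python
def data_rate_mb_per_hour(channels, sample_rate_hz, bytes_per_sample=4):
    """Approximate raw data rate in MB per hour of recording."""
    bytes_per_hour = channels * sample_rate_hz * bytes_per_sample * 3600
    return bytes_per_hour / 1_000_000

# 64 channels at 250 Hz is roughly 230 MB per hour, uncompressed
print(round(data_rate_mb_per_hour(64, 250)))  # 230
```

Estimates like this help decide whether chunked processing (see Memory Usage above) is needed before a long session starts.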

Q: What’s the maximum session duration?

A: Session duration limits:

  • Real-time streaming: Unlimited
  • Recorded sessions: Up to 24 hours per session
  • Total storage depends on your plan

Technical Questions

Q: How do I reduce latency below 100ms?

A: To achieve ultra-low latency:

```python
# Optimize client configuration
client = NeuraScaleClient(
    api_key="YOUR_KEY",
    performance_mode="ultra_low_latency",
    buffer_size=128,   # Minimal buffering
    batch_size=1       # Process immediately
)

# Use gRPC for the lowest latency
grpc_client = NeuraScaleGRPCClient(
    endpoint="api.neurascale.io:443",
    compression=False,       # Disable compression
    keepalive_time_ms=30000
)
```

Q: How do I handle packet loss?

A: Implement packet loss detection and recovery:

```python
class PacketLossHandler:
    def __init__(self, tolerance=0.02):  # 2% tolerance
        self.expected_sequence = 0
        self.tolerance = tolerance
        self.loss_count = 0
        self.total_packets = 0

    def check_packet(self, packet):
        self.total_packets += 1

        if packet.sequence_number != self.expected_sequence:
            # Packet loss detected
            missed = packet.sequence_number - self.expected_sequence
            self.loss_count += missed

            loss_rate = self.loss_count / self.total_packets
            if loss_rate > self.tolerance:
                raise PacketLossError(f"Packet loss rate: {loss_rate:.1%}")

        self.expected_sequence = packet.sequence_number + 1
```

Q: How do I implement custom preprocessing?

A: Create custom preprocessing pipelines:

```python
from neurascale.processing import ProcessingPipeline, ProcessingStage

class CustomPreprocessor(ProcessingStage):
    def __init__(self, custom_param=1.0):
        super().__init__()
        self.custom_param = custom_param

    async def process(self, data):
        # Your custom preprocessing logic
        processed = self.custom_algorithm(data, self.custom_param)
        return processed

    def custom_algorithm(self, data, param):
        # Implement your algorithm
        return data * param

# Use in a pipeline
pipeline = ProcessingPipeline()
pipeline.add_stage(CustomPreprocessor(custom_param=2.0))
```

Device-Specific Questions

Q: Can I use multiple devices simultaneously?

A: Yes, NeuraScale supports multi-device sessions:

```python
# Create a multi-device session
session = await client.sessions.create(
    name="Multi-device Recording",
    device_ids=["openbci_001", "emotiv_001", "muse_001"],
    sync_method="hardware_trigger"  # or "software_sync"
)

# Handle synchronized data
async for data_packet in session.stream():
    # data_packet contains data from all devices
    for device_id, device_data in data_packet.devices.items():
        print(f"Data from {device_id}: {len(device_data.channels)} channels")
```

Q: How do I calibrate my device?

A: Device calibration procedures:

```python
# OpenBCI calibration
await device.calibrate(
    calibration_type="impedance",
    target_impedance=5000,  # 5 kΩ
    timeout=60
)

# Emotiv calibration
await device.calibrate(
    calibration_type="contact_quality",
    minimum_quality=0.8,
    timeout=120
)

# Custom calibration
calibration_data = await device.record_calibration(
    duration=30,  # 30 seconds
    instructions="Close your eyes and relax"
)
await device.apply_calibration(calibration_data)
```
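To interpret impedance readings against the 5 kΩ target above: for wet electrodes, values at or below about 5 kΩ are generally considered good, and readings climbing into the tens of kΩ usually mean the contact needs attention. The classifier below is an illustrative sketch; the thresholds are common rules of thumb, not SDK values:

```python
def classify_impedance(impedance_ohms):
    """Rough quality label for an electrode impedance reading."""
    if impedance_ohms <= 5_000:
        return "good"
    if impedance_ohms <= 20_000:
        return "acceptable"
    return "poor"

print(classify_impedance(4_200))   # good
print(classify_impedance(35_000))  # poor
```

A "poor" reading is the cue to re-clean the skin, reapply gel, or reseat the electrode, as described under Signal Quality above.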

Getting Help

Support Channels

  1. Documentation: docs.neurascale.io 
  2. Community Forum: community.neurascale.io 
  3. GitHub Issues: github.com/neurascale/neurascale/issues 
  4. Email Support: support@neurascale.io

Before Contacting Support

Please include the following information:

  1. System Information

    ```bash
    # Get system info
    python -c "
    import platform, sys
    print(f'OS: {platform.system()} {platform.release()}')
    print(f'Python: {sys.version}')
    print(f'Architecture: {platform.machine()}')
    "
    ```
  2. NeuraScale Version

    ```python
    import neurascale
    print(f"NeuraScale SDK: {neurascale.__version__}")
    ```
  3. Device Information

    ```python
    devices = await client.devices.list()
    for device in devices:
        print(f"Device: {device.id} ({device.type}) - {device.status}")
    ```
  4. Error Logs

    ```python
    import logging

    # Enable debug logging
    logging.basicConfig(level=logging.DEBUG)
    logger = logging.getLogger('neurascale')

    # Run the code that produces the error;
    # the logs will contain detailed diagnostic information
    ```

Performance Monitoring

Enable monitoring to help diagnose issues:

```python
from neurascale.monitoring import PerformanceMonitor

monitor = PerformanceMonitor()
monitor.enable_metrics([
    'latency',
    'packet_loss',
    'memory_usage',
    'cpu_usage'
])

# Your application code
await process_neural_data()

# Get a performance report
report = monitor.generate_report()
print(report.summary())
```

For real-time assistance, join our Discord community at discord.gg/neurascale, where you can get help from both the development team and experienced users.
