# NeuroLink

**The pipe layer for the AI nervous system.**

AI intelligence flows as streams — tokens, tool calls, memory, voice, documents.
NeuroLink is the vascular layer that carries these streams from where they are
generated (LLM providers: the neurons) to where they are needed (connectors: the organs).

```typescript
import { NeuroLink } from "@juspay/neurolink";

const pipe = new NeuroLink();

// Everything is a stream
const result = await pipe.stream({ input: { text: "Hello" } });
for await (const chunk of result.stream) {
  if ("content" in chunk) {
    process.stdout.write(chunk.content);
  }
}
```

**→ Docs · → Quick Start · → npm**


## 🧠 What is NeuroLink?

**NeuroLink is the universal AI integration platform that unifies 13 major AI providers and 100+ models under one consistent API.**

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you’re building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 13 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

**Why NeuroLink?** Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK, whichever fits your workflow.

**Where we're headed:** We're building for the future of AI: edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

**Get Started in <5 Minutes →**


## What's New (Q1 2026)

| Feature | Version | Description | Guide |
| --- | --- | --- | --- |
| **Gemini 3 Multi-turn Tool Fix** | v9.49.0 | Fixed multi-step agentic tool calling on Vertex AI Gemini 3 models. Correct `thoughtSignature` replay, `stepIndex` parallel-call grouping, `executionId` session isolation, 5-min timeout, silent-timeout surfacing. | Vertex AI Guide |
| **AutoResearch** | v9.17.0 | Autonomous AI experiment engine: proposes code changes, runs experiments, evaluates metrics, keeps improvements, unattended for hours. | AutoResearch Guide |
| **MCP Enhancements** | v9.16.0 | Advanced MCP features: tool routing, result caching, request batching, annotations, elicitation, custom server base, multi-server management | MCP Enhancements Guide |
| **Memory** | v9.12.0 | Per-user condensed memory that persists across conversations. LLM-powered condensation with S3, Redis, or SQLite backends. | Memory Guide |
| **Context Window Management** | v9.2.0 | 4-stage compaction pipeline with auto-detection, budget gate at 80% usage, per-provider token estimation | Context Compaction Guide |
| **Tool Execution Control** | v9.3.0 | `prepareStep` and `toolChoice` support for per-step tool enforcement in multi-step agentic loops. API-level control over tool calls. | API Reference |
| **File Processor System** | v9.1.0 | 17+ file type processors with ProcessorRegistry, security sanitization, SVG text injection | File Processors Guide |
| **RAG with generate()/stream()** | v9.2.0 | Pass `rag: { files }` to generate/stream for automatic document chunking, embedding, and AI-powered search. 10 chunking strategies, hybrid search, reranking. | RAG Guide |
| **External TracerProvider Support** | v8.43.0 | Integrate NeuroLink with existing OpenTelemetry instrumentation. Prevents duplicate registration conflicts. | Observability Guide |
| **Server Adapters** | v8.43.0 | Multi-framework HTTP server with Hono, Express, Fastify, Koa support. Full CLI for server management with foreground/background modes. | Server Adapters Guide |
| **Title Generation Events** | v8.38.0 | Emit `conversation:titleGenerated` event when a conversation title is generated. Supports custom title prompts via `NEUROLINK_TITLE_PROMPT`. | Conversation Memory Guide |
| **Video Generation with Veo** | v8.32.0 | Video generation using Veo 3.1 (`veo-3.1`). Realistic video generation with many parameter options | Video Generation Guide |
| **Image Generation with Gemini** | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (`imagen-3.0-generate-002`). High-quality image synthesis directly from Google AI. | Image Generation Guide |
| **HTTP/Streamable HTTP Transport** | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | HTTP Transport Guide |
- **AutoResearch** – Autonomous AI experiment engine inspired by Karpathy's autoresearch. Phase-gated tool access, git-backed safety, deterministic metric evaluation, and TaskManager integration for continuous unattended research. 12 research tools, 10 typed events, 9 CLI subcommands. → AutoResearch Guide
- **Memory** – Per-user condensed memory that persists across all conversations. Automatically retrieves and stores memory on each `generate()`/`stream()` call. Supports S3, Redis, and SQLite storage with LLM-powered condensation. → Memory Guide
- **External TracerProvider Support** – Integrate NeuroLink with applications that already have OpenTelemetry instrumentation. Supports auto-detection and manual configuration. → Observability Guide
- **Claude Proxy Telemetry** – Bootstrap a local OpenObserve + OTEL collector stack with `neurolink proxy telemetry setup`, import the maintained NeuroLink Proxy Observability dashboard, and inspect proxy logs, traces, metrics, cache reuse, and routing behavior. → Claude Proxy Guide | Proxy Observability Guide
- **Server Adapters** – Deploy NeuroLink as an HTTP API server with your framework of choice (Hono, Express, Fastify, Koa). Full CLI support with `serve` and `server` commands for foreground/background modes, route management, and OpenAPI generation. → Server Adapters Guide
- **Title Generation Events** – Emit real-time events when conversation titles are auto-generated. Listen to `conversation:titleGenerated` for session tracking. → Conversation Memory Guide
- **Custom Title Prompts** – Customize conversation title generation with the `NEUROLINK_TITLE_PROMPT` environment variable. Use the `${userMessage}` placeholder for dynamic prompts. → Conversation Memory Guide
- **Video Generation** – Transform images into 8-second videos with synchronized audio using Google Veo 3.1 via Vertex AI. Supports 720p/1080p resolutions, portrait/landscape aspect ratios. → Video Generation Guide
- **PPT Generation** – Create professional PowerPoint presentations from text prompts with 35 slide types (title, content, charts, timelines, dashboards, composite layouts), 5 themes, and optional AI-generated images. Works with Vertex AI, OpenAI, Anthropic, Google AI, Azure, and Bedrock. → PPT Generation Guide
- **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. Supports streaming mode with automatic file saving. → Image Generation Guide
- **RAG with generate()/stream()** – Just pass `rag: { files: ["./docs/guide.md"] }` to `generate()` or `stream()`. NeuroLink auto-chunks, embeds, and creates a search tool the AI can invoke. 10 chunking strategies, hybrid search, 5 reranker types. → RAG Guide
- **HTTP/Streamable HTTP Transport for MCP** – Connect to remote MCP servers via HTTP with authentication headers, retry logic, and rate limiting. → HTTP Transport Guide
- 🧠 **Gemini 3 Native Multi-turn Tool Calling** – Fixed multi-step agentic tool calling for Gemini 3 models on Vertex AI. The native `@google/genai` path now correctly replays `thoughtSignature` as a sibling field on each `functionCall` part, groups parallel tool calls by `stepIndex`, enforces a 5-minute default timeout on the generate path, and surfaces silent timeouts as a proper `TimeoutError` instead of empty responses. Multi-execution session overlap (where `continueOrchestratorWorkflow` restarts the loop on the same `sessionId`) is addressed by an `executionId` per invocation as a composite grouping key; this prevents tool calls from two different executions colliding into the same Gemini model turn and causing the model to return 0 function calls.
- 🧠 **Gemini 3 Preview Support** – Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking capabilities
- 🎯 **Tool Execution Control** – Use `prepareStep` to enforce specific tool calls and change the LLM model per step in multi-step agentic executions. Prevents LLMs from skipping required tools. Use `toolChoice` for static control, or `prepareStep` for dynamic per-step logic. → GenerateOptions Reference
- **Structured Output with Zod Schemas** – Type-safe JSON generation with automatic validation using `schema` + `output.format: "json"` in `generate()`. → Structured Output Guide
- **CSV File Support** – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
- **PDF File Support** – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, AI Studio. → PDF Guide
- **50+ File Types** – Process Excel, Word, RTF, JSON, YAML, XML, HTML, SVG, Markdown, and 50+ code languages with intelligent content extraction. → File Processors Guide
- **LiteLLM Integration** – Access 100+ AI models from all major providers through a unified interface. → Setup Guide
- **SageMaker Integration** – Deploy and use custom trained models on AWS infrastructure. → Setup Guide
- **OpenRouter Integration** – Access 300+ models from OpenAI, Anthropic, Google, Meta, and more through a single unified API. → Setup Guide
- **Human-in-the-loop workflows** – Pause generation for user approval/input before tool execution. → HITL Guide
- **Guardrails middleware** – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
- **Context summarization** – Automatic conversation compression for long-running sessions. → Summarization Guide
- **MCP Enhancements** – 14 production-grade modules: tool routing (6 strategies), result caching (LRU/FIFO/LFU), request batching, tool annotations with auto-inference, middleware chain, elicitation protocol, multi-server management, and more. → MCP Enhancements Guide
- **Redis conversation export** – Export full session history as JSON for analytics and debugging. → History Guide
```typescript
// Image Generation with Gemini (v8.31.0)
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

const image = await neurolink.generate({
  input: { text: "A futuristic cityscape" },
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});
console.log(image.imageOutput?.base64); // Base64-encoded image
```

```typescript
// AutoResearch: autonomous experiment loop (v9.17.0)
import { resolveConfig, ResearchWorker } from "@juspay/neurolink/autoresearch";

const config = resolveConfig({
  repoPath: "/path/to/repo",
  mutablePaths: ["train.py"],
  runCommand: "python3 train.py",
  metric: {
    name: "val_bpb",
    direction: "lower",
    pattern: "^val_bpb:\\s+([\\d.]+)",
  },
});
const worker = new ResearchWorker(config);
await worker.initialize("experiment-1");
const result = await worker.runExperimentCycle("Try lower learning rate");
```

```typescript
// HTTP Transport for Remote MCP (v8.29.0)
// (neurolink is a NeuroLink instance, as in the first snippet above)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});
```
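The `metric.pattern` used above is a plain regular expression applied to run output, with the numeric value taken from capture group 1. A quick self-contained sketch of that extraction (the `extractMetric` helper is illustrative, not a NeuroLink API):

```typescript
// Illustrative helper: pull a numeric metric out of a log line using the
// same kind of pattern passed as metric.pattern above.
function extractMetric(line: string, pattern: string): number | null {
  const match = new RegExp(pattern, "m").exec(line);
  return match ? Number(match[1]) : null; // capture group 1 holds the value
}

// direction: "lower" means a smaller value counts as an improvement
const pattern = "^val_bpb:\\s+([\\d.]+)";
console.log(extractMetric("val_bpb: 1.0423", pattern)); // 1.0423
console.log(extractMetric("train_loss: 2.1", pattern)); // null (no match)
```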

## Previous Updates (Q4 2025)

- **Image Generation** – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → Guide
- **Gemini 3 Preview Support** – Full support for `gemini-3-flash-preview` and `gemini-3-pro-preview` with extended thinking
- **Structured Output with Zod Schemas** – Type-safe JSON generation with automatic validation. → Guide
- **CSV & PDF File Support** – Attach CSV/PDF files to prompts with auto-detection. → CSV | PDF
- **LiteLLM & SageMaker** – Access 100+ models via LiteLLM, deploy custom models on SageMaker. → LiteLLM | SageMaker
- **OpenRouter Integration** – Access 300+ models through a single unified API. → Guide
- **HITL & Guardrails** – Human-in-the-loop approval workflows and content filtering middleware. → HITL | Guardrails
- **Redis & Context Management** – Session export, conversation history, and automatic summarization. → History

## Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a production-ready HITL system for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
| --- | --- | --- |
| **Tool Approval Workflows** | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| **Output Validation** | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| **Confidence Thresholds** | Automatically trigger human review below confidence level | Critical business decisions |
| **Complete Audit Trail** | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});
```
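Conceptually, the `confidenceThreshold` and `requireApproval` settings above combine into a gate decision. This is a hypothetical sketch of that decision, not the SDK's internal logic:

```typescript
// Hypothetical sketch of the HITL gate: tools on the always-review list go
// to a human; otherwise confidence below the threshold triggers review.
type ReviewDecision = "auto-approve" | "human-review";

function gateByConfidence(
  confidence: number,
  threshold = 0.85, // matches confidenceThreshold in the config above
  alwaysReview: string[] = ["writeFile", "executeCode", "sendEmail"],
  tool?: string,
): ReviewDecision {
  if (tool && alwaysReview.includes(tool)) return "human-review";
  return confidence >= threshold ? "auto-approve" : "human-review";
}

console.log(gateByConfidence(0.92)); // auto-approve
console.log(gateByConfidence(0.7)); // human-review
console.log(gateByConfidence(0.99, 0.85, undefined, "sendEmail")); // human-review
```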

**Enterprise HITL Guide | Quick Start**

## 📚 Quick Start Guide

This guide will have you generating AI responses in under 5 minutes using either the SDK or CLI.

### Installation

Choose your preferred package manager:

```bash
# npm
npm install @juspay/neurolink

# pnpm (recommended)
pnpm add @juspay/neurolink

# yarn
yarn add @juspay/neurolink

# CLI only (no installation needed)
npx @juspay/neurolink --help
```

### Configuration

NeuroLink works with 13+ AI providers. You’ll need at least one API key to get started:

**Option 1: Interactive Setup (Recommended)**

```bash
# Run the setup wizard to configure providers
pnpm dlx @juspay/neurolink setup
```

The wizard will guide you through:

  • Selecting your preferred AI providers
  • Validating API keys
  • Setting up configuration files

**Option 2: Manual Configuration**

Create a .env file in your project root:

```bash
# Choose one or more providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_AI_API_KEY=...
```

**Free Tier Options:**

### Your First API Call (SDK)

**Basic Text Generation:**

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Initialize (auto-selects best available provider from your .env)
const neurolink = new NeuroLink();

// Generate a response
const result = await neurolink.generate({
  input: { text: "Explain quantum computing in simple terms" },
});

console.log(result.content);
```

**Streaming Responses:**

```typescript
// Stream tokens in real-time
const stream = await neurolink.stream({
  input: { text: "Write a haiku about code" },
});
for await (const chunk of stream.stream) {
  if ("content" in chunk) process.stdout.write(chunk.content);
}
```

**Multimodal Input (Images + Text):**

```typescript
const result = await neurolink.generate({
  input: {
    text: "What's in this image?",
    images: ["./photo.jpg"],
  },
});
```

**Using Tools:**

```typescript
// Built-in tools are automatically available
const result = await neurolink.generate({
  input: {
    text: "What time is it and what files are in the current directory?",
  },
  // AI can call getCurrentTime and listDirectory tools
});
```

### Your First API Call (CLI)

**Basic Generation:**

```bash
# Simple text generation
npx @juspay/neurolink generate "Explain TypeScript generics"

# Specify provider and model
npx @juspay/neurolink generate "Hello!" --provider openai --model gpt-4o

# Stream responses
npx @juspay/neurolink stream "Write a story about AI" --provider anthropic
```

**Multimodal Input:**

```bash
# Analyze images
npx @juspay/neurolink generate "Describe this image" --image photo.jpg

# Process PDFs
npx @juspay/neurolink generate "Summarize this document" --pdf report.pdf

# Combine multiple file types
npx @juspay/neurolink generate "Analyze this data" --file data.xlsx --file config.json
```

**Interactive Loop Mode:**

```bash
# Start an interactive session with persistent context
npx @juspay/neurolink loop

# Inside loop mode:
> set provider anthropic
> set model claude-opus-4
> generate "Hello, Claude!"
> history  # View conversation history
> exit
```

### Common Use Cases

**RAG (Retrieval-Augmented Generation):**

```typescript
// Automatically chunk, embed, and search documents
const result = await neurolink.generate({
  input: { text: "What are the key features mentioned in the documentation?" },
  rag: {
    files: ["./docs/guide.md", "./docs/api.md"],
    chunkSize: 512,
    topK: 5,
  },
});
```
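Under the hood, chunking splits documents into overlapping windows before embedding. NeuroLink ships 10 strategies; this minimal fixed-size sketch only illustrates the `chunkSize`/overlap idea, and `chunkText` is not a NeuroLink API:

```typescript
// Minimal fixed-size chunking sketch: slide a chunkSize-wide window over the
// text, stepping by (chunkSize - overlap) so adjacent chunks share context.
function chunkText(text: string, chunkSize: number, overlap = 0): string[] {
  if (chunkSize <= 0 || overlap >= chunkSize) throw new Error("bad params");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}

const chunks = chunkText("a".repeat(1200), 512, 64);
console.log(chunks.length); // 3
console.log(chunks[0].length); // 512
```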

**Structured Output with Zod:**

```typescript
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
  email: z.string().email(),
});

const result = await neurolink.generate({
  input: {
    text: "Extract user info: John Doe, 30 years old, john@example.com",
  },
  schema,
  output: { format: "json" },
});

// Parse the structured JSON from result.content
const parsed = schema.parse(JSON.parse(result.content));
console.log(parsed); // { name: "John Doe", age: 30, email: "john@example.com" }
```

**External MCP Servers (GitHub, Slack, etc.):**

```typescript
// Connect to GitHub MCP server
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// AI can now interact with GitHub
const result = await neurolink.generate({
  input: { text: 'Create an issue titled "Bug: login fails"' },
});
```

### Next Steps

### Troubleshooting

**Issue: "Provider not configured"**

- Run `npx @juspay/neurolink setup` or add a provider API key to `.env`

**Issue: Rate limit errors**

- Configure multiple providers for redundancy; NeuroLink auto-selects the best available
- Use `provider: "litellm"` with LiteLLM to proxy across many providers

**Issue: Large context overflows**

- Enable conversation memory with compaction: `new NeuroLink({ conversationMemory: { enabled: true } })`
- Use the `rag` option to search documents instead of sending full content

Need help? Check our Troubleshooting Guide or open an issue.


## 🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

### 🤖 AI Provider Integration

**13 providers unified under one API** – Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| **OpenAI** | GPT-4o, GPT-4o-mini, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| **Anthropic** | Claude 4.5 Opus/Sonnet/Haiku, Claude 4 Opus/Sonnet | — | ✅ Full | ✅ Production | Setup Guide \| Subscription Guide |
| **Google AI Studio** | Gemini 3 Flash/Pro, Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| **AWS Bedrock** | Claude, Titan, Llama, Nova | — | ✅ Full | ✅ Production | Setup Guide |
| **Google Vertex** | Gemini 3/2.5 (`gemini-3-*-preview`) | — | ✅ Full | ✅ Production | Setup Guide |
| **Azure OpenAI** | GPT-4, GPT-4o, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| **LiteLLM** | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| **AWS SageMaker** | Custom deployed models | — | ✅ Full | ✅ Production | Setup Guide |
| **Mistral AI** | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| **Hugging Face** | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| **Ollama** | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| **OpenAI Compatible** | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |
| **OpenRouter** | 200+ models via OpenRouter | Varies | ✅ Full | ✅ Production | Setup Guide |

- 📖 **Provider Comparison Guide** – Detailed feature matrix and selection criteria
- 🔬 **Provider Feature Compatibility** – Test-based compatibility reference for all 19 features across 13 providers


### 🔧 Built-in Tools & MCP Integration

**6 Core Tools** (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| `getCurrentTime` | Real-time clock access | ✅ | Tool Reference |
| `readFile` | File system reading | ✅ | Tool Reference |
| `writeFile` | File system writing | ✅ | Tool Reference |
| `listDirectory` | Directory listing | ✅ | Tool Reference |
| `calculateMath` | Mathematical operations | ✅ | Tool Reference |
| `websearchGrounding` | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |

**58+ External MCP Servers supported** (GitHub, PostgreSQL, Google Drive, Slack, and more):

```typescript
// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});
```

**MCP Transport Options:**

| Transport | Use Case | Key Features |
| --- | --- | --- |
| `stdio` | Local servers | Command execution, environment variables |
| `http` | Remote servers | URL-based, auth headers, retries, rate limiting |
| `sse` | Event streams | Server-Sent Events, real-time updates |
| `websocket` | Bi-directional | Full-duplex communication |
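The `http` transport's retry behavior is described as exponential backoff. As a rough sketch of what such a delay schedule looks like (the base delay and cap here are assumptions, not NeuroLink's documented defaults):

```typescript
// Sketch of an exponential backoff schedule: each retry doubles the delay,
// clamped at a maximum. Base 500ms and 15s cap are illustrative assumptions.
function backoffDelays(retries: number, baseMs = 500, capMs = 15_000): number[] {
  return Array.from({ length: retries }, (_, attempt) =>
    Math.min(baseMs * 2 ** attempt, capMs),
  );
}

console.log(backoffDelays(3)); // [ 500, 1000, 2000 ]
console.log(backoffDelays(6)); // later delays clamp at the 15s cap
```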

- 📖 **MCP Integration Guide** – Setup external servers
- 📖 **HTTP Transport Guide** – Remote MCP server configuration


### 🔌 MCP Enhancements

**Production-grade MCP capabilities** for managing tool calls at scale across multi-server environments:

| Module | Purpose |
| --- | --- |
| **Tool Router** | Intelligent routing across servers with 6 strategies |
| **Tool Cache** | Result caching with LRU, FIFO, and LFU eviction |
| **Request Batcher** | Automatic batching of tool calls for throughput |
| **Tool Annotations** | Safety metadata and behavior hints for MCP tools |
| **Tool Converter** | Bidirectional conversion between NeuroLink and MCP formats |
| **Elicitation Protocol** | Interactive user input during tool execution (HITL) |
| **Multi-Server Manager** | Load balancing and failover across server groups |
| **MCP Server Base** | Abstract base class for building custom MCP servers |
| **Enhanced Tool Discovery** | Advanced search and filtering across servers |
| **Agent & Workflow Exposure** | Expose agents and workflows as MCP tools |
| **Server Capabilities** | Resource and prompt management per MCP spec |
| **Registry Client** | Discover and connect to MCP servers from registries |
| **Tool Integration** | End-to-end tool lifecycle with middleware chain |
| **Elicitation Manager** | Manages elicitation flows with validation and timeouts |
```typescript
import { ToolRouter, ToolCache, RequestBatcher } from "@juspay/neurolink";

// Route tool calls across multiple MCP servers
const router = new ToolRouter({
  strategy: "capability-based",
  servers: [
    { name: "github", url: "https://mcp-github.example.com" },
    { name: "db", url: "https://mcp-postgres.example.com" },
  ],
});

// Cache repeated tool results (LRU, FIFO, or LFU)
const cache = new ToolCache({ strategy: "lru", maxSize: 500, ttl: 60_000 });

// Batch concurrent tool calls for throughput
const batcher = new RequestBatcher({ maxBatchSize: 10, maxWaitMs: 50 });
```
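For intuition, the LRU eviction policy named above can be sketched in a few lines using `Map` insertion order. This is a toy illustration of the policy, not the actual `ToolCache` implementation:

```typescript
// Minimal LRU sketch: Map preserves insertion order, so re-inserting on
// access keeps the least-recently-used entry at the front for eviction.
class MiniLRU<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // evict least recently used (first key in insertion order)
      this.map.delete(this.map.keys().next().value!);
    }
  }
}

const lru = new MiniLRU<string, string>(2);
lru.set("a", "1");
lru.set("b", "2");
lru.get("a"); // touch "a" so "b" becomes least recently used
lru.set("c", "3"); // evicts "b"
console.log(lru.get("b")); // undefined
console.log(lru.get("a")); // "1"
```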

📖 **MCP Enhancements Guide** – Full reference for all 14 modules


### 💻 Developer Experience Features

**SDK-First Design** with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| **Auto Provider Selection** | Intelligent provider fallback | SDK Guide |
| **Streaming Responses** | Real-time token streaming | Streaming Guide |
| **Conversation Memory** | Automatic context management with embedded per-user memory | Memory Guide |
| **Full Type Safety** | Complete TypeScript types | Type Reference |
| **Error Handling** | Graceful provider fallback | Error Guide |
| **Analytics & Evaluation** | Usage tracking, quality scores | Analytics Guide |
| **Middleware System** | Request/response hooks | Middleware Guide |
| **Framework Integration** | Next.js, SvelteKit, Express | Framework Guides |
| **Extended Thinking** | Native thinking/reasoning mode for Gemini 3 and Claude models | Thinking Guide |
| **RAG Document Processing** | `rag: { files }` on generate/stream with 10 chunking strategies and hybrid search | RAG Guide |

### 📁 Multimodal & File Processing

**17+ file categories supported** (50+ total file types including code languages) with intelligent content extraction and provider-agnostic processing:

| Category | Supported Types | Processing |
| --- | --- | --- |
| **Documents** | Excel (.xlsx, .xls), Word (.docx), RTF, OpenDocument | Sheet extraction, text extraction |
| **Data** | JSON, YAML, XML | Validation, syntax highlighting |
| **Markup** | HTML, SVG, Markdown, Text | OWASP-compliant sanitization |
| **Code** | 50+ languages (TypeScript, Python, Java, Go, etc.) | Language detection, syntax metadata |
| **Config** | .env, .ini, .toml, .cfg | Secure parsing |
| **Media** | Images (PNG, JPEG, WebP, GIF), PDFs, CSV | Provider-specific formatting |
```typescript
// Process any supported file type
const result = await neurolink.generate({
  input: {
    text: "Analyze this data and code",
    files: [
      "./data.xlsx", // Excel spreadsheet
      "./config.yaml", // YAML configuration
      "./diagram.svg", // SVG (injected as sanitized text)
      "./main.py", // Python source code
    ],
  },
});

// CLI: Use --file for any supported type
// neurolink generate "Analyze this" --file ./report.xlsx --file ./config.json
```

**Key Features:**

- **ProcessorRegistry** – Priority-based processor selection with fallback
- **OWASP Security** – HTML/SVG sanitization prevents XSS attacks
- **Auto-detection** – FileDetector identifies file types by extension and content
- **Provider-agnostic** – All processors work across all 13 AI providers
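As a rough illustration of the extension-based half of that detection (the real FileDetector also inspects content, and this mapping is illustrative, not its actual table):

```typescript
// Illustrative extension -> category lookup, using the category names from
// the table above. Not FileDetector's real mapping or full type list.
const categoryByExtension: Record<string, string> = {
  xlsx: "Documents", docx: "Documents", rtf: "Documents",
  json: "Data", yaml: "Data", xml: "Data",
  html: "Markup", svg: "Markup", md: "Markup",
  ts: "Code", py: "Code", go: "Code",
  env: "Config", toml: "Config", ini: "Config",
  png: "Media", pdf: "Media", csv: "Media",
};

function detectCategory(path: string): string {
  const ext = path.split(".").pop()?.toLowerCase() ?? "";
  return categoryByExtension[ext] ?? "Unknown";
}

console.log(detectCategory("./data.xlsx")); // Documents
console.log(detectCategory("./main.py")); // Code
console.log(detectCategory("./archive.zip")); // Unknown
```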

📖 **File Processors Guide** – Complete reference for all file types


### 🏢 Enterprise & Production Features

**Production-ready capabilities for regulated industries:**

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| **Enterprise Proxy** | Corporate proxy support | Behind firewalls | Proxy Setup |
| **Redis Memory** | Distributed conversation state | Multi-instance deployment | Redis Guide |
| **Memory** | Per-user condensed memory (S3/Redis/SQLite) | Long-term user context | Memory Guide |
| **Cost Optimization** | Automatic cheapest model selection | Budget control | Cost Guide |
| **Multi-Provider Failover** | Automatic provider switching | High availability | Failover Guide |
| **Telemetry & Monitoring** | OpenTelemetry integration | Observability | Telemetry Guide |
| **Security Hardening** | Credential management, auditing | Compliance | Security Guide |
| **Custom Model Hosting** | SageMaker integration | Private models | SageMaker Guide |
| **Load Balancing** | LiteLLM proxy integration | Scale & routing | Load Balancing |

**Security & Compliance:**

- ✅ SOC2 Type II compliant deployments
- ✅ ISO 27001 certified infrastructure compatible
- ✅ GDPR-compliant data handling (EU providers available)
- ✅ HIPAA compatible (with proper configuration)
- ✅ Hardened OS verified (SELinux, AppArmor)
- ✅ Zero credential logging
- ✅ Encrypted configuration storage
- ✅ Automatic context window management with 4-stage compaction pipeline and 80% budget gate
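To make the 80% budget gate concrete, here is a hedged sketch of the check it implies; the chars/4 token estimate is a common rough heuristic, not NeuroLink's per-provider estimator, and `shouldCompact` is illustrative, not an SDK function:

```typescript
// Illustrative budget gate: trigger compaction once estimated token usage
// crosses 80% of the provider's context window.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // rough chars/4 heuristic (assumption)
}

function shouldCompact(
  conversation: string[],
  contextWindow: number,
  budgetRatio = 0.8, // the 80% budget gate described above
): boolean {
  const used = conversation.reduce((sum, m) => sum + estimateTokens(m), 0);
  return used > contextWindow * budgetRatio;
}

const messages = ["x".repeat(4000), "y".repeat(4000)]; // ~2000 estimated tokens
console.log(shouldCompact(messages, 2000)); // true  (2000 > 1600)
console.log(shouldCompact(messages, 4000)); // false (2000 < 3200)
```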

📖 **Enterprise Deployment Guide** – Complete production checklist


### Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

#### Capabilities

| Feature | Description | Benefit |
| --- | --- | --- |
| **Distributed Memory** | Share conversation context across instances | Horizontal scaling |
| **Session Export** | Export full history as JSON | Analytics, debugging, audit |
| **Auto-Detection** | Automatic Redis discovery from environment | Zero-config in containers |
| **Graceful Failover** | Falls back to in-memory if Redis unavailable | High availability |
| **TTL Management** | Configurable session expiration | Memory management |

#### Quick Setup

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    enableSummarization: true,
  },
});

// Or explicit Redis configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    redisConfig: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      ttl: 86400, // 24-hour session expiration (seconds)
    },
  },
});

// Retrieve conversation history for analytics
const history = await neurolink.getConversationHistory("session-id");
await saveToDataWarehouse(history); // saveToDataWarehouse: your own analytics sink
```

#### Docker Quick Start

```bash
# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js
```

**Redis Setup Guide | Production Configuration | Migration Patterns**


### 🎨 Professional CLI

**15+ commands** for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| `setup` | Interactive provider configuration | `neurolink setup` | Setup Guide |
| `generate` | Text generation | `neurolink gen "Hello"` | Generate |
| `stream` | Streaming generation | `neurolink stream "Story"` | Stream |
| `status` | Provider health check | `neurolink status` | Status |
| `loop` | Interactive session | `neurolink loop` | Loop |
| `mcp` | MCP server management | `neurolink mcp discover` | MCP CLI |
| `models` | Model listing | `neurolink models` | Models |
| `eval` | Model evaluation | `neurolink eval` | Eval |
| `serve` | Start HTTP server in foreground mode | `neurolink serve` | Serve |
| `server start` | Start HTTP server in background mode | `neurolink server start` | Server |
| `server stop` | Stop running background server | `neurolink server stop` | Server |
| `server status` | Show server status information | `neurolink server status` | Server |
| `server routes` | List all registered API routes | `neurolink server routes` | Server |
| `server config` | View or modify server configuration | `neurolink server config` | Server |
| `server openapi` | Generate OpenAPI specification | `neurolink server openapi` | Server |
| `rag chunk` | Chunk documents for RAG | `neurolink rag chunk f.md` | RAG CLI |

**RAG flags** are available on `generate` and `stream`: `--rag-files`, `--rag-strategy`, `--rag-chunk-size`, `--rag-chunk-overlap`, `--rag-top-k`

📖 **Complete CLI Reference** – All commands and options


### 🤖 GitHub Action

Run AI-powered workflows directly in GitHub Actions with 13 provider support and automatic PR/issue commenting.

- uses: juspay/neurolink@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    prompt: "Review this PR for security issues and code quality"
    post_comment: true
Feature Description
Multi-Provider 13 providers with unified interface
PR/Issue Comments Auto-post AI responses with intelligent updates
Multimodal Support Attach images, PDFs, CSVs, Excel, Word, JSON, YAML, XML, HTML, SVG, code files to prompts
Cost Tracking Built-in analytics and quality evaluation
Extended Thinking Deep reasoning with thinking tokens

📖 GitHub Action Guide - Complete setup and examples


💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
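The fallback behavior above can be pictured as trying providers in preference order and returning the first success. This is a minimal generic sketch of that pattern, not NeuroLink's internal implementation; the helper name, the provider shape, and the error handling are assumptions for illustration.

```typescript
// Minimal multi-provider fallback sketch: call each provider's generate
// function in order; on failure, record the error and try the next one.
type GenerateFn = (prompt: string) => Promise<string>;

async function generateWithFallback(
  prompt: string,
  providers: Array<{ name: string; generate: GenerateFn }>,
): Promise<{ provider: string; content: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      const content = await p.generate(prompt);
      return { provider: p.name, content };
    } catch (err) {
      errors.push(`${p.name}: ${String(err)}`); // remember why it failed
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}
```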

Revolutionary Interactive CLI

NeuroLink’s CLI goes beyond simple commands: it’s a full AI development environment.

Why Interactive Mode Changes Everything

Feature Traditional CLI NeuroLink Interactive
Session State None Full persistence
Memory Per-command Conversation-aware
Configuration Flags per command /set persists across session
Tool Testing Manual per tool Live discovery & testing
Streaming Optional Real-time default

Live Demo: Development Session

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✓ Discovered 58 MCP tools:
   GitHub: create_issue, list_repos, create_pr...
   PostgreSQL: query, insert, update...
   [full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json

Session Commands Reference

Command Purpose
/set <key> <value> Persist configuration (provider, model, temperature)
/mcp discover List all available MCP tools
/export json Export conversation to JSON
/history View conversation history
/clear Clear context while keeping settings

Interactive CLI Guide | CLI Reference

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

# RAG: Ask questions about your docs (auto-chunks, embeds, searches)
npx @juspay/neurolink generate "What are the key features?" \
  --rag-files ./docs/guide.md ./docs/api.md --rag-strategy markdown

# Claude proxy + local OpenObserve dashboard
npx @juspay/neurolink proxy setup
npx @juspay/neurolink proxy telemetry setup
npx @juspay/neurolink proxy status --format json

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
      "./report.xlsx", // Auto-detected as Excel
      "./config.json", // Auto-detected as JSON
      "./diagram.svg", // Auto-detected as SVG (injected as text)
      "./app.ts", // Auto-detected as TypeScript code
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

// RAG: Ask questions about your documents
const answer = await neurolink.generate({
  input: { text: "What are the main architectural decisions?" },
  rag: {
    files: ["./docs/architecture.md", "./docs/decisions.md"],
    strategy: "markdown",
    topK: 5,
  },
});
console.log(answer.content); // AI searches your docs and answers
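Under the hood, topK-style retrieval typically scores each chunk's embedding against the query embedding and keeps the k best matches. The sketch below shows that pattern with cosine similarity; it is an assumed mechanic for illustration, not NeuroLink's internal retrieval code.

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every chunk against the query and return the k best texts.
function topK(
  query: number[],
  chunks: Array<{ text: string; embedding: number[] }>,
  k: number,
): string[] {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.text);
}
```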

Gemini 3 with Extended Thinking

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingConfig: {
    thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
  },
});

console.log(result.content);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

Capability Highlights
Provider unification 13+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3).
Multimodal pipeline Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types.
Quality & governance Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging.
Memory & context Conversation memory, Redis history export (Q4), context summarization (Q4).
CLI tooling Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output.
Enterprise ops Proxy support, regional routing (Q3), telemetry hooks, local OpenObserve dashboard setup, configuration management.
Tool ecosystem MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search.

Documentation Map

Area When to Use Link
Getting started Install, configure, run first prompt docs/getting-started/index.md
Feature guides Understand new functionality front-to-back docs/features/index.md
CLI reference Command syntax, flags, loop sessions docs/cli/index.md
SDK reference Classes, methods, options docs/sdk/index.md
RAG Document chunking, hybrid search, reranking, rag:{} API docs/features/rag.md
Integrations LiteLLM, SageMaker, MCP docs/litellm-integration.md
Advanced Middleware, architecture, streaming patterns docs/advanced/index.md
Cookbook Practical recipes for common patterns docs/cookbook/index.md
Guides Migration, Redis, troubleshooting, provider selection docs/guides/index.md
Operations Configuration, troubleshooting, provider matrix docs/reference/index.md



Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.
