Manus Twin Documentation
Overview
Manus Twin is a local Windows application that replicates the architecture and functionality of Manus.im. It provides a powerful AI assistant interface that supports multiple AI models including Gemini and Claude, with the ability to add custom API keys. The application includes a virtual Linux sandbox, memory modules for longer context, and an enhanced knowledge system supporting up to 100 knowledge entries.
Table of Contents
- Architecture
- Features
- Installation
- Configuration
- Usage
- API Reference
- Troubleshooting
- Development
Architecture
Manus Twin follows a modular architecture with the following key components:
Backend Components
- Server: Express.js server that handles API requests and serves the frontend
- Model Integration: Interface for interacting with different AI models
- API Management: System for securely storing and managing API keys
- Workflow Engine: Orchestrates the execution of tasks and function calls
- Memory Module: Maintains conversation history and context across sessions
- Knowledge System: Stores and retrieves knowledge entries for context
- Virtual Linux Sandbox: Provides a secure environment for executing commands
- Windows Compatibility Layer: Ensures the application runs smoothly on Windows
Frontend Components
- Dashboard: Main interface for interacting with the AI assistant
- Chat Interface: Where conversations with the AI take place
- Settings: Configuration options for the application
- Model Selection: Interface for choosing which AI model to use
Data Flow
- User inputs a message through the chat interface
- The message is sent to the backend server
- The workflow engine processes the message and determines the appropriate action
- If needed, the memory module provides context from previous conversations
- If relevant, the knowledge system provides additional context
- The model integration sends the message to the selected AI model
- The AI model generates a response
- The response is returned to the frontend and displayed to the user
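The steps above can be sketched as a single context-assembly pass before the model call. The following TypeScript is an illustrative sketch only, not the actual Manus Twin code: the `Message` and `KnowledgeEntry` shapes and the `buildModelInput` helper are assumptions.

```typescript
// Hypothetical sketch of how the workflow engine might combine the
// user message, conversation memory, and active knowledge entries
// into a single model input. Names and shapes are illustrative.

interface Message {
  role: "user" | "assistant";
  content: string;
}

interface KnowledgeEntry {
  name: string;
  content: string;
  active: boolean;
}

function buildModelInput(
  userMessage: string,
  history: Message[],
  knowledge: KnowledgeEntry[],
): Message[] {
  // Only active knowledge entries are injected as context.
  const context = knowledge
    .filter((k) => k.active)
    .map((k) => `${k.name}: ${k.content}`)
    .join("\n");

  const messages: Message[] = [];
  if (context) {
    messages.push({ role: "user", content: `Context:\n${context}` });
  }
  // Prior conversation turns come from the memory module; the new
  // user message goes last.
  return [...messages, ...history, { role: "user", content: userMessage }];
}
```

The assembled array would then be handed to the model integration layer for the selected provider.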
Features
Multiple AI Model Support
- Gemini Models: Support for various Gemini models, including 1.5 Flash and 2.0 experimental
- Claude Models: Support for Claude 3.7 and other Claude models
- Custom API Integration: Ability to add custom API endpoints for other AI models
Memory and Knowledge
- Long-term Memory: Maintains context across sessions
- Enhanced Knowledge System: Supports up to 100 knowledge entries (compared to 20 in Manus)
- Context-aware Responses: AI responses take into account previous conversations and stored knowledge
Virtual Linux Sandbox
- Command Execution: Run Linux commands in a secure sandbox environment
- File Operations: Create, read, update, and delete files within the sandbox
- Shell Sessions: Interactive shell sessions for complex operations
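A sandbox layer typically screens commands before executing them. The sketch below shows one possible blocklist check in TypeScript; the patterns and the `isCommandAllowed` helper are hypothetical illustrations, not Manus Twin's actual sandbox policy.

```typescript
// Illustrative sketch of pre-execution command screening for a
// sandbox. The blocklist here is an assumption for demonstration;
// the real policy is not documented in this file.
const BLOCKED_PATTERNS: RegExp[] = [
  /\brm\s+-rf\s+\//, // destructive recursive delete from root
  /\bshutdown\b/,    // host power control
];

function isCommandAllowed(command: string): boolean {
  return !BLOCKED_PATTERNS.some((pattern) => pattern.test(command));
}
```

A real implementation would pair a check like this with resource limits and filesystem isolation rather than relying on pattern matching alone.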
Windows Integration
- Local Deployment: Runs locally on Windows computers
- Desktop Integration: Creates shortcuts and registers with Windows
- Portable Version: Option to create a portable version of the application
Installation
System Requirements
- Windows 10 or later
- 8GB RAM minimum (16GB recommended)
- 2GB free disk space
- Internet connection for AI model access
Installation Methods
Installer
- Download the installer from the releases page
- Run the installer and follow the on-screen instructions
- Launch the application from the Start menu or desktop shortcut
Portable Version
- Download the portable version from the releases page
- Extract the ZIP file to a location of your choice
- Run the `start.bat` file to launch the application
Configuration
API Keys
- Open the application and navigate to the Settings page
- Click on “API Keys” in the sidebar
- Enter your API keys for the models you want to use:
- For Gemini: Enter your Google API key
- For Claude: Enter your Anthropic API key
- For custom providers: Enter the API key and endpoint URL
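For custom providers, validating the key and endpoint URL before saving avoids silent failures later. This is a minimal sketch; the `CustomProvider` field names are assumptions, not the actual Manus Twin settings schema.

```typescript
// Hypothetical shape of a custom provider entry. Field names are
// illustrative assumptions, not the real configuration schema.
interface CustomProvider {
  name: string;
  apiKey: string;
  endpointUrl: string;
}

function validateProvider(p: CustomProvider): string[] {
  const errors: string[] = [];
  if (!p.apiKey) {
    errors.push("apiKey is required");
  }
  try {
    // The URL constructor throws on malformed input.
    new URL(p.endpointUrl);
  } catch {
    errors.push("endpointUrl must be a valid URL");
  }
  return errors;
}
```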
Application Settings
- Theme: Choose between light and dark mode
- Memory Retention: Configure how long conversations are stored
- Knowledge Management: Add, edit, or remove knowledge entries
- Sandbox Configuration: Configure the virtual Linux sandbox environment
Usage
Starting a Conversation
- Launch the application
- Select the AI model you want to use from the dropdown menu
- Type your message in the input field and press Enter or click Send
- The AI will respond based on the selected model and your input
Using the Virtual Linux Sandbox
- In the chat interface, ask the AI to perform a task that requires the sandbox
- The AI will execute the necessary commands in the sandbox environment
- Results will be displayed in the chat interface
Managing Knowledge
- Navigate to the Settings page
- Click on “Knowledge” in the sidebar
- Add new knowledge entries with a name, content, and optional tags
- Activate or deactivate knowledge entries as needed
API Reference
REST API Endpoints
- /api/models: Manage AI models
- /api/functions: Register and execute functions
- /api/settings: Configure application settings
- /api/integration: Interact with AI models
- /api/management: Manage API keys
- /api/workflow: Control the workflow engine
- /api/memory: Access the memory module
- /api/knowledge: Manage knowledge entries
- /api/sandbox: Interact with the virtual Linux sandbox
- /api/platform: Access platform-specific functionality
- /api/deployment: Manage application deployment
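A small helper keeps URLs consistent when calling these routes from client code. Only the /api/... paths come from this document; the base URL, port, and `apiUrl` helper below are assumptions about a local deployment.

```typescript
// Sketch of a URL builder for the REST endpoints listed above.
// BASE_URL is an assumed local default, not a documented value.
const BASE_URL = "http://localhost:3000";

type Resource =
  | "models" | "functions" | "settings" | "integration" | "management"
  | "workflow" | "memory" | "knowledge" | "sandbox" | "platform" | "deployment";

function apiUrl(resource: Resource, path = ""): string {
  return `${BASE_URL}/api/${resource}${path}`;
}

// Usage (illustrative): fetch(apiUrl("knowledge"), { method: "GET" })
```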
WebSocket API
- /ws: Real-time communication for streaming responses
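Streaming over /ws means accumulating partial chunks into a full response on the client. The wire format sketched below (JSON objects with optional `delta` and `done` fields) is an assumption about the protocol, not the documented message shape.

```typescript
// Hypothetical reducer for streamed /ws messages: each incoming
// frame is a JSON object that may carry a text `delta` and/or a
// terminal `done` flag. The format is an assumption.
interface StreamEvent {
  delta?: string;
  done?: boolean;
}

interface StreamState {
  text: string;
  finished: boolean;
}

function reduceStream(state: StreamState, raw: string): StreamState {
  if (state.finished) return state; // ignore frames after `done`
  const event: StreamEvent = JSON.parse(raw);
  return {
    text: state.text + (event.delta ?? ""),
    finished: event.done === true,
  };
}

// Usage (illustrative): feed each WebSocket message payload through
// reduceStream and render state.text as it grows.
```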
Troubleshooting
Common Issues
API Key Issues
- Error: “Invalid API key”
- Solution: Verify that you’ve entered the correct API key in the Settings page
Model Availability
- Error: “Model not available”
- Solution: Some models may be deprecated or renamed. Try using a different model or check the provider’s documentation for the latest model names.
Performance
- Issue: Slow response times
- Solution: Close other resource-intensive applications, ensure you have a stable internet connection, or try a different AI model that may be more efficient.
Development
Building from Source
- Clone the repository
- Install dependencies with `npm install`
- Build the application with `npm run build`
- Start the application with `npm start`
Project Structure
- /backend: Server-side code
  - /src: Source code
    - /api: API routes
    - /models: Model integration
    - /workflow: Workflow engine
    - /memory: Memory module
    - /knowledge: Knowledge system
    - /sandbox: Virtual Linux sandbox
    - /platform: Platform-specific code
- /frontend: Client-side code
  - /src: Source code
    - /components: React components
    - /pages: Page layouts
    - /utils: Utility functions
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
License
This project is licensed under the MIT License - see the LICENSE file for details.