Companies like OpenAI built "Super AI" that threatens human independence. We crave individuality: AI that amplifies, not erases, YOU.
We're challenging that with "Second Me": an open-source prototype where you craft your own AI self, a new AI species that preserves you, delivers your context, and defends your interests.
It's locally trained and hosted (your data, your control) yet globally connected, scaling your intelligence across an AI network. Beyond that, it's your AI identity interface: a bold standard that links your AI to the world, sparks collaboration among AI selves, and powers tomorrow's truly native AI apps.
Tech enthusiasts, AI pros, domain experts: join us! Second Me is your launchpad for extending your mind into the digital horizon.
Train Your AI Self with AI-Native Memory (Paper)
Start training your Second Me today with your own memories! Using Hierarchical Memory Modeling (HMM) and the Me-Alignment Algorithm, your AI self captures your identity, understands your context, and reflects you authentically.
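The internals of Hierarchical Memory Modeling aren't detailed here; as a loose conceptual sketch (the class and method names below are our own illustration, not the project's actual API), hierarchical memory can be pictured as raw entries periodically condensed into higher-level digests that feed a global self-profile:

```python
# Conceptual sketch only: layer raw memories into progressively more
# abstract summaries. Names are illustrative, not the real Second Me API.
from dataclasses import dataclass, field

@dataclass
class HierarchicalMemory:
    raw: list[str] = field(default_factory=list)        # L0: raw entries
    summaries: list[str] = field(default_factory=list)  # L1: per-batch digests
    profile: str = ""                                   # L2: global self-model

    def add(self, entry: str, batch_size: int = 3) -> None:
        self.raw.append(entry)
        if len(self.raw) % batch_size == 0:
            # In the real system an LLM would summarize; here we just join.
            self.summaries.append(" / ".join(self.raw[-batch_size:]))
            self.profile = (f"{len(self.summaries)} digest(s) covering "
                            f"{len(self.raw)} memories")

mem = HierarchicalMemory()
for note in ["likes hiking", "works on ML", "prefers tea"]:
    mem.add(note)
print(mem.profile)
```

In the actual system the summarization step would be LLM-driven and the profile would be the trained model itself; the sketch only shows the layered shape of the data.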
Launch your AI self from your laptop onto our decentralized networkβanyone or any app can connect with your permission, sharing your context as your digital identity.
Roleplay: Your AI self switches personas to represent you in different scenarios.
AI Space: Collaborate with other Second Mes to spark ideas or solve problems.
Unlike traditional centralized AI systems, Second Me ensures that your information and intelligence remain local and completely private.
Star and join us, and you'll receive all release notifications from GitHub without delay!
Note: "B" in the table represents "billion parameters model". Data shown are examples only; actual supported model sizes may vary depending on system optimization, deployment environment, and other hardware/software conditions.
Memory (GB) | Docker Deployment (Windows/Linux) | Docker Deployment (Mac) | Integrated Setup (Windows/Linux) | Integrated Setup (Mac) |
---|---|---|---|---|
8 | ~0.8B (example) | ~0.4B (example) | ~1.0B (example) | ~0.6B (example) |
16 | ~1.5B (example) | ~0.5B (example) | ~2.0B (example) | ~0.8B (example) |
32 | ~2.8B (example) | ~1.2B (example) | ~3.5B (example) | ~1.5B (example) |
Note: Models below 0.5B may not deliver satisfactory performance on complex tasks. We're continuously improving cross-platform support; please submit an issue to report feedback or compatibility problems on different operating systems.
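As a back-of-envelope way to read the table above, weight memory for a quantized model scales roughly linearly with parameter count. The constants below (GB per billion parameters and a fixed runtime overhead) are illustrative assumptions, not measured values from this project:

```python
# Rough illustration only: real usage depends on quantization, context
# length (KV cache), and runtime overhead, so treat constants as guesses.
def estimate_memory_gb(params_billion: float,
                       gb_per_billion: float = 0.7,
                       overhead_gb: float = 2.0) -> float:
    """Crude RAM estimate for running a local model of a given size."""
    return params_billion * gb_per_billion + overhead_gb

for size in (0.5, 1.5, 3.0):
    print(f"{size}B model: ~{estimate_memory_gb(size):.1f} GB")
```

By this crude rule a 1.5B model needs on the order of 3 GB of RAM, broadly consistent with the 16 GB row of the table; always verify against your actual deployment.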
MLX Acceleration: Mac M-series users can use MLX to run larger models (CLI-only).
Note: Docker setup on Mac M-series chips carries a 25-30% performance overhead compared to the integrated setup, but offers an easier installation process.
- Docker and Docker Compose installed on your system
  - For Docker installation: Get Docker
  - For Docker Compose installation: Install Docker Compose
- For Windows users: you can use MinGW to run `make` commands. You may need to modify the Makefile by replacing Unix-specific commands with Windows-compatible alternatives.
- Memory usage settings (important):
  - Configure these in Docker Desktop (macOS or Windows) under Dashboard -> Settings -> Resources
  - Make sure to allocate sufficient memory resources (at least 8 GB recommended)
- Clone the repository
git clone [email protected]:Mindverse/Second-Me.git
cd Second-Me
- Start the containers
make docker-up
- After starting the service (either with local setup or Docker), open your browser and visit:
http://localhost:3000
- View help and more commands
make help
- For custom Ollama model configuration, please refer to: Custom Model Config(Ollama)
Note: The Integrated Setup provides the best performance, especially for larger models, as it runs directly on your host system without containerization overhead.
- Python 3.10+ installed on your system
- Node.js 18+ and npm installed
- Basic build tools (cmake, make, etc.)
- Clone the repository
git clone [email protected]:Mindverse/Second-Me.git
cd Second-Me
- Run the integrated setup (installs all dependencies and prepares the environment)
make setup
- Start all services
make restart
- After services are started, open your browser and visit:
http://localhost:3000
💡 Advantages: This method offers better performance than Docker on Mac and Linux systems while still providing a simple setup process. It installs directly on your host system without containerization overhead. (Windows not tested)
🛠️ Feel free to follow the User tutorial to build your Second Me.
💡 Check out the links below to see how Second Me can be used in real-life scenarios:
- Felix AMA (Roleplay app)
- Brainstorming a 15-Day European City Itinerary (Network app)
- Icebreaking as a Speed Dating Match (Network app)
The following features have been completed internally and are being gradually integrated into the open-source project. For detailed experimental results and technical specifications, please refer to our Technical Report.
- [x] Long Chain-of-Thought Training Pipeline: Enhanced reasoning capabilities through extended thought process training
- [x] Direct Preference Optimization for L2 Model: Improved alignment with user preferences and intent
- [ ] Data Filtering for Training: Advanced techniques for higher quality training data selection
- [x] Apple Silicon Support: Native support for Apple Silicon processors with MLX Training and Serving capabilities
- [ ] Natural Language Memory Summarization: Intuitive memory organization in natural language format
We welcome contributions to Second Me! Whether you're interested in fixing bugs, adding new features, or improving documentation, please check out our Contribution Guide. You can also support Second Me by sharing your experience with it in your community, at tech conferences, or on social media.
For more detailed information about development, please refer to our Contributing Guide.
We would like to express our gratitude to all the individuals who have contributed to Second Me! If you're interested in contributing to the future of intelligence uploading, whether through code, documentation, or ideas, please feel free to submit a pull request to our repository: Second-Me.
Made with contrib.rocks.
This work leverages the power of the open-source community.
For data synthesis, we utilized GraphRAG from Microsoft.
For model deployment, we utilized llama.cpp, which provides efficient inference capabilities.
Our base models primarily come from the Qwen2.5 series.
We also want to extend our sincere gratitude to all users who have experienced Second Me. We recognize that there is significant room for optimization throughout the entire pipeline, and we are fully committed to iterative improvements to ensure everyone can enjoy the best possible experience locally.
Second Me is open source software licensed under the Apache License 2.0. See the LICENSE file for more details.