💡 Summary
AI Partner Chat enables personalized, context-aware conversations by integrating user and AI personas with vectorized notes.
Risk: Medium. Review: shell/CLI command execution; filesystem read/write scope and path traversal; dependency pinning and supply-chain risk. Run with least privilege and audit before enabling in production.
name: ai-partner-chat
description: Provides personalized conversation based on the user persona and vectorized notes. Use when the user needs personalized interaction or context-aware responses, or wants the AI to remember and reference their previous thoughts and notes.
AI Partner Chat
Overview
Provide personalized, context-aware conversations by integrating user persona, AI persona, and vectorized personal notes. This skill enables AI to remember and reference the user's previous thoughts, preferences, and knowledge base, creating a more coherent and personalized interaction experience.
Prerequisites
Before first use, complete these steps in order:
1. Create directory structure

   ```shell
   mkdir -p config notes vector_db scripts
   ```

2. Set up Python environment

   ```shell
   python3 -m venv venv
   ./venv/bin/pip install -r .claude/skills/ai-partner-chat/scripts/requirements.txt
   ```

   Note: the first run will download the embedding model (~4.3 GB).

3. Generate persona templates by copying from `.claude/skills/ai-partner-chat/assets/` to `config/`:
   - `user-persona-template.md` → `config/user-persona.md`
   - `ai-persona-template.md` → `config/ai-persona.md`

4. User adds notes: place markdown notes in the `notes/` directory (any format or structure).

5. Initialize the vector database (see section 1.2 below).
Now proceed to Core Workflow →
Core Workflow
1. Initial Setup
Before using this skill for the first time, complete the following setup:
1.1 Create Persona Files
Create two Markdown files to define interaction parameters:
User Persona (user-persona.md):
- Define user's background, expertise, interests
- Specify communication preferences and working style
- Include learning goals and current projects
- Use template: `assets/user-persona-template.md`
AI Persona (ai-persona.md):
- Define AI's role and expertise areas
- Specify communication style and tone
- Set interaction guidelines and response strategies
- Define how to use user context and reference notes
- Use template: `assets/ai-persona-template.md`
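As a purely hypothetical illustration (the actual templates shipped in `assets/` are authoritative), a filled-in `config/user-persona.md` might look like:

```markdown
# User Persona

## Background
Backend engineer, 5 years of Python; currently learning vector databases.

## Communication Preferences
Concise answers first, details on request; prefers code examples over prose.

## Learning Goals & Current Projects
Building a personal knowledge base from daily markdown notes.
```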
1.2 Initialize Vector Database
This skill uses an AI Agent approach for intelligent note chunking:
When you initialize the vector database, Claude Code will:
- Read notes from the `<project_root>/notes/` directory
- Analyze each note's format (daily logs, structured docs, continuous text, etc.)
- Generate custom chunking code tailored to that specific note
- Execute the code to produce chunks conforming to the `chunk_schema.Chunk` format
- Generate embeddings using BAAI/bge-m3 (optimized for Chinese text)
- Store in ChromaDB at `<project_root>/vector_db/`
Key advantages:
- ✅ No pre-written chunking strategies needed
- ✅ Each note gets optimal chunking based on its actual structure
- ✅ True AI Agent: generates tools on demand rather than calling pre-built tools
Chunk Format Requirement:
All chunks must conform to this schema (see scripts/chunk_schema.py):
```python
{
    'content': 'chunk text content',
    'metadata': {
        'filename': 'note.md',        # Required
        'filepath': '/path/to/file',  # Required
        'chunk_id': 0,                # Required
        'chunk_type': 'date_entry',   # Required
        'date': '2025-11-07',         # Optional
        'title': 'Section title',     # Optional
    }
}
```
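The required/optional split above can be enforced mechanically. This is only a sketch of the kind of check the real `validate_chunk()` in `scripts/chunk_schema.py` likely performs; the actual implementation may differ:

```python
# Metadata keys the schema marks as required
REQUIRED_META = ("filename", "filepath", "chunk_id", "chunk_type")


def validate_chunk(chunk: dict) -> bool:
    """Return True if the chunk has non-empty content and all required metadata keys."""
    if not isinstance(chunk.get("content"), str) or not chunk["content"].strip():
        return False
    meta = chunk.get("metadata")
    if not isinstance(meta, dict):
        return False
    return all(key in meta for key in REQUIRED_META)
```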
Implementation Requirements
Location: Create <project_root>/scripts/chunk_and_index.py
Required structure:
```python
# Import provided utilities
import sys
from pathlib import Path
from typing import List

sys.path.insert(0, str(Path(__file__).parent.parent / ".claude/skills/ai-partner-chat/scripts"))

from chunk_schema import Chunk, validate_chunk
from vector_indexer import VectorIndexer


def chunk_note_file(filepath: str) -> List[Chunk]:
    """
    Analyze THIS file's format and generate appropriate chunks.

    Each chunk must conform to chunk_schema.Chunk format:
    {
        'content': 'text',
        'metadata': {
            'filename': 'file.md',
            'filepath': '/path/to/file',
            'chunk_id': 0,
            'chunk_type': 'your_label'
        }
    }
    """
    # TODO: Analyze actual file format (NOT template-based)
    # TODO: Generate chunks based on analysis
    # TODO: Validate each chunk with validate_chunk()
    pass


def main():
    # Initialize vector database
    indexer = VectorIndexer(db_path="./vector_db")
    indexer.initialize_db()

    # Process all note files
    all_chunks = []
    for note_file in Path("./notes").glob("**/*"):
        if note_file.is_file():
            chunks = chunk_note_file(str(note_file))
            all_chunks.extend(chunks)

    # Index chunks
    indexer.index_chunks(all_chunks)


if __name__ == "__main__":
    main()
```
Execute: `./venv/bin/python scripts/chunk_and_index.py`
Key points:
- The `chunk_note_file()` logic should be created dynamically from analysis of the actual file content
- Do NOT copy chunking strategies from examples or templates
- Each file may have a different format; analyze each one individually
- Only requirement: output must conform to `chunk_schema.Chunk`
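To make the expected output shape concrete: below is the kind of function Claude Code might generate after observing that one particular note is a daily log with `## YYYY-MM-DD` headers. It illustrates the chunk format only; per the rule above, generated code must be derived from the actual file, not copied from this sketch:

```python
import re
from pathlib import Path

# Matches headers like "## 2025-11-07" at the start of a line
DATE_HEADER = re.compile(r"^## (\d{4}-\d{2}-\d{2})[ \t]*$", re.MULTILINE)


def chunk_daily_log(filepath: str) -> list:
    """Split a daily-log note into one schema-conforming chunk per dated entry."""
    text = Path(filepath).read_text(encoding="utf-8")
    chunks = []
    matches = list(DATE_HEADER.finditer(text))
    for i, m in enumerate(matches):
        # Entry body runs until the next date header (or end of file)
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        body = text[m.end():end].strip()
        if not body:
            continue
        chunks.append({
            "content": body,
            "metadata": {
                "filename": Path(filepath).name,
                "filepath": str(Path(filepath).resolve()),
                "chunk_id": len(chunks),
                "chunk_type": "date_entry",
                "date": m.group(1),
            },
        })
    return chunks
```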
2. Conversation Workflow
For each user query, follow this process:
2.1 Load Personas
Read both persona files to understand:
- User's background, preferences, and communication style
- AI's role definition and interaction guidelines
- How to appropriately reference context
2.2 Retrieve Relevant Notes
Query the vector database to find the top 5 most semantically similar notes:
```python
from scripts.vector_utils import get_relevant_notes

# Query for relevant context
relevant_notes = get_relevant_notes(
    query=user_query,
    db_path="./vector_db",
    top_k=5,
)
```
Or use the command-line tool:
```shell
python scripts/query_notes.py "user query text" --top-k 5
```
2.3 Construct Context
Combine the following elements to inform the response:
- User Persona: Background, preferences, expertise
- AI Persona: Role, communication style, guidelines
- Relevant Notes (top 5): User's previous thoughts and knowledge
- Current Conversation: Ongoing chat history
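A minimal sketch of how these four elements could be assembled into one context block. The section headings and dict shapes here are assumptions for illustration, not the skill's actual prompt format:

```python
def build_context(user_persona: str, ai_persona: str,
                  relevant_notes: list, history: list) -> str:
    """Concatenate personas, retrieved note chunks, and chat history into one context string."""
    notes_block = "\n\n".join(
        f"[Note: {n['metadata']['filename']}]\n{n['content']}" for n in relevant_notes
    )
    history_block = "\n".join(f"{turn['role']}: {turn['text']}" for turn in history)
    return (
        f"# User Persona\n{user_persona}\n\n"
        f"# AI Persona\n{ai_persona}\n\n"
        f"# Relevant Notes (top {len(relevant_notes)})\n{notes_block}\n\n"
        f"# Conversation\n{history_block}"
    )
```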
2.4 Generate Response
Synthesize a response that:
- Aligns with both persona definitions
- Naturally references relevant notes when applicable
- Maintains continuity with user's knowledge base
- Follows the AI persona's communication guidelines
When Referencing Notes:
- Use natural phrasing: "Based on your previous note about..."
- Make connections: "This relates to what you mentioned in..."
- Avoid robotic citations: integrate context smoothly
Example Response Pattern:
[Acknowledge user's query in preferred communication style]
[Incorporate relevant note context naturally if applicable]
"I remember you mentioned [insight from note] - this connects well with..."
[Provide main response following AI persona guidelines]
[Optional: Ask follow-up question based on user's learning style]
3. Maintenance
Adding New Notes
When the user creates new notes, add them to the vector database:
```shell
python scripts/add_note.py /path/to/new_note.md
```
Updating Personas
Personas can be updated anytime by editing the Markdown files. Changes take effect in the next conversation.
Reinitializing Database
To completely rebuild the vector database:
```shell
python scripts/init_vector_db.py /path/to/notes --db-path ./vector_db
```
This will delete the existing database and re-index all notes.
Technical Details
Data Architecture
User data is stored in project root, not inside the skill directory:
```
<project_root>/
├── notes/                    # User's markdown notes
├── vector_db/                # ChromaDB vector database
├── venv/                     # Python dependencies
├── config/
│   ├── user-persona.md       # User persona definition
│   └── ai-persona.md         # AI persona definition
└── .claude/skills/ai-partner-chat/   # Skill code (can be deleted/reinstalled)
    ├── SKILL.md
    └── scripts/
        ├── chunk_schema.py   # Chunk format specification
        ├── vector_indexer.py # Core indexing utilities
        └── vector_utils.py   # Query utilities
```
Design principles:
- ✅ User data (notes, personas, vectors) lives in the project root
- ✅ Easy to back up, migrate, or share across skills
- ✅ Skill code is stateless and replaceable
AI Agent Chunking
Philosophy: Instead of pre-written chunking strategies, Claude Code analyzes each note and generates optimal chunking code on the fly.
How it works:
- Claude reads a note file
- Analyzes its format features (date headers, section titles, separators, etc.)
- Writes Python code that chunks this specific note optimally
- Executes the code to produce chunks
- Validates chunks against the `chunk_schema.Chunk` format
- Indexes chunks using `vector_indexer.py`
Benefits:
- Adapts to any note format without pre-programming
- Can handle mixed formats, unusual structures, or evolving note styles
- True "vibe coding" approach - tools are created when needed
Vector Database
- Storage: ChromaDB (persistent local storage at `<project_root>/vector_db/`)
- Embedding Model: BAAI/bge-m3 (multilingual, optimized for Chinese)
- Similarity Metric: Cosine similarity
- Chunking: AI-generated custom code per note
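Cosine similarity over embedding vectors is what drives the top-k retrieval. A dependency-free sketch of just the ranking step (real queries go through ChromaDB; the toy 2-dimensional vectors here stand in for bge-m3 embeddings):

```python
import math


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def top_k(query_vec: list, chunk_vecs: dict, k: int = 5) -> list:
    """Return the ids of the k chunks whose vectors are most similar to the query."""
    ranked = sorted(chunk_vecs, key=lambda cid: cosine(query_vec, chunk_vecs[cid]),
                    reverse=True)
    return ranked[:k]
```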
Scripts
- `chunk_schema.py`: Defines the required chunk format specification
- `vector_indexer.py`: Core utilities for embedding generation and ChromaDB indexing
- `vector_utils.py`: Query utilities for retrieving relevant chunks
- `requirements.txt`: Python dependencies (chromadb, sentence-transformers)
Note: No pre-written chunking scripts. Chunking is done by Claude Code dynamically.
Pros
- Personalized interactions based on user context
- Dynamic chunking adapts to various note formats
- Integrates seamlessly with user and AI personas
Cons
- Initial setup can be complex for non-technical users
- Requires maintenance of user and AI persona files
- Dependency on external libraries for functionality
Related Skills
- pytorch
- agno
- nuxt-skills
Disclaimer: This content is sourced from GitHub open source projects for display and rating purposes only.
Copyright belongs to the original author eze-is.
