**v3.2 changelog**

- **Phase 1**: populate BPM in `sample_metadata.db` (283/511 samples from filenames)
- **Phase 2**: DB-aware sample selection (`_pick_best_db`) with BPM ±5 and key matching
- **Phase 3**: auto-warp samples to the project tempo via `warp_clip_to_bpm`
- **Phase 4**: connect `pattern_library` engines (BassPatterns, ChordProgressions, MelodyGenerator)
- **Phase 5**: harmonic coherence: detect the key from the drum loop and transpose MIDI to match
- **Phase 6**: SentimientoLatino2025 + reggaeton3 integrated: 616 samples, 19 clean categories

New files:

- `engines/bpm_key_parser.py`: robust BPM + key parser for filenames
- `engines/populate_bpm_from_filenames.py`: DB population script
- `engines/recategorize_samples.py`: category normalization (19 categories)

Modified:

- `score_renderer.py`: DB selection, auto-warp, engine patterns, key detection, 18 categories
- `ai_loop.py`: SYSTEM_PROMPT with the full category list
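The Phase 1 filename scan can be sketched as a small regex pass. This is an illustrative parser only, not the actual `engines/bpm_key_parser.py` implementation, and the example filenames are hypothetical:

```python
import re
from typing import Optional, Tuple

# Illustrative sketch -- the real engines/bpm_key_parser.py may differ.
BPM_RE = re.compile(r"(?<!\d)(\d{2,3})\s*bpm", re.IGNORECASE)
KEY_RE = re.compile(r"([A-G][#b]?)(maj|min|m)?", re.IGNORECASE)

def parse_bpm_key(filename: str) -> Tuple[Optional[int], Optional[str]]:
    """Extract (bpm, key) from a sample filename; None when absent."""
    bpm = None
    m = BPM_RE.search(filename)
    if m and 60 <= int(m.group(1)) <= 200:  # keep only plausible tempos
        bpm = int(m.group(1))
    key = None
    # Split the filename into alphabetic tokens and look for a key token.
    for token in re.split(r"[^A-Za-z#]+", filename):
        k = KEY_RE.fullmatch(token)
        if k:
            root = k.group(1)[0].upper() + k.group(1)[1:]
            # Assumption: a bare root note defaults to major.
            quality = "min" if (k.group(2) or "").lower() in ("m", "min") else "maj"
            key = f"{root}{quality}"
            break
    return bpm, key

print(parse_bpm_key("perc_loop_95bpm_Amin.wav"))  # hypothetical name -> (95, 'Amin')
```

A real parser would also need to handle ambiguous single-letter tokens and key notation variants (`F#m`, `Gb`, `Cmaj`), which is why the DB population script only covered 283 of 511 samples.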
# CLAUDE.md - AbletonMCP_AI v3.2
> **Canonical project context** for AI agents.
> Read this BEFORE doing any work.
## CRITICAL RULES
1. **NEVER touch `libreria/` or `librerias/`** - User's sample library.
2. **NEVER delete project files** - Overwrite only.
3. **NEVER create debug .md files in project root** - All in `AbletonMCP_AI/docs/`.
4. **ALWAYS compile after changes**: `python -m py_compile "<file_path>"`
5. **ALWAYS restart Ableton** after changes to `__init__.py`.
6. **STRICT SESSION VIEW ONLY** - Arrangement View is not used for production.
## Architecture
```
AbletonMCP_AI/
├── __init__.py # Remote Script (All-in-one API)
├── docs/ # Sprints & SYSTEM_SCORE_RENDER.md
└── mcp_server/
├── server.py # MCP Server (130+ tools)
├── score_engine.py # [NEW] Pure Python song data model
├── score_renderer.py # [NEW] Session View renderer
├── ai_loop.py # [NEW] Autonomous production loop
└── scores/ # [NEW] JSON song storage
```
## Primary Workflow (Score → Render)
The preferred way to produce music is the **Compose-then-Render** pipeline:
1. **Compose**: Use `compose_from_template` or incremental `new_score` + `compose_*` tools.
2. **Review**: Use `get_score` to see the JSON structure.
3. **Save**: Use `save_score` to persist the song in `mcp_server/scores/`.
4. **Render**: Use `render_score` to inject the JSON into Ableton's Session View.
5. **Batch**: Use `render_all_scores` to produce multiple songs at once.
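The five steps above can be sketched end-to-end with a minimal in-memory model. The function names mirror the MCP tools, but the score schema and the helper bodies here are hypothetical stand-ins, not the real `score_engine.py` implementation:

```python
import json
import pathlib
import tempfile

# Hypothetical minimal score model -- the real SongScore schema may differ.
def new_score(title: str, bpm: int) -> dict:
    return {"title": title, "bpm": bpm, "sections": []}

def compose_section(score: dict, name: str, bars: int, patterns: list) -> dict:
    score["sections"].append({"name": name, "bars": bars, "patterns": patterns})
    return score

def save_score(score: dict, directory: pathlib.Path) -> pathlib.Path:
    # Persist the score as JSON, like save_score does in mcp_server/scores/.
    path = directory / f"{score['title']}.json"
    path.write_text(json.dumps(score, indent=2))
    return path

def render_score(score: dict) -> list:
    # One Session View scene per section.
    return [f"Scene: {s['name']} ({s['bars']} bars)" for s in score["sections"]]

score = new_score("demo_track", 95)
compose_section(score, "intro", 8, ["dembow"])
compose_section(score, "drop", 16, ["dembow", "bass"])
with tempfile.TemporaryDirectory() as d:
    path = save_score(score, pathlib.Path(d))
    print(render_score(json.loads(path.read_text())))
```

The JSON round-trip is the point of the design: a saved score is a complete, renderer-independent description, which is what makes `render_all_scores` batch production possible.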
## How It Works
1. **Ableton** starts the TCP server on port 9877.
2. **MCP tools** build a `SongScore` object in memory.
3. **Renderer** translates JSON sections into **Scenes** and clip definitions into **Clip Slots**.
4. **Patterns** (Dembow, Bass, etc.) are resolved server-side into MIDI notes.
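Step 4 (server-side pattern resolution) can be illustrated as a step-grid expansion. The dembow grid and the note format below are assumptions for illustration, not the project's actual `pattern_library` data:

```python
# Sketch of server-side pattern resolution: a 16-step grid (one bar of 16ths)
# is expanded into MIDI note dicts. The grid values are illustrative only.
DEMBOW = {
    36: "x...x...x...x...",  # kick  (C1) on every quarter note
    38: "...x..x....x..x.",  # snare (D1), dembow-style offbeat accents
}

def resolve_pattern(grid: dict, steps_per_beat: int = 4) -> list:
    """Expand a {pitch: step-string} grid into note dicts (times in beats)."""
    notes = []
    for pitch, row in grid.items():
        for step, char in enumerate(row):
            if char == "x":
                notes.append({
                    "pitch": pitch,
                    "start": step / steps_per_beat,
                    "duration": 1 / steps_per_beat,
                    "velocity": 100,
                })
    return sorted(notes, key=lambda n: (n["start"], n["pitch"]))

notes = resolve_pattern(DEMBOW)
print(len(notes))  # 8 notes in one bar
```

Resolving patterns on the server keeps the Remote Script thin: Ableton only ever receives concrete notes, never pattern names.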
## Agent Roles
- **Kimi** codes fast, implements features.
- **Qwen** verifies, compiles, debugs, creates next sprint.
- Refer to `docs/SYSTEM_SCORE_RENDER.md` for full technical details.