Checkpoint: 6-phase upgrade — DB-aware samples, auto-warp, pattern engines, harmonic coherence, SentimientoLatino2025

Phase 1: Populate BPM in sample_metadata.db (283/511 samples from filenames)
Phase 2: DB-aware sample selection (_pick_best_db) with BPM±5 and key matching
Phase 3: Auto-warp samples to project tempo via warp_clip_to_bpm
Phase 4: Connect pattern_library engines (BassPatterns, ChordProgressions, MelodyGenerator)
Phase 5: Harmonic coherence — detect key from drumloop and transpose MIDI
Phase 6: SentimientoLatino2025 + reggaeton3 integrated — 616 samples, 19 clean categories
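A minimal sketch of what Phase 2's DB-aware selection might look like. The `samples` table layout (`path`, `bpm`, `key`, `category` columns) and the function shape are assumptions for illustration, not the actual `_pick_best_db` implementation:

```python
import sqlite3

def pick_best_db(db_path, category, target_bpm, target_key, tolerance=5):
    """Pick a sample whose stored BPM is within +/- tolerance of the
    project tempo, preferring an exact key match when one exists."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT path, bpm, key FROM samples "
            "WHERE category = ? AND bpm BETWEEN ? AND ?",
            (category, target_bpm - tolerance, target_bpm + tolerance),
        ).fetchall()
    finally:
        con.close()
    if not rows:
        return None

    # Rank candidates: key match first, then smallest BPM distance.
    def score(row):
        path, bpm, key = row
        return (0 if key == target_key else 1, abs(bpm - target_bpm))

    return min(rows, key=score)
```

Once a candidate is picked, Phase 3's auto-warp then amounts to time-stretching the clip by roughly `sample_bpm / project_bpm` so it lands on the project tempo (in practice delegated to `warp_clip_to_bpm`).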

New files:
- engines/bpm_key_parser.py — robust BPM+key parser for filenames
- engines/populate_bpm_from_filenames.py — DB population script
- engines/recategorize_samples.py — category normalization (19 categories)

Modified:
- score_renderer.py — DB selection, auto-warp, engine patterns, key detection, 18 categories
- ai_loop.py — SYSTEM_PROMPT with full category list
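Phase 5's harmonic coherence reduces to a semitone shift between the key detected in the drumloop and the key the MIDI pattern was written in. A minimal sketch, with names that are illustrative rather than score_renderer's actual code:

```python
# Chromatic pitch classes, used to compute the semitone offset
# between two key roots.
NOTE_TO_PC = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3,
              "E": 4, "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8,
              "Ab": 8, "A": 9, "A#": 10, "Bb": 10, "B": 11}

def transpose_to_key(notes, from_key, to_key):
    """Shift MIDI note numbers so a clip written in from_key lines up
    with to_key, taking the shortest path (-6..+5 semitones)."""
    offset = (NOTE_TO_PC[to_key] - NOTE_TO_PC[from_key]) % 12
    if offset > 6:
        offset -= 12
    return [n + offset for n in notes]
```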
Administrator
2026-04-14 16:53:06 -03:00
parent 96ecf86812
commit 602676ac87
8 changed files with 2013 additions and 148 deletions

QWEN.md

@@ -1,7 +1,7 @@
# QWEN.md - AbletonMCP_AI v3.0 (Senior Architecture)
# QWEN.md - AbletonMCP_AI v3.2 (Score → Render)
> **Context**: MCP-based system for controlling Ableton Live 12 from AI agents.
> **Architecture**: Senior v3.0 (Arrangement-first workflow).
> **Architecture**: Compose-then-Render v3.2 (**STRICT SESSION VIEW**).
> **Team**: Qwen (verify/debug/architecture) + Kimi (fast coding).
## CRITICAL RULES (READ FIRST)
@@ -9,7 +9,7 @@
1. **NEVER touch `libreria/` or `librerias/`** - User's sample library. NEVER delete, move, or modify. These are read-only.
2. **NEVER delete project files** - Overwrite, don't delete then create.
3. **NEVER create debug .md files in project root** - All docs go in `AbletonMCP_AI/docs/`.
4. **NEVER use `rmdir /s /q` except for `__pycache__`** - Can accidentally delete the whole project.
4. **STRICT SESSION VIEW ONLY** - Arrangement View and its commands (`create_arrangement_*`) are DISCARDED for this sprint. All production goes to scenes and clip slots.
5. **NEVER modify Ableton's built-in scripts** - `_Framework`, `_APC`, `_Komplete_Kontrol`, etc. are not yours.
6. **ALWAYS compile after changes**: `python -m py_compile "<file_path>"`
7. **ALWAYS restart Ableton Live** after changes to `__init__.py` (no hot-reload for Remote Scripts).
@@ -23,32 +23,27 @@
```
AI Agent (OpenCode/Claude/Kimi)
↓ Natural language prompts
MCP Server (FastMCP, stdio transport)
SongScore Engine (Pure Python Data Model)
↓ JSON score representation
Score Renderer (Session View Translator)
↓ JSON commands via TCP socket
50+ Production Engines (drums, bass, melody, mixing, etc.)
↓ Real-time clip creation
LiveBridge (TCP → Ableton Live API)
Ableton Live 12 Suite → Arrangement View
Ableton Live 12 Suite → Session View Scenes & Clip Slots
```
### Key Architecture Components
| Component | File | Purpose |
|-----------|------|---------|
| **Remote Script** | `AbletonMCP_AI/__init__.py` | Ableton Control Surface (~9752 lines). Starts TCP server on port 9877. Handles all Live API calls. |
| **MCP Server** | `AbletonMCP_AI/mcp_server/server.py` | FastMCP server (~6745 lines). Defines 114+ MCP tools. Communicates with Ableton via TCP. |
| **BPM Analyzer** | `AbletonMCP_AI/mcp_server/engines/bpm_analyzer.py` | Librosa-based BPM detection for 800+ samples. |
| **Spectral Coherence** | `AbletonMCP_AI/mcp_server/engines/spectral_coherence.py` | MFCC embeddings for sample similarity. |
| **Session Orchestrator** | `AbletonMCP_AI/mcp_server/engines/session_orchestrator.py` | MIDI instrument validation and auto-loading. |
| **Launcher** | `mcp_wrapper.py` | Entry point for MCP stdio transport. Imports and runs the server. |
| **Integration** | `AbletonMCP_AI/mcp_server/integration.py` | Senior Architecture coordinator. Wires all components together. |
| **LiveBridge** | `AbletonMCP_AI/mcp_server/engines/live_bridge.py` | Direct Ableton Live API execution. Creates clips, writes automation, routes tracks. |
| **Arrangement Recorder** | `AbletonMCP_AI/mcp_server/engines/arrangement_recorder.py` | State machine for Session→Arrangement recording. 7 states, musical quantization. |
| **Metadata Store** | `AbletonMCP_AI/mcp_server/engines/metadata_store.py` | SQLite database of pre-analyzed sample features. No numpy required for queries. |
| **Sample Selector** | `AbletonMCP_AI/mcp_server/engines/sample_selector.py` | Smart sample selection with coherence scoring. |
| **Mixing Engine** | `AbletonMCP_AI/mcp_server/engines/mixing_engine.py` | Professional mixing chains (EQ, compression, bus routing). |
| **Song Generator** | `AbletonMCP_AI/mcp_server/engines/song_generator.py` | Track generation from prompts. |
| **Remote Script** | `AbletonMCP_AI/__init__.py` | Ableton Control Surface. TCP server on port 9877. Handles all Live API calls. |
| **Score Engine** | `mcp_server/score_engine.py` | [Sprint 9] JSON data model for songs. Decoupled from Ableton logic. |
| **Score Renderer** | `mcp_server/score_renderer.py` | [Sprint 9] Translates JSON Score to Session View Scenes/Clips. |
| **AI Loop** | `mcp_server/ai_loop.py` | [Sprint 9] Autonomous production loop (Anthropic-compatible). |
| **Metadata Store** | `mcp_server/engines/metadata_store.py` | SQLite database of pre-analyzed sample features. No numpy required for queries. |
| **Sample Selector** | `mcp_server/engines/sample_selector.py` | Smart sample selection with coherence scoring. |
| **Mixing Engine** | `mcp_server/engines/mixing_engine.py` | Professional mixing chains (EQ, compression). |
| **LiveBridge** | `mcp_server/engines/live_bridge.py` | Direct Ableton Live API execution engine. |
### Directory Structure
@@ -62,22 +57,12 @@ MIDI Remote Scripts/
│ ├── examples/ # Usage examples
│ ├── presets/ # Saved configurations (.json)
│ └── mcp_server/
│ ├── server.py # MCP FastMCP server
│ ├── integration.py # Senior Architecture coordinator
│ ├── test_arrangement.py # Verification tests
│ ├── engines/ # 65+ production engines
│ ├── sample_selector.py
│ ├── song_generator.py
│ ├── arrangement_recorder.py
│ ├── live_bridge.py
│ ├── mixing_engine.py
│ ├── metadata_store.py
│ ├── massive_selector.py
│ ├── coherence_system.py
│ ├── bpm_analyzer.py # Sprint 7: Librosa BPM detection
│ ├── spectral_coherence.py # Sprint 7: MFCC embeddings
│ └── session_orchestrator.py # Sprint 7: MIDI validation
│ └── ... (50+ more)
│ ├── server.py # MCP FastMCP server (130+ tools)
│ ├── score_engine.py # SongScore model
│ ├── score_renderer.py # Session View renderer
│ ├── ai_loop.py # AI production loop
│ ├── scores/ # [NEW] JSON songs folder
│ └── engines/ # Specialized production engines
├── libreria/ # User samples (READ-ONLY, git-ignored)
├── librerias/ # Organized samples (READ-ONLY, git-ignored)
├── mcp_wrapper.py # MCP server launcher
@@ -214,11 +199,14 @@ Primary production workflow:
- `validate_session` - Verify MIDI tracks have instruments
- `fix_session_midi_tracks` - Auto-load instruments by track name
### Advanced
- `create_riser` / `create_downlifter` / `create_impact` - FX generation
- `automate_filter` / `generate_curve_automation` - Parameter automation
- `humanize_track` - Velocity/timing variations
- `apply_professional_mix` - Complete mix chain
### Score → Render Pipeline (Sprint 9)
- `new_score` / `get_score` - Score lifecycle
- `compose_from_template` - Quick song generation
- `compose_audio_track` / `compose_midi_track` - Direct composition
- `compose_pattern` - MIDI pattern application
- `save_score` / `load_score` - JSON persistence
- `render_score` - Inject score into Session View (Scene-by-scene)
- `render_all_scores` - Batch autonomous production
See `AbletonMCP_AI/docs/API_REFERENCE_PRO.md` for complete documentation.
@@ -545,9 +533,8 @@ All sprints saved to `AbletonMCP_AI/docs/sprint_N_description.md`
## Current Sprint Assignment
**Sprint 8 (Active):** MIDI Instrument Loading + BPM Integration
**Owner:** Qwen + Kimi
**Goal:** MIDI tracks sound without manual intervention
**Deadline:** TBD (user decides priority)
**Sprint 9 (Active):** Score → Render Pipeline (Compose-then-Render)
**Goal:** 50+ songs generated and rendered autonomously via ai_loop.py
**Status:** ✅ Completed 2026-04-14 (Strict Session View Implementation)
**Next:** Sprint 9 (Max for Live or Arrangement Recording)
**Key Dev:** Refer to `docs/SYSTEM_SCORE_RENDER.md` for JSON schema and rendering logic.
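The Compose-then-Render pipeline in this commit carries songs as JSON scores that `render_score` injects into Session View scene by scene. As a rough illustration only (these field names are guesses; the authoritative schema lives in `docs/SYSTEM_SCORE_RENDER.md`), a score might look like:

```python
import json

# Hypothetical score layout; not the documented schema.
score = {
    "title": "demo",
    "bpm": 95,
    "key": "Am",
    "scenes": [
        {"name": "Intro", "clips": [
            {"track": "Drums", "type": "audio", "category": "drumloop"},
            {"track": "Bass", "type": "midi", "pattern": "bass_line"},
        ]},
    ],
}

# save_score / load_score persistence is then plain JSON round-tripping.
serialized = json.dumps(score, indent=2)
restored = json.loads(serialized)
```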