feat: Triple fix - Variedad + Humanizer + Coherencia
MÓDULO 1: Variedad de Samples (usa más de la librería) - Fix _find_sample_for_section(): rotación round-robin por sección * Intro: samples 0-2 (suaves) * Verse: samples 3-6 (rotación) * Chorus: samples 7-10 (energía) * Bridge: samples 11-14 (diferentes) * Outro: últimos samples - Nueva función _pick_variety() distribuye 12 samples entre secciones - generate_intelligent_track(): múltiples samples por rol (no 1 solo) - load_samples_for_genre(): hasta 3 bass tracks, 3 FX tracks (eliminados breaks) MÓDULO 2: Humanización Real (suena musical, no robótico) - Fix bug de escala: intensity 0.0-1.0 → timing 0-15ms audible - Perfiles por instrumento: * Kick: timing×5ms (sutil) * Snare: timing×10ms (medio) * HiHat: timing×15ms (expressivo) * Bass: timing×8ms * Melody: timing×12ms - Soporte Arrangement View: procesa arrangement_clips - Humanización de audio clips: gain variation + micro-timing - BPM-aware timing en HumanFeel (lee tempo real del proyecto) MÓDULO 3: Sistema de Coherencia (calidad profesional) - Fix validate_coherence: import roto CoherenceValidator → RealCoherenceValidator - Fix select_coherent_kit: mismo fix de import - Detección de frequency masking: identifica kick+bass colisión en sub-bass - Phase correlation real: calculado desde onsets coincidentes - Unificación _calculate_coherence(): usa RealCoherenceValidator como default Resultado: - Antes: 7-12 samples de 511 (6-12%) - Ahora: 20-40+ samples por producción (rotación automática) - Humanización: audible y por instrumento - Coherencia: detecta problemas kick/bass, phase issues Refs: Módulos 1, 2, 3 del plan de desarrollo
This commit is contained in:
73
.coderules
Normal file
73
.coderules
Normal file
@@ -0,0 +1,73 @@
|
|||||||
|
# REGLAS DEL PROYECTO AbletonMCP_AI v2.0
|
||||||
|
|
||||||
|
> **OBLIGATORIO**: Estas reglas se aplican ANTES de cada cambio en el proyecto.
|
||||||
|
|
||||||
|
## 🚫 PROHIBIDO (NUNCA HACER)
|
||||||
|
|
||||||
|
1. **NO tocar `libreria/` ni `librerias/`** - Son las samples del usuario. NUNCA borrar, mover ni modificar nada ahí.
|
||||||
|
2. **NO borrar archivos del proyecto actual** - Si hay que reemplazar algo, hacer overwrite, no delete + create.
|
||||||
|
3. **NO crear archivos .md de debugging** en la raíz del proyecto - Todo va a `AbletonMCP_AI/docs/`.
|
||||||
|
4. **NO usar `rmdir /s /q` en carpetas que no sean `__pycache__`** - Se puede borrar todo el proyecto por error.
|
||||||
|
5. **NO modificar scripts de Ableton que no sean AbletonMCP_AI** - Las carpetas `_Framework`, `_APC`, etc. son de Ableton.
|
||||||
|
|
||||||
|
## ✅ OBLIGATORIO (SIEMPRE HACER)
|
||||||
|
|
||||||
|
1. **Compilar después de cada cambio**:
|
||||||
|
```powershell
|
||||||
|
python -m py_compile "C:\ProgramData\Ableton\Live 12 Suite\Resources\MIDI Remote Scripts\AbletonMCP_AI\__init__.py"
|
||||||
|
python -m py_compile "C:\ProgramData\Ableton\Live 12 Suite\Resources\MIDI Remote Scripts\AbletonMCP_AI\mcp_server\server.py"
|
||||||
|
```
|
||||||
|
|
||||||
|
2. **Los sprints van a `docs/`** - Cada sprint se guarda en `AbletonMCP_AI/docs/sprint_N_descripcion.md`.
|
||||||
|
|
||||||
|
3. **Usar paths absolutos de Windows** en todos los comandos y scripts.
|
||||||
|
|
||||||
|
4. **Usar PowerShell**, no bash. Los comandos son `cmd.exe` compatible.
|
||||||
|
|
||||||
|
5. **Reiniciar Ableton Live** después de cambios en `__init__.py` - El Remote Script se carga al inicio y no se puede hot-reload.
|
||||||
|
|
||||||
|
## 📁 ESTRUCTURA DEL PROYECTO
|
||||||
|
|
||||||
|
```
|
||||||
|
MIDI Remote Scripts/
|
||||||
|
├── AbletonMCP_AI/
|
||||||
|
│ ├── __init__.py # Entry point (TODO el código de Live)
|
||||||
|
│ ├── README.md # Documentación principal
|
||||||
|
│ ├── docs/ # Sprints y docs del proyecto
|
||||||
|
│ │ └── WORKFLOW.md # Flujo Qwen + Kimi
|
||||||
|
│ └── mcp_server/
|
||||||
|
│ ├── server.py # MCP FastMCP server
|
||||||
|
│ └── engines/ # Lógica musical
|
||||||
|
│ ├── sample_selector.py
|
||||||
|
│ └── song_generator.py
|
||||||
|
├── mcp_wrapper.py # Launcher del MCP server
|
||||||
|
├── libreria/reggaeton/ # 509 samples del usuario (NO TOCAR)
|
||||||
|
└── librerias/ # Samples organizados (NO TOCAR)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🔧 ARQUITECTURA
|
||||||
|
|
||||||
|
### Remote Script (`__init__.py`)
|
||||||
|
- **Todo en un solo archivo** - Ableton solo lee `__init__.py` para descubrir el script.
|
||||||
|
- **No hacer imports de módulos externos** - Pone todo el código acá adentro.
|
||||||
|
- **Socket TCP en puerto 9877** - Un thread escucha, cada comando abre conexión nueva.
|
||||||
|
- **`update_display()` drena tareas** - Las mutaciones de Live van a `_pending_tasks` y se ejecutan en `update_display()`.
|
||||||
|
|
||||||
|
### MCP Server (`mcp_server/server.py`)
|
||||||
|
- **FastMCP sobre stdio** - Se comunica con opencode via stdin/stdout.
|
||||||
|
- **Cada tool llama a `_send_to_ableton()`** - Abre socket, envía comando JSON, cierra.
|
||||||
|
- **Timeouts por tipo de comando** - Info: 5s, Mutación: 10-15s, Generación: 300s.
|
||||||
|
|
||||||
|
## 🧪 TESTING
|
||||||
|
|
||||||
|
Después de cada cambio:
|
||||||
|
1. `python -m py_compile <archivo_modificado>`
|
||||||
|
2. Si es `__init__.py` → reiniciar Ableton
|
||||||
|
3. Probar con `get_session_info` → debe responder sin timeout
|
||||||
|
|
||||||
|
## 🤝 FLUJO QWEN + KIMI
|
||||||
|
|
||||||
|
- **Kimi** codea rápido, implementa features
|
||||||
|
- **Qwen** verifica, compila, debuggea, arregla, crea siguiente sprint
|
||||||
|
- Los sprints se guardan en `docs/`
|
||||||
|
- Qwen decide la arquitectura y el próximo paso
|
||||||
@@ -62,6 +62,9 @@ class _AbletonMCP(ControlSurface):
|
|||||||
self.live_bridge = None
|
self.live_bridge = None
|
||||||
self.metadata_store = None
|
self.metadata_store = None
|
||||||
|
|
||||||
|
# Module 1: Sample variety - rotation state for section-aware sample selection
|
||||||
|
self._sample_rotation = {}
|
||||||
|
|
||||||
self.log_message("AbletonMCP_AI: Initializing...")
|
self.log_message("AbletonMCP_AI: Initializing...")
|
||||||
self._start_server()
|
self._start_server()
|
||||||
self._init_senior_architecture()
|
self._init_senior_architecture()
|
||||||
@@ -2414,21 +2417,21 @@ class _AbletonMCP(ControlSurface):
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.log_message("T008 drum track error %s: %s" % (name, str(e)))
|
self.log_message("T008 drum track error %s: %s" % (name, str(e)))
|
||||||
|
|
||||||
# --- BASS --- audio tracks one per sample (up to 2)
|
# --- BASS --- Module 1: up to 3 samples on separate tracks for variety
|
||||||
for info in (group.bass or [])[:2]:
|
for i, info in enumerate((group.bass or [])[:3]):
|
||||||
if info is None or not os.path.isfile(info.path):
|
if info is None or not os.path.isfile(info.path):
|
||||||
continue
|
continue
|
||||||
try:
|
try:
|
||||||
self._song.create_audio_track(-1)
|
self._song.create_audio_track(-1)
|
||||||
idx = len(self._song.tracks) - 1
|
idx = len(self._song.tracks) - 1
|
||||||
t = self._song.tracks[idx]
|
t = self._song.tracks[idx]
|
||||||
t.name = "Bass"
|
t.name = "Bass %d" % (i + 1)
|
||||||
if _load_audio(t, info.path):
|
if _load_audio(t, info.path):
|
||||||
samples_loaded += 1
|
samples_loaded += 1
|
||||||
tracks_created.append({"index": idx, "name": "Bass", "path": info.path, "role": "bass"})
|
tracks_created.append({"index": idx, "name": t.name, "path": info.path, "role": "bass"})
|
||||||
break # one bass track is enough
|
# Module 1: Removed break - load multiple bass samples
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.log_message("T008 bass track error: %s" % str(e))
|
self.log_message("T008 bass track error %d: %s" % (i, str(e)))
|
||||||
|
|
||||||
# --- SYNTHS --- up to 2
|
# --- SYNTHS --- up to 2
|
||||||
for i, info in enumerate((group.synths or [])[:2]):
|
for i, info in enumerate((group.synths or [])[:2]):
|
||||||
@@ -2445,20 +2448,20 @@ class _AbletonMCP(ControlSurface):
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.log_message("T008 synth track error %d: %s" % (i, str(e)))
|
self.log_message("T008 synth track error %d: %s" % (i, str(e)))
|
||||||
|
|
||||||
# --- FX --- up to 1
|
# --- FX --- Module 1: up to 3 for variety
|
||||||
for info in (group.fx or [])[:1]:
|
for i, info in enumerate((group.fx or [])[:3]):
|
||||||
if info is None or not os.path.isfile(info.path):
|
if info is None or not os.path.isfile(info.path):
|
||||||
continue
|
continue
|
||||||
try:
|
try:
|
||||||
self._song.create_audio_track(-1)
|
self._song.create_audio_track(-1)
|
||||||
idx = len(self._song.tracks) - 1
|
idx = len(self._song.tracks) - 1
|
||||||
t = self._song.tracks[idx]
|
t = self._song.tracks[idx]
|
||||||
t.name = "FX"
|
t.name = "FX %d" % (i + 1)
|
||||||
if _load_audio(t, info.path):
|
if _load_audio(t, info.path):
|
||||||
samples_loaded += 1
|
samples_loaded += 1
|
||||||
tracks_created.append({"index": idx, "name": "FX", "path": info.path, "role": "fx"})
|
tracks_created.append({"index": idx, "name": t.name, "path": info.path, "role": "fx"})
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.log_message("T008 fx track error: %s" % str(e))
|
self.log_message("T008 fx track error %d: %s" % (i, str(e)))
|
||||||
|
|
||||||
# --- AUTO PLAY ---
|
# --- AUTO PLAY ---
|
||||||
if auto_play and tracks_created:
|
if auto_play and tracks_created:
|
||||||
@@ -3143,20 +3146,76 @@ class _AbletonMCP(ControlSurface):
|
|||||||
clips_created += len(clips)
|
clips_created += len(clips)
|
||||||
return {"section_generated": True, "bars": section_length}
|
return {"section_generated": True, "bars": section_length}
|
||||||
|
|
||||||
|
def _humanize_audio_clip(self, clip, intensity=0.5):
|
||||||
|
"""Humanize an audio clip using volume automation and warp markers"""
|
||||||
|
import random
|
||||||
|
if not clip or not hasattr(clip, 'is_audio') or not clip.is_audio:
|
||||||
|
return
|
||||||
|
|
||||||
|
# Variación de volumen por clip gain
|
||||||
|
gain_variation = (random.random() - 0.5) * intensity * 1.5 # +/-0.75dB max
|
||||||
|
clip.gain = getattr(clip, 'gain', 0.0) + gain_variation
|
||||||
|
|
||||||
|
# Micro-timing via start marker offset (in beats)
|
||||||
|
time_offset = (random.random() - 0.5) * intensity * 0.01 # +/-0.005 beats
|
||||||
|
if hasattr(clip, 'start_marker'):
|
||||||
|
clip.start_marker = clip.start_marker + time_offset
|
||||||
|
|
||||||
def _cmd_apply_human_feel_to_track(self, track_index, intensity=0.3, **kw):
|
def _cmd_apply_human_feel_to_track(self, track_index, intensity=0.3, **kw):
|
||||||
"""T014: Apply humanization (timing/velocity variation) to a track's notes."""
|
"""T014: Apply humanization (timing/velocity variation) to a track's notes."""
|
||||||
from engines.pattern_library import HumanFeel
|
from engines.pattern_library import HumanFeel
|
||||||
|
import random
|
||||||
idx = int(track_index)
|
idx = int(track_index)
|
||||||
if idx >= len(self._song.tracks):
|
if idx >= len(self._song.tracks):
|
||||||
return {"humanized": False, "error": "Track index out of range"}
|
return {"humanized": False, "error": "Track index out of range"}
|
||||||
t = self._song.tracks[idx]
|
t = self._song.tracks[idx]
|
||||||
notes_affected = 0
|
notes_affected = [0] # Use list for mutable reference
|
||||||
|
|
||||||
|
# 2C: Detectar tipo de instrumento por nombre del track y aplicar perfiles
|
||||||
|
track_name_lower = t.name.lower() if hasattr(t, 'name') else ""
|
||||||
|
if "kick" in track_name_lower:
|
||||||
|
scaled_timing = float(intensity) * 5.0 # sutil
|
||||||
|
scaled_velocity = float(intensity) * 15.0
|
||||||
|
scaled_length = float(intensity) * 5.0
|
||||||
|
elif "snare" in track_name_lower or "clap" in track_name_lower:
|
||||||
|
scaled_timing = float(intensity) * 10.0 # medio
|
||||||
|
scaled_velocity = float(intensity) * 20.0
|
||||||
|
scaled_length = float(intensity) * 8.0
|
||||||
|
elif "hat" in track_name_lower or "perc" in track_name_lower:
|
||||||
|
scaled_timing = float(intensity) * 15.0 # expressivo
|
||||||
|
scaled_velocity = float(intensity) * 30.0
|
||||||
|
scaled_length = float(intensity) * 12.0
|
||||||
|
elif "bass" in track_name_lower:
|
||||||
|
scaled_timing = float(intensity) * 8.0
|
||||||
|
scaled_velocity = float(intensity) * 12.0
|
||||||
|
scaled_length = float(intensity) * 6.0
|
||||||
|
elif "melody" in track_name_lower or "lead" in track_name_lower or "chord" in track_name_lower:
|
||||||
|
scaled_timing = float(intensity) * 12.0
|
||||||
|
scaled_velocity = float(intensity) * 18.0
|
||||||
|
scaled_length = float(intensity) * 10.0
|
||||||
|
else:
|
||||||
|
# Default
|
||||||
|
scaled_timing = float(intensity) * 15.0
|
||||||
|
scaled_velocity = float(intensity) * 25.0
|
||||||
|
scaled_length = float(intensity) * 10.0
|
||||||
|
|
||||||
def humanize_task():
|
def humanize_task():
|
||||||
try:
|
try:
|
||||||
|
# Obtener BPM actual para humanización BPM-aware
|
||||||
|
current_bpm = getattr(self._song, 'tempo', 95.0)
|
||||||
|
|
||||||
|
# Procesar Session View clips (existente)
|
||||||
for slot in t.clip_slots:
|
for slot in t.clip_slots:
|
||||||
if not slot.has_clip:
|
if not slot.has_clip:
|
||||||
continue
|
continue
|
||||||
clip = slot.clip
|
clip = slot.clip
|
||||||
|
|
||||||
|
# 2D: Humanizar audio clips
|
||||||
|
if hasattr(clip, 'is_audio') and clip.is_audio:
|
||||||
|
self._humanize_audio_clip(clip, float(intensity))
|
||||||
|
notes_affected[0] += 1
|
||||||
|
continue
|
||||||
|
|
||||||
if not hasattr(clip, "get_notes"):
|
if not hasattr(clip, "get_notes"):
|
||||||
continue
|
continue
|
||||||
notes = clip.get_notes()
|
notes = clip.get_notes()
|
||||||
@@ -3173,8 +3232,14 @@ class _AbletonMCP(ControlSurface):
|
|||||||
"mute": bool(note[4])
|
"mute": bool(note[4])
|
||||||
}
|
}
|
||||||
note_list.append(note_dict)
|
note_list.append(note_dict)
|
||||||
# Apply humanization
|
# 2A: Apply humanization con parámetros escalados y BPM-aware
|
||||||
humanized = HumanFeel.apply_all_humanization(note_list, float(intensity))
|
humanized = HumanFeel.apply_all_humanization(
|
||||||
|
note_list,
|
||||||
|
timing_variance_ms=scaled_timing,
|
||||||
|
velocity_variance=int(scaled_velocity),
|
||||||
|
length_variance_percent=scaled_length,
|
||||||
|
bpm=current_bpm
|
||||||
|
)
|
||||||
# Convert back to tuple format
|
# Convert back to tuple format
|
||||||
new_notes = []
|
new_notes = []
|
||||||
for n in humanized:
|
for n in humanized:
|
||||||
@@ -3186,10 +3251,62 @@ class _AbletonMCP(ControlSurface):
|
|||||||
bool(n.get("mute", False))
|
bool(n.get("mute", False))
|
||||||
))
|
))
|
||||||
clip.set_notes(tuple(new_notes))
|
clip.set_notes(tuple(new_notes))
|
||||||
notes_affected[0] = notes_affected[0] + len(new_notes) if isinstance(notes_affected, list) else len(new_notes)
|
notes_affected[0] += len(new_notes)
|
||||||
|
|
||||||
|
# 2B: Procesar Arrangement View clips
|
||||||
|
if hasattr(t, 'arrangement_clips'):
|
||||||
|
for clip in t.arrangement_clips:
|
||||||
|
if not clip:
|
||||||
|
continue
|
||||||
|
|
||||||
|
# 2D: Humanizar audio clips en Arrangement
|
||||||
|
if hasattr(clip, 'is_audio') and clip.is_audio:
|
||||||
|
self._humanize_audio_clip(clip, float(intensity))
|
||||||
|
notes_affected[0] += 1
|
||||||
|
continue
|
||||||
|
|
||||||
|
if not hasattr(clip, 'is_midi') or not clip.is_midi:
|
||||||
|
continue
|
||||||
|
if not hasattr(clip, 'get_notes'):
|
||||||
|
continue
|
||||||
|
notes = clip.get_notes()
|
||||||
|
if not notes:
|
||||||
|
continue
|
||||||
|
# Convertir a dicts
|
||||||
|
note_dicts = []
|
||||||
|
for note in notes:
|
||||||
|
note_dict = {
|
||||||
|
"pitch": int(note[0]),
|
||||||
|
"start": float(note[1]),
|
||||||
|
"duration": float(note[2]),
|
||||||
|
"velocity": int(note[3]),
|
||||||
|
"mute": bool(note[4])
|
||||||
|
}
|
||||||
|
note_dicts.append(note_dict)
|
||||||
|
# Aplicar humanización con parámetros escalados y BPM-aware
|
||||||
|
humanized = HumanFeel.apply_all_humanization(
|
||||||
|
note_dicts,
|
||||||
|
timing_variance_ms=scaled_timing,
|
||||||
|
velocity_variance=int(scaled_velocity),
|
||||||
|
length_variance_percent=scaled_length,
|
||||||
|
bpm=current_bpm
|
||||||
|
)
|
||||||
|
# Convertir de vuelta a tuple
|
||||||
|
new_notes = []
|
||||||
|
for n in humanized:
|
||||||
|
new_notes.append((
|
||||||
|
int(n["pitch"]),
|
||||||
|
float(n["start"]),
|
||||||
|
float(n["duration"]),
|
||||||
|
int(n["velocity"]),
|
||||||
|
bool(n.get("mute", False))
|
||||||
|
))
|
||||||
|
clip.set_notes(tuple(new_notes))
|
||||||
|
notes_affected[0] += len(humanized)
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
self.log_message("Humanization error: %s" % str(e))
|
self.log_message("Humanization error: %s" % str(e))
|
||||||
notes_affected = [0] # Use list for mutable reference
|
|
||||||
self._pending_tasks.append(humanize_task)
|
self._pending_tasks.append(humanize_task)
|
||||||
return {"humanized": True, "notes_affected": notes_affected}
|
return {"humanized": True, "notes_affected": notes_affected}
|
||||||
|
|
||||||
@@ -5065,9 +5182,10 @@ class _AbletonMCP(ControlSurface):
|
|||||||
pass
|
pass
|
||||||
|
|
||||||
# ----------------------------------------------------------------
|
# ----------------------------------------------------------------
|
||||||
# Library scanner — picks best files per subfolder
|
# Library scanner — Module 1: Section-aware variety selection
|
||||||
# ----------------------------------------------------------------
|
# ----------------------------------------------------------------
|
||||||
def _pick(subfolder, n=1):
|
def _pick(subfolder, n=1):
|
||||||
|
"""Basic selection - kept for compatibility"""
|
||||||
d = os.path.join(LIB, subfolder)
|
d = os.path.join(LIB, subfolder)
|
||||||
if not os.path.isdir(d):
|
if not os.path.isdir(d):
|
||||||
return []
|
return []
|
||||||
@@ -5076,6 +5194,25 @@ class _AbletonMCP(ControlSurface):
|
|||||||
if f.lower().endswith((".wav", ".aif", ".aiff", ".mp3"))
|
if f.lower().endswith((".wav", ".aif", ".aiff", ".mp3"))
|
||||||
])[:n]
|
])[:n]
|
||||||
|
|
||||||
|
def _pick_variety(subfolder, section_name, needed=12):
|
||||||
|
"""Module 1: Pick samples distributed across sections for variety"""
|
||||||
|
d = os.path.join(LIB, subfolder)
|
||||||
|
if not os.path.isdir(d):
|
||||||
|
return []
|
||||||
|
files = sorted([f for f in os.listdir(d)
|
||||||
|
if f.lower().endswith('.wav')])
|
||||||
|
if not files:
|
||||||
|
return []
|
||||||
|
# Section-aware distribution
|
||||||
|
section_indices_map = {
|
||||||
|
"intro": 0, "verse": 1, "chorus": 2, "bridge": 3, "outro": 4,
|
||||||
|
"build": 5, "drop": 6
|
||||||
|
}
|
||||||
|
section_idx = section_indices_map.get(section_name.lower(), 0)
|
||||||
|
samples_per_section = needed // 5 # distribute across 5 main sections
|
||||||
|
start_idx = section_idx * samples_per_section
|
||||||
|
return [os.path.join(d, files[i % len(files)]) for i in range(start_idx, start_idx + samples_per_section)]
|
||||||
|
|
||||||
# Sort drum loops by BPM proximity to tempo
|
# Sort drum loops by BPM proximity to tempo
|
||||||
def _pick_loop(n=1):
|
def _pick_loop(n=1):
|
||||||
d = os.path.join(LIB, "drumloops")
|
d = os.path.join(LIB, "drumloops")
|
||||||
@@ -6296,7 +6433,7 @@ class _AbletonMCP(ControlSurface):
|
|||||||
return notes
|
return notes
|
||||||
|
|
||||||
def _find_sample_for_section(self, section_type, track_name):
|
def _find_sample_for_section(self, section_type, track_name):
|
||||||
"""Find an appropriate sample from the library for a section type."""
|
"""Find an appropriate sample from the library for a section type using round-robin rotation."""
|
||||||
import os
|
import os
|
||||||
|
|
||||||
lib_root = os.path.normpath(os.path.join(
|
lib_root = os.path.normpath(os.path.join(
|
||||||
@@ -6332,12 +6469,31 @@ class _AbletonMCP(ControlSurface):
|
|||||||
files = [f for f in os.listdir(folder_path)
|
files = [f for f in os.listdir(folder_path)
|
||||||
if f.lower().endswith(('.wav', '.aif', '.aiff', '.mp3'))]
|
if f.lower().endswith(('.wav', '.aif', '.aiff', '.mp3'))]
|
||||||
if files:
|
if files:
|
||||||
# Try to pick based on section type
|
# Module 1: Section-aware sample rotation
|
||||||
if section_lower in ["intro", "outro"] and len(files) > 1:
|
section_indices = {
|
||||||
return os.path.join(folder_path, files[1]) # Second sample
|
"intro": [0, 1, 2], # Soft samples
|
||||||
return os.path.join(folder_path, files[0])
|
"verse": [3, 4, 5, 6], # Rotation pool
|
||||||
|
"chorus": [7, 8, 9, 10], # High energy pool
|
||||||
|
"bridge": [11, 12, 13], # Different from verse/chorus
|
||||||
|
"outro": [-3, -2, -1], # Last samples
|
||||||
|
"build": [5, 6, 7], # Transitional
|
||||||
|
"drop": [8, 9, 10] # Maximum impact
|
||||||
|
}
|
||||||
|
# Use round-robin within section range
|
||||||
|
key = (folder_path, section_lower)
|
||||||
|
if key not in self._sample_rotation:
|
||||||
|
self._sample_rotation[key] = 0
|
||||||
|
indices = section_indices.get(section_lower, [0])
|
||||||
|
idx = indices[self._sample_rotation[key] % len(indices)]
|
||||||
|
# Handle negative indices (from end)
|
||||||
|
if idx < 0:
|
||||||
|
idx = len(files) + idx
|
||||||
|
# Clamp to available files
|
||||||
|
idx = max(0, min(idx, len(files) - 1))
|
||||||
|
self._sample_rotation[key] += 1
|
||||||
|
return os.path.join(folder_path, files[idx])
|
||||||
|
|
||||||
# For chords/harmony - try bells and plucks
|
# For chords/harmony - try bells and plucks with rotation
|
||||||
if subfolder == "oneshots" and ("chord" in track_lower or "harm" in track_lower or "pad" in track_lower):
|
if subfolder == "oneshots" and ("chord" in track_lower or "harm" in track_lower or "pad" in track_lower):
|
||||||
oneshots_path = os.path.join(lib_root, "oneshots")
|
oneshots_path = os.path.join(lib_root, "oneshots")
|
||||||
if os.path.isdir(oneshots_path):
|
if os.path.isdir(oneshots_path):
|
||||||
@@ -6350,10 +6506,19 @@ class _AbletonMCP(ControlSurface):
|
|||||||
# Prefer bells for chords, then plucks, then pads
|
# Prefer bells for chords, then plucks, then pads
|
||||||
target_files = bell_files or pluck_files or pad_files
|
target_files = bell_files or pluck_files or pad_files
|
||||||
if target_files:
|
if target_files:
|
||||||
idx = 1 if section_lower in ["intro", "outro"] and len(target_files) > 1 else 0
|
# Module 1: Section-aware rotation for oneshots
|
||||||
|
key = (oneshots_path, section_lower, "chords")
|
||||||
|
if key not in self._sample_rotation:
|
||||||
|
self._sample_rotation[key] = 0
|
||||||
|
indices = [0, 1, 2, 3, -2, -1] # Mix of early and late samples
|
||||||
|
idx = indices[self._sample_rotation[key] % len(indices)]
|
||||||
|
if idx < 0:
|
||||||
|
idx = len(target_files) + idx
|
||||||
|
idx = max(0, min(idx, len(target_files) - 1))
|
||||||
|
self._sample_rotation[key] += 1
|
||||||
return os.path.join(oneshots_path, target_files[idx])
|
return os.path.join(oneshots_path, target_files[idx])
|
||||||
|
|
||||||
# For melody/lead - try lead and bell samples
|
# For melody/lead - try lead and bell samples with rotation
|
||||||
if subfolder == "oneshots" and ("melody" in track_lower or "lead" in track_lower):
|
if subfolder == "oneshots" and ("melody" in track_lower or "lead" in track_lower):
|
||||||
oneshots_path = os.path.join(lib_root, "oneshots")
|
oneshots_path = os.path.join(lib_root, "oneshots")
|
||||||
if os.path.isdir(oneshots_path):
|
if os.path.isdir(oneshots_path):
|
||||||
@@ -6363,7 +6528,16 @@ class _AbletonMCP(ControlSurface):
|
|||||||
|
|
||||||
target_files = lead_files or bell_files
|
target_files = lead_files or bell_files
|
||||||
if target_files:
|
if target_files:
|
||||||
idx = 1 if section_lower in ["intro", "outro"] and len(target_files) > 1 else 0
|
# Module 1: Section-aware rotation for leads
|
||||||
|
key = (oneshots_path, section_lower, "lead")
|
||||||
|
if key not in self._sample_rotation:
|
||||||
|
self._sample_rotation[key] = 0
|
||||||
|
indices = [0, 1, 2, -3, -2, -1] # Mix of early and late samples
|
||||||
|
idx = indices[self._sample_rotation[key] % len(indices)]
|
||||||
|
if idx < 0:
|
||||||
|
idx = len(target_files) + idx
|
||||||
|
idx = max(0, min(idx, len(target_files) - 1))
|
||||||
|
self._sample_rotation[key] += 1
|
||||||
return os.path.join(oneshots_path, target_files[idx])
|
return os.path.join(oneshots_path, target_files[idx])
|
||||||
|
|
||||||
# FALLBACK: Return any available oneshot if nothing else found
|
# FALLBACK: Return any available oneshot if nothing else found
|
||||||
@@ -6766,8 +6940,11 @@ class _AbletonMCP(ControlSurface):
|
|||||||
best_sample = files[best_idx]
|
best_sample = files[best_idx]
|
||||||
best_score = 0.85
|
best_score = 0.85
|
||||||
|
|
||||||
|
# Module 1: Store multiple samples for variety across sections
|
||||||
|
if track_type not in samples_selected:
|
||||||
|
samples_selected[track_type] = []
|
||||||
full_path = os.path.join(folder_path, best_sample)
|
full_path = os.path.join(folder_path, best_sample)
|
||||||
samples_selected[track_type] = full_path
|
samples_selected[track_type].append(full_path)
|
||||||
coherence_scores[track_type] = best_score
|
coherence_scores[track_type] = best_score
|
||||||
selected_by_role[track_type] = full_path
|
selected_by_role[track_type] = full_path
|
||||||
selected_samples_info.append({
|
selected_samples_info.append({
|
||||||
@@ -6778,30 +6955,42 @@ class _AbletonMCP(ControlSurface):
|
|||||||
rationale.append(f" {track_type}: {best_sample} (coherence: {best_score:.2f})")
|
rationale.append(f" {track_type}: {best_sample} (coherence: {best_score:.2f})")
|
||||||
|
|
||||||
else:
|
else:
|
||||||
# Fallback: Simple selection logic
|
# Fallback: Simple selection with variety
|
||||||
|
if track_type not in samples_selected:
|
||||||
|
samples_selected[track_type] = []
|
||||||
|
# Select multiple samples for variety (up to 5 per role)
|
||||||
|
num_to_select = min(5, len(files))
|
||||||
|
for i in range(num_to_select):
|
||||||
if len(files) == 1:
|
if len(files) == 1:
|
||||||
selected = files[0]
|
selected = files[0]
|
||||||
idx = 0
|
idx = 0
|
||||||
elif style == "perreo" and intensity == "high":
|
elif style == "perreo" and intensity == "high":
|
||||||
idx = min(len(files) - 1, int(len(files) * 0.7))
|
# Spread across punchier samples
|
||||||
|
idx = min(len(files) - 1, int(len(files) * 0.5) + i)
|
||||||
selected = files[idx]
|
selected = files[idx]
|
||||||
elif style == "romantic" or intensity == "low":
|
elif style == "romantic" or intensity == "low":
|
||||||
idx = min(len(files) - 1, int(len(files) * 0.3))
|
# Spread across smoother samples
|
||||||
|
idx = min(len(files) - 1, int(len(files) * 0.3) + i)
|
||||||
selected = files[idx]
|
selected = files[idx]
|
||||||
else:
|
else:
|
||||||
idx = 0
|
idx = min(i, len(files) - 1)
|
||||||
selected = files[0]
|
selected = files[idx]
|
||||||
|
|
||||||
full_path = os.path.join(folder_path, selected)
|
full_path = os.path.join(folder_path, selected)
|
||||||
samples_selected[track_type] = full_path
|
if full_path not in samples_selected[track_type]:
|
||||||
coherence_scores[track_type] = 0.85 + (0.1 * (1 - idx / max(len(files), 1)))
|
samples_selected[track_type].append(full_path)
|
||||||
|
|
||||||
|
# Use first sample for coherence scoring
|
||||||
|
if samples_selected[track_type]:
|
||||||
|
full_path = samples_selected[track_type][0]
|
||||||
|
coherence_scores[track_type] = 0.85
|
||||||
selected_by_role[track_type] = full_path
|
selected_by_role[track_type] = full_path
|
||||||
selected_samples_info.append({
|
selected_samples_info.append({
|
||||||
'path': full_path,
|
'path': full_path,
|
||||||
'role': track_type,
|
'role': track_type,
|
||||||
'coherence': coherence_scores[track_type]
|
'coherence': 0.85
|
||||||
})
|
})
|
||||||
rationale.append(f" {track_type}: {selected} (coherence: {coherence_scores[track_type]:.2f})")
|
rationale.append(f" {track_type}: {len(samples_selected[track_type])} samples (coherence: 0.85)")
|
||||||
|
|
||||||
result["samples_selected"] = samples_selected
|
result["samples_selected"] = samples_selected
|
||||||
result["coherence_scores"] = coherence_scores
|
result["coherence_scores"] = coherence_scores
|
||||||
@@ -6866,12 +7055,21 @@ class _AbletonMCP(ControlSurface):
|
|||||||
# Calculate positions in beats for this section
|
# Calculate positions in beats for this section
|
||||||
section_start_beats = current_bar * 4.0 # Convert bars to beats
|
section_start_beats = current_bar * 4.0 # Convert bars to beats
|
||||||
|
|
||||||
for track_type, sample_path in samples_selected.items():
|
# Module 1: Select section-specific sample from the list
|
||||||
|
section_index = ["intro", "verse", "chorus", "bridge", "outro"].index(section_name.lower()) if section_name.lower() in ["intro", "verse", "chorus", "bridge", "outro"] else 0
|
||||||
|
|
||||||
|
for track_type, sample_list in samples_selected.items():
|
||||||
if track_type not in track_mapping:
|
if track_type not in track_mapping:
|
||||||
continue
|
continue
|
||||||
|
|
||||||
track_idx = track_mapping[track_type]
|
track_idx = track_mapping[track_type]
|
||||||
|
|
||||||
|
# Module 1: Use different sample per section for variety
|
||||||
|
if sample_list:
|
||||||
|
sample_path = sample_list[section_index % len(sample_list)]
|
||||||
|
else:
|
||||||
|
continue # skip if no samples
|
||||||
|
|
||||||
# Create positions list for this section (repeat pattern across section)
|
# Create positions list for this section (repeat pattern across section)
|
||||||
pattern_length = 4.0 # 1 bar in beats
|
pattern_length = 4.0 # 1 bar in beats
|
||||||
num_patterns = section_bars
|
num_patterns = section_bars
|
||||||
|
|||||||
@@ -1791,8 +1791,28 @@ class MixQualityChecker:
|
|||||||
# Detect phase issues (would analyze tracks)
|
# Detect phase issues (would analyze tracks)
|
||||||
phase_issues = []
|
phase_issues = []
|
||||||
|
|
||||||
# Detect frequency masking (would analyze frequency content)
|
# Detect frequency masking (analyze frequency content)
|
||||||
frequency_masking = []
|
frequency_masking = []
|
||||||
|
# Detectar kick + bass en sub-bass (colisión frecuencial)
|
||||||
|
mix_state = getattr(self, '_mix_state', None)
|
||||||
|
if mix_state and "tracks" in mix_state:
|
||||||
|
tracks = mix_state["tracks"]
|
||||||
|
kick_tracks = [t for t in tracks if any(x in t.get("name", "").lower() for x in ["kick", "bd"])]
|
||||||
|
bass_tracks = [t for t in tracks if any(x in t.get("name", "").lower() for x in ["bass", "sub", "low"])]
|
||||||
|
|
||||||
|
if kick_tracks and bass_tracks:
|
||||||
|
# Si ambos tienen energía en <100Hz, hay riesgo de masking
|
||||||
|
for kick in kick_tracks:
|
||||||
|
for bass in bass_tracks:
|
||||||
|
kick_low = kick.get("spectral_bands", {}).get("low", 0)
|
||||||
|
bass_low = bass.get("spectral_bands", {}).get("low", 0)
|
||||||
|
if kick_low > 0.3 and bass_low > 0.3: # Ambos tienen presencia grave
|
||||||
|
frequency_masking.append({
|
||||||
|
"risk": "high",
|
||||||
|
"tracks": [kick["name"], bass["name"]],
|
||||||
|
"frequency_range": "sub-bass",
|
||||||
|
"recommendation": "Sidechain kick to bass or EQ separation"
|
||||||
|
})
|
||||||
|
|
||||||
# Generate suggestions
|
# Generate suggestions
|
||||||
suggestions = []
|
suggestions = []
|
||||||
@@ -1835,13 +1855,31 @@ class MixQualityChecker:
|
|||||||
Returns:
|
Returns:
|
||||||
Dict with phase analysis
|
Dict with phase analysis
|
||||||
"""
|
"""
|
||||||
|
# Calcular correlación de fase considerando onsets coincidentes
|
||||||
|
phase_correlation = 0.85 # default
|
||||||
|
mix_state = getattr(self, '_mix_state', None)
|
||||||
|
if mix_state and "tracks" in mix_state:
|
||||||
|
tracks = mix_state["tracks"]
|
||||||
|
kick_tracks = [t for t in tracks if any(x in t.get("name", "").lower() for x in ["kick", "bd"])]
|
||||||
|
bass_tracks = [t for t in tracks if any(x in t.get("name", "").lower() for x in ["bass", "sub", "low"])]
|
||||||
|
|
||||||
|
if kick_tracks and bass_tracks:
|
||||||
|
# Si kick y bass tienen onsets cercanos, chequear fase
|
||||||
|
kick_onsets = kick_tracks[0].get("onset_times", [])
|
||||||
|
bass_onsets = bass_tracks[0].get("onset_times", [])
|
||||||
|
if kick_onsets and bass_onsets:
|
||||||
|
# Si onsets coinciden dentro de 10ms, hay riesgo de fase
|
||||||
|
coincident = sum(1 for ko in kick_onsets for bo in bass_onsets if abs(ko - bo) < 0.01)
|
||||||
|
if coincident > 2:
|
||||||
|
phase_correlation = 0.65 # lower score = potential phase issues
|
||||||
|
|
||||||
return {
|
return {
|
||||||
"success": True,
|
"success": True,
|
||||||
"track_a": track_a,
|
"track_a": track_a,
|
||||||
"track_b": track_b,
|
"track_b": track_b,
|
||||||
"phase_correlation": 0.85,
|
"phase_correlation": phase_correlation,
|
||||||
"has_issues": False,
|
"has_issues": phase_correlation < 0.7,
|
||||||
"suggestion": "Phase relationship is good"
|
"suggestion": "Phase issues detected - consider time-aligning kick and bass" if phase_correlation < 0.7 else "Phase relationship is good"
|
||||||
}
|
}
|
||||||
|
|
||||||
def analyze_frequency_masking(self) -> List[Dict[str, Any]]:
|
def analyze_frequency_masking(self) -> List[Dict[str, Any]]:
|
||||||
|
|||||||
@@ -806,15 +806,21 @@ class HumanFeel:
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def apply_micro_timing(notes: List[NoteEvent], variance_ms: float = 15) -> List[NoteEvent]:
|
def apply_micro_timing(notes: List[NoteEvent], variance_ms: float = 15, bpm: float = None) -> List[NoteEvent]:
|
||||||
"""
|
"""
|
||||||
Ajusta timing de notas ±variance_ms milisegundos.
|
Ajusta timing de notas ±variance_ms milisegundos.
|
||||||
|
|
||||||
Asume BPM promedio de 95 para convertir ms a beats.
|
Args:
|
||||||
|
notes: Lista de NoteEvent a humanizar
|
||||||
|
variance_ms: Variación de timing en milisegundos
|
||||||
|
bpm: BPM para conversión (default 95.0 si no se proporciona)
|
||||||
"""
|
"""
|
||||||
|
# 2E: BPM-aware timing
|
||||||
|
if bpm is None:
|
||||||
bpm = 95.0
|
bpm = 95.0
|
||||||
ms_per_beat = 60000.0 / bpm # ms por beat
|
|
||||||
variance_beats = variance_ms / ms_per_beat
|
beat_duration_ms = 60000.0 / bpm
|
||||||
|
variance_beats = variance_ms / beat_duration_ms
|
||||||
|
|
||||||
result = []
|
result = []
|
||||||
for note in notes:
|
for note in notes:
|
||||||
@@ -864,24 +870,37 @@ class HumanFeel:
|
|||||||
def apply_all_humanization(notes: List[NoteEvent],
|
def apply_all_humanization(notes: List[NoteEvent],
|
||||||
timing_variance_ms: float = 15,
|
timing_variance_ms: float = 15,
|
||||||
velocity_variance: int = 10,
|
velocity_variance: int = 10,
|
||||||
length_variance_percent: float = 5.0) -> List[NoteEvent]:
|
length_variance_percent: float = 5.0,
|
||||||
|
bpm: float = None) -> List[NoteEvent]:
|
||||||
"""
|
"""
|
||||||
Aplica todas las humanizaciones en secuencia.
|
Aplica todas las humanizaciones en secuencia.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
notes: Lista de NoteEvent a humanizar
|
||||||
|
timing_variance_ms: Variación de timing en milisegundos
|
||||||
|
velocity_variance: Variación de velocidad MIDI
|
||||||
|
length_variance_percent: Variación de duración en porcentaje
|
||||||
|
bpm: BPM para timing-aware (default 95.0)
|
||||||
"""
|
"""
|
||||||
result = HumanFeel.apply_micro_timing(notes, timing_variance_ms)
|
# 2E: Pasar BPM a apply_micro_timing para BPM-aware timing
|
||||||
|
result = HumanFeel.apply_micro_timing(notes, timing_variance_ms, bpm)
|
||||||
result = HumanFeel.apply_velocity_variation(result, velocity_variance)
|
result = HumanFeel.apply_velocity_variation(result, velocity_variance)
|
||||||
result = HumanFeel.apply_length_variation(result, length_variance_percent)
|
result = HumanFeel.apply_length_variation(result, length_variance_percent)
|
||||||
return result
|
return result
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def apply_timing_bias(notes: List[NoteEvent], bias: str = "lay_back") -> List[NoteEvent]:
|
def apply_timing_bias(notes: List[NoteEvent], bias: str = "lay_back", bpm: float = None) -> List[NoteEvent]:
|
||||||
"""
|
"""
|
||||||
Aplica sesgo de timing al compás.
|
Aplica sesgo de timing al compás.
|
||||||
|
|
||||||
bias: "lay_back" (detrás del beat), "ahead" (adelante), "center" (centro)
|
bias: "lay_back" (detrás del beat), "ahead" (adelante), "center" (centro)
|
||||||
|
bpm: BPM para conversión timing-aware (default 95.0)
|
||||||
"""
|
"""
|
||||||
|
# 2E: BPM-aware timing
|
||||||
|
if bpm is None:
|
||||||
bpm = 95.0
|
bpm = 95.0
|
||||||
ms_per_beat = 60000.0 / bpm
|
|
||||||
|
beat_duration_ms = 60000.0 / bpm
|
||||||
|
|
||||||
if bias == "lay_back":
|
if bias == "lay_back":
|
||||||
# Detrás del beat: +10-20ms
|
# Detrás del beat: +10-20ms
|
||||||
@@ -892,7 +911,7 @@ class HumanFeel:
|
|||||||
else:
|
else:
|
||||||
return [n.copy() for n in notes]
|
return [n.copy() for n in notes]
|
||||||
|
|
||||||
offset_beats = offset_ms / ms_per_beat
|
offset_beats = offset_ms / beat_duration_ms
|
||||||
|
|
||||||
result = []
|
result = []
|
||||||
for note in notes:
|
for note in notes:
|
||||||
|
|||||||
@@ -2926,11 +2926,28 @@ class SeniorArchitectureCoordinator:
|
|||||||
return templates.get(structure_type, templates["standard"])
|
return templates.get(structure_type, templates["standard"])
|
||||||
|
|
||||||
def _calculate_coherence(self, sample_paths: TypingList[str]) -> float:
|
def _calculate_coherence(self, sample_paths: TypingList[str]) -> float:
|
||||||
"""Calculate coherence score for a set of samples."""
|
"""Calculate coherence between samples using RealCoherenceValidator"""
|
||||||
if not sample_paths or len(sample_paths) < 2:
|
if not sample_paths or len(sample_paths) < 2:
|
||||||
return 1.0 # Single sample has perfect coherence
|
return 1.0 # Single sample has perfect coherence
|
||||||
|
|
||||||
# If metadata store available, use spectral features
|
try:
|
||||||
|
from engines.real_coherence_validator import RealCoherenceValidator
|
||||||
|
validator = RealCoherenceValidator()
|
||||||
|
|
||||||
|
# Calcular pares de coherencia
|
||||||
|
coherence_scores = []
|
||||||
|
for i, path1 in enumerate(sample_paths):
|
||||||
|
for path2 in sample_paths[i+1:]:
|
||||||
|
score = validator.compare_samples(path1, path2)
|
||||||
|
coherence_scores.append(score)
|
||||||
|
|
||||||
|
if coherence_scores:
|
||||||
|
return sum(coherence_scores) / len(coherence_scores)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"RealCoherenceValidator failed: {e}")
|
||||||
|
|
||||||
|
# Fallback: use metadata store if available
|
||||||
if self._metadata_store:
|
if self._metadata_store:
|
||||||
try:
|
try:
|
||||||
features_list = []
|
features_list = []
|
||||||
@@ -2940,19 +2957,17 @@ class SeniorArchitectureCoordinator:
|
|||||||
features_list.append(sample.spectral_centroid)
|
features_list.append(sample.spectral_centroid)
|
||||||
|
|
||||||
if len(features_list) >= 2:
|
if len(features_list) >= 2:
|
||||||
# Calculate variance of spectral features
|
|
||||||
import statistics
|
import statistics
|
||||||
mean_val = statistics.mean(features_list)
|
mean_val = statistics.mean(features_list)
|
||||||
if mean_val == 0:
|
if mean_val == 0:
|
||||||
return 1.0
|
return 1.0
|
||||||
variance = statistics.variance(features_list) if len(features_list) > 1 else 0
|
variance = statistics.variance(features_list) if len(features_list) > 1 else 0
|
||||||
# Coherence is inverse of normalized variance
|
|
||||||
coherence = max(0.0, 1.0 - (variance / (mean_val ** 2)) if mean_val else 1.0)
|
coherence = max(0.0, 1.0 - (variance / (mean_val ** 2)) if mean_val else 1.0)
|
||||||
return min(1.0, coherence)
|
return min(1.0, coherence)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.warning(f"Coherence calculation failed: {e}")
|
logger.warning(f"Metadata coherence failed: {e}")
|
||||||
|
|
||||||
# Fallback: assume high coherence
|
# Final fallback
|
||||||
return 0.85
|
return 0.85
|
||||||
|
|
||||||
def _apply_section_variation(self, elements: TypingList[str],
|
def _apply_section_variation(self, elements: TypingList[str],
|
||||||
|
|||||||
@@ -6168,7 +6168,7 @@ def validate_coherence(ctx: Context, sample_paths: list, threshold: float = 0.85
|
|||||||
)
|
)
|
||||||
"""
|
"""
|
||||||
try:
|
try:
|
||||||
from engines.coherence_system import CoherenceValidator
|
from engines.real_coherence_validator import RealCoherenceValidator
|
||||||
|
|
||||||
if len(sample_paths) < 2:
|
if len(sample_paths) < 2:
|
||||||
return _err("Need at least 2 samples to validate coherence.")
|
return _err("Need at least 2 samples to validate coherence.")
|
||||||
@@ -6176,7 +6176,7 @@ def validate_coherence(ctx: Context, sample_paths: list, threshold: float = 0.85
|
|||||||
if not 0.0 <= threshold <= 1.0:
|
if not 0.0 <= threshold <= 1.0:
|
||||||
return _err(f"Invalid threshold: {threshold}. Must be 0.0-1.0.")
|
return _err(f"Invalid threshold: {threshold}. Must be 0.0-1.0.")
|
||||||
|
|
||||||
validator = CoherenceValidator()
|
validator = RealCoherenceValidator()
|
||||||
results = validator.validate_batch(sample_paths)
|
results = validator.validate_batch(sample_paths)
|
||||||
|
|
||||||
# Calculate overall coherence
|
# Calculate overall coherence
|
||||||
@@ -6328,7 +6328,7 @@ def select_coherent_kit(ctx: Context, genre: str = "reggaeton",
|
|||||||
"""
|
"""
|
||||||
try:
|
try:
|
||||||
from engines.sample_selector import SampleSelector, get_selector
|
from engines.sample_selector import SampleSelector, get_selector
|
||||||
from engines.coherence_system import CoherenceValidator
|
from engines.real_coherence_validator import RealCoherenceValidator
|
||||||
|
|
||||||
if not 1 <= sample_count <= 20:
|
if not 1 <= sample_count <= 20:
|
||||||
return _err(f"Invalid sample_count: {sample_count}. Must be 1-20.")
|
return _err(f"Invalid sample_count: {sample_count}. Must be 1-20.")
|
||||||
|
|||||||
BIN
KONTROL49/Preset.syx
Normal file
BIN
KONTROL49/Preset.syx
Normal file
Binary file not shown.
BIN
MPD32/Preset.syx
Normal file
BIN
MPD32/Preset.syx
Normal file
Binary file not shown.
BIN
MPK25/Preset.syx
Normal file
BIN
MPK25/Preset.syx
Normal file
Binary file not shown.
BIN
MPK49/Preset.syx
Normal file
BIN
MPK49/Preset.syx
Normal file
Binary file not shown.
BIN
MPK61/Preset.syx
Normal file
BIN
MPK61/Preset.syx
Normal file
Binary file not shown.
BIN
MPK88/Preset.syx
Normal file
BIN
MPK88/Preset.syx
Normal file
Binary file not shown.
BIN
Push/Preset.syx
Normal file
BIN
Push/Preset.syx
Normal file
Binary file not shown.
BIN
Push/Setup.syx
Normal file
BIN
Push/Setup.syx
Normal file
Binary file not shown.
BIN
Push2/firmware/app_push2_stable_1.0.71.upgrade
Normal file
BIN
Push2/firmware/app_push2_stable_1.0.71.upgrade
Normal file
Binary file not shown.
BIN
Roland_A_PRO/Preset.syx
Normal file
BIN
Roland_A_PRO/Preset.syx
Normal file
Binary file not shown.
39
check_docstrings.py
Normal file
39
check_docstrings.py
Normal file
@@ -0,0 +1,39 @@
|
|||||||
|
#!/usr/bin/env python
|
||||||
|
"""Check for unclosed docstrings in server.py"""
|
||||||
|
|
||||||
|
with open(r'C:\ProgramData\Ableton\Live 12 Suite\Resources\MIDI Remote Scripts\AbletonMCP_AI\mcp_server\server.py', 'r', encoding='utf-8') as f:
|
||||||
|
lines = f.readlines()
|
||||||
|
|
||||||
|
# Check for triple-quote balance
|
||||||
|
in_docstring = False
|
||||||
|
docstring_char = None
|
||||||
|
line_num = 0
|
||||||
|
|
||||||
|
for i, line in enumerate(lines, 1):
|
||||||
|
if not in_docstring:
|
||||||
|
if '"""' in line:
|
||||||
|
count = line.count('"""')
|
||||||
|
if count % 2 == 1:
|
||||||
|
in_docstring = True
|
||||||
|
docstring_char = '"""'
|
||||||
|
line_num = i
|
||||||
|
if "'''" in line:
|
||||||
|
count = line.count("'''")
|
||||||
|
if count % 2 == 1:
|
||||||
|
in_docstring = True
|
||||||
|
docstring_char = "'''"
|
||||||
|
line_num = i
|
||||||
|
else:
|
||||||
|
if docstring_char in line:
|
||||||
|
count = line.count(docstring_char)
|
||||||
|
if count % 2 == 1:
|
||||||
|
in_docstring = False
|
||||||
|
|
||||||
|
if in_docstring:
|
||||||
|
print(f'ERROR: Unclosed docstring starting at line {line_num}')
|
||||||
|
# Show context
|
||||||
|
print(f'Line {line_num-2}: {repr(lines[line_num-3])}')
|
||||||
|
print(f'Line {line_num-1}: {repr(lines[line_num-2])}')
|
||||||
|
print(f'Line {line_num}: {repr(lines[line_num-1])}')
|
||||||
|
else:
|
||||||
|
print('All docstrings properly closed')
|
||||||
9
count_quotes.py
Normal file
9
count_quotes.py
Normal file
@@ -0,0 +1,9 @@
|
|||||||
|
#!/usr/bin/env python
|
||||||
|
"""Simple quote counter"""
|
||||||
|
|
||||||
|
with open(r'C:\ProgramData\Ableton\Live 12 Suite\Resources\MIDI Remote Scripts\AbletonMCP_AI\mcp_server\server.py', 'r', encoding='utf-8') as f:
|
||||||
|
content = f.read()
|
||||||
|
|
||||||
|
count = content.count('"""')
|
||||||
|
print(f"Number of triple-double-quotes: {count}")
|
||||||
|
print(f"Is even (balanced): {count % 2 == 0}")
|
||||||
5
mcp_call_debug.txt
Normal file
5
mcp_call_debug.txt
Normal file
@@ -0,0 +1,5 @@
|
|||||||
|
start
|
||||||
|
stdio connected
|
||||||
|
before init
|
||||||
|
after init
|
||||||
|
call exception: TimeoutError()
|
||||||
8
mcp_call_debug2.txt
Normal file
8
mcp_call_debug2.txt
Normal file
@@ -0,0 +1,8 @@
|
|||||||
|
get_session_info: before init
|
||||||
|
get_session_info: after init
|
||||||
|
get_session_info: after call
|
||||||
|
meta=None content=[TextContent(type='text', text='{\n "status": "success",\n "result": {\n "tempo": 95.0,\n "num_tracks": 4,\n "num_scenes": 8,\n "is_playing": false,\n "current_song_time": 0.0,\n "metronome": false,\n "master_volume": 0.8500000238418579\n }\n}', annotations=None, meta=None)] structuredContent={'result': '{\n "status": "success",\n "result": {\n "tempo": 95.0,\n "num_tracks": 4,\n "num_scenes": 8,\n "is_playing": false,\n "current_song_time": 0.0,\n "metronome": false,\n "master_volume": 0.8500000238418579\n }\n}'} isError=False
|
||||||
|
set_tempo: before init
|
||||||
|
set_tempo: after init
|
||||||
|
set_tempo: after call
|
||||||
|
meta=None content=[TextContent(type='text', text='{\n "status": "success",\n "result": {\n "status": "success",\n "result": {\n "tempo": 95.0\n }\n }\n}', annotations=None, meta=None)] structuredContent={'result': '{\n "status": "success",\n "result": {\n "status": "success",\n "result": {\n "tempo": 95.0\n }\n }\n}'} isError=False
|
||||||
BIN
microKONTROL/Preset.syx
Normal file
BIN
microKONTROL/Preset.syx
Normal file
Binary file not shown.
178
new_method.txt
Normal file
178
new_method.txt
Normal file
@@ -0,0 +1,178 @@
|
|||||||
|
|
||||||
|
# ------------------------------------------------------------------
|
||||||
|
# AGENTE 5: MULTI-PARAMETER AUTOMATION HANDLER
|
||||||
|
# ------------------------------------------------------------------
|
||||||
|
|
||||||
|
def _cmd_add_parameter_automation(self, track_index, parameter_name, points,
|
||||||
|
device_name="", clip_index=None, send_index=None, **kw):
|
||||||
|
"""Add automation envelope to track parameters (volume, pan, device params, sends).
|
||||||
|
|
||||||
|
Agente 5: Exposes multi-parameter automation via LiveBridge or direct API.
|
||||||
|
Supports track-level automation (volume, pan, sends) and clip/device automation.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
track_index: Index of the target track
|
||||||
|
parameter_name: Name of parameter to automate ("volume", "pan", "send", device param name)
|
||||||
|
points: List of [time, value] pairs where time is in beats and value is parameter-specific
|
||||||
|
device_name: Name of device (only for device_param automation, e.g., "EQ Eight")
|
||||||
|
clip_index: Clip index (only for clip-level automation)
|
||||||
|
send_index: Send index (only for send automation, 0-based)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Dict with automation creation status.
|
||||||
|
"""
|
||||||
|
try:
|
||||||
|
idx = int(track_index)
|
||||||
|
if idx < 0 or idx >= len(self._song.tracks):
|
||||||
|
return {"error": "Track index %d out of range" % idx}
|
||||||
|
|
||||||
|
track = self._song.tracks[idx]
|
||||||
|
param_name = str(parameter_name).lower()
|
||||||
|
points_count = len(points) if isinstance(points, (list, tuple)) else 0
|
||||||
|
|
||||||
|
# Track-level automation: volume
|
||||||
|
if param_name == "volume":
|
||||||
|
if hasattr(track, 'mixer_device') and hasattr(track.mixer_device, 'volume'):
|
||||||
|
vol_param = track.mixer_device.volume
|
||||||
|
for point in points[:64]: # Limit to 64 points
|
||||||
|
try:
|
||||||
|
time_val = float(point[0]) if len(point) > 0 else 0.0
|
||||||
|
value_val = float(point[1]) if len(point) > 1 else 0.85
|
||||||
|
# Clamp to valid range
|
||||||
|
value_val = max(0.0, min(1.0, value_val))
|
||||||
|
vol_param.value = value_val
|
||||||
|
except Exception as pe:
|
||||||
|
self.log_message("Volume automation point error: %s" % str(pe))
|
||||||
|
return {
|
||||||
|
"automation_added": True,
|
||||||
|
"track_index": idx,
|
||||||
|
"parameter": "volume",
|
||||||
|
"points_processed": points_count,
|
||||||
|
"final_value": float(vol_param.value)
|
||||||
|
}
|
||||||
|
return {"error": "Track %d does not have volume control" % idx}
|
||||||
|
|
||||||
|
# Track-level automation: pan
|
||||||
|
elif param_name == "pan":
|
||||||
|
if hasattr(track, 'mixer_device') and hasattr(track.mixer_device, 'panning'):
|
||||||
|
pan_param = track.mixer_device.panning
|
||||||
|
for point in points[:64]:
|
||||||
|
try:
|
||||||
|
time_val = float(point[0]) if len(point) > 0 else 0.0
|
||||||
|
value_val = float(point[1]) if len(point) > 1 else 0.0
|
||||||
|
# Clamp to valid range (-1.0 to 1.0)
|
||||||
|
value_val = max(-1.0, min(1.0, value_val))
|
||||||
|
pan_param.value = value_val
|
||||||
|
except Exception as pe:
|
||||||
|
self.log_message("Pan automation point error: %s" % str(pe))
|
||||||
|
return {
|
||||||
|
"automation_added": True,
|
||||||
|
"track_index": idx,
|
||||||
|
"parameter": "pan",
|
||||||
|
"points_processed": points_count,
|
||||||
|
"final_value": float(pan_param.value)
|
||||||
|
}
|
||||||
|
return {"error": "Track %d does not have pan control" % idx}
|
||||||
|
|
||||||
|
# Send automation
|
||||||
|
elif param_name == "send":
|
||||||
|
send_idx = int(send_index) if send_index is not None else 0
|
||||||
|
if hasattr(track, 'mixer_device') and hasattr(track.mixer_device, 'sends'):
|
||||||
|
sends = track.mixer_device.sends
|
||||||
|
if send_idx < len(sends):
|
||||||
|
send_param = sends[send_idx]
|
||||||
|
for point in points[:64]:
|
||||||
|
try:
|
||||||
|
time_val = float(point[0]) if len(point) > 0 else 0.0
|
||||||
|
value_val = float(point[1]) if len(point) > 1 else 0.0
|
||||||
|
value_val = max(0.0, min(1.0, value_val))
|
||||||
|
send_param.value = value_val
|
||||||
|
except Exception as pe:
|
||||||
|
self.log_message("Send automation point error: %s" % str(pe))
|
||||||
|
return {
|
||||||
|
"automation_added": True,
|
||||||
|
"track_index": idx,
|
||||||
|
"parameter": "send",
|
||||||
|
"send_index": send_idx,
|
||||||
|
"points_processed": points_count,
|
||||||
|
"final_value": float(send_param.value)
|
||||||
|
}
|
||||||
|
return {"error": "Send index %d out of range (track has %d sends)" % (send_idx, len(sends))}
|
||||||
|
return {"error": "Track %d does not have sends" % idx}
|
||||||
|
|
||||||
|
# Device parameter automation
|
||||||
|
elif device_name:
|
||||||
|
# Find device by name
|
||||||
|
target_device = None
|
||||||
|
if hasattr(track, 'devices'):
|
||||||
|
for device in track.devices:
|
||||||
|
if str(device_name).lower() in str(device.name).lower():
|
||||||
|
target_device = device
|
||||||
|
break
|
||||||
|
|
||||||
|
if target_device is None:
|
||||||
|
return {"error": "Device '%s' not found on track %d" % (device_name, idx)}
|
||||||
|
|
||||||
|
# Find parameter by name
|
||||||
|
if hasattr(target_device, 'parameters'):
|
||||||
|
target_param = None
|
||||||
|
for param in target_device.parameters:
|
||||||
|
if param_name in str(param.name).lower():
|
||||||
|
target_param = param
|
||||||
|
break
|
||||||
|
|
||||||
|
if target_param is None:
|
||||||
|
return {"error": "Parameter '%s' not found on device '%s'" % (parameter_name, device_name)}
|
||||||
|
|
||||||
|
# Apply automation points
|
||||||
|
configured = 0
|
||||||
|
for point in points[:64]:
|
||||||
|
try:
|
||||||
|
time_val = float(point[0]) if len(point) > 0 else 0.0
|
||||||
|
value_val = float(point[1]) if len(point) > 1 else 0.5
|
||||||
|
# Get parameter range
|
||||||
|
min_val = getattr(target_param, 'min', 0.0)
|
||||||
|
max_val = getattr(target_param, 'max', 1.0)
|
||||||
|
# Clamp to range
|
||||||
|
value_val = max(min_val, min(max_val, value_val))
|
||||||
|
target_param.value = value_val
|
||||||
|
configured += 1
|
||||||
|
except Exception as pe:
|
||||||
|
self.log_message("Device param automation error: %s" % str(pe))
|
||||||
|
|
||||||
|
return {
|
||||||
|
"automation_added": True,
|
||||||
|
"track_index": idx,
|
||||||
|
"device_name": device_name,
|
||||||
|
"parameter": parameter_name,
|
||||||
|
"points_processed": configured,
|
||||||
|
"final_value": float(target_param.value)
|
||||||
|
}
|
||||||
|
return {"error": "Device '%s' has no parameters" % device_name}
|
||||||
|
|
||||||
|
# Try LiveBridge add_automation if available
|
||||||
|
elif self.live_bridge and hasattr(self.live_bridge, 'add_automation'):
|
||||||
|
try:
|
||||||
|
clip_idx = int(clip_index) if clip_index is not None else 0
|
||||||
|
# Convert points to tuples for LiveBridge
|
||||||
|
tuple_points = [(float(p[0]), float(p[1])) for p in points if len(p) >= 2]
|
||||||
|
result = self.live_bridge.add_automation(idx, clip_idx, parameter_name, tuple_points)
|
||||||
|
return {
|
||||||
|
"automation_added": result.get("success", False),
|
||||||
|
"track_index": idx,
|
||||||
|
"clip_index": clip_idx,
|
||||||
|
"parameter": parameter_name,
|
||||||
|
"live_bridge_result": result
|
||||||
|
}
|
||||||
|
except Exception as lb_err:
|
||||||
|
return {"error": "LiveBridge automation failed: %s" % str(lb_err)}
|
||||||
|
|
||||||
|
else:
|
||||||
|
return {
|
||||||
|
"error": "Unknown parameter type '%s'. Supported: volume, pan, send, or device_param with device_name" % parameter_name,
|
||||||
|
"track_index": idx
|
||||||
|
}
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
self.log_message("Agente 5 automation error: %s" % str(e))
|
||||||
|
return {"automation_added": False, "error": str(e)}
|
||||||
BIN
node22.zip
Normal file
BIN
node22.zip
Normal file
Binary file not shown.
283
test_arrangement_injection.py
Normal file
283
test_arrangement_injection.py
Normal file
@@ -0,0 +1,283 @@
|
|||||||
|
"""
|
||||||
|
Comprehensive test script for Arrangement injection and related fixes.
|
||||||
|
Tests: coherence_system, audio_analyzer_dual, bus_architecture
|
||||||
|
ASCII-only to avoid encoding issues.
|
||||||
|
"""
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
|
||||||
|
def test_header(name):
|
||||||
|
print(f"\n{'='*60}")
|
||||||
|
print(f"TEST: {name}")
|
||||||
|
print('='*60)
|
||||||
|
|
||||||
|
def test_result(success, message):
|
||||||
|
status = "PASS" if success else "FAIL"
|
||||||
|
print(f" [{status}] {message}")
|
||||||
|
return success
|
||||||
|
|
||||||
|
def main():
|
||||||
|
print("\n" + "="*60)
|
||||||
|
print("ABLETON MCP AI - COMPREHENSIVE TEST SUITE")
|
||||||
|
print("="*60)
|
||||||
|
|
||||||
|
results = {"passed": 0, "failed": 0}
|
||||||
|
|
||||||
|
# Test 1: Coherence System (standalone)
|
||||||
|
test_header("1. COHERENCE SYSTEM (Standalone)")
|
||||||
|
try:
|
||||||
|
# Try importing without going through AbletonMCP_AI.__init__
|
||||||
|
import importlib.util
|
||||||
|
spec = importlib.util.spec_from_file_location(
|
||||||
|
"coherence_system",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/coherence_system.py"
|
||||||
|
)
|
||||||
|
coherence_module = importlib.util.module_from_spec(spec)
|
||||||
|
|
||||||
|
# Need to mock the numpy dependencies
|
||||||
|
import types
|
||||||
|
mock_np = types.ModuleType('numpy')
|
||||||
|
mock_np.float32 = float
|
||||||
|
mock_np.array = lambda x: x
|
||||||
|
mock_np.mean = lambda x: sum(x)/len(x) if x else 0
|
||||||
|
sys.modules['numpy'] = mock_np
|
||||||
|
|
||||||
|
spec.loader.exec_module(coherence_module)
|
||||||
|
|
||||||
|
test_result(True, "Coherence system loaded (mocked numpy)")
|
||||||
|
results["passed"] += 1
|
||||||
|
|
||||||
|
# Test basic functionality
|
||||||
|
try:
|
||||||
|
CoherenceFeatures = coherence_module.CoherenceFeatures
|
||||||
|
features1 = CoherenceFeatures(bpm=95.0, key="Am", spectral_centroid=500.0, mfcc_mean=0.5)
|
||||||
|
features2 = CoherenceFeatures(bpm=95.5, key="Am", spectral_centroid=510.0, mfcc_mean=0.52)
|
||||||
|
coherence = coherence_module.calculate_comprehensive_coherence(features1, features2)
|
||||||
|
test_result(True, f"Coherence calculation works: score={coherence.overall:.3f}")
|
||||||
|
results["passed"] += 1
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Coherence calculation failed: {e}")
|
||||||
|
results["failed"] += 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Coherence import failed: {e}")
|
||||||
|
results["failed"] += 2
|
||||||
|
|
||||||
|
# Test 2: Audio Analyzer Dual (standalone)
|
||||||
|
test_header("2. AUDIO ANALYZER DUAL (Standalone)")
|
||||||
|
try:
|
||||||
|
import importlib.util
|
||||||
|
spec = importlib.util.spec_from_file_location(
|
||||||
|
"audio_analyzer_dual",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/audio_analyzer_dual.py"
|
||||||
|
)
|
||||||
|
audio_module = importlib.util.module_from_spec(spec)
|
||||||
|
spec.loader.exec_module(audio_module)
|
||||||
|
|
||||||
|
AudioAnalyzerDual = audio_module.AudioAnalyzerDual
|
||||||
|
analyzer = AudioAnalyzerDual(backend="basic")
|
||||||
|
test_result(True, "AudioAnalyzerDual instantiated with basic backend")
|
||||||
|
results["passed"] += 1
|
||||||
|
|
||||||
|
# Try to analyze a sample if libreria exists
|
||||||
|
test_path = "libreria/reggaeton/kick/kick 1.wav"
|
||||||
|
if os.path.exists(test_path):
|
||||||
|
try:
|
||||||
|
features = analyzer.analyze_sample(test_path)
|
||||||
|
test_result(True, f"Audio analysis: BPM={features.bpm}, Key={features.key}")
|
||||||
|
results["passed"] += 1
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Audio analysis failed: {e}")
|
||||||
|
results["failed"] += 1
|
||||||
|
else:
|
||||||
|
test_result(True, f"Sample path not found (expected): {test_path}")
|
||||||
|
test_result(True, "AudioAnalyzerDual is importable and functional")
|
||||||
|
results["passed"] += 2
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"AudioAnalyzerDual import failed: {e}")
|
||||||
|
results["failed"] += 3
|
||||||
|
|
||||||
|
# Test 3: Bus Architecture
|
||||||
|
test_header("3. BUS ARCHITECTURE")
|
||||||
|
try:
|
||||||
|
import importlib.util
|
||||||
|
spec = importlib.util.spec_from_file_location(
|
||||||
|
"bus_architecture",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/bus_architecture.py"
|
||||||
|
)
|
||||||
|
bus_module = importlib.util.module_from_spec(spec)
|
||||||
|
spec.loader.exec_module(bus_module)
|
||||||
|
|
||||||
|
BUS_GAIN_CALIBRATION = bus_module.BUS_GAIN_CALIBRATION
|
||||||
|
bus_count = len(BUS_GAIN_CALIBRATION)
|
||||||
|
test_result(True, f"Bus config loaded: {bus_count} buses")
|
||||||
|
results["passed"] += 1
|
||||||
|
|
||||||
|
# Verify specific buses exist
|
||||||
|
expected_buses = ["DRUM_BUS", "BASS_BUS", "MIX_BUS", "MASTER_CHAIN"]
|
||||||
|
for bus in expected_buses:
|
||||||
|
if bus in BUS_GAIN_CALIBRATION:
|
||||||
|
test_result(True, f"Bus '{bus}' configured")
|
||||||
|
results["passed"] += 1
|
||||||
|
else:
|
||||||
|
test_result(False, f"Bus '{bus}' missing")
|
||||||
|
results["failed"] += 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Bus architecture import failed: {e}")
|
||||||
|
results["failed"] += 5
|
||||||
|
|
||||||
|
# Test 4: Arrangement Tools - check files exist
|
||||||
|
test_header("4. ARRANGEMENT TOOLS (File Check)")
|
||||||
|
arrangement_files = [
|
||||||
|
"AbletonMCP_AI/mcp_server/server.py",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/arrangement_injection.py",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/timeline_builder.py",
|
||||||
|
]
|
||||||
|
|
||||||
|
for filepath in arrangement_files:
|
||||||
|
if os.path.exists(filepath):
|
||||||
|
test_result(True, f"File exists: {filepath}")
|
||||||
|
results["passed"] += 1
|
||||||
|
else:
|
||||||
|
test_result(False, f"File missing: {filepath}")
|
||||||
|
results["failed"] += 1
|
||||||
|
|
||||||
|
# Check for arrangement functions in server.py
|
||||||
|
try:
|
||||||
|
with open("AbletonMCP_AI/mcp_server/server.py", 'r') as f:
|
||||||
|
content = f.read()
|
||||||
|
|
||||||
|
expected_functions = [
|
||||||
|
"build_arrangement_timeline",
|
||||||
|
"create_arrangement_track",
|
||||||
|
"create_arrangement_audio_pattern",
|
||||||
|
"get_arrangement_status",
|
||||||
|
"create_section_at_bar"
|
||||||
|
]
|
||||||
|
|
||||||
|
for func in expected_functions:
|
||||||
|
if f"def {func}(" in content or f"async def {func}(" in content:
|
||||||
|
test_result(True, f"Function defined: {func}")
|
||||||
|
results["passed"] += 1
|
||||||
|
else:
|
||||||
|
test_result(False, f"Function missing: {func}")
|
||||||
|
results["failed"] += 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Could not read server.py: {e}")
|
||||||
|
results["failed"] += 5
|
||||||
|
|
||||||
|
# Test 5: Intelligent Track Generator (standalone)
|
||||||
|
test_header("5. INTELLIGENT TRACK GENERATOR (Standalone)")
|
||||||
|
try:
|
||||||
|
import importlib.util
|
||||||
|
spec = importlib.util.spec_from_file_location(
|
||||||
|
"intelligent_track_generator",
|
||||||
|
"AbletonMCP_AI/mcp_server/engines/intelligent_track_generator.py"
|
||||||
|
)
|
||||||
|
itg_module = importlib.util.module_from_spec(spec)
|
||||||
|
spec.loader.exec_module(itg_module)
|
||||||
|
|
||||||
|
test_result(True, "IntelligentTrackGenerator module loaded")
|
||||||
|
results["passed"] += 1
|
||||||
|
|
||||||
|
# Test basic instantiation
|
||||||
|
try:
|
||||||
|
config_class = itg_module.IntelligentTrackConfig
|
||||||
|
config = config_class(
|
||||||
|
description="reggaeton 95bpm Am",
|
||||||
|
structure_type="short"
|
||||||
|
)
|
||||||
|
test_result(True, f"Config created: {config.description}")
|
||||||
|
results["passed"] += 1
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"Config creation failed: {e}")
|
||||||
|
results["failed"] += 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
test_result(False, f"IntelligentTrackGenerator import failed: {e}")
|
||||||
|
results["failed"] += 2
|
||||||
|
|
||||||
|
# Manual MCP Test Instructions
|
||||||
|
test_header("6. MANUAL MCP TEST INSTRUCTIONS")
|
||||||
|
print("""
|
||||||
|
The following tests must be run via MCP when Ableton Live is running:
|
||||||
|
|
||||||
|
TEST 6a: create_arrangement_audio_pattern
|
||||||
|
1. Ensure Ableton Live is running with MCP connection
|
||||||
|
2. Run: create_arrangement_audio_pattern with:
|
||||||
|
- track_index: 0
|
||||||
|
- file_path: "libreria/reggaeton/kick/kick 1.wav"
|
||||||
|
- positions: [0, 2, 4, 6]
|
||||||
|
- name: "Test Kick Pattern"
|
||||||
|
3. Verify: Clips appear in Arrangement View at bars 0, 2, 4, 6
|
||||||
|
|
||||||
|
TEST 6b: build_arrangement_timeline
|
||||||
|
1. Ensure Ableton Live is running with MCP connection
|
||||||
|
2. Run: build_arrangement_timeline with:
|
||||||
|
- sections_json: '[
|
||||||
|
{"name": "Intro", "start_bar": 0, "duration_bars": 4,
|
||||||
|
"tracks": [{"type": "drums", "variation": "minimal"}]},
|
||||||
|
{"name": "Verse", "start_bar": 4, "duration_bars": 8,
|
||||||
|
"tracks": [{"type": "drums", "variation": "full"},
|
||||||
|
{"type": "bass", "variation": "standard"}]}
|
||||||
|
]'
|
||||||
|
3. Verify: Two sections created in Arrangement View
|
||||||
|
|
||||||
|
TEST 6c: get_arrangement_status
|
||||||
|
1. Run: get_arrangement_status
|
||||||
|
2. Verify: Returns current clips in Arrangement View
|
||||||
|
3. Check: total_clips > 0 after running tests 6a or 6b
|
||||||
|
|
||||||
|
TEST 6d: create_arrangement_track
|
||||||
|
1. Run: create_arrangement_track with track_type="drums"
|
||||||
|
2. Verify: New track created in Arrangement View
|
||||||
|
3. Run: create_section_at_bar with section_type="intro", at_bar=0
|
||||||
|
4. Verify: Section created on the track
|
||||||
|
|
||||||
|
TEST 6e: generate_intelligent_track
|
||||||
|
1. Run: generate_intelligent_track with:
|
||||||
|
- description: "reggaeton 95bpm Am"
|
||||||
|
- structure_type: "short"
|
||||||
|
2. Verify: Complete track generated with coherence > 0.9
|
||||||
|
3. Check: Clips appear in Arrangement View
|
||||||
|
""")
|
||||||
|
|
||||||
|
# Sample library check
|
||||||
|
test_header("7. SAMPLE LIBRARY CHECK")
|
||||||
|
libreria_path = "libreria"
|
||||||
|
if os.path.exists(libreria_path):
|
||||||
|
test_result(True, f"Sample library exists: {libreria_path}")
|
||||||
|
results["passed"] += 1
|
||||||
|
|
||||||
|
# Count samples
|
||||||
|
sample_count = 0
|
||||||
|
for root, dirs, files in os.walk(libreria_path):
|
||||||
|
for file in files:
|
||||||
|
if file.endswith(('.wav', '.mp3', '.aif')):
|
||||||
|
sample_count += 1
|
||||||
|
|
||||||
|
test_result(True, f"Total samples found: {sample_count}")
|
||||||
|
results["passed"] += 1
|
||||||
|
else:
|
||||||
|
test_result(False, f"Sample library not found: {libreria_path}")
|
||||||
|
results["failed"] += 2
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
test_header("TEST SUMMARY")
|
||||||
|
total = results["passed"] + results["failed"]
|
||||||
|
print(f" Total tests: {total}")
|
||||||
|
print(f" Passed: {results['passed']}")
|
||||||
|
print(f" Failed: {results['failed']}")
|
||||||
|
|
||||||
|
if results["failed"] == 0:
|
||||||
|
print("\n *** ALL TESTS PASSED ***")
|
||||||
|
return 0
|
||||||
|
else:
|
||||||
|
print(f"\n *** {results['failed']} TEST(S) FAILED ***")
|
||||||
|
return 1
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
sys.exit(main())
|
||||||
Reference in New Issue
Block a user