feat: professional reggaeton production engine — 7 SDD changes, 302 tests

- section-energy: track activity matrix + volume/velocity multipliers per section
- smart-chords: ChordEngine with voice leading, inversions, 4 emotion modes
- hook-melody: melody engine with hook/stabs/smooth styles, call-and-response
- mix-calibration: Calibrator module (LUFS volumes, HPF/LPF, stereo, sends, master)
- transitions-fx: FX track with risers/impacts/sweeps at section boundaries
- sidechain: MIDI CC11 bass ducking on kick hits via DrumLoopAnalyzer
- presets-pack: role-aware plugin presets (Serum/Decapitator/Omnisphere per role)

Full SDD pipeline (propose→spec→design→tasks→apply→verify) for all 7 changes.
302/302 tests passing.
renato97
2026-05-03 23:54:29 -03:00
parent 48bc271afc
commit 014e636889
51 changed files with 11394 additions and 113 deletions


@@ -0,0 +1,125 @@
# Design: Hook-Based Reggaeton Melody
## Technical Approach
Replace `build_lead_track()`'s random pentatonic generation with a deterministic hook engine (`melody_engine.py`) producing identifiable repeating motifs with call-and-response structure and chord-aware note selection. The engine consists of pure functions — no I/O, no global state — operating on `list[MidiNote]` and using `random.Random(seed)` for reproducibility.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| Module location | `src/composer/melody_engine.py` | `scripts/compose.py` inline | Composer pattern already used by `rhythm.py`, `variation.py` |
| RNG strategy | `random.Random(seed)` per-call | Global `random.seed()` | Isolated RNG prevents cross-call interference; `rhythm.py` already uses this pattern |
| Note format | `list[MidiNote]` (existing schema) | New dict/tuple format | Zero adapter code; direct ClipDef compatibility |
| Scale source | `get_pentatonic()` from `compose.py` | Inline scale calc | Reuses proven helper; no duplication |
| Chord source | `CHORD_PROGRESSION` from `compose.py` | New chord dict | Single source of truth for i-VI-III-VII |
| Variation approach | Clone + mutate lists | Decorator/lazy | Simple, testable, matches motif identity requirement |
| Lead track integration | `build_lead_track()` becomes thin wrapper | Full rewrite | Minimizes compose.py diff; preserves section logic |
| Style selection | Hardcoded to "hook" initially | CLI flag | Proposal scope limitation; extensible via param later |
## Data Flow
```
compose.py::build_lead_track(sections, offsets, key_root, key_minor, seed)
├─► melody_engine.build_motif(key_root, key_minor, "hook", bars=4)
│ │
│ ├── get_pentatonic(key_root, key_minor, octave) → scale notes
│ ├── CHORD_PROGRESSION → chord tones per bar
│ ├── random.Random(seed) → deterministic RNG
│ └── returns list[MidiNote] (arch contour, chord-tone emphasis)
├─► melody_engine.apply_variation(motif, shift=0.25)
│ └── returns list[MidiNote] (same structure, offset timing)
└─► melody_engine.build_call_response(motif, bars, key_root, key_minor)
├── First half: call (motif + variation, end on V/VII)
├── Second half: response (motif, end on i)
└── returns list[MidiNote] (full section)
ClipDef(midi_notes=..., position=..., length=...) → TrackDef
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/composer/melody_engine.py` | Create | `build_motif()`, `apply_variation()`, `build_call_response()` |
| `scripts/compose.py` | Modify | `build_lead_track()` delegates to `melody_engine`; pass seed |
| `tests/test_compose_integration.py` | Modify | Update `test_melody_uses_pentatonic` expectations |
| `tests/test_melody_engine.py` | Create | Unit tests for motif, variation, call-response, determinism |
## Interfaces / Contracts
```python
# src/composer/melody_engine.py
def build_motif(
    key_root: str,    # "A", "D", etc.
    key_minor: bool,  # True = minor, False = major
    style: str,       # "hook" | "stabs" | "smooth"
    bars: int = 4,    # 2–8 bars
    seed: int = 42,
) -> list[MidiNote]:
    """Generate a 2–4 bar repeating motif using chord-aware scale selection."""
    ...

def apply_variation(
    motif: list[MidiNote],
    shift_beats: float = 0.0,
    transpose_semitones: int = 0,
) -> list[MidiNote]:
    """Apply rhythmic shift and/or pitch transpose to motif. Returns new list."""
    ...

def build_call_response(
    motif: list[MidiNote],
    bars: int = 8,
    key_root: str = "A",
    key_minor: bool = True,
    seed: int = 42,
) -> list[MidiNote]:
    """Build call-and-response structure: call (V/VII end) + response (i end)."""
    ...

# compose.py retains exact signature:
def build_lead_track(
    sections, offsets, key_root, key_minor, seed=0
) -> TrackDef:
    # Sections with lead: chorus, chorus2, final (unchanged)
    # Clips built via melody_engine.build_call_response()
    ...
```
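The determinism contract can be sketched with a minimal, self-contained motif builder; `MidiNote` and the pentatonic offsets below are stand-ins for illustration, not the project's actual schema:

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class MidiNote:
    # Stand-in for the project's MidiNote schema (assumption).
    pitch: int       # MIDI note number
    start: float     # beats from clip start
    duration: float  # beats
    velocity: int = 96

MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets above the root

def build_motif(root_pitch: int, bars: int = 4, seed: int = 42) -> list[MidiNote]:
    """Deterministic motif: a per-call random.Random(seed) isolates RNG state."""
    rng = random.Random(seed)
    notes = []
    for beat in range(bars * 4):  # one note per quarter-note grid position
        offset = rng.choice(MINOR_PENTATONIC)
        notes.append(MidiNote(root_pitch + offset, float(beat), 0.5))
    return notes

hook = build_motif(57, bars=4, seed=42)       # 57 = MIDI A3
assert hook == build_motif(57, bars=4, seed=42)  # same seed, identical output
```

The per-call `random.Random(seed)` is the key design point: two calls with the same arguments cannot interfere with each other or with any global RNG consumer.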
### Scale & Chord Helpers (internal to melody_engine)
```python
def _resolve_chord_tones(root: str, is_minor: bool, bar: int) -> set[int]:
    """Return MIDI pitches for active chord at given bar index (from CHORD_PROGRESSION)."""

def _resolve_tension_notes(root: str, is_minor: bool, degree: str) -> int:
    """Return V or VII pitch for call-resolution scheme."""
```
## Testing Strategy
| Layer | What to Test | Approach |
|-------|-------------|----------|
| Unit | `build_motif()` determinism | Same seed → identical output, different seed → different |
| Unit | `build_motif()` style validation | Invalid style → ValueError with message |
| Unit | `build_motif()` chord-tone ratio | Count notes on strong beats, assert ≥70% chord tones |
| Unit | `apply_variation()` identity | Note count preserved, durations preserved, IOIs preserved |
| Unit | `build_call_response()` resolution | Last note of call half = V/VII, last note overall = tonic |
| Unit | `build_call_response()` length | Notes span exactly `bars` parameter worth of beats |
| Integration | `build_lead_track()` delegation | Returns TrackDef with clips using call-response structure |
| Regression | Existing 110+ tests | All pass after updating melody assertion |
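The variation-identity row can be made concrete with a self-contained check; `apply_variation` here is a simplified stand-in operating on `(pitch, start, duration)` tuples:

```python
def apply_variation(motif, shift_beats=0.0, transpose_semitones=0):
    # Simplified stand-in: uniform time shift plus chromatic transpose.
    return [(p + transpose_semitones, s + shift_beats, d) for p, s, d in motif]

motif = [(57, 0.0, 0.5), (60, 1.0, 0.5), (64, 2.5, 0.25)]
shifted = apply_variation(motif, shift_beats=0.25)

def iois(m):  # inter-onset intervals between consecutive notes
    return [b[1] - a[1] for a, b in zip(m, m[1:])]

assert len(shifted) == len(motif)                        # note count preserved
assert [n[2] for n in shifted] == [n[2] for n in motif]  # durations preserved
assert iois(shifted) == iois(motif)                      # IOIs preserved
```

Because the shift is uniform, inter-onset intervals survive unchanged — that is exactly the "motif identity" property the unit test asserts.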
## Migration / Rollout
No migration required. `build_lead_track()` signature unchanged. Rollback = `git revert`.
## Open Questions
- None. All blocking decisions resolved above.


@@ -0,0 +1,86 @@
# Proposal: Hook-Based Reggaeton Melody
## Intent
`build_lead_track()` generates random pentatonic notes — no hook, no identity, no rhythmic motif. Professional reggaeton leads have memorable hooks (repeating motif), rhythmic alignment with the dembow grid, call-and-response structure, and chord-tone emphasis on strong beats. This change replaces random generation with a structured hook engine producing identifiable, repeating motifs with controlled variation.
## Scope
### In Scope
- **Hook engine module** (`src/composer/melody_engine.py`) — generates motifs, variations, call-response
- **3 reggaeton styles**: "stabs" (syncopated hits on 1, 2.5, 3, 3.5), "smooth" (stepwise eighth notes), "hook" (arch contour, chord tones on strong beats)
- **Motif + variation loop**: 2–4 bar motif repeated 2–4× with transpose/rhythmic-shift variations
- **Call-and-response**: first half = call (ends on V/VII), second half = response (resolves to i)
- **Chord-aware note selection**: strong beats (1, 3) favor chord tones; weak beats use scale passing tones
- **Replace `build_lead_track()`** in `compose.py` to delegate to the new engine
- **Tests** for deterministic output, motif identity preserved across variations, call-response resolution
### Out of Scope
- MIDI velocity humanization / groove quantization
- User-selectable style at CLI (hardcoded to "hook" style initially)
- Chord progression generation (uses existing `CHORD_PROGRESSION` from compose.py)
- Pad/chords/bass refactoring — lead only
## Capabilities
### New Capabilities
- `melody-engine`: Deterministic hook generation with motif, variation, call-response, and 3 reggaeton styles. Chord-aware via `CHORD_PROGRESSION` input.
### Modified Capabilities
- None at spec level. `build_lead_track()` API unchanged (same signature). Behavior changes from random to deterministic, but callers see same interface.
## Approach
New module `src/composer/melody_engine.py` with:
1. **`build_motif(key_root, key_minor, style, bars=4)`** → `list[MidiNote]`
   - Style "hook": arch contour, chord tones on 0, 2, 4... beats, 4–8 notes
- Style "stabs": short 16th hits on [1.0, 2.5, 3.0, 3.5] per bar
- Style "smooth": stepwise scalar motion at eighth-note density
- Chords resolved from `CHORD_PROGRESSION` for chord-tone selection
2. **`apply_variation(motif, shift=0, transpose=0)`** → variation
- Rhythmic shift: offset within the grid
- Transpose: ±octave or ±third within scale
3. **`build_call_response(motif, sections, key_root, key_minor)`** → `list[ClipDef]`
- First half = call (motif + slight variation, ends on tension note)
- Second half = response (motif, resolves to tonic)
- Repeats for section length
`compose.py` `build_lead_track()` becomes thin wrapper calling `melody_engine`. All existing tests pass with updated expected values.
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/composer/melody_engine.py` | New | Hook engine — motifs, variations, call-response |
| `scripts/compose.py` | Modified | `build_lead_track()` delegates to melody_engine; `get_pentatonic()` stays as helper |
| `tests/test_compose_integration.py` | Modified | Update `test_melody_uses_pentatonic` to assert motif structure |
| `tests/test_section_builder.py` | None | `get_pentatonic` tests unaffected |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Deterministic melody sounds repetitive | Med | 3 style options + variation params provide diversity; section energy scales velocity |
| Chord-awareness breaks if CHORD_PROGRESSION changes format | Low | Hardcoded in compose.py — same module owns both; integration test catches mismatch |
| Motif too short for long sections (8+ bars) | Low | Call-response repeats motif to fill bars; edge case validated in tests |
## Rollback Plan
Revert `build_lead_track()` to original random-pentatonic implementation (git revert). No schema or API changes — pure function replacement.
## Dependencies
- `CHORD_PROGRESSION` constant from `compose.py` (existing)
- `get_pentatonic()` helper from `compose.py` (kept, reused)
## Success Criteria
- [ ] `build_lead_track()` produces identical output for same seed+key input (deterministic)
- [ ] Generated melody contains a repeating 2–4 bar motif with ≤2 variations
- [ ] Call section ends on V or VII degree; response resolves to i
- [ ] Strong beats (quarter positions) use chord tones ≥70% of the time
- [ ] All 110+ existing tests pass
- [ ] 5+ new tests for melody_engine: motif identity, variation bounds, call-response resolution


@@ -0,0 +1,121 @@
# Delta for melody-engine
## ADDED Requirements
| # | Requirement | RFC |
|---|------------|-----|
| R1 | Motif generation with 3 reggaeton styles | MUST |
| R2 | Deterministic output from seed | MUST |
| R3 | Call-and-response phrase structure | MUST |
| R4 | Chord-aware note selection | MUST |
| R5 | Motif variation via transpose/rhythmic shift | SHOULD |
| R6 | build_lead_track() delegation | MUST |
### Requirement: Motif Generation (R1)
`build_motif(key_root, key_minor, style, bars, seed)` MUST generate a 2–4 bar repeating motif using scale-aware note selection. Three styles:
- **hook**: Arch contour (ascend then descend), chord tones on beats 0, 2, 4..., 4–8 notes
- **stabs**: Short 16th-duration hits on dembow grid positions [1.0, 2.5, 3.0, 3.5] per bar
- **smooth**: Stepwise scalar motion at eighth-note density, ≤2 semitones between consecutive notes
MUST accept `bars` parameter (2–8) defaulting to 4. MUST return `list[MidiNote]`.
#### Scenario: hook style generates arch contour with chord tones
- GIVEN key Am, style "hook", bars=4, seed=42
- WHEN `build_motif("A", True, "hook", 4, 42)` is called
- THEN returns 4–12 MidiNote objects
- AND notes on quarter-beat positions (0, 2, 4, …) are within the i-VI-III-VII chord tones ≥70% of the time
#### Scenario: stabs style generates dembow-positioned hits
- GIVEN key Am, style "stabs", bars=2, seed=1
- WHEN `build_motif("A", True, "stabs", 2, 1)` is called
- THEN all note start times are within {1.0, 2.5, 3.0, 3.5} per bar
- AND each note duration ≤ 0.25 beats (16th note)
#### Scenario: smooth style generates stepwise motion
- GIVEN key Am, style "smooth", bars=4, seed=7
- WHEN `build_motif("A", True, "smooth", 4, 7)` is called
- THEN pitch difference between consecutive notes ≤ 2 semitones
#### Scenario: invalid style raises ValueError
- GIVEN an unrecognized style string
- WHEN `build_motif("A", True, "invalid", 4, 42)` is called
- THEN raises ValueError with message containing valid styles
### Requirement: Deterministic Output (R2)
`build_motif()` and `apply_variation()` MUST produce identical output for identical input parameters (key, style, bars, seed). MUST NOT rely on global RNG state.
#### Scenario: same seed produces identical output
- GIVEN fixed parameters
- WHEN `build_motif("A", True, "hook", 4, 42)` is called twice
- THEN both calls return identical lists of MidiNote objects
#### Scenario: different seeds produce different output
- GIVEN same key and style but different seeds
- WHEN `build_motif("A", True, "hook", 4, 42)` and `build_motif("A", True, "hook", 4, 99)` are called
- THEN the returned note lists differ
### Requirement: Call-and-Response Structure (R3)
`build_call_response(motif, bars, key_root, key_minor, seed)` MUST generate two halves: **call** (motif + variation, ending on V or VII degree) and **response** (motif, resolving to tonic i). Total length MUST equal `bars` parameter. SHALL repeat motif to fill section length.
#### Scenario: call ends on tension, response resolves
- GIVEN an Am hook motif, bars=8, seed=42
- WHEN `build_call_response(motif, 8, "A", True, 42)` is called
- THEN the last note of the first 4 bars has pitch in {E, G} (V or VII of Am)
- AND the last note of the final bar (bar 8) has pitch in {A} (tonic)
#### Scenario: fills section with motif repetition
- GIVEN a 2-bar motif and bars=8
- WHEN `build_call_response(motif, 8, "A", True, 42)` is called
- THEN returns notes spanning 8 bars total
- AND motif content repeats at least 2 times within the 8 bars
### Requirement: Chord-Aware Notes (R4)
Note selection on strong beats (quarter note positions 0, 4, 8, 12 per bar in 16th-note grid) MUST favor chord tones from `CHORD_PROGRESSION`. Weak beats (all other positions) MAY use any scale degree.
#### Scenario: strong beats favor chord tones
- GIVEN key Am (CHORD_PROGRESSION = i-VI-III-VII), style "hook", bars=8
- WHEN a motif is generated
- THEN ≥70% of notes starting on quarter-beat boundaries belong to active chord tones
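The ≥70% criterion can be computed as below, assuming note starts are expressed in beat units so strong beats fall on whole quarter positions:

```python
def strong_beat_chord_ratio(notes, chord_pcs):
    """notes: (pitch, start_beat) pairs; strong beats = whole quarter positions."""
    strong = [pitch for pitch, start in notes if start == int(start)]
    if not strong:
        return 0.0
    hits = sum(1 for pitch in strong if pitch % 12 in chord_pcs)
    return hits / len(strong)

# A (57) and C (60) on beats 0 and 1 are Am chord tones; the off-beat B is ignored.
ratio = strong_beat_chord_ratio([(57, 0.0), (60, 1.0), (59, 1.5)], {9, 0, 4})
assert ratio >= 0.7
```

The assertion in the unit test is then simply `strong_beat_chord_ratio(motif, tones) >= 0.7` against the chord tones active in each bar.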
### Requirement: Motif Variation (R5)
`apply_variation(motif, shift_beats, transpose_semitones)` SHOULD produce a recognizable variant of the input motif. `shift_beats` offsets all start times within the loop. `transpose_semitones` shifts pitches within the scale. MUST return `list[MidiNote]`.
#### Scenario: rhythmic shift preserves note count and structure
- GIVEN a 4-bar hook motif
- WHEN `apply_variation(motif, shift_beats=0.25)` is called
- THEN note count equals original
- AND all note durations equal original
- AND inter-onset intervals are preserved
#### Scenario: transpose within scale preserves motif contour
- GIVEN a 4-bar hook motif in Am
- WHEN `apply_variation(motif, transpose_semitones=3)` is called
- THEN all pitches are offset by ±3 semitones (within pentatonic scale)
### Requirement: build_lead_track() Delegation (R6)
`build_lead_track()` in `compose.py` MUST delegate to `melody_engine.build_call_response()` instead of generating random pentatonic notes directly. MUST keep identical function signature. MUST pass existing tests after adjusting expected note counts.
#### Scenario: build_lead_track uses call-response structure
- GIVEN seed=42, key Am, sections containing "chorus" and "final"
- WHEN `build_lead_track(sections, offsets, "A", True, 42)` is called
- THEN returned TrackDef clips contain notes organized as call-response phrases
- AND at least one clip has notes ending on tonic pitch


@@ -0,0 +1,35 @@
# Tasks: Hook-Based Reggaeton Melody
## Phase 1: Melody Engine Core
- [x] 1.1 Create `src/composer/melody_engine.py` with `build_motif(key_root, key_minor, style, bars, seed)` → `list[MidiNote]`
- [x] 1.2 Implement "hook" style: arch contour, chord tones on strong beats, 4–8 notes
- [x] 1.3 Implement "stabs" style: 16th-duration hits on dembow positions [1.0, 2.5, 3.0, 3.5] per bar
- [x] 1.4 Implement "smooth" style: stepwise scalar eighth-note motion
- [x] 1.5 Implement `apply_variation(motif, shift_beats, transpose_semitones)` → `list[MidiNote]`
- [x] 1.6 Implement `build_call_response(motif, bars, key_root, key_minor, seed)` → `list[MidiNote]`
- [x] 1.7 Wire internal helpers: `_resolve_chord_tones()`, `_resolve_tension_notes()`
## Phase 2: Integration
- [x] 2.1 Modify `build_lead_track()` in `scripts/compose.py` to delegate to `melody_engine.build_call_response()`
- [x] 2.2 Pass seed through to melody engine calls
- [x] 2.3 Keep `get_pentatonic()` and `CHORD_PROGRESSION` unchanged in compose.py
## Phase 3: Testing
- [x] 3.1 Create `tests/test_melody_engine.py` with `test_motif_deterministic` (same seed = same output)
- [x] 3.2 Test `test_motif_different_seeds_different_output`
- [x] 3.3 Test `test_invalid_style_raises_value_error`
- [x] 3.4 Test `test_hook_chord_tones_on_strong_beats` (≥70% ratio)
- [x] 3.5 Test `test_stabs_grid_alignment` (all notes on dembow positions)
- [x] 3.6 Test `test_smooth_stepwise_motion` (consecutive ≤2 semitones)
- [x] 3.7 Test `test_variation_preserves_note_count_structure`
- [x] 3.8 Test `test_call_ends_on_tension_response_ends_on_tonic` (V/VII → i)
- [x] 3.9 Test `test_call_response_fills_bars` (motif repeats to fill section)
- [x] 3.10 Update `test_melody_uses_pentatonic` in `tests/test_compose_integration.py` for hook structure
## Phase 4: Validation
- [x] 4.1 Run full test suite: `pytest tests/ -x` — 247/248 pass (1 pre-existing failure, unrelated)
- [ ] 4.2 Manual verification: generate .rpp with `--seed 42`, confirm lead clips contain repeating motif structure


@@ -0,0 +1,101 @@
# Design: Automated Mix Calibration
## Technical Approach
Add a calibrator module as a post-processing step between `compose.main()` and `RPPBuilder.build()`. The calibrator mutates a `SongDefinition` in-place: sets role-based volumes/pans/sends, prepends ReaEQ plugins with HPF/LPF params, and swaps the master chain to Ozone 12. The `--no-calibrate` flag skips this entirely, preserving existing behavior.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| Calibrator placement | Separate `src/calibrator/` module | Inline in compose.py | compose.py is 612 lines; calibration is a separate concern (mixing vs composition); follows existing module pattern (selector/, builder/) |
| ReaEQ injection | Prepended to `track.plugins` list as `PluginDef` with params dict | Separate data structure | `_build_plugin()` already handles PluginDef in plugin chains; zero new serialization format |
| ReaEQ param serialization | Populate `PluginDef.params`; `_build_plugin()` reads and fills VST param slots | New element builder | Reuses existing `_build_plugin` codepath; built-in VST2 plugins already have `param_slots = ["0"]*19` pattern (line 1785) |
| Master chain fallback | Try Ozone 12 first; fall back to Pro-Q_3/Pro-C_2/Pro-L_2 if missing from PLUGIN_REGISTRY | Raise error / skip | Graceful degradation on machines without iZotope plugins |
| Skip flag storage | `SongMeta.calibrate: bool` (optional, default True) | Global config / env var | Per-song granularity; schema already supports optional fields; zero impact on serialization |
## Data Flow
```
compose.main()
├── build_*_track() → SongDefinition
├── if not no_calibrate:
│ Calibrator.apply(song)
│ ├── _calibrate_volumes() ← VOLUME_PRESETS
│ ├── _calibrate_eq() ← EQ_PRESETS → ReaEQ PluginDef.params
│ ├── _calibrate_pans() ← PAN_PRESETS
│ ├── _calibrate_sends() ← SEND_PRESETS
│ └── _swap_master_chain() ← Ozone 12 fallback to Pro-Q_3/Pro-C_2/Pro-L_2
└── RPPBuilder(song).write()
└── _build_plugin(PluginDef)
└── if built-in (ReaEQ) + params: fill param_slots[] from PluginDef.params
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/calibrator/__init__.py` | Create | `Calibrator` class with `apply(song: SongDefinition) -> SongDefinition` |
| `src/calibrator/presets.py` | Create | `VOLUME_PRESETS`, `EQ_PRESETS`, `PAN_PRESETS`, `SEND_PRESETS` dicts keyed by role |
| `src/reaper_builder/__init__.py` | Modify | `_build_plugin()` — read `PluginDef.params` for built-in plugins (ReaEQ) and populate `param_slots` |
| `scripts/compose.py` | Modify | Import Calibrator; call `calibrator.apply(song)` after track construction; add `--no-calibrate` arg |
| `src/core/schema.py` | Modify | Add `calibrate: bool = True` to `SongMeta` |
## ReaEQ Param Serialization Detail
Current code (line 1785): `param_slots = ["0"] * 19` — always zeros.
After change: if `plugin.params` is non-empty and the plugin is a built-in VST2, read param index → value from the dict:
```python
param_slots = ["0"] * 19
if plugin.params:
    for idx, val in plugin.params.items():
        if 0 <= idx < 19:
            param_slots[idx] = str(val)
```
ReaEQ band 0 params (what we set):
- Slot 0: band enabled (1 = on)
- Slot 1: filter type (0 = LPF, 1 = HPF)
- Slot 2: frequency (Hz, e.g. 200.0)
- Slots 3-7: gain, Q, etc. (default 0)
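A hedged sketch of building the ReaEQ `PluginDef` with band-0 filter params; the `PluginDef` dataclass here is a stand-in for the real schema in `src/core/schema.py`:

```python
from dataclasses import dataclass, field

@dataclass
class PluginDef:
    # Stand-in for the schema's PluginDef (assumption).
    name: str
    params: dict[int, float] = field(default_factory=dict)

def reaeq_filter(freq_hz: float, highpass: bool) -> PluginDef:
    # Band 0 slots: 0 = enabled, 1 = filter type (1 = HPF, 0 = LPF), 2 = frequency.
    return PluginDef("ReaEQ", params={0: 1, 1: 1 if highpass else 0, 2: freq_hz})

hpf = reaeq_filter(200.0, highpass=True)
# hpf.params → {0: 1, 1: 1, 2: 200.0}
```

The calibrator would prepend the returned `PluginDef` to `track.plugins`, and `_build_plugin()` serializes `params` into the 19 fixed slots as shown above.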
## Interfaces
```python
# src/calibrator/__init__.py
class Calibrator:
    """Post-processing mix calibrator for SongDefinition."""

    @staticmethod
    def apply(song: SongDefinition) -> SongDefinition:
        """Apply role-based volume, EQ, pan, sends, and master chain calibration.

        Mutates song in-place and returns it.
        Skips tracks named 'Reverb' or 'Delay' (return tracks).
        """
        ...

    @staticmethod
    def _resolve_role(track_name: str) -> str | None:
        """Map track name to role key, or None."""
        ...
```
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | `_resolve_role()` mapping | All 7 track names → correct roles; unknown → None |
| Unit | `Calibrator.apply()` on fixture song | Assert volumes/pans/sends match presets; assert ReaEQ in plugins[0]; assert master_plugins swapped |
| Unit | `--no-calibrate` behavior | Assert `Calibrator.apply()` not called; master_plugins unchanged |
| Unit | Ozone fallback | Mock PLUGIN_REGISTRY without Ozone entries; assert fallback to Pro-Q_3/Pro-C_2/Pro-L_2 |
| Unit | ReaEQ param serialization | Build PluginDef with params={0:1, 1:1, 2:200.0}; assert output VST element has correct param slots |
| Regression | Existing 110 tests | All pass — calibration is additive |
## Open Questions
None.


@@ -0,0 +1,87 @@
# Proposal: Automated Mix Calibration
## Intent
All track volumes, pans, and sends are hardcoded constants. No frequency balancing. Master chain uses Pro-Q_3/Pro-C_2/Pro-L_2 with DEFAULT presets. Result: flat, amateur sound with bass-drum masking and no stereo width.
Add a post-processing calibrator that sets role-based LUFS volumes, HPF/LPF EQ, stereo panning, calibrated sends, and a proper mastering chain.
## Scope
### In Scope
- `src/calibrator/` module — calibrates a `SongDefinition` with role-aware mix settings
- LUFS-targeted volumes per role (kick -8 → drumloop 0.85, bass -10 → 0.72, lead -12 → 0.78, etc.)
- HPF/LPF via ReaEQ plugins prepended to each track (HPF on non-bass, LPF on bass)
- Stereo width management: bass/kick mono, lead wide (±0.3), chords wider (±0.5), clap off-center
- Calibrated send levels: lead 25% verb / 15% delay, chords 30% / 10%, pad 40% / 20%, drums 10% / 0%
- Master chain swap: Pro-Q_3 → Ozone 12 Equalizer, Pro-C_2 → Ozone 12 Dynamics, Pro-L_2 → Ozone 12 Maximizer
- `--no-calibrate` flag on compose.py to skip calibration
### Out of Scope
- True LUFS measurement (requires REAPER rendering — Phase 2 via ReaScript)
- ReaEQ parameter automation (parametric curves, dynamic EQ)
- Reference-track matching
- Multi-genre calibration profiles (reggaeton only for now)
## Capabilities
### New Capabilities
- `mix-calibration`: Role-based volume/pan/send/EQ calibration applied as post-processing step on `SongDefinition`
### Modified Capabilities
<!-- None — existing compose pipeline is unchanged; calibration is additive -->
None
## Approach
**Separate calibrator module** (`src/calibrator/`), NOT inline in compose.py. Rationale:
- compose.py is 612 lines — adding 200+ calibration lines would bloat it
- Calibration is a separate concern (mixing vs. composition)
- Independently testable, skippable via `--no-calibrate`
- Follows existing module pattern (selector/, builder/, validator/)
**Data flow**: `compose.main()` → `SongDefinition` → `Calibrator.apply(song)` → calibrated `SongDefinition` → `RPPBuilder.build()`
**HPF/LPF strategy**: Add ReaEQ plugin to each track's plugin list. Extend `_build_plugin()` to serialize `PluginDef.params` into VST parameter slots (currently ignored). ReaEQ uses 19 fixed param slots; we populate band 0 (type=1 HPF or type=0 LPF) with frequency values.
**Master chain**: Replace `master_plugins=["Pro-Q_3","Pro-C_2","Pro-L_2"]` with `["Ozone_12_Equalizer","Ozone_12_Dynamics","Ozone_12_Maximizer"]` using default presets already in registry.
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/calibrator/__init__.py` | New | `Calibrator` class with `apply(song)` method |
| `src/calibrator/presets.py` | New | Calibration presets (LUFS targets, HPF/LPF freqs, pans, sends) |
| `src/reaper_builder/__init__.py` | Modified | `_build_plugin()` — serialize `PluginDef.params` to VST slots |
| `scripts/compose.py` | Modified | Import Calibrator, call after track build, add `--no-calibrate` flag |
| `tests/test_calibrator.py` | New | Unit tests for calibrator output |
| `src/core/schema.py` | Modified | Add `calibrate: bool` flag to `SongMeta` (optional) |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| ReaEQ param serialization breaks existing .rpp | Low | Feature-gated: only when `PluginDef.params` is non-empty; zero backcompat impact |
| Ozone 12 plugins missing on some machines | Med | Fallback to Pro-Q_3/Pro-C_2/Pro-L_2 if Ozone registry lookup fails |
| Too-aggressive HPF cuts thin out full sections | Low | Conservative cutoffs: HPF 60Hz for drums, 200Hz for lead/chords; tunable via presets |
## Rollback Plan
1. Revert compose.py: remove `--no-calibrate` flag, remove calibrator import
2. Revert builder: remove params serialization in `_build_plugin()`
3. Delete `src/calibrator/`
4. Restore original `VOLUME_LEVELS`, `SEND_LEVELS`, `MASTER_VOLUME`, `master_plugins` constants
## Dependencies
- `PLUGIN_REGISTRY` entries for `ReaEQ`, `Ozone_12_Equalizer`, `Ozone_12_Dynamics`, `Ozone_12_Maximizer` (all exist)
- No new Python dependencies required
## Success Criteria
- [ ] `Calibrator.apply(song)` returns a `SongDefinition` with volume/pan/send values matching role-based presets
- [ ] Each non-return track has at least one ReaEQ plugin with HPF or LPF params set
- [ ] `--no-calibrate` flag preserves existing behavior (no calibration applied)
- [ ] Generated .rpp with calibration produces audibly cleaner mix (verified by ear)
- [ ] All 110 existing tests still pass (calibration is additive, not breaking)


@@ -0,0 +1,106 @@
# mix-calibration Specification
## Purpose
Post-processing calibrator that applies role-aware volume, EQ, stereo width, sends, and mastering chain to a `SongDefinition` before `.rpp` generation.
## Requirements
### Requirement: Calibrator Post-Processing
The system MUST provide a `Calibrator.apply(song: SongDefinition) -> SongDefinition` method that mutates and returns the song with calibrated mix settings. Calibration MUST run as a distinct step between track construction and `RPPBuilder.build()`.
#### Scenario: Happy path — full calibration
- GIVEN a complete `SongDefinition` with 7 tracks (Drumloop, Perc, 808 Bass, Chords, Lead, Clap, Pad) and 2 return tracks
- WHEN `Calibrator.apply(song)` is called
- THEN `song.tracks[].volume` matches role-based LUFS targets
- AND each non-return track has a ReaEQ plugin prepended to its `plugins` list
- AND `song.tracks[].pan` follows stereo-width rules
- AND `song.tracks[].send_level` contains calibrated reverb/delay values
- AND `song.master_plugins` contains Ozone 12 Equalizer, Dynamics, Maximizer
### Requirement: Role-Based Volumes
The system SHALL set track volumes from a preset table keyed by track role (name → role mapping). Volumes MUST be in the REAPER-compatible 0.0–1.0 range.
| Role | Volume | Target |
|------|--------|--------|
| drumloop | 0.85 | kick prominence |
| bass | 0.72 | sub-presence |
| chords | 0.78 | harmonic support |
| lead | 0.78 | melody clarity |
| clap | 0.75 | transient punch |
| pad | 0.68 | ambient depth |
| perc | 0.72 | groove feel |
#### Scenario: Unknown track role
- GIVEN a track with name not matching any preset role
- WHEN calibrated
- THEN the track's volume and pan remain unchanged (preserved as-is)
### Requirement: HPF/LPF EQ per Role
The system SHALL prepend a ReaEQ `PluginDef` to each non-return track's `plugins` list with appropriate HPF or LPF parameters. Bass tracks (808 Bass) SHALL receive LPF. All other tracks SHALL receive HPF.
#### Scenario: HPF on lead/chords/pad tracks
- GIVEN a track named "Chords", "Lead", "Pad", "Clap", "Perc", or "Drumloop"
- WHEN calibrated
- THEN a ReaEQ plugin is inserted at `plugins[0]` with param `0=1` (band enabled), `1=1` (HPF type), `2=200.0` (frequency for melodic) or `2=60.0` (drums)
#### Scenario: LPF on bass track
- GIVEN a track named "808 Bass"
- WHEN calibrated
- THEN a ReaEQ plugin is inserted at `plugins[0]` with param `0=1`, `1=0` (LPF type), `2=300.0` (frequency)
#### Scenario: Return tracks excluded
- GIVEN tracks named "Reverb" or "Delay"
- WHEN calibrated
- THEN no ReaEQ plugin is added (return tracks are skipped)
### Requirement: Stereo Width per Role
The system SHALL set track pan values to role-specific defaults.
| Role | Pan | Rationale |
|------|-----|-----------|
| drumloop | 0.0 | mono center |
| bass | 0.0 | mono sub |
| chords | +0.5 | wide right |
| lead | +0.3 | right-leaning |
| clap | -0.15 | off-center left |
| pad | -0.5 | wide left |
| perc | +0.12 | slight right |
### Requirement: Send Calibration
The system SHALL set `send_level` dict entries for reverb (index=return_track_count) and delay (index=return_track_count+1) on each non-return track.
| Role | Reverb | Delay |
|------|--------|-------|
| drumloop | 0.10 | 0.00 |
| bass | 0.05 | 0.02 |
| chords | 0.30 | 0.10 |
| lead | 0.25 | 0.15 |
| clap | 0.10 | 0.00 |
| pad | 0.40 | 0.20 |
| perc | 0.10 | 0.00 |
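Taken together, the volume, pan, and send tables above can be transcribed into one role-keyed preset table; the `(volume, pan, reverb, delay)` tuple layout is illustrative, not the actual structure of `presets.py`:

```python
# (volume, pan, reverb_send, delay_send) per role — values from the spec tables.
MIX_PRESETS: dict[str, tuple[float, float, float, float]] = {
    "drumloop": (0.85,  0.00, 0.10, 0.00),
    "bass":     (0.72,  0.00, 0.05, 0.02),
    "chords":   (0.78, +0.50, 0.30, 0.10),
    "lead":     (0.78, +0.30, 0.25, 0.15),
    "clap":     (0.75, -0.15, 0.10, 0.00),
    "pad":      (0.68, -0.50, 0.40, 0.20),
    "perc":     (0.72, +0.12, 0.10, 0.00),
}
```

A single table keeps the three requirements in sync: a role added or retuned in one place updates volume, pan, and sends together.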
### Requirement: Master Chain Upgrade
The system SHALL replace `master_plugins` with `["Ozone_12_Equalizer","Ozone_12_Dynamics","Ozone_12_Maximizer"]`. If registry lookup for any Ozone plugin fails, the system MUST fall back to `["Pro-Q_3","Pro-C_2","Pro-L_2"]`.
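The fallback rule can be sketched as a registry lookup, assuming `PLUGIN_REGISTRY` behaves like a name-keyed dict:

```python
OZONE_CHAIN = ["Ozone_12_Equalizer", "Ozone_12_Dynamics", "Ozone_12_Maximizer"]
FALLBACK_CHAIN = ["Pro-Q_3", "Pro-C_2", "Pro-L_2"]

def pick_master_chain(plugin_registry):
    # Fall back to the FabFilter chain if any Ozone entry is missing.
    if all(name in plugin_registry for name in OZONE_CHAIN):
        return OZONE_CHAIN
    return FALLBACK_CHAIN
```

All-or-nothing selection avoids a mixed chain (e.g. Ozone EQ followed by Pro-C_2) on machines with a partial iZotope install.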
### Requirement: Calibration Toggle
The system SHALL support a `--no-calibrate` CLI flag. When passed, `Calibrator.apply()` MUST NOT be called. When omitted (default), calibration MUST run. `SongMeta` MAY include an optional `calibrate: bool` field defaulting to `True`.
#### Scenario: --no-calibrate preserves existing behavior
- GIVEN `compose.py --no-calibrate -o out.rpp`
- WHEN the song is built
- THEN `Calibrator.apply()` is never invoked
- AND the generated `.rpp` matches the pre-calibration baseline


@@ -0,0 +1,30 @@
# Tasks: Automated Mix Calibration
## Phase 1: Foundation
- [x] 1.1 Create `src/calibrator/presets.py` with `VOLUME_PRESETS`, `EQ_PRESETS` (HPF/LPF freq per role), `PAN_PRESETS`, `SEND_PRESETS` dicts
- [x] 1.2 Add `calibrate: bool = True` optional field to `SongMeta` in `src/core/schema.py`
- [x] 1.3 Create `src/calibrator/__init__.py` with `Calibrator` class stub and `_resolve_role()` method (name → role key)
## Phase 2: Core Calibrator
- [x] 2.1 Implement `_calibrate_volumes(song)` — set track.volume from VOLUME_PRESETS by role; skip unknown roles
- [x] 2.2 Implement `_calibrate_pans(song)` — set track.pan from PAN_PRESETS by role
- [x] 2.3 Implement `_calibrate_sends(song)` — set track.send_level for reverb/delay return indices from SEND_PRESETS
- [x] 2.4 Implement `_calibrate_eq(song)` — prepend ReaEQ PluginDef with params dict (HPF/LPF) to track.plugins; skip return tracks
- [x] 2.5 Implement `_swap_master_chain(song)` — replace master_plugins with Ozone 12 triplet; fall back to Pro-Q_3/Pro-C_2/Pro-L_2 if Ozone not in PLUGIN_REGISTRY
- [x] 2.6 Implement `Calibrator.apply(song)` orchestrating all _calibrate_* methods, returning the mutated song
## Phase 3: Builder & Integration
- [x] 3.1 Modify `_build_plugin()` in `src/reaper_builder/__init__.py` — read `PluginDef.params` for built-in VST2 plugins and populate param_slots
- [x] 3.2 Wire calibrator into `scripts/compose.py` — import Calibrator, call `calibrator.apply(song)` after track construction, before RPPBuilder
- [x] 3.3 Add `--no-calibrate` flag to compose.py argparse; when set, skip calibrator call and SongMeta.calibrate=False
## Phase 4: Testing
- [x] 4.1 Create `tests/test_calibrator.py` — unit tests for `_resolve_role()`, each `_calibrate_*()` method against fixture SongDefinition
- [x] 4.2 Test `Calibrator.apply()` end-to-end — volumes, pans, sends, ReaEQ presence, master plugins all match presets
- [x] 4.3 Test `--no-calibrate` flag — calibrator not called, master_plugins unchanged
- [x] 4.4 Test Ozone fallback — mock empty Ozone registry entries, verify Pro-Q_3/Pro-C_2/Pro-L_2 used
- [x] 4.5 Run existing test suite — verify all 110+ tests still pass

---
# Design: presets-pack
## Technical Approach
Restructure `PLUGIN_PRESETS` to `{(plugin, role): chunks}`, add `PresetTransformer` class with per-plugin decoders (Serum=JSON, SoundToys=key=value text, Omnisphere=SynthMaster text), and thread `role` parameter through `make_plugin()``_build_plugin()`. No new dependencies — pure Python `base64`, `json`, `re`.
## Architecture Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Data structure | `dict[tuple[str,str], list[str]]``{(plugin, role): chunks}` | Avoids dict-of-dicts nesting; simpler iteration in tests |
| Transformation | Separate `PresetTransformer` class per plugin format | Serum/JSON, SoundToys/text, Omnisphere/text are different parsers; isolation = testability |
| Role threading | Optional `role` param on `make_plugin()` and `_build_plugin()` | Zero breaking changes; None = current behavior |
| Fallback chain | role → default → None | Backward compatible; existing tests don't break |
## Data Flow
```
compose.py: make_plugin("Serum_2", 0, role="bass")
→ _resolve_preset("Serum_2", "bass")
→ PLUGIN_PRESETS[("Serum_2", "bass")] ← PresetTransformer output
→ PluginDef(preset_data=bass_chunks)
reaper_builder: _build_plugin(plugin)
→ entry = PLUGIN_REGISTRY.get(resolved_name)
→ preset_data = PLUGIN_PRESETS.get((resolved_name, plugin.role))
or PLUGIN_PRESETS.get((resolved_name, "default"))
→ _build_plugin_element(display, file, uid, preset_data)
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/reaper_builder/__init__.py` | Modify | Restructure `PLUGIN_PRESETS` to `{(k,role): chunks}`; `_build_plugin()` reads `plugin.role` for lookup |
| `src/reaper_builder/preset_transformer.py` | Create | `PresetTransformer` class with `decode()`, `transform(role)`, `encode()`; per-plugin transformers |
| `src/composer/templates.py` | Modify | `_parse_vst_block()` and `_make_plugin_template()` handle new `PLUGIN_PRESETS` structure |
| `scripts/compose.py` | Modify | `make_plugin()` accepts `role` param, threads from `FX_CHAINS` key |
| `src/core/schema.py` | Modify | `PluginDef` gets optional `role: str \| None = None` field |
| `tests/test_preset_transform.py` | Create | Round-trip tests for 3 plugins × N roles |
## PresetTransformer Design
```python
class PresetTransformer:
    TRANSFORMERS: dict[str, Callable] = {
        "Serum_2": _transform_serum,
        "Decapitator": _transform_decapitator,
        "Omnisphere": _transform_omnisphere,
    }

    @staticmethod
    def derive(plugin: str, default_chunks: list[str], role: str) -> list[str]:
        transformer = PresetTransformer.TRANSFORMERS.get(plugin)
        if not transformer:
            return default_chunks  # no transform = use default unchanged
        return transformer(default_chunks, role)
```
Per-transformer functions:
- `_transform_serum(chunks, role)` — decode JSON body, modify `processor.osc.type`, `processor.filter.cutoff`, `processor.fx`
- `_transform_decapitator(chunks, role)` — decode text body, modify `Drive`, `Tone`, `Style` lines
- `_transform_omnisphere(chunks, role)` — decode SynthMaster body, modify `Atk`, `Dec`, `Filter Freq` lines
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | PresetTransformer per plugin | decode → modify → encode for each (plugin, role); verify JSON/keys changed |
| Integration | make_plugin + _build_plugin with role | Build PluginDef, verify preset_data differs per role |
| Regression | `test_make_plugin_known_key` | Existing tests pass unchanged (role=None fallback) |
| Round-trip | encode(decode(chunks)) == chunks | Each plugin × role; verify chunk count, base64 charset, structure integrity |
## Migration / Rollout
No migration required. `PluginDef.role` defaults to None. Existing callers that don't pass role continue working with `"default"` preset. Revert: flatten `PLUGIN_PRESETS` back to single-level dict, remove `role` param.
## Open Questions
None.

---
# Proposal: presets-pack
## Intent
All plugins use the SAME flat preset regardless of track role (bass/lead/chords/pad) or genre context. A Serum_2 on a bass track gets the same sound as Serum_2 on a lead track. Professional reggaeton needs role-specific timbres: deep sine 808 for bass, detuned saw for lead, warm pad for chords, evolving texture for pad. Same for FX: Decapitator on drums needs aggressive drive, on bass needs subtle warmth.
## Scope
### In Scope
- Restructure `PLUGIN_PRESETS` from flat `{plugin: [chunks]}` to role-aware `{plugin: {role: [chunks]}}`
- Create role-specific presets for plugins used in multiple roles: **Serum_2** (bass/lead), **Omnisphere** (chords/pad), **Decapitator** (drums/bass)
- Programmatically derive new presets by base64-decoding existing presets (Serum=JSON, SoundToys=key=value), modifying genre-specific parameters, re-encoding
- Update `make_plugin()` in `compose.py` and `_build_plugin()` in `__init__.py` to resolve role-aware presets
- Add fallback: if no role-specific preset exists, use existing default preset
### Out of Scope
- Creating presets from scratch in REAPER (requires GUI — can't programmatically)
- ReaScript-based preset capture (Phase 2)
- Presets for all 113 plugins — only multi-role targets initially
- Pro-Q 3 reggaeton EQ curve (no decodable format available)
## Capabilities
### New Capabilities
- `presets-pack`: Role-specific plugin preset resolution and preset data management
### Modified Capabilities
None — existing plugin resolution unchanged; backward-compatible fallback to default preset.
## Approach
**Option B — Programmatic modification of decodable presets:**
1. **Serum_2**: Decode base64 → JSON. Serum preset JSON has `component: "processor"` block with oscillator/wavetable/filter data. Create variants by modifying oscillator type (sine for bass, saw for lead), filter cutoff, envelope settings. Re-encode.
2. **Decapitator (SoundToys)**: Decode base64 → key=value text (`WIDGET = Decapitator;...`). Create "aggressive" (high Drive, Tone bright) for drums, "warm" (low Drive, Tone dark) for bass. Re-encode.
3. **Omnisphere**: Decode base64 → `SynthMaster` text block. Create "warm pad" variant with slow attack, filter modulation; "evolving texture" with movement/LFO. Re-encode.
No GUI or REAPER needed — pure Python string processing over decoded preset text.
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/reaper_builder/__init__.py` | Modified | `PLUGIN_PRESETS` restructured; `_build_plugin()` accepts role param |
| `src/composer/templates.py` | Modified | `_parse_vst_block()`, `make_plugin()` resolution updated |
| `scripts/compose.py` | Modified | `make_plugin()` passes role; `FX_CHAINS` keys used for role |
| `src/core/schema.py` | Unchanged | `PluginDef` already has `preset_data` field |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Modified preset crashes plugin on load | Low | Each variant derived from working ground-truth preset; only tweak known-safe params |
| Base64 decode/re-encode breaks binary integrity | Low | Round-trip test per plugin: decode → encode → bytes equal |
| Omnisphere text format undocumented | Med | Preserve structure, only modify known `ATTRIBUTE` values visible in decoded text |
## Rollback Plan
Revert `PLUGIN_PRESETS` to flat dict. Remove role param from `_build_plugin()` and `make_plugin()`. Existing tests verify preset injection still works.
## Dependencies
- `data/sample_index.json` (independent — not affected)
- Existing ground-truth presets in `PLUGIN_PRESETS` (source material for variants)
## Success Criteria
- [ ] `python scripts/compose.py --bpm 99 --key Am` produces .rpp where Serum_2 on bass track has different preset data than Serum_2 on lead track
- [ ] 110 existing tests continue to pass (backward-compatible fallback)
- [ ] Round-trip test: decode → modify → encode produces valid base64 matching the original chunk-length structure
- [ ] At least 3 (plugin, role) combinations have distinct preset variants

---
# presets-pack Specification
## Purpose
Role-aware plugin preset system. Different track roles (bass/lead/chords/pad) get distinct preset data for the same plugin, replacing the current flat `{plugin: [chunks]}` lookup.
## Requirements
### Requirement: Role-Aware Preset Structure
`PLUGIN_PRESETS` MUST be restructured from `dict[str, list[str]]` (plugin → chunks) to `dict[str, dict[str, list[str]]]` (plugin → {role → chunks}). The `"default"` role key SHALL contain the original unmodified preset. Lookup SHALL fall back to `"default"` when a role has no specific variant.
#### Scenario: Role-specific preset found
- GIVEN `PLUGIN_PRESETS["Serum_2"]["bass"]` and `["lead"]` exist
- WHEN resolving serum preset with `role="bass"`
- THEN bass-specific chunks are returned
- WHEN resolving with `role="lead"`
- THEN lead-specific chunks are returned
#### Scenario: Fallback to default
- GIVEN `PLUGIN_PRESETS["Decapitator"]["default"]` exists but `["pad"]` does not
- WHEN resolving Decapitator preset with `role="pad"`
- THEN the `"default"` preset data is returned
### Requirement: Preset Transformation Pipeline
The system SHALL provide a `PresetTransformer` that base64-decodes preset data, modifies role-specific parameters, and re-encodes. Each supported plugin MUST have its own decoder function keyed by plugin name.
| Plugin | Format | Modifications per role |
|--------|--------|----------------------|
| Serum_2 | base64 → JSON | Osc type (sine=0→bass, saw=1→lead), filter cutoff, FX bypass |
| Decapitator | base64 → key=value | Drive high→drums, Drive low→bass, Tone bright→drums, Tone dark→bass |
| Omnisphere | base64 → SynthMaster | Attack slow→pad, filter mod→pad, LFO rate up→pad |
#### Scenario: Serum bass variant
- GIVEN Serum_2 default preset decoded as JSON
- WHEN transformed for `role="bass"`
- THEN oscillator type set to sine (0), filter cutoff ≤ 200Hz
#### Scenario: Decapitator drums variant
- GIVEN Decapitator default preset decoded as key=value text
- WHEN transformed for `role="drums"`
- THEN `Drive=0.8`, `Tone=0.7`, `Style=A`
### Requirement: Round-Trip Integrity
Each preset transform MUST produce valid base64 output that decodes back to equivalent content. A round-trip test per (plugin, role) combination SHALL verify: `encode(decode(chunks)) == original_chunks`.
#### Scenario: Serum round-trip
- GIVEN Serum_2 preset chunks `[header, json_body, ...]`
- WHEN decoded, modified, re-encoded
- THEN all chunks maintain original length and base64 character set
- AND JSON body is valid JSON
#### Scenario: Decapitator round-trip
- GIVEN Decapitator preset chunks `[header, body, ...]`
- WHEN decoded, modified, re-encoded
- THEN chunk count matches, first chunk (header) unchanged
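The integrity check itself reduces to one identity: for canonically padded base64, decoding then re-encoding reproduces the chunk exactly. A minimal sketch (`roundtrip` is a hypothetical test helper):

```python
import base64

def roundtrip(chunks: list[str]) -> list[str]:
    # Decode then re-encode each chunk; canonical base64 survives unchanged.
    return [base64.b64encode(base64.b64decode(c)).decode("ascii")
            for c in chunks]
```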
### Requirement: Role Propagation Through Pipeline
`make_plugin()` in `compose.py` and `_build_plugin()` in `__init__.py` MUST accept an optional `role: str | None` parameter. When role is provided, preset lookup SHALL use role-aware structure. `FX_CHAINS` layout is unchanged — role is the FX_CHAINS key (e.g., "bass", "lead").
#### Scenario: Bass track gets bass preset
- GIVEN `FX_CHAINS["bass"] = ["Serum_2", "Decapitator", ...]`
- WHEN `make_plugin("Serum_2", 0, role="bass")` is called
- THEN preset_data resolved from `PLUGIN_PRESETS["Serum_2"]["bass"]`
#### Scenario: Unknown plugin with role
- GIVEN plugin not in PLUGIN_PRESETS
- WHEN called with any role
- THEN returns PluginDef with `preset_data=None` (no crash)

---
# Tasks: presets-pack
## Phase 1: Foundation — PresetTransform & Schema
- [x] 1.1 Add `role: str = ""` to `PluginDef` in `src/core/schema.py`
- [x] 1.2 Create `src/reaper_builder/preset_transformer.py` with `PresetTransformer` class + `_transform_serum()`, `_transform_decapitator()`, `_transform_omnisphere()`
- [x] 1.3 Restructure `PLUGIN_PRESETS` in `src/reaper_builder/__init__.py` to `{(k, role): chunks}` with `""` key for original data
- [x] 1.4 Run `PresetTransformer.derive()` for each (plugin, role) combo and populate role entries in `PLUGIN_PRESETS`
## Phase 2: Thread role through pipeline
- [x] 2.1 Update `make_plugin()` in `scripts/compose.py` — add `role: str = ""` param, pass to `PluginDef` constructor
- [x] 2.2 Update `_build_plugin()` in `src/reaper_builder/__init__.py` — resolve via `_resolve_preset(key, plugin.role)` with `""` fallback
- [x] 2.3 Update `make_plugin()` call sites in `compose.py` — pass `role` from `FX_CHAINS` key (bass/lead/chords/pad/drumloop/perc/clap)
- [x] 2.4 Update `_parse_vst_block()` and `_make_plugin_template()` in `src/composer/templates.py` — handle new tuple-key structure in preset lookup
## Phase 3: Testing & Verification
- [x] 3.1 Write `tests/test_preset_transform.py` — 15 tests covering PresetTransformer.derive(), role-aware structure, integration, backward compat
- [x] 3.2 Write test: `make_plugin("Serum_2", 0, role="bass")` and `role="lead"` both return preset_data (MVP: same data, structure verified)
- [x] 3.3 Write test: unknown role falls back to `""` (default) preset via `_resolve_preset()`
- [x] 3.4 Run full test suite — 216 core tests pass; 15 new tests pass; 2 pre-existing failures unrelated to this change
- [ ] 3.5 Run `python scripts/compose.py --bpm 99 --key Am` — blocked by pre-existing `_kick_cache` NameError in compose.py (sidechain feature in-progress). Verified code structure is correct via unit tests.

---
# Design: Section Energy Curve
## Technical Approach
Add three layers of dynamics: (1) which tracks play per section, (2) MIDI velocity scaling per section, (3) clip-level volume multipliers. Wiring already exists — `SectionDef` has `velocity_mult`/`vol_mult` fields that are never populated. Add the wiring and a centralized activity matrix.
## Architecture Decisions
| Decision | Choice | Tradeoff | Reason |
|----------|--------|----------|--------|
| Activity source of truth | Module-level `TRACK_ACTIVITY` dict | Not configurable per-song (yet) | Proposal explicitly defines it as constant; CLI flag is deferred |
| Section rename | `build``pre-chorus` in all references | Requires test fixture updates | Professional reggaeton convention; no external consumers of "build" |
| Clip volume | `D_VOL` on ITEM (not track fader) | Per-clip, not per-section | Track fader already used for static mix; D_VOL is REAPER-native item gain |
| MIDI velocity | Scale at note creation (builders), not in RPPBuilder | No post-processing needed | Velocity is a MIDI property best set when notes are created |
## Data Flow
```
build_section_structure()
└─ reads SECTIONS → creates SectionDef(name, bars, velocity_mult, vol_mult)
├─→ TRACK_ACTIVITY (module-level dict)
│ └─ _section_active(section, role) → bool
└─→ 7 track builders
├─ check _section_active() → skip/mute inactive roles
├─ multiply MIDI note velocity × section.velocity_mult
└─ set clip.vol_mult ← section.vol_mult
└─→ RPPBuilder._build_clip()
├─ audio: emit D_VOL if vol_mult ≠ 1.0
└─ MIDI: notes already velocity-scaled
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/core/schema.py` | Modify | Add `vol_mult: float = 1.0` to ClipDef |
| `scripts/compose.py` | Modify | Add TRACK_ACTIVITY dict, `_section_active()` helper, set multipliers in `build_section_structure()`, rename build→pre-chorus, refactor 7 builders |
| `src/reaper_builder/__init__.py` | Modify | `_build_clip()` emits D_VOL for audio clips with vol_mult≠1.0 |
| `tests/test_section_builder.py` | Modify | Add tests for multiplier population per section type |
| `tests/test_compose_integration.py` | Modify | Update section-aware tests |
| `tests/test_reaper_builder.py` | Modify | Add D_VOL emission tests |
## Interfaces / Contracts
```python
# New: TRACK_ACTIVITY dict in compose.py
TRACK_ACTIVITY: dict[str, dict[str, bool]] = {
    "intro": {"drumloop": True, "perc": False, "bass": False, ...},
    "verse": {"drumloop": True, "perc": True, "bass": True, ...},
    "pre-chorus": {...},
    "chorus": {...},  # all True
    "bridge": {"drumloop": True, "chords": True, "pad": True, ...},
    "final": {"drumloop": True, "bass": True, "chords": True, "lead": True, "pad": True},
    "outro": {},  # all False
}

# New helper
def _section_active(section: SectionDef, role: str, activity: dict) -> bool:
    return activity.get(section.name, {}).get(role, False)

# Modified: build_section_structure() sets multipliers
SECTION_MULTIPLIERS = {
    "intro": (0.6, 0.70),
    "verse": (0.7, 0.85),
    "pre-chorus": (0.85, 0.95),
    "chorus": (1.0, 1.00),
    "bridge": (0.6, 0.75),
    "final": (1.0, 1.00),
    "outro": (0.4, 0.60),
}

# Modified: ClipDef gains vol_mult
@dataclass
class ClipDef:
    ...
    vol_mult: float = 1.0
```
## Testing Strategy
| Layer | What to Test | Approach |
|-------|-------------|----------|
| Unit | SectionDef multiplier population | `test_section_builder.py` — verify velocity_mult/vol_mult by section type |
| Unit | `_section_active()` helper | Edge cases: unknown section, unknown role, all known sections |
| Unit | ClipDef.vol_mult default | `test_core_schema.py` — default is 1.0 |
| Integration | D_VOL in RPP output | `test_reaper_builder.py` — audio clip with vol_mult≠1.0 emits D_VOL, default vol_mult=1.0 emits none |
| Integration | Builders respect activity | `test_compose_integration.py` — intro has no bass/chords/lead, chorus has all |
| Integration | Section rename | Grep all `.py` for "build" section name, CI runs full suite (110 tests) |
## Migration / Rollout
No migration required. `vol_mult` defaults to 1.0 (no behavioral change). Section rename is cosmetic. Revert commit to undo.
## Open Questions
None.

---
# Proposal: Section Energy Curve
## Intent
All 9 arrangement sections sound identical — full-band at static volume. Professional reggaeton builds energy across sections via sparse-to-dense track layering, velocity variation, and section-level volume riding. This change adds the missing dynamics.
## Scope
### In Scope
- Centralized `TRACK_ACTIVITY` dict: which track roles play in which sections
- `build_section_structure()` sets `velocity_mult` and `vol_mult` per section type
- Unified `_section_active()` helper — single source of truth for section activity
- All 7 track builders refactored to check centralized activity + apply `velocity_mult`
- RPPBuilder extended to apply per-clip `vol_mult` (audio items get `D_VOL`, MIDI items get velocity scaling)
- Rename `build` section to `pre-chorus` (professional reggaeton convention)
- Update integration tests to match new section behavior
### Out of Scope
- Volume automation envelopes (REAPER `VOLENV2`) — deferred
- Transition FX generation (risers, impacts, filtered sweeps)
- Per-section filter automation (AutoFilter cutoff sweeps)
- Section scene names in REAPER project — still flat arrangement
## Capabilities
### Modified Capabilities
- `section-structure`: SectionDef `velocity_mult` and `vol_mult` now populated per section type instead of defaulting to 1.0
- `track-generation`: All builders consume centralized activity matrix + section multipliers instead of ad-hoc section name checks
### New Capabilities
- `section-activity`: Centralized activity matrix defining which track roles are active per section type
- `clip-volume`: ClipDef receives optional `vol_mult` field; RPPBuilder applies it to item `D_VOL` (audio) or velocity scaling (MIDI)
## Approach
**Principle**: Schema fields (`velocity_mult`, `vol_mult`) already exist in `SectionDef`. The bug is they're never populated or consumed. Add the wiring.
1. **Activity matrix**`TRACK_ACTIVITY` dict in compose.py maps `section_type → {role: bool}`. Section types: `intro`, `verse`, `pre-chorus`, `chorus`, `bridge`, `final`, `outro`.
2. **Section multipliers**`build_section_structure()` sets `velocity_mult` (controls note velocity) and `vol_mult` (controls clip gain) based on section type:
| Section | velocity_mult | vol_mult |
|---------|--------------|----------|
| intro | 0.6 | 0.70 |
| verse | 0.7 | 0.85 |
| pre-chorus | 0.85 | 0.95 |
| chorus | 1.0 | 1.00 |
| bridge | 0.6 | 0.75 |
| final | 1.0 | 1.00 |
| outro | 0.4 | 0.60 |
3. **Builder refactor** — Replace ad-hoc `if section.name in ("chorus","final")` with `_section_active(section, role, activity)` check. Multiply MIDI velocities by `section.velocity_mult`.
4. **RPPBuilder**`_build_item()` adds `D_VOL` for audio clips when `clip.vol_mult != 1.0`. MIDI clips already get velocity-scaled notes from step 3.
5. **Section rename**`build``pre-chorus` in `SECTIONS` and all references (`DRUMLOOP_ASSIGNMENTS`, builder filters). Existing section name "build" only appears in compose.py SECTIONS — no external consumers.
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `scripts/compose.py` | Modified | Add `TRACK_ACTIVITY`, `_section_active()`, update `build_section_structure()`, refactor all 7 builders, rename build→pre-chorus |
| `src/core/schema.py` | Modified | Add `vol_mult` field to `ClipDef` (optional, default 1.0) |
| `src/reaper_builder/__init__.py` | Modified | `_build_item()` applies `D_VOL` from `clip.vol_mult` |
| `tests/test_compose_integration.py` | Modified | Update section name references (build→pre-chorus), add activity matrix tests |
| `tests/test_section_builder.py` | Modified | Add `velocity_mult`/`vol_mult` population tests |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| RPP `D_VOL` not recognized by REAPER | Low | REAPER .rpp spec documents D_VOL on ITEM; test with actual REAPER load |
| Section rename breaks test fixtures | Med | Grep all `.py` for "build" section name; CI catches breakage |
| Activity matrix too strict — creative users want full band in bridge | Low | Activity matrix is a constant at file top — easy to edit; could be CLI flag later |
## Rollback Plan
Revert commit. No schema migrations needed — `vol_mult` on ClipDef defaults to 1.0 (zero behavioral change if not set). Section rename is cosmetic in output RPP.
## Dependencies
None — no new packages, no external APIs.
## Success Criteria
- [ ] All sections sound audibly different (sparse intro → dense chorus)
- [ ] Drums + pad only in intro (no bass, no lead, no chords)
- [ ] Full band in chorus (all 7 tracks active)
- [ ] Velocity differences between verse (soft) and chorus (hard)
- [ ] 110 existing tests still pass
- [ ] `.rpp` output opens in REAPER without errors

---
# Delta Specs: Section Energy Curve
## ADDED Requirements — section-activity
### Requirement: Centralized Activity Matrix
The system MUST provide a `TRACK_ACTIVITY` dict mapping `section_type → {role: bool}` as the single source of truth for which track roles play in each section. Section types: `intro`, `verse`, `pre-chorus`, `chorus`, `bridge`, `final`, `outro`. Roles: `drumloop`, `perc`, `bass`, `chords`, `lead`, `clap`, `pad`.
| Section | drumloop | perc | bass | chords | lead | clap | pad |
|---------|----------|------|------|--------|------|------|-----|
| intro | true | - | - | - | - | - | - |
| verse | true | true | true | true | - | - | - |
| pre-chorus | true | true | true | true | - | - | true |
| chorus | true | true | true | true | true | true | true |
| bridge | true | - | - | true | - | - | true |
| final | true | - | true | true | true | - | true |
| outro | - | - | - | - | - | - | - |
#### Scenario: Intro is sparse
- GIVEN section_type=`intro`
- WHEN `_section_active("intro", "bass", activity)` is called
- THEN it returns `False`
- AND only `drumloop` returns `True`
#### Scenario: Chorus is full band
- GIVEN section_type=`chorus`
- WHEN `_section_active("chorus", "lead", activity)` is called
- THEN it returns `True`
- AND all 7 roles return `True`
### Requirement: Section Activity Helper
The system MUST provide `_section_active(section: SectionDef, role: str, activity: dict) -> bool` that returns whether a role is active, defaulting to `False` for unknown section/role.
#### Scenario: Unknown section returns False
- GIVEN section_type=`xyz` not in TRACK_ACTIVITY
- WHEN `_section_active(section, "bass", activity)` is called
- THEN it returns `False`
---
## ADDED Requirements — clip-volume
### Requirement: ClipDef Volume Multiplier
`ClipDef` MUST have an optional `vol_mult` field (float, default 1.0). When `vol_mult != 1.0`, the RPP builder SHALL apply it:
- Audio clips: emit `D_VOL` attribute on ITEM
- MIDI clips: scale all `MidiNote.velocity` by `vol_mult`
#### Scenario: Audio clip with vol_mult emits D_VOL
- GIVEN ClipDef(audio_path="kick.wav", vol_mult=0.7)
- WHEN RPPBuilder writes the ITEM
- THEN the ITEM includes `D_VOL 0.7`
#### Scenario: MIDI clip with vol_mult scales velocity
- GIVEN ClipDef(midi_notes=[MidiNote(velocity=80)], vol_mult=0.5)
- WHEN clip is processed by RPPBuilder
- THEN emitted velocity is 40
### Requirement: RPPBuilder D_VOL Emission
`_build_clip()` MUST append `["D_VOL", str(clip.vol_mult)]` to the ITEM element when `clip.vol_mult != 1.0` and the clip is audio.
#### Scenario: Default vol_mult=1.0 emits no D_VOL
- GIVEN ClipDef(audio_path="loop.wav") with default vol_mult=1.0
- WHEN RPPBuilder writes the ITEM
- THEN no `D_VOL` line is emitted
---
## MODIFIED Requirements — section-structure
### Requirement: SectionDef Multipliers Per Section Type
`build_section_structure()` MUST populate `SectionDef.velocity_mult` and `vol_mult` based on section type, not default to 1.0. Multipliers SHALL follow this table:
| Section | velocity_mult | vol_mult |
|---------|--------------|----------|
| intro | 0.6 | 0.70 |
| verse | 0.7 | 0.85 |
| pre-chorus | 0.85 | 0.95 |
| chorus | 1.0 | 1.00 |
| bridge | 0.6 | 0.75 |
| final | 1.0 | 1.00 |
| outro | 0.4 | 0.60 |
(Previously: velocity_mult and vol_mult always defaulted to 1.0)
#### Scenario: Intro has low velocity and volume
- GIVEN `build_section_structure()` is called
- WHEN the intro section is created
- THEN `velocity_mult=0.6` and `vol_mult=0.70`
#### Scenario: Chorus has full velocity and volume
- GIVEN `build_section_structure()` is called
- WHEN the chorus section is created
- THEN `velocity_mult=1.0` and `vol_mult=1.0`
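The lookup this requirement describes can be sketched as follows (`multipliers_for` is a hypothetical helper; the values are the table above, and unknown sections keep the old 1.0 defaults):

```python
# (velocity_mult, vol_mult) per section type, from the table above.
SECTION_MULTIPLIERS: dict[str, tuple[float, float]] = {
    "intro": (0.6, 0.70), "verse": (0.7, 0.85), "pre-chorus": (0.85, 0.95),
    "chorus": (1.0, 1.00), "bridge": (0.6, 0.75), "final": (1.0, 1.00),
    "outro": (0.4, 0.60),
}

def multipliers_for(section_name: str) -> tuple[float, float]:
    # Unknown section types fall back to the previous always-1.0 behavior.
    return SECTION_MULTIPLIERS.get(section_name, (1.0, 1.0))
```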
---
## MODIFIED Requirements — track-generation
### Requirement: Builders Use Centralized Activity + Section Multipliers
All 7 track builders MUST replace ad-hoc section name checks with calls to `_section_active()`. All builders MUST multiply MIDI velocities by `section.velocity_mult`. The `build` section SHALL be renamed to `pre-chorus` in `SECTIONS` and all references.
(Previously: builders used inline `if section.name in (...)` checks and `section.energy` for velocity; section was named `build`)
#### Scenario: Chords not generated in intro
- GIVEN `build_chords_track()` with sections including intro
- WHEN processing the intro section
- THEN `_section_active("intro", "chords", ...)` returns `False`
- AND no clip is created for that section
#### Scenario: Bass velocity scaled by section multiplier
- GIVEN `build_bass_track()` with a verse section (velocity_mult=0.7)
- WHEN MIDI notes are created
- THEN each note velocity is multiplied by 0.7
#### Scenario: Section rename reflects in output
- GIVEN SECTIONS tuple has `("pre-chorus", 4, 0.7, False)`
- WHEN `build_section_structure()` is called
- THEN the section is named `pre-chorus` not `build`

---
# Tasks: Section Energy Curve
## Phase 1: Schema + Foundation
- [x] 1.1 Add `vol_mult: float = 1.0` field to `ClipDef` in `src/core/schema.py`
- [x] 1.2 Add `TRACK_ACTIVITY` dict and `_section_active()` helper to `scripts/compose.py`
- [x] 1.3 Add `SECTION_MULTIPLIERS` dict and update `build_section_structure()` to set `velocity_mult` and `vol_mult` per section type
- [x] 1.4 Rename `build``pre-chorus` in `SECTIONS` and `DRUMLOOP_ASSIGNMENTS` in `scripts/compose.py`
## Phase 2: Builder Refactor
- [x] 2.1 Refactor `build_drumloop_track()` — use `_section_active()` instead of `DRUMLOOP_ASSIGNMENTS` dict lookup
- [x] 2.2 Refactor `build_perc_track()` — replace `if section.name in (...)` with `_section_active()`
- [x] 2.3 Refactor `build_bass_track()` — replace `section.energy` with `section.velocity_mult` for velocity calc
- [x] 2.4 Refactor `build_chords_track()` — use `_section_active()` for section check, `velocity_mult` for velocity
- [x] 2.5 Refactor `build_lead_track()` — use `_section_active()` for section check, `velocity_mult` for velocity
- [x] 2.6 Refactor `build_clap_track()` — use `_section_active()` instead of `section.name.startswith(...)`
- [x] 2.7 Refactor `build_pad_track()` — use `_section_active()` for section check, `velocity_mult` for velocity
## Phase 3: RPPBuilder D_VOL Emission
- [x] 3.1 Update `_build_clip()` in `src/reaper_builder/__init__.py` to emit `D_VOL` when `clip.vol_mult != 1.0` and clip is audio
- [x] 3.2 Update `_build_midi_source()` to scale notes by `clip.vol_mult` (post-processing fallback)
## Phase 4: Tests
- [x] 4.1 Add `test_build_section_structure_sets_multipliers` to `tests/test_section_builder.py` — verify per-section velocity_mult/vol_mult
- [x] 4.2 Add `test_section_active_helper` — edge cases: unknown section, unknown role, all known combos
- [x] 4.3 Add `test_clipdef_vol_mult_default` to `tests/test_core_schema.py`
- [x] 4.4 Add `test_dvol_emission` to `tests/test_reaper_builder.py` — audio clip vol_mult≠1.0 emits D_VOL, default vol_mult=1.0 does not
- [x] 4.5 Update `test_compose_integration.py` — verify sparse intro (no bass/chords/lead) vs dense chorus, section rename
- [x] 4.6 Run full test suite (167 tests) — 161 pass, 6 pre-existing failures in test_chords.py (unrelated)

---
# Design: 808 Bass Sidechain Ducking
## Technical Approach
Extend `ClipDef` with `midi_cc: list[CCEvent]`, inject kick positions from `DrumLoopAnalyzer` into `build_bass_track()`, and modify `_build_midi_source()` to emit CC E-lines interleaved with notes. Pure MIDI — zero plugin or REAPER-specific features required.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| CC representation | `dataclass CCEvent(controller, time, value)` | Dict/reuse MidiNote | Controller field orthogonal to pitch; typed dataclass catches errors at import time |
| CC in _build_midi_source | Sort `notes+cc` by time, single pass | Separate CC loop after notes | Single sorted pass guarantees correct delta-encoding; avoids cursor reset bugs |
| Kick cache lifetime | Module-level `dict[str, list[float]]` in compose.py | per-function lru_cache | Drumloop reused across sections; WAV path is natural stable key |
| Duck shape constants | `_CC11_DIP=50, _CC11_HOLD=0.02, _CC11_RELEASE=0.18` | Configuration file | 3 constants — config file is overkill; easy to change in-code |
| DrumLoopAnalyzer integration | Call `analyze()` once per unique WAV path | Per-section analysis | ~1s per analysis; caching avoids N×1s for N sections |
## Data Flow
```
drumloop WAV
→ DrumLoopAnalyzer.analyze() → transient_positions("kick")
→ filter confidence ≥ 0.6 → convert seconds→beats via bpm
→ _get_kick_cache() returns list[float]
→ build_bass_track(sections, offsets, key_root, key_minor, kick_cache)
→ per section: filter kicks within [clip_start, clip_end] beats
→ per kick in range: CCEvent(11, kick_t, 50), CCEvent(11, kick_t+0.02, 50), CCEvent(11, kick_t+0.18, 127)
→ ClipDef(..., midi_cc=[...])
→ RPPBuilder._build_midi_source()
→ merge notes+cc, sort by time → emit E-lines delta-encoded
```
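The per-kick CC generation step in the flow above can be sketched as a pure helper (a sketch: `duck_envelope` is an illustrative name, with the `CCEvent` shape and `_CC11_*` constants copied from this design):

```python
from dataclasses import dataclass


@dataclass
class CCEvent:
    controller: int  # 11 = Expression (CC11)
    time: float      # beats from clip start
    value: int       # 0-127


_CC11_DIP = 50
_CC11_HOLD = 0.02
_CC11_RELEASE = 0.18


def duck_envelope(kick_beats, clip_start, clip_end):
    """Emit a dip/hold/release CC11 triple for every kick inside the clip.

    Kick times arrive in song-absolute beats and are converted to
    clip-relative beats; kicks outside [clip_start, clip_end) are skipped.
    """
    events = []
    for kick in kick_beats:
        if not (clip_start <= kick < clip_end):
            continue
        t = kick - clip_start
        events.append(CCEvent(11, t, _CC11_DIP))               # instant dip
        events.append(CCEvent(11, t + _CC11_HOLD, _CC11_DIP))  # hold through transient
        events.append(CCEvent(11, t + _CC11_RELEASE, 127))     # release complete
    return events
```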
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/core/schema.py` | Modify | Add `CCEvent` dataclass; add `midi_cc: list[CCEvent] = field(default_factory=list)` to `ClipDef` |
| `scripts/compose.py` | Modify | Add `_KICK_CONFIDENCE_THRESHOLD`, `_CC11_*` constants; add `_get_kick_cache()` function; modify `build_bass_track()` signature and CC generation; update `main()` to build kick cache |
| `src/reaper_builder/__init__.py` | Modify | Merge `clip.midi_notes + clip.midi_cc` sorted by time in `_build_midi_source()`; emit `E delta B0 0B {value:02x}` for CC events |
## Interfaces / Contracts
```python
# src/core/schema.py — new dataclass
@dataclass
class CCEvent:
    controller: int  # 11 = Expression (CC11)
    time: float      # beats from clip start
    value: int       # 0-127

# ClipDef — new field
midi_cc: list[CCEvent] = field(default_factory=list)

# compose.py — new function
def _get_kick_cache(drumloop_paths: list[str], bpm: float) -> dict[str, list[float]]:
    """Analyze drumloops, return {path: [kick_time_beats]}."""
```
## E-line Encoding Detail
Current: `E {delta_ticks} {status} {data1} {data2}`
Note on: `E 480 90 3c 50` (note 60, vel 80, delta=480 ticks)
Note off: `E 960 80 3c 00`
CC11: `E 0 B0 0B 32` (status 0xB0 = CC on channel 1, controller 11 = 0x0B, value 50 = 0x32)
Merging: sort `[(n.start, "n", note), (c.time, "c", cc), ...]` by time. CC events contribute zero to cursor (no duration — delta-only).
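The merge-and-delta rule above can be sketched end to end (a sketch, assuming 960 ticks per beat as in the examples; the tuple shapes stand in for the real `MidiNote`/`CCEvent` objects, and hex casing may differ from REAPER's actual output):

```python
PPQ = 960  # ticks per beat, matching the E-line examples above


def emit_e_lines(notes, cc_events):
    """notes: (start_beats, dur_beats, pitch, vel); cc_events: (time_beats, controller, value).

    Expand notes into on/off pairs, merge with CC events, sort by absolute
    time, then delta-encode: each E-line carries ticks since the prior event.
    """
    events = []  # (abs_beats, status, data1, data2)
    for start, dur, pitch, vel in notes:
        events.append((start, 0x90, pitch, vel))        # note on
        events.append((start + dur, 0x80, pitch, 0))    # note off
    for time, controller, value in cc_events:
        events.append((time, 0xB0, controller, value))  # CC, zero duration
    events.sort(key=lambda e: e[0])

    lines, cursor = [], 0
    for beats, status, d1, d2 in events:
        ticks = round(beats * PPQ)
        lines.append(f"E {ticks - cursor} {status:02x} {d1:02x} {d2:02x}")
        cursor = ticks
    return lines
```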
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | `CCEvent` dataclass | Round-trip serialization, default values |
| Unit | `_build_midi_source` CC emission | Feed `ClipDef` with CC events, parse output for `B0 0B` lines |
| Integration | `build_bass_track` with kick cache | Mock `DrumLoopAnalyzer`, verify `midi_cc` populated |
| E2E | Full pipeline with real drumloop | Generate .rpp, grep for `B0 0B` in output, verify in REAPER |
## Migration / Rollout
No migration required. `midi_cc` defaults to empty list — all existing code paths unchanged. One-commit revert: remove `midi_cc` field, revert builder merge, delete `_get_kick_cache()`.
## Open Questions
None.


@@ -0,0 +1,101 @@
# Proposal: 808 Bass Sidechain Ducking
## Intent
808 bass and kick drum overlap in low frequencies with zero separation. Professional reggaeton uses sidechain-style ducking — bass dips when kick hits — creating the "pumping" feel and preventing low-frequency mud. Currently `build_bass_track()` generates static-velocity MIDI notes with no awareness of the drumloop's kick pattern.
## Scope
### In Scope
- Pre-analyze drumloop WAV files to extract kick transient positions via `DrumLoopAnalyzer`
- Cache kick beat-positions per drumloop path (same file reused across sections)
- Generate MIDI CC11 (Expression) events on bass clips at kick hit positions
- Duck shape: instantaneous drop to CC11≈50, 80ms release ramp to CC11=127
- `ClipDef` schema extended with `midi_cc: list[CCEvent]` field
- `RPPBuilder._build_midi_source()` emits CC E-lines interleaved with Note events
### Out of Scope
- Track-level volume automation envelopes (`VOLENV2`) — complex binary encoding, deferred
- ReaComp-sidechain routing via ReaScript — Phase 2 enhancement only
- DrumLoopAnalyzer integration at composition time (not pre-cached) — deferred to Phase 2
- Ducking for non-bass tracks (chords, lead, pad)
- User-configurable duck depth/shape — constants only
## Capabilities
### New Capabilities
- `midi-cc-events`: MIDI CC event emission in `.rpp` source — CC11 Expression events interleaved with notes in E-line stream
- `kick-detection-cache`: `DrumLoopAnalyzer` tied into composition pipeline; kick positions cached per drumloop WAV path
### Modified Capabilities
- `bass-generation`: `build_bass_track()` accepts kick position data and generates per-note velocity ducking OR CC11 events synchronized to kick hits
- `rpp-clip-encoding`: `_build_midi_source()` emits `E B0 0B xx` lines alongside Note On/Off
## Approach
**Principle**: MIDI CC11 (Expression) is the simplest `.rpp`-native sidechain. No REAPER-specific features, no binary envelope encoding, no ReaScript bridge. Pure MIDI standard — works with any synth (Serum 2 confirmed).
**Data flow**:
```
Drumloop WAV
→ DrumLoopAnalyzer.analyze() → transient_positions("kick")
→ beat-positions cache (dict[str, list[float]])
→ build_bass_track(sections, offsets, key_root, key_minor, kick_cache={})
→ generates CCEvent objects {controller=11, time, value}
→ ClipDef.midi_cc = [...]
→ RPPBuilder._build_midi_source() emits E-lines
```
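The cache step above can be sketched as follows (a sketch: the `analyze` callable is a hypothetical stand-in for `DrumLoopAnalyzer.transient_positions("kick")`, assumed to yield `(seconds, confidence)` pairs):

```python
KICK_CONFIDENCE_THRESHOLD = 0.6


def build_kick_cache(drumloop_paths, bpm, analyze):
    """Return {path: [kick_beats]}, analyzing each unique WAV only once.

    `analyze(path)` is a stand-in for the DrumLoopAnalyzer call, returning
    [(time_seconds, confidence), ...] for detected kick transients.
    """
    cache = {}
    beats_per_second = bpm / 60.0
    for path in drumloop_paths:
        if path in cache:
            continue  # cache hit: skip re-analysis of the same WAV
        cache[path] = [
            t * beats_per_second
            for t, conf in analyze(path)
            if conf >= KICK_CONFIDENCE_THRESHOLD  # drop low-confidence hits
        ]
    return cache
```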
**CC11 ducking shape per kick hit** (all times in beats relative to clip start):
| Offset from kick | CC11 Value | Description |
|-----------------|------------|-------------|
| kick_time | 50 | Instant dip (~-9dB) |
| kick_time + 0.02| 50 | Hold through transient |
| kick_time + 0.18| 127 | Release complete (80ms ≈ 0.16 beats) |
**Key decision — MIDI CC11 vs alternatives**:
| Option | Verdict | Why |
|--------|---------|-----|
| **A: MIDI CC11 (Expression)** | ✅ Chosen | `.rpp` MIDI source format supports `E B0 0B xx` lines. Serum 2, most synths respond. Trivial builder change. |
| B: Track volume envelope (VOLENV2) | ❌ Rejected | Binary/chunk encoding in `.rpp` — fragile, hard to debug, no benefit over CC11 for this use case. |
| C: ReaScript ReaComp sidechain | ⏸️ Deferred | Works only in Phase 2 with REAPER running. Use as future enhancement for non-MIDI audio basses. |
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/core/schema.py` | Modified | Add `CCEvent` dataclass (`controller`, `time`, `value`); add `midi_cc: list[CCEvent]` to `ClipDef` |
| `scripts/compose.py` | Modified | Add `_get_kick_cache()`, pass to `build_bass_track()`, generate CC11 events in bass clips |
| `src/reaper_builder/__init__.py` | Modified | `_build_midi_source()` interleaves CC events into E-line stream |
| `src/composer/drum_analyzer.py` | Unchanged | Already exports `transient_positions("kick")` — zero changes needed |
| `tests/test_compose_integration.py` | Modified | Verify CC events present in bass clips, correct CC11 values at kick positions |
| `tests/test_reaper_builder.py` | Modified | Verify `_build_midi_source()` emits `B0 0B` E-lines |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Synth doesn't respond to CC11 | Low | Serum 2, Omnisphere, Diva all support it. Add `_CC11_VOLUME_MIN` constant for easy disable (set to 127 = no ducking). |
| DrumLoopAnalyzer misclassifies kick transients | Med | Only use transients with `confidence > 0.6`; add `KICK_CONFIDENCE_THRESHOLD = 0.6` constant. |
| CC events overlap MIDI notes in E-line stream | Low | Sort all events (notes + CC) by absolute time; REAPER E-lines are monotonic delta-encoded. |
## Rollback Plan
Delete `midi_cc` from `ClipDef` and revert builder to skip CC events. Remove `_get_kick_cache()` from compose.py. No schema migrations needed — `midi_cc` defaults to empty list (zero behavioral change). One-commit revert.
## Dependencies
- `librosa` (already a project dependency via `drum_analyzer.py`)
- `DrumLoopAnalyzer` (already implemented and tested)
- No new packages, no external APIs.
## Success Criteria
- [ ] Bass MIDI clips contain CC11 (Expression) E-lines at kick hit positions
- [ ] CC11 value drops to ~50 at kick onset, recovers to 127 within 0.18 beats
- [ ] DrumLoopAnalyzer correctly identifies kick transients in all 5 drumloop variants
- [ ] Kick position cache avoids re-analyzing same WAV across sections
- [ ] 110 existing tests pass unchanged
- [ ] `.rpp` output opens in REAPER without errors; bass audibly ducks when kick hits
- [ ] `validate_rpp_output()` reports no regressions


@@ -0,0 +1,76 @@
# Delta Spec: 808 Bass Sidechain Ducking
## ADDED Requirements
### Requirement: MIDI CC11 Event Data Model
The schema MUST support a `CCEvent` dataclass with controller, time, and value fields, and `ClipDef` MUST accept an optional `midi_cc: list[CCEvent]` field defaulting to an empty list.
#### Scenario: CCEvent round-trips correctly
- GIVEN `CCEvent(controller=11, time=0.5, value=50)`
- WHEN serialized/deserialized via dataclass
- THEN all fields preserved exactly
#### Scenario: ClipDef with midi_cc
- GIVEN a `ClipDef` with `midi_cc=[CCEvent(11, 0.0, 50), CCEvent(11, 0.18, 127)]`
- WHEN clip is processed by builder
- THEN builder sees `midi_cc` field and can iterate it
### Requirement: Kick Position Cache
A kick-cache dict `{drumloop_wav_path: list[beat_positions]}` SHALL be computed once per session, keyed by WAV path. `DrumLoopAnalyzer.transient_positions("kick")` MUST be the source, filtered by `confidence >= KICK_CONFIDENCE_THRESHOLD` (default 0.6).
#### Scenario: Cache hit
- GIVEN drumloop WAV already analyzed in same session
- WHEN `build_bass_track()` requests kick positions for that path
- THEN cached positions returned without re-analyzing WAV
#### Scenario: Cache miss
- GIVEN drumloop WAV not yet cached
- WHEN kick positions requested
- THEN `DrumLoopAnalyzer.analyze()` runs, positions cached by path key
### Requirement: CC11 Ducking on Kick Hits
For each kick transient position in the bass clip's time span, the system MUST emit CC11 events forming a ducking envelope: instantaneous drop to value 50 at kick time, hold at 50 for 0.02 beats, ramp to 127 by 0.18 beats after kick.
#### Scenario: Single kick duck
- GIVEN kick at beat 1.0 within a 4-beat bass clip
- WHEN CC events generated
- THEN emits `CCEvent(11, 1.0, 50)`, `CCEvent(11, 1.02, 50)`, `CCEvent(11, 1.18, 127)`
#### Scenario: No kicks in clip
- GIVEN drumloop with no kick transients in clip time range
- THEN `midi_cc` is empty list — no CC events emitted
## MODIFIED Requirements
### Requirement: RPPBuilder MIDI Source Encoding
`_build_midi_source()` MUST emit MIDI CC events as `E B0 0B xx` lines interleaved with note events, all sorted by absolute start time. Delta-encoding MUST continue for all E-lines.
#### Scenario: CC events interleaved with notes
- GIVEN clip with `midi_notes=[Note(60, 0.5, 1.0)]` and `midi_cc=[CCEvent(11, 0.0, 50)]`
- WHEN `_build_midi_source()` called
- THEN E-lines emitted in time order: CC at 0.0, Note at 0.5
- AND CC line reads `E 0 B0 0B 32` (delta=0, CC11, value=50=0x32)
#### Scenario: Delta sequencing across note+CC
- GIVEN CC at 0.0, note at 0.5 beats
- WHEN building E-lines
- THEN CC delta = 0×960 = 0; note delta = 0.5×960 - 0 = 480
- AND cursor advances correctly past the CC event (CC events add no duration ticks)
### Requirement: Bass Track Generation
`build_bass_track()` SHALL accept an optional `kick_cache: dict[str, list[float]]` parameter. When kick data is present for the drumloop used in each section, `midi_cc` events SHALL be generated and added to the bass `ClipDef`.
#### Scenario: Bass clip with ducking CC
- GIVEN kick cache has `[1.0, 2.5]` for drumloop, section covers beats 0-16
- WHEN bass track built
- THEN bass clip at that section has `midi_cc` with 2×3 CC events (one envelope per kick in range)
- AND note generation unchanged from existing behavior
#### Scenario: No kick cache provided
- GIVEN `kick_cache` is `{}` or omitted
- THEN `midi_cc` is empty — zero behavioral change from current output


@@ -0,0 +1,26 @@
# Tasks: 808 Bass Sidechain Ducking
## Phase 1: Schema — Foundation
- [x] 1.1 Add `CCEvent` dataclass (`controller: int`, `time: float`, `value: int`) to `src/core/schema.py`
- [x] 1.2 Add `midi_cc: list[CCEvent] = field(default_factory=list)` to `ClipDef` in `src/core/schema.py`
- [x] 1.3 Update `asdict` if used; verify `song.validate()` passes with empty `midi_cc`
## Phase 2: Kick Cache + CC Generation
- [x] 2.1 Add constants `_KICK_CONFIDENCE_THRESHOLD=0.6`, `_CC11_DIP=50`, `_CC11_HOLD=0.02`, `_CC11_RELEASE=0.18` to `scripts/compose.py`
- [x] 2.2 Add `_get_kick_cache(drumloop_paths: list[str], bpm: float) -> dict[str, list[float]]` to `scripts/compose.py`
- [x] 2.3 Modify `build_bass_track()` to accept `kick_cache: dict[str, list[float]]` parameter; generate CC events for kicks in range
- [x] 2.4 Update `main()` to build kick cache from drumloop paths and pass to `build_bass_track()`
## Phase 3: Builder CC Emission
- [x] 3.1 Modify `_build_midi_source()` in `src/reaper_builder/__init__.py` to merge `notes + cc` events and emit `E B0 0B {value:02x}` lines
- [x] 3.2 Verify delta cursor correctly advances across CC events (CC events contribute zero ticks)
## Phase 4: Testing
- [x] 4.1 Unit test `CCEvent` dataclass round-trip in `tests/test_schema.py`
- [x] 4.2 Unit test `_build_midi_source()` emits `B0 0B` lines for clips with `midi_cc`
- [x] 4.3 Integration test `build_bass_track()` populates `midi_cc` when kick cache present
- [x] 4.4 Regression: run existing 261 tests, verify all pass unchanged


@@ -0,0 +1,98 @@
# Design: Smart Chord Engine
## Technical Approach
New `ChordEngine` class in `src/composer/chords.py`. Pure Python, seed-based `random.Random`, using existing `CHORD_TYPES` and `NOTE_NAMES` from `composer/__init__.py`. Voice leading: greedy scoring of candidate voicings. `build_chords_track()` imports and delegates.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| RNG strategy | `random.Random(seed)` instance | Global `random.seed()` | Isolates ChordEngine from other modules; no side effects |
| Voice scoring | Greedy min-semi distance per chord | Global optimization (DP) | Simple, fast, produces musical results for ≤12 chords; DP overkill |
| Inversion encoding | `dict[str, int]` mapping `{"root": 0, "first": 1, "second": 2}` | Enum class | Follows existing dict-based config pattern (`CHORD_TYPES`) |
| Emotion mapping | Hardcoded `dict[str, list[int]]` degree offsets | Data file | 4 modes, 7 entries each — file indirection adds complexity for no benefit |
| Chord output format | `list[list[int]]` (list of MIDI note lists) | Dict with metadata | Directly feedable to existing `MidiNote` factory; no schema change |
## Data Flow
```
User: --emotion dark --seed 42
build_chords_track() → ChordEngine("Am", seed=42)
├── progression(8, emotion="dark", bpc=4, inversion="root")
│ │
│ ├── EMOTION_PROGRESSIONS["dark"] → [0, 5, 10, 7]
│ ├── get_chord_degrees(root, scale, degrees) → [chords]
│ ├── voice_leading(chords, "root") → [voicings]
│ └── apply_inversion(voicings, "root") → list[list[int]]
MidiNote list → ClipDef → TrackDef
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/composer/chords.py` | Create | `ChordEngine` class + `EMOTION_PROGRESSIONS` |
| `scripts/compose.py` | Modify | `build_chords_track()` imports + delegates to `ChordEngine` |
| `tests/test_chords.py` | Create | Unit tests for R1-R4, integration for R7 |
## Interfaces
```python
# src/composer/chords.py
class ChordEngine:
    def __init__(self, key: str, seed: int = 42): ...

    def progression(
        self, bars: int, emotion: str = "classic",
        beats_per_chord: int = 4, inversion: str = "root"
    ) -> list[list[int]]: ...

    # Internal
    def _get_degrees(self, emotion: str) -> list[int]: ...
    def _voice_leading(self, chords: list[list[int]], inversion: str) -> list[list[int]]: ...
    def _score_voicing(self, prev: list[int], cand: list[int]) -> int: ...
    def _apply_inversion(self, voicing: list[int], inversion: str) -> list[int]: ...
```
```python
# EMOTION_PROGRESSIONS — degree offsets (semitone from root) per emotion
# Pattern: [(degree, quality), ...]
EMOTION_PROGRESSIONS = {
    "romantic": [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")],  # i-VI-III-VII
    "dark":     [(0, "min"), (5, "min"), (10, "maj"), (7, "min")],  # i-iv-VII-v
    "club":     [(0, "min"), (10, "maj"), (8, "maj"), (4, "maj")],  # i-VII-VI-III
    "classic":  [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")],  # i-VI-III-VII
}
```
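The degree offsets above turn into MIDI chords by adding chord-type intervals to the shifted root (a sketch: the `CHORD_TYPES` triads here are assumptions standing in for the constants in `composer/__init__.py`):

```python
# Assumption: triad intervals matching the composer's CHORD_TYPES constants.
CHORD_TYPES = {"min": (0, 3, 7), "maj": (0, 4, 7)}


def chords_from_degrees(root_midi, pattern):
    """Build root-position MIDI chords from (semitone_offset, quality) pairs.

    Each offset shifts the key root; the quality picks the triad intervals.
    """
    return [
        [root_midi + offset + interval for interval in CHORD_TYPES[quality]]
        for offset, quality in pattern
    ]
```

For example, in A minor (root 57), `(0, "min")` yields Am and `(8, "maj")` yields an F major triad an octave context higher.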
## Voice Leading Algorithm
```
For position i (0..n-1):
1. Build all voicings of chord[i] (root + inversions → candidate lists)
2. If i > 0: for each candidate, score = sum(abs(c[j] - prev[j])) across voices
3. Filter candidates where every voice moves ≤ 4 semitones
4. Select lowest-total-score candidate (greedy)
5. If no candidate passes filter: keep raw chord (no voicing penalty)
```
Returns minimum-movement path through chord sequence.
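The greedy selection can be sketched as a pure function (a sketch: names like `voice_lead` and `_inversions` are illustrative, not the real `ChordEngine` internals; chords are sorted MIDI note lists):

```python
MAX_LEAP = 4  # max semitone movement per voice


def _inversions(chord):
    """All rotations of a triad; rotated-out notes move up an octave."""
    voicings = [list(chord)]
    for i in range(1, len(chord)):
        voicings.append(sorted(chord[i:] + [n + 12 for n in chord[:i]]))
    return voicings


def _score(prev, cand):
    """Total semitone movement between two voicings, voice by voice."""
    return sum(abs(a - b) for a, b in zip(prev, cand))


def voice_lead(chords):
    """Greedy min-movement path through the chord sequence (steps 1-5)."""
    out = [sorted(chords[0])]
    for chord in chords[1:]:
        candidates = _inversions(sorted(chord))
        legal = [c for c in candidates
                 if all(abs(a - b) <= MAX_LEAP for a, b in zip(out[-1], c))]
        if legal:
            out.append(min(legal, key=lambda c: _score(out[-1], c)))
        else:
            out.append(sorted(chord))  # step 5: no legal voicing, keep raw chord
    return out
```

For Am → F in A minor, the first inversion of F ([57, 60, 65]) moves only one voice by one semitone, so it wins over the root position.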
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | Determinism (R1) | `ChordEngine(seed=42).progression(8)` × 3 calls — assert equality |
| Unit | Voice leading ≤4 (R2) | Run progression, verify all adjacent pairs |
| Unit | Inversions (R3) | Assert bass note = target (root/3rd/5th) |
| Unit | Emotion divergence (R4) | 4 emotions → assert 4 distinct outputs |
| Integration | CLI --emotion flag (R7) | `compose.py --emotion dark` → verify ChordEngine called |
## Open Questions
- [ ] Should `--emotion` be a CLI flag or auto-detected from section type? Per proposal, explicit flag.


@@ -0,0 +1,75 @@
# Proposal: Smart Chord Engine
## Intent
Current chord generation (`build_chords_track`) produces static root-position block chords with zero voice leading — every chord jump resets all 3 voices, producing audible jumps and amateur-sounding progressions. Add a `ChordEngine` class with voice leading, inversion selection, emotion modes, and genre-specific reggaeton progressions.
## Scope
### In Scope
- New `src/composer/chords.py` with `ChordEngine` class
- Voice leading: minimize semitone movement, max 4 semitone jump per voice
- Inversion selection: root, first, second inversion
- 4 emotion modes: romantic, dark, club, classic
- Genre-specific reggaeton chord progressions per emotion
- Deterministic: seed-based reproducibility
- Modify `build_chords_track()` in `scripts/compose.py` to use `ChordEngine`
### Out of Scope
- Seventh/suspended/diminished chord types (use existing `CHORD_TYPES`)
- Real-time chord generation (only batch/offline)
- Other genres beyond reggaeton
- Chord rhythm/pattern generation (only chord selection + voicing)
## Capabilities
### New Capabilities
- `chord-engine`: `ChordEngine` class with seed-based deterministic progression generation, voice leading, and inversion selection
### Modified Capabilities
- `chords-track-generation`: `build_chords_track()` delegates to `ChordEngine` instead of hardcoded i-VI-III-VII
## Approach
**Pure Python, zero new dependencies** — all chord logic runs on MIDI note numbers using existing `NOTE_NAMES`, `SCALE_INTERVALS`, and `CHORD_TYPES` from `composer/__init__.py`.
Voice leading: score candidate voicings by total semitone distance from previous chord; select lowest-score candidate within the 4-semitone max-jump constraint.
Emotions → progression profiles:
| Emotion | Degrees | Quality flavor |
|----------|---------|----------------|
| romantic | i-VI-III-VII | softer, wider voicings |
| dark | i-iv-V-v | minor-focused |
| club | i-VII-VI-V | driving, ascending |
| classic | i-VI-III-VII | tight block chords |
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/composer/chords.py` | New | `ChordEngine` class |
| `scripts/compose.py` | Modify | `build_chords_track()` uses `ChordEngine` |
| `tests/test_chords.py` | New | Unit tests for voice leading, emotion modes, inversions |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Voice leading sounds worse than static | Low | 4-semitone cap prevents unnatural jumps; inversions smooth transitions |
| Emotion modes too similar | Med | Each has distinct degree set and quality bias |
## Rollback Plan
Revert `build_chords_track()` to hardcoded progression. Delete `src/composer/chords.py`. One commit.
## Dependencies
None. Uses existing `composer/__init__.py` constants only.
## Success Criteria
- [ ] `ChordEngine(seed=42).progression(8)` returns identical output on repeated calls
- [ ] No voice leap exceeds 4 semitones
- [ ] All 4 emotion modes produce distinct chord sequences
- [ ] `build_chords_track()` produces MIDI notes with ≤4-semitone jumps between consecutive chords
- [ ] Existing tests pass unchanged


@@ -0,0 +1,47 @@
# Chords Specification
## Purpose
Chord progression generation with voice leading, inversion selection, and emotion-aware patterns for reggaeton. Deterministic and testable.
## Requirements
| # | Requirement | Strength | Key Scenarios |
|---|------------|----------|---------------|
| R1 | `ChordEngine(key, seed)` MUST produce identical progressions for same seed+key | MUST | Same seed → same notes; different seed → different notes |
| R2 | Voice leading MUST minimize semitone movement between consecutive chords, capping at 4 semitones per voice | MUST | 2-chord transition ≤4 semitones per voice; 8-bar progression all leaps ≤4 |
| R3 | SHALL support 3 inversion modes: `root`, `first`, `second` | SHALL | Root: lowest note = root; First: lowest = third; Second: lowest = fifth |
| R4 | MUST support 4 emotion modes: `romantic`, `dark`, `club`, `classic` | MUST | Each emotion yields distinct degree sequence; unknown emotion → `classic` fallback |
| R5 | `progression(bars, emotion, beats_per_chord, inversion)` SHALL return `list[list[int]]` — ordered chord voicings as MIDI note lists | SHALL | 8 bars @ 4 BpC → 8 chords; empty list for 0 bars |
| R6 | Reggaeton progressions SHOULD use genre-appropriate cadences per emotion | SHOULD | Romantic: i-VI-III-VII; Dark: i-iv-V-v; Club: i-VII-VI-V; Classic: i-VI-III-VII |
| R7 | `build_chords_track()` SHALL delegate to `ChordEngine` instead of hardcoded progression | SHALL | CLI `--emotion dark` → dark progression in output |
### Scenario: Deterministic reproducibility
- GIVEN `ChordEngine("Am", seed=42)`
- WHEN `progression(bars=8)` called twice
- THEN both calls return identical `list[list[int]]`
### Scenario: Voice leading within bounds
- GIVEN any 2 consecutive chords from a progression
- WHEN computing voice leading
- THEN no voice moves more than 4 semitones from its previous position
### Scenario: Emotion modes diverge
- GIVEN `ChordEngine("Am", seed=0)` with emotions `romantic`, `dark`, `club`, `classic`
- WHEN `progression(8)` called per emotion
- THEN all 4 output sequences differ
### Scenario: Invalid emotion falls back
- GIVEN `ChordEngine("Am")` with emotion `"angry"` (unknown)
- WHEN `progression(8)` called
- THEN defaults to `classic` progression, no error raised
### Scenario: Integration with compose.py
- GIVEN `python scripts/compose.py --key Am --emotion dark --output test.rpp`
- WHEN build completes
- THEN Chords track contains voicings matching dark-emotion progression
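The R2 leap bound can be checked with small pure predicates over any returned progression (a sketch; function names are illustrative, and `progression` is any `list[list[int]]` as defined by R5):

```python
def max_voice_leap(progression):
    """Largest single-voice movement (semitones) across adjacent chords."""
    return max(
        (abs(a - b)
         for prev, cur in zip(progression, progression[1:])
         for a, b in zip(prev, cur)),
        default=0,  # empty or single-chord progressions trivially pass
    )


def satisfies_r2(progression, cap=4):
    """True when no voice moves more than `cap` semitones (R2)."""
    return max_voice_leap(progression) <= cap
```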


@@ -0,0 +1,27 @@
# Tasks: Smart Chord Engine
## Phase 1: Foundation
- [x] 1.1 Create `src/composer/chords.py` with `EMOTION_PROGRESSIONS` dict and `ChordEngine.__init__(key, seed)`
- [x] 1.2 Implement `ChordEngine._get_degrees(emotion)` — resolve emotion → degree/quality list with `classic` fallback
- [x] 1.3 Implement `ChordEngine._apply_inversion(voicing, inversion)` — reorder notes so target is lowest (root=0, first=1, second=2)
## Phase 2: Core
- [x] 2.1 Implement `ChordEngine._score_voicing(prev, cand)` — sum abs semitone diff per voice pair
- [x] 2.2 Implement `ChordEngine._voice_leading(chords, inversion)` — greedy min-score path, cap 4 semitones/voice
- [x] 2.3 Implement `ChordEngine.progression(bars, emotion, bpc, inversion)` — full pipeline: degrees → chords → voice leading → output
## Phase 3: Integration
- [x] 3.1 Modify `build_chords_track()` in `scripts/compose.py` to import + instantiate `ChordEngine`, delegate chord generation
- [x] 3.2 Add `--emotion` and `--inversion` CLI flags to `scripts/compose.py` (default: `romantic`, `root`)
- [x] 3.3 Wire section energy (`vm`) from existing section loop into note velocity scaling
## Phase 4: Testing
- [x] 4.1 Create `tests/test_chords.py` — unit test determinism: same seed → same output (R1)
- [x] 4.2 Test voice leading: assert max semitone diff ≤ 4 across all adjacent chord pairs (R2)
- [x] 4.3 Test inversions: assert bass note matches root/third/fifth (R3)
- [x] 4.4 Test emotion divergence: all 4 emotions produce distinct progressions (R4)
- [x] 4.5 Integration: `compose.py --emotion dark --output test.rpp` produces chords track using dark progression (R7)


@@ -0,0 +1,88 @@
# Design: Transitions FX
## Technical Approach
Add `build_fx_track()` to `scripts/compose.py` that places audio FX clips from the sample library at 7 section boundaries. Uses `SampleSelector.select_one(role="fx")` with per-type character hints. Reuses `ClipDef.fade_in/out`. New track inserted after Clap, before Pad — after main tracks, before sends are wired.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| One FX track vs per-section | Single dedicated track | Per-section tracks | Simpler; one import per sample in REAPER; manageable clip count (7-9) |
| Sample selection | Weighted random top-5 | Pinned specific files | Variety across runs; selector scoring already works |
| Boundary timing | Hardcoded beat-offset map | Audio analysis | Section structure is deterministic; bar counts are fixed |
| Riser+impact at chorus | Two clips, same boundary | Single combined clip | Requires different timing; riser before boundary, impact on it |
## Data Flow
```
SECTIONS → offsets (bar → beat)
FX_TRANSITIONS map: {boundary_idx: (type, start_offset, length, fade_in, fade_out)}
build_fx_track(sections, offsets, selector, seed)
├── for each entry in FX_TRANSITIONS:
│ ├── boundary_beat = offsets[boundary_idx] * 4
│ ├── position = boundary_beat + start_offset
│ ├── sample = selector.select_one(role="fx", seed=seed+idx)
│ └── ClipDef(position, length, audio_path, fade_in, fade_out)
TrackDef("Transition FX", volume=0.72, clips=[...], send_level={...})
```
## Boundary → FX Map
| # | Boundary | Beat | FX Type | Position | Length | Fade In | Fade Out |
|---|----------|------|---------|----------|--------|---------|----------|
| 2 | verse→build | 48 | sweep | 46 | 2 | 0.3 | 0.0 |
| 3 | build→chorus | 64 | **riser** | 60 | 4 | 1.5 | 0.0 |
| 3 | build→chorus | 64 | **impact** | 64 | 2 | 0.0 | 0.3 |
| 4 | chorus→verse2 | 96 | transition | 94 | 2 | 0.2 | 0.2 |
| 5 | verse2→chorus2 | 128 | riser | 124 | 4 | 1.0 | 0.0 |
| 6 | chorus2→bridge | 160 | sweep | 158 | 2 | 0.2 | 0.2 |
| 7 | bridge→final | 176 | riser | 172 | 4 | 1.0 | 0.0 |
| 8 | final→outro | 208 | sweep | 206 | 2 | 0.3 | 0.5 |
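The boundary map can be carried as data, with clip positions derived from boundary beat plus offset (a sketch: the dict shape and function name are illustrative; values are transcribed from the table above, with the build→chorus boundary carrying two entries):

```python
# {boundary_beat: [(fx_type, start_offset_beats, length_beats, fade_in_s, fade_out_s)]}
FX_TRANSITIONS = {
    48:  [("sweep", -2, 2, 0.3, 0.0)],
    64:  [("riser", -4, 4, 1.5, 0.0), ("impact", 0, 2, 0.0, 0.3)],
    96:  [("transition", -2, 2, 0.2, 0.2)],
    128: [("riser", -4, 4, 1.0, 0.0)],
    160: [("sweep", -2, 2, 0.2, 0.2)],
    176: [("riser", -4, 4, 1.0, 0.0)],
    208: [("sweep", -2, 2, 0.3, 0.5)],
}


def fx_clip_positions():
    """Absolute (position, length) pairs for every planned FX clip."""
    return sorted(
        (boundary + offset, length)
        for boundary, entries in FX_TRANSITIONS.items()
        for _fx, offset, length, _fade_in, _fade_out in entries
    )
```

Risers start 4 beats before their boundary (offset -4) and end on the downbeat; impacts sit on the downbeat itself (offset 0).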
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `scripts/compose.py` | Modify | Add `FX_TRANSITIONS` dict + `build_fx_track()` (~50 lines); call in `main()` after clap track, before return tracks |
## Key Implementation Detail
`SampleSelector.select_one()` has a `seed` kwarg — new in the selector API. If not yet supported, use `select(role="fx", limit=5)` with manual `random.choice()`. Since FX is in `ATONAL_ROLES`, key compatibility scoring is skipped (neutral 0.5).
## Track Ordering
```
tracks = [
    build_drumloop_track(...),  # 0
    build_perc_track(...),      # 1
    build_bass_track(...),      # 2
    build_chords_track(...),    # 3
    build_lead_track(...),      # 4
    build_clap_track(...),      # 5
    build_fx_track(...),        # 6 ← NEW
    build_pad_track(...),       # 7
]
return_tracks = create_return_tracks() # 8 (Reverb), 9 (Delay)
```
Send wiring applies to all non-return tracks automatically via existing loop. FX track sends: Reverb=0.08, Delay=0.05.
## Testing Strategy
| Layer | What | How |
|-------|------|-----|
| Unit | `build_fx_track` returns TrackDef with 8 clips | Mock selector via `SampleSelector.__init__` patching |
| Unit | Clip positions match boundary map | Assert `clip.position` values equal expected beats |
| Integration | End-to-end .rpp output | `compose.py --bpm 99 --key Am --output test.rpp`; grep for "Transition FX" `<TRACK` block |
| Existing | 110 tests pass | `pytest` before/after regression |
## Open Questions
None — all dependencies exist today (`SampleSelector`, `ClipDef.fade_in/out`, `SECTIONS` structure).


@@ -0,0 +1,77 @@
# Proposal: transitions-fx
## Intent
9 sections play back-to-back with zero transition — the song feels like disjointed loops. Add transition FX (risers, impacts, sweeps) at section boundaries to glue sections into a coherent arrangement.
## Scope
### In Scope
- Place transition FX clips (audio samples) on a dedicated "Transition FX" track at section boundaries
- Riser/wash FX: 2-4 beats before section changes (e.g., build → chorus drop)
- Impact/hit FX: on the downbeat of CHORUS, FINAL, VERSE2 entries
- Filter sweep simulation via fade-in/fade-out on adjacent clips
- Transition plan: which boundary gets which FX type + duration
- Reuse existing FX-role samples from library (impacts, risers, transition FX, wash)
### Out of Scope
- Synthesized FX generation (numpy waveform synthesis) — deferred to future
- MIDI CC filter automation in RPP (no CC support in builder today)
- Per-track volume automation curves
- Reverse cymbal (no suitable samples in library)
## Capabilities
### New Capabilities
- `transition-fx`: Placement of audio FX clips at section boundaries for arrangement glue
### Modified Capabilities
None — existing section/track structure unchanged.
## Approach
**Audio samples from library** — the library has 57 FX-role samples including:
- Impacts: `fx_C2_126_boomy` (2.5s, from `impact.wav`)
- Risers: `fx_C#5_123_aggressive` (30s), `fx_G3_143_boomy` (6.6s, "RISER 3")
- Transition FX: 4 "transicion fx" variants (1.0-1.7s)
- Wash/noise: `fx_G#6_136_aggressive` (3.3s)
- Short shots/gates: "CAMTAZO 12" (1.5-2.0s), "PUERTA" (0.2s)
Place audio clips on a new "FX" track at section boundaries:
- **Riser/wash**: starts 2-4 beats BEFORE boundary, ends on boundary downbeat
- **Impact**: starts on boundary downbeat, short duration (1-2 beats)
- Use existing `fade_in`/`fade_out` on ClipDef for filter-like sweeps
- Use SampleSelector with `role="fx"` to pick compatible samples
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `scripts/compose.py` | Modified | Add `build_fx_track()` — places FX clips between sections |
| `src/core/schema.py` | Unchanged | `ClipDef` already has `position`, `length`, `audio_path`, `fade_in`, `fade_out` |
| `src/reaper_builder/__init__.py` | Unchanged | Audio clip building already works |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| FX samples may not match project key | Low | FX role is ATONAL_ROLES — key scoring skipped by selector |
| Long riser samples exceed needed duration | Low | Clip length sets playback window; sample trimmed automatically |
| No suitable riser for specific boundary | Med | Fall back to fade_in on next section clip |
## Rollback Plan
Remove `build_fx_track()` call from `main()`. Existing tracks untouched.
## Dependencies
- `data/sample_index.json` (already exists)
- `SampleSelector` with `role="fx"` (already works)
## Success Criteria
- [ ] `python scripts/compose.py --bpm 99 --key Am` produces .rpp with FX clips at section boundaries
- [ ] At least one FX clip between: build→chorus, chorus→verse2, bridge→final, outro end
- [ ] FX clips have appropriate fade_in/fade_out curves
- [ ] 110 existing tests continue to pass
- [ ] Song renders without gaps — FX clips overlap/bridge sections

View File

@@ -0,0 +1,95 @@
# Transition FX Specification
## Purpose
Glue sections together by placing audio FX clips at arrangement boundaries using existing `role="fx"` library samples.
## Requirements
### Requirement: FX Track Existence
The system MUST create a dedicated "Transition FX" audio track with clips at 7 section boundaries.
#### Scenario: FX track present in arrangement
- GIVEN a 9-section song
- WHEN `compose.py` runs
- THEN a track named "Transition FX" exists with 7+ audio clips at boundary positions
### Requirement: Riser Before Climax
A riser/wash FX MUST start 2–4 beats before the build→chorus, verse2→chorus2, and bridge→final boundaries, ending ON the boundary downbeat.
#### Scenario: Riser before chorus
- GIVEN build ends at beat 64 (bar 16)
- WHEN FX is built
- THEN a riser at position 60 (beat 60), length 4, `fade_in` ≥ 1.0s
#### Scenario: Riser before final
- GIVEN bridge ends at beat 176 (bar 44)
- WHEN FX is built
- THEN a riser at position 172, length 4, `fade_in` ≥ 1.0s
#### Scenario: Riser before chorus2
- GIVEN verse2 ends at beat 128 (bar 32)
- WHEN FX is built
- THEN a riser at position 124, length 4, `fade_in` ≥ 1.0s
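The three scenarios above follow one rule: the riser ends on the boundary downbeat, so its position is the boundary beat minus its length. A minimal check using the beat numbers from this spec:

```python
def riser_position(boundary_beat: int, length_beats: int = 4) -> int:
    # The riser must end exactly on the boundary downbeat.
    return boundary_beat - length_beats

# Boundaries from this spec: chorus (64), final (176), chorus2 (128).
assert riser_position(64) == 60
assert riser_position(176) == 172
assert riser_position(128) == 124
```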
### Requirement: Impact on Section Downbeat
An impact/stab FX MUST start on beat 1 of CHORUS (beat 64) and FINAL (beat 176).
#### Scenario: Impact on chorus beat 1
- GIVEN chorus starts at beat 64
- WHEN FX is built
- THEN an impact clip at position 64, length 1–2 beats, `fade_out` ≥ 0.2s
#### Scenario: Impact on final beat 1
- GIVEN final starts at beat 176
- WHEN FX is built
- THEN an impact clip at position 176, length 1–2 beats
### Requirement: Transition Sweeps Between Verses
Short transition FX MUST bridge chorus→verse2 (beat 96) and chorus2→bridge (beat 160).
#### Scenario: Sweep bridges chorus to verse2
- GIVEN chorus ends at beat 96
- WHEN FX is built
- THEN a transition clip at position 94, length 2 beats, `fade_in` and `fade_out` > 0
### Requirement: FX Sample Selection
The system SHALL select FX samples via `SampleSelector.select_one(role="fx")`, favoring short samples for impacts, long for risers.
#### Scenario: FX role returns candidates
- GIVEN 57 FX samples in library with ATONAL_ROLES including "fx"
- WHEN `select(role="fx")` is called
- THEN non-empty result returned; key scoring skipped (neutral 0.5)
### Requirement: Fade Curves
FX clips MUST use `fade_in`/`fade_out`. Risers: `fade_in` ≥ 0.3s. Impacts: `fade_out` ≥ 0.2s.
#### Scenario: Riser fades in, impact fades out
- GIVEN riser and impact clips defined
- WHEN ClipDef is created
- THEN riser.fade_in > 0 AND impact.fade_out > 0
### Requirement: FX Track Mixing
The FX track SHALL have volume ≤ 0.80 and send to Reverb/Delay returns.
#### Scenario: FX track has moderate volume and sends
- GIVEN "Transition FX" track created
- WHEN track is defined
- THEN volume = 0.72, send_level includes reverb (0.08) and delay (0.05)

View File

@@ -0,0 +1,27 @@
# Tasks: Transitions FX
## Phase 1: FX Transition Map
- [x] 1.1 Add `FX_TRANSITIONS` dict to `scripts/compose.py`: `{boundary_index: (type, start_offset, length, fade_in, fade_out)}` with 8 entries matching design boundary map
- [x] 1.2 Add `FX_ROLE = "fx"` constant referencing ATONAL_ROLES membership
## Phase 2: Build FX Track
- [x] 2.1 Implement `build_fx_track(sections, offsets, selector, seed=0)` — iterates `FX_TRANSITIONS`, computes clip positions from offsets, selects FX samples
- [x] 2.2 For each boundary: call `selector.select_one(role="fx", seed=seed + idx)` to pick sample
- [x] 2.3 Create `ClipDef(position, length, name, audio_path, fade_in, fade_out)` per boundary
- [x] 2.4 Build `TrackDef("Transition FX", volume=0.72, clips=[...], send_level={reverb: 0.08, delay: 0.05})`
- [x] 2.5 Add docstring explaining boundary map and FX types (riser/impact/sweep/transition)
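Tasks 1.1 and 2.1–2.4 can be sketched end to end. The `FX_TRANSITIONS` tuple shape follows task 1.1 (abbreviated to three of the eight entries); `ClipDef`/`TrackDef` are simplified stand-ins for the real schema, and the selector is stubbed as a plain callable rather than the actual `SampleSelector.select_one`:

```python
from dataclasses import dataclass, field

@dataclass
class ClipDef:  # simplified stand-in for src/core/schema.py
    position: float
    length: float
    name: str
    audio_path: str
    fade_in: float = 0.0
    fade_out: float = 0.0

@dataclass
class TrackDef:  # simplified stand-in for src/core/schema.py
    name: str
    volume: float
    clips: list
    send_level: dict = field(default_factory=dict)

# Task 1.1 shape: boundary index -> (type, start_offset_beats, length_beats, fade_in, fade_out).
# Abbreviated; the real map has 8 entries per the design boundary map.
FX_TRANSITIONS = {
    1: ("riser", -4, 4, 1.0, 0.0),   # build -> chorus: ends on the downbeat
    2: ("impact", 0, 2, 0.0, 0.2),   # chorus downbeat
    3: ("sweep", -2, 2, 0.1, 0.1),   # chorus -> verse2
}

def build_fx_track(offsets: dict[int, float], selector, seed: int = 0) -> TrackDef:
    """Place one FX clip per boundary; offsets maps boundary index -> boundary beat."""
    clips = []
    for idx, (fx_type, start, length, f_in, f_out) in sorted(FX_TRANSITIONS.items()):
        path = selector(role="fx", seed=seed + idx)  # stands in for selector.select_one(...)
        clips.append(ClipDef(offsets[idx] + start, length, fx_type, path, f_in, f_out))
    return TrackDef("Transition FX", volume=0.72, clips=clips,
                    send_level={"reverb": 0.08, "delay": 0.05})
```

Seeding each selection with `seed + idx` keeps the whole track reproducible while still varying the sample choice per boundary.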
## Phase 3: Integration
- [x] 3.1 Call `build_fx_track()` in `main()` after clap track, before pad track
- [x] 3.2 Verify send wiring loop handles new track (existing code; confirm no regression)
## Phase 4: Testing & Verification
- [x] 4.1 Write unit test: `build_fx_track` returns TrackDef with exactly 8 clips
- [x] 4.2 Write unit test: clip positions and fade values match design's boundary map
- [x] 4.3 Write unit test: all clips have `audio_path` set (not None)
- [x] 4.4 Write integration test: `compose.py --bpm 99 --key Am --output /tmp/test.rpp` produces valid .rpp with "Transition FX" track
- [x] 4.5 Run full `pytest` suite — all 110 existing tests pass