feat: professional reggaeton production engine — 7 SDD changes, 302 tests

- section-energy: track activity matrix + volume/velocity multipliers per section
- smart-chords: ChordEngine with voice leading, inversions, 4 emotion modes
- hook-melody: melody engine with hook/stabs/smooth styles, call-and-response
- mix-calibration: Calibrator module (LUFS volumes, HPF/LPF, stereo, sends, master)
- transitions-fx: FX track with risers/impacts/sweeps at section boundaries
- sidechain: MIDI CC11 bass ducking on kick hits via DrumLoopAnalyzer
- presets-pack: role-aware plugin presets (Serum/Decapitator/Omnisphere per role)

Full SDD pipeline (propose→spec→design→tasks→apply→verify) for all 7 changes.
302/302 tests passing.
renato97
2026-05-03 23:54:29 -03:00
parent 48bc271afc
commit 014e636889
51 changed files with 11394 additions and 113 deletions

# Design: Smart Chord Engine
## Technical Approach
New `ChordEngine` class in `src/composer/chords.py`. Pure Python, seed-based `random.Random`, using existing `CHORD_TYPES` and `NOTE_NAMES` from `composer/__init__.py`. Voice leading: greedy scoring of candidate voicings. `build_chords_track()` imports and delegates.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| RNG strategy | `random.Random(seed)` instance | Global `random.seed()` | Isolates ChordEngine from other modules; no side effects |
| Voice scoring | Greedy minimum-semitone distance per chord | Global optimization (DP) | Simple, fast, produces musical results for ≤12 chords; DP is overkill |
| Inversion encoding | `dict[str, int]` mapping `{"root": 0, "first": 1, "second": 2}` | Enum class | Follows existing dict-based config pattern (`CHORD_TYPES`) |
| Emotion mapping | Hardcoded `dict[str, list[int]]` degree offsets | Data file | 4 modes, 7 entries each — file indirection adds complexity for no benefit |
| Chord output format | `list[list[int]]` (list of MIDI note lists) | Dict with metadata | Directly feedable to existing `MidiNote` factory; no schema change |
## Data Flow
```
User: --emotion dark --seed 42
build_chords_track() → ChordEngine("Am", seed=42)
  └── progression(8, emotion="dark", bpc=4, inversion="root")
        ├── _get_degrees("dark") → EMOTION_PROGRESSIONS["dark"] → [0, 5, 10, 7]
        ├── get_chord_degrees(root, scale, degrees) → [chords]
        ├── _voice_leading(chords, "root") → [voicings]
        └── _apply_inversion(voicings, "root") → list[list[int]]
MidiNote list → ClipDef → TrackDef
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/composer/chords.py` | Create | `ChordEngine` class + `EMOTION_PROGRESSIONS` |
| `scripts/compose.py` | Modify | `build_chords_track()` imports + delegates to `ChordEngine` |
| `tests/test_chords.py` | Create | Unit tests for R1-R4, integration for R7 |
## Interfaces
```python
# src/composer/chords.py
class ChordEngine:
def __init__(self, key: str, seed: int = 42): ...
def progression(
self, bars: int, emotion: str = "classic",
beats_per_chord: int = 4, inversion: str = "root"
) -> list[list[int]]: ...
# Internal
def _get_degrees(self, emotion: str) -> list[int]: ...
def _voice_leading(self, chords: list[list[int]], inversion: str) -> list[list[int]]: ...
def _score_voicing(self, prev: list[int], cand: list[int]) -> int: ...
def _apply_inversion(self, voicing: list[int], inversion: str) -> list[int]: ...
```
```python
# EMOTION_PROGRESSIONS — degree offsets (semitone from root) per emotion
# Pattern: [(degree, quality), ...]
EMOTION_PROGRESSIONS = {
"romantic": [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")], # i-VI-III-VII
    "dark": [(0, "min"), (5, "min"), (10, "maj"), (7, "min")], # i-iv-VII-v
"club": [(0, "min"), (10, "maj"), (8, "maj"), (4, "maj")], # i-VII-VI-III
"classic": [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")], # i-VI-III-VII
}
```
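Resolving one of these entries to MIDI notes is a root offset plus chord-interval stacking. The sketch below re-declares an illustrative subset of `CHORD_TYPES` (the real project imports it from `composer/__init__.py`, and its actual interval values may differ):

```python
# Sketch only: CHORD_TYPES here is an assumed, re-declared subset for
# illustration; the project imports the real table from composer/__init__.py.
CHORD_TYPES = {"min": [0, 3, 7], "maj": [0, 4, 7]}

def degrees_to_chords(root_midi: int, entries: list[tuple[int, str]]) -> list[list[int]]:
    """Offset the key root by each degree, then stack the chord intervals."""
    return [
        [root_midi + degree + interval for interval in CHORD_TYPES[quality]]
        for degree, quality in entries
    ]

dark = [(0, "min"), (5, "min"), (10, "maj"), (7, "min")]
chords = degrees_to_chords(57, dark)  # 57 = A3, key of Am
# chords[0] == [57, 60, 64]  (A minor triad)
```

This output shape (`list[list[int]]`) feeds the existing `MidiNote` factory directly, per the "Chord output format" decision above.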
## Voice Leading Algorithm
```
For position i (0..n-1):
  1. Build all voicings of chord[i] (root + inversions → candidate lists)
  2. If i > 0: drop candidates where any single voice moves more than
     4 semitones from the chosen voicing of chord[i-1]
  3. Score each surviving candidate: sum of abs(cand[j] - prev[j]) across voices
  4. Select the lowest-total-score candidate (greedy)
  5. If no candidate survives the cap: keep the raw chord (no voicing penalty)
```
Greedy selection yields a low total-movement path through the chord sequence (locally, not globally, minimal — hence the rejected DP alternative above).
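The per-position step can be sketched as follows (function names are illustrative, not the project's internal API):

```python
def movement(prev: list[int], cand: list[int]) -> list[int]:
    """Per-voice absolute semitone movement between two equal-size voicings."""
    return [abs(c - p) for p, c in zip(prev, cand)]

def pick_voicing(prev: list[int], candidates: list[list[int]], cap: int = 4) -> list[int]:
    """Greedy step: drop candidates where any voice leaps more than `cap`
    semitones, then take the smallest total movement. Falls back to the
    raw (first) candidate when nothing passes the cap (step 5)."""
    legal = [c for c in candidates if max(movement(prev, c)) <= cap]
    if not legal:
        return candidates[0]
    return min(legal, key=lambda c: sum(movement(prev, c)))

# Am [57, 60, 64] -> F major: root position vs first inversion
pick_voicing([57, 60, 64], [[53, 57, 60], [57, 60, 65]])  # -> [57, 60, 65]
```

The first inversion wins here (total movement 1 semitone vs 11 for root position), which is exactly the smoothing behavior the cap is meant to produce.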
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | Determinism (R1) | `ChordEngine(seed=42).progression(8)` × 3 calls — assert equality |
| Unit | Voice leading ≤4 (R2) | Run progression, verify all adjacent pairs |
| Unit | Inversions (R3) | Assert bass note = target (root/3rd/5th) |
| Unit | Emotion divergence (R4) | 4 emotions → assert 4 distinct outputs |
| Integration | CLI --emotion flag (R7) | `compose.py --emotion dark` → verify ChordEngine called |
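The R1 determinism check reduces to the pattern below, shown against a stand-in class (since `ChordEngine` lives in the project tree) that mirrors the design's instance-level `random.Random(seed)` decision:

```python
import random

class StandInEngine:
    """Stand-in for ChordEngine: an instance-level seeded RNG, per the
    design's 'random.Random(seed) instance' decision (no global seeding)."""
    def __init__(self, seed: int = 42):
        self.rng = random.Random(seed)

    def progression(self, bars: int) -> list[list[int]]:
        return [[self.rng.randint(48, 72) for _ in range(3)] for _ in range(bars)]

# R1 pattern: same seed across fresh instances -> identical output
runs = [StandInEngine(seed=42).progression(8) for _ in range(3)]
assert runs[0] == runs[1] == runs[2]
```

Because the RNG is per-instance, other modules drawing from the global `random` state cannot perturb these results, which is what makes the 3-call equality assertion reliable.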
## Open Questions
- [ ] Should `--emotion` be a CLI flag or auto-detected from section type? Per proposal, explicit flag.

# Proposal: Smart Chord Engine
## Intent
Current chord generation (`build_chords_track`) produces static root-position block chords with zero voice leading: every chord change resets all three voices, producing audible leaps and amateur-sounding progressions. Add a `ChordEngine` class with voice leading, inversion selection, emotion modes, and genre-specific reggaeton progressions.
## Scope
### In Scope
- New `src/composer/chords.py` with `ChordEngine` class
- Voice leading: minimize semitone movement, max 4 semitone jump per voice
- Inversion selection: root, first, second inversion
- 4 emotion modes: romantic, dark, club, classic
- Genre-specific reggaeton chord progressions per emotion
- Deterministic: seed-based reproducibility
- Modify `build_chords_track()` in `scripts/compose.py` to use `ChordEngine`
### Out of Scope
- Seventh/suspended/diminished chord types (use existing `CHORD_TYPES`)
- Real-time chord generation (only batch/offline)
- Other genres beyond reggaeton
- Chord rhythm/pattern generation (only chord selection + voicing)
## Capabilities
### New Capabilities
- `chord-engine`: `ChordEngine` class with seed-based deterministic progression generation, voice leading, and inversion selection
### Modified Capabilities
- `chords-track-generation`: `build_chords_track()` delegates to `ChordEngine` instead of hardcoded i-VI-III-VII
## Approach
**Pure Python, zero new dependencies** — all chord logic runs on MIDI note numbers using existing `NOTE_NAMES`, `SCALE_INTERVALS`, and `CHORD_TYPES` from `composer/__init__.py`.
Voice leading: score candidate voicings by total semitone distance from previous chord; select lowest-score candidate within the 4-semitone max-jump constraint.
Emotions → progression profiles:
| Emotion | Degrees | Quality flavor |
|----------|---------|----------------|
| romantic | i-VI-III-VII | softer, wider voicings |
| dark | i-iv-VII-v | minor-focused |
| club | i-VII-VI-III | driving, ascending |
| classic | i-VI-III-VII | tight block chords |
## Affected Areas
| Area | Impact | Description |
|------|--------|-------------|
| `src/composer/chords.py` | New | `ChordEngine` class |
| `scripts/compose.py` | Modify | `build_chords_track()` uses `ChordEngine` |
| `tests/test_chords.py` | New | Unit tests for voice leading, emotion modes, inversions |
## Risks
| Risk | Likelihood | Mitigation |
|------|------------|------------|
| Voice leading sounds worse than static | Low | 4-semitone cap prevents unnatural jumps; inversions smooth transitions |
| Emotion modes too similar | Med | Each has distinct degree set and quality bias |
## Rollback Plan
Revert `build_chords_track()` to hardcoded progression. Delete `src/composer/chords.py`. One commit.
## Dependencies
None. Uses existing `composer/__init__.py` constants only.
## Success Criteria
- [ ] `ChordEngine(seed=42).progression(8)` returns identical output on repeated calls
- [ ] No voice leap exceeds 4 semitones
- [ ] All 4 emotion modes produce distinct chord sequences
- [ ] `build_chords_track()` produces MIDI notes with ≤4-semitone jumps between consecutive chords
- [ ] Existing tests pass unchanged

# Chords Specification
## Purpose
Chord progression generation with voice leading, inversion selection, and emotion-aware patterns for reggaeton. Deterministic and testable.
## Requirements
| # | Requirement | Strength | Key Scenarios |
|---|------------|----------|---------------|
| R1 | `ChordEngine(key, seed)` MUST produce identical progressions for same seed+key | MUST | Same seed → same notes; different seed → different notes |
| R2 | Voice leading MUST minimize semitone movement between consecutive chords, capping at 4 semitones per voice | MUST | 2-chord transition ≤4 semitones per voice; 8-bar progression all leaps ≤4 |
| R3 | SHALL support 3 inversion modes: `root`, `first`, `second` | SHALL | Root: lowest note = root; First: lowest = third; Second: lowest = fifth |
| R4 | MUST support 4 emotion modes: `romantic`, `dark`, `club`, `classic` | MUST | Each emotion yields a distinct output (degrees and/or voicing); unknown emotion → `classic` fallback |
| R5 | `progression(bars, emotion, beats_per_chord, inversion)` SHALL return `list[list[int]]` — ordered chord voicings as MIDI note lists | SHALL | 8 bars @ 4 BpC → 8 chords; empty list for 0 bars |
| R6 | Reggaeton progressions SHOULD use genre-appropriate cadences per emotion | SHOULD | Romantic: i-VI-III-VII; Dark: i-iv-VII-v; Club: i-VII-VI-III; Classic: i-VI-III-VII |
| R7 | `build_chords_track()` SHALL delegate to `ChordEngine` instead of hardcoded progression | SHALL | CLI `--emotion dark` → dark progression in output |
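The chord-count arithmetic behind R5's scenarios, assuming 4/4 time (4 beats per bar, which the "8 bars @ 4 BpC → 8 chords" example implies), is a simple division; `chord_count` is an illustrative name, not the project's API:

```python
def chord_count(bars: int, beats_per_chord: int, beats_per_bar: int = 4) -> int:
    """Number of chords a progression emits: total beats / beats per chord."""
    return (bars * beats_per_bar) // beats_per_chord

assert chord_count(8, 4) == 8   # 8 bars @ 4 beats per chord -> 8 chords
assert chord_count(0, 4) == 0   # 0 bars -> empty progression
```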
### Scenario: Deterministic reproducibility
- GIVEN `ChordEngine("Am", seed=42)`
- WHEN `progression(bars=8)` called twice
- THEN both calls return identical `list[list[int]]`
### Scenario: Voice leading within bounds
- GIVEN any 2 consecutive chords from a progression
- WHEN computing voice leading
- THEN no voice moves more than 4 semitones from its previous position
### Scenario: Emotion modes diverge
- GIVEN `ChordEngine("Am", seed=0)` with emotions `romantic`, `dark`, `club`, `classic`
- WHEN `progression(8)` called per emotion
- THEN all 4 output sequences differ
### Scenario: Invalid emotion falls back
- GIVEN `ChordEngine("Am")` with emotion `"angry"` (unknown)
- WHEN `progression(8)` called
- THEN defaults to `classic` progression, no error raised
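This fallback scenario reduces to a `dict.get` with a default. The sketch below re-declares degree lists taken from the design doc; the function name is illustrative:

```python
# Degree offsets per emotion, copied from the design's EMOTION_PROGRESSIONS.
EMOTION_DEGREES = {
    "romantic": [0, 8, 4, 10],
    "dark": [0, 5, 10, 7],
    "club": [0, 10, 8, 4],
    "classic": [0, 8, 4, 10],
}

def get_degrees(emotion: str) -> list[int]:
    # Unknown emotions silently fall back to "classic" (no KeyError raised).
    return EMOTION_DEGREES.get(emotion, EMOTION_DEGREES["classic"])

get_degrees("angry")  # -> [0, 8, 4, 10], same as "classic"
```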
### Scenario: Integration with compose.py
- GIVEN `python scripts/compose.py --key Am --emotion dark --output test.rpp`
- WHEN build completes
- THEN Chords track contains voicings matching dark-emotion progression

# Tasks: Smart Chord Engine
## Phase 1: Foundation
- [x] 1.1 Create `src/composer/chords.py` with `EMOTION_PROGRESSIONS` dict and `ChordEngine.__init__(key, seed)`
- [x] 1.2 Implement `ChordEngine._get_degrees(emotion)` — resolve emotion → degree/quality list with `classic` fallback
- [x] 1.3 Implement `ChordEngine._apply_inversion(voicing, inversion)` — reorder notes so target is lowest (root=0, first=1, second=2)
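The reordering in task 1.3 can be sketched as a rotation that lifts displaced notes up an octave (illustrative, not the project's exact code):

```python
# Encoding from the design doc: target chord tone index to put in the bass.
INVERSIONS = {"root": 0, "first": 1, "second": 2}

def apply_inversion(voicing: list[int], inversion: str) -> list[int]:
    """Rotate the triad so the target tone (root/third/fifth) is lowest,
    raising rotated-out notes an octave so they stay above the new bass."""
    n = INVERSIONS.get(inversion, 0)
    return voicing[n:] + [note + 12 for note in voicing[:n]]

apply_inversion([57, 60, 64], "first")  # A minor, third in the bass -> [60, 64, 69]
```

Unknown inversion names fall back to root position, mirroring the emotion fallback behavior.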
## Phase 2: Core
- [x] 2.1 Implement `ChordEngine._score_voicing(prev, cand)` — sum abs semitone diff per voice pair
- [x] 2.2 Implement `ChordEngine._voice_leading(chords, inversion)` — greedy min-score path, cap 4 semitones/voice
- [x] 2.3 Implement `ChordEngine.progression(bars, emotion, bpc, inversion)` — full pipeline: degrees → chords → voice leading → output
## Phase 3: Integration
- [x] 3.1 Modify `build_chords_track()` in `scripts/compose.py` to import + instantiate `ChordEngine`, delegate chord generation
- [x] 3.2 Add `--emotion` and `--inversion` CLI flags to `scripts/compose.py` (default: `romantic`, `root`)
- [x] 3.3 Wire section energy (`vm`) from existing section loop into note velocity scaling
## Phase 4: Testing
- [x] 4.1 Create `tests/test_chords.py` — unit test determinism: same seed → same output (R1)
- [x] 4.2 Test voice leading: assert max semitone diff ≤ 4 across all adjacent chord pairs (R2)
- [x] 4.3 Test inversions: assert bass note matches root/third/fifth (R3)
- [x] 4.4 Test emotion divergence: all 4 emotions produce distinct progressions (R4)
- [x] 4.5 Integration: `compose.py --emotion dark --output test.rpp` produces chords track using dark progression (R7)