# Tasks: Smart Chord Engine

## Phase 1: Foundation

- [x] 1.1 Create `src/composer/chords.py` with `EMOTION_PROGRESSIONS` dict and `ChordEngine.__init__(key, seed)`
- [x] 1.2 Implement `ChordEngine._get_degrees(emotion)` — resolve emotion → degree/quality list with `classic` fallback
- [x] 1.3 Implement `ChordEngine._apply_inversion(voicing, inversion)` — reorder notes so the target chord tone is lowest (root=0, first=1, second=2)

## Phase 2: Core

- [x] 2.1 Implement `ChordEngine._score_voicing(prev, cand)` — sum of absolute semitone differences per voice pair
- [x] 2.2 Implement `ChordEngine._voice_leading(chords, inversion)` — greedy minimum-score path, capped at 4 semitones per voice
- [x] 2.3 Implement `ChordEngine.progression(bars, emotion, bpc, inversion)` — full pipeline: degrees → chords → voice leading → output

## Phase 3: Integration

- [x] 3.1 Modify `build_chords_track()` in `scripts/compose.py` to import and instantiate `ChordEngine`, delegating chord generation to it
- [x] 3.2 Add `--emotion` and `--inversion` CLI flags to `scripts/compose.py` (defaults: `romantic`, `root`)
- [x] 3.3 Wire section energy (`vm`) from the existing section loop into note velocity scaling

## Phase 4: Testing

- [x] 4.1 Create `tests/test_chords.py` — unit-test determinism: same seed → same output (R1)
- [x] 4.2 Test voice leading: assert max semitone difference ≤ 4 across all adjacent chord pairs (R2)
- [x] 4.3 Test inversions: assert the bass note matches the root/third/fifth (R3)
- [x] 4.4 Test emotion divergence: all 4 emotions produce distinct progressions (R4)
- [x] 4.5 Integration: `compose.py --emotion dark --output test.rpp` produces a chords track using the dark progression (R7)
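The Phase 1 tasks can be illustrated with a minimal sketch. The class name, method names, and signatures mirror the task list, but the progression contents and method bodies are hypothetical, included only to show the shape of the fallback lookup (1.2), the inversion rotation (1.3), and the scoring sum (2.1):

```python
import random

# Hypothetical sketch: the dict contents below are illustrative, not the
# project's actual progressions.
EMOTION_PROGRESSIONS = {
    # (degree, quality) pairs per emotion; "classic" is the fallback (task 1.2)
    "classic":  [(1, "maj"), (4, "maj"), (5, "maj"), (1, "maj")],
    "romantic": [(1, "maj"), (6, "min"), (2, "min"), (5, "maj")],
    "dark":     [(1, "min"), (6, "maj"), (4, "min"), (5, "min")],
    "epic":     [(1, "min"), (7, "maj"), (6, "maj"), (7, "maj")],
}

class ChordEngine:
    def __init__(self, key: int = 60, seed: int = 0):
        self.key = key                  # MIDI note number of the key root
        self.rng = random.Random(seed)  # seeded instance for determinism (R1)

    def _get_degrees(self, emotion: str):
        # Unknown emotions fall back to "classic" (task 1.2).
        return EMOTION_PROGRESSIONS.get(emotion, EMOTION_PROGRESSIONS["classic"])

    @staticmethod
    def _apply_inversion(voicing, inversion: int):
        # Rotate so the target chord tone is lowest: root=0, first=1, second=2
        # (task 1.3). Rotated notes are lifted an octave to stay above the bass.
        return voicing[inversion:] + [n + 12 for n in voicing[:inversion]]

    @staticmethod
    def _score_voicing(prev, cand):
        # Sum of absolute semitone movement per voice pair (task 2.1).
        return sum(abs(a - b) for a, b in zip(prev, cand))
```

For example, first-inverting a C major triad `[60, 64, 67]` yields `[64, 67, 72]`, putting the third in the bass as task 4.3 expects.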
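The greedy minimum-score path of task 2.2 can be sketched as a standalone function (the real code lives in `ChordEngine._voice_leading`; `voice_lead` and its candidate generator here are hypothetical names). For each chord it scores every inversion of the root-position voicing against the previous voicing and keeps the cheapest, preferring candidates whose per-voice movement stays within the 4-semitone cap:

```python
def voice_lead(chords, max_step=4):
    """Greedy voice-leading sketch (task 2.2): pick, per chord, the inversion
    with the lowest movement score relative to the previous voicing."""

    def inversions(voicing):
        # All rotations of the voicing; rotated notes rise an octave.
        for i in range(len(voicing)):
            yield voicing[i:] + [n + 12 for n in voicing[:i]]

    def score(prev, cand):
        # Task 2.1 metric: summed absolute semitone movement per voice.
        return sum(abs(a - b) for a, b in zip(prev, cand))

    out = [chords[0]]
    for chord in chords[1:]:
        cands = list(inversions(chord))
        # Prefer candidates that respect the per-voice cap (R2), if any exist.
        capped = [c for c in cands
                  if all(abs(a - b) <= max_step for a, b in zip(out[-1], c))]
        out.append(min(capped or cands, key=lambda c: score(out[-1], c)))
    return out
```

For instance, moving from C major `[60, 64, 67]` to A minor given as `[57, 60, 64]` selects the first inversion `[60, 64, 69]`, since it moves only one voice by two semitones.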
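The determinism requirement behind tasks 1.1 and 4.1 (R1) hinges on routing all randomness through the single `random.Random(seed)` instance created in `__init__`, never through module-level `random` functions. A minimal stand-alone demonstration of that pattern (`TinyEngine` is a hypothetical stand-in, not the project class):

```python
import random

class TinyEngine:
    """Stand-in showing the seeding pattern from task 1.1: one seeded
    random.Random instance owns every random draw, so equal seeds give
    byte-for-byte equal output (R1)."""

    def __init__(self, seed: int):
        self.rng = random.Random(seed)  # the only source of randomness

    def progression(self, bars: int):
        # Illustrative degree pool; the real engine draws from
        # EMOTION_PROGRESSIONS instead.
        degrees = [1, 4, 5, 6]
        return [self.rng.choice(degrees) for _ in range(bars)]

# Same seed, same output — the property test 4.1 asserts on ChordEngine.
assert TinyEngine(7).progression(8) == TinyEngine(7).progression(8)
```

The same comparison written against `ChordEngine.progression` (two instances, identical `key`/`seed`/arguments) is what `tests/test_chords.py` checks in task 4.1.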