- section-energy: track activity matrix + volume/velocity multipliers per section
- smart-chords: ChordEngine with voice leading, inversions, 4 emotion modes
- hook-melody: melody engine with hook/stabs/smooth styles, call-and-response
- mix-calibration: Calibrator module (LUFS volumes, HPF/LPF, stereo, sends, master)
- transitions-fx: FX track with risers/impacts/sweeps at section boundaries
- sidechain: MIDI CC11 bass ducking on kick hits via DrumLoopAnalyzer
- presets-pack: role-aware plugin presets (Serum/Decapitator/Omnisphere per role)

Full SDD pipeline (propose → spec → design → tasks → apply → verify) for all 7 changes. 302/302 tests passing.
# Design: Smart Chord Engine
## Technical Approach
A new `ChordEngine` class in `src/composer/chords.py`: pure Python, with an instance-scoped seed-based `random.Random`, reusing the existing `CHORD_TYPES` and `NOTE_NAMES` from `composer/__init__.py`. Voice leading works by greedy scoring of candidate voicings. `build_chords_track()` imports the engine and delegates to it.
## Architecture Decisions
| Decision | Choice | Rejected | Rationale |
|----------|--------|----------|-----------|
| RNG strategy | `random.Random(seed)` instance | Global `random.seed()` | Isolates ChordEngine from other modules; no side effects |
| Voice scoring | Greedy min-semitone distance per chord | Global optimization (DP) | Simple, fast, produces musical results for ≤12 chords; DP overkill |
| Inversion encoding | `dict[str, int]` → `{"root": 0, "first": 1, "second": 2}` | Enum class | Follows existing dict-based config pattern (`CHORD_TYPES`) |
| Emotion mapping | Hardcoded `dict[str, list[int]]` degree offsets | Data file | 4 modes, 7 entries each — file indirection adds complexity for no benefit |
| Chord output format | `list[list[int]]` (list of MIDI note lists) | Dict with metadata | Directly feedable to existing `MidiNote` factory; no schema change |
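The RNG decision can be made concrete with a small, self-contained sketch. `SeededPicker` is an illustrative stand-in, not the real `ChordEngine`; it only demonstrates the chosen pattern: each instance owns a `random.Random(seed)`, so draws are reproducible per seed and the global `random` module state is never touched.

```python
import random

class SeededPicker:
    """Stand-in for ChordEngine's RNG strategy: an instance-local
    random.Random(seed) instead of the module-level functions."""

    def __init__(self, seed: int = 42):
        self._rng = random.Random(seed)  # instance-local stream, no global side effects

    def pick(self, options: list[str]) -> str:
        return self._rng.choice(options)

inversions = ["root", "first", "second"]

# Same seed → identical draws across independent instances.
a = SeededPicker(seed=42)
b = SeededPicker(seed=42)
assert [a.pick(inversions) for _ in range(5)] == [b.pick(inversions) for _ in range(5)]

# Instance draws do not advance the global random stream.
random.seed(0)
expected = random.random()
random.seed(0)
SeededPicker(seed=7).pick(inversions)
assert random.random() == expected
```

This is exactly why the global `random.seed()` alternative was rejected: a module-level seed call would couple ChordEngine's output to every other consumer of `random` in the process.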
## Data Flow
```
User: --emotion dark --seed 42
        │
        ▼
build_chords_track() → ChordEngine("Am", seed=42)
        │
        ├── progression(8, emotion="dark", bpc=4, inversion="root")
        │     │
        │     ├── EMOTION_PROGRESSIONS["dark"] → [0, 5, 10, 7]
        │     ├── get_chord_degrees(root, scale, degrees) → [chords]
        │     ├── voice_leading(chords, "root") → [voicings]
        │     └── apply_inversion(voicings, "root") → list[list[int]]
        │
        ▼
MidiNote list → ClipDef → TrackDef
```
## File Changes
| File | Action | Description |
|------|--------|-------------|
| `src/composer/chords.py` | Create | `ChordEngine` class + `EMOTION_PROGRESSIONS` |
| `scripts/compose.py` | Modify | `build_chords_track()` imports + delegates to `ChordEngine` |
| `tests/test_chords.py` | Create | Unit tests for R1-R4, integration for R7 |
## Interfaces
```python
# src/composer/chords.py
class ChordEngine:
    def __init__(self, key: str, seed: int = 42): ...

    def progression(
        self, bars: int, emotion: str = "classic",
        beats_per_chord: int = 4, inversion: str = "root"
    ) -> list[list[int]]: ...

    # Internal
    def _get_degrees(self, emotion: str) -> list[int]: ...
    def _voice_leading(self, chords: list[list[int]], inversion: str) -> list[list[int]]: ...
    def _score_voicing(self, prev: list[int], cand: list[int]) -> int: ...
    def _apply_inversion(self, voicing: list[int], inversion: str) -> list[int]: ...
```
```python
# EMOTION_PROGRESSIONS — degree offsets (semitones from root) per emotion
# Pattern: [(degree, quality), ...]
EMOTION_PROGRESSIONS = {
    "romantic": [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")],  # i-VI-III-VII
    "dark":     [(0, "min"), (5, "min"), (10, "maj"), (7, "min")],  # i-iv-VII-v
    "club":     [(0, "min"), (10, "maj"), (8, "maj"), (4, "maj")],  # i-VII-VI-III
    "classic":  [(0, "min"), (8, "maj"), (4, "maj"), (10, "maj")],  # i-VI-III-VII
}
```
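To make the encoding concrete, here is a self-contained sketch of how the `(degree, quality)` pairs could expand into MIDI note lists. The `CHORD_TYPES` intervals and the `chords_for` helper are assumptions for illustration only, standing in for the real `composer` package:

```python
# Hypothetical stand-ins for the existing composer.CHORD_TYPES constants.
CHORD_TYPES = {"maj": [0, 4, 7], "min": [0, 3, 7]}

EMOTION_PROGRESSIONS = {
    "dark": [(0, "min"), (5, "min"), (10, "maj"), (7, "min")],
}

def chords_for(root_midi: int, emotion: str) -> list[list[int]]:
    """Transpose each chord quality to root_midi + its degree offset."""
    return [
        [root_midi + offset + interval for interval in CHORD_TYPES[quality]]
        for offset, quality in EMOTION_PROGRESSIONS[emotion]
    ]

# A2 = MIDI 45 → Am, Dm, G, Em
print(chords_for(45, "dark"))
# [[45, 48, 52], [50, 53, 57], [55, 59, 62], [52, 55, 59]]
```

The result is already in the `list[list[int]]` shape the design chose as the chord output format, which is why no metadata schema is needed between this table and the `MidiNote` factory.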
## Voice Leading Algorithm
```
For position i (0..n-1):
  1. Build all voicings of chord[i] (root + inversions → candidate lists)
  2. If i > 0: for each candidate, score = sum(abs(c[j] - prev[j])) across voices
  3. Filter candidates where per-voice movement ≤ 4 semitones
  4. Select the lowest-total-score candidate (greedy)
  5. If no candidate passes the filter: keep the raw chord (no voicing penalty)
```
The greedy pass yields a low-movement (locally minimal) path through the chord sequence.
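The steps above can be sketched as a self-contained function. Names here are illustrative (the real implementation lives in `ChordEngine._voice_leading` and `_score_voicing`), and the inversion builder assumes simple octave displacement of the bass notes:

```python
def inversions(chord: list[int]) -> list[list[int]]:
    """Root position plus each inversion (lower notes raised an octave)."""
    notes = sorted(chord)
    result = [notes]
    for i in range(1, len(notes)):
        result.append(sorted(notes[i:] + [n + 12 for n in notes[:i]]))
    return result

def score(prev: list[int], cand: list[int]) -> int:
    """Total semitone movement across voices (step 2)."""
    return sum(abs(c - p) for c, p in zip(cand, prev))

def voice_lead(chords: list[list[int]], max_per_voice: int = 4) -> list[list[int]]:
    voiced = [sorted(chords[0])]
    for chord in chords[1:]:
        prev = voiced[-1]
        # Steps 1+3: candidate voicings whose per-voice movement stays within the cap.
        candidates = [
            c for c in inversions(chord)
            if all(abs(a - b) <= max_per_voice for a, b in zip(c, prev))
        ]
        if candidates:
            voiced.append(min(candidates, key=lambda c: score(prev, c)))  # step 4
        else:
            voiced.append(sorted(chord))  # step 5: fallback, raw chord
    return voiced

# Am (45-48-52) → F (41-45-48): the first-inversion F moves one voice by one semitone.
print(voice_lead([[45, 48, 52], [41, 45, 48]]))
# [[45, 48, 52], [45, 48, 53]]
```

When no inversion satisfies the per-voice cap (e.g. Am → Dm a fourth away), the raw chord is kept unchanged, matching step 5.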
## Testing Strategy
| Layer | What | Approach |
|-------|------|----------|
| Unit | Determinism (R1) | `ChordEngine(seed=42).progression(8)` × 3 calls — assert equality |
| Unit | Voice leading ≤4 (R2) | Run progression, verify all adjacent pairs |
| Unit | Inversions (R3) | Assert bass note = target (root/3rd/5th) |
| Unit | Emotion divergence (R4) | 4 emotions → assert 4 distinct outputs |
| Integration | CLI `--emotion` flag (R7) | `compose.py --emotion dark` → verify ChordEngine called |
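The determinism check (R1) can be illustrated with a minimal stub. `StubEngine` is a stand-in, not the real `ChordEngine`; the point is the test pattern: a fresh engine per call with the same seed must produce identical progressions every time.

```python
import random

class StubEngine:
    """Illustrative stub mimicking ChordEngine's seeded construction."""

    def __init__(self, key: str, seed: int = 42):
        self._rng = random.Random(seed)

    def progression(self, bars: int) -> list[int]:
        # Draw one scale degree per bar from the instance-local stream.
        return [self._rng.randrange(7) for _ in range(bars)]

def test_determinism():
    # Fresh engine per call, same seed → identical output all three times.
    runs = [StubEngine("Am", seed=42).progression(8) for _ in range(3)]
    assert runs[0] == runs[1] == runs[2]

test_determinism()
```

Note the fresh construction per call: calling `progression` three times on a *single* instance would advance its RNG stream and legitimately produce different outputs, so the test as specified in the table builds a new engine for each run.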
## Open Questions
- [ ] Should `--emotion` be a CLI flag or auto-detected from section type? Per proposal, explicit flag.