# Design: reascript-hybrid

## Technical Approach

Phase 2 runs inside REAPER via a self-contained Python ReaScript. Our Python generates the ReaScript file and drives it via a JSON file protocol — no network, no distant API. REAPER controls timing; we just poll for the result.

## Architecture Decisions

### Decision: JSON file protocol over python-reapy

**Choice**: JSON files via `fl_control_command.json` / `fl_control_result.json` in REAPER's ResourcePath

**Alternatives considered**: python-reapy (network/WebSocket, REAPER distant API)

**Rationale**: No network dependency. REAPER owns the timing — avoids race conditions when REAPER is busy. Simpler debugging: JSON is readable in any editor.

### Decision: Self-contained ReaScript with no external imports

**Choice**: Generated ReaScript uses only the built-in ReaScript API (no `import json` — use `os` and string manipulation)

**Alternatives considered**: Importing Python's `json` module via Python 3.x ReaScript support

**Rationale**: Maximum compatibility across REAPER versions. JSON parsing via a hand-rolled parser is ~20 lines of string splitting. Avoids any import-time failures.
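The hand-rolled parsing mentioned above can be sketched as follows. This is a sketch under assumptions, not the implementation: `get_str`/`get_num` are hypothetical names, and it only handles the flat scalar fields of the command schema (no escaped quotes; the nested `track_calibration` array would need extra splitting).

```python
def get_str(raw, key):
    """Extract the string value for `key` from a flat JSON object (no escapes)."""
    _, _, rest = raw.partition('"%s"' % key)   # skip past the key
    _, _, rest = rest.partition(":")           # skip the colon
    return rest.split('"')[1]                  # text between the next pair of quotes

def get_num(raw, key):
    """Extract the numeric value for `key` from a flat JSON object."""
    _, _, rest = raw.partition('"%s"' % key)
    _, _, rest = rest.partition(":")
    token = rest.strip().split(",")[0].split("}")[0].strip()
    return float(token)

raw = '{"version": 1, "action": "render", "timeout": 120}'
get_str(raw, "action")   # → 'render'
get_num(raw, "timeout")  # → 120.0
```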

### Decision: Separate commands.py for protocol testability

**Choice**: `commands.py` exposes `read_command`, `write_result`, `ReaScriptCommand`, `ReaScriptResult`

**Alternatives considered**: Protocol classes in `__init__.py`

**Rationale**: Unit test the protocol without instantiating ReaScriptGenerator or touching REAPER. The protocol is stable and worth isolating.
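A minimal sketch of the host side of `commands.py`, assuming stdlib `dataclasses` and `json` (only the generated ReaScript avoids imports). Field names follow the schemas in this doc; the defaults and the exact `ProtocolVersionError` behavior are assumptions.

```python
from dataclasses import dataclass, field, asdict
import json

PROTOCOL_VERSION = 1

class ProtocolVersionError(Exception):
    """Raised when a payload's version doesn't match PROTOCOL_VERSION."""

@dataclass
class ReaScriptCommand:
    action: str                      # "calibrate" | "verify_fx" | "render"
    rpp_path: str = ""
    render_path: str = ""
    timeout: int = 120
    track_calibration: list = field(default_factory=list)
    version: int = PROTOCOL_VERSION

@dataclass
class ReaScriptResult:
    status: str                      # "ok" | "error" | "timeout"
    message: str = ""
    integrated_lufs: float = 0.0
    short_term_lufs: float = 0.0
    fx_errors: list = field(default_factory=list)
    tracks_verified: int = 0
    version: int = PROTOCOL_VERSION

def write_command(path, command: ReaScriptCommand) -> None:
    """Serialize the command to the agreed JSON file."""
    path.write_text(json.dumps(asdict(command)))

def read_result(path) -> ReaScriptResult:
    """Deserialize the result file, rejecting version mismatches."""
    data = json.loads(path.read_text())
    if data.get("version") != PROTOCOL_VERSION:
        raise ProtocolVersionError(str(data.get("version")))
    return ReaScriptResult(**data)
```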

### Decision: Track calibration via JSON array, not direct API calls from Python

**Choice**: `track_calibration` list in command JSON describes volume/pan/sends per track

**Alternatives considered**: Python calls REAPER API directly for each calibration step

**Rationale**: Keeps the interface stateless and retry-friendly. If REAPER crashes mid-calibration, the command JSON is still valid for replay.

## Data Flow

```
scripts/run_in_reaper.py      src/reaper_scripting/        REAPER
        │                             │                       │
        │  generate(cmd)              │                       │
        │────────────────────> ReaScriptGenerator             │
        │                             │  generates .py        │
        │  write_command(cmd.json)    │                       │
        │────────────────────────────>│                       │
        │                             │  write to             │
        │                             │  ResourcePath()       │
        │                             │──────────────────────>│
        │                             │                       │  Action triggered
        │                             │                       │  reads command.json
        │                             │                       │  executes pipeline
        │                             │                       │  writes result.json
        │                             │<──────────────────────│
        │  read_result()              │                       │
        │<────────────────────────────│                       │
```
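The host side of this flow is a short write-then-poll loop. A sketch, assuming the two protocol file names from this doc; `run_command` and the 0.25 s poll interval are hypothetical:

```python
import json
import time
from pathlib import Path

def run_command(resource_path: Path, command: dict, timeout: float = 120.0) -> dict:
    """Write the command file, then poll until REAPER writes the result file."""
    cmd_file = resource_path / "fl_control_command.json"
    result_file = resource_path / "fl_control_result.json"
    if result_file.exists():
        result_file.unlink()              # clear any stale result from a prior run
    cmd_file.write_text(json.dumps(command))
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if result_file.exists():
            return json.loads(result_file.read_text())
        time.sleep(0.25)                  # REAPER owns the timing; we just poll
    return {"status": "timeout", "message": "no result within %gs" % timeout}
```

Because the command file stays on disk, a crashed run can be replayed by re-triggering the Action without re-writing anything.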

## File Changes

| File | Action | Description |
|------|--------|-------------|
| `src/reaper_scripting/__init__.py` | Create | `ReaScriptGenerator.generate(path, cmd)` — writes self-contained ReaScript |
| `src/reaper_scripting/commands.py` | Create | `ReaScriptCommand`, `ReaScriptResult` dataclasses + `write_command()`, `read_result()` |
| `scripts/run_in_reaper.py` | Create | CLI: generate script → write command JSON → poll result → print LUFS |

## Interface Contracts

### ReaScriptGenerator

```python
class ReaScriptGenerator:
    def generate(self, path: Path, command: ReaScriptCommand) -> None:
        """Write a self-contained ReaScript .py to path."""
```

The generated script reads `fl_control_command.json`, runs the pipeline, writes `fl_control_result.json`.

### Command JSON schema (`fl_control_command.json`)

```json
{
  "version": 1,
  "action": "calibrate" | "verify_fx" | "render",
  "rpp_path": "absolute path",
  "render_path": "absolute path for WAV output",
  "timeout": 120,
  "track_calibration": [
    {
      "track_index": 0,
      "volume": 0.85,
      "pan": 0.0,
      "sends": [{"dest_track_index": 5, "level": 0.05}]
    }
  ]
}
```

### Result JSON schema (`fl_control_result.json`)

```json
{
  "version": 1,
  "status": "ok" | "error" | "timeout",
  "message": "",
  "lufs": -14.2,
  "integrated_lufs": -14.2,
  "short_term_lufs": -12.1,
  "fx_errors": [{"track_index": 2, "fx_index": 1, "name": "", "expected": "Serum_2"}],
  "tracks_verified": 8
}
```
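A caller consuming this schema would branch on `status` before trusting the loudness fields. A hedged sketch — `interpret_result` and `ReaperError` are hypothetical names, not part of the spec:

```python
import json

class ReaperError(RuntimeError):
    """Hypothetical exception for non-ok results and FX mismatches."""

def interpret_result(raw: str) -> float:
    """Turn a result payload into an integrated-LUFS reading or raise."""
    result = json.loads(raw)
    if result["status"] != "ok":
        raise ReaperError("%s: %s" % (result["status"], result.get("message", "")))
    if result.get("fx_errors"):
        raise ReaperError("FX mismatches: %r" % result["fx_errors"])
    return result["integrated_lufs"]

interpret_result('{"status": "ok", "fx_errors": [], "integrated_lufs": -14.2}')  # → -14.2
```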

## Phase 2 Pipeline (ReaScript)

1. `GetFunctionMetadata` — verify API availability
2. `Main_openProject(rpp_path)` — load the .rpp
3. Iterate tracks: `TrackFX_GetCount` + `TrackFX_GetFXName` per slot → collect `fx_errors`
4. For each `track_calibration` entry: `SetMediaTrackInfo_Value("D_VOL"/"D_PAN")` + `CreateTrackSend`
5. `Main_RenderFile` → render to `render_path`
6. `CalcMediaSrcLoudness(render_path)` → extract `integrated_lufs`, `short_term_lufs`
7. Write result JSON

## Testing Strategy

| Layer | What | How |
|-------|------|-----|
| Unit | `ReaScriptCommand`/`ReaScriptResult` JSON round-trip | `pytest tests/test_commands.py` — serialize/deserialize, version mismatch raises `ProtocolVersionError` |
| Unit | ReaScriptGenerator output is valid Python | `pytest tests/test_reagenerator.py` — parse generated script with `ast.parse`, check it contains required API calls |
| Integration | Full pipeline with REAPER | `pytest tests/test_phase2.py -k integration` — skipped in CI, runs against live REAPER |
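The `ast.parse` check from the second unit-test row could look like this. A sketch only: `validate_reascript` is a hypothetical helper, and the required-call list is an assumed subset of the pipeline's API calls.

```python
import ast

REQUIRED_CALLS = ["Main_openProject", "CalcMediaSrcLoudness"]  # assumed subset

def validate_reascript(source: str) -> list:
    """Return the required API call names missing from a generated script.

    ast.parse raises SyntaxError if the generated script isn't valid Python.
    """
    tree = ast.parse(source)
    names = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}
    names |= {node.attr for node in ast.walk(tree) if isinstance(node, ast.Attribute)}
    return [call for call in REQUIRED_CALLS if call not in names]

generated = "proj = Main_openProject(rpp)\nlufs = CalcMediaSrcLoudness(out)\n"
validate_reascript(generated)  # → []
```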

## Open Questions

- [ ] Should `render_path` default to the .rpp's folder with a `_rendered.wav` suffix?
- [ ] Do we need to handle REAPER's `__startup__.py` registration automatically, or is manual Action registration acceptable for Phase 1?