Modelling Black-box Audio Effects with
Time-varying Feature Modulation

Marco Comunità, Christian J. Steinmetz, Huy Phan, Joshua D. Reiss

Paper | Code | Dataset | Video | VST plugin

Abstract


Deep learning approaches for black-box modelling of audio effects have shown promise; however, the majority of existing work focuses on nonlinear effects with behaviour on relatively short time scales, such as guitar amplifiers and distortion. While recurrent and convolutional architectures can theoretically be extended to capture behaviour at longer time scales, we show that simply scaling the width, depth, or dilation factor of existing architectures does not result in satisfactory performance when modelling audio effects such as fuzz and dynamic range compression. To address this, we propose the integration of time-varying feature-wise linear modulation into existing temporal convolutional backbones, an approach that enables learnable adaptation of the intermediate activations. We demonstrate that our approach more accurately captures long-range dependencies for a range of fuzz and compressor implementations across both time and frequency domain metrics. We provide sound examples, source code, and pretrained models to facilitate reproducibility.
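
The core mechanism, temporal feature-wise linear modulation (TFiLM) inserted into a temporal convolutional backbone, can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the authors' released implementation: it pools the activations into blocks, runs an LSTM over the block summaries, and applies the predicted per-block, per-channel scale and shift back onto the convolutional features. All names (TFiLM, block_size, etc.) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TFiLM(nn.Module):
        """Temporal feature-wise linear modulation (illustrative sketch).

        Splits the activation sequence into fixed-size blocks, summarizes
        each block by max-pooling, and runs an LSTM over the summaries to
        predict a per-block, per-channel scale (gamma) and shift (beta).
        """

        def __init__(self, channels: int, block_size: int):
            super().__init__()
            self.block_size = block_size
            self.pool = nn.MaxPool1d(kernel_size=block_size)
            # The LSTM emits 2 * channels values per block: gamma and beta
            self.lstm = nn.LSTM(channels, 2 * channels, batch_first=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, time); time must divide evenly into blocks
            b, c, t = x.shape
            assert t % self.block_size == 0

            # Summarize blocks: (batch, channels, n_blocks) -> (batch, n_blocks, channels)
            summary = self.pool(x).permute(0, 2, 1)

            # Predict modulation parameters with temporal context across blocks
            params, _ = self.lstm(summary)  # (batch, n_blocks, 2 * channels)
            gamma, beta = params.chunk(2, dim=-1)

            # Broadcast each block's (gamma, beta) over its samples and modulate
            gamma = gamma.permute(0, 2, 1).repeat_interleave(self.block_size, dim=-1)
            beta = beta.permute(0, 2, 1).repeat_interleave(self.block_size, dim=-1)
            return gamma * x + beta

    # Example: modulate 32-channel TCN activations in blocks of 128 samples
    # x = torch.randn(1, 32, 4096)
    # y = TFiLM(channels=32, block_size=128)(x)

In a GCN-TF-style model, a layer like this would typically follow each convulational block of the backbone, so the modulation of intermediate activations can adapt over time with the signal; the exact placement and block size here are assumptions, not the paper's configuration.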


Samples


Each example below compares the reference effect output against the LSTM-96, GCN-3, and GCNTF-3 (ours) models (Att: attack time, Rel: release time; audio players available on the project page):

Fuzz - Att: 50 ms, Rel: 50 ms
Fuzz - Att: 10 ms, Rel: 250 ms
Fuzz - Att: 1 ms, Rel: 2500 ms
Comp - Att: 1 ms, Rel: 2500 ms
MComp - Att: 1 ms, Rel: 1000 ms

Citation



    @inproceedings{10097173,
        author={Comunità, Marco and Steinmetz, Christian J. and Phan, Huy and Reiss, Joshua D.},
        booktitle={ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
        title={Modelling Black-Box Audio Effects with Time-Varying Feature Modulation},
        year={2023},
        pages={1-5},
        doi={10.1109/ICASSP49357.2023.10097173}
    }