what is a diffgfdn?
By Leandro Alvarez · 8 min read
A plain-English tour of the Differentiable Geometric Feedback Delay Network — the reverb topology that powers Arna.
the short version.
Arna is built on a Differentiable Geometric Feedback Delay Network, or DiffGFDN. It is a reverb topology where the routing matrix has a geometric interpretation (the network corresponds to the way sound bounces around a physical room) and where every internal parameter is differentiable. Differentiable means we can train them — by gradient descent, end-to-end, against a target.
In Arna's case, the target is a corpus of measured impulse responses. The network learns decay times, mode densities, and damping curves the way a neural network learns weights. The knobs you see in the plugin sit on top of that trained network and translate musician-friendly values (decay in seconds, body in percent) into the underlying parameters.
feedback delay networks, briefly.
A feedback delay network (FDN) is the workhorse of algorithmic reverb. The structure is simple: a bank of delay lines whose outputs feed back into the inputs through a square mixing matrix. The matrix decides how much energy bounces from each line into every other line; the delay lengths set the modal density and the early character; the feedback gain sets the decay.
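That structure fits in a few lines. Below is a minimal numpy sketch of a generic FDN — not Arna's actual network; the delay lengths, matrix, and gain are illustrative stand-ins.

```python
import numpy as np

def fdn_render(x, delays, A, g, n_out):
    """Minimal FDN sketch: delay lines -> mixing matrix A -> feedback gain g.

    x: mono input, delays: per-line lengths in samples,
    A: NxN mixing matrix, g: scalar feedback gain.
    """
    N = len(delays)
    bufs = [np.zeros(d) for d in delays]   # one circular buffer per delay line
    ptrs = [0] * N
    y = np.zeros(n_out)
    for n in range(n_out):
        # read the current output of each delay line
        outs = np.array([bufs[i][ptrs[i]] for i in range(N)])
        y[n] = outs.sum()
        # mix line outputs through A, scale by g, add the input, write back
        fb = g * (A @ outs) + (x[n] if n < len(x) else 0.0)
        for i in range(N):
            bufs[i][ptrs[i]] = fb[i]
            ptrs[i] = (ptrs[i] + 1) % delays[i]
    return y
```

Feed it an impulse with an orthogonal matrix and g below 1, and the output is a decaying tail whose density and length come exactly from the three ingredients the paragraph names.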
Classical FDNs are tuned by ear. Engineers pick delay lengths that are mutually prime, choose orthogonal matrices like Hadamard or Householder for stability, and adjust feedback gains until the tail sounds "like a hall". The math is elegant; the tuning is artisanal.
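Both of those artisanal choices are easy to make concrete. A small sketch, with illustrative numbers:

```python
import numpy as np
from math import gcd

# Pairwise-coprime delay lengths avoid coinciding echo periods.
delays = [1031, 1327, 1523, 1871]
coprime = all(gcd(a, b) == 1 for a in delays for b in delays if a != b)

# The N x N Householder matrix I - (2/N) * ones is orthogonal for any N,
# so it recirculates energy between the lines without adding or losing any.
def householder(N):
    return np.eye(N) - (2.0 / N) * np.ones((N, N))
```

Orthogonality is what makes the decay controllable: with an energy-preserving matrix, the feedback gain alone sets how fast the tail dies.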
geometric, then.
A geometric FDN constrains the mixing matrix so that it corresponds to a real spatial topology. Instead of a generic orthogonal matrix, the matrix is built from the way sound actually bounces around a room — wall by wall, edge by edge. The early reflections you hear are the same ones the geometry predicts; the late field is the same network running longer. Early and late always agree about size.
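The exact construction Arna uses isn't spelled out here, but the flavour of "geometry in, network parameters out" can be sketched. The shoebox wall positions, and the idea of one delay line per ordered wall pair, are purely illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: derive FDN delay lengths from room geometry.
# Each delay line stands in for a wall-to-wall propagation path; its
# length in samples is the path distance divided by the speed of sound.
fs = 48000.0
c = 343.0  # speed of sound in m/s
# wall-centre positions (metres) of a shoebox room, illustrative values
walls = np.array([[2.5, 0.0, 1.5], [2.5, 4.0, 1.5],
                  [0.0, 2.0, 1.5], [5.0, 2.0, 1.5]])
# one delay line per ordered wall pair (i -> j)
dists = np.linalg.norm(walls[:, None, :] - walls[None, :, :], axis=-1)
delays = np.round(dists[dists > 0] * fs / c).astype(int)
```

In a construction like this, making the room bigger lengthens every delay line at once, which is why early reflections and late tail can never disagree about size.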
That is an audible difference. In Arna, when you raise the Early knob, the reflections you hear are the reflections that match the room implied by the rest of the parameters. The plugin doesn't crossfade between an early stage and a late stage — it's one network all the way through.
differentiable, then.
Differentiable means every parameter inside the network — the delay lengths, the mixing weights, the per-line damping — has a well-defined derivative, so the output can be differentiated with respect to each of them. That sounds dry, but it's the entire reason a DiffGFDN exists.
Once everything is differentiable, you can hand the network a target impulse response (an actual recorded room, or a target plate), measure the loss between what the network produces and the target, and use gradient descent to nudge the parameters toward the target. The same machinery that trains a neural network trains the reverb.
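Here is that training loop at minimum scale: one decay parameter, a hand-derived gradient instead of autodiff, and a synthetic target. It's a toy, not Arna's trainer, but it is the same mechanism.

```python
import numpy as np

# Toy differentiable-reverb fit: a single per-sample decay g is trained
# by gradient descent to match a target exponential tail (decay 0.8).
n = np.arange(64)
target = 0.8 ** n          # "measured" impulse-response envelope
g = 0.5                    # initial guess
lr = 0.5
for step in range(2000):
    y = g ** n
    err = y - target
    grad = np.mean(2 * err * n * g ** (n - 1))  # dL/dg for L = mean(err^2)
    g -= lr * grad
```

A real DiffGFDN does this for every delay length, mixing weight, and damping filter simultaneously, with autodiff supplying the gradients — but the nudge-toward-the-target step is exactly this.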
Arna's presets — Cathedral, Hall, Plate, Drum Room, the rest — were trained this way. Each one started as a target IR or a target frequency-domain envelope; the DiffGFDN was trained until its output matched. The knobs then became interpolations through the trained parameter space.
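The interpolation step can be pictured like this. The parameter names and values are invented for illustration; they are not Arna's trained data:

```python
import numpy as np

# Hypothetical: a knob at position t in [0, 1] interpolates between two
# trained parameter vectors (say, "Tight" and "Hall" per-line feedback gains).
tight = np.array([0.70, 0.65, 0.72, 0.68])
hall  = np.array([0.97, 0.96, 0.98, 0.95])

def knob(t):
    return (1.0 - t) * tight + t * hall
```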
what about the brain?
The Brain is a separate spectral analyser sitting in front of the DiffGFDN. It listens to what you feed Arna — peaks, valleys, transients, sustained content — and modulates a small set of network parameters in response. Loud transients open up the bloom; quiet sustained passages tighten it. The reverb adapts to the source instead of applying a static impulse to it.
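One plausible minimal version of that idea is an envelope follower driving the feedback gain. Everything here, from the time constants to the mapping, is an assumption for illustration, not the Brain's actual logic:

```python
import numpy as np

# Hypothetical "Brain" sketch: follow the input level and map it to a
# per-sample feedback gain, so loud material opens the tail and quiet
# material tightens it.
def brain_gain(x, fs=48000.0, g_lo=0.80, g_hi=0.97,
               attack_ms=5.0, release_ms=200.0):
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    g = np.zeros(len(x))
    for n, s in enumerate(np.abs(x)):
        a = a_att if s > env else a_rel    # fast rise, slow fall
        env = a * env + (1.0 - a) * s
        g[n] = g_lo + (g_hi - g_lo) * min(env, 1.0)  # louder -> longer tail
    return g
```

Run per sample, a follower like this is what "thousands of tiny decisions per second" cashes out to.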
The Brain is what lets one knob do a lot. Without it, the journey from Tight to Crown would sound mechanical. With it, the same automation curve produces tape-like, breathing texture, because the reverb is making thousands of tiny decisions per second about how loud to be.
so why this, instead of convolution?
Convolution reverb plays a recorded impulse response back at the source. It is exact, but static — the room has no opinion about what you feed it. Algorithmic reverbs (FDNs and their cousins) compute the tail in real time, which is what makes adaptation, modulation, and freeze gestures possible.
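The "exact, but static" point is visible in how little code convolution reverb needs:

```python
import numpy as np

# Convolution reverb in one line: the output is the input convolved with
# a recorded impulse response. Exact, but the IR is fixed, so nothing in
# this operation can react to the input.
def conv_reverb(x, ir):
    return np.convolve(x, ir)
```

Every sample of the tail is baked into `ir` before the source arrives; there is no parameter left to modulate.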
A DiffGFDN sits in the middle. It is algorithmic — the tail is computed live, the network can be modulated, the feedback gain can be lifted to unity for an infinite tail — but the parameters were trained against measured rooms, so it sounds like a real space when you want it to.
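The unity-gain claim follows directly from using an orthogonal feedback matrix: with g = 1, each recirculation preserves the energy stored in the delay lines, which is the mathematical basis of an infinite-tail freeze. A quick numerical check:

```python
import numpy as np

# With an orthogonal matrix and loop gain g = 1, recirculating the
# delay-line state neither adds nor removes energy.
A = np.eye(4) - 0.5 * np.ones((4, 4))      # orthogonal 4x4 Householder matrix
state = np.array([0.3, -0.1, 0.7, 0.2])    # energy currently in the lines
for _ in range(100):
    state = 1.0 * (A @ state)              # g = 1: lossless recirculation
```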
further reading.
- Reverberation — Wikipedia — the basics of room acoustics and IRs.
- Jot, J.-M. & Chaigne, A. (1991), "Digital Delay Networks for Designing Artificial Reverberators", AES Convention 90 — the canonical FDN paper.
- Engel, J. et al. (2020), "DDSP: Differentiable Digital Signal Processing", ICLR — the framing that made differentiable DSP a thing.