I'm a long-time reader of Sound On Sound, and a studio owner. A colleague of mine recently discovered that the suite of Universal Audio Powered Plug-ins, which we all love and use, band-limits the audio to varying degrees depending on the plug-in. As just one example, the 1176 emulation will not pass any audio above 28kHz, effectively rendering any session that uses this plug-in on the master bus a 56kHz session, no matter the sample rate of the original project. I love the UA plug-ins, but feel there is some dishonesty at play. Engineers like myself (and I'm sure you!) are expected to deliver the highest quality product we can to our clients, and that includes high-bandwidth audio if they choose it.
Nick Lloyd, via email
SOS Technical Editor Hugh Robjohns replies: There's a small but important technical point I should make first of all: the sample rate determines the potential audio bandwidth, but the actual audio bandwidth does not alter the project sample rate. There are sometimes good technical reasons for, and benefits in, restricting the bandwidth within a high sample-rate project.
Your colleague is quite correct, though, in his assertion that some — but certainly not all — of UA's plug-ins restrict the processed audio signal bandwidth to some degree. The very short explanation is that this is a deliberate and pragmatic engineering compromise, and without it the UAD plug-ins just wouldn't sound as good as they do.
Is there any dishonesty involved? No! This is a simple disparity between intelligent and pragmatic engineering versus misguided expectations derived from marketing hype. At the end of the day it is the sound that matters, not what an FFT spectrum display looks like.
Before I explain the sensible reasons for UAD's band-limiting approach, it might be worth revisiting the real world of audio engineering, where everything is band-limited to some degree. The vast majority of microphones and loudspeakers, for example, roll off at around 25kHz (or lower), and most analogue audio equipment — preamps, dynamics processors, mixing consoles and all the rest — is also band-limited. There are perfectly intelligent engineering reasons for deliberately curtailing the frequency response in this way and, most importantly of all, our own ears are band-limited too. For that reason, I'd hazard a very confident guess that your colleague didn't detect UAD's band-limiting just by listening!
To get the detailed explanation, I spent a very interesting 35 minutes on the phone with Bill Putnam Jr, the co-founder of UA, discussing the company's approach to plug-in design and the reasons for restricting the audio bandwidth in some cases.
Where UA need to model complex non-linearities and the characteristic artefacts of transformers, valves, transistors and other circuit components and topologies, they write the plug-in code to run internally at a fixed 192kHz sample rate, upsampling the source audio as necessary and down-sampling again after processing. However, even at 192kHz there is still a finite limit to the highest frequency at which these non-linearities can be computed accurately without creating aliases and other processing inaccuracies. The more complex the model, the greater this problem becomes.
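The aliasing problem is easy to demonstrate. The sketch below is my own illustration, not UA's code: it uses a simple cubic waveshaper as a stand-in for an analogue-style non-linearity, and shows that a 17kHz sine processed at a 48kHz session rate acquires a 3rd harmonic at 51kHz, which folds back to a clearly audible 3kHz alias — whereas at an internal 192kHz rate the same harmonic sits safely below Nyquist and nothing lands in the audible band.

```python
import numpy as np

def spectrum_mag(x):
    """Magnitude spectrum, normalised so a unit-amplitude sine on an
    exact FFT bin reads 0.5."""
    return np.abs(np.fft.rfft(x)) / len(x)

def harmonic_alias_demo(fs, f0=17_000.0, dur=0.1):
    """Drive a cubic waveshaper (an illustrative stand-in for an
    analogue-style non-linearity) with a sine at f0, and return the
    spectral magnitude at 3kHz — where the 3rd harmonic (51kHz)
    folds to when fs = 48kHz."""
    n = np.arange(int(fs * dur))
    x = np.sin(2 * np.pi * f0 * n / fs)
    y = x ** 3                      # creates a 3rd harmonic at 3*f0 = 51kHz
    mags = spectrum_mag(y)
    bin_hz = fs / len(y)            # 10Hz bin spacing for these parameters
    return mags[int(round(3_000 / bin_hz))]

# At a 48kHz rate, 51kHz exceeds Nyquist (24kHz) and folds to 3kHz...
alias_48k = harmonic_alias_demo(48_000)    # ~0.125 (the folded harmonic)
# ...but at 192kHz, 51kHz sits below Nyquist (96kHz): no audible alias.
alias_192k = harmonic_alias_demo(192_000)  # essentially zero
print(alias_48k, alias_192k)
```

Running the non-linear stage at 192kHz and band-limiting before down-sampling is exactly what keeps that folded energy out of the audible band.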
Consequently the UAD boffins deliberately, but gently, roll off the high-frequency audio response well above the audible range, to ensure that the modelling within the audible range is as accurate and precise as possible. This is a normal engineering trade-off: in these cases it balances the most accurate possible modelling across the audible part of the signal bandwidth (at the cost of modelling ultrasonic frequencies) against processing the entire project bandwidth with audibly less accurate modelling. Not surprisingly, the UAD boffins choose to prioritise sound quality, and design their emulations to sound as close to the original units as they possibly can — even where that means sacrificing the ability to process ultrasonic (and thus inaudible) signal elements.
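To show how benign such a roll-off is, here's a small sketch — my own illustrative filter choice, not UA's actual design — of a 4th-order Butterworth low-pass at 30kHz running at a 192kHz internal rate: the top of the audible band (20kHz) is barely touched, while ultrasonic content at 60kHz is strongly attenuated.

```python
import numpy as np
from scipy import signal

# 4th-order Butterworth low-pass at 30kHz, at a 192kHz internal rate.
# Cutoff and order are illustrative assumptions, not UA's filter.
sos = signal.butter(4, 30_000, btype='low', fs=192_000, output='sos')

# Gain at 20kHz (top of the audible band) and 60kHz (ultrasonic).
freqs, resp = signal.sosfreqz(sos, worN=[20_000, 60_000], fs=192_000)
gain_db = 20 * np.log10(np.abs(resp))
print(gain_db)  # ~-0.1dB at 20kHz; tens of dB down at 60kHz
```

In other words, a "gentle" roll-off placed well above 20kHz costs a fraction of a decibel where we can hear, while removing the ultrasonic content that would otherwise compromise the non-linear modelling.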
Interestingly, Bill told me that when the team are developing a plug-in (something that can easily take up to a year) they carefully evaluate how far the processed audio bandwidth can be extended while retaining the required accuracy of sound modelling. That's why different plug-ins roll off at different frequencies: every plug-in's algorithms are individually optimised, with the ear being the final arbiter. Moreover, most vintage devices are bandwidth-limited anyway, and some actually become unstable at ultrasonic frequencies. Bill cited the team's work in developing the latest Pultec EQ emulation, where they discovered an unstable filter pole around 60kHz in the hardware unit. If that had been modelled accurately, it would have caused serious aliasing problems for any high sample-rate project!
Logically, it might seem that processing at a higher sampling rate — 384kHz, say — would remove the bandwidth restriction, and I put that to Bill. However, he explained that although processing at a higher rate would permit a proportionally wider audio bandwidth to remain artefact-free, it would impose far less acceptable compromises at the low-frequency end of things, too. Specifically, the precision of low-frequency control parameters would suffer dramatically, because the difference between, say, 20Hz and 30Hz turnover settings in a high-pass filter becomes such a small proportion of the total signal bandwidth. Retaining the required parameter precision would demand impractically long filter coefficient word lengths, making the filters very difficult to compute accurately. For these reasons UA feel that processing at 192kHz offers the best engineering compromise in maximising control parameter precision and effect modelling accuracy across the audible bandwidth.
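Bill's point about low-frequency precision can be made concrete with a deliberately simple model — a one-pole high-pass whose pole sits at exp(-2πfc/fs), quantised to a fixed coefficient word length. Both the filter structure and the 11-bit quantisation are my own illustrative assumptions, not UA's implementation, but they show the mechanism: doubling the sample rate halves the distance between the 20Hz and 30Hz pole positions, to the point where a short coefficient word can no longer tell the two settings apart.

```python
import math

def one_pole_hp_pole(fc, fs):
    """Pole location of a simple one-pole high-pass: p = exp(-2*pi*fc/fs).
    A crude stand-in for the recursive filters inside a plug-in."""
    return math.exp(-2 * math.pi * fc / fs)

def quantise(coeff, frac_bits=11):
    """Round a coefficient to a fixed number of fractional bits, as a
    crude model of finite coefficient word length."""
    scale = 1 << frac_bits
    return round(coeff * scale) / scale

for fs in (192_000, 384_000):
    p20 = one_pole_hp_pole(20, fs)   # pole for a 20Hz turnover
    p30 = one_pole_hp_pole(30, fs)   # pole for a 30Hz turnover
    # Pole spacing halves as fs doubles; with 11 fractional bits the two
    # settings stay distinct at 192kHz but collapse together at 384kHz.
    print(fs, p20 - p30, quantise(p20) == quantise(p30))
```

So pushing the internal rate higher doesn't come for free: every doubling of fs demands roughly one extra bit of coefficient precision just to keep the same low-frequency control resolution.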
Not surprisingly, it is the emulations involving the most complex non-linearities that are band-limited: plug-ins like the Urei 1176 and Neve 33609 compressors, and the Manley Massive Passive equaliser, for example. These emulations all roll off smoothly above 28 to 35kHz. In contrast, emulations of devices like the new Dangerous Bax EQ, where there is no requirement to model complex non-linearities, have no bandwidth restrictions at all — the processed audio signal extends right up to the Nyquist limit for the project's sample rate.
In summary, Bill and his team of engineers believe that the end results entirely justify their band-limiting tactic. Moreover, he questions the logic of anyone insisting on processing ultrasonic material that they can't possibly hear, since they can't know whether it forms relevant audio content or spurious noise. I must say I share that view entirely, and while there are perfectly sane reasons for digitising and processing audio at high sample rates in some situations, the audio bandwidth will always be curtailed somewhere in the signal chain, either by the microphones, the preamps, the converters, the plug-in effects processing, the speakers or the listener's ears. In reality, it will be a combination of all of them, but it is only the sound we hear at the end of the complete chain that actually matters — not what the FFT spectrum display looks like, or imprudent expectations of what a high sample-rate source should deliver.