In short, MQA's designers never thought about (or perhaps utterly failed to comprehend) how digital actually works in general, and in particular how their reconstruction filter design would actually perform in the time domain while carrying out its design task of signal waveform reconstruction via convolution.

In this case, it suffices to conduct a simple Einsteinian thought experiment to see how well MQA's signal waveform reconstruction performs in the time domain. MQA may well be correct in claiming that the 'impulse response' of their filter is extremely, 'ideally' brief in temporal duration, even briefer than a sampling interval (a claim seemingly borne out by Stereophile's measurements showing MQA's failure to reject spurious ultrasonic sampling images). But this would mean that MQA's signal waveform 'reconstruction' consists merely of isolated 'impulse response' spikes, one spike at each sample dot. MQA cannot build any bridge whatsoever even connecting the sample dots at all, let alone a reconstructed curved bridge path that accurately replicates the original pre-sampled signal waveform's path, which actually did connect the sample dots to one another. This failure matters especially in all those cases, nearly universal for high frequency transients, where both sample dots framing a sampling interval are unlucky samplings, so the transient peak that needs to be correctly reconstructed in the middle of the sampling interval is actually much higher than the two framing sample dots, not lower as MQA's dual isolated spikes would have it. Thus, MQA's time domain distortion, in its version of signal waveform 'reconstruction', is even worse than that of the other modern revisionist 'time-optimized' reconstruction filter designs.

In short, by succeeding in designing a reconstruction filter with an even more 'ideally' short impulse response, and hence purportedly better time domain transient response, MQA actually produced a filter with even worse time domain transient response in its actual reconstruction of the original signal waveform.

Our empirically derived lesson here is clear and unambiguous, and it corroborates and supports our a priori technical analysis above. The better a digital filter's so-called 'impulse response', the worse its actual time domain transient response performance when actually performing every playback filter's chief (and only) design task of convolution.

MQA's designers could have instantly seen this giant fatal flaw in their whole program if they had simply conducted a test that obeyed Goldilocks' lesson, making their filter actually perform its design task of convolution, by simply conducting a pulse response test employing 2 (or more) sample dots as the test pulse input signal. Then they would have instantly seen that their system's 'reconstructed' output actually built no time domain connecting bridge signal path at all between sample dots, but instead merely output two isolated spikes.

Yet again, we see that the truth about the way digital actually works is the complete opposite of these digital engineers' backwards beliefs and practices, and that therefore they don't have a clue about how digital actually works. Yet again, we have to ask incredulously: why, oh why did these brilliant PhD digital engineers, professedly so sensitive to time domain performance, never actually test their own filter design in the time domain, in actual time domain performance, by making it actually perform its design task of convolution?


*     *     *     *     *     *


Incidentally, some have recently opined that the single sample Kronecker delta is illegal as the input for the digital impulse response test, because its spectral content as a narrow impulse violates the sampling theorem's Nyquist limit. I myself had written this several years ago, but I have since come to realize that it is actually a red herring. It might be true that the narrow single sample impulse has spectral content beyond the Nyquist limit, but the double fisted truth is that 1) this bandwidth 'violation' doesn't matter to the validity of the consequent impulse response test, and 2) the single sample impulse response test remains invalid for various other important reasons besides its alleged spectral range, e.g. violation of the Goldilocks rule by totally failing to activate convolution performance by the filter under test.

As to point 1), note that the only reason for the Nyquist frequency stricture is to avoid a problem in the recording half of a digital system, namely the sampling process' conversion of a higher-than-Nyquist input frequency into a lower-than-Nyquist alias frequency. This conversion occurs because of a fact not widely recognized by digital engineers: any sampling system is intrinsically incapable of even recognizing any frequency above Nyquist, and thus imposes its own intrinsic bandwidth 'brickwall' filter on any such input signal, and indeed on all input signals. We see this intrinsic sampling brickwall filtering happening when a movie camera, lacking any anti-aliasing filter, samples wagon wheels spinning ever faster: the sampling system simply cannot recognize nor reproduce wagon wheel spokes spinning any faster than Nyquist, so it automatically imposes its intrinsic sampling brickwall filter.

A sampling system does respond to this supra-Nyquist signal, but it responds with a sampled alias at a lower, sub-Nyquist frequency (a helpful analogy: think of an analog amplifier driven into amplitude overdrive; being simply incapable of outputting a signal amplitude higher than its clipping point, it outputs a clipped signal at a lower amplitude, instead of the full expected amplified amplitude). Thus, the recording half of a digital system does not rebel in any way if presented with a supra-Nyquist signal, but instead merely imposes its intrinsic bandwidth filter and converts it to a sub-Nyquist alias.
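This frequency-folding behavior can be verified in a few lines. The sketch below is a generic illustration (assuming an arbitrary 48 kHz sample rate, so Nyquist is 24 kHz): a 30 kHz input produces sample values identical to those of its 18 kHz alias, here with inverted phase, so the sampler literally cannot tell the two apart.

```python
import numpy as np

fs = 48_000                     # assumed sample rate; Nyquist = 24 kHz
n = np.arange(64)               # 64 sample instants
f_hi = 30_000                   # supra-Nyquist input frequency
f_alias = fs - f_hi             # the 18 kHz alias the sampler 'hears' instead

s_hi = np.sin(2 * np.pi * f_hi * n / fs)
s_alias = np.sin(2 * np.pi * f_alias * n / fs)

# sample for sample, the sampler cannot distinguish 30 kHz from 18 kHz:
# the supra-Nyquist tone folds down to its alias (phase-inverted here)
print(np.allclose(s_hi, -s_alias))   # True
```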

Now, note that this frequency conversion is wrought only by the sampling process itself, within the prior recording half of a digital system. But in this case, the single sample Kronecker delta is injected directly into the digital reconstruction filter, which sits within the later playback half of a digital system, long after the recording half's sampling process. Thus, anything that the earlier sampling process might do to convert a supra-Nyquist signal into a sub-Nyquist alias is utterly irrelevant here. A test impulse injected into the playback half necessarily a priori cannot possibly affect the earlier recording half's sampling behavior, and its aliasing cannot possibly even enter our later playback filter considerations.

Several other considerations further strengthen our proof here of the irrelevance of supra-Nyquist spectral content in the single sample Kronecker delta impulse that is put into the playback reconstruction filter to 'test' it. We, and also the reconstruction filter, can directly see exactly where this one sample dot is, what amplitude it has, and when it occurs, without ambiguity. We and the reconstruction filter can also directly and correctly infer, without any ambiguity about aliasing, what spectral frequencies are represented by this single sample impulse, namely all frequencies - including frequencies that, as in any impulse response test, are deliberately input to the DUT beyond that DUT's bandwidth capability, in order to probe and test the signal-altering nature and impact of that DUT's bandwidth limitations.

Speaking of DUT capabilities to handle high frequency bandwidth, the reconstruction filter itself (the DUT being tested here by our 'impulse response test') has no trouble in handling supra-Nyquist frequencies, and indeed does so all the time, when it accepts input from the sampling process, since the sampled signal itself contains tons of supra-Nyquist sampling image information and energy.

Also, even a very wide rectangular pulse as a test signal would still contain 'illegal' supra-Nyquist spectral content, due to its steep sides and sharp corners. Indeed, even the 3 dot shaped pulse we designed as a test signal still contains some 'illegal' supra-Nyquist spectral energy. In fact, digital's strict Nyquist bandwidth limitation is actually violated every time there is a straight line anywhere between sample points (which is why a correct signal waveform reconstruction always draws curves between and among sample dots, never a straight line - and indeed also why the bandwidth-limited sinc function itself never completely dies out to a zero amplitude straight line). The Nyquist violation occurs not because of the bandwidth of the straight line itself, but because the transition into and out of that straight line, to any other path, requires executing an infinitely sharp corner in any digital system that employs quantized amplitude steps, even if that corner is very small, e.g. merely 1 LSB (thanks to Kevin Halverson for this insight). Infinitely sharp corners, no matter how small, violate the Nyquist limit by containing infinite high frequency spectral energy, and they require infinite bandwidth to execute, thus being beyond the capability of any bandwidth limited system (digital or analog).

When we conduct our pulse response test using our 3 dot shaped pulse design, the tiny negative ears at the base of the sinc function's reconstructed pulse accurately plot the curved signal waveform path that a bandwidth limited system must follow, in transitioning back to the 'straight' line of zero amplitude. We tried widening our shaped pulse test signal to 5 dots and then to 7 dots. As sinc's correctly reconstructed pulse got wider, its sides became less steep, and thus the transition from (and then back to) the zero amplitude 'straight' line could be accomplished with less dramatic curvature, hence with progressively smaller amplitude negative ears (which also indicates that these ears do not represent Gibbs ringing).

The only way for a very narrow impulse containing supra-Nyquist spectral energy to cause any relevant problem in a digital system would be to feed this impulse in analog form into the recording end of the system, inserting it after the A/D converter's anti-aliasing input filter but before the A/D converter's sampling process. That's a very different arena of digital than the playback reconstruction we are focusing on here.

In sum, any supra-Nyquist spectral content injected into the playback half of a digital system, after the sampling process, necessarily a priori cannot possibly cause the aliasing violation, which afflicts only the recording half's sampling process itself.


*     *     *     *     *     *


By the way, the sinc brickwall filter not only provides virtually perfect reconstruction in the time domain, but also provides perfect reconstruction in the frequency domain, with flat amplitude response and linear phase response throughout the passband - whereas all revisionist reconstruction filters give us the worst of both worlds, with poor reconstruction performance in both the time and frequency domains.
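The frequency-domain half of this claim can be illustrated with a standard textbook windowed-sinc FIR construction (a generic design of our own, not any commercial filter's actual coefficients): lengthening the sinc kernel yields a flat passband and deep rejection of supra-Nyquist images, while a short 'time-optimized' kernel achieves neither.

```python
import numpy as np

def windowed_sinc(num_taps, cutoff=0.25):
    """Low-pass FIR: sinc truncated by a Blackman window, unity DC gain.
    cutoff is a fraction of the (oversampled) sample rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.blackman(num_taps)
    return h / h.sum()

def magnitude_db(h, nfft=8192):
    return 20 * np.log10(np.abs(np.fft.rfft(h, nfft)) + 1e-12)

long_fir = windowed_sinc(255)    # long kernel: close to the ideal brickwall
short_fir = windowed_sinc(9)     # short 'time-optimized' kernel

freqs = np.fft.rfftfreq(8192)    # normalized frequency, 0 .. 0.5
image_band = freqs > 0.35        # region that should be fully rejected

worst_long = magnitude_db(long_fir)[image_band].max()
worst_short = magnitude_db(short_fir)[image_band].max()

# the long sinc rejects the image band by tens of dB more than the short kernel
print(worst_long, worst_short)
```

On this design the long kernel's worst image-band leakage sits far below -60 dB, while the 9-tap kernel's leakage stays within about 20 dB of full scale, letting spurious ultrasonic images through largely unattenuated.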

Also, all these revisionist engineers erroneously believe that digital filter design necessarily requires a compromise tradeoff between two conflicting performance capabilities, time domain performance vs. frequency domain performance - but the sinc brickwall filter's perfect performance in both time and frequency domains proves them all wrong, and proves yet again that they don't have a clue about how digital actually works.

Note further that, in an analog impulse response test, it is physically impossible to generate and provide a truly instantaneous impulse (nor would we want to employ an impulse test signal that was too brief to activate the DUT's actual performance, e.g. a test impulse briefer than a solid state DUT's turn-on delay). So analog necessarily already uses a shaped pulse of substantial duration, not the infinitesimally short duration of a conceptually ideal impulse. And this substantial duration obviously contains plural 'instants' of time, thereby fully activating the analog filter to actually perform its design task of convolution (including both multiplication and addition), just as Goldilocks taught all of us as children.

There's much more to explore and analyze regarding the impulse response test, and we devote an entire installment of this serialized article Digital Done Wrong to just this one key topic. But the highlights above should suffice for your questions and challenges regarding MQA's unique, nearly 'ideal' impulse response, and its dire time domain distortion consequences.


Final Comment for this Installment


Throughout human knowledge (including all science and engineering), a proposed theory or model can be rejected via a priori analysis, so that it is overthrown before it even leaves the starting gate, before any empirical a posteriori experiments or tests are performed. Examples of such a priori analysis techniques include a reductio ad absurdum, or taking a model or parameter to the limit.

Here we can combine these techniques by considering a priori what happens if we take the revisionist digital engineering theory - that digital's time domain transient response is improved by shortening impulse response, a theory reaching its apotheosis in MQA - all the way to this theory's very own proposed ideal goal, as a limit.

This theory's ideal target goal, to achieve the best possible time domain transient response from their playback digital reconstruction filter, is a filter design with the briefest possible impulse response (i.e. a filter design whose output, in response to an input test impulse, replicates the brevity of that input test impulse as closely as possible, thereby passing the gold standard evaluative benchmark impulse response test with flying colors).

However, we can see a priori that this ideal target goal impulse response, of this revisionist digital movement's theory, does not provide the best possible time domain transient response performance from this playback digital reconstruction filter. Indeed, it provides precisely the very opposite, namely the worst possible time domain transient response accuracy (the worst possible distortion) in its 'reconstruction' of the original pre-sampled signal waveform.

Indeed, this ideal goal totally fails to accomplish any reconstruction whatsoever in the time domain. It simply and merely leaves standing each of the isolated spikes that each of the individual sample dots already represents, and totally fails to build any portion of the required time domain bridges (reconstructed signal paths) between and among these sample dot spikes. This time-domain-ideal reconstruction filter does nothing, and might as well not even exist nor be in the signal path.
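This limiting case can be stated in one line of mathematics or code: convolution with a single-sample unit kernel (a Kronecker delta) is the identity operation. A filter whose impulse response has shrunk all the way to one sample therefore passes the sample-dot spikes through untouched and reconstructs nothing between them, as this minimal sketch (with an arbitrary made-up sample sequence) shows:

```python
import numpy as np

samples = np.array([0.0, 0.3, -0.5, 1.0, 0.2])   # arbitrary sample dots
delta = np.array([1.0])          # impulse response shrunk to a single sample

out = np.convolve(samples, delta)

# convolution with the Kronecker delta is the identity operation:
# the 'filter' returns its input unchanged and builds no bridges at all
print(np.array_equal(out, samples))   # True
```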

Whenever any theory's ideal goal provides the worst possible performance instead of the best possible performance (here in the time domain), we know a priori that this theory is and must be wrong, indeed so very wrong that it must be backwards, preaching the very opposite of the truth.

We also know that the preachers and practitioners of this theory cannot possibly have a clue of comprehension about the way that their chosen field of science (here digital) actually works, since they are espousing (and foisting upon the consumer) a theory that does the complete opposite of what they claim, and that (here in digital) butchers the consumer's signal waveform - and moreover does this butchering in the very time domain to which they pretend such expertly knowledgeable and caring fealty!!!

