the new CD standard required. They felt that they could get better results by using a DAC chip with only 14 bits of resolution, and then adding noise shaping averaging to effectively improve the resolution of this 14 bit system to the point where it could deliver 16 bits of musical resolution (at least for music's middle and lower frequencies). However, as DAC chips later improved, and became capable of true 16 bit (then true 18 and 20 bit) PCM resolution, the apparent need for noise shaping averaging disappeared from PCM. After all, if 16 bits defined perfect sound forever, why should anyone bother to improve upon the 16 bit resolution of 16/44 PCM?
     So the basic tool of noise shaping averaging was already available, well known for 1 bit sigma delta systems (where it is virtually a requirement), and even lurking at the fringes of multibit PCM systems. The only reason it was not applied to all PCM systems was the haughty industry attitude that 16 bits of PCM defined perfect resolution, so there was no need to try for better resolution from 16 bit PCM. But now we know better. The measurements of Hotline 49, suggesting that we need at least 21 bits of resolution, have borne fruit in today's 20 bit and 24 bit PCM systems, and we can hear that they do sound better than 16 bit, with better, finer, more natural revelation of music's manifold subtleties.
     So all we still need is a change in attitude. Instead of haughtily regarding 16 bit PCM as perfect sound forever, we should humbly regard it as a crude, noisy digital signal that only coarsely approximates the correct musical waveform, just as we already regard the very crude and noisy 1 bit digital signal of sigma delta systems. The technique of noise shaping averaging can be used to improve this crude 16 bit resolution, just as this technique is used to improve the crude 1 bit resolution of sigma delta systems.
     Once we learn the lesson that super PCM, this hybrid of PCM and noise shaping averaging borrowed from sigma delta, sounds better than PCM, we can start using this new hybrid super PCM for all digital work. Our new multibit PCM systems might give us true 20 bit and true 24 bit resolution, but we shouldn't repeat the mistake of turning haughty again, and assuming that they give us perfect sound forever. Why not use noise shaping averaging to improve their sound as well?


Purcell Noise Shaping Averaging

     How does the Purcell do its noise shaping averaging? It's helpful to think of this averaging as a filtering operation, which integrates high frequency noise to detect the true trend of the lower frequency music waveform amidst that noise.
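     For the technically minded, here is a minimal sketch of that idea in code (purely illustrative, and emphatically not dCS's actual algorithm). A simple running average integrates away broadband high frequency noise, while leaving the slower underlying trend largely intact. The sample rate, tone frequency, noise level, and averaging length below are arbitrary assumptions, chosen just to make the point visible.

     import numpy as np

     rng = np.random.default_rng(0)
     fs = 352_800
     t = np.arange(4096) / fs
     trend = np.sin(2 * np.pi * 1_000 * t)                  # slow "musical" trend
     noisy = trend + 0.05 * rng.standard_normal(t.size)     # broadband noise on top

     taps = 16                                              # a simple running average
     smoothed = np.convolve(noisy, np.ones(taps) / taps, mode="same")

     mid = slice(taps, -taps)                               # ignore edge effects
     print("RMS error before averaging:",
           np.sqrt(np.mean((noisy[mid] - trend[mid]) ** 2)))
     print("RMS error after averaging: ",
           np.sqrt(np.mean((smoothed[mid] - trend[mid]) ** 2)))
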
     What are the basic tools needed for this filtering operation?
     The first required tool is upsampling. That's right, upsampling is merely a subordinate tool required by virtually every digital filtering operation, including the noise shaping averaging operation of the Purcell. Upsampling is not an end in itself; it does not provide a sonic benefit per se, and it does not extend the bandwidth of the music on the original recording (thus one should not expect to see any music bandwidth extension on measurements of the Purcell).
     Even the humble digital reconstruction filter, built into nearly every multibit PCM CD player and D/A convertor, relies on upsampling as a subordinate tool, in order to do its filtering work. The job of this digital reconstruction filter is simply to provide a phase correct brickwall filter in a digitally computed format (instead of using a complex analog filter, which has many problems, including phase errors). As discussed more extensively in IAR Hotline, this brickwall filter is required (as per the Nyquist theorem) to correctly reconstruct, from skimpily infrequent digital data samples, the higher frequencies of the musical waveform (say all frequencies above 2000 Hz in a 20,000 Hz bandwidth system; below 2000 Hz one can derive the correct waveform via the simple follow-the-dots model, but above 2000 Hz one must rely on a true brickwall filter to reconstruct the correct music waveform). This digital reconstruction filter operates by first upsampling the data stream coming off the 16/44 CD, usually by 4 or 8 times, from 44.1 kHz to 176.4 or 352.8 kHz.
     Why does even a humble digital reconstruction filter require upsampling as a subordinate tool? The computer, which does the number crunching that implements the brickwall filter, requires several computations per incoming data point, in order to efficiently produce an accurate integrated running average that smooths the input data in just the right way, filtering out all frequencies above the brickwall filter point. You might say that the computer has to be faster than the filter point in order to implement that filter frequency.
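     For readers who like to see the arithmetic spelled out, here is a minimal sketch of the general textbook scheme (not dCS's or any particular chip's actual filter): the CD data is stuffed out to 4 times the rate, and a windowed-sinc lowpass (the digitally computed brickwall) is then crunched at that faster 176.4 kHz rate. All names and values are illustrative assumptions.

     import numpy as np

     fs_in = 44_100
     ratio = 4
     fs_out = fs_in * ratio                # the computer crunches at 176.4 kHz

     # Pretend these are samples coming off the CD: a 10 kHz tone.
     n = np.arange(256)
     cd_samples = np.sin(2 * np.pi * 10_000 * n / fs_in)

     # Step 1: upsample by stuffing 3 zeros after every genuine sample.
     stuffed = np.zeros(cd_samples.size * ratio)
     stuffed[::ratio] = cd_samples

     # Step 2: a windowed-sinc FIR lowpass ("brickwall") at the original Nyquist
     # frequency of 22.05 kHz, computed at the faster 176.4 kHz rate.
     taps = 191
     k = np.arange(taps) - (taps - 1) / 2
     cutoff = (fs_in / 2) / fs_out         # 22.05 kHz, as a fraction of fs_out
     h = np.sinc(2 * cutoff * k) * np.hamming(taps)
     h *= ratio / h.sum()                  # unity gain, compensating zero stuffing

     reconstructed = np.convolve(stuffed, h, mode="same")

     # Away from the filter's start-up transient, the 176.4 kHz stream should
     # follow the original 10 kHz waveform closely (error well under 1%).
     mid = np.arange(400, 600)
     ideal = np.sin(2 * np.pi * 10_000 * mid / fs_out)
     print("max reconstruction error:", np.max(np.abs(reconstructed[mid] - ideal)))
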
     Analogously, consider a filter in a speaker crossover, whose job it is to filter out frequencies above 5000 Hz from the midrange driver. Its job would be to average, integrate, smooth out the incoming music signal, so that only frequencies below 5000 Hz remain. In order to do this job, the filter has to be able to look at all musical frequencies higher than 5000 Hz, and reject them or absorb them. If the crossover filter lacks the ability to look accurately at frequencies above the 5000 Hz filter point (say because its capacitors and inductors fail to act properly above 5000 Hz), then the filter won't be able to accurately filter out frequencies above 5000 Hz. In sum, for any filter to work properly, it has to be able to look both above and below the filter point.
     Likewise, a computer implementing a filter via digital computation has to be able to look both above and below the filter point. To enable this computer to look above the filter point, all we have to do is make its number crunching run faster, so it can look at and reject higher frequencies. So it's convenient to run this computer at several times (4 or 8 times) the sampling rate of the incoming data stream that we want to filter (this also yields further benefits beyond the scope of this discussion, such as moving the unwanted spectrum images farther away).
     Of course, if we raise the computer's number crunching rate to several times the true sample rate of the input digital data, we have a new minor problem. The computer's number crunching is devouring numbers 4 times or 8 times faster than we have numbers coming off the CD. So what can we do? Luckily, it turns out that all we have to do is feed the computer some extra dummy numbers, to fill up its faster devouring rate. We can simply stuff extra numbers in between the genuine numbers coming off the CD. Those extra numbers might be simply set to zero, or they might simply be repetitions of the previous genuine number from the CD. For example, let's look at just two sample periods coming off the CD. Suppose the two genuine numbers coming off the CD in these two sample periods were the two numbers 5, 6. We could feed the computer 4 times as many numbers, at a 4 times faster rate, by feeding it the sequence 5, 0, 0, 0, 6, 0, 0, 0 (or the sequence 5, 5, 5, 5, 6, 6, 6, 6).
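     In code, this charade is almost embarrassingly simple. The little sketch below (purely illustrative) shows both flavors of stuffing for the two genuine CD samples just mentioned.

     cd_samples = [5, 6]
     ratio = 4

     zero_stuffed = []
     repeated = []
     for x in cd_samples:
         zero_stuffed += [x, 0, 0, 0]      # stuff zeros after each genuine sample
         repeated += [x] * ratio           # or just repeat the genuine sample

     print(zero_stuffed)                   # [5, 0, 0, 0, 6, 0, 0, 0]
     print(repeated)                       # [5, 5, 5, 5, 6, 6, 6, 6]
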
     This simple charade is enough to make the computer happy, and allow it to do its filtering calculation. Note emphatically that this charade does not create any new information, nor does it extend the true sampled information of the incoming data to some higher frequency or wider bandwidth. This charade merely stuffs dummy data into the computer, in order to allow it to crunch numbers at a faster rate, which in turn is done in order to allow the computer to effectively work as a filter. So this charade is actually merely a subordinate tool to a subordinate tool, the end goal being to effectively use a computer as an averaging filter.
     This simple charade has a name. It's called upsampling. As you can see, upsampling hardly deserves the accolades bestowed upon it by the popular press. It is not an end in itself. By itself, it does not accomplish anything. It certainly does not extend the bandwidth of music or magically make it sound better.
     Even a simple digital reconstruction filter in a humble CD player already employs and relies on the subordinate tool of upsampling. Although such digital filters do execute averaging or integrating of the signal, they don't perform the additional step of noise shifting or noise shaping. A computer can be programmed to do this additional step. In this case it relies even more on upsampling as a subordinate tool. Noise shifting or shaping involves a computer calculation in which lower frequency statistical trends are detected amidst broadband noise. The computer calculation highlights these lower frequency trends, reducing the noise that surrounds them in their lower frequency portion of the spectrum. But noise, like energy, is an entity that demands conservation; you can't just get rid of it. So what happens to the noise that the computer calculation reduces in the lower frequency portion of the spectrum? It gets dumped (by this same computer calculation) into the higher frequency portion of the spectrum, as extra (worse) noise there.
     How does this benefit music? It can benefit music if you set up the noise shifting computer calculation to look at a very wide spectrum, much wider (extending to a much higher frequency) than the musical spectrum. If you do this, then the lower frequency portion, where the noise shifting computer calculation reduces noise, can correspond to the entire musical spectrum. And the higher frequency portion, where the extra noise gets added, can correspond to frequencies entirely above the musical spectrum (where they can later be filtered out and not heard). In this way, the noise shifting computer calculation can reduce noise (thus improving effective bit resolution) for essentially the entire musical spectrum, while dumping the extra noise entirely above the musical spectrum.
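     To make the mechanism concrete, here is a minimal sketch of a generic first-order error-feedback noise shaper, the simplest textbook form of this computer calculation (the Purcell's actual filter is proprietary and surely more sophisticated). The 176.4 kHz rate, the deliberately coarse 8 bit requantizing grid, and the 20,000 Hz band edge are illustrative assumptions chosen to make the effect easy to see: each sample's rounding error is fed back and subtracted from the next sample, which pushes the error energy up above the musical spectrum.

     import numpy as np

     fs = 176_400                          # a wide, oversampled spectrum
     t = np.arange(1 << 15) / fs
     # A stand-in for the music signal: two unrelated tones inside the audio band.
     music = 0.4 * np.sin(2 * np.pi * 997 * t) + 0.3 * np.sin(2 * np.pi * 3_001 * t)

     step = 2.0 / (1 << 8)                 # a deliberately coarse (8 bit) grid

     def quantize_plain(x):
         return np.round(x / step) * step

     def quantize_noise_shaped(x):
         # First-order error feedback: subtract the previous rounding error from
         # the next sample before rounding it.
         out = np.empty_like(x)
         err = 0.0
         for i, v in enumerate(x):
             v = v - err
             q = np.round(v / step) * step
             err = q - v
             out[i] = q
         return out

     def band_error(sig, f_lo, f_hi):
         # RMS of the quantization error spectrum between f_lo and f_hi.
         spec = np.fft.rfft(sig - music)
         freqs = np.fft.rfftfreq(music.size, 1 / fs)
         band = (freqs >= f_lo) & (freqs <= f_hi)
         return np.sqrt(np.sum(np.abs(spec[band]) ** 2)) / music.size

     for name, q in (("plain rounding", quantize_plain(music)),
                     ("noise shaped  ", quantize_noise_shaped(music))):
         print(name,
               " error below 20 kHz:", round(band_error(q, 0, 20_000), 6),
               " error above 20 kHz:", round(band_error(q, 20_000, fs / 2), 6))

     Running this shows the shaped version with less error energy inside the musical spectrum, and correspondingly more dumped above it, which is exactly the trade being described here.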
     Note, however, that this plan can work only if the computer calculation works with and looks at a spectrum that is much wider than the musical spectrum, that extends to a frequency several times higher than the music. In order to work with and look at a higher frequency, a computer has to do its calculations faster, since higher frequency events happen faster (by definition). In other words, the computer must do its work with and look at a sampling rate which is several times faster than the basic minimum sampling rate that was sufficient to capture the music spectrum (which of course is what is coming off your CD).
     It's time once again to call in our old friend, that subordinate tool, that charade called upsampling. Our end goal this time is to employ noise shifting (shaping) averaging, to reduce noise and improve bit resolution within the musical spectrum. In order to do this by computer calculation, we have to get the computer going faster than the input sampling rate that covers only the musical spectrum, so that the computer can look at a wider spectrum, extending to higher frequencies, and dump the extra noise above the musical spectrum, into the higher frequencies of this wider spectrum. It's easy enough to get computers to crunch numbers faster, by simply increasing the clock rate. But, once again, we have to supply extra numbers in the input data stream to the computer, since we have now increased its speed so it is devouring numbers several times faster than they are coming off the CD. So we upsample. To upsample, we simply take the stream of genuine data coming off the CD, and create a new stream with 4 (or 8) times as many slots, flowing by 4 (or 8) times faster. We then simply fill the many empty slots with dummy numbers. No big deal. Doesn't do anything magic. All it does do is keep the computer happy, allowing it to crunch numbers at a faster rate.
     Of course, we now know that allowing the computer to crunch numbers at a faster rate in turn allows it to look at and work with a wider spectrum, including frequencies above the musical spectrum. And that in turn allows it to dump extra noise above the musical spectrum and reduce noise within the musical spectrum, as part of a noise shifting (shaping) calculation. And that in turn allows it to improve effective bit resolution within the musical spectrum, thanks to the statistical trend averaging that occurs with noise shaping. And, finally, it is this improved effective bit resolution that provides audible sonic benefits to music.
     The charade of upsampling might be merely a subordinate tool of noise shaping averaging, but it is a necessary tool. Without upsampling, the Purcell would lack the wider spectrum in which to dump the extra noise, and thus it would be crippled in its ability to reduce noise and improve bit resolution within the musical spectrum. Thus, you should always set up the Purcell to upsample, to output a higher sampling rate than the input (assuming you want to enjoy the benefits of improved effective bit resolution). One should not expect to see the Purcell yield any improvement in effective bit resolution or reduction in noise unless one does set up an upsample (for example, merely asking the Purcell for an increase in word length, say from 16 to 24, should not furnish any benefit unless one also simultaneously asks for an upsample, an increase in sampling rate).
     The second tool used by the Purcell, in order to achieve its goal of noise shaping averaging, is an increase in word length. This would typically call for setting the Purcell to a 24 bit output word length when the input word length is 16 bits (from a 16/44 CD). Note that increasing the word length is not the same as increasing bit resolution. Increasing the word length is simply a matter of outputting each digital sample (word) with more bits than were input. There's nothing magical about this. And increasing the word length per se does not improve the bit resolution of the music signal nor improve the sound. For example, if you accept 16 bit data sample words as input and then simply add 8 zeros as the least significant bits, you can now output 24 bit words -- but you haven't added any new information, nor have you improved the resolution of the signal so it more accurately describes the music.
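     That zero padding trick is literally a one line operation. Here is a tiny, purely illustrative sketch (the sample value is arbitrary):

     sample_16 = 0x1234                    # some 16 bit sample word
     sample_24 = sample_16 << 8            # same value, padded to 24 bits with zeros

     print(f"{sample_16:016b}")            # 0001001000110100
     print(f"{sample_24:024b}")            # 000100100011010000000000 -- no new info
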
     So, just as we saw above with upsampling, increasing the word length per se is not a goal of the Purcell, nor is it per se responsible for any sonic improvement. Why then should the Purcell bother creating a longer word length with more bits for the output, and why should we set up the Purcell's adjustable settings to do so? The Purcell uses the increased word length as a subordinate tool, just as it used upsampling as a subordinate tool. The Purcell's primary goal is noise shaping averaging, an operation which can truly improve the effective bit resolution. Noise shaping averaging is accomplished by digital computation, a process which produces additional detail as less significant digits (as any multiplying or averaging arithmetic calculation does). This additional detail actually can describe the trend of the music signal, averaged over a number of samples, more accurately than the resolution of any one sample. Analogously, the average family has 2.3748 children, a more accurate figure than you could obtain by looking at any one family as a data sample (the closest you could get there would be a crudely approximate 2 or 3, since no one has .3 children, let alone .3748 children).
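     Here is the same point as a sketch (again illustrative; the target value and sample count are arbitrary). Each individual observation gets rounded to a whole number, just as each individual family can only have a whole number of children, yet the average over many observations recovers the finer underlying figure. The small random spread added before rounding plays the role of the noise that always rides on a real music signal.

     import numpy as np

     rng = np.random.default_rng(0)
     true_value = 2.3748                   # the underlying "trend" we want to know
     # Each observation is rounded to a whole number, like a coarse quantizer.
     observations = np.round(true_value + rng.uniform(-0.5, 0.5, size=10_000))

     print("any single sample:", observations[0])        # only ever 2.0 or 3.0
     print("average of many:  ", observations.mean())    # close to 2.3748
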
     This additional detail can describe the music signal waveform more accurately than any one crudely approximate 16 bit sample, so it can improve the effective bit resolution. It is especially helpful in more accurately describing the trend of music signals for which there are many 16 bit samples per cycle of waveform at 44.1 kHz, namely music's middle and lower frequencies. The Purcell calculates this additional musical detail as part of its noise shaping averaging operation.
     It's important to emphasize this. The additional musical detail comes from the noise shaping averaging process, accomplished by the computer calculations; it does not come from simply increasing the word length so the Purcell's output has more bits than the input.
     Of course, now that the Purcell's noise shaping averaging calculations have given us this additional musical detail, it sure would be nice to be able to pass this higher resolution digital data on to the D/A convertor. Setting the Purcell's output word length, to more bits than the input word length, gives the Purcell a place to put this additional musical detail, so it can be passed on to your D/A convertor. Thus, increasing the word length per se does not accomplish anything beneficial, but it is a useful subordinate tool that allows the Purcell to easily pass improved musical resolution, from its noise shaping averaging operation, on to your D/A convertor.
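     A small, purely illustrative sketch of why the longer output word matters (the numbers are arbitrary, and the scaling assumes full scale equals 1.0): a high precision value produced by the averaging computation survives when it is rounded into a 24 bit word, but most of the recovered detail is thrown away if the output word stays at 16 bits.

     computed = 0.123456789                # a high-precision result of the averaging

     as_16_bit = round(computed * (1 << 15)) / (1 << 15)
     as_24_bit = round(computed * (1 << 23)) / (1 << 23)

     print("computed value:       ", computed)
     print("stored in 16 bit word:", as_16_bit)    # coarser approximation
     print("stored in 24 bit word:", as_24_bit)    # keeps far more of the detail
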
     It's probably useful to set the Purcell's output word length to the maximum that your D/A processor can profitably accept as input. By doing this, you can at least be sure that there won't be needless truncation of improved musical detail that the Purcell has recovered.
     Moreover, you should also ask for the maximum upsampling that the Purcell can give you, consistent with what your D/A convertor can accept as an input sampling rate. A greater upsampling is better because it creates a wider high frequency spectrum for the Purcell's noise shaping computer to use as a dumping ground for the extra noise, and it gives the computer, with its noise shaping filter algorithm, more computation samples per cycle of the input music signal. This in turn allows the Purcell's computer to further reduce noise and further improve effective bit resolution within the musical spectrum.
     This is especially relevant for music's treble frequencies, which are the closest to that perilous border dividing the musical spectrum where noise is reduced, and bit resolution is improved, from the upper spectrum where extra noise is dumped and resolution is actually worsened by a noise shaper. If you want the Purcell to improve not just music's middle and lower frequencies but also music's treble frequencies, it is crucial that you set up the Purcell for the maximum possible upsampling (e.g. from 44.1 to 192 kHz). This will yield the most computational oversamples possible (e.g. 4) per cycle of musical treble waveform at the top of the musical spectrum (e.g. 20,000 Hz). Merely 4 oversamples doesn't furnish much benefit from averaging, and therefore can't reduce noise or improve effective bit resolution by much, but it might allow the Purcell's computer to at least accomplish some benefit for music's highest treble notes. Recall that music's lower frequency notes naturally benefit far more, since they have many more than 4 oversamples to average, per cycle of music waveform.
     If you were to set the Purcell's upsampling to output at only 96 kHz instead of 192 kHz (given the same 16/44 input), there would only be half as many oversamples for the computation to average, so bit resolution could not be improved as much, over most of the musical spectrum, and especially so for music's highest treble notes, where there would now be only 2 oversamples to average together.
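     As a rough back-of-envelope check of these ratios (the exact figures depend on how one counts, and the 9 dB per octave rule of thumb below is the standard textbook figure for an idealized first-order noise shaper, not a specification of the Purcell's actual filter):

     import math

     fs_cd = 44_100
     for fs_out in (96_000, 192_000):
         print(f"{fs_out} Hz output: {fs_out / fs_cd:.2f} computation samples "
               f"per genuine CD sample")

     # Going from 96 kHz to 192 kHz doubles the oversampling ratio (one octave).
     # For an idealized first-order noise shaper, each octave buys roughly 9 dB
     # (about 1.5 bits) of extra in-band resolution; higher-order shapers gain more.
     octave_gain_db = 9.0 * math.log2(192_000 / 96_000)
     print(f"192 kHz vs 96 kHz: roughly {octave_gain_db:.0f} dB "
           f"(~{octave_gain_db / 6.02:.1f} bits) further in-band improvement")
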
     One of our chief criticisms of Bitstream was that music's treble transients sounded like a burst of fuzzy noise instead of the genuine musical sound. This sonic finding was corroborated by our measurements, which showed very bad noise at the treble frequencies (much worse than in multibit PCM CD players). Even Philips' own graphs showed the system noise climbing very steeply, still within the musical spectrum below 20,000 Hz, for music's treble frequencies. Thus, Bitstream already failed to be effective enough in reducing noise, and improving bit resolution, for the upper end of the musical spectrum. It was dumping the extra noise above the musical spectrum, but so close to

(Continued on page 23)