something must be going wrong, even with just slight degradations of the eye pattern. But that something would not be an amplitude error. Why not? Because, if the eye pattern degradation is only slight, your CD player will still guess correctly which whole number of time periods T lies nearest to the actual time of the zero crossing. The analog zero crossing of the eye pattern could be off the mark by as much as half a time period T, and your CD player would still pick the correct whole number, and thereby would still encode the correct music signal amplitude in the digital data stream it generates.
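      To make that nearest-whole-number decision concrete, here is a minimal Python sketch of the idea (the interval values and the period T are invented for illustration; a real player recovers T with its own clock-recovery circuitry and decodes run lengths defined by the CD standard):

```python
# Illustrative sketch: rounding measured zero-crossing intervals to the
# nearest whole number of clock periods T.  The interval values and the
# period are hypothetical; real players recover T with a clock-recovery
# loop and decode run lengths per the CD standard.

T = 1.0  # one channel-bit period, in arbitrary units

def decode_interval(measured_interval):
    """Return the decoded run length: the nearest whole number of periods T."""
    return round(measured_interval / T)

# A mildly degraded eye pattern: crossings are off by up to +/- 0.3 T,
# yet every interval still rounds to the correct whole number.
for measured in [6.8, 7.3, 4.1, 3.7]:
    print(measured, "->", decode_interval(measured), "periods")
```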
      But then, what happens if the eye pattern gets quite bad in quality, say with time jitter or wander that takes some zero crossings past the halfway point, so that your CD player guesses wrong? In this case, your CD player would still create the correct music signal amplitude. That's because the digital data stream your CD player creates from the analog waveform coming off the CD also contains parity check bits. These parity check bits had better check out. If they don't, then the digital error checking chips, a little later in your CD player's chain, can tell that the first early stage in your player guessed wrong when it interpreted, say, 7.5 periods between zero crossings as 7 rather than 8, and so put the wrong sequence of 1's and 0's into its freshly created data stream. Furthermore, there is enough power and redundancy in this parity error checking that your CD player's error correction chip can not only tell that the early stage guessed wrong, but can also fix the error. In other words, it not only knows that the guess was wrong and produced the wrong data stream at that point; it also knows what the right data stream is at that point, so it can right the wrong and conceal the fact that there was ever any goof. Obviously, all's well that ends well. If the error correction chip can conceal the original wrong guess due to a poor quality eye pattern, and can make the amplitude data correct, then there is no amplitude error in the final music signal you hear, and therefore there cannot be any amplitude based sonic clue that the eye pattern quality was poor. In sum, even if the eye pattern quality gets pretty poor, no amplitude errors result, so erroneous music signal amplitudes cannot be the clue that enables us to hear so sensitively when the eye pattern quality deteriorates even slightly.
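      To illustrate the detect-and-correct principle, here is a toy sketch using a simple Hamming(7,4) code. A real CD player uses the far more powerful Cross-Interleaved Reed-Solomon Code (CIRC); the sketch below only shows how redundant check bits let a later stage both notice and repair a single wrong bit produced by an earlier wrong guess:

```python
# Toy illustration of detect-and-correct parity checking.  Real CD players
# use CIRC (Cross-Interleaved Reed-Solomon Coding), which is far more
# powerful; this Hamming(7,4) sketch only shows the principle.

def hamming74_encode(d):
    """Encode 4 data bits d[0..3] into a 7-bit codeword with 3 parity bits."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # bit positions 1..7

def hamming74_correct(c):
    """Locate and flip a single erroneous bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means "no error detected"
    if syndrome:
        c[syndrome - 1] ^= 1          # repair the offending bit in place
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                      # an early stage "guessed wrong" on one bit
print(hamming74_correct(codeword) == data)   # True: the goof is concealed
```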
      Incidentally, if the eye pattern quality gets very bad, then even the error correction circuits cannot figure out what the correct amplitude should be for a certain sample, so your CD player will interpolate between the previous and succeeding music samples (and if these are also uncorrectably wrong, then your player will do a temporary soft mute). If and when such interpolation occurs, we might be able to hear sonic degradation from it, but interpolation happens only when the eye pattern has gotten really bad, presumably much worse than it ever got during our experiments with various cleaning fluids. In assembling all the sonic evidence above, we heard the quality of the music being continuously better or worse (due to slight external factor changes that could only affect eye pattern quality). If the worse performances had been due to interpolation, then the music would have sounded good most of the time, with only occasional bursts of inferiority, since interpolation occurs only occasionally. So it follows that the inferior sonics we heard were not due to amplitude errors so bad that interpolation needed to be invoked.
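      A minimal sketch of this concealment strategy, with invented sample values and flags (real players perform concealment inside the CIRC decoding chain):

```python
# Sketch of error concealment when a sample is flagged uncorrectable:
# interpolate an isolated bad sample from its neighbors; mute otherwise.

def conceal(samples, bad_flags):
    """Interpolate isolated uncorrectable samples; mute the rest."""
    out = list(samples)
    for i, bad in enumerate(bad_flags):
        if not bad:
            continue
        prev_ok = i > 0 and not bad_flags[i - 1]
        next_ok = i < len(samples) - 1 and not bad_flags[i + 1]
        if prev_ok and next_ok:
            out[i] = (samples[i - 1] + samples[i + 1]) // 2  # interpolate
        else:
            out[i] = 0                                       # soft mute
    return out

print(conceal([100, 120, 999, 160, 180], [False, False, True, False, False]))
# -> [100, 120, 140, 160, 180]: the bad sample becomes its neighbors' average
```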
      Now, if amplitude errors cannot explain why we hear sonic changes with slight changes in eye pattern quality, what else could explain it? Well, remember that the music signal we hear is a two-dimensional waveform, with amplitude being but one of the two dimensions, and time being the other. Remember too that your CD player is responsible for actually creating the whole output music signal, and that if it creates a music waveform with the right amplitude at the wrong time, the music will be distorted as surely as if it had created the wrong amplitude at the right time. Clearly, we should now look critically at that other dimension, time.
      When your CD player literally creates the music signal, it has to create both dimensions, amplitude and time. It has to decide at precisely what amplitude to place each sample point, and it also has to decide precisely when, in the ongoing march of time, to play that sample point from the DAC chip. If it plays the right amplitude but at the wrong time, even a slightly wrong time, then it will be distorting your music waveform. What's worse, if the precise moment it chooses to play each sample varies from ideal periodicity in a jittery or wandering manner, then your music will be distorted via intermodulation distortion (the FM variety), with the jitter or wander modulating the music signal (just as flutter and wow do in analog turntables and analog tape machines).
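      Here is a toy simulation of that mechanism, with arbitrary figures chosen only to make the effect visible: a pure tone, played back at jittered instants, acquires sidebands (distortion products) that were never in the original signal:

```python
# Toy simulation of sampling-time jitter acting like FM.  All numbers
# (rates, frequencies, jitter depth) are arbitrary illustrations.
import numpy as np

fs = 44100.0                  # sample rate, Hz
n = np.arange(1 << 15)
tone = 1000.0                 # the music signal: a 1 kHz sine
wander = 100.0                # the jitter itself wanders at 100 Hz

ideal_t = n / fs
jitter = 1e-6 * np.sin(2 * np.pi * wander * ideal_t)   # 1 microsecond of periodic jitter
actual_t = ideal_t + jitter                             # when the samples actually play

signal = np.sin(2 * np.pi * tone * actual_t)
spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Sidebands appear at tone +/- wander (900 Hz and 1100 Hz): intermodulation
# products that were never in the original music.
for f in (900.0, 1000.0, 1100.0):
    k = np.argmin(np.abs(freqs - f))
    print(f, "Hz level:", 20 * np.log10(spectrum[k] / spectrum.max()), "dB")
```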
      How does your CD player decide precisely when to play each sample? In an ideal CD player configuration there would be a master clock right next to the DAC chip, where the critical final conversion from digital to analog is made. This minimizes time indeterminacy and jitter at the point that counts most, the final conversion from digital data to the analog music that you hear. Isolating and buffering stages would then distribute this master clock signal to the other parts of the nearby digital circuitry (after the FIFO buffer) that need it. On the other hand, the transport, the eye pattern signal coming from it, and all the nearby input circuitry would be totally isolated from the master clock, and would not be tied to it in any way. This input circuitry would feed the FIFO buffer, and the transport speed would be controlled entirely by a monitor that simply sensed whether the FIFO buffer was getting too empty or too full.
      This nearby input circuitry converts the analog eye pattern waveform into a single-bit digital data stream, then decodes that specially coded bit stream, and then assembles the decoded bits into digital words representing 16-bit amplitude (plus error correction data, etc.). All this input circuitry should have its own local timing clock, to tell it how long the period T should be, and to guide (strobe) all its digital decoding and word assembly. The clocking that guides all this input circuitry would not in any way be tied to the master clock (next to the DAC chip) that guides the later circuitry. In other words, the FIFO buffer would be a totally asynchronous firewall between the early transport input circuitry and the later music processing circuitry. With such a system setup, the timing of the eye pattern zero crossings should not have any effect upon the timing of the final music samples as output from your CD player.
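      The following conceptual sketch, with invented rates and thresholds, illustrates the ideal arrangement just described: the DAC side pulls exactly one sample per master-clock tick out of a FIFO buffer, the transport side pushes samples in at whatever irregular rate the disc delivers them, and the only coupling between the two domains is a slow speed command that keeps the FIFO near half full:

```python
# Conceptual sketch of a FIFO firewall between two asynchronous clock
# domains.  All rates, depths, and thresholds are invented for illustration.
from collections import deque
import random

TARGET_FILL = 50
fifo = deque("sample" for _ in range(TARGET_FILL))   # pre-filled to the target depth
transport_rate = 1.0          # nominal samples delivered per master-clock tick
delivery_accumulator = 0.0

for tick in range(20000):
    # Transport side: the delivery rate wobbles (wow, flutter, eye-pattern
    # trouble), but none of that timing reaches the DAC side directly.
    delivery_accumulator += transport_rate * (1.0 + random.uniform(-0.05, 0.05))
    while delivery_accumulator >= 1.0:
        fifo.append("sample")
        delivery_accumulator -= 1.0

    # DAC side: exactly one sample per master-clock tick, always.
    if fifo:
        fifo.popleft()

    # The only feedback path: a slow speed command that keeps the FIFO
    # hovering near half full, so it never overflows or runs dry.
    transport_rate = 1.01 if len(fifo) < TARGET_FILL else 0.99

print("final FIFO depth:", len(fifo))   # stays close to TARGET_FILL
```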
      Incidentally, this FIFO buffer firewall could theoretically be placed anywhere in the digital processing chain (even at the point where the incoming data is still a stream of single bits), and practical engineering considerations would dictate the best practical location. The important caveat is that, wherever the FIFO buffer firewall is placed, any timing needs of the circuitry before the FIFO buffer must be totally isolated from the master clock driving the DAC chip.
      Unfortunately, it seems that practical CD players do not conform to this ideal configuration. Instead, they seem to somehow tie the timing of the transport and input word assembly circuitry to the master clock. This would mean that these CD players effectively tie the zero crossings of the eye pattern to the master clock. Generally, there is some kind of feedback or phase locked loop which ties the two together. The laudable intent of this tie is to make the transport and input circuitry obey the master clock. But all feedback systems and phase locked loops have finite limitations: limited gain, compromises in time constants, and less than infinite rejection. We suspect that some of the temporal meanderings of the transport, and of the zero crossings in a suboptimal quality eye pattern, manage to slightly contaminate the master clock and/or the timing of the digital signal as it propagates through your CD player toward the DAC chip, thanks to this tie-in between master clock and transport eye pattern.
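      As a generic illustration of why such rejection is finite (this is a toy model, not a schematic of any particular player), consider a simple first-order loop asked to follow timing derived from the transport side. Reference wander slower than the loop bandwidth is tracked rather than rejected, so it appears on the clock the loop produces:

```python
# Toy model of finite rejection in a first-order tracking loop.  The loop
# bandwidth and wander figures are invented purely to show the principle.
import math

loop_gain = 0.01        # per-sample correction factor (sets the loop bandwidth)
pll_phase = 0.0
worst_error = 0.0

for n in range(20000):
    # Reference phase wanders slowly (think: transport speed meandering).
    reference_phase = 0.5 * math.sin(2 * math.pi * n / 5000.0)
    # First-order loop: correct a fraction of the phase error each sample.
    pll_phase += loop_gain * (reference_phase - pll_phase)
    worst_error = max(worst_error, abs(pll_phase))   # deviation from a serenely fixed phase

# The loop's output clock is no longer steady: it has inherited most of
# the slow wander of the reference it was told to follow.
print("peak wander inherited by the loop's clock:", worst_error)
```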
      Once these contaminating time variations reach the input of the DAC chip, the game is over: you have distortion of the final music signal from your CD player. And even very slight timing variations, on the order of a few hundred picoseconds, can create distortion as bad as if a whole amplitude bit had been dropped or were wrong.
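      A back-of-envelope check of the magnitudes involved (a simple worst-case estimate, not a measurement of any particular player): for a full-scale 20 kHz sine wave quantized to 16 bits, a timing error of only a couple hundred picoseconds at the waveform's steepest point already amounts to a one-LSB amplitude error:

```python
# Back-of-envelope estimate: amplitude error ~= slew rate x timing error.
import math

full_scale = 1.0                     # peak amplitude, arbitrary units
bits = 16
lsb = 2 * full_scale / (2 ** bits)   # one least-significant-bit step
f = 20000.0                          # highest audio frequency, Hz

max_slew = 2 * math.pi * f * full_scale       # steepest slope of the sine, units/s
jitter_for_one_lsb = lsb / max_slew           # timing error that equals one LSB

print(jitter_for_one_lsb * 1e12, "picoseconds")   # roughly 240 ps
```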
      CD players often do contain a FIFO buffer for what's called timebase correction, which does reduce the transport's high wow and flutter. But merely having and using a FIFO buffer is not sufficient for it to be an effective firewall that protects the timing integrity of your master clock from contamination. For a FIFO buffer to be an effective isolator, it is also necessary that the timing of the circuitry before the FIFO buffer not be tied in any way to the master clock that times the circuitry after the FIFO buffer. This means that the timing of all the circuitry prior to the FIFO buffer must be totally asynchronous relative to the timing of all the circuitry after the FIFO buffer.
      Even a slight degradation of the eye pattern quality could produce at least a few picoseconds' worth of temporal indeterminacy in exactly when each and every zero crossing occurs. For instance, a slight reduction in CD surface reflectivity could slightly reduce the eye amplitude in the eye pattern, or could make the ramp slope at the zero crossing less steep, which would make the precise time of the zero crossing less determinate. A slight reduction in reflectivity could also worsen the signal to noise ratio, thereby making noise a larger factor in degrading the determination of precisely when each zero crossing occurred. Slowly acting AGC circuits in the laser-reading photodetector amplifier might be able to approximately compensate for gross variations in eye pattern amplitude, but not for slight variations, and they could not improve a degraded signal to noise ratio.
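      To first order, the timing uncertainty of a zero crossing is the voltage noise riding on the eye pattern divided by the slope of the waveform at the crossing, so a shallower slope and a worse signal to noise ratio both smear the apparent crossing time. A small sketch with invented figures:

```python
# Sketch: timing uncertainty ~= voltage noise / slope at the zero crossing.
# The slope and noise figures below are invented for illustration only.
steep_slope = 1.0e6      # volts per second at the crossing (healthy eye pattern)
shallow_slope = 0.5e6    # volts per second (degraded eye pattern)
noise = 0.0002           # volts rms of noise riding on the eye pattern

print("healthy eye: ", noise / steep_slope * 1e12, "picoseconds of timing uncertainty")
print("degraded eye:", noise / shallow_slope * 1e12, "picoseconds of timing uncertainty")
```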
      Thus, it at least makes sense that even a slight degradation of eye pattern quality might adversely affect not the exactitude of the amplitude dimension of the final music waveform, but rather the exactitude of its time dimension, which your CD player is wholly responsible for creating. If the time dimension is not exact for each and every sample, then distortion of the music waveform results, and that distortion could well be the sonic degradation that we in fact hear when the eye pattern is not at its best quality.

Adding It All Up

      If we add up everything we've learned in the four lessons above, it suddenly becomes clear that all kinds of CD tweaks can and indeed should be effective in making a sonic difference. Those who deride CD tweaks as snake oil are actually themselves the peddlers of improbability, or perhaps they naively subscribe to the popular misbelief that bits is bits, that digital is robust if not immune to external influences, and therefore nothing can possibly make any sonic difference. Now we know better. Digital is vulnerable, fragile, and as susceptible to analog external influences as analog ever was. Digital is as fertile a ground for inventive tweaking as the vinyl LP ever was, and legitimately so.
