The Sampling Theorem does not recover everything; you can't recover what it didn't measure. However, the complete DAC system might be able to make a fair approximation. Keep in mind that the potential error I speak of becomes more prominent at higher frequencies. Also keep in mind that at CD's 44.1 kHz sample rate, a 10 kHz tone gets 4.41 samples per cycle, but a 20 kHz tone gets only about 2.2; at 1 kHz, you are taking 44.1 samples per cycle.
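If you want to check those figures, it's just the sample rate divided by the tone frequency. A quick Python sketch of the arithmetic (nothing to do with how any actual converter works):

```python
# Samples per cycle = sample rate / tone frequency, at CD's 44.1 kHz rate.
sample_rate_hz = 44100
for tone_hz in (1000, 10000, 20000):
    print(f"{tone_hz:>5} Hz tone: {sample_rate_hz / tone_hz:.2f} samples per cycle")
# 1 kHz -> 44.10, 10 kHz -> 4.41, 20 kHz -> ~2.2
```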
If you only take about two samples per cycle, the odds of catching the zero crossings are extremely slim; if you do catch the zero crossings and resolve phase, you miss the peak amplitudes, and if you catch the peak amplitudes, you miss the zero crossings. However, this is happening at frequencies that we probably can't really hear at normal listening volumes.
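Just to illustrate where those samples land, here's a small sketch. It only prints the raw sample values; a real DAC's reconstruction filter interpolates between them, which is the "fair approximation" I mentioned above:

```python
import math

# Raw sample values of a 20 kHz tone sampled at CD's 44.1 kHz rate
# (~2.2 samples per cycle). Illustrative only: this is not what comes
# out of the player after reconstruction filtering.
sample_rate_hz = 44100
tone_hz = 20000

for n in range(10):
    value = math.sin(2 * math.pi * tone_hz * n / sample_rate_hz)
    print(f"sample {n}: {value:+.3f}")

# The values wander: some samples fall near a peak, others near a
# zero crossing, and consecutive cycles are sampled at different phases.
```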
But we are somewhat off on a tangent here. Red Book CD does a pretty good job of reproducing music. If we bump that up to 96k samples per second, then at 20 kHz we are taking 4.8 samples per cycle, which will do a fair job of capturing frequency, phase, and amplitude (1 kHz = 96 samples per cycle, 10 kHz = 9.6, 20 kHz = 4.8). Perhaps overkill at the lower frequencies, but more than sufficient at realistic higher frequencies.
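The same division at 96k confirms the numbers in the parentheses:

```python
# Samples per cycle = sample rate / tone frequency, now at 96 kHz.
sample_rate_hz = 96000
for tone_hz in (1000, 10000, 20000):
    print(f"{tone_hz:>5} Hz tone: {sample_rate_hz / tone_hz:.1f} samples per cycle")
# 1 kHz -> 96.0, 10 kHz -> 9.6, 20 kHz -> 4.8
```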
But bumping up to 24-bit/96 kHz increases the file size considerably. The word length increases by 1.5x (16 bits to 24 bits) and the sample density increases by about 2.18x (44.1k to 96k samples per second), making the overall file size roughly 3.27x larger. In the past that was completely impractical, though with storage being so cheap today, the file size doesn't matter much.
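The multiplication, for anyone who wants to check it (assuming plain uncompressed PCM and the same channel count):

```python
# File-size multiplier going from 16-bit/44.1 kHz to 24-bit/96 kHz PCM.
bits_old, bits_new = 16, 24
rate_old, rate_new = 44100, 96000

word_factor = bits_new / bits_old   # 1.5x longer words
rate_factor = rate_new / rate_old   # ~2.18x more samples per second
total = word_factor * rate_factor   # ~3.27x overall

print(f"word length: {word_factor:.2f}x, "
      f"sample density: {rate_factor:.2f}x, "
      f"file size: {total:.2f}x")
```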
You can't know what you don't know, but with enough computing power, you can make a pretty good guess.
Though this is something of a distraction from my central point: CDs can sound better than they often do if the content is mixed right. As others have attested, they have CDs that sound stunning, and, as I assume they would also attest, they have CDs that sound very bland.
Again, the problem is not the medium, but rather the nature of the content.
... in my opinion.
Steve/bluewizard