One of the most commonly misunderstood aspects of signal processing is the Nyquist-Shannon Sampling Theorem. Many engineers misunderstand the theorem and select sampling frequencies that are too low, introducing errors into their designs. An understanding of the theory behind signal sampling is also useful to the communications engineer who wants a deeper understanding of how analog and digital filtering and information processing are applied in communications equipment.

## The Need for Sampling

Sampling analog, continuous, time-domain signals is an unavoidable step in enabling digital computers to represent data. Think about it for a moment. If we want to describe a signal on a digital computer, we have to represent the signal as a series of discrete values. Even if we use a formula to generate the signal values, the computer still stores a *separate* value for each input value. A fundamental question that we must answer is: given a *continuous* time-domain signal, how many *discrete* values do we require to describe it accurately?

Here is an example of how sampling a signal can go wrong. In this example, I have a wave with a frequency of 10 Hz being sampled 9 times per second. Look at the shape of the wave that the samples are actually describing. It looks like a much lower frequency wave with a frequency of 1 Hz! The samples have not sufficiently described the waveform and we are missing information!
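If you want to verify this numerically, here is a short numpy sketch (the 10 Hz signal and the 9 samples-per-second rate are taken from the example above):

```python
import numpy as np

f_signal = 10.0   # Hz: frequency of the continuous wave
f_sample = 9.0    # Hz: our (too low) sampling rate

# One second's worth of sample instants.
n = np.arange(9)
t = n / f_sample

# Samples of the 10 Hz wave...
samples = np.sin(2 * np.pi * f_signal * t)

# ...are identical to samples of a 1 Hz wave taken at the same instants.
alias = np.sin(2 * np.pi * 1.0 * t)
print(np.allclose(samples, alias))  # True
```

The two sine waves differ in argument by exactly 2πn at every sample instant, which is why the samples cannot tell them apart.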

## The Periodic Impulse Signal

Sampling, in *theory*, means that we take an *instantaneous* reading of a continuous signal at regular intervals. The frequency at which we take the readings is the inverse of the time between samples. The act of sampling a continuous time-domain signal is effectively the same as multiplying the continuous time-domain signal by 1 at the instant each sample is taken, and by zero at all other times.
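As a rough numerical sketch of that multiply-by-1-or-0 picture (the fine grid resolution and the 2 Hz test signal below are arbitrary choices of mine):

```python
import numpy as np

fine_rate = 1000                  # points/s on a fine grid standing in for continuous time
T = 0.1                           # sampling period: one sample every 0.1 s
t = np.arange(0, 1, 1 / fine_rate)
x = np.sin(2 * np.pi * 2.0 * t)   # a 2 Hz test signal

# Pulse train: 1 at each sample instant, 0 everywhere else.
pulses = np.zeros_like(t)
pulses[::int(T * fine_rate)] = 1.0

# Multiplying keeps the signal only at the sample instants.
sampled = x * pulses
print(np.count_nonzero(pulses))  # 10 samples in one second
```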

### Time-Domain

We can describe the act of sampling itself as a pulse train of unit impulse functions that occur at the time each sample is taken, as shown below. This is also referred to as a *comb function.*
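Written out with the parameters defined below (δ here is the Dirac delta; this is my rendering of the standard expression):

```math
s(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT - a)
```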

Where:

- n is an integer from -∞ to ∞
- T is the period between samples, in seconds.
- a is the offset from zero, in seconds.

### Frequency-Domain

The Fourier transform of the time-domain *comb function* above is simply another comb function in the frequency-domain with the characteristics shown in the image below. Remember that T is the period between time-domain samples.
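For the simple case of zero offset (a = 0), the standard transform pair gives a frequency-domain comb with impulses spaced 2π/T rad/s apart:

```math
S(\omega) = \frac{2\pi}{T} \sum_{k=-\infty}^{\infty} \delta\!\left(\omega - \frac{2\pi k}{T}\right)
```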

## Sampled (Discrete Time) Signal

We can describe a discrete time-domain signal as the product of a continuous time-domain signal and an impulse train as below:
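In symbols (again taking a = 0, so the impulses fall at t = nT), the sampled signal is:

```math
x_s(t) = x(t)\,s(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)
```

The sifting property of the impulse means only the values x(nT) survive the multiplication.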

### Fourier Transform and Spectrum

The Fourier transform of a discrete time signal is shown below:
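The transform of the product above works out to (my rendering of the standard result):

```math
X_s(\omega) = \int_{-\infty}^{\infty} x(t)\,s(t)\,e^{-j\omega t}\,dt = \sum_{n=-\infty}^{\infty} x(nT)\,e^{-j\omega n T}
```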

The way to think about the above is that you are taking the Fourier transform of the continuous time-domain signal x(t) at every location where the time-domain comb function is equal to 1. Another way to think about the above integral is to recall that multiplication in the time-domain is equivalent to *convolution* in the frequency-domain, and vice versa. Therefore, we can also write:
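Using the multiplication-convolution duality, the spectrum of the sampled signal is a scaled convolution of the two spectra (X and S being the transforms of x(t) and the comb, respectively):

```math
X_s(\omega) = \frac{1}{2\pi}\,X(\omega) * S(\omega)
```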

Let’s start by considering the arbitrary frequency-domain representation of a continuous, time-domain signal x(t) below:

From what we know about convolution, we can see that the convolution of the frequency-domain representation of the signal x(t) and the frequency-domain representation of the impulse train would be the following:
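Carrying out that convolution with the frequency-domain comb simply places a copy of X(ω) at every impulse location, i.e. the original spectrum repeated every 2π/T rad/s and scaled by 1/T:

```math
X_s(\omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} X\!\left(\omega - \frac{2\pi k}{T}\right)
```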

## Aliasing

In the image above you will notice that the bandwidth of the continuous time-domain signal x(t) easily extends beyond ±4π/T rad/s. When we sample the continuous signal with a period T, we see that the spectrum of the signal repeats itself every 2π/T rad/s. This repetitive component introduced by the act of sampling causes the frequency components of our signal to be distorted! By looking at the image you can see the areas of overlap. For instance, at π/T rad/s, you can see that the magnitude of the frequency component of the sampled signal is now the sum of the original and all of the repeated spectra! This means that the magnitude of this frequency component is now more than double what it used to be! This will have drastic effects on the shape of the sampled time-domain signal.

So how can we fix it? Well, what if we doubled our sampling frequency and used a period between samples of 0.5T? The resulting frequency-domain representation of the discrete time-domain signal is shown below:

We can see in the image above that the aliasing error introduced is now considerably smaller than it was before and the frequency components of the signal are not as greatly affected. This translates into less distortion of the discrete time-domain signal x[n] in comparison to the continuous time-domain signal x(t).

## Applying the Nyquist Sampling Rate

The Nyquist-Shannon sampling theorem states that the *minimum* rate at which you can sample a signal x(t) is twice the frequency of the highest frequency component of x(t). However, this assumes that the frequency components of the signal above the defined bandwidth are negligible and do not cause any error! Remember that the bandwidth of an analog signal is usually defined as the frequency at which the magnitude is 3 dB below the maximum. That would imply an enormous aliasing error if we were to sample at the *minimum* rate!
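To make the folding concrete, here is a small helper (the function name and structure are my own illustration, not a standard API) that computes where a sampled tone appears to land:

```python
def alias_frequency(f, fs):
    """Return the apparent frequency of a tone at f Hz after sampling at fs Hz.

    The sampled spectrum repeats every fs, and the upper half of each
    repetition folds back into the baseband [0, fs/2].
    """
    f = f % fs             # spectrum repeats every fs
    return min(f, fs - f)  # fold the upper half back down

# The 10 Hz wave sampled 9 times per second from the earlier example:
print(alias_frequency(10, 9))  # 1  -> it looks like a 1 Hz wave
# A tone at or below the Nyquist frequency fs/2 is unaffected:
print(alias_frequency(4, 9))   # 4
```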

### Oversampling

As we saw above, one way to reduce the amount of aliasing error is to increase the sampling frequency. This spreads the repeated frequency spectra further apart in the frequency-domain. This technique is called *oversampling* and is a useful strategy if you are planning on using a digital filter to remove higher-frequency components or perform frequency separation for channel tuning, etc. But then you would still have to sample at more than twice the frequency of *those* components in order to accurately represent them in the discrete signal! Oversampling on its own is not a guaranteed solution!

The fundamental truth is that you cannot get away from the need for analog filtering. You will often see an analog band-pass filter placed prior to the input of an Analog-to-Digital Converter. The purpose of this *anti-aliasing* filter is to limit the maximum *significant* frequency component of the analog signal before it enters the sampling circuit. This allows the sampling circuit to use a defined sampling rate to *reliably* describe the analog signal in discrete form.

### The Need for Analog Filtering

Whenever you are about to sample an analog signal, start by considering the analog signal's bandwidth and how it is defined. If we looked at the human voice speaking normally, we would notice that most of the power lies in the range of 500 Hz to 2000 Hz, and that we could get a clear signal if we limited ourselves to the frequencies between 300 Hz and 3400 Hz.

Ok, great! Let's go ahead and sample at 8 kHz and we will be A-Ok! Right? Wrong.

You have forgotten about the higher frequency components of the voice. They don’t make up a large proportion of the power of the signal, and we don’t need them to understand what is being said. But those higher frequency components will still have enough power to distort the sampled signal if we don’t attenuate them to a truly negligible value!
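To see why, suppose one of those components sits at 5 kHz (my example frequency) and we sample at the 8 kHz rate from above without filtering first:

```python
import numpy as np

fs = 8000.0      # Hz: sampling rate from the example above
f_high = 5000.0  # Hz: an unfiltered high-frequency component

# One second of the 5 kHz tone, sampled at 8 kHz.
t = np.arange(int(fs)) / fs
samples = np.sin(2 * np.pi * f_high * t)

# The samples are indistinguishable from a (phase-inverted) 3 kHz tone,
# i.e. 8000 - 5000 Hz, landing squarely inside the 300-3400 Hz voice band.
alias = -np.sin(2 * np.pi * 3000.0 * t)
print(np.allclose(samples, alias))  # True
```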

Filtering allows us to limit the bandwidth of our signal, attenuating the higher-frequency components that we don't need and preventing them from introducing any significant aliasing error. If we were to pass the original signal x(t) through a low-pass filter, and then sample it with the original sampling period T, we would see the following:

You can see that in this case, because the analog filter has removed the higher-frequency components of the original signal, there is no longer any significant aliasing error in the sampled signal! Now, if we were to apply the technique of oversampling as well, our error would be completely negligible (as shown below).

---

That’s all for now!

Rob