Digital communication systems require received signals to be filtered and amplified before they can be passed to the analog-to-digital converter and demodulated. Similarly, outgoing signals must pass through an analog filter and amplifier before transmission. These components insert additional noise into the transmitted and received signals, degrading the performance and reliability of a communications system.
Noise Factor, Noise Figure and Noise Temperature allow us to characterize the noise performance of these components.
Noise Factor provides a way to measure the additional noise added to a signal as it passes through a component.
If we are looking at a component that amplifies the signal by gain G, then we know that the system will amplify the input noise as well as add additional noise. This is modeled in the diagram below:
An ideal amplifier that adds no additional noise will still amplify the input noise (Pni) by the gain G. The output Noise Power of an ideal component is given by:
A realistic component will insert additional noise (Pna) to the system. We model Pna as entering the component before it is amplified by the gain G. Therefore the output Noise Power (Pno) of a realistic component is given by:
The Noise Factor of the component is defined below:
This is equivalent to looking at the ratio of the SNR of the signal entering a component/system and the SNR of the signal output:
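In symbols, the relationships described above can be sketched as follows (using the quantities defined above: Pni for input noise power, Pna for the component's added noise, and G for gain):

```latex
\begin{align*}
P_{no,\text{ideal}} &= G\,P_{ni} \\
P_{no} &= G\,(P_{ni} + P_{na}) \\
F &= \frac{P_{no}}{G\,P_{ni}}
   = \frac{P_{ni} + P_{na}}{P_{ni}}
   = \frac{\mathrm{SNR}_{in}}{\mathrm{SNR}_{out}}
\end{align*}
```

The last equality follows because the signal power is amplified by G while the output noise is G(Pni + Pna), so the gain cancels in the SNR ratio.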
Understanding Noise Factor
You should be able to draw the following conclusions from the above equations:
The Noise Factor of an ideal system is 1.
The SNR of the input and output signals of an ideal system are equal.
The Noise Factor of a realistic system is always greater than 1.
The output SNR of a real system will always be smaller than the input SNR.
Input Noise Due to Thermal Noise
If we assume that the input noise Pni is purely due to thermal noise (the minimum possible noise level), then we define Pni to be:
where To is the standard operating temperature, 290 kelvin (290 K). It is important to note that when calculating the Noise Factor of a component, we always use To = 290 K for Pni. Similarly, when testing the noise performance of a component, the test is always conducted at this temperature.
The calculation for Noise Factor of a system is thus:
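Substituting Pni = kToB into the Noise Factor definition gives (a sketch consistent with the definitions above):

```latex
F = \frac{P_{no}}{G\,k\,T_0\,B} = 1 + \frac{P_{na}}{k\,T_0\,B}
```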
This will have some implications further on when we discuss Noise Temperature.
Noise Figure is simply the logarithmic scale equivalent of Noise Factor, expressed in decibels (dB).
We can also relate Noise Figure to values of SNR:
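In equation form (my own restatement of the relationships described above):

```latex
NF = 10\log_{10}(F)
   = \mathrm{SNR}_{in}\,(\mathrm{dB}) - \mathrm{SNR}_{out}\,(\mathrm{dB})
```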
Noise Temperature gives us another way to describe how much noise a system adds to a signal. In this case, we look at the total noise performance of the system, and calculate an equivalent temperature (Te) that would yield the same noise power at the output via additional thermal noise. It is important to realize that the noise temperature of a component describes the additional noise that the component inserts onto a signal before it is amplified, as shown in the figure below.
Why Noise Temperature?
This is actually a good question. The reason we use Noise Temperature is that it gives us an easy way to combine the effects of an antenna and a receiver. Antennas are responsible for receiving signals and passing them to a radio receiver where they can be amplified and demodulated. Antennas also receive noise from the environment they are in as part of the received signal. Antenna Temperature defines the amount of noise that can be measured at an antenna’s terminals. Antenna Temperature is not a physical property of the antenna itself, but rather a function of the antenna’s design and the environment it is installed in. I will cover Antenna Temperature in some more detail in a future post!
A Simple Example
Assume that we measure the Noise Density at the output terminals of an arbitrary amplifier with a gain factor of 100 and establish it to be 7×10⁻¹⁹ W/Hz. What is the Noise Temperature (Te) of this system?
The Noise Density at the output is given by:
Te is therefore given by:
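Working the example through numerically (a sketch with my own variable names): the measured output noise density is modeled as No = G·k·(To + Te), so we can solve for Te directly.

```python
k = 1.38064852e-23   # Boltzmann constant, J/K
T0 = 290.0           # standard reference temperature, K
G = 100.0            # amplifier gain (linear)
No_out = 7e-19       # measured output noise density, W/Hz

# Output noise density model: No_out = G * k * (T0 + Te)
Te = No_out / (G * k) - T0
print(round(Te, 1))  # → 217.0 K
```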
Converting To Noise Factor
We can also calculate the Noise Factor from the Noise Temperature relatively easily. Recall that:
We can also convert from Noise Factor to Noise Temperature by making Te the subject of the above formula:
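A sketch of both conversions:

```latex
F = 1 + \frac{T_e}{T_0}, \qquad T_e = (F - 1)\,T_0
```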
I found the following sources of information extremely useful for this article; they also cover things from a slightly different angle and extend the ideas presented above to cover cascaded systems:
In this post I want to discuss Noise Spectral Density.
Noise Spectral Density
Noise Spectral Density, or Noise Density (No), is a measurement of the noise power per hertz. For white noise, which is constant with respect to frequency, we can simply divide the total noise power by the bandwidth of the system. Assuming that thermal noise is the predominant form of noise in our system, recall the formula for thermal noise:
P = kTB
This means that the Noise Density is simply:
No = kT
where: k = Boltzmann constant (1.38064852 × 10⁻²³ joules/kelvin)
T = temperature in kelvin.
At a normal operating temperature of 290 K, the typical Noise Density is just under 4.004×10⁻²¹ W/Hz, or on the decibel scale -173.975 dBm/Hz. This produces a total noise power of -100.96 dBm in a 20 MHz wide channel.
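These figures are easy to reproduce with a few lines of code (a minimal sketch):

```python
import math

k = 1.38064852e-23   # Boltzmann constant, J/K
T = 290.0            # standard temperature, K
B = 20e6             # channel bandwidth, Hz (20 MHz Wi-Fi channel)

No = k * T                                  # noise density, W/Hz
No_dBm_per_Hz = 10 * math.log10(No / 1e-3)  # convert W/Hz to dBm/Hz
P_dBm = No_dBm_per_Hz + 10 * math.log10(B)  # total noise power in bandwidth B

print(round(No_dBm_per_Hz, 3))  # → -173.975
print(round(P_dBm, 2))          # → -100.96
```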
Other Types of Noise
What if you are calculating the noise density for a type of noise that is NOT constant with frequency, for instance grey noise? In this case, the noise power and noise density would both be functions of frequency. The noise density is the derivative of the noise power with respect to frequency, and the total noise power is the integral of the noise density with respect to frequency.
Thankfully, most noise types in communications systems can be approximated as white noise, and we can leave out the calculus for now!
Here is a great video from the guys at Analog Devices that explains how to convert Noise Spectral Density to RMS Noise and the assumptions we should be aware of!
The first major contributor to noise inside electronic components comes in the form of Thermal or Johnson-Nyquist Noise. This noise is present even when there is no current actually passing through any component. It is present even when the device is turned off!
Thermal noise is caused by the random movements of electrons inside resistive electrical components. A perfect capacitor or perfect inductor should exhibit no thermal noise as they have no resistance. If we add up the random movements of all of the electrons, the net result does not sum to zero. In fact, at any given time, we will find a net movement of charge (a net electrical current) in some direction through the component.
As the temperature of the electrical component is increased, the electrons gain more kinetic energy and the energy of their random movements increases resulting in a higher net movement of charge and a higher noise level! Johnson-Nyquist Noise is independent of frequency and can usually be modeled as white noise.
Thermal Noise does not account for ALL of the noise in a system. Rather, it represents the minimum amount of noise that will be found in an ideal system. There are many other sources of noise present in electronic and optical communication systems that must be accounted for!
Before we can go any further, we have to look at something called the Boltzmann constant. This physical constant is the result of dividing the gas constant R by Avogadro’s constant NA, and it defines the linear ratio between the average kinetic energy of the particles in a gas and the temperature of the gas. If the temperature of the gas increases, the average kinetic energy of the particles increases by a linearly proportional amount!
“What on earth are we talking about gases for?” you might ask. As it turns out, the electrons inside a metallic conductor can be modeled as a gas! The Boltzmann constant is defined in joules per kelvin and has a value of:
k = 1.38064852 × 10⁻²³ joules/kelvin
Going very much further into this topic is not for the faint of physics and is really beyond the scope of this post.
Calculating Thermal Noise:
The video above has a great description of where thermal noise comes from and how we derive the formula:
P = kTB
P = Thermal Noise Power in watts (commonly converted to dBm, decibels relative to one milliwatt)
k = Boltzmann Constant
T = Temperature in kelvin (0 °C = 273.15 K)
B = System Bandwidth in Hz
Remember that some systems do not use very high order filters. This means that when we are looking at a band-limited system, it may be necessary to take into account the additional noise power introduced by noise in the transition bands of the filters. The adjusted bandwidth that accounts for this is referred to as the “Equivalent Noise Bandwidth”.
Thankfully, the filters used in modern digital communication systems are generally designed to have very small transition bands for the sake of spectral efficiency, and so we can generally treat this contribution as negligible.
Calculate the expected thermal noise power for a Wi-Fi receiver, using a 20 MHz wide channel at a temperature of 300 K.
Do the same calculation for the same Wi-Fi receiver, but using a 160MHz wide channel.
Do the same calculation for a standard GSM channel of 200kHz
Do the same for a LoRa channel (125 kHz) and a Sigfox channel (100 Hz).
Completing the above calculations should tell you something about the Noise floor for each of these Radio technologies.
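If you would like to check your working, a short script (my own sketch; note that a standard LoRa channel is 125 kHz wide) computes the thermal noise floor for each case:

```python
import math

k = 1.38064852e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(T, B):
    """Thermal noise power P = kTB, expressed in dBm."""
    return 10 * math.log10(k * T * B / 1e-3)

# (technology, temperature in K, bandwidth in Hz)
channels = [
    ("Wi-Fi 20 MHz",  300.0, 20e6),
    ("Wi-Fi 160 MHz", 300.0, 160e6),
    ("GSM 200 kHz",   300.0, 200e3),
    ("LoRa 125 kHz",  300.0, 125e3),
    ("Sigfox 100 Hz", 300.0, 100.0),
]

for name, T, B in channels:
    print(f"{name}: {thermal_noise_dbm(T, B):.2f} dBm")
```

Notice how dramatically the noise floor drops as the channel narrows; this is a large part of why ultra-narrow-band technologies like Sigfox achieve such long range at low power.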
Shot noise is a result of the fluctuation in the rate of flow of individual electrons/photons in a system. Shot noise can only exist if there is an electrical current flowing in a device or, in the case of an optical sensor, if there is a stream of photons arriving at the detector. I really like the analogy to a series of raindrops falling on a tin roof given by Frank Rice in the American Journal of Physics. The intervals between the arrivals of the raindrops are random, so the total water flowing onto the tin roof actually fluctuates around an average rate!
Shot noise can be naturally suppressed in some electronic components. As noted in the article above, electrons tend to repel each other and obey the Pauli Exclusion principle. This implies that it is unlikely to get large groups of electrons moving together through a system, and thus the noise currents due to shot noise will be naturally limited. It is worthwhile to note from the article above, that photons do not have the same repulsive effect on each other and correlations between their movements can cause much higher shot noise.
One of the supposedly simple things that plagues me in communications theory is the idea of noise. We are all very comfortable talking about noise. We refer to noise, the noise floor and interference all the time quite glibly. Yet I must admit, I have always felt like I never truly understood the topic.
In this post I want to take a deeper look at noise: its definition, where it comes from, its characteristics and how to measure it in modern communications systems.
What is Noise?
Looking this up online, I found a wonderful definition (thanks Google!) that made some sense:
noise (noun) technical Irregular fluctuations that accompany a transmitted electrical signal but are not part of it and tend to obscure it.
Noise is defined as the deviation from an ideal signal, and is usually associated with random processes. By definition it corrupts the information content and fidelity of the signal, particularly at low levels.
Where Does Noise Occur?
Noise exists in all forms of modern electronic and optical communications systems. Noise occurs as a result of the way electrons and photons behave in different physical media. The electrons that power our electronic devices and the photons that traverse fiber-optic communication systems exhibit different random behaviors as they travel through these media. We define each of these phenomena as a different source of noise. In some situations a certain source of noise may be dominant due to a single physical phenomenon. In other situations the noise could be dominated by another factor, or a mix of factors!
Noise vs Interference
I feel it is important to clearly differentiate between noise and interference, as the terms are used separately in communications systems. For example, consider Signal to Noise Ratio (SNR) vs Signal to Interference plus Noise Ratio (SINR). If you have ever wondered about the difference between the two terms, you may find some illumination below.
I have looked around for a good definition of the difference between noise and interference that I can quote, and I have found nothing that satisfies my need for a generalized but precise definition. I will therefore go ahead and say the following:
Interference is typically a deterministic signal (or sum of deterministic signals) that is transmitted on a specific set of frequencies that disrupts a communication signal on the same frequency. A good example of this would be multiple competing radios transmitting messages simultaneously on the same frequency in the same location. Another example of a source of interference would be a wide-band signal jammer designed to disrupt wireless communications. Interference typically comes from specific, external sources (i.e other transmitting devices) and only exists on specific frequencies. It can also be temporary, like intermittent interference caused by the duty cycle of a certain signal or transmitter.
Noise is the result of random processes that cause fluctuations in electronic signals and is produced by the physical operation of electronic/optical components and circuits. Noise can be modeled as a random process with a certain probability density function. A good example of this would be thermal noise or shot noise present in electronic equipment. Noise typically comes from inside radio/electronic equipment and you cannot move away from it and you cannot turn it off! Some forms of noise like thermal noise are completely inescapable, and cannot be reduced or removed.
Colors of Noise
One way of classifying noise signals is to look at the signal’s power as a function of frequency (called the power spectrum of the signal). Noise signals are assigned colors based on the power level as frequency increases.
Consider a noise signal that has constant power with respect to frequency. This means that the noise signal has an equal amount of power between 0–10 Hz, 10–20 Hz, 100–110 Hz, 2010–2020 Hz and so on. If you drew a graph of power against frequency, the average power would be a flat line. This is termed white noise. Most forms of electronic noise can be modeled as white noise as they maintain a roughly constant power level throughout the device’s band of operation.
Other Colors of Noise
Other colors of noise are defined by how the noise power changes as a function of frequency. For instance, pink noise loses power at a constant rate of 10 dB per decade of frequency. Pink noise is actually constant on a logarithmic scale, i.e. there is the same amount of power between 40–60 Hz as there is between 400–600 Hz and 4000–6000 Hz.
Brown noise decreases faster, at a rate of -20 dB/decade. Blue noise, on the other hand, INCREASES with frequency at +10 dB/decade.
If you want to read about the other colors of noise and which physical phenomena they are found in, read the wikipedia article and check out their references. If you want to play with a noise generator and hear what different noise colors sound like, check out the web based noise generator at White Noise & Co.
One of the most commonly misunderstood aspects of signal processing is the Nyquist-Shannon Sampling Theorem. It has been noted before that many engineers misunderstand the theorem, selecting sampling frequencies that are often too low, causing errors in systems design. An understanding of the underlying theory behind signal sampling is also useful to the communications engineer who wants a deeper understanding of the application of analog and digital filtering and information processing in communications equipment.
The Need for Sampling
Sampling analog, continuous, time-domain signals is an unavoidable step in enabling digital computers to represent data. Think about this quickly: if we want to describe a signal on a digital computer, we have to represent the signal as a series of discrete values. Even if we use a formula to generate the signal values, the computer still stores a separate output value for each input value. A fundamental question that we must answer is: given a continuous, time-domain signal, how many discrete values do we require to accurately describe it?
Here is an example of how sampling a signal can go wrong. In this example, I have a wave with a frequency of 10Hz being sampled 9 times per second. Look at the shape of the wave that the samples are actually describing. It looks like a much lower frequency wave with a frequency of 1 Hz! You can see that the samples have not sufficiently described the waveform and we are missing information!
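To see why the samples describe a 1 Hz wave, we can compute the aliased (folded) frequency directly. This is my own sketch, not code from the original post: a tone at f Hz sampled at fs samples per second appears at its distance from the nearest multiple of fs.

```python
def alias_frequency(f, fs):
    """Apparent (aliased) frequency, in Hz, of a tone at f Hz
    when sampled at fs samples per second."""
    # The sampled spectrum repeats every fs Hz, so the tone appears
    # at its distance from the nearest multiple of fs.
    return abs(f - round(f / fs) * fs)

print(alias_frequency(10, 9))  # → 1  (a 10 Hz wave sampled at 9 S/s looks like 1 Hz)
```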
The Periodic Impulse Signal
Sampling, in theory, means that we take an instantaneous reading of a continuous signal at specific intervals. The frequency at which we take the readings is the inverse of the time between samples. The act of sampling a continuous time-domain signal is effectively the same as multiplying the continuous time-domain signal by 1 at the instant each sample is taken, and by zero at all other times.
We can describe the act of sampling itself as a pulse train of unit impulse functions that occur at the time each sample is taken, as shown below. This is also referred to as a comb function.
n is an integer from -∞ to ∞
T is the period between samples, in seconds.
a is the offset from zero, in seconds.
The Fourier transform of the time-domain comb function above is simply another comb function in the frequency-domain with the characteristics shown in the image below. Remember that T is the period between time-domain samples.
Sampled (Discrete Time) Signal
We can describe a discrete time-domain signal as the product of a continuous time-domain signal and an impulse train as below:
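These expressions can be sketched as follows (taking the comb offset a = 0 for simplicity):

```latex
\begin{align*}
s(t) &= \sum_{n=-\infty}^{\infty} \delta(t - nT) \\
S(\omega) &= \frac{2\pi}{T} \sum_{k=-\infty}^{\infty}
             \delta\!\left(\omega - \frac{2\pi k}{T}\right) \\
x_s(t) &= x(t)\,s(t)
        = \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT)
\end{align*}
```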
Fourier Transform and Spectrum
The Fourier transform of a discrete time signal is shown below:
The way to think about the above is that you are taking the Fourier transform of the continuous time-domain signal x(t) at every location where the time-domain comb function is non-zero. Another way to think about the above integral is to recall that multiplication in the time-domain is equivalent to convolution in the frequency-domain and vice versa. Therefore we can also write:
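Sketching that convolution statement (with X(ω) the transform of x(t) and S(ω) the frequency-domain comb from earlier):

```latex
X_s(\omega) = \frac{1}{2\pi}\,\bigl(X * S\bigr)(\omega)
            = \frac{1}{T} \sum_{k=-\infty}^{\infty}
              X\!\left(\omega - \frac{2\pi k}{T}\right)
```

In words: sampling with period T produces copies of the original spectrum repeated every 2π/T rad/s, which is exactly the repetition discussed below.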
Let’s start by considering the arbitrary frequency-domain representation of a continuous, time-domain signal x(t) below:
From what we know about convolution, we can see that the convolution of the frequency-domain representation of the signal x(t) and the frequency-domain representation of the impulse train would be the following:
In the image above you will notice that the bandwidth of the continuous time-domain signal x(t) easily extends beyond the frequency ±4π/T rad/s. When we sample the continuous signal with a period T, we see that the spectrum of the signal repeats itself every 2π/T rad/s. This repetitive component introduced by the act of sampling causes the frequency components of our signal to be distorted! By looking at the image you can see the areas of overlap. For instance, at π/T rad/s, you can see that the magnitude of the frequency component of the sampled signal is now the sum of the original and all of the repeated spectra! This means that the magnitude of this frequency component is now more than double what it used to be! This will have drastic effects on the shape of the sampled time-domain signal.
So how can we fix it? Well, what if we doubled our sampling frequency and used a period between samples of 0.5T? The resulting frequency-domain representation of the discrete time-domain signal is shown below:
We can see in the image above that the aliasing error introduced is now considerably smaller than it was before and the frequency components of the signal are not as greatly affected. This translates into less distortion of the discrete time-domain signal x[n] in comparison to the continuous time-domain signal x(t).
Applying Nyquist Sampling Rate.
The Nyquist-Shannon sampling theorem states that the minimum rate at which you can sample a signal x(t) is twice the frequency of the maximum frequency component of x(t). However, this assumes that the frequency components of the signal above the defined bandwidth are negligible and do not cause any error! Remember that the bandwidth of analog signals is conventionally defined as the frequency at which the magnitude is 3 dB below the maximum. That would imply an enormous aliasing error if we were to sample at the minimum rate!
As we saw above, one way to reduce the amount of aliasing error is to increase the frequency of sampling. This spreads the repeated frequency spectra further apart in the frequency-domain. This technique is called oversampling and is a useful strategy if you are planning on using a digital filter to remove any higher order frequency components or perform frequency separation for channel tuning etc. But then you would still have to sample higher than those components in order to accurately represent them in the discrete signal! Oversampling on its own is not a guaranteed solution!
The fundamental truth is, you cannot get away from the need for analog filtering. You will often see an analog, band-pass filter placed prior to the input to an Analog to Digital Converter. The purpose of this filter is to limit the maximum significant frequency component of the analog signal, prior to entering the sampling circuit. This allows the sampling circuit to use a defined sampling rate to reliably describe the analog signal in a discrete form.
The Need for Analog Filtering
Whenever you are about to sample an analog signal, start by considering the analog signal’s bandwidth and how that is defined. If we looked at the human voice speaking normally, we would notice that most of the power is in the range of 500 to 2000 Hz, and that we could get a clear signal if we limited ourselves to the frequencies between 300 and 3400 Hz.
Ok, great! Let’s go ahead and sample at 8 kHz and we will be A-Ok! Right? Wrong.
You have forgotten about the higher frequency components of the voice. They don’t make up a large proportion of the power of the signal, and we don’t need them to understand what is being said. But those higher frequency components will still have enough power to distort the sampled signal if we don’t attenuate them to a truly negligible value!
Filtering allows us to limit the bandwidth of our signal, attenuating the unnecessary higher order frequency components that we don’t need, preventing them from introducing any significant aliasing error. If we were to pass the original signal x(t) through a low pass filter, and then sample it with the original sampling period T, we would see the following:
You can see that in this case, because the analog filter has removed the higher order frequency components of the original signal, there is no longer any significant aliasing error in the sampled signal! Now, if we were to apply the technique of oversampling, our error would be completely negligible (as shown below)
The term Passband signal in the current context refers to the modulated signal that results from a baseband signal modulating a carrier wave. Passband signals have some interesting characteristics that we will cover by referring to the diagrams below. (Disclaimer: illustrative purposes only).
Properties of Passband Signals
Shifted Frequency Response
The complete frequency response (including both positive frequencies and negative frequencies) of the baseband signal is preserved in the passband signal, but is now centered around the positive and negative frequencies of the modulated carrier wave. We say that the baseband frequency response has been “moved up” from 0 Hz to the frequency of the carrier.
If we were to describe the bandwidth of the real baseband signal in the first image, we would simply describe it by its positive frequency components (in green); we don’t include the negative frequency components. Think about the voice signal in a narrow-band digital phone: we measure the bandwidth from 0 Hz to the maximum significant frequency component, so its bandwidth is 3400 Hz. In comparison, the bandwidth of the passband signal is measured from its smallest significant frequency component to its largest significant frequency component. This value is double that of the original baseband signal bandwidth. This is a common phenomenon in almost all forms of wireless communications and modulation types involving real signals.
There is an analog amplitude modulation variant called single-sideband modulation that conditions the input and output signals of the modulator, using mixers and filters respectively, to eliminate the frequency doubling effect. I have never encountered something like this in digital communications technologies.
As shown in the second image, complex baseband signals do not have a symmetrical frequency response around the 0 Hz mark. As a result, when they are used to modulate a complex carrier, you will simply see the complete frequency response shifted up to the carrier frequency. There is no guarantee we will see the same “bandwidth doubling” effect that we do with the symmetrical frequency response of a real valued signal. That said, the same mechanics are involved that shift the frequency response to the positive and negative carrier frequencies!
Symmetry around 0 Hz.
The real frequency components of a passband signal are symmetrical around the 0 Hz mark. That means that the real frequency components at negative frequencies are equal to the real frequency components at positive frequencies for the passband signal. We will not concern ourselves with the imaginary frequency components that occur when dealing with complex signals, as our focus is on communications signals, which are predominantly real valued.
If you conduct a Fourier transform on the modulated carrier signal, you will see the real, negative-frequency components of the modulated signal centered around -fc. You will notice that these are the perfect mirror image of the real, positive-frequency components centered around fc. If you look at a real-world version of the same signal through a spectrum analyzer, you will see only the positive-frequency components.
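This conjugate symmetry is easy to verify numerically. Here is a minimal sketch using NumPy (the 100 Hz tone and the sample rate are arbitrary choices of mine, not values from the diagrams):

```python
import numpy as np

fs = 1000                          # sample rate in Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)        # one second of sample instants
x = np.cos(2 * np.pi * 100 * t)    # a real-valued 100 Hz tone

X = np.fft.fft(x)

# For a real-valued signal, each negative-frequency component is the
# complex conjugate of the matching positive-frequency component,
# so the magnitudes mirror each other around 0 Hz.
assert np.allclose(X[1:], np.conj(X[-1:0:-1]))
print(abs(X[100]), abs(X[-100]))   # equal magnitudes at +100 Hz and -100 Hz
```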
Signal filtering plays a fundamental role in electronics and communications. Filters modify specific frequency components of time-domain signals and are used as a tool for signal quality improvement, information recovery and frequency separation. As a fundamental frequency-domain tool, and as components in electronic circuits and digital signal processing, filters allow us to:
Isolate circuits from DC (0Hz) currents.
Suppress high and low frequency noise in received signals.
Separate the frequency components of received signals for further processing and analysis.
There are many different analog and digital filter designs, with varying implementations and transfer functions. However, the general idea of a filter is that its transfer function should attenuate the magnitude of specific frequency components of a signal, or introduce a known phase-delay to specific frequency components whilst leaving other frequency components of the signal unchanged. Typically, in the communications industry, we are mostly interested in the amplitude-frequency effects of a filter.
The ideal filter
An ideal filter multiplies the passband frequency components by 1 (i.e. does not change them in any way), and attenuates the noise (i.e. signal we don’t care about) in the stopband frequencies by an infinite amount. The transition from passband to stopband for an ideal filter is instantaneous. The frequency response of such a filter is shown below on the left; the corresponding time-domain impulse response of the filter is shown on the right. Of course, this description is of an idealised sinc filter, and it is not practically realizable.
There are also some things that even idealized filters cannot do. A filter cannot remove common mode or differential mode disturbances and interference in the passband. That means if someone else is using the same frequencies as you are, there isn’t much that can be done to remove their interference!
Filter Response Types:
Low Pass Filters
A low-pass filter allows all of the frequencies below the cut-off frequency to pass through it and attenuates the higher frequency components of a signal. As the frequency increases, so the amount of attenuation increases. Low pass filters are useful in suppressing high frequency noise and limiting the bandwidth of analog signals.
High Pass Filters
A high-pass filter allows all of the frequencies above a certain value to pass through it and it attenuates the lower frequency components of a signal. As the frequency decreases towards zero so the amount of attenuation increases. High pass filters are useful for isolating equipment from DC currents and also from low frequency noise sources like AC power signals at 50Hz, or in the case of old telephone systems, the 20Hz ringing signal.
Band Pass Filters
A band-pass filter is constructed from the combination of a high-pass filter with a low cut-off frequency and a low-pass filter with a higher cut-off frequency. Band-pass filters find widespread use in the RF front end of telecommunications equipment, predominantly in limiting the power of transmissions to a specific frequency band and also in eliminating out-of-band noise from received signals. Another increasingly common use of analog band-pass filters in telecommunications is in co-location or co-existence filters inside radio equipment or handsets. Modern handsets have a slew of different radios simultaneously operating at different frequency bands and access technologies. Ensuring that the radios can peacefully co-exist in such close quarters without degrading each others’ performance is a major design challenge!
Band-Stop / Rejection Filter
Band-stop or rejection filters work in exactly the opposite way to band-pass filters. They are constructed by placing a high-pass filter with a high cut-off frequency in parallel with a low-pass filter with a low cut-off frequency, so that each branch passes the frequencies the other rejects. Band rejection filters are useful for eliminating interference on specific frequencies.
Digital & Analog Filters
Filters can be implemented for analog or digital signals: analog filters are built from physical components and operate on continuous time-domain signals, while digital filters are implemented in digital signal processing and operate on sampled, digital information.
Analog filters can be implemented in various forms depending on the application:
Passive electronic filters consisting of Resistors, Inductors and Capacitors.
Active electronic filters that use amplifiers; these are very common.
Surface Acoustic Wave (SAW) filters, often used at the intermediate frequency of superheterodyne receivers in radios and television sets.
Cavity filters are mechanical boxes with a specific geometry that enables high-fidelity filtering of high power microwave signals.
Analog Filters are usually constructed out of a physical circuit and operate on analog, continuous time-domain signals. Analog Filters play an extremely important role in communications, especially as an important step in signal conditioning prior to entering an analog to digital converter. Analog filters also used to have a role to play in pulse shaping for older modulation types such as spread spectrum technologies (that require a high symbol rate).
Digital filters have several key advantages over analog filters: they are not affected by tolerances in component values, manufacturing processes, temperature differences or aging. The performance of digital filters is also vastly superior to that of analog filters, achieving much higher stop band rejection, smaller transition bands, low passband distortion and linear or even zero phase delay!
Digital filters are useful for processing almost any form of digital information! In communications equipment, digital filtering is used for conditioning digital signals prior to modulation or after being converted from analog to digital values. Digital filters find applications in pulse shaping and can also be used to remove higher frequency noise components from over sampled digital signals. This whitepaper and tutorial details an example of using oversampling and a digital filter for pulse shaping of a transmitted digital signal to enhance spectral efficiency!
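As a minimal, hypothetical sketch of that noise-removal use case (the tone frequency, tap count and sample rate are illustrative, not drawn from any particular system), a simple moving-average FIR filter can suppress broadband noise riding on an oversampled signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# An oversampled 5 Hz tone sampled at 1 kHz, plus broadband noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

# A 21-tap moving-average FIR filter: a crude low-pass that
# attenuates the high-frequency noise while passing the 5 Hz tone.
taps = np.ones(21) / 21
filtered = np.convolve(noisy, taps, mode="same")

# The filtered signal is much closer to the clean tone.
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((filtered - clean) ** 2)
print(err_before, err_after)
```

A moving average is the bluntest possible FIR design; real pulse-shaping filters (e.g. root-raised-cosine) use carefully chosen tap values, but the mechanism — a weighted sum of recent samples — is the same.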
Properties of Filters
Pass Band & Stop Band
The pass band collectively refers to the range of frequencies that a filter allows through it. The stop band collectively describes the range of frequencies that are sufficiently attenuated by the filter for us to ignore. The amount of attenuation required in the stop band is called the stop band attenuation. The frequencies at which the passband stops are called the cut-off or edge frequencies. The cut-off frequencies in analog filters are widely accepted to be the frequencies at which the amplitude of the frequency response is attenuated by -3dB. Digital filters are less standardized; the attenuation level that determines the cut-off frequency is usually specified, with common values being 99%, 90%, 70.7% and 50%.
The transition band refers to the range of frequencies between the pass band and stop band that are not sufficiently attenuated by the filter for us to ignore. All practical filters have a finite rate at which they can transition from the passband to the stop band. Some filter implementations are capable of achieving very high frequency roll-off, minimizing the size of the transition band. Some digital filters are capable of roll-off rates as high as -36dB/Hz!
Passband & Stop Band Ripple
Some filter implementations like the Chebyshev and Elliptical filters can introduce a “ripple” in the passband and/or the stop band of the signal, causing the signal to be distorted. The maximum tolerable Passband ripple of a filter is generally specified in the design requirements.
Phase-Delay & Phase-Response
The phase delay measures the amount by which a single frequency component is delayed when traveling through the filter. This short time delay has the effect of delaying the phase of the sinusoidal wave relative to where we were expecting it.
Quick Note: It is important to realize that phase-delay is actually dependent on units of time and is converted to an angular measurement by multiplying by the frequency. Thus, for a constant time delay through a system, as the frequency increases, so the phase-delay angle will increase too! You can prove this in your head by thinking of a wave of 1 Hz going through a system with a time delay of 0.25 seconds. We know that the phase will be shifted back by -90 degrees or π/2 radians. Imagine we had a wave of 2Hz going through the same system. The same time delay results in a phase-delay of -180 degrees or π radians!
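The thought experiment above can be checked in a few lines (`phase_delay_deg` is a hypothetical helper name, not a standard function):

```python
def phase_delay_deg(time_delay_s: float, freq_hz: float) -> float:
    """Phase shift (in degrees) caused by a pure time delay at one frequency.

    One full cycle (360 degrees) at frequency f takes 1/f seconds,
    so a delay of tau seconds corresponds to -360 * f * tau degrees.
    """
    return -360.0 * freq_hz * time_delay_s

# A 0.25 s delay shifts a 1 Hz wave by -90 degrees...
print(phase_delay_deg(0.25, 1.0))  # -90.0
# ...but the same delay shifts a 2 Hz wave by -180 degrees.
print(phase_delay_deg(0.25, 2.0))  # -180.0
```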
The phase-delay as a function of frequency is shown as the phase-response of a filter. Here is an image of a phase response:
Filters can be designed to have zero phase, linear phase or non-linear phase responses. A zero phase response system is one that does not change the phase of the signal at all, which implies that it introduces no delay to the time domain signal at any frequency. A linear phase response system is one that introduces a constant delay to all frequencies, like in the thought experiment above.
Obviously, if designing a control system or time-delay sensitive system, we would prefer zero phase response (zero delay) in our signal, which would imply instantaneous measurement or control, but often we have to settle for a linear phase-response if we are dealing with real-time systems.
Systems with a non-linear phase response have a phase-response that changes with frequency! A non-linear phase response can cause distortion of the time domain signal as different frequency components will now arrive at their peak amplitude at different times relative to each other. This kind of distortion either speeds up or slows down the rate of change of a time domain signal and is referred to as ringing. Non-linear phase response is a concern in systems design where accurate replication of the time domain signal is a key requirement such as digital receivers (another reference here).
Typically, digital filters can be designed with a zero or perfectly linear phase-response, so this is not an issue; unfortunately, physically realizable analog filters have much poorer performance in this regard!
Group delay is defined as the derivative of the phase-response with respect to frequency and has units of time. Group delay is also a measure of the non-linearity of the phase-response of a system. A linear phase response system will have a constant group-delay. A highly non-linear phase response will have a rapidly changing group-delay!
To think of group delay, remember the following:
Phase delays are caused by time delays in the system.
Phase-delay is calculated from time-delay by multiplying by the frequency of interest. You could say that phase-delay is measured in cycles (Hz × s) or, in angular terms, in radians (rad/s × s = rad).
If everything has the same time delay, then the phase-delay will be linear with respect to frequency, and the gradient of the line will be equal to the time-delay of the system (a negative gradient indicates a time-delay).
If we take the derivative of phase-delay with respect to frequency, we will simply get the time delay of the system!
So, group-delay is actually just a measure of the time delay of the entire system with respect to frequency. Imagine a square pulse arrives at the input of the system. Group delay describes how each frequency component of that square pulse will be delayed through the system!
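Sketching the points above numerically (`tau` is an arbitrary illustrative delay): a pure time delay produces a linear phase response, and differentiating that phase with respect to angular frequency recovers a constant group delay equal to the time delay itself:

```python
import numpy as np

# For a pure time delay tau, the phase response is phi(f) = -2*pi*f*tau.
tau = 0.25                        # seconds (hypothetical system delay)
f = np.linspace(0.0, 100.0, 1001) # frequency axis in Hz
phi = -2 * np.pi * f * tau        # phase response in radians

# Group delay is defined as -d(phi)/d(omega), where omega = 2*pi*f.
# A numerical derivative of the linear phase gives a constant value:
group_delay = -np.gradient(phi, 2 * np.pi * f)
print(group_delay[:3])            # every entry is ~0.25 s
```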
Group delay is a useful way to evaluate the phase linearity of a filter design, and that is the logical way to think about it. Here is a picture of a Chebyshev filter showing the frequency-response and group-delay.
Quality-factor is actually not a term used specifically for filter design; it appears in many applications in engineering and physics, including antennas and other forms of resonant systems. The Q factor of a resonator or oscillator is the ratio of its central frequency to the bandwidth over which it works. For example, if we build a filter with a central frequency of 1850 Hz and a total bandwidth of 3100 Hz, the Q factor of that filter would be approximately 0.6.
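The arithmetic in that example works out as follows:

```python
# Q factor = central frequency / bandwidth (the example's numbers).
f_center = 1850.0   # Hz
bandwidth = 3100.0  # Hz
q = f_center / bandwidth
print(round(q, 2))  # prints 0.6 - a very low-Q, wide-band filter
```

For contrast, a narrow notch 10 Hz wide centred on the same 1850 Hz would have Q = 185, which is why high-Q is shorthand for "very selective".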
All filters regardless of their type, digital or analog, will introduce some form of loss to the passband signal. Insertion loss in telecommunications refers to the loss incurred by inserting a device into the path of the signal. A good reference discussing the sources of insertion loss can be found here.
You should also be aware that the transducer you are using to create the analog signal itself also has a frequency range of operation and will also filter out frequencies outside of its own range. The picture below is the frequency response chart of a Shure SM57 microphone. As you can see from the chart it is much less sensitive to lower frequencies and much more sensitive to higher frequencies in the audible range. This means that the microphone will distort the original signal by attenuating lower frequencies below 200 Hz and amplifying frequencies above 2kHz. You can read more on microphones and their response charts here.
Loss of Information
Whenever we use a transducer to create an analog signal, or a filter to limit the bandwidth of a signal, we must always accept that we are distorting the original input and losing information. The question, however, is how much information is an acceptable amount to lose, and do we care about the information we are losing? For instance, when capturing the sound of the human voice, most of the energy is concentrated within the band of 200 to 2000 Hz. Band-limiting the signal to 3400 Hz will result in a band-limited baseband signal that still allows people to communicate clearly over a digital phone. Similarly, audio destined for high-quality musical playback can be band-limited to 20 Hz – 20 kHz because we cannot hear any of the higher or lower frequencies and it makes no discernible difference to us! The same can be said of images and the color gamut that can be captured by a camera, supported by a video codec or displayed by a monitor!
Here are some great resources I found on the topic of both Analog and Digital Filters.
The difference between the baseband signal and the passband signal in communications is really quite a simple one. The baseband signal refers to any signal that has not modulated a carrier waveform.
NOTE: The use of the verb “modulated” there may have made you think twice. If so, you are not alone. I always used to think that the carrier waveform was the object that acted on our baseband signal. This is the wrong way to think of it. The process of modulation is actually the process by which our baseband signal modifies the carrier waveform to create a modulated, passband signal! Thus we actually say that the baseband signal modulates the carrier waveform.
The important thing to realize is that a baseband signal can be an analog signal, a pulse code modulated signal (also actually analog really), or it can be digital information. Provided that the signal has not been used to modulate a high frequency carrier waveform, it is still considered to be baseband. Let’s go back to our model of a wireless communications system to understand where we may find baseband signals:
Properties of Baseband Signals
The term Baseband is used due to the fact the signal has a frequency component that starts close to 0 Hz relative to the carrier wave’s frequency. Baseband signals have a defined bandwidth starting at a frequency greater than or equal to 0 Hz and ending at the highest non-negligible frequency component of the signal. And now that I have said that, and it made some sense, let me immediately seem to contradict myself.
It is important to note that there is no such thing as a practical time-domain signal with a finite bandwidth as I just described in the previous sentence. If you were to see a practical time-domain signal with a truly finite bandwidth, it would have to continue on and on and on forever! These kinds of signals exist in theory only.
The truth is that every practical time-domain signal we deal with has an infinite number of frequency components because it has to be limited in time for us to capture it! You will see time-domain signals with most of their power concentrated in a certain set of frequencies and negligible power in higher or lower bands. But they will always have some power in higher frequency components. You can attenuate the contribution of these frequency components with minimal error, but the total bandwidth of the signal will never be truly finite.
The picture below, showing the benefits of a wide-band voice codec, actually illustrates the point very nicely (even though they stop counting at only 7 kHz or so, which makes it less than awe-inspiring). You can see how the energy in the voice signal above 4 kHz is much less than that below 4 kHz. We can also apply the narrow-band filter of the digital telephone (shown in blue) to attenuate frequencies higher than 3400 Hz and lower than 300 Hz so that their contribution becomes negligible and we can minimize any errors that could be caused by sampling at 8 kHz. However, even the filtered signal (shown in blue) still does not have a truly finite bandwidth; its higher frequency components are just small enough for us to ignore.
The fact that all practical time-domain signals have infinite bandwidth has some critical implications for sampling time domain signals (read here).
Baseband signals have a specific frequency response that describes the magnitude of each frequency component. Generally the value of each frequency component can be positive (add a sinusoidal wave of a given amplitude at some frequency) or negative (subtract a sinusoidal wave of given amplitude at some frequency).
Baseband signals don’t only have components with positive-frequencies. They also have components that operate at negative-frequencies. When I said that a baseband signal “starts close to 0 Hz relative to the carrier wave’s frequency” I omitted to mention that the baseband signal’s frequency-domain representation is in fact centered around the 0 frequency mark.
If you want to understand where these negative frequencies come from, I would suggest reading more about the Fourier Transform, Euler’s identity, and the complex sinusoid. Effectively, what we need to understand is that when we look at the frequency-domain representation of any real-numbered, time-domain signal, we will end up with negative frequency components that have the exact same values as their corresponding positive frequency components, i.e. the frequency response of a real, time-domain signal is symmetrical around the 0 Hz mark.
Here are some real, time-domain, baseband signals and their frequency-domain representations.
Real vs Complex time-domain signals
Most engineering problems deal with real time-domain signals only. Real time-domain signals have real frequency components that are symmetrical around the 0 Hz mark, while the imaginary frequency contributions in the positive and negative frequency bands cancel each other out! The above pictures show only the real frequency components.
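This symmetry is easy to verify with an FFT of any real signal (the signal here is arbitrary random data; the property holds for every real input):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(64)   # an arbitrary real time-domain signal
X = np.fft.fft(x)

# For a real signal, the component at -k (stored at index N-k) is the
# complex conjugate of the component at +k: real parts are symmetric,
# imaginary parts antisymmetric, around the 0 Hz bin.
for k in range(1, 32):
    assert np.isclose(X[-k], np.conj(X[k]))
print("negative-frequency bins mirror the positive ones")
```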
Complex signals by comparison, (i.e time-domain signals that have both real and imaginary number components) have real and imaginary frequency components that are not symmetrical around the 0Hz mark and do not neatly cancel each other out. If you really want to read about it, go here.
Many physical objects have a frequency range over which they perform most of their work. Your ears for instance, can generally only hear frequencies between 20Hz and 20 kHz. Your own voice when speaking normally, concentrates most power in the range of 500 to 2000Hz. You can think of these as the bandwidth of their operation. Other real world sources have their own bandwidths of operation. A tuning fork has a very narrow bandwidth tuned to a single note. An organ pipe, or harp string is tuned to only release a specific frequency and its higher order harmonics. A Hi-Fi amplifier has to have a very wide operating bandwidth to allow it to amplify your music uniformly across all frequencies in the audible range. The loud-speakers in your car that turn the analog signal into sound waves are usually designed to work only on low, medium or higher frequencies in the audible range.
Your Wi-Fi modem has to operate equally well across a wide range of electromagnetic frequencies in the 2.4 GHz and 5 GHz bands of the electromagnetic spectrum. The sun generates electromagnetic radiation across an enormous bandwidth that includes infra-red radiation we feel as heat, visible light, ultra-violet rays that damage our skin, as well as X-rays and Gamma rays. Thankfully, the sun's emission peaks in the green part of the visible spectrum rather than in the destructive X-rays and Gamma rays that fly out of some other stars!
All of the objects and systems we have been talking about above, have a bandwidth of operation and a specific behavior in the frequency domain.
Analyzing objects in the frequency domain is a fundamental tool in mathematics that simplifies and provides insight into the analysis and design of electronics, control systems, communications systems, structural engineering, mechanical engineering, statistics and many other disciplines!
The Fourier Transform
Any signal that varies with time can be referred to as a time-domain signal. All time-domain signals have a corresponding frequency-domain representation. The frequency-domain description tells us about the frequency components that make up the total energy of the signal. You can describe any continuous time-domain signal as a sum of sinusoidal waves of increasing frequency and varying amplitude. The mathematical method to do this is called the Fourier transform. Here is a great picture that explains the Fourier transform really well in a visual way!
Forms of the Fourier Transform
I am not going to dig into the details of how the Fourier Transform works in a blog. However, it does seem sensible to understand the various forms of the Fourier Transform and the contexts in which we make use of them!
Fourier Transform – Integral
The Fourier transform as I was taught it with respect to time-domain signal f(t), has the following form:
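For reference, the standard integral form (sign and scaling conventions vary between texts) is:

```latex
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt
```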
This form of the Fourier transform is great for continuous time-domain signals that we want to analyze from a mathematical perspective, but this form does not easily lend itself to solution by digital computers.
Discrete-Time Fourier Transform
The Discrete Time Fourier Transform is applied to signals that are not continuous in time and is useful for analyzing a sampled signal. The signal x(nT) is an infinitely long series of very short pulses of continuous, varying amplitude. It should be noted that the Discrete-Time Fourier Transform has a practical weakness in that the input is an infinitely long pulse train, and the output X(Ω) is a continuous function of the frequency variable and cannot be represented exactly by digital computers.
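For reference, the Discrete-Time Fourier Transform of the sample sequence x[n] is commonly written as follows, with Ω a continuous frequency variable:

```latex
X(\Omega) = \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\Omega n}
```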
Discrete Fourier Transform
This is the form of the Fourier transform used by digital computers. The output is a discrete function of the frequency variable and this can easily be represented by a computer!
To calculate the DFT and generate the discrete frequencies of the output samples, we have to start with a finite number of discrete-time samples of the input signal, we denote the number of samples as N.
In the Signals & Systems textbook I have, they write: “Let us choose the value of N sufficiently large, so that our set of samples adequately represents all of x[n]”. In the digital communications world, this would translate to all the sample values of the last received symbol.
We select our discrete angular frequencies as Ω = 2πk/N where k is some integer value between 0 and N-1.
In this case, where N is the number of time domain samples we have collected, and k is the current frequency sample we are calculating for, the discrete Fourier Transform is:
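With Ω = 2πk/N as chosen above, this works out to the familiar form:

```latex
X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1
```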
To figure out the physical frequency that each discrete frequency index k represents, recall that f(k) = k·fs/N, where fs is the sampling frequency of the original signal.
Fast Fourier Transform (FFT)
This is the method used by modern computers and radio receivers to calculate the Discrete Fourier Transform of a received time-domain signal and retrieve the symbol information encoded inside it. The Fast Fourier transform implements an algorithm to reduce the number of computational steps in calculating the Discrete Fourier Transform.
The complexity of calculating the DFT is easily seen to rise in proportion to the square of the total number of time-domain samples, N. That means that a 1024-sample signal will result in a 1024-sample DFT requiring in excess of 1 million complex multiplications! The Fast Fourier Transform enables the same calculation to be carried out using many fewer complex multiplications, as shown in the table below:
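The comparison can be sketched by counting complex multiplications, using the usual estimates (N² for the direct DFT, roughly (N/2)·log₂N for a radix-2 FFT):

```python
import math

# Direct DFT: ~N^2 complex multiplications.
# Radix-2 FFT: ~(N/2) * log2(N) complex multiplications.
for n in (64, 256, 1024, 4096):
    dft_mults = n * n
    fft_mults = (n // 2) * int(math.log2(n))
    print(f"N={n:5d}  DFT={dft_mults:10d}  FFT={fft_mults:7d}  "
          f"speedup={dft_mults // fft_mults}x")
```

For N = 1024 this gives 1,048,576 multiplications for the direct DFT (the "in excess of 1 million" figure above) against only 5,120 for the FFT, a roughly 200-fold saving that grows with N.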
If you are looking for a single, comprehensive source that will take you from basic mathematics all the way to the Fourier transform, I found this to be a fantastic, free resource (click on the picture!)
What does a digital communications link actually look like?
This is a useful question to answer as it gives us a model we can continuously refer back to as we learn more about communications. Having a model means that when things get confusing later on, we can go back and see which technique, technology or innovation fits where. A simplistic model of a communications link is shown below, consisting of a source, transmitter, channel, receiver and destination.
The source is basically the signal we want to send. It could be your voice or a TV image or some music, or it could even be some digital data in the form of a frame. It could be a great many things, but for now let’s just accept that the source is some kind of generic information. We will get into the details later! Next is the transmitter responsible for (you guessed it) transmitting the signal to the other side.
The “channel” comes next. In this context we are using the word “channel” to collectively refer to the time and space that the information we are sending must travel through. The channel could be the glass tube of a piece of fiber, the twisted pairs of copper wires of an Ethernet cable, or even the room you are standing in that separates you from the Wi-Fi router your phone is communicating with. It is important to realize, right now, that the channel in this context does not refer to the choice of radio frequency, as when you switch the channel on the radio or TV; that is also a valid use of the word, but with a different meaning. To re-iterate, in the current context, the channel means the entire physical medium through which the transmitted signal must travel. A channel is characterized by the effects it has on the transmitted signal, which become apparent at the receiver.

The receiver is what hears the transmitted signal as it has been affected by the channel and converts that signal back into an approximation of the original message to be processed by the destination. If the approximation is good enough, the destination will be able to recover the original message. If the approximation is not good enough, then the message is lost and the transmitter will have to try again.
Why Digital Communications?
I was asked this question in a job interview many years ago. My future boss stared at me as I fumbled with the answer. I blanked. I had never actually thought of it. Why the hell do we communicate using digital communications, instead of analogue? I could think of a hundred reasons, but couldn't put my finger on one that summed the answer up in a sentence. But thankfully I have since learned the answer: noise immunity.
Digital signals lend themselves easily to being stored, and the information can be easily copied, true. But most importantly, they are also significantly more immune to noise in a channel, making them easier to replicate at the receiver. Here is a picture (acquired from two separate introductory courses to digital communications, here and here) that illustrates the point extraordinarily well.
The Digital, Wireless Communications Link
Let’s go a little deeper and get to the key parts of this, the digital bit and the wireless bit! A more detailed block diagram of a digital, wireless communications link is shown below.
As we progress through this series of blog posts we will look at each of these blocks in more detail, but for now here is a brief summary of what each block does.
The Input Signal is very simply the information we want to transmit across the wireless link. This information is typically already in a digital format, although it could also be an analogue, continuous time domain signal (your voice entering the telephone perhaps?).
Source Coding is the process through which the original input signal is stored in some digital format and is compressed to reduce the storage and transmission requirements of the original information. You can think of source coding as removing redundant bits to lower (improve) the required storage space or data rate of some information. The simplest form of source coding could be analogue to digital conversion. An example of source coding of a digital signal would be the audio codecs used for digital voice calls, such as G.711 or G.729. For data, source coding would be built into the file or data you were sending, e.g. an MP3 music file that compresses the raw, pulse-code-modulated audio of a .wav file.
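As a concrete taste of this, here is a simplified, continuous version of the μ-law companding curve that underpins G.711 (the real standard uses a piecewise-linear 8-bit approximation of this curve, so this is an illustration of the idea, not the exact codec):

```python
import math

MU = 255.0  # the mu parameter used by G.711 mu-law

def mu_law_compress(x: float) -> float:
    """Simplified continuous mu-law curve for samples with |x| <= 1.

    Quiet samples are boosted before quantization, so the limited
    number of digital levels is spent where the voice energy lives.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# A quiet sample is expanded to occupy far more of the output range
# than its linear value would suggest, while full scale stays at 1.0:
print(mu_law_compress(0.01))
print(mu_law_compress(0.5))
print(mu_law_compress(1.0))  # prints 1.0
```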
Encryption exists to secure the message against interception (confidentiality), spoofing (authenticity) or from being tampered with (integrity).
Channel coding is the process of adding redundant information to the message to allow limited forward error correction and to minimize the need to resend messages that have been affected by channel induced errors. Effectively, channel coding adds a dimension of reliability to communications even in the presence of interference and noise. It would be accurate to state here that modern digital wireless communications technologies rely very heavily on channel coding techniques.
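A toy illustration of the principle (real systems use far stronger codes such as convolutional, turbo or LDPC codes) is the rate-1/3 repetition code with majority-vote decoding:

```python
# Toy channel code: repeat each bit three times; decode by majority
# vote. Any single bit flip within a triple is corrected for free.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
coded = encode(message)   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
coded[1] = 0              # the channel flips one bit in transit
print(decode(coded))      # prints [1, 0, 1, 1] - the error is corrected
```

The cost is obvious: we sent three times as many bits as the message contains. Practical channel codes achieve far better correction for far less redundancy, which is exactly why modern wireless systems lean on them so heavily.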
Modulation is the process of mapping the information in the coded information stream onto a carrier signal to create a digitally modulated waveform. This can be done in many ways, but typically involves manipulating the amplitude, frequency or phase of the carrier wave in a predetermined, finite number of ways. Each possible manipulation of the carrier wave is referred to as a symbol and carries a specific sequence of binary information.
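A sketch of one common Gray-coded QPSK mapping (actual standards pin down the exact bit-to-phase assignment, so treat this particular table as illustrative) shows how two-bit symbols select carrier phases:

```python
import numpy as np

# Gray-coded QPSK: each 2-bit symbol selects one of four carrier
# phases. Adjacent phases differ in only one bit, so the most likely
# demodulation error costs only a single bit.
CONSTELLATION = {
    (0, 0): (1 + 1j) / np.sqrt(2),
    (0, 1): (-1 + 1j) / np.sqrt(2),
    (1, 1): (-1 - 1j) / np.sqrt(2),
    (1, 0): (1 - 1j) / np.sqrt(2),
}

def modulate(bits):
    """Map a bit stream (even length) to complex baseband symbols."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([CONSTELLATION[p] for p in pairs])

symbols = modulate([0, 0, 1, 1, 0, 1, 1, 0])
print(symbols)
# Every symbol has unit magnitude: only the phase carries the bits.
```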
The transmitter’s role is to further process and amplify the digitally modulated signal before it is fed into the antenna. The antenna then radiates the signal into the wireless channel which as we have mentioned is actually the physical space and time through which the signal must travel.
The antenna on the receiver is responsible for “hearing” and passing the electromagnetic signal into the receiver. The receiver amplifies the (typically very small) received signal and passes it to the demodulator so that it can be converted from the detected complex waveform back into a series of 1’s and 0’s.
The channel decoder then takes chunks of the received information and uses the redundant information (from the channel coder) to perform forward error correction on the received digital information, recovering the original encrypted message. The encrypted message is decrypted before being passed to the source decoder, which recovers the original information!
One of the things that I have not shown in the diagram above is the synchronization necessary between the transmitter and receiver. It is imperative that the receiver is synchronized to the same frequency and phase as the transmitter. There must also be synchronization of the symbols and of the frames sent so that data can be reliably reproduced on the other side!