psdZoom.m version 01Jul23
(Power Spectral Density measurements and Frequency Translation)
Author: Paul Mennen
Email: paul@mennen.org
Overview
Demonstrates how to compute and display power spectra on fabricated data as well as how to use frequency translation
(also known as "zoom") to see more detail (i.e. higher frequency resolution) in a narrow frequency band.
As an introduction to spectral analysis, let's consider the design of a 40kHz spectrum analyzer (i.e. frequency
components of the input signal can be measured in the 0 to 40kHz range). Also, let's assume that our spectrum analyzer
will have a dynamic range of 70dB (i.e. we are interested in measurements as low as 70dB below the peak signal).
Front end
The front end consists of the analog signal conditioning portion of the analyzer as well as the ADC (analog-to-digital converter).
Here is the block diagram for the front end:
Analog input → Programmable gain stages → Anti-aliasing filter → 12-bit A/D converter → microprocessor
Because of our 70dB dynamic range requirement, all three blocks must have low noise as well as be linear to within 0.03%.
Otherwise, spurious harmonics will appear above the -70dB level that you would mistakenly believe are real signals from the input.
This level of linearity is modest, allowing the use of relatively inexpensive components. However, an experienced analog
designer would be needed since relatively simple mistakes can easily exceed that threshold. 70dB is sufficient for many applications,
but higher quality instrumentation would usually have an 80dB dynamic range, or more - requiring even more experience and
care for the front-end design.
For special-purpose applications, the programmable gain section may not be needed, but for more general-purpose instrumentation,
the gain needs to be variable to accommodate a wide range of input signal levels. Older designs often used relays to switch the
various gain stages in and out, but newer designs usually use MOSFETs as switches, although care must be used to achieve the
desired linearity.
Before designing the anti-aliasing filter we must decide on a sampling rate for the ADC. The smallest sampling rate that can be
used to represent the data accurately is twice the analysis bandwidth. (This is known as the Nyquist rate.) So in this
case we need to sample at a rate of at least 80kHz. However, 80kHz isn't possible because that would require a "brick wall filter",
i.e. a filter that passes all the frequencies from 0 to 40kHz and rejects all the frequencies above 40kHz. So we need to pick
a higher sampling rate. Such a choice is a compromise. A higher sampling rate makes the AA filter design easier and less expensive,
but a lower sampling rate saves processing time and may also allow the use of a somewhat less expensive ADC.
For this design, we have chosen a sampling rate of 102.4 kHz (i.e. a Nyquist frequency of 51.2 kHz). This means that in the range
between 40 kHz and 51.2 kHz, the response of the AA filter has no impact on the measurements. (That range is known as the
transition band.) The ratio of these two frequencies (40/51.2 = 78.125%) is sometimes called the front-end efficiency because
78.125% of the data computed by the FFT will be usable - i.e. there will be 100 good alias-free lines from an FFT with 128 outputs
(i.e. an FFT with 256 real input points or 128 complex input points). The fact that the sampling rate is a multiple of a power of two
is no accident. Since the simplest FFT has an input length that is a power of two, such a choice simplifies the design and the
programming.
The frequency response of the AA filter for this application will look something like this:
This is an 8th-order elliptic filter design that meets the 70dB stopband requirement. The passband ripple is ±.05 dB which
represents a limitation of the spectrum analyzer amplitude accuracy. (The passband frequency response of the gain stage,
the ADC, and the digital filter used for the zoom analysis, all affect the amplitude accuracy as well.)
If you wanted a smaller passband ripple or a higher dynamic range, you would either need to increase the filter order or
relax the front-end efficiency (or both).
For higher frequency designs, the filter would generally be constructed using passive components (capacitors and inductors)
but in this frequency range, a carefully designed solid state (op-amp) filter could achieve this specification with a lower
manufacturing cost than the passive design.
The psdZoom application can't easily simulate these 3 front-end blocks, so for this program, we create simulated data
that represents the output of the ADC. In fact, this is not a particularly accurate simulation, because the linearity
and flatness of the simulated data buffer far exceed what would be possible with a real-world front end. Nevertheless,
this application can help you understand many of the computational methods and design choices involved.
Simulated input data
Since software can't include a real front end, we simulate what the digital output of the front end
would look like when the analog input consists of the sum of random noise with three sinewaves at
various frequencies and amplitudes. This input can be adjusted using these controls which are to
the left of the upper plot in the psdZoom figure window.
The sine wave frequencies are in Hz, and you may enter any value between 0 and 40,000Hz.
To change a frequency, right click on it and press the down arrow to erase the previous value
and type in the new value. Entering a sine wave amplitude of .001, 1, or 1000 will result in
a nominal -60, 0, and 60 dB output respectively. Enter an amplitude of zero to disable any of
the sinusoids.
Uniform random noise is generated which is then shaped by an IIR digital filter. The denominator
is fixed, but you can adjust the three numerator coefficients. The exact shape of the random noise
added to the 3 sinusoids is not important to the simulation, but you can adjust those values if you
like. You can enter numbers by right clicking as described for the sine wave frequencies, or you can
left click just to the right or left sides of the number to increment or decrement (respectively) by 1.
Use smaller numbers if you want less noise, or enter three zeros to disable the noise component
entirely.
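For example, the input buffer could be fabricated along these lines (the sinusoid values and the fixed denominator shown here are illustrative assumptions, not the actual values used by psdZoom):

Fs  = 102400;  N = 2^22;             % sample rate and buffer size (40.96 sec)
t   = (0:N-1)/Fs;
sig = 1    * cos(2*pi*13606*t) ...   % three sinusoids (the frequencies and
    + 1    * cos(2*pi*14230*t) ...   % amplitudes here are example values)
    + .001 * cos(2*pi*30400*t);
num = [1 2 1];                       % the three adjustable numerator terms
den = [1 -1.5 .6];                   % an assumed (stable) fixed denominator
sig = sig + filter(num, den, rand(1,N) - .5);  % add shaped uniform noise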
At the start and also whenever any of the above 9 parameters are changed, a new input data buffer
is computed consisting of 2^22 points (approximately 4 million points) representing exactly 40.96
seconds of data at the front-end sampling rate (102.4 kHz). Once the entire 41 seconds of data has
been analyzed, we start reusing the same data again from the beginning of the buffer. For a more
realistic simulation we should recompute the input buffer rather than reusing it, but this buffer is
long enough that you probably won't notice that it is being reused. On a slow computer (especially
with older Matlab versions) you will notice a delay when you change one of these input parameters
as it computes the 4 million points, but on a fast computer you may barely notice the delay.
Baseband processing (0 - 40 kHz)
Simulated front end (2^22 points) → get next frame (128 - 4096 points) → FFT ⇒ Hanning (optional) ⇒ Compute magnitude & convert to dB → Averaging (exponential) → lower plot
The lower plot always displays the full 0-40kHz frequency range of the input signals. The input from the
front end is processed in chunks (called frames) of 128, 256, 512, 1024, 2048, or 4096 points. The reason
we choose frame sizes that are a power of two is that this makes the FFT algorithm simpler and more efficient.
The frame is then sent to the FFT. The input to the FFT in the baseband case is real, however in Matlab,
the FFT input is always complex, so we set the imaginary components of the FFT input to zero.
For a complex input FFT, the number of output points is the same as the number of input points.
(The imaginary parts of the output in general will not be zero.) In the block diagram, a single arrow
represents a real signal and a double arrow represents a complex signal.
To understand the FFT output let's consider a specific example where
the frame size is 512 points. With our 102.4kHz sample rate, these 512 points represent 5 milliseconds of
data which means there will be 200 frames of data generated every second. (We say that the frame rate
is 200Hz.) The first output of the FFT (which we will call bin 0, or the DC term) will be the average value
of the input, so this value will be large if there is a DC offset in the input. The second term (bin 1) is
the lowest frequency term. Suppose the input consisted of a pure sinusoid with a frequency of exactly 200Hz.
This means that one period of this sinusoid fills the frame exactly. In that case, bin 1 will contain a nonzero
value, and all the other bins will be zero. Whether this non-zero output falls into the real or imaginary part
of the bin or some combination of both depends on the phase of the input (i.e. where the sinusoid starts).
If the input consisted of a sine wave of exactly 400Hz, 600Hz, 800Hz, etc. then it would be bin 2, 3, or 4 respectively
that would contain the non-zero term. Extrapolating, you can see that bin 256 (the 257th output since we started
counting at zero) would correspond to an input frequency of 200*256 = 51.2 kHz which corresponds to the Nyquist
frequency. (This bin is often called the Fmax term which would be the only non-zero term if the input consisted
of a repeated pattern of alternating ones and minus ones.) We discard the remaining 255 terms
(the "negative frequency" terms) since they add no new information, and in fact, will be the complex conjugate
of the positive frequency terms. In a hardware implementation, or a non-Matlab implementation,
most often a real input FFT is used which means that the negative frequency terms are not computed or stored. This
is more efficient than using the complex input FFT. We shouldn't feel bad about using the Matlab FFT however.
Even though it computes and stores the negative frequency terms it does recognize that the imaginary
components of the input are zero and adapts the algorithm for that situation. The Matlab FFT is so well optimized,
you would be hard-pressed to beat it by implementing a real input FFT algorithm.
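You can see this single-bin behavior for yourself with a few lines of Matlab:

Fs = 102400;  N = 512;  t = (0:N-1)/Fs;
x  = sin(2*pi*200*t);      % exactly one cycle fills the 5ms frame
X  = fft(x);               % 512 complex outputs
round(abs(X(1:4)))         % bins 0-3: only bin 1 (2nd element) is nonzero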
Even with a sinusoid input, the examples above where all the outputs are zero except for a single bin won't happen
with data from a real front end partly because the input will include some noise but also because the input frequency
will not usually be an exact multiple of 200Hz (again using our 512 frame size example). The FFT bin that is closest
to the input frequency will contain the largest value, but the sine wave energy will also spread out (leak) into all the
neighboring bins. This somewhat inconvenient effect is referred to as "leakage". When we can get the desired frequency
information from the measurement despite the leakage we usually use the boxcar window (i.e. no window at all) but
when the leakage obscures the measurement a window (such as Hanning) is used to reduce the leakage.
These 257 complex outputs are fed into the Hanning block which implements the Hanning window using its simple 3-point
convolution kernel. We could have chosen to implement the Hanning window by multiplying the time domain signal
by the window shape (in which case the order of the FFT and Hanning blocks above would be reversed). Which method is
more efficient depends on the implementation details and the type of window chosen. Hanning is the
most popular window to use, but a more sophisticated spectrum analyzer would usually offer several different window
choices. To learn more about other window types that may be used, read the documentation for the
winplt.m application here.
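As a sketch, the frequency-domain form of the window is a circular convolution with the kernel [-.25 .5 -.25], which matches multiplying the time-domain frame by .5 - .5*cos(2*pi*n/N) before the FFT:

X  = fft(randn(1,512));    % any full-length FFT output
Xw = .5*X - .25*circshift(X,[0 1]) - .25*circshift(X,[0 -1]);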
Before the data is sent to the magnitude computation, it is reduced in size again, from 257 points to 201 points
(with our 512 frame size example) covering the frequencies between 0 (DC) and 40 kHz. The reason the bins beyond
40kHz (up to 51.2 kHz) are not displayed is that this is the transition region of the front-end anti-aliasing
filter, so this data is not reliable (i.e. may contain aliases above the 70dB dynamic range of the analyzer).
The arc-tangent of the ratio between the imaginary and real parts of the FFT output represents the phase information
for each bin but this information is not displayed because, for a real-world input, it is nearly impossible to
make sense of this information. (However, the phase is quite important for multi-channel measurements.) Ignoring
the phase, the remaining useful information is contained in the magnitude of the complex result (the square root
of x^2 + y^2) where x and y are the real and imaginary parts respectively. This gives us energy
units, although if we skip the square root operation we would get power units. Sometimes the spectrum is displayed
in these linear units (energy or power) but more often we want a logarithmic display so we can see a wide dynamic
range. The decibel (dB) is a convenient unit for this, which is 10 times the log of the power unit output or 20
times the log of the energy unit output.
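Putting the FFT, trimming, and dB steps together for our 512-point example (a sketch):

frame = randn(1,512);      % stand-in for one frame from the front end
X  = fft(frame);           % 512 complex outputs
X  = X(1:201);             % keep DC through 40kHz (200Hz per bin)
dB = 20*log10(abs(X));     % magnitude (energy units) expressed in dB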
And lastly, we send the magnitude data to the averaging unit. Additive averaging is a common method where we choose
a specific number of frames to average together. Intermediate results are usually displayed while these frames are
accumulated, and once the required number of frames has been accumulated, the averaging stops and the display holds the final result.
The other common method (which is used here) is called exponential averaging which unlike additive averaging continues
forever (or until you hit the stop button). As an example, suppose we select 64 as the averaging parameter.
The first output of the averager will be the same as the input to the averager, but after that, the averager output
(which I will call p) will be computed as:
p = P/64 + (63/64)p
where P is the output from the magnitude computation. Suppose you change the amplitude value of one of the input sine
waves from 1 to zero. You won't see the peak representing that frequency disappear immediately, but rather it will
slowly decay until it is no longer visible. The disadvantage of using a larger averaging parameter (512 for example)
is that this decay will be slower and you will have to wait longer before the output represents the new input. The
advantage however is that the display will be smoother and more accurate. If you set the parameter to 1 (i.e. no
averaging) there would be no delay (other than the frame time) but the output will be much noisier. It's easy to
experiment with the averaging parameter to arrive at a setting that gives you the desired compromise between
smoothness and response time.
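In code, exponential averaging amounts to just a few lines (a sketch, with random data standing in for the magnitude frames):

avg = 64;  p = [];                     % averaging parameter; running average
for frame = 1:200
    P = abs(randn(1,201));             % stand-in for a new magnitude frame
    if isempty(p), p = P;              % first frame initializes the average
    else,          p = P/avg + (1 - 1/avg)*p;  % exponential update
    end
end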
Zoom processing
As you will see in some of the examples below, it is often useful to focus the spectral analysis
on a narrow band of interest so we can see more detail in that band. Zoom processing
(sometimes called "frequency translation") is an effective way to do this. This type of analysis
is somewhat more complicated than the baseband processing discussed above, but we can understand
how it works from the following block diagram:
Simulated front end (2^22 points) → × e^(-jωt) ⇒ Decimate by 2^n filter ⇒ get next frame (64 - 2048 points) ⇒ FFT ⇒ same last 3 boxes as above (Hanning, Magnitude, Averaging) → upper plot
The first block is the frequency translator (or shifter). It shifts the frequency content of the input signal
by the amount ω by multiplying the input signal by the complex exponential e^(-jωt),
which can also be written as cos(ωt) - sin(ωt)j. Suppose for example we want to focus the
analysis on a region around 14kHz. The sine and cosine waves we would use for the translator would be the ones
that would have all their energy in bin 70 of our 512-point FFT (i.e. the sine wave would cycle 70 times within
a 512-point frame).
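A sketch of this translation step (the 14kHz center frequency and the 30.4kHz test tone are example values):

Fs = 102400;  Fc = 14000;           % sample rate; zoom center frequency
n  = 0:2^14-1;                      % sample index
x  = cos(2*pi*30400/Fs*n);          % example real input signal
xs = x .* exp(-2j*pi*Fc/Fs*n);      % complex data with 14kHz moved to DC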
Then we want to lower the sample rate to focus on a small region around 14kHz (in this example). Suppose we want
to zoom in by a factor of 8 (i.e. to have 8 bins of frequency information for every baseband bin). To do this
we would need to lower the sample rate by a factor of 16. (The extra factor of 2 is because the data is now complex,
essentially doubling the effective data rate.) Your first instinct might be that we can just save every 16th
data point while discarding the remaining 15. That will indeed increase the frequency resolution, but aliases
from other parts of the baseband spectrum will leak into the band we are focusing on rendering the result useless.
So we must low-pass filter the data before decimating it. This is similar to
the reasoning behind the front-end anti-aliasing filter, but this time we need a digital filter instead of an
analog one. Since the entire baseband bandwidth is Fs/2, we want our filter to pass the frequencies up to Fs/32
and reject all the signals above that. Of course, such a brick wall filter is unrealizable, but since we are using
only 200 out of the 256 FFT bins, our passband only has to go out to
(256-56)/256 * (Fs/32) = .78125 * Fs/32 ≈ .0244 * Fs,
and the stopband must begin at (256+56)/256 * (Fs/32) ≈ .0381 * Fs.
To give us an idea of the sharpness of this transition band, let's type:
plt([0 .0244 .0381 .5],[1 1 0 0],'Linewidth',2)
which gives us this figure showing the desired filter shape. Notice how steep the roll-off is in the transition band.
This makes designing the filter a challenge, and this challenge grows increasingly more difficult as you approach
the maximum decimation rate of the analyzer, which for this program is a decimate by 1024. (Although I have shown
the stopband extending all the way from .0381 to .5, there are actually 16 don't care regions inside that range.
However, it is difficult to take advantage of these don't care regions because they are narrow and become even
narrower with higher decimation rates.)
To solve this problem, we will use a much simpler decimate-by-two filter and run it multiple times to achieve
higher decimation rates. So to achieve the maximum decimate by 1024 we will run the input data thru the filter 10 times.
After each time running the filter, we will discard every other output point (i.e. decimate by two).
To achieve our desired 78.125% efficiency (100 good lines out of 128) the decimate-by-two filter must have a passband
that goes out to .78125 * (Fs/4) = .1953125 * Fs. The stopband must have the same width as the passband,
which means that the stopband begins at .5*Fs - .1953125*Fs = .3046875 * Fs.
To maintain the expected 70dB dynamic range, the stopband ripples must be at least 70dB below the passband.
The erip application was used to design this filter and the result is shown here.
Even though we need to run the decimate-by-two filter multiple times, this still turns out to be less computationally
intensive than a single-pass filter.
This filter has 7 zeros and 5 poles (i.e. 8 numerator and 6 denominator coefficients) and is implemented in psdZoom.m
with a single Matlab statement.
This filter has a stopband attenuation of 70.9dB, exceeding the 70dB requirement slightly. The passband
ripples of this filter are ±.02dB. (You can verify this by expanding the display in the passband
region using either the erip or editz applications.)
This is better than the flatness of the front-end AA filter, which is desirable so that we don't degrade the
amplitude accuracy too much further. Unlike the analog AA filter, designing the digital decimating filter for more
dynamic range or better amplitude accuracy doesn't require more skill from the designer, but will require a higher
order filter with more computational demand.
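If you want to experiment with this stage, here is a stand-in built with the Signal Processing Toolbox functions ellipord and ellip (only an approximation of the same spec - the actual psdZoom coefficients come from the erip design with its 7 zeros and 5 poles):

Wp = 2*.1953125;  Ws = 2*.3046875;     % band edges (1 = half the sample rate)
[n,Wn] = ellipord(Wp, Ws, .02, 70);    % minimum order meeting the spec
[b,a]  = ellip(n, .02, 70, Wn);        % .02dB ripple, 70dB stopband
x = randn(1,2^14) + 1j*randn(1,2^14);  % example complex (shifted) data
for pass = 1:4                         % e.g. decimate by 2^4 = 16
    x = filter(b, a, x);               % the filter handles complex data
    x = x(1:2:end);                    % discard every other point
end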
It works perfectly well to use the filter described above for every filter pass. However, it turns out that we
can speed up the filtering somewhat by using different filters for some passes. For example, consider the decimate
by 4 case which requires two passes thru the decimate-by-two filter. For the first pass, half of the passband
won't be used because its output will again be decimated. So for the first pass, the passband only has to go
out to .09765625 instead of the .1953125 required for the second (and last) pass. And the stopband also can be
half the size. This means the first pass can use a simpler filter (only 7 roots, counting both poles and zeros)
compared with the 12 roots required for the last pass. This simpler filter is also implemented with a single
Matlab statement in psdZoom.m.
This filter has a stopband rejection of 80 dB and a passband ripple of ±.0035 dB. Again you can use the
editz or erip application to verify these performance characteristics.
Let's take this one step further, by considering the decimate by 8 case. For the same reason as before, the
first pass can use an even simpler filter shown here (25 lines out of 128), with the second pass using
the 50-line filter shown above, and the last pass using the 100-line filter above that. This filter is likewise
implemented with a one-line Matlab statement,
which yields a stopband rejection of 77.5 dB and a passband ripple of ±.0046 dB. For the decimate by 16
case and above, we could use even simpler filters, but for simplicity, I chose to use this 6th-order FIR filter
for all passes except for the last two.
In most implementations, you will have one filter for the real part and a second identical one for the
imaginary part. However, since complex numbers are a native Matlab data type, here we only need one call
to the filter function for each decimate-by-two stage.
The complex data from the decimating filters is then processed in frames as in the baseband case. When the frame
size popup is set to 512 (for example) the baseband frame size is 512 real points, while the frame size for
zoom processing is set to 256 complex points. (An FFT with 256 complex inputs yields the same number of outputs
as an FFT with 512 real inputs.) The two halves of the FFT output are swapped (using fftshift) so that the FFT
DC-term (now representing 14kHz in our example) is in the middle of the FFT output.
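In Matlab this step is a one-liner (a sketch):

z = randn(1,256) + 1j*randn(1,256);  % one frame of decimated complex data
Z = fftshift(fft(z));                % the DC term (the zoom center frequency)
                                     % now lands in the middle of the output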
Consider the case where the zoom factor is set to one. This means the decimating filter will be run just once.
If you set the zoom frequency to 20kHz you will see that the zoomed spectrum is identical to the baseband spectrum,
except for perhaps some barely noticeable effects due to imperfections in the decimate-by-2 filter. (With a
different center frequency, the spectrum will be a shifted version of the baseband spectrum.) So a zoom factor
of one is not useful since you have nothing to show for the extra work of frequency shifting and
filtering. However, once you select the higher zoom factors, we will see in the examples below that you will
get an advantage from these extra computations.
The psdZoom figure window
From the baseband spectrum in the lower plot, you might think that the input consists entirely of shaped random
noise. However, looking more carefully we can notice an artifact near 14kHz. So we set the zoom center frequency
at 14kHz and set the zoom factor to 8 to increase the resolution in that region to 25Hz (compared to the 200Hz
baseband resolution). Now we can clearly see that this artifact consisted of two sinusoids at approximately
13.6 and 14.2 kHz.
In the baseband spectrum, the noise signal was enough to nearly completely obscure the sine waves. The reason
we can see them in the zoomed spectrum is that the noise in each bin of the baseband spectrum is now spread out
into 8 bins in the zoomed spectrum. This makes the noise power low enough in each bin that the sine wave signal
can easily rise above the noise.
Now, to be able to measure the frequencies of the sine waves more accurately, let's increase the resolution
by another factor of 4 (by selecting a zoom factor of 32). Now we can see that the largest sine wave is
closer to 13.606 kHz. (Click anywhere in the upper plot to select this trace, and then click on the up arrow
(peak finder) button near the lower left corner of the figure. That will move the cursor directly to this peak
and you can read the frequency in the respective x-axis edit box.)
But notice that now we can see that the input includes yet another sinusoid, this one at about 14.23 kHz. (Click
on the up arrow peak finder two more times and it will land on this 3rd highest peak.) The reason we can see the
third sine wave here but not with the lower zoom factor is the same as before. We are again spreading out the
noise power into more bins, so there is even less noise per bin than before. The noise has not been eliminated.
It is still the same total noise power as before. If we were to compute the noise power between two frequencies
by summing the noise power over many bins, we would get the same answer regardless of the zoom factor (provided
we have enough resolution to pick the band edges accurately in each measurement).
If we wanted to measure the frequency of any of these peaks more accurately, we can continue to increase the
zoom factor which in turn increases the resolution of the frequency measurements. We will have to adjust the
zoom center frequency so that the sine wave of interest remains in the zoomed region.
In this example, we have kept the same settings as before except the frame size has been increased to 1024.
The inputs are the same as well except for the third sinusoid whose frequency was changed slightly and its
amplitude was increased by a factor of 5000. The leakage from this large sine wave is so large that it obscures
everything else in the upper zoomed plot. In the lower plot, we also see mostly the large sine wave; we can tell
that there is some noise source as well, but it's not big enough to say anything about its shape or character.
It appears that our biggest problem with this measurement is the leakage, so here we turn on the Hanning window
to help deal with that problem. The Hanning window has made a dramatic difference. In the baseband plot, we can
now see the large sine wave as well as the shape of the underlying noise. We can't see the other two sine waves
in the baseband display, but we now can see all the sine wave signals in the zoomed plot ... thanks to the
leakage reduction afforded by the Hanning window.
This measurement is the same as the one shown in the previous figure, except that instead of using the
"Hanning" popup choice as above, here we have chosen "hannP" which is short for "Hanning with power correction".
You may notice that the noise level here is a few dB lower than in the figure above. If you were to try to estimate
the noise power within a particular frequency range you would find that the previous spectrum would yield an
incorrect answer, but we would get a good estimate using the power-corrected Hanning window. So why wouldn't we
always use the power correction? To understand why, look at the amplitude of the first peak in the spectrum
(near 13.6 kHz). The figure above shows this peak at about 20dB (the correct result) whereas here we see a value
that is a few dB less than that. So apparently we can only get accurate amplitude measurements for amplitude
or noise, but not both at the same time. This is one of the drawbacks of using windowing, but it is something
we put up with because the benefits are often overwhelming as seen in this example.
Here we show a measurement of an input consisting of 3 large sine waves (0 dB) at 6, 8, and 10 kHz with a
small amount of noise added in. The "disable aaFilt" checkbox has been checked. This causes the digital filter
that precedes the zoom processing decimation to be bypassed. Suppose we now zoom in by a factor of 4 on the
frequencies above 10kHz to see if there are any signals in that region. Now we see two large peaks at 15.6
and 17.6 kHz. These additional peaks are clearly spurious because they are so large that we would also see them
in the baseband spectrum, but we do not. However, in some cases, the aliases are much smaller and we would have no
way of knowing if the peaks represent real signals or not. Omitting these filters renders the zoomed spectrum
useless, which is why this option (to disable the filters) is not provided on any real spectrum analyzer. It is
included here for demonstration purposes. Likewise, if you omitted the analog AA filter of the front end (or
designed, built, or tested it improperly) even the baseband spectrum would be rendered useless.
After unchecking the checkbox, we will see this result for the zoomed spectrum, which agrees with what we would
expect to see given the chosen simulated input parameters.
Sometimes we would like to start the analysis at DC but we are only interested in a small portion of the full
40kHz spectrum (i.e. we want a lower bandwidth baseband). In this example, we are looking at a signal consisting
of 3 sine waves all near 100Hz. So a maximum analysis frequency of 150 or 200 Hz would be sufficient.
Choosing a zoom factor of 256 gives us a bandwidth of 40,000/256 = 156.25 Hz. Then we set the zoom frequency to
half of that (78.125Hz) so that our analysis window starts at 0 Hz.
Instead of using zoom processing, a more common way of generating a lower baseband spectrum is to simply filter
and decimate the original data from the front end. The real to complex FFT is usually somewhat less efficient
(to generate the same number of outputs) but this is balanced by the fact that we don't need to multiply by the
complex exponential, making the computational load for both methods similar.
Here the simulated input consists of three sinusoids with frequencies of 30.4kHz, 34.8kHz, and 0Hz all with an
amplitude of 1 volt with some noise mixed in. Since the sinusoids are actually cosine functions which begin at
one (at t = 0), the zero-frequency cosine wave is effectively just a 1-volt DC term. In the zoomed plot, both
of these frequencies are multiples of the bin width, so they appear as a single peak at 0dB (because the boxcar
window was selected). Note that the bin width (ΔF) is shown to the left of each plot. In the baseband plot,
the 30.4kHz peak is a multiple of the bin width so it appears as a single 0dB peak, but the 34.8kHz signal is
not. Since it is a multiple of half the bin width, its energy is split mostly between two adjacent bins,
although some of its energy also spreads to neighboring bins.
In both the upper and lower plots, the DC term also appears as a single peak at 6dB, not at 0dB as you might first
guess. If you have studied calculus, you have probably once proved that the average value of a squared sine wave
is one-half which is why the DC term is twice the other two terms. Since the 0dB reference is somewhat arbitrary,
we could have chosen to represent the DC term as 0dB and the other two terms as -6 dB. Of course, it is important
to specify what reference is being used.
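You can verify the 6dB figure with a few lines of Matlab (the scaling here is chosen so that a unit sinusoid reads 0dB):

N = 512;  t = (0:N-1)/N;
x = cos(2*pi*8*t) + 1;       % a unit cosine (bin 8) plus 1 volt of DC
X = abs(fft(x))/(N/2);       % scale so a unit sinusoid reads 1 (0dB)
20*log10(X([1 9]))           % DC bin: +6.02dB; bin 8: 0dB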
You probably have noticed that this display includes a circle marker for each FFT output bin. This is especially
useful when a small frame size has been selected (as in this example). To add these markers to the lower plot,
first, click anywhere on the lower trace and then click the button labeled "o" in the 4-button group near the lower
left corner. This will add the markers but remove the lines connecting the markers. If you want to see both the
markers and the line (as shown here) click again on the "o" button, and click a third time to return to the
original presentation. You can do the same thing for the upper plot. Just click anywhere on the upper trace before
clicking on the "o" button.
One interesting design challenge, whether the front end is real or simulated, is how to set the Y-axis limits.
Because the amplitudes can vary widely, setting these limits to a fixed value is not workable. One approach
is to provide a set of knobs so the user can easily set the desired limits. This is not usually too much of an
imposition, but it is tempting to try to relieve the user of this responsibility by setting the limits automatically.
At first, this seems like a trivial task. The ideal limits would allow the spectrum to have its maximum
near the upper limit and the minimum near the lower limit with perhaps some headroom (perhaps 5 or 10 percent of
the plot height) so that the trace doesn't touch the upper or lower limit. So we can simply search for
the largest and smallest value and set the limits to give the desired headroom.
However, after trying that simplistic approach, we find that it doesn't work well. That's because the spectrum is
often noisy (especially with little or no averaging) so every display update will change the Y-axis limits, sometimes
by a dramatic amount. Especially with a rapid display update rate, the wildly updating axis limits will make the
plot look so jittery that it is difficult to see what is happening in the plot.
The solution to this problem used here is to calculate the ideal display limits as described above, but instead
of setting the axis limits to this ideal setting right away, we slowly adjust the limits so that they move toward
the ideal limits at a prescribed rate. This prescribed rate is a compromise. A fast rate is better because then
when the input changes, the display stabilizes quickly on the appropriate limits for the new input. But a slow
rate is better because it reduces the jitter we observe when the limits are adjusted rapidly. The compromise I
chose was to use a relatively fast time constant (one second) when the limits need to be widened because the current
spectrum is exceeding the current limits. The faster rate was chosen in this situation because it is undesirable
to have part of the spectrum hidden beyond the upper or lower axis limit. On the
other hand, when the limits need to be narrowed because the spectrum doesn't span enough of the plot height,
the problem is less urgent since we can at least see the whole spectrum even though the detail may not be as good
as desired. So in that case we use a slower (four-second) time constant. You may want to experiment by dramatically
increasing and decreasing the amplitude of one of the sinusoidal inputs while the display is continuing to update,
so you can observe how these two different time constants behave.
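Here is one way such a limit tracker could be written (a sketch only - the function name and details are illustrative, not the actual psdZoom code):

function lim = nudgeLimits(lim, ideal, dt)
% lim, ideal = [ymin ymax]; dt = seconds since the last display update
  tc = [4 4];                           % 4-second time constant when narrowing
  if ideal(1) < lim(1), tc(1) = 1; end  % 1-second when widening downward
  if ideal(2) > lim(2), tc(2) = 1; end  % 1-second when widening upward
  a = min(dt./tc, 1);                   % first-order smoothing fraction
  lim = lim + a.*(ideal - lim);         % move toward the ideal limits
end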
In addition to describing how to use the psdZoom application, I have tried to give you an introduction to the
concepts needed to design an instrument that displays the spectrum corresponding to an analog input signal.
If you are mathematically inclined, there are many books that can help you explore these ideas further. However,
if you don't have the background or patience for all those equations, you can go a little farther in some of these
concepts by viewing a talk that I prepared years ago. You can find that talk
here