Editor’s Notes
ANALOG DIALECTIC
A little more than 30 years ago, the undersigned held a responsible technical marketing position with George A. Philbrick Researches, Inc., and was editor of a journal called “The Lightning Empiricist.” For the edification of a sizeable portion of our readers who weren’t around at the time, GAP/R, a company with annual sales of $6M, virtually owned the operational-amplifier market—such as it was. The only significant competitors were then a lot smaller—Tom Brown’s Burr-Brown Research Corporation, and Al Pearlman and the late Roger Noble’s Nexus Research Laboratory, Inc.

Aye, what’s in a name? Do you find it interesting that “Research” was part of all three names? The founders of a new company, Ray Stata and Matt Lorber, did. They believed that the existing companies were getting prices* that would interest only researchers (who had much fatter budgets in those days), and that a significant industrial volume market for operational-amplifier modules with a wide variety of performance was waiting to be tapped (ICs with decent performance were still pretty far over the horizon, so you didn’t need venture capital to buy an expensive fab; you could still build modules in your own garage, or in this case, a loft—or was it a basement?—in Cambridge, MA).

So they gave their company the highly descriptive and serviceable name, Analog Devices, Inc., and offered high-performance op amps at reasonable prices, along with strong application support. They were right, of course! The company’s sales took off almost immediately. And ADI was aided by an incredible stroke of luck—both Philbrick and Nexus were acquired by the giant Teledyne, Inc., and merged. As often happens in such circumstances, the effect was like putting two resistors in parallel: their joint effectiveness actually decreased. Not only that, but ADI was able to acquire the services of a number of talented, and perhaps disgruntled, former employees of Teledyne Philbrick Nexus, a move that proved synergistic. The rest is history! Should the next sentence read: “And with its primacy in analog devices, ADI lived happily ever after”?

Ah, but what’s in a name? Don’t you find it interesting that one of the fastest-growing product lines at Analog Devices is the SHARC general-purpose digital signal processor—a DIGITAL COMPUTER, for gosh sakes!? Yes, the analog stuff is doing very well indeed, too, but isn’t that a bit of a stretch? If we some day end up getting more revenue from “digital” products than from “analog” products (in no way a sure thing, say our “analog” guys!), will “Analog” in our name be a misnomer?

No! As we’re endlessly fond of saying, hardware design problems are analog problems. If the name of the game is signal processing, any signal that’s a voltage or a current—or even a time interval—has resolution limitations that are analog in nature (noise, jitter, bandwidth, etc.), even if the voltage or current level you’re trying to recognize represents a “1” or a “0”, and the time is a sampling interval. And don’t forget interference: those parasitic spikes the equipment is responding to—or producing—are causing problems an analog designer has to solve.

The big ray of hope here is digital signal processing. DSP to the rescue of analog! Some very hairy analog problems are indeed being solved with DSP. Your computer’s modem, cellular phone, and ac motor controller are doing things via the analog variables in ordinary phone lines, rf, and motors that many of us never believed possible 30 years ago. It’s still analog, but it’s aided by digital (programmers’) intelligence and the space, cost, and power savings of the analog hardware that we call DSPs.

Dan.Sheingold@analog.com

*For example, the P2 solid-state parametric electrometer was priced at $227 (in mid-1960s US dollars!).
THE AUTHORS
Dave Robertson (page 3) is a
design engineer in the Analog
Devices High-Speed Converter
group in Wilmington, MA. He
joined Analog on graduating from
Dartmouth College, with BA and
BE degrees. He has worked on
high-speed D/A and A/D converters, including the AD568,
AD668, AD773, AD872, and
AD871, and is now developing and adapting high-speed converters
for communications systems. In his free time Dave plays rugby
and entertains his two children.
Doug Mercer (page 7) is an
Analog Devices Fellow doing
circuit design on CMOS DAC
architectures in the High Speed
Converter Group in Wilmington,
MA. He joined ADI in 1977 after
graduating from Rensselaer
Polytechnic Institute with a BSEE.
In his free time, he tries to improve
his golf game.
Joe DiPilato (page 7) is a Product
Marketing Manager in the High
Speed Converter group at Analog
Devices in Wilmington, MA. He
has a BSEE from Worcester
Polytechnic Institute and an MBA
from Anna Maria College. He is
focused on defining and marketing
high-speed A/D- and D/A-converter-based products for use in
the communications marketplace. When he’s not working, Joe
enjoys water sports, tennis, golf, the beach, and spending quality
time with his wife, Lisa.
[ more authors on page 22 ]
Cover: The cover illustration was designed and executed by
Shelley Miles , of Design Encounters , Hingham MA.
One Technology Way, P.O. Box 9106, Norwood, MA 02062-9106
Published by Analog Devices, Inc. and available at no charge to engineers and
scientists who use or think about I.C. or discrete analog, conversion, data handling
and DSP circuits and systems. Correspondence is welcome and should be addressed
to Editor, Analog Dialogue, at the above address. Analog Devices, Inc., has
representatives, sales offices, and distributors throughout the world. Our web site is
http://www.analog.com/. For information regarding our products and their
applications, you are invited to use the enclosed reply card, write to the above address,
or phone 617-937-1428, 1-800-262-5643 (U.S.A. only) or fax 617-821-4273.
ISSN 0161–3626
©Analog Devices, Inc. 1996
Analog Dialogue 30-3 (1996)
Selecting Mixed-Signal Components for Digital Communication Systems—An Introduction
by Dave Robertson
Communications is about moving information from point A to
point B, but the computer revolution is fundamentally changing
the nature of communication. Information is increasingly created,
manipulated, stored, and transmitted in digital form—even signals
that are fundamentally analog. Audio recording/playback, wired
telephony, wireless telephony, audio and video broadcast—all of
these nominally analog communications media have adopted, or
are adopting, digital standards. Entities responsible for providing
communications networks, both wired and wireless, are faced with
the staggering challenge of keeping up with the exponentially growing
demand for digital communications traffic. More and more,
communications is about moving bits from point A to point B.
Digital communications embraces an enormous variety of
applications, with radically different constraints. The transmission
medium can be a twisted pair of copper wire, coaxial cable, fiber-
optic cable, or wireless—via any number of different frequency
bands. The transmission rate can range from a few bits per second
for an industrial control signal communicating across a factory
floor to 32 kb/s for compressed voice, 2 Mb/s for MPEG-compressed
video, 155 Mb/s for a SONET data trunk, and
beyond. Some transmission schemes are constrained by formal
standards, others are free-lance or developmental. The richness of
design and architectural alternatives produced by such variety
boggles the mind. The digital communications topic is so vast as to
defy a comprehensive treatment in anything less than a shelf of books.
A communications jargon and a bewildering array of acronyms
have developed, making it sometimes difficult for the
communications system engineer and the circuit hardware designer
to communicate with one another. Components have often been
selected based on voltage-oriented specifications in the time
domain for systems whose specifications are expressed in frequency
and power. Our purpose here, and in future articles, will be to
take a fairly informal overview of some of the fundamentals, with
an emphasis on tracing the sometimes complex relationship
between component performance and system performance.
The “communications perspective” and analytic tool set have also
contributed substantially in solving problems not commonly
thought of as “communications” problems. For example, the
approach has provided great insight into some of the speed/
bandwidth limits inherent in disk-drive data-recovery problems,
where the channel from A to B includes the writing and reading of
data in a magnetic medium—and in moving data across a high
speed bus on a processing board.
Shannon’s law—the fundamental constraint: In general, the
objective of a digital communications system is:
• to move as much data as possible per second
• across the designated channel
• with as narrow a bandwidth as possible
• using the cheapest, lowest-power, smallest-space (etc.)
equipment available.
System designers are concerned with each of these dimensions to
different degrees. Claude Shannon, in 1948, established the
theoretical limit on how rapidly data can be communicated—the
channel capacity:

C = W log2(1 + S/N)

where C is the capacity in bits per second, W is the channel
bandwidth in Hz, and S/N is the signal-to-noise power ratio.
This means that the maximum information that can be transmitted
through a given channel in a given time increases linearly with the
channel’s bandwidth, and noise reduces the amount of information
that can be effectively transmitted in a given bandwidth, but with
a logarithmic sensitivity (a thousandfold increase in noise may
result in a tenfold reduction in maximum channel capacity).
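As a quick back-of-envelope sketch of this relation (the 3-kHz/30-dB numbers below are merely representative of a voice-grade line, not taken from the text), Shannon’s formula C = W·log2(1 + S/N) can be evaluated directly:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr: float) -> float:
    """Maximum error-free data rate (bits/s) of a channel with the
    given bandwidth and linear (not dB) signal-to-noise power ratio."""
    return bandwidth_hz * math.log2(1.0 + snr)

# A 3 kHz channel with 30 dB SNR (S/N = 1000):
print(shannon_capacity(3000, 1000))   # ~29.9 kb/s

# A thousandfold noise increase (S/N falls from 1000 to 1) cuts
# capacity only about tenfold, thanks to the logarithm:
print(shannon_capacity(3000, 1))      # 3000 bits/s
```

Note how the logarithmic sensitivity shows up: three decades of added noise cost only one decade of capacity.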
Essentially, the “bucket” of information has two dimensions:
bandwidth and signal-to-noise ratio (SNR). For a given capacity
requirement, one could use a wide-bandwidth channel with
relatively poor SNR, or a narrowband channel with relatively good
SNR (Figure 1). In situations where bandwidth is plentiful, it is
common to use cheap, bandwidth-hungry communications
schemes because they tend to be insensitive to noise and
implementation imperfections. However, as demand for data
communication capacity increases (e.g., more cellular phones)
bandwidth is becoming increasingly scarce. The trend in most
systems is towards greater spectral efficiency, or bits capacity per
unit of bandwidth used. By Shannon’s law, this suggests moving
to systems with better SNR and greater demands on the transmit
and receive hardware and software.
Let’s examine the dimensions of bandwidth (time/frequency
domain) and SNR (voltage/power domain) a little more closely by
considering some examples.
[Figure 1: equal-capacity contour on SNR vs. bandwidth (Hz) axes—the wideband end has the least SNR requirements (cheapest?); the narrowband end gives the best spectral efficiency.]
Figure 1. Shannon’s capacity limit: equal theoretical capacity.
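The bandwidth-vs.-SNR trade can be made concrete by inverting Shannon’s formula (a hypothetical sketch; the 100-kb/s target and bandwidth budgets are illustrative, not from the text):

```python
import math

def required_snr_db(capacity_bps: float, bandwidth_hz: float) -> float:
    """SNR (in dB) needed to reach a target capacity in a given
    bandwidth, from inverting C = W*log2(1 + S/N)."""
    snr = 2.0 ** (capacity_bps / bandwidth_hz) - 1.0
    return 10.0 * math.log10(snr)

# The same 100 kb/s capacity under three bandwidth budgets:
for bw_khz in (100, 50, 25):
    db = required_snr_db(100e3, bw_khz * 1e3)
    print(f"{bw_khz:3d} kHz needs {db:5.2f} dB SNR")
```

Halving the bandwidth from 100 kHz to 50 kHz raises the SNR requirement from 0 dB to about 4.8 dB; quartering it demands about 11.8 dB—each halving of bandwidth costs exponentially more SNR, which is why spectral efficiency puts “greater demands on the transmit and receive hardware.”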
IN THIS ISSUE
Volume 30, Number 3, 1996, 24 Pages
Editor’s Notes, Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Selecting mixed-signal components for digital communication systems—Introduction . 3
TxDAC™ CMOS D/A converters are optimized for the communication
Transmit path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Buffered multiplexers for video applications . . . . . . . . . . . . . . . . . . . . . . . . 10
Single-chip direct digital synthesis (DDS) vs. the analog phase-locked loop . . . 12
For efficient signal processing in embedded systems, take a DSP, not a RISC . . 14
New-Product Briefs:
Operational Amplifiers and variable-compression Audio Preamplifier . . 16
DSPs and Digital and Analog Audio chips . . . . . . . . . . . . . . . . . . . . . 17
ADC, 2-channel Data-Acquisition system, and Direct Digital
Synthesis chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Transceivers and switched-cap Regulated Inverter . . . . . . . . . . . . . . . . 19
Ask The Applications Engineer—22: Current-feedback amplifiers—I . . . . . 20
Worth Reading, More authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Potpourri . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
PCM: A simple (but common) case: Consider the simple case of transmitting the bit stream illustrated in Figure 2a from a transmitter at location A to a receiver at location B (one may assume that the transmission is via a pair of wires, though it could be any medium). We will also assume that the transmitter and receiver have agreed upon both the voltage levels to be transmitted and the timing of the transmitted signals. The transmitter sends “high” and “low” voltages at the agreed-upon times, corresponding to 1s and 0s in its bit stream. The receiver applies a decision element (comparator) at the agreed-upon time to discriminate between a transmitted “high” and “low”, thereby recovering the transmitted bit stream. This scheme is called pulse-code modulation (PCM). Application of the decision element is often referred to as “slicing” the input signal stream, since a determination of which bit is being sent is based on the value of the received signal at one instant in (slice of) time. To transmit more information down this wire, the transmitter increases the rate at which it updates its output signal, with the receiver increasing its “slicing” rate correspondingly.

[Figure 2: (a) a transmitted bit stream, 010011100, with the optimum threshold and optimum slicing time marked; (b) a misplaced threshold recovers 000000000; (c) a bad slicing frequency recovers an incorrect bit sequence.]
Figure 2. Simplified bit voltage transmission (PCM).

This simple case, familiar to anyone who has had an introductory course in digital circuit design, reveals several of the important elements in establishing a digital communications system. First, the transmitter and receiver must agree upon the “levels” that are to be transmitted: in this case, what voltage constitutes a transmitted “1” and what voltage constitutes a transmitted “0”. This allows the receiver to select the right threshold for its decision element; incorrect setting of this threshold means that the transmitted data will not be recovered (Figure 2b). Second, the transmitter and receiver must agree on the transmission frequency; if the receiver “slices” at a different rate than the bits are being transmitted, the correct bit sequence will not be recovered (2c). In fact, as we’ll see in a moment, there must be agreement on both frequency and phase of the transmitted signal.

How difficult are these needs to implement? In a simplified world, one could assume that the transmitted signal is fairly “busy”, without long strings of consecutive ones or zeros. The decision threshold could then be set at the “average” value of the incoming bit stream, which should be some value between the transmitted “1” and transmitted “0” (half-way between, if the densities of ones and zeros are equal). For timing, a phase-locked loop could be used—with a center frequency somewhere near the agreed-upon transmit frequency; it would “lock on” to the transmitted signal, thereby giving us an exact frequency to slice at. This process is usually called clock recovery; the format requirements on the transmit signal are related to the performance characteristics of the phase-locked loop. Figure 3 illustrates the elements of this simplified pulse receiver.

[Figure 3: the received signal drives a comparator (the decision element); an averager derives the data threshold, and a phase-locked loop recovers the clock that strobes the comparator. Key elements: decision element, reference recovery, timing recovery.]
Figure 3. Idealized PCM.

Figure 4. Scope waveforms vs. time (L) and eye diagrams (R).

Bandwidth Limitations: The real world is not quite so simple. One of the first important physical limitations to consider is that the transmission channel has finite bandwidth. Sharp-edged square-wave pulses sent from the transmitter will be “rounded off” by a low-bandwidth channel. The severity of this effect is a function of
the channel bandwidth. (Figure 4). In the extreme case, the
transmitted signal never gets to a logical “1” or “0”, and the
transmitted information is essentially lost. Another way of viewing
this problem is to consider the impulse response of the channel.
An infinite bandwidth channel passes an impulse undistorted
(perhaps with just a pure time delay). As the bandwidth starts to
decrease, the impulse response “spreads out”. If we consider the
bit signal to be a stream of impulses, inter-symbol interference
(ISI) starts to appear; the impulses start to interfere with one-
another as the response from one pulse extends into the next pulse.
The voltage seen at the Receive end of the wire is no longer a
simple function of the bit sent by the transmitter at time t1, but is
also dependent on the previous bit (sent at time t0), and the
following bit (sent at time t2).
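The impulse-spreading picture can be made concrete with a minimal sketch. It assumes a hypothetical single-pole (first-order R-C) channel with time constant tau; the numbers are illustrative, not from the text:

```python
import math

# One-pole (R-C) channel: impulse response h(t) ~ exp(-t/tau).
# With bit interval T, the fraction of a pulse's tail still present
# one interval later is exp(-T/tau) -- that residue is what corrupts
# the sample taken at the next bit's slicing instant (ISI).

def tail_fraction(bit_interval: float, tau: float) -> float:
    """Fraction of a pulse remaining one bit interval later."""
    return math.exp(-bit_interval / tau)

T = 1.0
for tau in (0.1, 0.5, 2.0):           # larger tau = lower channel bandwidth
    isi = tail_fraction(T, tau)
    print(f"tau = {tau:3.1f}*T -> {100 * isi:5.1f}% of the previous"
          " bit leaks into this sampling instant")
```

With tau much shorter than the bit interval the leakage is negligible; once tau approaches or exceeds T, the majority of the previous pulse is still present when the next one is sampled.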
Figure 4 illustrates what might be seen with an oscilloscope
connected to the Receive end of the line in the simple noisy
communications system described above for the case where the
bandwidth restriction is a first-order lag (single R-C). Two kinds
of response are shown, a portion of the actual received pulse train
and a plot triggered on each cycle so that the responses are all
overlaid. This latter, known as an “eye” diagram, combines
information about both bandwidth and noise; if the “eye” is open
sufficiently for all traces, 1s can be easily distinguished from 0s. In
the adequate bandwidth case of Figure 4a, one can see
unambiguous 1s, 0s, and sharp transitions from 1 to 0. As the
bandwidth is progressively reduced, (4b, 4c, 4d, 4e), the 1s and 0s
start to collapse towards one another, increasing both timing- and
voltage uncertainty. In reduced-bandwidth and/or excessive-noise
cases, the bits bleed into one another, making it difficult to
distinguish 1s from 0s; the “eye” is said to be closed (4e).
As one would expect, it is much easier to design a circuit to recover
the bits from a signal like 4a than from 4d or 4e. Any misplacement
of the decision element, either in threshold level or timing, will be
disastrous in the bandlimited cases (d, e), while the wideband case
would be fairly tolerant of such errors. As a rule of thumb, to send
a pulse stream at rate Fs, a bandwidth of at least Fs/2 will be needed
to maintain an open eye, and typically wider bandwidths will be
used. This excess bandwidth is defined by the ratio of actual
bandwidth to Fs/2. The bandwidth available is typically limited by
the communication medium being used (whether 2000 ft. of
twisted-pair wire, 10 mi of coaxial cable etc.), but it is also necessary
to ensure that the signal processing circuitry in the transmitter
and receiver do not limit the bandwidth.
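The progressive eye closure of Figure 4 can be reproduced in a toy simulation (this is a hypothetical model, not the article’s measurement setup: a random bit stream through a single-pole R-C channel, sampled once per bit; all numbers are illustrative):

```python
import math, random

def eye_opening(bw_over_half_fs: float, n_bits=2000, spb=16, seed=7) -> float:
    """Worst-case vertical eye opening at the sampling instant for a
    random bit stream through a single-pole (R-C) channel.
    bw_over_half_fs = channel bandwidth / (Fs/2), per the rule of thumb."""
    random.seed(seed)
    bits = [random.randint(0, 1) for _ in range(n_bits)]
    fc = bw_over_half_fs * 0.5 / spb        # cutoff / oversampled rate
    a = math.exp(-2.0 * math.pi * fc)       # one-pole IIR coefficient
    y, sampled = 0.0, []
    for b in bits:
        for _ in range(spb):                # run the channel through one bit
            y = a * y + (1.0 - a) * b
        sampled.append(y)                   # slice at the end of the interval
    ones = [v for v, b in zip(sampled, bits) if b]
    zeros = [v for v, b in zip(sampled, bits) if not b]
    return min(ones) - max(zeros)           # > 0 means the eye is open

for r in (4.0, 1.0, 0.125):
    print(f"BW = {r:5.3f} x Fs/2 -> eye opening {eye_opening(r):+.2f}")
```

With generous excess bandwidth the eye is nearly full-height; at the Fs/2 rule-of-thumb bandwidth it is narrower but still open; well below Fs/2 the worst-case 1s fall below the worst-case 0s and the eye is closed, as in Figure 4e.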
Signal processing circuitry can often be used to help mitigate the
effects of the intersymbol interference introduced by the
bandlimited channel. Figure 5 shows a simplified block diagram
of a bandlimited channel followed by an equalizer, followed by
the bit “slicer”. The goal of the equalizer is to implement a transfer
function that is effectively the inverse of the transmission channel
over a portion of the band to extend the bandwidth. For example,
if the transmission channel is acting as a low pass filter, the equalizer
might implement a high-pass characteristic, such that a signal
passing through the two elements will come out of the equalizer
undistorted over a wider bandwidth.
Though straightforward in principle, this can be very difficult to
implement in practice. To begin with, the transfer function of the
transmission channel is not generally known with any great
precision, nor is it constant from one situation to the next. (You and your neighbor down the street have different-length phone wires running back to the phone company central office, and will therefore have slightly different bandwidths.) This means that these equalizers usually must be tunable or adaptive in some way. Moreover, considering Figure 5 further, we see that a passive equalizer may flatten out the frequency response, but will also attenuate the signal. The signal can be re-amplified, but with a probable deterioration in signal-to-noise ratio. The ramifications of that approach will be considered in the next section. While they are not an easy cure-all, equalizers are an important part of many communications systems, particularly those seeking the maximum possible bit rate over a bandwidth-constrained channel. There are extremely sophisticated equalization schemes in use today, including decision-feedback equalizers which, as their name suggests, use feedback from the output of the decision element to the equalization block in an attempt to eliminate trailing-edge intersymbol interference.¹

¹The field of disk-drive read-channel design is a hotbed of equalizer development in the ongoing struggle to improve access specs.
[Figure 5: transmitter → bandlimited channel → (receiver) equalizer → decision element. Frequency-response plots show the channel’s low-pass rolloff, the equalizer’s complementary high-pass response, and the flat composite response.]
Figure 5. Channel equalization.
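For a discrete-time caricature of Figure 5 (a hypothetical single-pole “low-pass” channel and its exact one-tap inverse; the coefficient and signal levels are illustrative only), the equalizer that inverts the channel’s transfer function is just:

```python
# Channel:   y[n] = A*y[n-1] + (1-A)*x[n]      (low-pass)
# Equalizer: x[n] = (y[n] - A*y[n-1]) / (1-A)  (its exact inverse)

A = 0.8                                   # channel pole: lower BW as A -> 1

def channel(samples):
    """Single-pole low-pass channel model."""
    y, out = 0.0, []
    for x in samples:
        y = A * y + (1.0 - A) * x
        out.append(y)
    return out

def equalizer(samples):
    """One-tap inverse of the channel above."""
    out, prev = [], 0.0
    for y in samples:
        out.append((y - A * prev) / (1.0 - A))
        prev = y
    return out

tx = [0, 1, 0, 0, 1, 1, 0, 1]
rx = equalizer(channel(tx))               # recovers the transmitted levels
assert all(abs(r - x) < 1e-9 for r, x in zip(rx, tx))
```

Note the 1/(1 − A) term: as the channel bandwidth shrinks (A → 1), the equalizer’s gain—and its amplification of any noise added after the channel—grows without bound, which is exactly the SNR penalty discussed above.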
Multi-level symbols—sending more than one bit at a time:
Since the bandwidth limit sets an upper bound on the number of
pulses per second that can be effectively transmitted down the
line, one could decide to get more data down the channel by
transmitting two bits at a time. Instead of transmitting a “0” or
“1” in a binary system, one might transmit and receive 4 distinct
states, corresponding to a “0” (00), “1” (01), “2” (10), or “3”
(11). The transmitter could be a simple 2-bit DAC, and the receiver
could be a 2-bit ADC. (Figure 6). In this kind of modulation,
called pulse-amplitude modulation (PAM), additional information
has been encoded in the amplitude of the bit stream.
Communication is no longer one bit at a time; multiple-bit words,
or symbols , are being sent with each transmission event. It is then
necessary to distinguish between the system’s bit rate, or number
of bits transmitted per second, and its symbol rate, or baud rate,
which is the number of symbols transmitted per second. These two
rates are simply related:
bit rate = symbol rate (baud) × bits/symbol
The bandwidth limitations and intersymbol interference discussed
in the last section put a limit on the realizable symbol rate, since
they limit how closely spaced the “transmission events” can be in
time. However, by sending multiple bits per symbol, one can
increase the effective bit rate, employing a higher-order modulation
scheme. The transmitter and receiver become significantly more
complicated. The simple switch at the transmitter has now been
replaced with a DAC, and the single comparator in the receiver is
[Figure 6: n-bit symbols drive an n-bit DAC; the channel contributes attenuation and additive noise; a variable-gain amplifier (with gain control) feeds a clocked n-bit ADC.]
Figure 6. Simplified PAM transmitter/receiver.
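A minimal sketch of the 2-bit PAM scheme of Figure 6 follows. The level spacing, bit pattern, and function names are illustrative assumptions, not from the text; the transmitter is modeled as a 2-bit DAC and the receiver as a 2-bit ADC (nearest-level decision):

```python
BITS_PER_SYMBOL = 2
LEVELS = [0.0, 1.0, 2.0, 3.0]            # DAC outputs for symbols 0..3

def encode(bitstream):
    """Pack bit pairs (MSB first) into 2-bit symbols."""
    return [bitstream[i] * 2 + bitstream[i + 1]
            for i in range(0, len(bitstream), BITS_PER_SYMBOL)]

def slice_pam(voltage):
    """2-bit ADC: decide the nearest of the four agreed-upon levels."""
    return min(range(4), key=lambda s: abs(LEVELS[s] - voltage))

tx_bits = [0, 1, 1, 0, 1, 1, 0, 0]
symbols = encode(tx_bits)                # [1, 2, 3, 0]
received = [LEVELS[s] + 0.2 for s in symbols]   # mild channel offset
assert [slice_pam(v) for v in received] == symbols

# The rate relation from the text: bit rate = baud * bits/symbol.
baud = 1_000_000                         # 1 Msymbol/s
bit_rate = baud * BITS_PER_SYMBOL        # 2 Mb/s over the same symbol rate
```

With the same symbol (baud) rate as binary PCM, the 4-level scheme doubles the bit rate—but the decision spacing shrinks from the full swing to one-third of it, foreshadowing the tighter SNR requirements of higher-order modulation.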