First thing: the Nyquist rate is not sufficient to obtain a correct sampling of a signal; it is just the theoretical minimum. Reasonable sampling rates go from twice the Nyquist rate (four to five times the maximum frequency) up. Several ADC architectures use oversampling with averaging to obtain higher precision than the converter itself achieves; a short numeric sketch of this averaging gain appears at the end of this post. The extreme case is that of sigma-delta converters, where a 1-bit ADC (just a comparator) is run at very high speed ($2^N$ samples per output value, where $N$ is the resolution in bits) to achieve the highest linearity, because a 1-bit conversion is linear by definition. The drawback of oversampling is of course the higher speed required of the ADC and the processing unit (higher complexity and cost), but there may be other issues as well. One possible drawback is additional noise: at a lower sampling speed you could, for instance, integrate each sample over a longer time. You can also see that, at a given ADC speed, oversampling requires more time per output value, and hence a slower overall output rate.

Use of oversampling will tend to shift some of the filtering requirements from the analog domain to the digital domain. If one's goal is to have an 8kHz output sample rate with a passband that extends to 3,500Hz, then without oversampling one will need an analog filter that drops like a rock between 3,500Hz and somewhere between 4,000Hz and 4,500Hz. If one uses 2x oversampling, and has digital software that can filter out everything above 3,999Hz, then the analog filter can have its transition range extend from 3,500Hz to almost 12,000Hz, a much easier job. Using 4x oversampling, one could extend the analog filter's transition range up to 20,000Hz; designing a filter with a 2.5-octave transition region is a lot easier than designing one with a 1/3-octave transition region (generously assuming the stop band need not start until almost 4,500Hz).

The disadvantage of oversampling is that if one simplifies the analog filter design, the digital filter must remove whatever unwanted signals the analog filtering left in. Doubling the oversampling rate will require that the digital circuitry process twice as many samples per second, and it may in some cases have to do more with each sample. Up to a certain point, accepting a small increase in the cost of the digital filtering buys a bigger savings in the cost of the analog filtering; beyond that point, further increases in the oversampling ratio impose increasing costs on the digital side while achieving smaller and smaller savings on the analog side.
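To put rough numbers on that trade-off, here is a minimal sketch using SciPy's Butterworth order estimator; the ripple and attenuation targets (1dB and 60dB) are my own illustrative choices, not figures from the discussion above.

```python
# Estimate the analog anti-alias filter order needed in each case above.
# The spec targets (1 dB passband ripple, 60 dB stopband attenuation)
# are illustrative assumptions.
import numpy as np
from scipy import signal

fpass = 3500.0            # passband edge, Hz
gpass, gstop = 1.0, 60.0  # dB ripple allowed / dB attenuation required

for label, fstop in [("no oversampling", 4000.0),
                     ("2x oversampling", 12000.0),
                     ("4x oversampling", 20000.0)]:
    # buttord expects analog edge frequencies in rad/s when analog=True
    order, _ = signal.buttord(2 * np.pi * fpass, 2 * np.pi * fstop,
                              gpass, gstop, analog=True)
    print(f"{label:>16s}: stopband at {fstop:6.0f} Hz -> order {order}")
```

Under these assumptions the no-oversampling case needs an order in the dozens of poles, while the 2x and 4x cases get by with a handful; that is the "drops like a rock" problem in numbers.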
Let's say you want to transmit a sequence of numbers $\lbrace a_k \rbrace$ by modulating them onto pulses $p(t)$, sending one number every $T$ seconds. The resulting signal occupies a bandwidth $B = \alpha/(2T)$, where $\alpha \geq 1$ depends on the pulse shape. The value of $\alpha$ is the "spectral efficiency" of the pulse $p(t)$. The minimum possible value is $\alpha=1$, when $p(t)$ is a sinc pulse; in that case one transmits $1/T = 2B$ numbers per second. This is what it means to say that "$2B$ pieces of information per second can be transmitted in $B$ Hz of bandwidth." If this were all there was to it, then an infinite amount of information could be transmitted in any amount of time: all one has to do is let the $a_k$ be arbitrary real numbers, with infinite precision. Even if one is constrained to choose from a constellation, such as $M$-QAM, one could choose $M$ to be as large as needed.

In practice, however, noise, distortion, and other imperfections prevent transmission of the $a_k$ with infinite precision. The simplest model of a communications system assumes that the received signal $r(t)$ is the transmitted signal $s(t)$ corrupted by additive white Gaussian noise $n(t)$ with power spectral density $N_0/2$:

$$r(t) = s(t) + n(t).$$

Transmission under the effects of noise restricts the choice of symbols $a_k$: they must belong to a finite set whose elements cannot be easily mistaken for one another. For example, if we constrain the symbols to take values either $-0.5$ or $0.5$, and the noise magnitude is very unlikely to be larger than $0.5$, then transmission can be done very reliably. However, we pay a price in two different ways:

- Power: since the symbol amplitudes cannot be made too small, a minimum amount of power is required.
- Rate: since we're limited in the choice of symbol amplitudes, we can't transmit as fast as we wish.

In terms of an $M$-QAM constellation, the consequence is that we can't increase $M$ arbitrarily: when $M$ is made too large, the system cannot transmit information reliably any more. This is the trade-off Shannon studied: he was interested in the relationship between the entropy of the information source that produces the transmitted sequences $\lbrace a_k \rbrace$, the bandwidth, and the signal-to-noise ratio in the received signal. He definitely did not say that the maximum bit rate is twice the bandwidth; rather, the maximum bit rate is given by the bandwidth, the SNR, and some specific properties of the sequence $\lbrace a_k \rbrace$.
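Here is a minimal Monte Carlo sketch of the $\pm 0.5$ example, plus Shannon's ceiling for comparison; the noise levels, bandwidth, and SNR figures are illustrative assumptions, not values from the discussion above.

```python
# Monte Carlo for the +/-0.5 binary example: detect each symbol by sign.
import numpy as np

rng = np.random.default_rng(0)
n_symbols = 100_000
symbols = rng.choice([-0.5, 0.5], size=n_symbols)

for sigma in (0.1, 0.25, 0.5):       # noise standard deviations (assumed)
    received = symbols + rng.normal(0.0, sigma, size=n_symbols)
    decisions = np.where(received >= 0.0, 0.5, -0.5)
    print(f"sigma = {sigma:4.2f}: symbol error rate ~ "
          f"{np.mean(decisions != symbols):.2e}")

# Shannon's point: the ceiling depends on bandwidth and SNR, not on M alone.
B, snr_db = 4000.0, 20.0             # illustrative figures
capacity = B * np.log2(1 + 10 ** (snr_db / 10))
print(f"AWGN capacity at B = {B:.0f} Hz, SNR = {snr_db:.0f} dB: "
      f"{capacity:.0f} bit/s")
```

When the noise sigma is well below the symbol spacing, errors are vanishingly rare; as it approaches the spacing, the error rate becomes substantial, which is exactly why $M$ cannot grow without bound at fixed power.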
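Finally, returning to the oversample-and-average idea from the ADC discussion at the top of the post, here is a small sketch of a coarse quantizer gaining effective resolution from averaging; the quantizer step, dither level, and oversampling factors are all assumptions made for illustration.

```python
# Oversampling + averaging: each 4x increase in the number of averaged
# samples buys roughly one extra bit (2x lower RMS error), provided the
# input carries enough noise ("dither") to exercise the quantizer steps.
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.3137      # DC level to digitize (assumed)
step = 0.1               # coarse quantizer step (assumed)

def quantize(x):
    return np.round(x / step) * step

for oversample in (1, 4, 16, 64):
    # 10,000 independent trials, each averaging `oversample` noisy readings
    samples = true_value + rng.normal(0.0, step / 2,
                                      size=(10_000, oversample))
    estimates = quantize(samples).mean(axis=1)
    rms = np.sqrt(np.mean((estimates - true_value) ** 2))
    print(f"{oversample:3d}x oversampling: RMS error ~ {rms:.4f}")
```

This is the mechanism that sigma-delta converters push to the extreme: a very coarse (1-bit) quantizer run very fast, with the precision recovered by digital filtering.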