Digitalization of mobile communication
Digital Signals
TDMA Systems
Noise-Free Transmission
Signal Analysis
Digital Filters
Digital Signal Processors
Speech Coding
Protection against Interception
Digital signal processing revolutionized mobile communications. For this reason, the basic principles of digital signal processing and its advantages are discussed in more detail below.
Digitalization of mobile communication
The introduction of cellular mobile telephony increased demand. The new technologies made it possible to support hundreds of thousands of subscribers in a radio network. But there were still weak points:
- Low capacity
- Poor transmission quality at cell boundaries with possible call drops
- No protection against eavesdropping
As before, attempts were made to address the demand for capacity by making more channels available. But the number of radio channels is limited, so the existing channels had to be used more efficiently. Would it be possible to enable more calls per MHz of bandwidth? With analogue technology this was ultimately not possible. With digital signal processing, however, it became possible to compress speech signals and thus fit more digital channels into the same bandwidth.
Digital Signals
With the invention of the telephone, something fundamentally new happened in communication. A sound wave was converted into an “electrical signal” and transmitted. The medium of transmission changed: the information was no longer carried by sound, but by an electric field.
In the 1960s, a fundamentally new change happened again. An electrical signal could be converted into a “digital signal” and vice versa. The signal was converted into a new medium: not a real physical medium, but a mathematical one consisting only of numbers. It was not only transformed, but could also be “frozen” in storage (see PCM). Transmitting numbers has significant advantages, as described below.
The representation of an electrical signal as a sequence of numbers enabled a fundamentally new way to modify a signal. The signal passes from a physical state into a digital state that is stored in memory and can be manipulated through arithmetic operations.
TDMA Systems
As already described, it is possible to record speech at a sampling rate of 8 kHz with an accuracy of 8 bits and convert it back into an analog signal with little loss of quality. Speech therefore corresponds to a “bit stream” of 64 kbit/s.
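As a quick check, the 64 kbit/s figure follows directly from these sampling parameters; a minimal sketch:

```python
sampling_rate = 8000     # samples per second (8 kHz)
bits_per_sample = 8      # quantization accuracy from the text

bit_rate = sampling_rate * bits_per_sample
print(bit_rate, "bit/s")   # 64000 bit/s, i.e. 64 kbit/s
```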
With telephone networks, it has always been a challenge to carry as many telephone calls as possible over one line. This was especially important for long-distance calls, where it was not cost-effective to provide one line per call participant. Could a line somehow be shared? In the 1950s, Bell Laboratories therefore developed methods to transmit digital signals (i.e. sequences of ones and zeros) at high speed. Speeds of 2 Mbit/s were achieved; incredibly slow by today’s standards, but remarkable given the technology of the time. The so-called T1 channel was standardized, over which data could be transmitted across long distances at 1.544 Mbit/s. This meant that 24 speech signals of 64 kbit/s each could be carried. Each speech signal was assigned a so-called time slot. Such a process is called Time Division Multiple Access (TDMA).
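The 1.544 Mbit/s figure can be reconstructed from the standard T1 frame structure, in which 24 channels of 8 bits each, plus one framing bit, are sent 8000 times per second; a minimal sketch:

```python
channels = 24             # speech channels (time slots) per T1 frame
bits_per_channel = 8      # one 8-bit sample per channel
framing_bits = 1          # one framing bit per frame
frames_per_second = 8000  # one frame per 8 kHz sample period

frame_bits = channels * bits_per_channel + framing_bits  # 193 bits per frame
t1_rate = frame_bits * frames_per_second
print(t1_rate, "bit/s")   # 1544000 bit/s = 1.544 Mbit/s
```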
Before the T1 system was introduced in 1962, there was an FDMA (Frequency Division Multiple Access) system. As in radio broadcasting, each speech stream was modulated onto a speech channel at a specific frequency. However, the wires used did not allow high frequencies, so FDMA could only transmit around 12 voice streams at the same time. The digital TDMA process was therefore twice as efficient as the analog FDMA system.
A digital system allows signals to be transmitted more efficiently.
Noise-Free Transmission
Digital transmission has another advantage. If ones and zeros are transmitted, exactly ones and zeros arrive (provided there are no transmission errors). If analog speech is transmitted, speech plus noise is received, the noise being caused by thermal processes during transmission. Over long distances, a signal weakens and must be amplified. In analog systems, however, not only the signal but also the noise is amplified, so noise is added with every amplification stage. This is not the case with digital transmission. Although a noisy signal also arrives at a digital receiver, the noise can easily be removed electronically: as long as each received level can still be recognized as a one or a zero, the bit stream is regenerated perfectly.
The absence of accumulated noise is a major advantage of digital transmission and storage.
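A minimal sketch of this regeneration, with an assumed noise level small enough that no level crosses the decision threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=16)          # the transmitted ones and zeros

# Channel: thermal noise is added to the signal levels along the way
received = bits + rng.normal(0.0, 0.1, size=bits.size)

# Regeneration: a simple threshold decision removes the noise entirely,
# as long as the noise never pushes a level across the threshold
regenerated = (received > 0.5).astype(int)
print(np.array_equal(regenerated, bits))    # True: a perfect copy
```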
Signal Analysis
If you have a block of a digital signal, you can analyze it “at your leisure”, like a column of numbers in an Excel sheet. For example, you can determine the power by squaring the values and adding them together. You can determine the highest amplitude in the signal. With certain formulas, which we will discuss later, it can also be determined whether particularly high or low frequencies are present in the signal; you can determine the so-called spectrum.
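A minimal sketch of such an analysis on an assumed 440 Hz test tone; the FFT used for the spectrum is one of the formulas discussed later:

```python
import numpy as np

fs = 8000                               # assumed sampling rate (Hz)
t = np.arange(fs) / fs                  # one second of time stamps
x = np.sin(2 * np.pi * 440 * t)         # a 440 Hz test tone as the signal block

power = np.sum(x ** 2) / len(x)         # square the values, add, and average
peak = np.max(np.abs(x))                # highest amplitude in the block
spectrum = np.abs(np.fft.rfft(x))       # frequency content of the block
strongest = np.argmax(spectrum) * fs / len(x)   # dominant frequency, ~440 Hz
```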
Digital Filters
It is possible to manipulate or edit digital signals. For example, a signal curve can be smoothed. We take 10 values from the signal sequence and form their average: we add all the values and divide by 10. To avoid the division, we can instead multiply each value by 0.1 before adding them. The calculation then looks like this:
y(n) = 0.1*x(n) + 0.1*x(n-1) + 0.1*x(n-2) + 0.1*x(n-3) + … + 0.1*x(n-9)
y(n) is the generated sample at position n (or at time n) and x(n) is the original sample at time n; x(n-1) is the previous sample, and so on.
Many engineers researched this type of signal processing in the 1960s. Averaging as shown above is called low-pass filtering, because the averaging process cuts out high frequencies (see illustration). However, multiplying by 0.1 is just one way of weighting the values. More generally, you can use 10 possibly different factors, a0 to a9; different “filters” correspond to different “vectors” of factors. In general, the “filter calculation” looks like this:
y(n) = a0*x(n) + a1*x(n-1) + a2*x(n-2) + a3*x(n-3) + … + a9*x(n-9)
A calculation like the above summation is called a “digital filter”. Manipulating digital signals in this way is called digital signal processing.
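As a minimal sketch, the filter formula can be written directly as a short function; the signal values below are arbitrary example numbers:

```python
def fir_filter(x, a):
    """Digital filter as in the formula above:
    y(n) = a[0]*x(n) + a[1]*x(n-1) + ...
    Samples before the start of the signal are treated as zero."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(a)):
            if n - k >= 0:
                acc += a[k] * x[n - k]
        y.append(acc)
    return y

# The moving average from the text: ten equal factors of 0.1
smoothed = fir_filter([1, 5, 3, 8, 2, 9, 4, 7, 6, 0, 5, 3], [0.1] * 10)
```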

It turned out that signals could be processed much better digitally than by analogue means. For this reason, digital signal processing has largely replaced analog electronics over time, at least at lower frequencies.
However, digital signal processing has a catch: it has to be fast. Take a sampling frequency of 8 kHz. You have only 125 microseconds (1/8000 s) to perform the 10 multiplications and additions and to convert y(n) into an analog value. In this case we speak of real time. Multiplication is critical here because it requires many computing cycles on a normal microprocessor. Normal processors were therefore not suitable for digital signal processing, at least in the 1970s.
Digital Signal Processors
First of all, digital signal processing needs a multiplier. It must be hard-wired to allow short execution times. In the best case, two values (e.g. 16 bits wide) are multiplied together in one processor cycle; the result is then 32 bits wide. This result is added to a register, the accumulator: one “multiply-accumulate” per cycle. Exactly one term of a digital filter can thus be calculated and summed per cycle.
In parallel with the arithmetic operation, the next values must be loaded into the input registers of the multiplier. This requires separate data lines and address lines. Such an architecture is called a Harvard architecture. It differs from the “von Neumann architecture” of a normal processor, where data and programs are stored in the same memory. Processors that carry out the operations described above efficiently are called digital signal processors.
The first digital signal processors came onto the market in the 1980s. The most powerful one came from Texas Instruments and was called the TMS32010. It could perform a multiply-accumulate in just 400 ns, i.e. 2.5 million multiplications and additions per second. With a sampling rate of 8 kHz, over 300 multiply-accumulate operations could thus be performed per sample. This made digital signal processing in real time possible for the first time.
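The cycle budget follows directly from these numbers:

```python
mac_time = 400e-9            # one multiply-accumulate on the TMS32010: 400 ns
sample_period = 1 / 8000     # 125 microseconds at an 8 kHz sampling rate

macs_per_second = 1 / mac_time               # 2.5 million per second
macs_per_sample = sample_period / mac_time   # 312.5 per sample period
# Over 300 multiply-accumulates fit between two samples, so the
# 10-tap filter from above uses only a small fraction of the budget.
```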

Speech Coding
The 1980s saw numerous developments in the field of digital signal processing, and there was great interest in the processing of speech signals. Since the telecommunications industry was transmitting speech signals, speech processing was naturally an important research topic. Bell Laboratories in particular worked on speech. There were three targets:
- Speech Recognition
- Speech Synthesis
- Speech Compression (Coding)
The goal of speech recognition and speech synthesis was man-machine communication. One application was to replace the “human operator” in the telephone network. For example, a computer recognized “key words” that were spoken by the user after pressing a key on the telephone. Such a key word could be, for example, “collect call”; the computer that recognized the word would initiate the related service. In fact, speech recognition saved AT&T millions of dollars in the 1980s.
Speech coding was important for the (long-distance) transmission of speech: the more a speech signal could be compressed, the more speech signals could be transmitted over one line.
To understand how speech can be compressed, it is important to understand how speech is generated. A source-filter principle applies to the generation of speech signals. The source is either the periodic vibration of the vocal folds in the larynx or a noise source somewhere in the throat or mouth area. The pharynx and oral cavity, the vocal tract, form a variable resonance space for the sound waves generated by the source. In this resonance space, certain frequencies are amplified and others suppressed. The zones of amplification are called formants. In the range from 100 Hz to 3.5 kHz there are typically three such formants. The positions of the formants determine which vowels are heard; the formants therefore carry the essential information of speech.
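A minimal sketch of the source-filter principle: a periodic impulse train (the source) is passed through a two-pole resonator that amplifies frequencies near one formant. The pitch, formant frequency and bandwidth below are assumed example values:

```python
import numpy as np

fs = 8000        # sampling rate (Hz)
f0 = 120         # assumed pitch of the vocal-fold source (Hz)
formant = 700    # one hypothetical formant frequency (Hz)
bw = 100         # assumed formant bandwidth (Hz)

# Source: a periodic impulse train as a crude stand-in for vocal-fold pulses
n = np.arange(fs)
source = (n % (fs // f0) == 0).astype(float)

# Filter: a two-pole resonator that amplifies frequencies near the formant
r = np.exp(-np.pi * bw / fs)             # pole radius from the bandwidth
theta = 2 * np.pi * formant / fs         # pole angle from the frequency
a1, a2 = -2 * r * np.cos(theta), r * r

y = np.zeros(fs)
for i in range(fs):                      # y(n) = x(n) - a1*y(n-1) - a2*y(n-2)
    y[i] = (source[i]
            - a1 * (y[i - 1] if i >= 1 else 0.0)
            - a2 * (y[i - 2] if i >= 2 else 0.0))
```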

Technically, each formant can be represented as a special filter. Research at Bell Laboratories had developed methods for estimating these filters from the speech signal. These filters are simple and can be described with a few parameters. Now, for every filter there is an inverse filter with which the filtering can be “undone”. So if a speech signal is inversely filtered, a signal can be extracted that corresponds to the input signal. If this signal is generated by the vocal folds, i.e. is voiced, its periodicity can be determined.
It can be expected that such a periodic signal has a similar shape in each pulse. So only one pulse has to be encoded, and for the following pulses only the small changes relative to this pulse. If these “periodic” effects are also filtered out, a signal remains that is “the rest”: the residual signal. This residual signal resembles noise; it has a flat spectrum and is easy to encode.
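A minimal sketch of the inverse filtering step, assuming the predictor coefficients a have already been estimated (how they are estimated is the subject of the next paragraph); the function name is hypothetical:

```python
import numpy as np

def inverse_filter(x, a):
    """Remove the predictable part of x using predictor coefficients a:
    e(n) = x(n) - sum_k a[k] * x(n-1-k).
    What remains, e, is the residual signal."""
    e = np.zeros(len(x))
    for n in range(len(x)):
        prediction = sum(a[k] * x[n - 1 - k]
                         for k in range(len(a)) if n - 1 - k >= 0)
        e[n] = x[n] - prediction
    return e
```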

The filter estimation process is called Linear Predictive Coding (LPC). It was developed in the seventies and refined in the eighties. The filters, the periodicity of the vocal folds and the residual signal can be coded separately for each speech block. As can be shown, significantly fewer bits are needed for this than if the speech signal were quantized directly. One variant for encoding the residual signal is to use a codebook in which various noise signals are stored. During encoding, the different codebook signals are tried out, and the entry that produces the smallest audible difference from the original signal is selected. Only the number of the codebook entry is used for coding.
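A minimal sketch of such a codebook search; the names and the plain squared-error measure are illustrative assumptions, and `synthesize` stands for the LPC synthesis filter:

```python
import numpy as np

def search_codebook(target, codebook, synthesize):
    """Try every stored excitation, run it through the synthesis filter
    and keep the entry with the smallest error. Real coders use a
    perceptually weighted error measure rather than the plain squared
    error shown here."""
    best_index, best_error = 0, np.inf
    for i, excitation in enumerate(codebook):
        error = np.sum((target - synthesize(excitation)) ** 2)
        if error < best_error:
            best_index, best_error = i, error
    return best_index   # only this index has to be transmitted
```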
This process was developed in 1986 by Bishnu Atal and Manfred Schroeder at Bell Laboratories and forms the basis of speech coding to this day. Such coders are called Code-Excited Linear Prediction (CELP) codecs. Speech research didn’t just take place at Bell Laboratories: Manfred Schroeder was also head of the Third Physical Institute in Göttingen, where intensive research on speech recognition, speech synthesis and speech coding was carried out in the 1980s. Another area of research in Göttingen was psychoacoustics, which was also important for the coding of speech and music signals.


By the early 1990s, the available signal processors were powerful enough to compute speech codecs in real time. So nothing stood in the way of their application in the emerging digital mobile communications.
Protection against Interception
Another good reason for digital transmission was the security and privacy of telephone calls. In all analog systems, speech was transmitted FM-modulated and completely unprotected. The C-Netz system offered a so-called “concealment” of the analog signal by spectrally inverting it, but this too could easily be reversed.
With AMPS, TACS and NMT, a skilled amateur radio operator could simply tune in to one of the radio channels and listen in on conversations. That is apparently what happened. A very prominent example is a telephone conversation recorded in 1989: a very intimate phone call between Prince Charles and his lover Camilla Parker Bowles, for which Prince Charles used a TACS telephone. The recording was kept secret for a long time but was published in 1992 and led to a scandal that rocked the British royal family.
Unlike analog signals, a digital signal is very easy to encrypt. All that is needed is a secret digital key of a certain length, consisting of a sequence of ones and zeros. For encryption, the digital signal is XORed with this sequence before it is transmitted. At the receiving end, the same XOR is applied again to recover the original sequence. As long as the key is not known, decryption is practically impossible.
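A minimal sketch of this XOR principle on a few example bits:

```python
def xor_crypt(bits, key):
    """XOR every data bit with the corresponding key bit.
    Applying the same key twice restores the original: (b ^ k) ^ k == b."""
    return [b ^ k for b, k in zip(bits, key)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
key  = [0, 1, 1, 0, 1, 0, 0, 1]

cipher = xor_crypt(data, key)            # what is transmitted
print(xor_crypt(cipher, key) == data)    # True: the receiver recovers the data
```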