Voice and audio signals are analog, whereas data networks are digital. The conversion of an analog signal into a digital one is performed by an Analog-to-Digital Converter (ADC).
This analog-to-digital conversion process, known as Pulse Code Modulation (PCM), is done in three steps:
- Sampling
- Quantization
- Codification
During the quantization step, compression of the voice signal can be applied, as will be explained in this chapter.
Sampling is the process of encoding an analog signal in digital form by reading (sampling) its level at precisely spaced intervals of time. The values obtained are called samples.
This process is shown in the following images:
Sampling usually happens at equally spaced intervals; this interval is called the sampling interval. The reciprocal of the sampling interval is called the sampling frequency or sampling rate. The unit of the sampling rate is Hz.
The condition that the sampling frequency must satisfy is given by the sampling theorem: “A band-limited signal with no frequency components above a certain cut-off frequency is uniquely determined by its discrete values at equally spaced points, provided these samples are taken at a sampling rate equal to or greater than twice the cut-off frequency.”
In agreement with the sampling theorem, telephone audio signals (with frequencies between 300 Hz and 3400 Hz) must be sampled at a frequency equal to or greater than 6800 Hz (2 × 3400).
In practice, the sampling frequency or sampling rate used is 8000 Hz. So 8000 samples per second are taken, corresponding to equally spaced intervals of:
T = 1/8000 = 0.000125 s = 125 µs
Therefore, two consecutive samples of the same signal are separated by 125 µs, which is the sampling interval.
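As a quick check, the Nyquist rate and the sampling interval given above can be computed in a few lines of Python (a simple illustration of the arithmetic, nothing more):

```python
# Sampling parameters for telephone-band speech (values from the text).
f_max = 3400               # highest frequency component, in Hz
nyquist_rate = 2 * f_max   # minimum sampling rate per the sampling theorem

fs = 8000                  # sampling rate actually used in telephony, in Hz
T = 1 / fs                 # sampling interval, in seconds

print(nyquist_rate)        # 6800 Hz
print(T)                   # 0.000125 s = 125 µs
```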
Quantization is the process of converting the amplitude of the obtained samples into a finite number of discrete values. There are several quantization methods, which we will explain in order of increasing complexity.
It is necessary to use a finite number of discrete values to represent the amplitude of the samples approximately. The whole amplitude range that the samples can take is divided into a number of equal intervals. All samples whose amplitude falls within a given interval take the same value.
The quantization process necessarily introduces an error, since the real amplitude of the sample is replaced by an approximate value. This error is called quantization noise.
Quantization noise can be reduced by increasing the number of quantization intervals, but practical limitations prevent the number of intervals from exceeding a certain value.
A quantization of this type, in which all the intervals have the same width, is called uniform quantization.
The following image shows the effect of quantizing an analog signal. The number of quantization intervals is eight.
- The original signal is the continuous line.
- The samples, reconstructed at the remote terminal, are represented by dots.
- The reconstructed signal is the dashed line.
- The quantization noise of each sample produces a deformation or distortion in the reconstructed signal, shown here by the dash-dot line.
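A uniform quantizer with eight intervals, like the one in the image, can be sketched in Python (a minimal illustration; the test signal, amplitude range, and level count are assumed, and each sample is mapped to the midpoint of its interval):

```python
import numpy as np

def uniform_quantize(x, n_levels=8, x_min=-1.0, x_max=1.0):
    """Map each sample to the midpoint of the interval it falls in."""
    step = (x_max - x_min) / n_levels
    # Index of the interval each sample falls into, clipped to a valid range.
    idx = np.clip(np.floor((x - x_min) / step), 0, n_levels - 1)
    return x_min + (idx + 0.5) * step

t = np.arange(0, 1, 1 / 8000)              # one second sampled at 8000 Hz
x = 0.9 * np.sin(2 * np.pi * 440 * t)      # a 440 Hz test tone
xq = uniform_quantize(x, n_levels=8)
noise = x - xq                             # the quantization noise

# The error never exceeds half an interval width (step/2 = 0.125 here).
print(np.abs(noise).max() <= 0.125 + 1e-9)   # True
```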
Non-uniform quantization
In uniform quantization the distortion or noise does not depend on the sample amplitude. Therefore, the lower the amplitude, the greater the relative influence of the quantization error or noise. The situation is critical for signals whose analog amplitude is close to the width of a single quantization interval.
There are two solutions to this problem:
- To increase the number of quantization intervals: with more intervals the error or noise is smaller, but more bits are needed to quantize each sample, and therefore more bandwidth is needed to transmit it.
- To use non-uniform quantization: a finite number of intervals is still used, but they do not all have the same width, so the quantization is not uniform. The intervals at low levels are narrower than the intervals at high levels. This way, weak signals effectively get a larger number of quantization levels, reducing the distortion or noise. Strong signals, on the other hand, suffer more distortion or noise than they would with uniform quantization, but it is still good enough.
Therefore, what we can do is apply non-uniform quantization by means of a compander (compressor-expander) followed by uniform quantization, as you can see in the following image:
The non-uniform quantization process follows a certain characteristic called an encoding law.
There are two types of encoding laws: continuous and segmented.
In continuous encoding laws, the quantization intervals have different widths, growing from small values, corresponding to low-level signals, to larger values, corresponding to high-level signals.
In segmented encoding laws, the operating range is divided into a finite number of groups. Every interval within the same group has the same width, which differs from that of the other groups.
Normally, the encoding laws used are segmented.
G.711 A law (a-law) and µ law (u-law) encoding schemes
The two main encoding laws used nowadays are the A law (a-law) and the µ law (u-law), together also known as the G.711 codec. The A law is used mainly in European PCM systems, and the µ law is used in American PCM systems.
The A law is formed by 13 straight-line segments (in fact there are 16 segments, but the three central segments are aligned, so they reduce to 13).
The mathematical formulation of the A law is:

y = A·x / (1 + ln A)              for 0 ≤ x ≤ 1/A
y = (1 + ln(A·x)) / (1 + ln A)    for 1/A ≤ x ≤ 1

where ln is the natural (Napierian) logarithm. The parameter A takes the value 87.6; x and y represent the input and output signals of the compressor.
The mathematical formulation of the µ law is:

y = ln(1 + µ·x) / ln(1 + µ)       for 0 ≤ x ≤ 1

where µ = 255.
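Both compression laws can be sketched directly from the formulas above (a minimal illustration; inputs are assumed normalized to [-1, 1], and negative inputs are handled by the usual symmetric sign extension):

```python
import math

def a_law_compress(x, A=87.6):
    """A-law compressor; |x| normalized to [0, 1], sign handled separately."""
    sgn = 1 if x >= 0 else -1
    x = abs(x)
    if x < 1 / A:
        y = A * x / (1 + math.log(A))
    else:
        y = (1 + math.log(A * x)) / (1 + math.log(A))
    return sgn * y

def mu_law_compress(x, mu=255):
    """µ-law compressor; |x| normalized to [0, 1], sign handled separately."""
    sgn = 1 if x >= 0 else -1
    return sgn * math.log(1 + mu * abs(x)) / math.log(1 + mu)

# Both laws map a full-scale input to a full-scale output.
print(a_law_compress(1.0))    # 1.0
print(mu_law_compress(1.0))   # 1.0
```

Note how a small input is boosted much more than a large one, which is exactly the companding effect described above.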
The following image represents the A law (a-law) graphically:
Differential quantization (Differential PCM)
In voice audio signals, low-frequency components are generally dominant. For that reason, the levels of two consecutive samples generally differ by a very small amount. Differential quantization takes advantage of this circumstance.
In differential quantization, instead of treating each sample separately, the difference between a sample and the previous one is quantized and encoded. Since the number of quantization intervals needed to quantize the difference between two consecutive samples is smaller than the number needed to quantize an isolated sample, differential quantization reduces the transmission bit rate, because fewer bits per sample are needed.
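The idea can be sketched as a minimal DPCM-style encoder and decoder (illustrative only, with an assumed fixed difference step; real differential codecs are considerably more elaborate):

```python
def dpcm_encode(samples, step=0.05):
    """Quantize and encode the difference between consecutive samples."""
    prediction = 0.0
    codes = []
    for s in samples:
        diff = s - prediction
        code = round(diff / step)    # quantize the difference
        codes.append(code)
        prediction += code * step    # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.05):
    """Rebuild the samples by accumulating the decoded differences."""
    prediction = 0.0
    out = []
    for code in codes:
        prediction += code * step
        out.append(prediction)
    return out

samples = [0.00, 0.04, 0.09, 0.12, 0.10]     # slowly varying, like speech
decoded = dpcm_decode(dpcm_encode(samples))
# Each reconstructed sample is within half a step of the original.
print(all(abs(a - b) <= 0.025 for a, b in zip(samples, decoded)))   # True
```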
Delta quantization and ADPCM (Adaptive Differential PCM)
If we increase the sampling frequency in differential quantization, two consecutive samples differ very little in level. Therefore, a single quantization interval can be used to quantize the difference.
With this method only one bit per sample is needed, and the transmission speed (bit rate) equals the sampling rate. This type of quantization is known as delta quantization.
In this delta quantization, the step size of the output variation is fixed. In other types of delta quantization the variation is not fixed and depends on the variations of the input signal. For example, ADPCM (Adaptive Differential PCM) is based on adjusting the quantization scale dynamically, depending on whether the differences in the input signal are small or large.
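A one-bit delta modulator with a fixed step can be sketched as follows (the step size and the test samples are assumed values for illustration): the output staircase moves up or down by one step each sample, so a single bit per sample is enough.

```python
def delta_modulate(samples, step=0.1):
    """Emit 1 when the staircase must step up, 0 when it must step down."""
    approx = 0.0
    bits = []
    for s in samples:
        bit = 1 if s >= approx else 0    # one bit transmitted per sample
        approx += step if bit else -step
        bits.append(bit)
    return bits

def delta_demodulate(bits, step=0.1):
    """Rebuild the staircase approximation from the received bits."""
    approx = 0.0
    out = []
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

samples = [0.05, 0.15, 0.30, 0.25, 0.10]
bits = delta_modulate(samples)
print(bits)                         # [1, 1, 1, 0, 0]
print(delta_demodulate(bits))       # staircase tracking the input
```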
Codification - Decodification
Codification is the process by means of which a quantized sample is represented by a binary number made of 1s and 0s.
Usually in telephony, 256 quantization intervals are used to represent all the possible sample values (for example in G.711, both A law and µ law). Therefore, 8 bits are needed to represent all the intervals (2^8 = 256). Other codecs that use ADPCM or delta quantization use fewer intervals and therefore need fewer bits to encode the samples.
The device that performs the quantization and the codification is called an encoder.
Decoding is the process by means of which the samples are reconstructed from the digital signal. This process is carried out in a device called a decoder.
The combination of an encoder and a decoder in the same equipment is called a codec.
IMPORTANT: To calculate the bit rate of a codec, we only need to multiply the sampling rate, expressed in samples per second (Hz), by the number of bits needed to quantize each sample; this gives us the bit rate of the codec.
However, since there are complex codecs with compression, the bit rate cannot always be deduced this way.
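For example, applying this rule to G.711 (8000 samples per second, 8 bits per sample) gives the well-known 64 kbit/s figure:

```python
def codec_bit_rate(sampling_rate_hz, bits_per_sample):
    """Bit rate = sampling rate × bits per sample (codecs without compression)."""
    return sampling_rate_hz * bits_per_sample

# G.711 (A law or µ law): 8000 samples/s × 8 bits/sample.
print(codec_bit_rate(8000, 8))   # 64000 bit/s = 64 kbit/s
```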
The G.711.1 speech codec was standardized by ITU-T in 2008.
If you require more information about this codec, you can visit G.711.1.