Resampling

Resampling is a change of the sampling rate of a digital signal. Applied to digital images, resampling means resizing the image. There are many different algorithms for resampling images.

For example, to enlarge an image by a factor of 2, you can simply duplicate each of its rows and each of its columns (and, to shrink it, throw every second one out). This method is called nearest neighbor. Alternatively, the intermediate columns and rows can be obtained by linear interpolation between the adjacent columns and rows. This method is called bilinear interpolation.
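As a rough sketch (plain NumPy, with illustrative helper names), nearest-neighbor zooming and shrinking by an integer factor, and the linear interpolation of a one-dimensional signal that underlies bilinear resizing, might look like this:

    import numpy as np

    def nearest_neighbor_zoom(img, factor=2):
        """Enlarge a 2-D image by duplicating each row and each column."""
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def nearest_neighbor_shrink(img, factor=2):
        """Shrink by the same factor by throwing out rows and columns."""
        return img[::factor, ::factor]

    def linear_zoom_1d(x, factor=2):
        """Obtain intermediate samples by linear interpolation of neighbors
        (the 1-D building block of bilinear interpolation)."""
        old_pos = np.arange(len(x))
        new_pos = np.linspace(0, len(x) - 1, factor * (len(x) - 1) + 1)
        return np.interp(new_pos, old_pos, x)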

 

Each point of the new image can also be computed as a weighted sum of a larger number of points of the original image (bicubic and other types of interpolation).
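To illustrate the weighted-sum idea, here is a sketch of a one-dimensional resampler in which every output sample is a weighted sum of nearby input samples; the Keys cubic kernel with a = -0.5 stands in as the 1-D basis of bicubic interpolation (function names, normalization and border handling are illustrative choices, not a fixed standard):

    import numpy as np

    def cubic_kernel(t, a=-0.5):
        """Keys cubic interpolation kernel (1-D basis of bicubic interpolation)."""
        t = np.abs(t)
        return np.where(t <= 1,
                        (a + 2) * t**3 - (a + 3) * t**2 + 1,
                        np.where(t < 2,
                                 a * t**3 - 5*a * t**2 + 8*a * t - 4*a,
                                 0.0))

    def kernel_resample_1d(x, new_len, kernel=cubic_kernel, support=2):
        """Each output sample is a weighted sum of 2*support input samples."""
        n = len(x)
        scale = (n - 1) / (new_len - 1)
        out = np.empty(new_len)
        for i in range(new_len):
            center = i * scale                       # position in input coordinates
            taps = np.arange(int(np.floor(center)) - support + 1,
                             int(np.floor(center)) + support + 1)
            w = kernel(taps - center)
            idx = np.clip(taps, 0, n - 1)            # clamp indices at the borders
            out[i] = np.sum(w * x[idx]) / np.sum(w)
        return out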

 

The highest-quality resampling is obtained with algorithms that take into account not only the spatial (temporal) domain but also the frequency domain of the image. Below we consider a resampling algorithm based on the idea of preserving as much of the frequency content of the image as possible.

The algorithm is built on the principle of interpolation / filtering / decimation.

We will consider the algorithm only for a one-dimensional signal, because a two-dimensional image can be resized first horizontally (row by row) and then vertically (column by column). Thus, two-dimensional image resampling reduces to one-dimensional signal resampling.
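For example (a sketch assuming some one-dimensional resampling function resample_1d(signal, new_length), such as one of those above), a two-dimensional resize can be written as two passes:

    import numpy as np

    def resample_image(img, new_h, new_w, resample_1d):
        """Resize a 2-D image with two 1-D passes: rows first, then columns."""
        # Horizontal pass: stretch/compress every row to new_w samples.
        tmp = np.stack([resample_1d(row, new_w) for row in img])
        # Vertical pass: stretch/compress every column to new_h samples.
        return np.stack([resample_1d(col, new_h) for col in tmp.T]).T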

Suppose we need to "stretch" a one-dimensional signal from a length of n points to a length of m points, i.e. by a factor of m/n. This requires three steps. The first step is interpolation with zeros, which increases the length of the signal m times.

To do this, multiply all samples of the original signal by m and then insert m−1 zero values after each sample. The spectrum of the signal changes as follows: the part of the spectrum originally contained in the digital signal remains unchanged (which is exactly what we want), while spectral interference (reflected copies of the spectrum) appears above the old half of the sampling rate and must be removed by filtering.
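A minimal sketch of this first step (zero-stuffing) in NumPy, with an illustrative function name:

    import numpy as np

    def zero_stuff(x, m):
        """Multiply the samples by m and insert m-1 zeros after each of them,
        making the signal m times longer."""
        y = np.zeros(len(x) * m)
        y[::m] = m * np.asarray(x)
        return y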

The second step is filtering out this interference with a low-pass filter.

Now we have a signal that is m times longer than the original, but it has kept its frequency content and has not acquired any extraneous frequency content (which we filtered out). This step would be the last one if our goal were only to lengthen the signal m times. But our task also requires shortening the signal n times, which takes two more steps. The first is anti-aliasing filtering: since the sampling rate is reduced n times, only the low-frequency part of the spectrum can be preserved (by the Nyquist–Shannon sampling theorem). All frequencies above half of the future sampling rate must be removed with an anti-aliasing filter whose cutoff frequency equals half of the future sampling rate. The second is decimation of the filtered signal by a factor of n: simply take every n-th sample of the signal and drop the rest. This algorithm is very similar to an ADC, which also first filters out the unwanted frequencies of the signal and then measures its value at regular intervals, discarding the values at all other moments of time.
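These two remaining steps on the zero-stuffed signal might be sketched as follows (a windowed-sinc FIR low-pass built with scipy.signal.firwin stands in for the anti-aliasing filter; the tap count is an arbitrary illustrative choice):

    import numpy as np
    from scipy.signal import firwin

    def antialias_and_decimate(y, n, cutoff, numtaps=101):
        """Low-pass the long signal, then keep every n-th sample.
        `cutoff` is a fraction of the current Nyquist frequency (0..1)."""
        h = firwin(numtaps, cutoff)                # FIR low-pass (anti-aliasing)
        filtered = np.convolve(y, h, mode='same')  # filtering
        return filtered[::n]                       # decimation: drop the rest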

Note that the two low-pass filters used in this algorithm one after the other can (and should) be replaced by a single filter. Its cutoff frequency must be chosen equal to the minimum of the cutoff frequencies of the two separate low-pass filters.
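Putting the three stages together with a single combined filter (a sketch under the same assumptions as above; cutoffs are expressed as fractions of the Nyquist frequency of the intermediate, m-times-longer signal):

    import numpy as np
    from scipy.signal import firwin

    def resample_m_over_n(x, m, n, numtaps=101):
        """Resample a 1-D signal by the factor m/n (assumes m and n not both 1)."""
        # The interpolation filter would cut at 1/m, the anti-aliasing filter
        # at 1/n; the single combined filter takes the minimum of the two.
        cutoff = min(1.0 / m, 1.0 / n)
        y = np.zeros(len(x) * m)
        y[::m] = m * np.asarray(x)               # interpolation with zeros
        h = firwin(numtaps, cutoff)              # single combined low-pass filter
        y = np.convolve(y, h, mode='same')       # filtering
        return y[::n]                            # decimation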

Another significant improvement to the algorithm is to cancel the common factors of m and n. For example, to squeeze a signal of 300 points down to 200 points, it is clearly enough to run the algorithm with m = 2 and n = 3.
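This reduction is simply a division by the greatest common divisor; for example:

    from math import gcd

    def reduced_factors(old_len, new_len):
        """Smallest m, n such that new_len / old_len == m / n."""
        g = gcd(new_len, old_len)
        return new_len // g, old_len // g

    print(reduced_factors(300, 200))   # -> (2, 3): stretch by 2, shrink by 3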

Note that the algorithm described above requires a very large amount of computation, since the intermediate one-dimensional signal during resampling can be hundreds of thousands of samples long. There is a way to significantly improve the performance of the algorithm and reduce memory consumption: polyphase filtering. It is based on the observation that it is not necessary to compute all the points of the long intermediate signal, because most of them are discarded during decimation. Polyphase filtering expresses the samples of the resulting signal directly through the samples of the original signal and the coefficients of the anti-aliasing filter.
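In practice the long intermediate signal is therefore never materialized. One readily available polyphase implementation (not necessarily the exact one meant here) is scipy.signal.resample_poly:

    import numpy as np
    from scipy.signal import resample_poly

    x = np.random.randn(300)
    y = resample_poly(x, up=2, down=3)   # polyphase resampling by 2/3
    print(len(y))                        # 200 samples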

Note that we have not discussed details of the algorithm such as handling the image boundaries, the choice of the signal phase for interpolation and decimation, and the design of a good anti-aliasing filter. We only note that for image resampling special attention must be paid to both the frequency and the spatial characteristics of the filter. If the filter is optimized only in the frequency domain, its kernel will have large ripples, and during image resampling these ripples in the filter kernel lead to brightness oscillations near sharp brightness transitions in the image (the Gibbs phenomenon), as in the last image in Fig. 12.

 

