In the last post, many moons ago, I introduced the 2-D FFT and discussed the magnitude and phase components of the spatial Fourier domain. In the next few posts, I would like to describe a concrete application of the 2-D FFT, namely blurring. Blurring and deblurring are useful image processing techniques that can be used in a variety of fields. I will explain what blur is mathematically and how it is performed artificially. In future posts, I’ll go into more depth about what happens in the spatial domain, different types of blur, and some current deblurring technology.

When we look at a blurred image, we see that the colors from various objects are spread out in the image over an area around the objects. Mathematically, this is exactly what is happening. Instead of the light from one point in the scene corresponding with the light at one point in the image, the light from each point in the scene is spread out over multiple points in the image. The function that describes this spread is called the point spread function (PSF). The PSF is also the impulse response of the imaging system, and in applications dealing with blur, is sometimes referred to as the “blur kernel”.
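To make the impulse-response idea concrete, here is a small sketch (in Python, my own example rather than anything from the post): convolving a unit impulse with a blur kernel simply reproduces the kernel, which is why the PSF completely characterizes the blur.

```python
def convolve1d(signal, kernel):
    """Full 1-D discrete convolution of two sequences."""
    n, k = len(signal), len(kernel)
    out = [0.0] * (n + k - 1)
    for i in range(n):
        for j in range(k):
            out[i + j] += signal[i] * kernel[j]
    return out

# A unit impulse "scene": all the light comes from one point.
impulse = [0.0, 0.0, 1.0, 0.0, 0.0]

# A simple 3-tap blur kernel (the PSF).
psf = [0.25, 0.5, 0.25]

blurred = convolve1d(impulse, psf)
# The single point of light is spread into a copy of the PSF.
print(blurred)  # [0.0, 0.0, 0.25, 0.5, 0.25, 0.0, 0.0]
```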

Let’s use an example to demonstrate the FFT method of blurring. Here is a picture of Toni Morrison, an African-American author and winner of the Pulitzer and Nobel prizes. We will use a Gaussian blur kernel, a common type of blur, to blur this image.

The following code shows how to load the image and plot it. The *imread* function loads the jpeg file and is the same function used in the last post. However, in this post, we use *imagesc* to display the image. After loading the image into the Matlab workspace, we keep only the first index of the third dimension and delete the second and third, since the three channels are nearly identical. The rest of the code after the plotting command just formats the figure and axes so that the aspect ratio of the image is preserved and the image looks nice.

```matlab
%Blur Demo

%Import image
origimage = imread('tonimorrison','jpg');

%Reduce image to 2-D
origimage = origimage(:,:,1);

%Plot image
figure, imagesc(origimage)
axis square
colormap gray
title('Original Image')
set(gca, 'XTick', [], 'YTick', [])
```

The following code shows how to create a Gaussian blur kernel. First, the size of the blur kernel is selected; the kernel size bounds the extent of the blur. Second, the variance of the Gaussian function is chosen. A larger variance will result in more blur, and a smaller variance will result in less blur, since the color from each point is spread out over a smaller area. Third, the mean of the Gaussian is chosen; the mean should simply be the center of the kernel image. Next, the values of the blur kernel matrix are filled using the *meshgrid* function and the equation for a two-dimensional Gaussian with mean *m* and variance *s^2*. Finally, the blur kernel is plotted the same way as the image.

```matlab
%Blur Kernel
ksize = 31;
kernel = zeros(ksize);

%Gaussian Blur
s = 3;
m = ksize/2;
[X, Y] = meshgrid(1:ksize);
kernel = (1/(2*pi*s^2))*exp(-((X-m).^2 + (Y-m).^2)/(2*s^2));

%Display Kernel
figure, imagesc(kernel)
axis square
title('Blur Kernel')
colormap gray
```
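For readers following along without Matlab, here is a rough Python equivalent of the kernel construction (my own sketch; the variable names mirror the Matlab code). It also makes a useful sanity check: the kernel values sum to approximately 1, so blurring preserves the overall image brightness.

```python
import math

ksize = 31       # kernel width and height
s = 3.0          # standard deviation of the Gaussian
m = ksize / 2.0  # center of the kernel

# Fill the kernel with samples of a 2-D Gaussian, as in the MATLAB code.
kernel = [[(1.0 / (2.0 * math.pi * s**2))
           * math.exp(-((x - m)**2 + (y - m)**2) / (2.0 * s**2))
           for x in range(1, ksize + 1)]
          for y in range(1, ksize + 1)]

total = sum(sum(row) for row in kernel)
print(total)  # close to 1, so mean brightness is preserved
```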

Now that we have the image and blur kernel, we can blur the image. But first, let’s take a step back and talk about the theory behind the code. As we said before, blurring just spreads the information from each point into the surrounding points. The operation that accomplishes this in the image domain is called convolution. Convolution involves sliding a window over the image and setting each point at the center of the window to the sum of the pointwise products of the window values and the image values that overlap them. This website gives a good visual and mathematical description of how this works. Although convolution will correctly blur an image, there is another method that is faster for large kernels: multiplication in the Fourier domain, computed with the Fast Fourier Transform (FFT). According to the convolution theorem, convolution in the time (or image) domain is equivalent to multiplication in the frequency (or spatial) domain. The image domain is the 2-D equivalent of the time domain, and the spatial domain is the 2-D equivalent of the Fourier domain. Thus, we have:
FFT(f ∗ g) = FFT(f) × FFT(g)

f ∗ g = IFFT( FFT(f) × FFT(g) )

In our case, *f* is the image, *g* is the blur kernel, and *f * g* gives us the blurred image. Therefore, we can accomplish the blur operation by a series of FFTs, rather than a convolution.
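The convolution theorem is easy to verify numerically. In this sketch (in Python rather than Matlab, with a naive DFT so it is self-contained), direct 1-D convolution matches the inverse DFT of the product of DFTs, provided both sequences are zero-padded to the full output length:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    """Naive inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def convolve(a, b):
    """Direct (linear) convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

a = [1.0, 2.0, 3.0, 4.0]
b = [0.25, 0.5, 0.25]

# Zero-pad both sequences to the full output length so the DFT's
# circular convolution matches ordinary (linear) convolution.
n = len(a) + len(b) - 1
A = dft(a + [0.0] * (n - len(a)))
B = dft(b + [0.0] * (n - len(b)))
product = idft([x * y for x, y in zip(A, B)])

direct = convolve(a, b)
print([round(p.real, 6) for p in product])  # [0.25, 1.0, 2.0, 3.0, 2.75, 1.0]
print(direct)                               # [0.25, 1.0, 2.0, 3.0, 2.75, 1.0]
```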

Let’s look through the code now. In order for the FFT of the kernel to be multiplied with the FFT of the image, the kernel must be placed into an image of the same size as the original image. The kernel can be placed anywhere in that image; the position only affects a shift in the result. After the kernel is placed into a new image, 2-D FFTs of the kernel image and the original image are taken. The FFT of the kernel is then modified so that it contains no zero values. This is not strictly necessary now, since we are only multiplying by the FFT of the kernel, but later, when we deblur, we will need to divide by it, and any zero values would produce Inf or NaN results. To keep the forward and inverse operations symmetric, we remove the zero values from the FFT of the kernel now. After the zeros are removed, the FFT of the kernel is multiplied with the FFT of the original image. Note that this is a pointwise (element-wise) multiplication, not a matrix multiplication, which would give a very different result. Next, the product is transformed back into the image domain with an inverse 2-D FFT. This procedure follows the second equation mentioned earlier. Finally, the blurred image is displayed using *imagesc*. Scroll down below the code to see the results.

```matlab
%Embed kernel in image that is size of original image
[h, w] = size(origimage);
kernelimage = zeros(h,w);
kernelimage(1:ksize, 1:ksize) = kernel;

%Perform 2D FFTs
fftimage = fft2(double(origimage));
fftkernel = fft2(kernelimage);

%Set all zero values to minimum value
fftkernel(find(fftkernel == 0)) = 1e-6;

%Multiply FFTs
fftblurimage = fftimage.*fftkernel;

%Perform Inverse 2D FFT
blurimage = ifft2(fftblurimage);

%Display Blurred Image
figure, imagesc(blurimage)
axis square
title('Blurred Image')
colormap gray
set(gca, 'XTick', [], 'YTick', [])
```

Ok, so this looks really bad. Although the image is blurred, content from each edge has wrapped around to the opposite side in both dimensions. This happens because multiplying FFTs performs a circular convolution: the FFT implicitly treats the image as periodic. So, where did we go wrong? The answer is that we need to pad the image before we take the FFT. There are several good ways to pad the edges of an image. The method I use below sets each point in the padded region to the value of the closest edge pixel. To visualize this type of padding, imagine vertical lines extending the image above and below, horizontal lines extending it to the right and left, and solid squares equal to the corner pixels in the corners of the padded region. This method is good enough for our purposes, but other methods could also be used, such as filling the padded areas with mirror images of the regions near the edges. The essential requirement is that the padding have approximately the same values as the regions close to the edges of the image.
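As a compact illustration of this style of padding, here is a small Python sketch of my own (the post’s Matlab *padimage* function appears further below) that replicates the nearest edge pixel; note that the corners automatically take the corner pixel values.

```python
def pad_replicate(img, p):
    """Pad a 2-D list of pixels by replicating the nearest edge pixel."""
    # Extend each row to the left and right with its edge values.
    rows = [[row[0]] * p + row + [row[-1]] * p for row in img]
    # Replicate the (already extended) top and bottom rows.
    return ([rows[0][:] for _ in range(p)]
            + rows
            + [rows[-1][:] for _ in range(p)])

img = [[1, 2],
       [3, 4]]
print(pad_replicate(img, 1))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```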

To implement the padding function, create the *padimage* function, as written below, in a separate file or within the main file, and modify the previous block of code in the following way.

```matlab
%Pad image
origimagepad = padimage(origimage, ksize);

%Embed kernel in image that is size of original image + padding
[h1, w1] = size(origimagepad);
kernelimagepad = zeros(h1,w1);
kernelimagepad(1:ksize, 1:ksize) = kernel;

%Perform 2D FFTs
fftimagepad = fft2(origimagepad);
fftkernelpad = fft2(kernelimagepad);

%Set all zero values to minimum value
fftkernelpad(find(fftkernelpad == 0)) = 1e-6;

%Multiply FFTs
fftblurimagepad = fftimagepad.*fftkernelpad;

%Perform Inverse 2D FFT
blurimagepad = ifft2(fftblurimagepad);

%Remove Padding
blurimageunpad = blurimagepad(ksize+1:ksize+h, ksize+1:ksize+w);

%Display Blurred Image
figure, imagesc(blurimageunpad)
axis square
title('Blurred Image - with Padding')
colormap gray
set(gca, 'XTick', [], 'YTick', [])
```

```matlab
function Ipad = padimage(I, p)
%This function pads the edges of an image to minimize edge effects
%during convolutions and Fourier transforms.
%Inputs
%I - image to pad
%p - size of padding around image
%Output
%Ipad - padded image

%Find size of image
[h, w] = size(I);

%Pad edges
Ipad = zeros(h+2*p, w+2*p);

%Middle
Ipad(p+1:p+h, p+1:p+w) = I;

%Top and Bottom
Ipad(1:p, p+1:p+w) = repmat(I(1,1:end), p, 1);
Ipad(p+h+1:end, p+1:p+w) = repmat(I(end,1:end), p, 1);

%Left and Right
Ipad(p+1:p+h, 1:p) = repmat(I(1:end,1), 1, p);
Ipad(p+1:p+h, p+w+1:end) = repmat(I(1:end,end), 1, p);

%Corners
Ipad(1:p, 1:p) = I(1,1);                   %Top-left
Ipad(1:p, p+w+1:end) = I(1,end);           %Top-right
Ipad(p+h+1:end, 1:p) = I(end,1);           %Bottom-left
Ipad(p+h+1:end, p+w+1:end) = I(end,end);   %Bottom-right
```

As you can see, the image is now blurred with no strange edge effects. In future posts, I’ll discuss the effects of blurring in the spatial domain, different types of blur, and some current deblurring technology.

*In the original post, there were several errors in the section of code that performed blurring with padding, which have been corrected. Additionally, there was a mismatch between the code that created the final deblurred figure and the code presented in the post, which caused some confusion for readers. The deblurred figure in the post was actually created using the blurred figure before its padding was removed, which made a large difference in the final product. Deblurring is actually more complicated than what I presented in the original post, and I will address deblurring methods in later posts in this series. I apologize for any confusion. In addition, I would like to thank several people who pointed out these errors in the comments section.


Thank you for presenting this code; I learned a few things.

I noticed it doesn’t work like “conv2(A,B,’same’)”. As you increase the kernel size, the image shifts toward the right-hand side.

Depending on your application, this shift might affect your results.

Yes, I noticed it shifting to the right too. Can someone please explain this?

The shifting to the right-hand side comes from the placement of the kernel.

A better method is to place the kernel in the center of the image and use fftshift and ifftshift.
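The shift the commenters describe can be reproduced in a few lines. In this hypothetical 1-D Python sketch (not from the thread), multiplying FFTs amounts to circular convolution with the kernel exactly as stored, so a kernel embedded at the top-left (index 0) shifts the signal by the kernel’s center offset:

```python
def circ_convolve(x, k):
    """Circular convolution, which is what multiplying DFTs computes."""
    n = len(x)
    return [sum(x[(i - j) % n] * k[j] for j in range(n)) for i in range(n)]

n = 8
impulse = [0.0] * n
impulse[4] = 1.0  # a single bright pixel at index 4

# Kernel embedded at the start of the array, as in the post's code.
kernel = [0.25, 0.5, 0.25] + [0.0] * (n - 3)

out = circ_convolve(impulse, kernel)
print(out)
# The peak lands at index 5: shifted by 1, the kernel's center offset.
```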

Hi Eric,

thank you for this article. Something still puzzles me, though. How come padding in the spatial domain can eliminate aliasing? The width of a pixel does not change, and this gives the same sampling frequency. The maximum frequency in the Fourier transform will still be half the sampling frequency. So if the initial maximum frequency did not comprise the whole bandwidth of the image, neither will the maximum frequency of the padded image, right?

Best regards, Michiel.

Really nice article. The fact that the kernel has to be written into a matrix with the same size as the original image (and not just the kernel size) gave me the final push to understand manipulation in Fourier space.

Thanks for explaining!


There is certainly a great deal to learn about this subject.

I love all the points you made.



Very nicely explained. Thanks!

thank you.

Hi, Thank you so much . You helped a lot.

But what if I want to form a motion blurred image with the blur kernel along an “S” curve or to form accelerated motion blurred image. Can it be done by matlab? I really want to solve these problem, but I have not ideal how to do it. Thanks.

Hahaaahh. I’m not too bright today. Great post!

Thanks for writing such an easy-to-understand article on this topic.

Thanks for spending time on the computer (writing) so others don’t have to.


Thank you so much . You helped a lot !!!!!!!!

is there any other method available for removing the edges in the blurred image?

Please have a look at your code, at the following line;

blurimage = blurimage(ksize+1:ksize+h, ksize+1:ksize+w);

this cannot be correct, because you are assigning “blurimage(ksize+1:ksize+h, ksize+1:ksize+w)” to “blurimage” with the same dimensions, but somehow blurimage is shifted by as much as ksize.

Another error occurs here (fftunblurimage = fftblurimagepad./fftkernelpad;):

fftblurimagepad and fftkernelpad have not been defined anywhere in your code. In the last block, should it be as follows:

%Unblur Image

%Pad image

fftblurimagepad = padimage(blurimage, ksize);

…..

Thanks

PS

PS,

You are correct about the errors. Thanks for pointing them out. Please see the revised post.

-Eric

Thanks Eric, now the code works perfectly.

I get an error when unpadding the image:

%Unpad image

blurimage = blurimage(ksize+1:ksize+h, ksize+1:ksize+w);

This is the error:

??? Index exceeds matrix dimensions.

Another error:

%Divide FFTs

fftunblurimage = fftblurimagepad./fftkernelpad;

fftblurimagepad and fftkernelpad are undefined.

Muhammad,

You are correct about the errors. Thanks for pointing them out. Please see the revised post.

-Eric

Hi, your tutorial helps a lot! But the code is not revised; I still get the error: ??? Index exceeds matrix dimensions.

Error in ==> MainApp at 48

blurimage = blurimagepad(ksize+1:ksize+h,ksize+1:ksize+w);

Or do you have another post with this revision? Please point out the URL.

Thank you very much!

why not just use conv2 for 2D convolution…

Yunfei,

Thanks for your question. The choice of whether to use an FFT or a convolution is dictated by the size of the kernel. For small kernels, convolution is faster, but for large kernels, the FFT is faster. I have not compared the speeds of the two options for various kernel sizes, but this would be a very interesting comparison.
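The trade-off described above can be sketched with a rough operation-count model (my own, with constants ignored, so it is only indicative, not a real benchmark):

```python
import math

def conv_cost(n, k):
    # Direct 2-D convolution: one multiply per image pixel per kernel tap.
    return n * n * k * k

def fft_cost(n):
    # Three n-by-n 2-D FFTs (image, kernel, inverse) plus a pointwise
    # multiply; constant factors are ignored.
    return 3 * n * n * math.log2(n) + n * n

n = 512
for k in (3, 31):
    print(k, conv_cost(n, k), round(fft_cost(n)))
# Direct convolution wins for the 3x3 kernel; the FFT wins for 31x31.
```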

Additionally, some effects, such as bokeh, require an FFT. I recommend reading this webpage for more information on blurring:

http://www.jhlabs.com/ip/blurring.html

Thanks,

Eric