
Blur filters tutorial, Part I

 

Blur filters (and convolution filters in general) are among the most important. Convolution filters include all sorts of blur, sharpening and edge-detection effects, emboss and many others. You may not know it, but they are often applied as part of more complex effects. The tutorial below will show you how to make fast, effective blur filters.

Convolution filters in general work like this: to apply the filter to the current pixel, you retrieve all pixels within a convolution kernel, sum them together according to the weight assigned to each pixel by a specific matrix, and divide the sum by a factor which is usually just the sum of the corresponding pixel weights. The convolution matrix (the so-called convolution kernel) for the simplest blur effect looks like this:

1 1 ... 1
1 1 ... 1
. .     .
1 1 ... 1

That means: we retrieve all pixels within a kernel of a specific width and height, simply add them together (all weights are equal to 1), divide by the number of pixels (width*height) and assign the resulting value to the center pixel. This simple algorithm is often called "Average". There are many other effects like this; among them, Gaussian Blur is the most widely known. The difference in Gaussian Blur is that the pixel weights aren't equal - they decrease from the kernel center towards its edges according to a bell-shaped curve, the Gaussian.
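To give you an idea of what such weights look like, here is a minimal sketch in plain C (not FM filter code) that fills a one-dimensional table with Gaussian weights; the sigma parameter and the normalization step are illustration only, not part of the filter developed below.

#include <math.h>

// Fill weight[0..2*radius] with bell-shaped (Gaussian) values.
// sigma controls how quickly the weights fall off towards the kernel edges.
void gaussian_weights(double *weight, int radius, double sigma)
{
 int i;
 double sum = 0;
 for (i = -radius; i <= radius; i++)
 {
  weight[i + radius] = exp(-(double)(i * i) / (2 * sigma * sigma));
  sum += weight[i + radius];
 }
 // Normalize so the weights add up to 1 - this replaces dividing
 // the convolution sum by the total weight.
 for (i = 0; i <= 2 * radius; i++) weight[i] /= sum;
}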

So, basically, to do a blur we need to pass through the image from (0,0) to (X,Y), and for every channel of every pixel we have to retrieve all the pixels around it (within a kernel), add them together and divide by the number of pixels.

Here is a simple, straightforward implementation of the algorithm described above. For easier reading, the filter code fragments are followed by explanations; do not paste the explanations into the FM window! Just take the filter code and open it with your copy of FM.

%ffp
Category: "FM Tutorial"
Title: "Blur 1"
Author: "Ilyich the Toad"

 

What was that for? Here we define the filter category, title, author and other properties.


ctl[0]:"Radius X", range=(2,scaleFactor*X/2),val=3,pagesize=3,linesize=1
ctl[1]:"Radius  Y", range=(2,scaleFactor*Y/2),val=3,pagesize=3,linesize=1

 

Controls definition section: here we define the sliders for entering the blur kernel width and height, respectively. Notice that the ranges of both are set dynamically from the image dimensions. Also notice that both are scaled by "scaleFactor" so they read correctly at different zoom levels.


ForEveryTile:
{
int kernelheight; int kernelwidth;
int xlook; int ylook;
int sum; int amount;

 

Variable definitions. Besides the kernel width and height, we define two variables for the pixel sum and pixel count, respectively, and two variables to be used as loop counters.


setCtlRange(0,2,scaleFactor*X/2); setCtlRange(1,2,scaleFactor*Y/2);

 

And here we set the ranges of the two sliders above - and again, we set them dynamically. The minimum kernel size is set to 2, since a smaller kernel (0 or 1 pixels) makes no sense.


kernelheight=max(ctl(1)/scaleFactor, 1);
kernelwidth=max(ctl(0)/scaleFactor, 1);

 

Pay attention to two things here. First, we "unscale" the controls so the "real" kernel size follows the preview zoom; this keeps the preview correct at different zoom factors. Second, we make sure the scaled kernel sizes never drop below 1. Imagine running the filter with a 2-pixel kernel on a 25% zoom preview: the scaled kernel used for the preview would be 0.5 pixels. Since we use only integer math here, every value lower than 1 is treated as 0, and that easily gets FM into calculating 0/0 expressions. Don't worry - FM won't crash on a division by zero; it just does nothing and you get a black, transparent preview. By using the max function above we guarantee that the scaled kernel size is never less than 1, so we at least get something in the preview.


for (y=0; y < Y; y++) {

 if(updateProgress(y,Y)) break;

 

At every calculated row we check for a user break and update the progress indicator.


 for (x=0; x < X; x++) {
  for (z= 0; z < Z; z++) {

  sum=0; amount=0;

 

Re-initializing the sum and pixel count for every new pixel.


  for (ylook=0; ylook<kernelheight; ylook++)
  {

 

Outer loop: retrieving kernel pixels row-by-row.


  for (xlook=0; xlook<kernelwidth; xlook++)
  {

 

Inner loop: retrieving pixels column-by-column.


  sum+= src(x+xlook-kernelwidth/2,y+ylook-kernelheight/2,z);
  amount++;

 

Accumulating the sum and counting the total number of pixels within the kernel.


  }; //accumulate pixels in a row
  }; //accumulate pixels in a column

 

Closing inner, then outer loops.


  pset(x,y,z,sum/amount);

 

Calculating the current pixel value (sum/amount) and setting current pixel.


  }//for z
 }//for x
}//for y

 

Closing the loops that pass over the image.


updateProgress(0, 100);

 

Reset progress bar back to 0.


true;
}

 

Closing "ForEveryTile" handler and ending filter.

So, what do we see about this algorithm? First and foremost, it is drastically slow on large images and large blur settings. Let's figure out why. If we increase the source image size n times, we increase the number of pixels n*n times. Then, to produce a "visually the same" effect, we also have to increase both the kernel width and height n times. As a result, we have to retrieve n^2 times more pixels to calculate each of the n^2 times more pixels. That means the running time grows as the 4th power of the source image size: simply doubling the image dimensions makes the filter sixteen times slower. Surely nobody is ever going to use that filter on an A4 or A3 image for print!

What can we do about that? Well, this code is far from optimized. For instance, we could stop counting the kernel pixels inside the loop body: the count is equal to kernel width*height anyway. But I'd suggest leaving it as it is for a while: if you ever modify this blur into a non-uniform one (where the pixel weights are not all equal but controlled by some more complex function, such as a Gaussian), you will find this way of calculating the total weight much simpler than working out integral formulas - see the sketch below. Anyway, that's not the main reason for the slow speed: the slowest operation is retrieving pixels, so that is what we need to minimize first of all.
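For example, a weighted version of the inner loops above might look like the following sketch (ignoring for the moment that the filter above uses only integer math; weight() stands for whatever hypothetical weighting function you choose - it is not an FM built-in):

  sum=0; totalweight=0;
  for (ylook=0; ylook<kernelheight; ylook++)
  {
   for (xlook=0; xlook<kernelwidth; xlook++)
   {
    w = weight(xlook, ylook); // e.g. a Gaussian weight
    sum += w * src(x+xlook-kernelwidth/2, y+ylook-kernelheight/2, z);
    totalweight += w; // accumulated exactly like "amount" above
   }
  }
  pset(x,y,z,sum/totalweight);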

The main solution is simple: instead of doing one pass over the image and calculating a square kernel for every pixel, we can work in two passes - first retrieving horizontal strips of pixels and doing an "X-only" blur, then retrieving vertical strips of the "horizontally blurred" pixels and doing a "Y-only" blur. This gives the same result in less time: as we increase the kernel size, the calculation time grows proportionally to its 1st power, not its 2nd, so the overall calculation time grows as the 3rd power of the source image size instead of the 4th. A rough sketch of the idea follows below; in the next tutorial you will be shown exactly how to do that...
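Here is that two-pass idea sketched in plain C (not a ready-made FM filter), blurring a single 8-bit channel with a box kernel. The buffer layout and the edge clamping are my own assumptions for this illustration:

// Two-pass box blur of a width x height single-channel image.
// temp[] holds the horizontally blurred intermediate result.
void box_blur_two_pass(const unsigned char *in, unsigned char *out,
                       unsigned char *temp, int width, int height,
                       int kw, int kh)
{
 int x, y, k, sum;
 // Pass 1: horizontal ("X-only") blur into temp
 for (y = 0; y < height; y++)
  for (x = 0; x < width; x++)
  {
   sum = 0;
   for (k = 0; k < kw; k++)
   {
    int sx = x + k - kw / 2;
    if (sx < 0) sx = 0;              // clamp to the image edges
    if (sx >= width) sx = width - 1;
    sum += in[y * width + sx];
   }
   temp[y * width + x] = (unsigned char)(sum / kw);
  }
 // Pass 2: vertical ("Y-only") blur of the intermediate result
 for (y = 0; y < height; y++)
  for (x = 0; x < width; x++)
  {
   sum = 0;
   for (k = 0; k < kh; k++)
   {
    int sy = y + k - kh / 2;
    if (sy < 0) sy = 0;
    if (sy >= height) sy = height - 1;
    sum += temp[sy * width + x];
   }
   out[y * width + x] = (unsigned char)(sum / kh);
  }
}

Note how each pass retrieves only kw (or kh) pixels per output pixel instead of kw*kh, which is where the speed-up comes from.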

 
