Circular and radial blur can be implemented in different ways. The method I am going to describe here is reasonably efficient, provided that there is a hyper-fast 1D box-based blur routine at one’s disposal (more on that in a future post). The algorithm is quite straightforward to implement, and also has the beautiful property of being able to do both circular and radial blur at the same time.

I will work on grayscale images, although as usual the process can be extended to color images by performing the same operations on the three R, G, and B channels.

The key to circular/radial blur is to not work on the image space directly, but on a dual space, where Cartesian coordinates are transformed to polar coordinates around the central pixel. Like this:

Each column in the transformed image is one of the ‘spokes’ that go from the center to one of the pixels on the perimeter of the original image. The length of the longest spoke is half the diagonal, and the perimeter of a WxH image has *2·(W+H-2)* pixels. So the transformed image is a buffer of dimensions *ceil(sqrt(W^2+H^2)/2) × 2·(W+H-2)*.
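As a quick sketch, the dimensions of that buffer can be computed like this (`polar_buffer_size` is an illustrative name, not actual RCSDK code):

```python
import math

def polar_buffer_size(w, h):
    """Dimensions of the polar-space buffer for a w x h image.

    Rows: one per radius step, up to half the diagonal.
    Columns: one per perimeter pixel (one spoke each).
    """
    spoke_len = math.ceil(math.sqrt(w * w + h * h) / 2)  # half the diagonal
    num_spokes = 2 * (w + h - 2)                         # perimeter pixels
    return spoke_len, num_spokes

# e.g., for a 640x480 image: spoke length ceil(800/2) = 400, and
# 2*(640+480-2) = 2236 spokes.
```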

We also need an inverse transform that restores the original image from its polar form.

Note that, for good results, both the transform and its inverse must do proper filtering. Since the spokes are diagonal lines that do not follow the pixel grid of the original image, transforming and un-transforming is not exactly reciprocal; i.e., un-transforming the transformed image does not restore the original image *identically*. In simpler words: *this process adds a little blur due to filtering*. However, this is fine, because we’re aiming at circular/radial blur after all.

Below are the schematics of the type of filtering I am using in RCSDK. When I sample a pixel centered at *(xm,ym)* along a spoke, I integrate the original image, constrained to a 1×1 square area. This integration simply takes the (up to) 4 overlapped pixels, and weights each of them by its surface-overlap factor:
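Those surface-overlap factors turn out to be the familiar bilinear weights. A minimal sketch, assuming pixels are unit squares centered at integer coordinates (a convention chosen for this example; the function name is hypothetical):

```python
import math

def overlap_weights(xm, ym):
    """Surface-overlap weights of a 1x1 square centered at (xm, ym)
    against the (up to) 4 unit pixels it covers.

    Returns a list of ((ix, iy), weight) pairs; the weights sum to 1.
    """
    x0, y0 = math.floor(xm), math.floor(ym)
    fx, fy = xm - x0, ym - y0   # fractional position inside pixel (x0, y0)
    return [
        ((x0,     y0    ), (1 - fx) * (1 - fy)),
        ((x0 + 1, y0    ), fx       * (1 - fy)),
        ((x0,     y0 + 1), (1 - fx) * fy),
        ((x0 + 1, y0 + 1), fx       * fy),
    ]
```

When the sample lands exactly on a pixel center, all the weight collapses onto that single pixel, as expected.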

Note also that the inverse transform must ‘undo’ the filtering, by splatting contributions to the final image using the same surface overlapping factors.

…and here comes the interesting part.

1- If we **1D-blur the rows** in the polar-space image, and then apply the inverse transform, we get a **circular blur** of the original image.

2- If we **1D-blur the columns** in the polar-space image, and then apply the inverse transform, we get a **radial blur** of the original image.
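To make the forward transform concrete, here is a naive sketch (plain nested loops, pixel centers at integer coordinates; `polar_transform` and its internals are illustrative names, not RCSDK code — a real implementation would also keep the weights needed by the inverse splatting pass):

```python
import math

def polar_transform(img, w, h):
    """Resample a row-major list of w*h grayscale values along spokes
    from the center to every perimeter pixel, using surface-overlap
    (bilinear) filtering. Returns a spoke_len x num_spokes grid."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    spoke_len = math.ceil(math.sqrt(w * w + h * h) / 2)
    # walk the perimeter once: top row, right column, bottom row, left column
    perim = ([(x, 0) for x in range(w)] +
             [(w - 1, y) for y in range(1, h)] +
             [(x, h - 1) for x in range(w - 2, -1, -1)] +
             [(0, y) for y in range(h - 2, 0, -1)])
    polar = [[0.0] * len(perim) for _ in range(spoke_len)]
    for s, (px, py) in enumerate(perim):
        for r in range(spoke_len):
            t = r / (spoke_len - 1)          # 0 at the center, 1 at the perimeter
            xm, ym = cx + t * (px - cx), cy + t * (py - cy)
            x0, y0 = math.floor(xm), math.floor(ym)
            fx, fy = xm - x0, ym - y0
            v = 0.0
            # up to 4 overlapped pixels, each weighted by its overlap area
            for ix, iy, wgt in ((x0,     y0,     (1 - fx) * (1 - fy)),
                                (x0 + 1, y0,     fx * (1 - fy)),
                                (x0,     y0 + 1, (1 - fx) * fy),
                                (x0 + 1, y0 + 1, fx * fy)):
                if 0 <= ix < w and 0 <= iy < h:
                    v += wgt * img[iy * w + ix]
            polar[r][s] = v
    return polar
```

Circular blur then 1D-blurs each row of `polar`, radial blur 1D-blurs each column, and the inverse transform splats every polar sample back into the image with the same four overlap weights, normalizing by the accumulated weight per pixel.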

A fast box-based 1D blur implementation runs in *O(n)*, regardless of the radius of the blur. Let’s assume a square image of side *S*. The size of the transformed image is *2·(S+S-2)·sqrt(2)·S/2 ≈ 2·sqrt(2)·S^2* pixels, which means quadratic complexity in *S*, i.e., linear in the number of pixels. The algorithm is made of fairly simple arithmetic operations, and lends itself well to multi-threading.
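The *O(n)* bound comes from a running sum: each output sample costs one subtraction and one addition, no matter the radius. A minimal sketch (clamping at the edges, which is one common convention; the function name is mine):

```python
def box_blur_1d(row, radius):
    """O(n) running-sum box blur of a 1D sequence, independent of radius.
    Samples outside the sequence are clamped to the first/last value."""
    n = len(row)
    win = 2 * radius + 1
    out = [0.0] * n
    # prime the window with (clamped) samples around index 0
    acc = sum(row[min(max(i, 0), n - 1)] for i in range(-radius, radius + 1))
    for i in range(n):
        out[i] = acc / win
        # slide: drop the leftmost sample, add the next rightmost one
        acc -= row[max(i - radius, 0)]
        acc += row[min(i + 1 + radius, n - 1)]
    return out
```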

Here you can take a look at a couple of filtering examples taken from my RCSDK Unit Testing System:

*Some bonus remarks:*

1- If the amount of blur becomes progressively smaller as you approach the center, radial blur becomes *lens blur*.

2- If the amount of blur used in radial blur is different for each color component, you get *chromatic aberration*.

3- If you work with spectral colors, instead of RGB, chromatic aberration looks great even when the blur stretches colors along long streaks.

*Some more final remarks:*

– In general, blur operations are clamped at the boundaries of the buffer they operate on. However, in the case of circular blur, one must wrap around from the first to the last spoke.
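That wrap-around variant is just modular indexing on the same running-sum scheme; a sketch (again, the name is mine, not RCSDK's):

```python
def box_blur_1d_wrap(row, radius):
    """O(n) running-sum box blur that wraps around, for blurring across
    the spoke axis: the last spoke is adjacent to the first."""
    n = len(row)
    win = 2 * radius + 1
    acc = sum(row[i % n] for i in range(-radius, radius + 1))
    out = []
    for i in range(n):
        out.append(acc / win)
        acc -= row[(i - radius) % n]      # drop the leftmost sample
        acc += row[(i + 1 + radius) % n]  # add the next rightmost one
    return out
```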

– It is not really necessary to create the transformed polar image, which is (much) larger than the original. One can feed the 1D blur with transformed pixels directly, and save some memory. Doing so doesn’t imply a performance penalty, because the algorithm runs through each spoke only once.