Recursion Is Not Evil
Most known FFT implementations avoid the natural recursion of the algorithm, replacing it with loops. But recursion is no longer expensive if it is resolved at compile time, as happens with template class recursion. Moreover, this kind of recursion can yield performance benefits, because the generated code is unrolled more thoroughly than ordinary loops. This idea is similar to the approach of Todd Veldhuizen [6], who rewrote the same Cooley-Tukey algorithm (Listing One) entirely in template metaprograms. The nested loops became recursive templates of nonlinear complexity, which can be compiled on modern workstations only up to about N=2^12, taking considerable time and memory. Although quite efficient, that implementation has not been applied to real technical problems, because they often need to handle larger amounts of data. Between these two extremes, I try to find a "golden section": exploiting the efficiency of template metaprogramming while keeping compile time low enough that the implementation remains applicable to huge signals limited only by physical memory.
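As a minimal illustration of the idea (a toy sketch of my own, not part of the FFT code), the following template unrolls a fixed-size accumulation at compile time; the class name UnrolledSum and its interface are chosen only for this example:

#include <cstddef>

// Toy example: a compile-time recursive template that the compiler
// resolves into straight-line code, i.e., a fully unrolled loop.
template<std::size_t N, typename T>
struct UnrolledSum {
   static T apply(const T* data) {
      // Recurse on the first N-1 elements, then add the last one.
      return UnrolledSum<N-1,T>::apply(data) + data[N-1];
   }
};

// The recursion terminates at the specialization for N=1.
template<typename T>
struct UnrolledSum<1,T> {
   static T apply(const T* data) { return data[0]; }
};

// Usage: the call below expands at compile time into data[0]+data[1]+data[2]+data[3].
//   double d[4] = {1.0, 2.0, 3.0, 4.0};
//   double s = UnrolledSum<4,double>::apply(d);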
The approach presented here exploits the original recursive nature of the FFT, implementing the Danielson-Lanczos relation (Example 2) using template class recursion. The necessary assumption is that the length of the signal N=2^P is a static constant and is passed as the template parameter P. I start at a high abstraction level, dividing the algorithm from Listing One into two parts: the scrambling and the Danielson-Lanczos section. Listing Two shows the initial template class GFFT with the member function fft(T* data) comprising both parts of the transform.
Listing Two
template<unsigned P, typename T=double>
class GFFT {
   enum { N = 1<<P };
   DanielsonLanczos<N,T> recursion;
public:
   void fft(T* data) {
      scramble(data,N);
      recursion.apply(data);
   }
};
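A brief usage sketch (my own example, not from the listings) might look as follows, assuming scramble from Listing One is available and the array holds N complex values as interleaved real and imaginary parts:

// Hypothetical usage of GFFT for a signal of length N = 2^10 = 1024.
// The data layout is 2*N doubles: data[2*i] real, data[2*i+1] imaginary.
int main() {
   const unsigned P = 10;
   const unsigned N = 1 << P;
   double* data = new double[2*N];
   // ... fill data[2*i] and data[2*i+1] with the signal ...
   GFFT<P,double> gfft;
   gfft.fft(data);
   // ... use the transformed data ...
   delete[] data;
   return 0;
}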
The main point now is the implementation of the DanielsonLanczos template class using recursive templates, where P is the power of two defining N. The type T is the type of the data elements, defaulting to double. The implementation of the function scramble is discussed briefly later; for now it can be taken over from Listing One.
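For reference, a bit-reversal scramble in the spirit of Listing One (the classic Numerical Recipes reordering) could look like the sketch below; this is my own rendering rather than the article's exact code, and it assumes the same interleaved layout with nn complex values:

#include <algorithm>  // std::swap

// Bit-reversal permutation of nn complex values stored as 2*nn interleaved reals:
// each element is swapped with the one at its bit-reversed index.
template<typename T>
void scramble(T* data, unsigned nn) {
   unsigned n = nn << 1;
   unsigned j = 1;
   for (unsigned i = 1; i < n; i += 2) {
      if (j > i) {
         std::swap(data[j-1], data[i-1]);   // real parts
         std::swap(data[j], data[i]);       // imaginary parts
      }
      unsigned m = nn;
      while (m >= 2 && j > m) {
         j -= m;
         m >>= 1;
      }
      j += m;
   }
}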
The DanielsonLanczos template class in Listing Three depends on the integer N, which defines the current data length at each recursion level, and on the same type T. To avoid a nonlinear number of instantiated templates, I define only one template class, DanielsonLanczos<N/2,T>, per recursion level. Therefore, the total number of template classes to be instantiated is P+1. The constant P cannot be large because of physical memory limits. For instance, if the data consists of complex elements in double precision (2x8 bytes per element), then P may vary from 1 to 27 on a 32-bit platform; the case P=28 corresponds to 4GB of data, leaving no memory for other program variables. P can be larger on 64-bit processors, but it is again limited by available physical memory. Such a small number of instantiated template classes should not cause any compilation problems.
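To make the P+1 count concrete, consider a small hypothetical case, P=3 (N=8); the following declaration triggers the whole chain of instantiations at compile time:

// Declaring a GFFT for P = 3 instantiates exactly one DanielsonLanczos
// class per recursion level:
//   DanielsonLanczos<8,double>
//   DanielsonLanczos<4,double>
//   DanielsonLanczos<2,double>
//   DanielsonLanczos<1,double>   // the empty specialization
// i.e., P+1 = 4 classes, a number that grows only linearly with P.
GFFT<3> small_transform;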
The recursive idea of the Danielson-Lanczos relation is realized by two recursive calls of the member function apply: once with the original signal data and once shifted by N. Each subsequent recursion level halves N. The last level is specialized for N=1 and has an empty member function apply.
Listing Three
template<unsigned N, typename T=double>
class DanielsonLanczos {
   DanielsonLanczos<N/2,T> next;
public:
   void apply(T* data) {
      next.apply(data);
      next.apply(data+N);

      T tempr,tempi,c,s;
      for (unsigned i=0; i<N; i+=2) {
         c = cos(i*M_PI/N);
         s = -sin(i*M_PI/N);
         tempr = data[i+N]*c - data[i+N+1]*s;
         tempi = data[i+N]*s + data[i+N+1]*c;
         data[i+N] = data[i]-tempr;
         data[i+N+1] = data[i+1]-tempi;
         data[i] += tempr;
         data[i+1] += tempi;
      }
   }
};

template<typename T>
class DanielsonLanczos<1,T> {
public:
   void apply(T* data) { }
};
After the recursion has finished, the data is modified in the loop, where the cos and sin functions are used to compute the complex roots of unity (c,s). The result (tempr,tempi) is a temporary complex number used to modify (data[i+N],data[i+N+1]) and (data[i],data[i+1]). This simple implementation in Listing Three has poor performance because of the many trigonometric function evaluations.
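To spell out what the loop computes, here is an equivalent restatement of its body using std::complex (my own sketch, not the article's code), with k = i/2 indexing the complex elements at one recursion level:

#include <cmath>
#include <complex>

// One recursion level of length N: N complex values stored as 2*N interleaved
// reals, with E_k at (data[2k], data[2k+1]) and O_k at (data[2k+N], data[2k+N+1]).
template<typename T>
void butterfly_level(T* data, unsigned N) {
   for (unsigned k = 0; k < N/2; ++k) {               // k = i/2 in Listing Three
      T c =  std::cos(2*M_PI*k/N);
      T s = -std::sin(2*M_PI*k/N);
      std::complex<T> w(c, s);                        // root of unity exp(-2*pi*i*k/N)
      std::complex<T> e(data[2*k],   data[2*k+1]);    // even-half result E_k
      std::complex<T> o(data[2*k+N], data[2*k+N+1]);  // odd-half result O_k
      std::complex<T> t = w*o;                        // corresponds to (tempr,tempi)
      // Danielson-Lanczos butterfly: X_k = E_k + t,  X_{k+N/2} = E_k - t
      std::complex<T> sum = e + t, diff = e - t;
      data[2*k]   = sum.real();   data[2*k+1]   = sum.imag();
      data[2*k+N] = diff.real();  data[2*k+N+1] = diff.imag();
   }
}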