I have a slightly unusual problem, but I am trying to avoid re-coding FFT.
In general, I want to know this: if I have an algorithm that is implemented for type
float, but it would work wherever a certain set of operations is defined (e.g. complex numbers, which also define
*, +, and so on), what is the best way to use that algorithm on another type that supports those operations? In practice this is tricky because numeric algorithms are generally written for speed, not generality.
I am working with values with a very high dynamic range, and so I would like to store them in log space (mostly to avoid underflow).
What I'd like is the log of the FFT of some series:
x = [1, 2, 3, 4, 5]
fft_x = [log(x_val) for x_val in fft(x)]
Even this will result in significant underflow. What I'd like is to store log values and use
+ in place of *, and
logaddexp in place of +.
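To make the two log-space primitives concrete, here is a minimal pure-Python sketch of logaddexp (log-space addition) and a log_sub counterpart (log-space subtraction, larger magnitude first). numpy provides logaddexp; log_sub is a hypothetical helper shown here just to keep the example dependency-free:

```python
import math

def logaddexp(a, b):
    # log(exp(a) + exp(b)), computed without ever leaving log space
    if a < b:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))

def log_sub(a, b):
    # log(exp(a) - exp(b)); requires a >= b
    assert a >= b
    return a + math.log1p(-math.exp(b - a))

# Adding two tiny magnitudes that would underflow to 0.0 as plain floats:
log_p = -1000.0   # exp(-1000) underflows in double precision
log_q = -1001.0
print(logaddexp(log_p, log_q))  # finite log of the sum, ~ -999.687
```

The key trick is `log1p(exp(b - a))`: the exponent of the *difference* stays near zero even when both operands are far outside float range.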
My thought of how to do this was to implement a simple LogFloat class that defines these primitive operations (but operates in log space). Then I could simply run the FFT code by letting it use my logged values.
import math
from numpy import log, logaddexp, sign

def log_sub(a, b):
    # log(exp(a) - exp(b)) for a >= b (not a numpy builtin)
    return a + math.log1p(-math.exp(b - a))

class LogFloat:
    def __init__(self, sign, log_val):
        assert float(sign) in (-1, 1)
        self.sign = int(sign)
        self.log_val = log_val

    @staticmethod
    def from_float(fval):
        return LogFloat(sign(fval), log(abs(fval)))

    def __imul__(self, lf):
        self.sign *= lf.sign
        self.log_val += lf.log_val
        return self

    def __idiv__(self, lf):
        self.sign *= lf.sign
        self.log_val -= lf.log_val
        return self

    def __iadd__(self, lf):
        if self.sign == lf.sign:
            self.log_val = logaddexp(self.log_val, lf.log_val)
        else:
            # subtract the smaller magnitude from the larger
            if self.log_val > lf.log_val:
                self.log_val = log_sub(self.log_val, lf.log_val)
            else:
                self.log_val = log_sub(lf.log_val, self.log_val)
                self.sign *= -1
        return self

    def __isub__(self, lf):
        self.__iadd__(LogFloat(-1 * lf.sign, lf.log_val))
        return self

    def __pow__(self, lf):
        # note: there may be a way to do this without exponentiating
        # if the exponent is 0, always return 1
        if lf.log_val == -float('inf'):
            return LogFloat.from_float(1.0)
        lf_value = lf.sign * math.exp(lf.log_val)
        if self.sign == -1:
            # note: in this case, lf_value must be an integer
            return LogFloat(self.sign ** int(lf_value), self.log_val * lf_value)
        return LogFloat(self.sign, self.log_val * lf_value)

    def __mul__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp *= lf
        return temp

    def __div__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp /= lf
        return temp

    # Python 3 spells division __truediv__ / __itruediv__
    __truediv__ = __div__
    __itruediv__ = __idiv__

    def __add__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp += lf
        return temp

    def __sub__(self, lf):
        temp = LogFloat(self.sign, self.log_val)
        temp -= lf
        return temp

    def __str__(self):
        result = str(self.sign * math.exp(self.log_val)) + '('
        if self.sign == -1:
            result += '-'
        result += 'e^' + str(self.log_val) + ')'
        return result

    def __neg__(self):
        return LogFloat(-self.sign, self.log_val)

    def __radd__(self, val):
        # for sum()
        if val == 0:
            return self
        return self + val
Then, the idea would be to construct a list of
LogFloats, and then use it in the FFT:
x_log_float = [LogFloat.from_float(x_val) for x_val in x]
fft_x_log_float = fft(x_log_float)
This can definitely be done if I re-implement FFT (and simply use
LogFloat wherever I would use
float before), but I thought I would ask for advice first. This is a fairly recurring problem: I have a stock algorithm that I want to operate in log space (and it only uses a handful of operations like '+', '-', '*', '/', etc.).
This reminds me of writing generic functions with templates, so that the return arguments, parameters, etc. are constructed from the same type. For example, if you can do an FFT of floats, you should be able to easily do one on complex values (by simply using a class that provides the necessary operations for complex values).
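To illustrate that duck-typing point, here is a hypothetical generic direct convolution: it never names a concrete element type, so the same code runs unchanged on ints, floats, exact Fractions, or any class implementing + and *:

```python
from fractions import Fraction

def convolve(a, b):
    # Direct O(n*m) convolution; only requires + and * on the elements,
    # so the element type is free (float, complex, Fraction,
    # a LogFloat-style class, ...).
    out = [None] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            prod = x * y
            out[i + j] = prod if out[i + j] is None else out[i + j] + prod
    return out

print(convolve([1, 2, 3], [4, 5]))             # [4, 13, 22, 15]
print(convolve([Fraction(1, 3)] * 2, [3, 6]))  # exact rationals, no rounding
```

Python's dynamic dispatch gives you the template-like behavior for free; the catch, as noted above, is that fast FFT libraries bypass it by working on packed machine floats.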
As it currently stands, it looks like all FFT implementations are written for bleeding-edge speed, and so won't be very general. So as of now, it looks like I'd have to reimplement FFT for generic types...
The reason I'm doing this is that I want very high-precision convolutions (and the O(N^2) runtime of direct convolution is extremely slow). Any advice would be greatly appreciated.
*Note, I might need to implement trigonometric functions for
LogFloat, and that would be fine.
This works because
LogFloat is a commutative ring (and it doesn't require implementation of trigonometric functions for
LogFloat). The simplest way to do it was to reimplement FFT, but @J.F.Sebastian also pointed out a way of using the
Python generic convolution, which avoids coding the FFT (which, again, was quite easy using either a DSP textbook or the Wikipedia pseudocode).
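For reference, the Wikipedia radix-2 Cooley-Tukey pseudocode translates into a short pure-Python routine. This is a sketch for power-of-two lengths, not a tuned implementation; since the butterfly only uses +, -, and *, swapping in another element type (e.g. complex values represented as pairs of LogFloats, with the twiddle factors converted accordingly) is a local change:

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # FFT of even-indexed samples
    odd = fft(x[1::2])    # FFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

print(fft([1, 2, 3, 4]))  # matches numpy.fft.fft([1, 2, 3, 4])
```

At roughly a dozen lines, re-implementing the FFT generically really is the easy part; the speed loss relative to a vectorized library is the price of generality.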
asked Mar 9, 2012 at 22:03
I confess I didn't entirely keep up with the math in your question. However it sounds like what you're really wondering is how to deal with extremely small and large (in absolute value) numbers without hitting underflows and overflows. Unless I misunderstand you, I think this is similar to the problem I have working with units of money, not losing pennies on billion-dollar transactions due to rounding. If that's the case, my solution has been Python's built-in decimal-math module. The documentation is good for both Python 2 and Python 3. The short version is that decimal math is an arbitrary-precision floating- and fixed-point type. The Python modules conform to the IBM/IEEE standards for decimal math. In Python 3.3 (which is currently in alpha form, but I've been using it with no problems at all), the module has been rewritten in C for an up to 100x speedup (in my quick tests).
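For what it's worth, a short sketch of the decimal approach: precision is set on the context, and exponents far outside double-precision range are representable, so intermediate products don't underflow:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits for all arithmetic

tiny = Decimal("1e-400")         # far below the double-precision range
product = tiny * tiny
print(product)                   # 1E-800, no underflow
print(float(product))            # 0.0 -- the same value underflows as a float
```

The trade-off is that every operation goes through the decimal context, so for long convolutions this is much slower than hardware floats (the C rewrite mentioned above narrows that gap considerably).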
You could scale your time-domain samples by a large number s to avoid underflow. By the Fourier scaling property, if

F{ f(t) } = X(jw)

then

F{ s*f(s*t) } = X(jw/s)    (for s > 0)
Now using the convolution theorem you can work out how to scale your final result to remove the effect of the scaling factor.
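Concretely, for convolution the scaling can be applied to each input and divided out at the end. A toy sketch with a made-up scale factor s, using the direct definition rather than an FFT just to keep it short (note the final correction is done in log space, since dividing by s*s directly would re-introduce the underflow):

```python
import math

def convolve(a, b):
    # direct convolution, just to demonstrate the scaling identity
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

s = 1e200                          # hypothetical scale factor
x = [1e-250, 2e-250, 3e-250]       # raw products ~1e-500: total underflow
y = [4e-250, 5e-250]

scaled = convolve([v * s for v in x], [v * s for v in y])  # values ~1e-100
# Each output term carries s twice (once per input), so subtract 2*log(s).
log_result = [math.log(v) - 2 * math.log(s) for v in scaled]
print(log_result)
```

Picking s is the delicate part: it has to lift the smallest products above the underflow threshold without pushing the largest ones into overflow, which is workable when you know the dynamic range of your data in advance.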