Different results for the same algorithm in MATLAB

I'm doing a linear algebra assignment comparing the performance and stability of two QR factorization algorithms, Gram-Schmidt and Householder.

My question arises when computing the following table:

[Table: ‖A − QR‖ and ‖I − QᵀQ‖ for Gram-Schmidt and Householder, computed on a 32-bit machine]

where Q and R are the matrices produced by the Gram-Schmidt and Householder QR factorizations of a Hilbert matrix A, I is the identity matrix of dimension N, and ‖·‖ is the Frobenius norm.
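The table can be reproduced with a minimal Python/NumPy sketch (my own illustration, not the asker's code; it assumes the classical Gram-Schmidt variant, while `numpy.linalg.qr`, like MATLAB's `qr`, uses Householder reflections via LAPACK):

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt QR; numerically unstable for ill-conditioned A."""
    n = A.shape[1]
    Q = np.zeros_like(A)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # project against the original column
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

n = 8
# Hilbert matrix H[i, j] = 1 / (i + j + 1), notoriously ill-conditioned
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

Qg, Rg = classical_gram_schmidt(A)
Qh, Rh = np.linalg.qr(A)              # Householder QR via LAPACK

I = np.eye(n)
print("GS  ||I - Q'Q||_F =", np.linalg.norm(I - Qg.T @ Qg))  # large: orthogonality lost
print("HH  ||I - Q'Q||_F =", np.linalg.norm(I - Qh.T @ Qh))  # near machine epsilon
print("HH  ||A - QR||_F  =", np.linalg.norm(A - Qh @ Rh))
```

On such an ill-conditioned matrix, the Gram-Schmidt orthogonality error is many orders of magnitude larger than the Householder one, which is the stability gap the assignment is about.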

When I run the calculations on different computers I get different results in some cases. Could that be the cause? The table above corresponds to the calculations performed on a 32-bit computer, and the next table to a 64-bit one:

[Table: the same error norms, computed on a 64-bit machine]

Do these MATLAB results depend on the computer architecture on which the calculations were made?

asked Aug 24 '12 at 03:08

I spent at least an hour today trying to figure out why I was getting different results on two different boxes. Turned out I'd gotten two test inputs reversed on one box. -

Hot Licks, if your results differ from Melkhiah66's, please show them. -

You are asking for floating-point determinism... If you want to understand how hard that is to achieve, here is a good blog post by Glenn Fiedler: gafferongames.com/networking-for-game-programmers/… -

2 Answers

I'm really interested in an answer if you find one!
Unfortunately, plenty of things can change the numerical results...

To be efficient, some LAPACK algorithms iterate over sub-matrix blocks. For optimal efficiency, the block size has to roughly match the size of the CPU's L1/L2/L3 caches...

The block size is controlled by the LAPACK routine ILAENV; see http://www.netlib.org/lapack/lug/node120.html

Of course, if the block sizes differ, the results will differ numerically... It is possible that the LAPACK/BLAS DLLs provided by MATLAB were compiled with a differently tuned version of ILAENV on the two machines, or that ILAENV was replaced with a customized, optimized version that takes cache size into account. You could check this yourself by writing a small C program that calls ILAENV and linking it against the DLLs provided by MATLAB...
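Why a different block size changes the bits at all: floating-point addition is not associative, so any change in evaluation order (which is exactly what a different block size causes) can flip the last bits of a sum. A tiny self-contained illustration:

```python
# IEEE-754 double addition is not associative: regrouping the same
# three numbers changes the rounding at each step.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                  # 0.6000000000000001
print(a + (b + c))                  # 0.6
print((a + b) + c == a + (b + c))   # False
```

The same effect, scaled up to millions of accumulations inside a blocked matrix product, is why two builds of the same library can disagree in the last digits.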

For the underlying BLAS, it's even worse: if an optimized version is used, fused multiply-add FPU instructions might be used when available, for example, and they are not available on every FPU. AFAIK, MATLAB uses ATLAS (http://math-atlas.sourceforge.net/), and you'll have to inquire about how the library was built... You would have to track down differences in the results of basic algebra operations (like matrix*vector or matrix*matrix...).

UPDATE: Even if ILAENV were the same, QR uses elementary rotations, so it obviously depends on the sin/cos implementation. Unfortunately, no standard specifies exactly how sin and cos should behave bitwise; they can be off by a few ulp from the correctly rounded exact result, they differ from one library to another, and they will give different results on different architectures/compilers (they are hardwired in the x87 FPU). So unless you provide your own version of these functions (or work in Ada), compile them with carefully chosen compiler options, and perhaps finely control the FPU modes, there is close to no chance of getting exactly the same results on different architectures... You would also have to ask MathWorks whether they took special care to ensure floating-point-deterministic results when they compiled those libraries.
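For a sense of scale, "a few ulp" is tiny but nonzero. Python's math module (3.9+) can show the ulp spacing around a sin result; the value here is just an illustration of the granularity at which two libm implementations may legitimately disagree:

```python
import math

s = math.sin(1.0)      # ~0.8414709848078965
gap = math.ulp(s)      # distance to the next representable double
print(s, gap)          # gap is 2**-53 for values in [0.5, 1)

# Two libms whose sin results differ by one ulp return doubles this close:
print(math.nextafter(s, 1.0) - s == gap)   # True
```

A one-ulp difference in sin, fed through thousands of rotations, is already enough to make two machines' tables disagree in the last printed digits.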

answered Aug 24 '12 at 12:08

That depends on the MATLAB implementation. Do you get the same result when you rerun on the same architecture? If yes, the problem may be caused by precision. Sometimes it is caused by different FPUs (floating-point units) in the CPUs. You could try more 32-bit/64-bit machines with different CPUs.

The best answer would come from your MATLAB provider. Just call them if you have a valid license.

Agreed with the answer here.

One cause of differences is that if calculations are done with the x87 instructions, intermediate values are held in 80-bit precision. Depending on compiler optimisations, numbers may stay at 80 bits for a few operations before being truncated back to 64 bits. This can cause variations. See http://gcc.gnu.org/wiki/x87note for more information.

The gcc man pages say that using SSE (instead of 387) is the default on x86-64 platforms. You should be able to force it on 32-bit using something like -mfpmath=sse -msse -msse2.

answered Aug 24 '12 at 03:08

I would hope that MATLAB would use the same precision throughout. The fact that one CPU is 32-bit and another is 64-bit should not change the way arithmetic is done -- IEEE floats should be identical in all cases, and integer arithmetic should be done to the same precision. - Hot Licks

@HotLicks Agreed, but unfortunately it might be a case of wishful thinking :) If you try to compile ATLAS and LAPACK so that they behave bitwise identically on several architectures, you will understand how hard it is, and why MathWorks cannot be blamed... The libraries they use don't fulfill such a requirement. Of course, they could rewrite every library they use, and maybe emulate floating-point operations! That would be a new product; call it slowlab (but it will be slow to write too, so don't expect a release within the next 10 years). - aka.nice
