Unreasonable error in Hadoop

I am using


where is dependent on other .so's. I have added each such .so to the Distributed Cache. Interestingly, if I leave out any one of the .so's, the corresponding exception is thrown (that this particular .so was not found). That means all the dependencies are present in the Mapper. So why am I getting this weird, unreasonable error when loading ?

IOException: Task process exit with nonzero status of 134 hadoop

I couldn't find any solution to this, so any help would be greatly appreciated.
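As background on the error itself: exit statuses above 128 conventionally mean the process was killed by signal (status − 128), so 134 decodes to signal 6, SIGABRT, i.e. the child JVM aborted rather than exited normally. A minimal sketch of that arithmetic:

```shell
# Exit statuses above 128 mean "killed by signal (status - 128)".
# 134 - 128 = 6, and signal 6 is SIGABRT -- the task's JVM aborted.
kill -l 6                      # prints the name of signal 6: ABRT
sh -c 'kill -ABRT $$'          # abort a throwaway shell...
echo $?                        # ...and its exit status is 134
```

In a Hadoop task this typically points at a native crash in JNI code or an exceeded memory limit, rather than a problem in the Java mapper logic itself.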

Thanks in advance

asked May 3 '12 at 12:05

What errors are in the task logs? Googling your error leads to lots of results. -

The error I posted above came from the mapper itself. I couldn't find a meaningful solution to it. -

What is in the task logs? A code 134 means the JVM crashed. Does your code that uses the native libraries work outside of Hadoop? Have you tried upping the amount of task memory? -

No, I'm using JNI to call those native libraries, and it seems I am loading too many libraries, exceeding the task's upper limit. Do you have an idea which parameter has to be upped? -
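For the "which parameter" question: on Hadoop 1.x (current when this was asked), the limits that usually matter are the child JVM heap and the task's virtual-memory ulimit. A hypothetical `mapred-site.xml` fragment, with illustrative values only:

```xml
<!-- Illustrative values; tune for your cluster. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>  <!-- heap size for each task JVM -->
</property>
<property>
  <name>mapred.child.ulimit</name>
  <!-- virtual memory cap for the task, in KB; it must leave headroom
       above the Java heap for JNI / native-library allocations,
       otherwise the JVM can die with SIGABRT (exit status 134) -->
  <value>3145728</value>
</property>
```

The key design point is that native libraries loaded through JNI allocate outside the Java heap, so raising `-Xmx` alone does not help if the ulimit is what the task is hitting.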

0 Answers
