Unreasonable error in Hadoop
Viewed 513 times
I am using
System.loadLibrary("native1");
where libnative1.so depends on other .so files. I have added each such .so to the Distributed Cache. Interestingly, if I leave out any one of the .so files, it throws the corresponding exception (that this particular .so was not found). That means all dependencies are present in the Mapper. Then why am I getting this weird and unreasonable error when I load native1.so?
IOException: Task process exit with nonzero status of 134
I couldn't find any solution to this, so any help would be much appreciated.
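Roughly, this is how I ship and load the libraries (a sketch; the HDFS paths and library names here are placeholders, not my real ones):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;

public class NativeLibSetup {
    // Driver side: ship every dependent .so to the task nodes. The "#name"
    // fragment symlinks each file into the task's working directory.
    public static void addLibs(Configuration conf) throws Exception {
        DistributedCache.addCacheFile(
            new URI("hdfs:///libs/libdep1.so#libdep1.so"), conf);
        DistributedCache.addCacheFile(
            new URI("hdfs:///libs/libnative1.so#libnative1.so"), conf);
        DistributedCache.createSymlink(conf);
    }

    // Mapper side: "native1" is resolved against java.library.path, which
    // must include the working directory (e.g. -Djava.library.path=. in
    // mapred.child.java.opts) for the symlinked files to be found.
    public static void loadLibs() {
        System.loadLibrary("native1");
    }
}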
Thanks in advance
0 Answers
What errors are in the task logs? Googling your error leads to lots of results. - Steve Severance
The error I wrote above came from the mapper itself. I couldn't find a meaningful solution to it. - Harsh
What is in the task logs? A code 134 means the JVM crashed. Does your code that uses the native libraries work outside of hadoop? Have you tried upping the amount of task memory? - Steve Severance
No, I'm using JNI to call those native libraries, and it seems I am loading so many libraries that I exceed the task's memory limit. Do you have an idea of which parameter has to be upped? - Harsh
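For anyone landing here: the per-task memory knobs being discussed above, assuming the classic (pre-YARN) property names, are roughly these; the values are illustrative and cluster-dependent, not recommendations:

import org.apache.hadoop.conf.Configuration;

public class TaskMemoryConfig {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        // Raise the task JVM heap, and point java.library.path at the
        // working directory where the Distributed Cache symlinks the .so files.
        conf.set("mapred.child.java.opts", "-Xmx1024m -Djava.library.path=.");
        // Some clusters also enforce a per-task virtual-memory ulimit (in KB);
        // native libraries count against it even though they are off-heap.
        conf.set("mapred.child.ulimit", "2097152");
        return conf;
    }
}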