Unable to open jar file when running hadoop jar
Viewed 242 times
I have taken the standard WordCount program; below is the standard example code:
package myorg.org;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
I created a Java project in Eclipse and added hadoop-common-2.0.0-cdh4.3.0.jar and hadoop-core-2.0.0-mr1-cdh4.3.0.jar, since we are using Hadoop. It compiled successfully, I built a jar file and copied it onto the edge server. Then I created 2 sample input files and ran the hadoop jar command, but it says it is unable to open the jar file.
Below is the command I used to run the jar:
hadoop jar /home/a491882/Map-Reduce/WordCount.jar /home/a491882/Map-Reduce/input /home/a491882/Map-Reduce/output
I have the jar file present in that location, and the input files too, but I am unable to find why this error is coming. In our Hadoop cluster we are using the hadoop-common-2.0.0-cdh4.3.0.jar and hadoop-core-2.0.0-mr1-cdh4.3.0.jar files.
Please suggest what the problem might be.
0 Answers
Please clean your question up. If you want people to spend the time answering you, take a few extra minutes to write a sensible question. - Casey
Does the execution result in a stacktrace? What is the actual output produced? - Casey
When I just run the hadoop jar command from the Hadoop server, it gives the error "unable to open jar file" for the jar which I created from Eclipse. - user2883028
Looks like an easy Java problem: it could not find the jar file, or it could not be decompressed ... - xhudik
I found the error: when I copy the jar onto the Hadoop server and run the hadoop jar command there, it works fine. As I learned from the Cloudera community, hadoop jar with a jar stored in HDFS will work only if using Oozie. - user2883028
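Based on the resolution above, a hedged sketch of a working invocation, assuming the jar sits on the edge node's local filesystem while the job's input/output paths live in HDFS (the /user/a491882/wordcount/... HDFS paths below are illustrative, not from the original question):

```shell
# hadoop jar expects a LOCAL filesystem path to the jar; pointing it at a
# jar stored in HDFS works only when the job is launched through Oozie.
# If the jar was uploaded to HDFS, copy it back to the local filesystem first:
hdfs dfs -get /user/a491882/wordcount/WordCount.jar /home/a491882/Map-Reduce/

# The input and output arguments, by contrast, are resolved against HDFS,
# so the sample input files must be uploaded there:
hdfs dfs -mkdir -p /user/a491882/wordcount/input
hdfs dfs -put /home/a491882/Map-Reduce/input/* /user/a491882/wordcount/input/

# Run the job: local jar path, HDFS input path, HDFS output path
# (the output directory must not already exist).
hadoop jar /home/a491882/Map-Reduce/WordCount.jar \
    /user/a491882/wordcount/input /user/a491882/wordcount/output

# Inspect the result:
hdfs dfs -cat /user/a491882/wordcount/output/part-00000
```

The main-class argument is omitted here, matching the original command, which assumes the jar's manifest declares a Main-Class; otherwise the class name (e.g. myorg.org.WordCount) must be given right after the jar path.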