
Environment Setup: How to Connect Eclipse to a Remote Hadoop Cluster for Debugging

Posted: 2022-12-01 22:01:57


Follow the DLab数据实验室 (DLab Data Lab) WeChat official account and learn big data together~

A note before we begin: I finally have some free time, so I plan to write up things I've learned, starting with the environment setup~

Friends getting into big data development these days may go straight to Spark or other query engines, and Hadoop seems to have fallen out of use. Recently I wanted to revisit Hadoop properly and run some experiments, which meant setting up the environment all over again. Although I have configured it countless times, I never wrote the steps down carefully, so I kept stumbling into the same small pitfalls. This time I will learn my lesson. Without further ado: this article assumes you already have a running Hadoop cluster, and walks through how to connect a local Eclipse to that remote cluster for online job debugging.

I. Installing the Plugin in Eclipse

1. Search for hadoop-eclipse-plugin-2.6.5.jar (pick the plugin version that matches your Hadoop version), download it, and drop the jar into the plugins directory of your Eclipse installation;

2. Restart Eclipse; you will see a new Map/Reduce entry under Window -> Show View;

This view is where you configure your remote Hadoop cluster, as shown in the figure below:

Location name: pick any name you like;

Map/Reduce Master: unless your Hadoop cluster specifies otherwise, any port in the default 50000-50020 range works; most people use 50020. Host is the address of your master node; either an IP or a hostname is fine.

DFS Master: same idea; just make sure the port matches your cluster configuration, usually 9000;
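
For reference, the 9000 comes from the fs.defaultFS setting in your cluster's core-site.xml. A sketch of a typical entry (the hostname db-01 is just the example master used later in this article; substitute your own):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://db-01:9000</value>
  </property>
</configuration>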

3. Once the configuration succeeds, double-click the location and the following will appear in the top-left corner of Eclipse:

If everything is configured correctly, you will see the files stored on your HDFS there, which means we have successfully connected to the Hadoop cluster;
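
Incidentally, if you would rather sanity-check the connection from code than from the plugin view, here is a minimal sketch using the HDFS Java API (the class name is my own; hdfs://db-01:9000 matches the address used in the WordCount example below):

package cn.edu.ruc.dbiir.mrtest;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same address as the DFS Master configured in the plugin
        FileSystem fs = FileSystem.get(URI.create("hdfs://db-01:9000"), conf);
        // List the HDFS root, which is exactly what the plugin view shows
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}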

II. A WordCount Test

1. Create a simple Maven project, named mrtest for example (you can also generate it from the command line, as shown below);
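
If you prefer the command line to the Eclipse wizard, the Maven quickstart archetype produces the same project skeleton (a sketch; groupId and artifactId are taken from the pom.xml below):

mvn archetype:generate -DgroupId=cn.edu.ruc.dbiir -DartifactId=mrtest -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false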

2. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>cn.edu.ruc.dbiir</groupId>
  <artifactId>mrtest</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>mrtest</name>
  <url></url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <hadoop.version>2.6.5</hadoop.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>${hadoop.version}</version>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
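
One note before moving on: the WordCount code below calls setJar() with the path of the jar that Maven builds, because when you submit straight from Eclipse the compiled classes only live in your local workspace and an actual jar is needed to ship to the cluster. So build the project first (standard Maven, run from the project root):

mvn clean install

The jar then lands at target/mrtest-0.0.1-SNAPSHOT.jar inside the project directory; that is the path to fill in below.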

3. WordCount.java

package cn.edu.ruc.dbiir.mrtest;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Logger;

public class WordCount {

    public static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split each input line on spaces and emit <word, 1> pairs
            String data = value.toString();
            String[] words = data.split(" ");
            Logger logger = Logger.getLogger(WCMapper.class);
            logger.error("Map-key:" + key + "|" + "Map-value:" + value);
            for (String w : words) {
                context.write(new Text(w), new IntWritable(1));
            }
        }
    }

    public static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> value, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word
            int total = 0;
            Logger logger = Logger.getLogger(WordCount.class);
            logger.error("Reduce-key:" + key + "|" + "Reduce-value:" + value);
            for (IntWritable v : value) {
                total += v.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        BasicConfigurator.configure();
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://db-01:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "db-01");

        // 1. Create a job and the task entry point
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCount.class);
        // Point at the jar built by mvn install so it can be shipped to the cluster
        ((JobConf) job.getConfiguration())
                .setJar("Your own path by maven install/mrtest/target/mrtest-0.0.1-SNAPSHOT.jar");

        // 2. Set the job's mapper and its output types <k2, v2>
        job.setMapperClass(WCMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 3. Set the job's reducer and its output types <k4, v4>
        job.setReducerClass(WCReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 4. Set the job's input and output paths
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 5. Additional parameters
        job.setNumReduceTasks(2);

        // 6. Submit the job to YARN
        boolean res = job.waitForCompletion(true);
        System.exit(res ? 0 : 1);
    }
}

4. Run it. In Run Configurations, fill in the program arguments as follows,
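
For example, with the fs.defaultFS used in the code above, the two program arguments might look like this (hypothetical paths, substitute your own; note that the output directory must not already exist, or the job will abort):

hdfs://db-01:9000/user/hadoop/input hdfs://db-01:9000/user/hadoop/output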

That completes the WordCount example of remotely debugging a Hadoop cluster from a local Eclipse. To recap:

1. Download and install the plugin (put the jar in Eclipse's plugins directory);
2. Configure the cluster's Map/Reduce and HDFS information in the plugin;
3. Write the WordCount code;
4. Run it.
