Example: Connecting to HDFS HA from Java and Invoking a MapReduce JAR
Published: 2016-12-28

Connecting to HDFS HA with the Java API

The code is as follows:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void main(String[] args) {
    // Client-side HA configuration: address the cluster by its logical
    // nameservice, list both NameNodes, and set the failover proxy provider.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
    conf.set("dfs.nameservices", "hadoop2cluster");
    conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
    conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
    conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    FileSystem fs = null;
    try {
        fs = FileSystem.get(conf);
        // Print the names of the entries directly under the HDFS root.
        FileStatus[] list = fs.listStatus(new Path("/"));
        for (FileStatus file : list) {
            System.out.println(file.getPath().getName());
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (fs != null) { // avoid a NullPointerException if FileSystem.get() failed
            try {
                fs.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
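
In practice the same HA client settings usually already live in the cluster's core-site.xml and hdfs-site.xml, so they do not have to be hard-coded. The following is only a minimal sketch of that variant; the /etc/hadoop/conf paths are placeholder assumptions, not part of the original example:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: load the cluster's own configuration files instead of
// calling conf.set(...) by hand. The file paths below are hypothetical.
Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

// FileSystem implements Closeable, so try-with-resources closes it for us.
try (FileSystem fs = FileSystem.get(conf)) {
    System.out.println(fs.exists(new Path("/input")));
} catch (IOException e) {
    e.printStackTrace();
}

Either way the client only ever addresses the logical nameservice hadoop2cluster; ConfiguredFailoverProxyProvider decides which of the two NameNodes is currently active.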

Calling a MapReduce Program from the Java API

The code is as follows:

// Run the bundled WordCount example through RunJar (org.apache.hadoop.util.RunJar),
// passing the YARN and HDFS HA settings as -D generic options so the job client
// can resolve the nameservice and submit to the ResourceManager.
// Note: RunJar.main declares "throws Throwable", so the enclosing method must
// declare or handle it.
String[] args = new String[24];
args[0]  = "/usr/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar";
args[1]  = "wordcount";
args[2]  = "-D";
args[3]  = "yarn.resourcemanager.address=10.0.1.165:8032";
args[4]  = "-D";
args[5]  = "yarn.resourcemanager.scheduler.address=10.0.1.165:8030";
args[6]  = "-D";
args[7]  = "fs.defaultFS=hdfs://hadoop2cluster/";
args[8]  = "-D";
args[9]  = "dfs.nameservices=hadoop2cluster";
args[10] = "-D";
args[11] = "dfs.ha.namenodes.hadoop2cluster=nn1,nn2";
args[12] = "-D";
args[13] = "dfs.namenode.rpc-address.hadoop2cluster.nn1=10.0.1.165:8020";
args[14] = "-D";
args[15] = "dfs.namenode.rpc-address.hadoop2cluster.nn2=10.0.1.166:8020";
args[16] = "-D";
args[17] = "dfs.client.failover.proxy.provider.hadoop2cluster=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider";
args[18] = "-D";
args[19] = "fs.hdfs.impl=org.apache.hadoop.hdfs.DistributedFileSystem";
args[20] = "-D";
args[21] = "mapreduce.framework.name=yarn";
args[22] = "/input";  // HDFS input directory
args[23] = "/out01";  // HDFS output directory (must not already exist)
RunJar.main(args);
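
As an alternative to going through RunJar, the same kind of job can be submitted directly with the MapReduce Job API, carrying the identical HA and YARN settings in the Configuration. The sketch below is illustrative only: the class WordCountHaSubmit and its mapper/reducer are hypothetical stand-ins written here, not classes taken from the examples jar.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountHaSubmit {

    // Hypothetical mapper: emits (word, 1) for every token in a line.
    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
    }

    // Hypothetical reducer: sums the counts per word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same HA / YARN settings as in the RunJar variant above.
        conf.set("fs.defaultFS", "hdfs://hadoop2cluster");
        conf.set("dfs.nameservices", "hadoop2cluster");
        conf.set("dfs.ha.namenodes.hadoop2cluster", "nn1,nn2");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn1", "10.0.1.165:8020");
        conf.set("dfs.namenode.rpc-address.hadoop2cluster.nn2", "10.0.1.166:8020");
        conf.set("dfs.client.failover.proxy.provider.hadoop2cluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "10.0.1.165:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "10.0.1.165:8030");

        Job job = Job.getInstance(conf, "wordcount-ha");
        job.setJarByClass(WordCountHaSubmit.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/input"));
        FileOutputFormat.setOutputPath(job, new Path("/out01"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

As with the RunJar variant, the output directory /out01 must not exist before the job is submitted, otherwise the job fails at startup.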
