
I have configured an Apache Hadoop cluster with 1 NameNode and 2 DataNodes on VMware Workstation. The NameNode runs fine and passwordless SSH login is set up, but when I try to start a DataNode I get the following error: INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s)

Both DataNodes keep retrying the connection to the NameNode and eventually fail, even though I can ping and SSH to the NameNode from them without any errors.

Below is the log from the DataNode:

2015-11-14 19:54:22,622 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting DataNode 
STARTUP_MSG: host = dn2.hcluster.com/192.168.155.133 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 1.2.1 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013 
STARTUP_MSG: java = 1.8.0_65 
************************************************************/ 
2015-11-14 19:54:23,447 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2015-11-14 19:54:23,485 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2015-11-14 19:54:23,486 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started 
2015-11-14 19:54:23,876 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2015-11-14 19:54:25,720 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:27,723 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:28,726 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:29,729 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:30,733 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:31,753 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:32,755 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:33,758 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:34,762 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:35,764 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: nn1.hcluster.com/192.168.155.131:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS) 
2015-11-14 19:54:35,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to nn1.hcluster.com/192.168.155.131:9000 failed on local exception: java.net.NoRouteToHostException: No route to host 
     at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1118) 
     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) 
     at com.sun.proxy.$Proxy4.getProtocolVersion(Unknown Source) 
     at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374) 
     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453) 
     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335) 
     at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812) 
Caused by: java.net.NoRouteToHostException: No route to host 
     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
     at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511) 
     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481) 
     at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457) 
     at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583) 
     at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205) 
     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249) 
     at org.apache.hadoop.ipc.Client.call(Client.java:1093) 
     ... 16 more 

2015-11-14 19:54:35,952 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down DataNode at dn2.hcluster.com/192.168.155.133 
************************************************************/ 

From DataNode 1 and 2, the NameNode and its web GUI are reachable, and all three machines can communicate with each other via ping and passwordless SSH as well. Please help.
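
So far I have only checked ICMP ping and SSH (port 22); a rough port-level check like the one below (assuming nc is available on the DataNodes) would show whether the NameNode's IPC port 9000 itself is reachable:

# Run from dn1/dn2: test the NameNode's IPC port directly,
# not just ping or SSH.
nc -zv nn1.hcluster.com 9000
# A "No route to host" here would match the error in the DataNode log.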

core-site.xml on the NameNode:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://nn01.hcluster.com:9000</value>
  </property>
</configuration>

Answer


Make sure your NameNode is running fine. Otherwise, check the IP address and hostname in the /etc/hosts file.
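
For example, every node could have /etc/hosts entries along these lines (a minimal sketch; the NameNode and dn2 addresses are taken from the log above, and dn1 needs its own entry). Note that the posted core-site.xml uses nn01.hcluster.com while the DataNode log shows nn1.hcluster.com, so whichever name the config uses must resolve on every node:

# /etc/hosts (identical on nn1, dn1 and dn2)
192.168.155.131   nn01.hcluster.com   nn1.hcluster.com   nn1
192.168.155.133   dn2.hcluster.com    dn2
# add the matching line for dn1 with its own IP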

Make sure you have specified the host "nn01.hcluster.com" there.
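
A quick way to verify this on each DataNode (a sketch; in a Hadoop 1.x install the config usually lives in $HADOOP_HOME/conf, adjust the path if yours differs):

# Does the NameNode hostname resolve to its real IP?
getent hosts nn01.hcluster.com
getent hosts nn1.hcluster.com
# Which address does the DataNode actually try to contact?
grep -A1 'fs.default.name' $HADOOP_HOME/conf/core-site.xml
# If both look correct, restart the DataNode and check its log again.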