Taiwan Hadoop Forum

Taiwan Hadoop technical discussion board
All times are UTC + 8 hours
 Post subject: Problems after starting Hadoop? (solved, thanks to jazz's guidance)
Posted: 2010-04-08, 17:51
Offline

Joined: 2010-04-08, 17:18
Posts: 5
Hi everyone. After starting Hadoop I ran into some problems and would appreciate some help.

I am running a three-machine cluster: the first machine is the namenode + jobtracker, and the other two machines are datanodes + tasktrackers.

This is what happens after startup:
root@hadooper-1:/opt/hadoop# /opt/hadoop/bin/start-dfs.sh
starting namenode, logging to /tmp/hadoop/logs/hadoop-root-namenode-hadooper-1.out
192.168.1.20: starting datanode, logging to /tmp/hadoop/logs/hadoop-root-datanode-hadooper-2.out
192.168.1.30: starting datanode, logging to /tmp/hadoop/logs/hadoop-root-datanode-hadooper-3.out
192.168.1.10: starting datanode, logging to /tmp/hadoop/logs/hadoop-root-datanode-hadooper-1.out
192.168.1.10: starting secondarynamenode, logging to /tmp/hadoop/logs/hadoop-root-secondarynamenode-hadooper-1.out
root@hadooper-1:/opt/hadoop# /opt/hadoop/bin/start-mapred.sh
starting jobtracker, logging to /tmp/hadoop/logs/hadoop-root-jobtracker-hadooper-1.out
192.168.1.20: starting tasktracker, logging to /tmp/hadoop/logs/hadoop-root-tasktracker-hadooper-2.out
192.168.1.10: starting tasktracker, logging to /tmp/hadoop/logs/hadoop-root-tasktracker-hadooper-1.out
192.168.1.30: starting tasktracker, logging to /tmp/hadoop/logs/hadoop-root-tasktracker-hadooper-3.out

Opening http://host1:50070 in the browser shows:
Cluster Summary
6 files and directories, 0 blocks = 6 total. Heap Size is 4.94 MB / 992.31 MB (0%)
Configured Capacity : 0 KB
DFS Used : 0 KB
Non DFS Used : 0 KB
DFS Remaining : 0 KB
DFS Used% : 100 %
DFS Remaining% : 0 %
Live Nodes : 0
Dead Nodes : 0

Opening http://host1:50030 in the browser shows:
Cluster Summary (Heap Size is 4.94 MB/992.31 MB)
Maps Reduces Total Submissions Nodes Map Task Capacity Reduce Task Capacity Avg. Tasks/Node
0 0 0 0 0 0 -
Blacklisted Nodes
0

No information from any of the machines shows up at all. What have I misconfigured?


Last edited by sarsher on 2010-04-15, 11:45; edited 1 time in total.

 Post subject: Re: Problems after starting Hadoop?
Posted: 2010-04-08, 21:26
Offline

Joined: 2009-11-09, 19:52
Posts: 2897
sarsher wrote:
I am running a three-machine cluster: the first machine is the namenode + jobtracker, and the other two machines are datanodes + tasktrackers.
This is what happens after startup:
root@hadooper-1:/opt/hadoop# /opt/hadoop/bin/start-dfs.sh
root@hadooper-1:/opt/hadoop# /opt/hadoop/bin/start-mapred.sh

Opening http://host1:50070 in the browser shows:
Live Nodes : 0
Opening http://host1:50030 in the browser shows:
Maps Reduces Total Submissions Nodes Map Task Capacity Reduce Task Capacity Avg. Tasks/Node
0 0 0 0 0 0 -
No information from any of the machines shows up at all. What have I misconfigured?


1. My first guess is that this is related to namenode -format. Did you format the namenode?
2. If you did, please provide the following:
(1) Hadoop version: 0.18.3? 0.20.2?
(2) After running start-dfs.sh and start-mapred.sh, run the jps command on every machine and report the results.

Overall it looks like the namenode and jobtracker are running normally, but the datanodes and tasktrackers are not.
If jps does show datanode and tasktracker processes yet there are still no live nodes, the namespace IDs may be inconsistent.
As for why none of the tasktrackers have checked in, my first guess is... something network-related...
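One quick way to test the network guess, sketched here against a mock hosts file (hostnames and IPs taken from this thread; /tmp/hosts.sample is only a stand-in for the real /etc/hosts): on Ubuntu the installer often maps the machine's own hostname to 127.0.1.1, and if the namenode resolves its name to a loopback address it binds port 9000 where remote datanodes cannot reach it.

```shell
# Mock /etc/hosts for demonstration; on a real node inspect /etc/hosts itself.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
127.0.1.1    hadooper-1
192.168.1.10 hadooper-1
EOF
# A 127.x mapping for the namenode's hostname means it may bind loopback only.
bad=$(awk '$2 == "hadooper-1" && $1 ~ /^127\./ {print $1; exit}' /tmp/hosts.sample)
echo "loopback mapping: ${bad:-none}"
```

On the real machines, `netstat -tlnp | grep 9000` on hadooper-1 shows which address the namenode is listening on, and `telnet 192.168.1.10 9000` from hadooper-2/3 tells you directly whether the port is reachable across the network.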

- Jazz


 Post subject: Re: Problems after starting Hadoop?
Posted: 2010-04-09, 08:19
Offline

Joined: 2010-04-08, 17:18
Posts: 5
Hi jazz. I am running Ubuntu 9.04 desktop, with Hadoop 0.20.2. Below are the results of running jps on each machine after startup.

root@hadooper-1:/opt/hadoop# jps
29187 JobTracker
28995 SecondaryNameNode
30056 Jps
29312 TaskTracker
28618 NameNode

root@hadooper-2:~# jps
21062 TaskTracker
21383 Jps

root@hadooper-3:~# jps
21045 Jps
20722 TaskTracker

The IPs of the three machines are 192.168.1.10, 192.168.1.20, and 192.168.1.30.

Thank you again for your guidance, jazz.


 Post subject: Re: Problems after starting Hadoop?
Posted: 2010-04-11, 21:49
Offline

Joined: 2009-11-09, 19:52
Posts: 2897
sarsher wrote:
root@hadooper-1:/opt/hadoop# jps
29187 JobTracker
28995 SecondaryNameNode
30056 Jps
29312 TaskTracker
28618 NameNode

root@hadooper-2:~# jps
21062 TaskTracker
21383 Jps

root@hadooper-3:~# jps
21045 Jps
20722 TaskTracker


None of them shows a DataNode.

Could you provide the datanode startup logs? From your first message it looks like you are running as root, so they should be at:

/tmp/hadoop/logs/hadoop-root-datanode-hadooper-1.log
/tmp/hadoop/logs/hadoop-root-datanode-hadooper-2.log
/tmp/hadoop/logs/hadoop-root-datanode-hadooper-3.log

As I recall, in 0.20 the JobTracker cannot operate normally while HDFS is in Safe Mode.
So the first step is to find out why the datanodes cannot connect to the namenode.
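The triage above can be scripted: check each node's jps output for a DataNode line, and only dig into the datanode log when it is missing. A minimal sketch, with the jps output posted in this thread pasted in as a string (on a real node you would pipe `jps` itself):

```shell
# Sample jps output from hadooper-1 in this thread; note no DataNode line.
jps_out='29187 JobTracker
28995 SecondaryNameNode
29312 TaskTracker
28618 NameNode'
# 'DataNode$' matches only a trailing "DataNode", not "SecondaryNameNode".
if printf '%s\n' "$jps_out" | grep -q 'DataNode$'; then
  echo "DataNode is running"
else
  echo "DataNode missing -> read the datanode .log file for the reason"
fi
```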

- Jazz


 Post subject: Re: Problems after starting Hadoop?
Posted: 2010-04-12, 09:39
Offline

Joined: 2010-04-08, 17:18
Posts: 5
Hi jazz, here are the logs. Since the logs on hadooper-2 and hadooper-3 are much longer than on hadooper-1, I am attaching only the tail of each.
Thanks again for your help, jazz.

hadooper-1
---------------------------------------------------------------------------------------------------------------------
2010-04-09 12:10:13,821 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-1/192.168.1.10
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 12:10:22,347 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop/hadoop-root/dfs/data is not formatted.
2010-04-09 12:10:22,347 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2010-04-09 12:10:22,526 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2010-04-09 12:10:22,534 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2010-04-09 12:10:22,545 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2010-04-09 12:10:32,866 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2010-04-09 12:10:33,189 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2010-04-09 12:10:33,190 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2010-04-09 12:10:33,190 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2010-04-09 12:10:33,190 INFO org.mortbay.log: jetty-6.1.14
2010-04-09 12:11:50,133 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2010-04-09 12:11:50,150 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2010-04-09 12:12:05,213 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2010-04-09 12:12:05,225 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2010-04-09 12:12:05,225 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2010-04-09 12:12:05,239 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2010-04-09 12:12:05,239 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2010-04-09 12:12:05,241 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2010-04-09 12:12:05,242 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(hadooper-1:50010, storageID=, infoPort=50075, ipcPort=50020)
2010-04-09 12:12:05,279 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id DS-586620072-192.168.1.10-50010-1270786325246 is assigned to data-node 192.168.1.10:50010
2010-04-09 12:12:05,281 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.10:50010, storageID=DS-586620072-192.168.1.10-50010-1270786325246, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop/hadoop-root/dfs/data/current'}
2010-04-09 12:12:05,282 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2010-04-09 12:12:05,340 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 8 msecs
2010-04-09 12:12:05,343 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2010-04-09 12:14:53,351 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 5 msecs
2010-04-09 13:14:51,558 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 4 msecs
2010-04-09 14:14:52,765 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 2 msecs
2010-04-09 15:14:50,967 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 3 msecs
2010-04-09 16:14:52,172 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 3 msecs
2010-04-09 17:14:53,394 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 3 msecs
2010-04-09 18:14:51,598 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 2 msecs
2010-04-09 19:14:52,798 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 3 msecs
2010-04-09 19:27:18,273 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-1/192.168.1.10
************************************************************/
2010-04-09 19:28:51,836 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-1/192.168.1.10
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:00,386 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

2010-04-09 19:29:00,390 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-1/192.168.1.10
************************************************************/

hadooper-2
---------------------------------------------------------------------------------------------------------------------
2010-04-09 19:05:49,441 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 0 time(s).
2010-04-09 19:06:10,445 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 1 time(s).
[... 42 near-identical retry lines (tries 2 through 43, one roughly every 21 seconds) trimmed ...]
2010-04-09 19:21:13,637 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 44 time(s).
2010-04-09 19:21:33,643 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.SocketTimeoutException: Call to /192.168.1.10:9000 failed on socket timeout exception: java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.1.10:9000]
at org.apache.hadoop.ipc.Client.wrapException(Client.java:771)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:702)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.1.10:9000]
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 5 more

2010-04-09 19:21:54,677 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 0 time(s).
2010-04-09 19:22:15,681 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 1 time(s).
[... 13 near-identical retry lines (tries 2 through 14) trimmed ...]
2010-04-09 19:27:09,737 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 15 time(s).
2010-04-09 19:27:27,653 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-2/192.168.1.20
************************************************************/
2010-04-09 19:28:51,082 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-2/192.168.1.20
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:00,547 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

2010-04-09 19:29:00,549 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-2/192.168.1.20
************************************************************/

hadooper-3
---------------------------------------------------------------------------------------------------------------------
2010-04-09 19:05:49,543 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 0 time(s).
2010-04-09 19:06:10,547 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 1 time(s).
[... 42 near-identical retry lines (tries 2 through 43, one roughly every 21 seconds) trimmed ...]
2010-04-09 19:21:13,719 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 44 time(s).
2010-04-09 19:21:33,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.SocketTimeoutException: Call to /192.168.1.10:9000 failed on socket timeout exception: java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.1.10:9000]
at org.apache.hadoop.ipc.Client.wrapException(Client.java:771)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:702)
at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.SocketTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/192.168.1.10:9000]
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:213)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 5 more

2010-04-09 19:21:54,767 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 0 time(s).
2010-04-09 19:22:15,771 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 1 time(s).
[... 14 near-identical retry lines (tries 2 through 15) trimmed ...]
2010-04-09 19:27:30,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 16 time(s).
2010-04-09 19:27:31,833 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: /192.168.1.10:9000. Already tried 0 time(s).
2010-04-09 19:27:32,101 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-3/192.168.1.30
************************************************************/
2010-04-09 19:28:55,803 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-3/192.168.1.30
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:05,025 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:233)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:148)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:298)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)

2010-04-09 19:29:05,027 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadooper-3/192.168.1.30
************************************************************/


 Subject: Re: Problems after starting Hadoop?
Posted: 2010-04-12, 22:28
Offline

Joined: 2009-11-09, 19:52
Posts: 2897
sarsher wrote:
hadooper-1
************************************************************/
2010-04-09 19:28:51,836 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-1/192.168.1.10
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:00,386 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474
---------------------------------------------------------------------------------------------------------------------
hadooper-2
************************************************************/
2010-04-09 19:28:51,082 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-2/192.168.1.20
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:00,547 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474
---------------------------------------------------------------------------------------------------------------------
hadooper-3
************************************************************/
2010-04-09 19:28:55,803 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hadooper-3/192.168.1.30
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... ranch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-04-09 19:29:05,025 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop/hadoop-root/dfs/data: namenode namespaceID = 1490752479; datanode namespaceID = 208887474


This is a classic error: the namespaceIDs do not match.

The first time you format the namenode and start the datanodes:
 -> NameNode namespaceID = 208887474
 -> DataNode namespaceID = 208887474

The second time you format the namenode and start the datanodes:
 -> NameNode namespaceID = 1490752479
 -> DataNode namespaceID = 208887474
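The mismatch can be confirmed by reading the namespaceID out of the two VERSION files and comparing them. A minimal sketch follows; mock VERSION files are created here purely for illustration, while on a real cluster you would point the variables at /tmp/hadoop/hadoop-root/dfs/name/current/VERSION and the corresponding data-side path instead:

```shell
#!/bin/sh
# Sketch: compare the namenode's and a datanode's namespaceID.
# Mock VERSION files stand in for the real ones under dfs/name and dfs/data.
mkdir -p /tmp/ns-demo/name /tmp/ns-demo/data
printf 'namespaceID=1490752479\nstorageType=NAME_NODE\n' > /tmp/ns-demo/name/VERSION
printf 'namespaceID=208887474\nstorageType=DATA_NODE\n'  > /tmp/ns-demo/data/VERSION

# Pull the value after "namespaceID=" from each file.
nn_id=$(grep '^namespaceID=' /tmp/ns-demo/name/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/ns-demo/data/VERSION | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
    echo "OK: namespaceIDs match ($nn_id)"
else
    echo "MISMATCH: namenode=$nn_id datanode=$dn_id"
fi
```

With the two IDs from this thread, the script reports a mismatch, which is exactly the condition the DataNode refuses to start under.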

Solution 1: wipe all data on the datanodes. Only suitable if you have never stored any data in HDFS.

Code:
hadooper-1:~$ rm -rf /tmp/hadoop/hadoop-root/dfs/data
hadooper-1:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode

hadooper-2:~$ rm -rf /tmp/hadoop/hadoop-root/dfs/data
hadooper-2:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode

hadooper-3:~$ rm -rf /tmp/hadoop/hadoop-root/dfs/data
hadooper-3:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode


Solution 2: change the datanodes' namespaceID
On every datanode, edit /tmp/hadoop/hadoop-root/dfs/data/current/VERSION and change
Code:
namespaceID=208887474
to
namespaceID=1490752479

Then restart the datanodes
Code:
hadooper-1:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode
hadooper-2:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode
hadooper-3:~$ /opt/hadoop/bin/hadoop-daemon.sh start datanode



Solution 3: change the namenode's namespaceID
On the namenode, edit /tmp/hadoop/hadoop-root/dfs/name/current/VERSION and change
Code:
namespaceID=1490752479
to
namespaceID=208887474

Then restart the namenode
Code:
hadooper-1:~$ /opt/hadoop/bin/hadoop-daemon.sh start namenode
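Solution 2 can also be scripted, so that a datanode's VERSION file is rewritten with whatever namespaceID the namenode currently holds. A sketch under the same key=value layout shown above; mock files stand in for the real paths, and the `sed -i` form assumes GNU sed:

```shell
#!/bin/sh
# Sketch: overwrite a datanode's namespaceID with the namenode's current one.
# Mock files stand in for /tmp/hadoop/hadoop-root/dfs/{name,data}/current/VERSION.
mkdir -p /tmp/ns-fix/name /tmp/ns-fix/data
printf 'namespaceID=1490752479\n' > /tmp/ns-fix/name/VERSION
printf 'namespaceID=208887474\nstorageType=DATA_NODE\n' > /tmp/ns-fix/data/VERSION

# Read the authoritative ID from the namenode side...
nn_id=$(grep '^namespaceID=' /tmp/ns-fix/name/VERSION | cut -d= -f2)
# ...and substitute it into the datanode file in place (GNU sed syntax).
sed -i "s/^namespaceID=.*/namespaceID=$nn_id/" /tmp/ns-fix/data/VERSION

grep '^namespaceID=' /tmp/ns-fix/data/VERSION
```

On a real cluster you would run the substitution on each datanode against its own VERSION file, then start the datanode daemon as shown above.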


- Jazz


 Subject: Re: Problems after starting Hadoop?
Posted: 2010-04-13, 11:28
Offline

Joined: 2010-04-08, 17:18
Posts: 5
Hi Jazz, thanks for your guidance; it is working now.
However, I would like to ask a few questions.

1. I followed the tutorial at http://www.classcloud.org/cloud/wiki/NC ... urse090824 ,
so which step did I most likely get wrong to end up with this result?

2. After it started successfully, I stopped Hadoop and started it again, and the same error appeared.
Under normal circumstances, is there any command I should run after stopping Hadoop and before starting it again?

3. Lastly: HDFS is formatted before starting Hadoop for the first time. If I later stop Hadoop and start it again, do I need to format HDFS again?

Thanks once more for your guidance, Jazz; otherwise I would probably still be going in circles (repeatedly reinstalling Ubuntu and Hadoop) ~~~^_^


 Subject: Re: Problems after starting Hadoop?
Posted: 2010-04-13, 21:50
Offline

Joined: 2009-11-09, 19:52
Posts: 2897
sarsher wrote:
1. I followed the tutorial at http://www.classcloud.org/cloud/wiki/NC ... urse090824 ,
so which step did I most likely get wrong to end up with this result?
2. After it started successfully, I stopped Hadoop and started it again, and the same error appeared.
Under normal circumstances, is there any command I should run after stopping Hadoop and before starting it again?
3. Lastly: HDFS is formatted before starting Hadoop for the first time. If I later stop Hadoop and start it again, do I need to format HDFS again?


1, 3. Restarting does not require reformatting HDFS.

Just as we format a disk (C:, D:) before first use but not on every boot,
HDFS only needs to be formatted once; after that you can use stop-all.sh and start-all.sh to stop and start Hadoop repeatedly.
The error was most likely caused by running the hadoop namenode -format command more than once.

2.
This is perhaps our course material's fault for not recommending that you change the hadoop.tmp.dir parameter from /tmp/hadoop-${user.name} to another path.
Since /tmp may be cleared, a reboot can leave you having to re-format the namenode.
Otherwise, a beginner can simply use start-all.sh and stop-all.sh to start and stop the HDFS and MapReduce environment.
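The fragility of keeping HDFS state under /tmp can be shown in miniature: once the directory is cleared, as /tmp may be at reboot, the namenode's metadata is simply gone, which is why a fresh format becomes necessary. A toy sketch with a throwaway directory (the path here is illustrative only):

```shell
#!/bin/sh
# Toy illustration: metadata stored under a tmp-style directory disappears
# when that directory is cleared, as /tmp may be at reboot.
mkdir -p /tmp/hdfs-demo/dfs/name/current
echo 'namespaceID=1490752479' > /tmp/hdfs-demo/dfs/name/current/VERSION

# Simulate the reboot-time cleanup of /tmp:
rm -rf /tmp/hdfs-demo

if [ -f /tmp/hdfs-demo/dfs/name/current/VERSION ]; then
    echo "metadata survived"
else
    echo "metadata gone: the namenode would have to be formatted again"
fi
```

Moving hadoop.tmp.dir to a path that survives reboots avoids this failure mode entirely.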

- Jazz


 Subject: Re: Problems after starting Hadoop? (solved, thanks to jazz's guidance)
Posted: 2010-07-01, 14:29
Offline

Joined: 2010-07-01, 14:27
Posts: 8
I see.
No wonder I had to format after every reboot before it would work.
So all I have to do is move the directory somewhere else?


 Subject: Re: Problems after starting Hadoop? (solved, thanks to jazz's guidance)
Posted: 2010-07-02, 02:19
Offline

Joined: 2009-11-09, 19:52
Posts: 2897
wstd wrote:
I see.
No wonder I had to format after every reboot before it would work.
So all I have to do is move the directory somewhere else?


To be precise, in the configuration files hadoop-site.xml (0.18.x), or core-site.xml / hdfs-site.xml / mapred-site.xml (0.20.x),
you should change at least the hadoop.tmp.dir parameter, for example:

Code:
  <property>
     <name>hadoop.tmp.dir</name>
     <value>/var/lib/hadoop/cache/${user.name}</value>
  </property>


Next, confirm that the default values (in *-default.xml) of the following parameters are still derived from hadoop.tmp.dir...
Code:
  <property>
     <name>dfs.name.dir</name>
     <value>${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
     <name>dfs.data.dir</name>
     <value>${hadoop.tmp.dir}/dfs/data</value>
  </property>
  <property>
      <name>mapred.local.dir</name>
      <value>${hadoop.tmp.dir}/mapred/local</value>
  </property>
  <property>
       <name>mapred.system.dir</name>
       <value>${hadoop.tmp.dir}/mapred/system</value>
  </property>
  <property>
       <name>mapred.temp.dir</name>
       <value>${hadoop.tmp.dir}/mapred/temp</value>
  </property>


- Jazz

