Taiwan Hadoop Forum


All times shown are UTC + 8 hours




Post subject: Hadoop fails to start properly
Posted: 2013-08-06, 09:15
Four machines: one Master (IP 172.16.1.4) and three Slaves (IPs 172.16.1.1, 172.16.1.2, and 172.16.1.3). All of the earlier configuration appeared to be fine, and the hadoop namenode -format command also seemed to run normally, but when I run start-all.sh the NameNode will not start. Details below:

Formatting Hadoop:
[Hadoop@CSCent43 ~]$ hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/08/06 08:24:09 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = CSCent43/172.16.1.4
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_09-icedtea
************************************************************/
Re-format filesystem in /usr/hadoop-12/tmp/dfs/name ? (Y or N) Y
13/08/06 08:24:15 INFO util.GSet: Computing capacity for map BlocksMap
13/08/06 08:24:15 INFO util.GSet: VM type = 64-bit
13/08/06 08:24:15 INFO util.GSet: 2.0% max memory = 932118528
13/08/06 08:24:15 INFO util.GSet: capacity = 2^21 = 2097152 entries
13/08/06 08:24:15 INFO util.GSet: recommended=2097152, actual=2097152
13/08/06 08:24:15 INFO namenode.FSNamesystem: fsOwner=Hadoop
13/08/06 08:24:15 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/06 08:24:15 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/06 08:24:15 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/08/06 08:24:15 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/08/06 08:24:15 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/08/06 08:24:15 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/06 08:24:15 ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /usr/hadoop-12/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1333)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1352)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1261)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1467)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

13/08/06 08:24:15 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at CSCent43/172.16.1.4
************************************************************/

Starting Hadoop:
[Hadoop@CSCent43 ~]$ start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-namenode-CSCent43.out
172.16.1.2: starting datanode, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-datanode-CS605-BD-1.out
172.16.1.3: starting datanode, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-datanode-CS-Cent64-2.out
172.16.1.1: starting datanode, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-datanode-CS-Cent64-1.out
172.16.1.4: starting secondarynamenode, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-secondarynamenode-CSCent43.out
starting jobtracker, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-jobtracker-CSCent43.out
172.16.1.2: starting tasktracker, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-tasktracker-CS605-BD-1.out
172.16.1.1: starting tasktracker, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-tasktracker-CS-Cent64-1.out
172.16.1.3: starting tasktracker, logging to /usr/hadoop-12/libexec/../logs/hadoop-Hadoop-tasktracker-CS-Cent64-2.out

Checking which daemons started:
[Hadoop@CSCent43 ~]$ jps
4869 JobTracker
4983 Jps
4775 SecondaryNameNode

Information from the NameNode log file:
2013-08-06 08:26:50,962 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = CSCent43/172.16.1.4
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_09-icedtea
************************************************************/
2013-08-06 08:26:51,113 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-08-06 08:26:51,124 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-08-06 08:26:51,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-08-06 08:26:51,125 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-08-06 08:26:51,260 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-08-06 08:26:51,278 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-08-06 08:26:51,279 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-08-06 08:26:51,312 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-08-06 08:26:51,312 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2013-08-06 08:26:51,312 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 932118528
2013-08-06 08:26:51,312 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2013-08-06 08:26:51,312 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2013-08-06 08:26:51,329 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=Hadoop
2013-08-06 08:26:51,329 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-08-06 08:26:51,329 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-08-06 08:26:51,340 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-08-06 08:26:51,340 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-08-06 08:26:51,370 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-08-06 08:26:51,389 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-08-06 08:26:51,389 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-08-06 08:26:51,392 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /usr/hadoop-12/tmp/dfs/name
2013-08-06 08:26:51,394 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/hadoop-12/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-06 08:26:51,403 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/hadoop-12/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-08-06 08:26:51,404 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at CSCent43/172.16.1.4
************************************************************/

On the Slaves the tmp directory is completely empty, and jps shows no other processes running there either. I have been struggling with this for several days; any help would be greatly appreciated!


Post subject: Re: Hadoop fails to start properly
Posted: 2013-08-06, 11:06
guoxinyou wrote:
Formatting Hadoop:
[Hadoop@CSCent43 ~]$ hadoop namenode -format
13/08/06 08:24:15 ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /usr/hadoop-12/tmp/dfs/name/current

Starting Hadoop:
2013-08-06 08:26:51,394 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/hadoop-12/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
On the Slaves the tmp directory is completely empty, and jps shows no other processes running there either. I have been struggling with this for several days; any help would be greatly appreciated!


Could you first run the following command on CSCent43 and report back the result:
Code:
ls -al /usr/hadoop-12/tmp/dfs/name

- Jazz


 
Post subject: Re: Hadoop fails to start properly
Posted: 2013-08-16, 10:13
Sorry for the late reply; I was away on urgent business. The result of running the command is as follows:
[Hadoop@CSCent43 ~]$ ls -al /usr/hadoop-12/tmp/dfs/name
total 20
drwxrwxr-x. 5 root root 4096 Aug  4 10:03 .
drwxrwxr-x. 4 root root 4096 Aug  3 15:57 ..
drwxrwxr-x. 2 root root 4096 Aug  4 10:02 current
drwxrwxr-x. 2 root root 4096 Aug  3 15:53 image
drwxrwxr-x. 2 root root 4096 Aug  3 16:02 previous.checkpoint


 
Post subject: Re: Hadoop fails to start properly
Posted: 2013-08-16, 10:25
guoxinyou wrote:
Sorry for the late reply; I was away on urgent business. The result of running the command is as follows:
[Hadoop@CSCent43 ~]$ ls -al /usr/hadoop-12/tmp/dfs/name
total 20
drwxrwxr-x. 5 root root 4096 Aug  4 10:03 .
drwxrwxr-x. 4 root root 4096 Aug  3 15:57 ..
drwxrwxr-x. 2 root root 4096 Aug  4 10:02 current
drwxrwxr-x. 2 root root 4096 Aug  3 15:53 image
drwxrwxr-x. 2 root root 4096 Aug  3 16:02 previous.checkpoint


Most likely you originally started Hadoop as root, so everything under /usr/hadoop-12/tmp/dfs/name is owned by root:root. Now that you are starting Hadoop as the user Hadoop instead, the NameNode cannot use that directory and fails to start.

Fixes:
(1) Start Hadoop as root (this has security implications and is generally not recommended; for a production deployment you should never do this).
(2) Run chown -R Hadoop:Hadoop /usr/hadoop-12/tmp/dfs/name to hand ownership of the directory back to the Hadoop account (see the sketch below).
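
A minimal sketch of fix (2), assuming hadoop.tmp.dir points at /usr/hadoop-12/tmp, the daemons run as the local user Hadoop, and the cluster holds no data yet (re-formatting erases the HDFS metadata):
Code:
# run as root: hand the whole tmp tree (not only dfs/name) back to the Hadoop user
chown -R Hadoop:Hadoop /usr/hadoop-12/tmp

# then, as the Hadoop user, re-format the NameNode and restart the cluster
hadoop namenode -format
start-all.sh
jps    # NameNode should now appear alongside SecondaryNameNode and JobTracker

If the format step still reports "Cannot remove current directory", the ownership change has not taken effect; check the ls -al output again before retrying.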

- Jazz


 
Post subject: Re: Hadoop fails to start properly
Posted: 2013-08-17, 09:45
Many thanks to Jazz for the generous guidance. The problem was exactly as you described; after applying the command you suggested, it is now solved. Thanks again!

