Taiwan Hadoop Forum

Taiwan Hadoop technical discussion board

All times are UTC + 8 hours




[ 14 posts ]  Go to page 1, 2  Next
Post subject: Node count is 0 and 50070 won't open?
Posted: 2013-10-11, 20:47

Joined: 2013-03-22, 20:28
Posts: 26
I recently reinstalled my Hadoop cluster: two VMs, each running one Hadoop node, with hostnames Host01 and Host02.

Since then, the web UI on port 50030 shows 0 nodes, and port 50070 cannot be reached at all.

Yet jps seems to show the daemons running:
n4540@Host01:/opt/hadoop/bin$ jps
2307 TaskTracker
2117 JobTracker
2034 SecondaryNameNode
1855 DataNode
2350 Jps

n4540@Host02:~$ jps
1548 DataNode
1701 TaskTracker
1788 Jps


The logs on 50030 only mention Host01, with no trace of Host02. Here is Host01's log:

2013-10-11 20:07:45,273 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = Host01/140.129.25.15
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop ... branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
************************************************************/
2013-10-11 20:07:47,550 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-10-11 20:07:47,640 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-10-11 20:07:47,648 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-10-11 20:07:47,649 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-10-11 20:07:48,629 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-10-11 20:07:48,644 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-10-11 20:07:48,667 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-10-11 20:07:48,668 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-10-11 20:07:48,841 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-10-11 20:07:48,844 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 32-bit
2013-10-11 20:07:48,844 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2013-10-11 20:07:48,844 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^22 = 4194304 entries
2013-10-11 20:07:48,844 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-10-11 20:07:49,587 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=n4540
2013-10-11 20:07:49,587 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-10-11 20:07:49,587 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-10-11 20:07:49,654 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-10-11 20:07:49,654 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-10-11 20:07:51,254 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-10-11 20:07:51,453 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-10-11 20:07:51,453 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-10-11 20:07:51,494 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/hadoop/hadoop-n4540/dfs/name does not exist
2013-10-11 20:07:51,522 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/hadoop/hadoop-n4540/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2013-10-11 20:07:51,607 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/hadoop/hadoop-n4540/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.(FSNamesystem.java:395)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2013-10-11 20:07:51,624 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Host01/140.129.25.15
************************************************************/


I have tried for a long time with the same result, and installing Hadoop on a fresh VM fails the same way. Please help. Thanks!!


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-12, 20:24

Joined: 2009-11-09, 19:52
Posts: 2897
RED wrote:
2013-10-11 20:07:51,494 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /var/hadoop/hadoop-n4540/dfs/name does not exist
2013-10-11 20:07:51,522 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /var/hadoop/hadoop-n4540/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.


Did you format the NameNode?
Does ls -al /var/hadoop/hadoop-n4540/dfs/name show anything?
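For later readers, that check can be sketched as a small script (the path is the one from this thread; the function name is made up here, and a Hadoop 1.x name directory that was never formatted has no current/VERSION file):

```shell
#!/bin/sh
# Sketch: report whether a NameNode storage directory looks formatted.
# A formatted Hadoop 1.x name directory contains current/VERSION.
check_name_dir() {
    # $1 = value of dfs.name.dir (e.g. /var/hadoop/hadoop-n4540/dfs/name)
    if [ -f "$1/current/VERSION" ]; then
        echo "formatted"
    else
        echo "not formatted"
    fi
}

check_name_dir /var/hadoop/hadoop-n4540/dfs/name
```

If it prints "not formatted", run /opt/hadoop/bin/hadoop namenode -format as the same user that starts the daemons (formatting erases existing HDFS metadata).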

- Jazz


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-14, 14:36

Joined: 2013-03-22, 20:28
Posts: 26
jazz wrote:

Did you format the NameNode?
Does ls -al /var/hadoop/hadoop-n4540/dfs/name show anything?

- Jazz

I later ran this command: /opt/hadoop/bin/hadoop namenode -format, and that fixed it.
50030 then showed 2 nodes, and 50070 came up as well.
But I remember formatting before, only with sudo in front. Does that matter, or did my earlier format simply not take?

Then a new problem appeared: after shutting everything down and restarting, only one node is left and Host02 is gone... What did I do wrong?


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-14, 15:14

Joined: 2009-11-09, 19:52
Posts: 2897
RED wrote:
I later ran this command: /opt/hadoop/bin/hadoop namenode -format, and that fixed it.
50030 then showed 2 nodes, and 50070 came up as well.
But I remember formatting before, only with sudo in front. Does that matter, or did my earlier format simply not take?
Then a new problem appeared: after shutting everything down and restarting, only one node is left and Host02 is gone... What did I do wrong?


Whether or not you use sudo makes a big difference:

with sudo    = runs as root  = the formatted directory is /var/hadoop/hadoop-root/dfs/name
without sudo = runs as n4540 = the formatted directory is /var/hadoop/hadoop-n4540/dfs/name

So if you format once with sudo and once without,
or first start the NameNode and DataNode with sudo and later start a DataNode without it,
you will run into a namespaceID mismatch (NN != DN).

- Jazz


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-14, 21:36

Joined: 2013-03-22, 20:28
Posts: 26
Thank you very much, Jazz! I had no idea the difference was that big. I'll be careful from now on~


Now a new problem: the node count dropped from 2 to 1, and Host02's DataNode does not seem to have started.

jps output:
n4540@Host01:/opt/hadoop/bin$ jps
4112 JobTracker
3615 NameNode
4007 SecondaryNameNode
3812 DataNode
4413 Jps
4306 TaskTracker

n4540@Host02:~$ jps
1701 TaskTracker
1789 Jps


Here is Host01's log:
Code:
2013-10-14 21:27:28,628 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = Host01/140.129.25.15
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build =  -r ; compiled by 'n4540' on Fri Oct 11 21:29:41 CST 2013
STARTUP_MSG:   java = 1.7.0_40
************************************************************/
2013-10-14 21:27:29,661 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-10-14 21:27:29,743 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-10-14 21:27:29,752 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-10-14 21:27:29,752 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-10-14 21:27:31,193 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-10-14 21:27:31,229 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-10-14 21:27:39,086 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2013-10-14 21:27:39,138 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened data transfer server at 50010
2013-10-14 21:27:39,166 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2013-10-14 21:27:39,188 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-10-14 21:27:44,356 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-14 21:27:44,487 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-10-14 21:27:44,517 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2013-10-14 21:27:44,517 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2013-10-14 21:27:44,518 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2013-10-14 21:27:44,518 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2013-10-14 21:27:44,518 INFO org.mortbay.log: jetty-6.1.26
2013-10-14 21:27:45,518 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2013-10-14 21:27:45,538 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-10-14 21:27:45,539 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source DataNode registered.
2013-10-14 21:27:51,122 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort50020 registered.
2013-10-14 21:27:51,122 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort50020 registered.
2013-10-14 21:27:51,124 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-10-14 21:27:51,124 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(Host01:50010, storageID=DS-945379289-140.129.25.15-50010-1381496270051, infoPort=50075, ipcPort=50020)
2013-10-14 21:27:51,148 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished generating blocks being written report for 1 volumes in 0 seconds
2013-10-14 21:27:51,168 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous block report scan in 1ms
2013-10-14 21:27:51,168 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(140.129.25.15:50010, storageID=DS-945379289-140.129.25.15-50010-1381496270051, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/var/hadoop/hadoop-n4540/dfs/data/current'}
2013-10-14 21:27:51,175 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2013-10-14 21:27:51,179 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2013-10-14 21:27:51,173 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-14 21:27:51,174 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2013-10-14 21:27:51,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2013-10-14 21:27:51,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2013-10-14 21:27:51,211 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 3 blocks took 1 msec to generate and 27 msecs for RPC and NN processing
2013-10-14 21:27:51,211 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner
2013-10-14 21:27:51,212 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless) block report in 0 ms
2013-10-14 21:28:21,890 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving blk_1009625901982189401_1018 src: /140.129.25.15:59902 dest: /140.129.25.15:50010
2013-10-14 21:28:21,954 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /140.129.25.15:59902, dest: /140.129.25.15:50010, bytes: 4, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-634622074_1, offset: 0, srvID: DS-945379289-140.129.25.15-50010-1381496270051, blockid: blk_1009625901982189401_1018, duration: 11843236
2013-10-14 21:28:21,956 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for blk_1009625901982189401_1018 terminating
2013-10-14 21:28:24,206 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling blk_6331116502402752021_1017 file /var/hadoop/hadoop-n4540/dfs/data/current/blk_6331116502402752021 for deletion
2013-10-14 21:28:24,210 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted blk_6331116502402752021_1017 at file /var/hadoop/hadoop-n4540/dfs/data/current/blk_6331116502402752021
2013-10-14 21:29:27,222 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to Host01/140.129.25.15:9000 failed on local exception: java.io.EOFException
   at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
   at org.apache.hadoop.ipc.Client.call(Client.java:1118)
   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
   at com.sun.proxy.$Proxy5.sendHeartbeat(Unknown Source)
   at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:1031)
   at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1588)
   at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.EOFException
   at java.io.DataInputStream.readInt(DataInputStream.java:392)
   at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:845)
   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:790)

2013-10-14 21:29:31,227 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Host01/140.129.25.15:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-14 21:29:32,229 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Host01/140.129.25.15:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-10-14 21:29:32,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at Host01/140.129.25.15
************************************************************/

Searching for "Call to Host01/140.129.25.15:9000 failed on local exception: java.io.EOFException" suggests it means a version mismatch?
But I have only ever used version 1.2.1 from start to finish. I have puzzled over this for a long time without a clue... please help again. Thanks!!


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-15, 01:04

Joined: 2009-11-09, 19:52
Posts: 2897
First I would like to confirm that port 9000 is actually open on Host01.
Please post the result of netstat, thanks~
Code:
n4540@Host01:/opt/hadoop/bin$ netstat -nap | grep 9000


- Jazz


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-15, 19:19

Joined: 2013-03-22, 20:28
Posts: 26
jazz wrote:
First I would like to confirm that port 9000 is actually open on Host01.
Please post the result of netstat, thanks~
Code:
n4540@Host01:/opt/hadoop/bin$ netstat -nap | grep 9000


- Jazz

OK, here is the result:

Code:
n4540@Host01:/opt/hadoop/bin$ netstat -nap | grep 9000
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6       0      0 140.129.25.15:9000      :::*                    LISTEN      1656/java       
tcp6       0      0 140.129.25.15:36622     140.129.25.15:9000      TIME_WAIT   -               
tcp6       0      0 140.129.25.15:36629     140.129.25.15:9000      TIME_WAIT   -               
tcp6       0      0 140.129.25.15:36628     140.129.25.15:9000      ESTABLISHED 1842/java       
tcp6       0      0 140.129.25.15:9000      140.129.25.15:36628     ESTABLISHED 1656/java       
tcp6       0      0 140.129.25.15:36631     140.129.25.15:9000      TIME_WAIT   -               


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-17, 11:50

Joined: 2009-11-09, 19:52
Posts: 2897
It looks like the NameNode is running normally on port 9000.

It is a bit odd~ and the log you posted is still Host01's DataNode.
The error shows the heartbeat could not be sent, but the cause is unclear; it might be a network issue.

I need
Code:
$ cat /etc/hosts
$ hostname

plus the NameNode's log.
Thanks~

- Jazz


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-17, 15:48

Joined: 2013-03-22, 20:28
Posts: 26
jazz wrote:
I need
Code:
$ cat /etc/hosts
$ hostname

plus the NameNode's log.
Thanks~

- Jazz

OK, here are /etc/hosts and hostname:
Code:
n4540@Host01:/opt/hadoop/bin$ cat /etc/hosts
127.0.0.1   localhost
#127.0.1.1   n4540-desktop
140.129.25.15   Host01
140.129.25.137   Host02

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Code:
n4540@Host01:/opt/hadoop/bin$ hostname
Host01

And here is Host01's NameNode log:
Code:
2013-10-17 15:36:56,446 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Host01/140.129.25.15
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build =  -r ; compiled by 'n4540' on Fri Oct 11 21:29:41 CST 2013
STARTUP_MSG:   java = 1.7.0_40
************************************************************/
2013-10-17 15:36:57,722 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-10-17 15:36:57,888 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-10-17 15:36:57,892 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-10-17 15:36:57,892 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-10-17 15:36:59,245 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-10-17 15:36:59,939 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-10-17 15:37:01,121 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2013-10-17 15:37:01,210 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2013-10-17 15:37:05,936 INFO org.apache.hadoop.hdfs.util.GSet: Computing capacity for map BlocksMap
2013-10-17 15:37:05,936 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2013-10-17 15:37:05,939 INFO org.apache.hadoop.hdfs.util.GSet: 2.0% max memory = 1013645312
2013-10-17 15:37:05,939 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2013-10-17 15:37:05,939 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2013-10-17 15:37:11,038 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=n4540
2013-10-17 15:37:11,038 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2013-10-17 15:37:11,038 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2013-10-17 15:37:13,751 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2013-10-17 15:37:13,751 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2013-10-17 15:37:19,813 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2013-10-17 15:37:20,529 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
2013-10-17 15:37:20,529 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-10-17 15:37:20,884 INFO org.apache.hadoop.hdfs.server.common.Storage: Start loading image file /var/hadoop/hadoop-n4540/dfs/name/current/fsimage
2013-10-17 15:37:20,920 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 16
2013-10-17 15:37:20,998 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
2013-10-17 15:37:20,998 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /var/hadoop/hadoop-n4540/dfs/name/current/fsimage of size 1548 bytes loaded in 0 seconds.
2013-10-17 15:37:20,998 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start loading edits file /var/hadoop/hadoop-n4540/dfs/name/current/edits
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: EOF of /var/hadoop/hadoop-n4540/dfs/name/current/edits, reached end of edit log Number of transactions found: 0.  Bytes read: 4
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Start checking end of edit log (/var/hadoop/hadoop-n4540/dfs/name/current/edits) ...
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Checked the bytes after the end of edit log (/var/hadoop/hadoop-n4540/dfs/name/current/edits):
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Padding position  = -1 (-1 means padding not found)
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Edit log length   = 4
2013-10-17 15:37:21,011 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Read length       = 4
2013-10-17 15:37:21,012 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Corruption length = 0
2013-10-17 15:37:21,012 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog:   Toleration length = 0 (= dfs.namenode.edits.toleration.length)
2013-10-17 15:37:21,015 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Summary: |---------- Read=4 ----------|-- Corrupt=0 --|-- Pad=0 --|
2013-10-17 15:37:21,015 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Edits file /var/hadoop/hadoop-n4540/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
2013-10-17 15:37:21,031 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file /var/hadoop/hadoop-n4540/dfs/name/current/fsimage of size 1548 bytes saved in 0 seconds.
2013-10-17 15:37:21,186 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits
2013-10-17 15:37:21,187 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits
2013-10-17 15:37:21,396 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2013-10-17 15:37:21,397 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 13164 msecs
2013-10-17 15:37:21,411 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.threshold.pct          = 0.9990000128746033
2013-10-17 15:37:21,411 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-10-17 15:37:21,411 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.safemode.extension              = 30000
2013-10-17 15:37:21,411 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks excluded by safe block count: 0 total blocks: 4 and thus the safe blocks: 4
2013-10-17 15:37:21,414 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON
The reported blocks is only 0 but the threshold is 0.9990 and the total blocks 4. Safe mode will be turned off automatically.
2013-10-17 15:37:21,497 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-10-17 15:37:21,630 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-10-17 15:37:21,656 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2013-10-17 15:37:21,656 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2013-10-17 15:37:21,658 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: Host01/140.129.25.15:9000
2013-10-17 15:37:21,656 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-10-17 15:37:26,816 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-10-17 15:37:26,916 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-10-17 15:37:27,022 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-10-17 15:37:27,070 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-10-17 15:37:27,072 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
2013-10-17 15:37:27,072 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-10-17 15:37:27,072 INFO org.mortbay.log: jetty-6.1.26
2013-10-17 15:37:27,847 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-10-17 15:37:27,847 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-10-17 15:37:27,851 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2013-10-17 15:37:27,853 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2013-10-17 15:37:27,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting
2013-10-17 15:37:27,855 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting
2013-10-17 15:37:27,860 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting
2013-10-17 15:37:27,860 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting
2013-10-17 15:37:27,871 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting
2013-10-17 15:37:27,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting
2013-10-17 15:37:27,875 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting
2013-10-17 15:37:27,880 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting
2013-10-17 15:37:27,880 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting
2013-10-17 15:37:27,888 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting
2013-10-17 15:37:28,009 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Error report from Host02:50010: Shutting down. Incompatible version or revision.DataNode version '1.2.1' and revision '1503152' and NameNode version '1.2.1' and revision ' and hadoop.relaxed.worker.version.check is not enabled and hadoop.skip.worker.version.check is not enabled
2013-10-17 15:37:40,393 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: node registration from 140.129.25.15:50010 storage DS-945379289-140.129.25.15-50010-1381496270051
2013-10-17 15:37:40,413 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/140.129.25.15:50010
2013-10-17 15:37:40,425 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blocksBeingWrittenReport: from 140.129.25.15:50010 0 blocks
2013-10-17 15:37:40,475 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode extension entered
The reported blocks 4 has reached the threshold 0.9990 of total blocks 4. Safe mode will be turned off automatically in 29 seconds.
2013-10-17 15:37:40,475 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 140.129.25.15:50010, blocks: 4, processing time: 27 msecs
2013-10-17 15:38:00,491 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode ON
The reported blocks 4 has reached the threshold 0.9990 of total blocks 4. Safe mode will be turned off automatically in 9 seconds.
2013-10-17 15:38:10,551 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 4
2013-10-17 15:38:10,554 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid blocks = 0
2013-10-17 15:38:10,554 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of under-replicated blocks = 4
2013-10-17 15:38:10,554 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of  over-replicated blocks = 0
2013-10-17 15:38:10,554 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 58 msec
2013-10-17 15:38:10,555 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 62 secs
2013-10-17 15:38:10,555 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe mode is OFF
2013-10-17 15:38:10,555 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 1 racks and 1 datanodes
2013-10-17 15:38:10,555 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 4 blocks
2013-10-17 15:38:11,727 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addToInvalidates: blk_1495635567323588217 to 140.129.25.15:50010
2013-10-17 15:38:12,162 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /var/hadoop/hadoop-n4540/mapred/system/jobtracker.info. blk_-6381374421843005691_1036
2013-10-17 15:38:12,427 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addStoredBlock: blockMap updated: 140.129.25.15:50010 is added to blk_-6381374421843005691_1036 size 4
2013-10-17 15:38:12,431 INFO org.apache.hadoop.hdfs.StateChange: Removing lease on  /var/hadoop/hadoop-n4540/mapred/system/jobtracker.info from client DFSClient_NONMAPREDUCE_186707238_1
2013-10-17 15:38:12,432 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /var/hadoop/hadoop-n4540/mapred/system/jobtracker.info is closed by DFSClient_NONMAPREDUCE_186707238_1
2013-10-17 15:38:12,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 3 msec
2013-10-17 15:38:12,526 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* ask 140.129.25.15:50010 to delete  blk_1495635567323588217_1034
2013-10-17 15:38:12,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 1 blocks in 0 msec
2013-10-17 15:38:12,526 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 1 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-10-17 15:42:22,640 INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* processReport: from 140.129.25.15:50010, blocks: 4, processing time: 0 msecs
2013-10-17 15:42:29,028 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 140.129.25.15
2013-10-17 15:42:29,029 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 8 Total time for transactions(ms): 10 Number of transactions batched in Syncs: 0 Number of syncs: 6 SyncTimes(ms): 205
2013-10-17 15:42:29,031 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=788, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits
2013-10-17 15:42:29,031 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 788, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits
2013-10-17 15:42:29,759 INFO org.apache.hadoop.hdfs.server.namenode.TransferFsImage: Opening connection to http://0.0.0.0:50090/getimage?getimage=1
2013-10-17 15:42:29,965 INFO org.apache.hadoop.hdfs.server.namenode.GetImageServlet: Downloaded new fsimage with checksum: 8ceb10eb93042e089a80de3b3bf2bde7
2013-10-17 15:42:29,979 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll FSImage from 140.129.25.15
2013-10-17 15:42:29,979 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 1 SyncTimes(ms): 25
2013-10-17 15:42:29,980 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits.new
2013-10-17 15:42:29,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/hadoop-n4540/dfs/name/current/edits.new
2013-10-17 15:42:40,563 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Host01/140.129.25.15
************************************************************/
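One line in this log (at 15:37:28) reports that Host02's DataNode was told to shut down for an "Incompatible version or revision": the Host01 build was recompiled locally ("compiled by 'n4540'", empty revision string), while Host02 apparently still runs the stock 1.2.1 build (revision 1503152). The clean fix is to deploy the identical Hadoop build to both hosts; alternatively, the error message itself names a switch that relaxes the check. As a hedged sketch only (the property name is taken verbatim from the log; placing it in core-site.xml on every node is an assumption):

```xml
<!-- Sketch: relax the worker build-version check named in the log above.
     File placement is an assumption; deploying identical builds to
     Host01 and Host02 is the cleaner fix. -->
<property>
  <name>hadoop.relaxed.worker.version.check</name>
  <value>true</value>
</property>
```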


Post subject: Re: Node count is 0 and 50070 won't open?
Posted: 2013-10-18, 22:48

Joined: 2009-11-09, 19:52
Posts: 2897
I cannot spot an error.

Though why is the NameNode's last message a shutdown? Did you run a command to stop the NameNode?

- Jazz


Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group
Traditional Chinese localization maintained by 竹貓星球