hdfs - browse filesystem link - hadoop - localhost link


I am using Hadoop 2.2 on Ubuntu.

I am able to load the following link in my browser:

http://[my_ip]:50070/dfshealth.jsp

From there, when I click the "Browse the filesystem" link, I am sent to

http://localhost:50075/browsedirectory.jsp?namenodeinfoport=50070&dir=/&nnaddr=127.0.0.1:9000

Here, I think it should use my_ip instead of localhost and 127.0.0.1.

Also, if I manually type

http://my_ip:50075/browsedirectory.jsp?namenodeinfoport=50070&dir=/&nnaddr=my_ip:9000

it still does not work.

Throughout this question, my_ip refers to my external/global IP.

How can I get this working? I want to be able to browse the HDFS filesystem from the browser.

core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
        <!-- <value>hdfs://my_ip:9000</value> -->
    </property>
    <!--
        fs.default.name
        hdfs://localhost:9000
    -->
</configuration>
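As an aside, in Hadoop 2.x the fs.default.name key used above is deprecated in favour of fs.defaultFS. A sketch of the equivalent property (same value, newer key name) would be:

```xml
<property>
    <!-- Replaces the deprecated fs.default.name in Hadoop 2.x -->
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
```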

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/var/lib/hadoop/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/var/lib/hadoop/hdfs/datanode</value>
    </property>
    <!--
        dfs.replication
        1
        dfs.namenode.name.dir
        file:/var/lib/hadoop/hdfs/namenode
        dfs.datanode.data.dir
        file:/var/lib/hadoop/hdfs/datanode
    -->
    <property>
        <name>dfs.http.address</name>
        <value>my_ip:50070</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>my_ip:50075</value>
    </property>
</configuration>

/etc/hosts

127.0.0.1       localhost test02

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Edit: I now get this error:

HTTP ERROR 500

Problem accessing /nn_browsedfscontent.jsp. Reason:

    Cannot issue delegation token. Name node is in safe mode.
    The reported blocks 21 has reached the threshold 0.9990 of total blocks 21. The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically in 2 seconds.

Caused by:

org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot issue delegation token. Name node is in safe mode.
The reported blocks 21 has reached the threshold 0.9990 of total blocks 21. The number of live datanodes 1 has reached the minimum number 0. Safe mode will be turned off automatically in 2 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:5887)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:447)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper$1.run(NamenodeJspHelper.java:623)
    at org.apache.hadoop.hdfs.server.namenode.NamenodeJspHelper$1.run(NamenodeJspHelper.java:620)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
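The safe-mode error above is usually transient (the message itself says it will turn off in 2 seconds). Assuming the standard hdfs dfsadmin CLI is on the PATH, you can check or control safe mode like this:

```shell
# Check whether the NameNode is still in safe mode
hdfs dfsadmin -safemode get

# Block until the NameNode leaves safe mode on its own
# (it normally does once enough blocks have been reported)
hdfs dfsadmin -safemode wait

# Or force it out manually (use with care; only if it stays stuck)
hdfs dfsadmin -safemode leave
```

If the error persists after safe mode is off, the problem lies elsewhere (e.g. the address configuration discussed below, not safe mode itself).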

In hdfs-site.xml, replace

<property>
    <name>dfs.http.address</name>
    <value>my_ip:50070</value>
</property>

<property>
    <name>dfs.datanode.http.address</name>
    <value>my_ip:50075</value>
</property>

with

<property>
    <name>dfs.namenode.http-address</name>
    <value>localhost:50070</value>
</property>

<property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50075</value>
</property>

But usually, in pseudo-distributed mode it is not necessary to specify these properties at all.
Restart the cluster after changing the properties.
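Restarting HDFS in Hadoop 2.2 can be done with the sbin scripts that ship with the distribution (paths assume a default layout; adjust HADOOP_HOME to your installation):

```shell
# Stop and restart the HDFS daemons (NameNode, DataNode, SecondaryNameNode)
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# Verify the daemons came back up
jps
```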
