HDFS - NFS gateway configuration - getting exception for NFS3


I am trying to configure the NFS gateway to access HDFS data, and I followed http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/hdfsnfsgateway.html.

In brief, from the above link, I have followed the steps below:

sudo service rpcbind start // start the portmapper and NFS daemons

sudo netstat -taupen | grep 111 // confirm a program is listening on port 111

rpcinfo -p ubuntu // tells which programs are listening for RPC clients

sudo service nfs-kernel-server start // start mountd

rpcinfo -p ubuntu // should show mountd

sudo service rpcbind stop // stop the system's portmapper

sudo netstat -taupen | grep 111 // make sure no other program is running on port 111; if one is, find its PID in the netstat output and use "kill -9 <pid>"

sudo ./hadoop-daemon.sh start portmap // start the portmap using Hadoop's program

sudo ./hadoop-daemon.sh start nfs3

sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.125.156:/ /var/hdnfs

mount.nfs: requested NFS version or transport protocol is not supported
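When debugging this, it can help to confirm from the client that the gateway's RPC programs are actually registered before attempting the mount (a quick sanity check, assuming the gateway host is 192.168.125.156 as in the mount command above):

rpcinfo -p 192.168.125.156 // should list the portmapper, mountd and nfs programs
showmount -e 192.168.125.156 // should print the gateway's export list, e.g. "/ *"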

The above error is now gone (make sure to stop the system NFS by calling service nfs-kernel-server stop), but I am getting the below exception from nfs3:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: srini is not allowed to impersonate root
        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at org.apache.hadoop.ipc.Client.call(Client.java:1363)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy14.getFileLinkInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:622)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
        at com.sun.proxy.$Proxy14.getFileLinkInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileLinkInfo(ClientNamenodeProtocolTranslatorPB.java:712)
        at org.apache.hadoop.hdfs.DFSClient.getFileLinkInfo(DFSClient.java:1796)
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileStatus(Nfs3Utils.java:58)
        at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3Utils.getFileAttr(Nfs3Utils.java:79)
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.fsinfo(RpcProgramNfs3.java:1723)
        at org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3.handleInternal(RpcProgramNfs3.java:1963)
        at org.apache.hadoop.oncrpc.RpcProgram.messageReceived(RpcProgram.java:162)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:281)
        at org.apache.hadoop.oncrpc.RpcUtil$RpcMessageParserStage.messageReceived(RpcUtil.java:132)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
        at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
        at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)
        at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)
        at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:701)

2014-06-11 13:51:14,035 WARN org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Exception org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: srini is not allowed to impersonate root

I think this is a result of the fix added in response to HDFS-5804 (https://issues.apache.org/jira/browse/hdfs-5804).

Specifically, notice Daryn Sharp's comment on 14/Jan/27 about wanting to get away from having different code paths based on isSecurityEnabled(). I take this to mean that the old default behavior was removed and the new behavior requires some amount of configuration. The code was fixed to support security, but the configuration/documentation required for the old default unsecure behavior was never updated to reflect the changes. Open source, closed doc.

I think there are two new additional pieces of information necessary to get this working. Notice that in the documentation you followed, the first step is to add the "nfsserver" proxyuser details to core-site.xml (note: I think this is only needed on the Hadoop server side, particularly the name node, and not on the client side where you're running the NFS server, although by the time I got this working I had set it everywhere). I followed that step, but changed the values of both settings to * (star) so that "nfsserver" can impersonate anyone when trying to connect from anywhere. Specifically, nfsserver needs to be able to impersonate root to get past the issue we are both encountering.

<property>
  <name>hadoop.proxyuser.nfsserver.groups</name>
  <value>*</value>
  <description>
    The 'nfsserver' user is allowed to proxy all members of the 'nfs-users1' and
    'nfs-users2' groups. Set this to '*' to allow the nfsserver user to proxy any group.
  </description>
</property>

<property>
  <name>hadoop.proxyuser.nfsserver.hosts</name>
  <value>*</value>
  <description>
    This is the host where the NFS gateway is running. Set this to '*' to allow
    requests from all hosts to be proxied.
  </description>
</property>
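Note that the name node only picks up these proxyuser settings when it reads its configuration, so restart it after editing core-site.xml; alternatively, if your Hadoop 2.x build supports it (an assumption about your version), you can reload them in place:

hdfs dfsadmin -refreshSuperUserGroupsConfiguration // reloads the hadoop.proxyuser.* settings without a restart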

That leads me to the second key piece of information necessary for the fix: you must run the nfs3 server as the userid "nfsserver", not as hdfs, which is how the documentation can more or less be interpreted:

  1. Start mountd and nfsd.

No root privileges are required for this command. However, ensure that the user starting the Hadoop cluster and the user starting the NFS gateway are the same.

 hadoop nfs3   or   hadoop-daemon.sh start nfs3 

Note: if the hadoop-daemon.sh script starts the NFS gateway, its log can be found in the Hadoop log folder.

I believe this second change was also introduced as part of JIRA HDFS-5804. Most likely, in the past you were supposed to run nfs3 as hdfs, and in the event of an unsecure cluster there was no impersonation going on. Impersonation now appears to be the default, and the only user for which impersonation appears to be configurable is literally "nfsserver", which means you need to provision a user named "nfsserver".

So finally, after adding the configuration mentioned above, you need to provision the nfsserver user:

# create a system user named nfsserver with hadoop as its default group
sudo useradd -r -g hadoop nfsserver

And then start the nfs3 service as that user (in addition to having started portmap):

sudo -u nfsserver hadoop-daemon.sh start nfs3 
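With the proxyuser settings in place and nfs3 running as nfsserver, repeating the mount from the question should now work (a sketch reusing the gateway address and mount point from above; substitute your own):

sudo mkdir -p /var/hdnfs // create the mount point if it does not already exist
sudo mount -t nfs -o vers=3,proto=tcp,nolock 192.168.125.156:/ /var/hdnfs
ls -l /var/hdnfs // should list the HDFS root instead of failing with the impersonation error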
