Configuring Kerberos Authentication for Hue
To configure the Hue server to use Kerberos for authentication:
- Create a Hue user principal in the same realm as the cluster of the form:
kadmin: addprinc -randkey hue/hue.server.fully.qualified.domain.name@YOUR-REALM.COM
where: hue is the principal the Hue server is running as, hue.server.fully.qualified.domain.name is the fully qualified domain name (FQDN) of your Hue server, and YOUR-REALM.COM is the name of the Kerberos realm your Hadoop cluster is in.
- Create a keytab file for the Hue principal using the same procedure that you used to create the keytab for the hdfs or mapred principal for a specific host. Name this file hue.keytab and put it in the /etc/hue directory on the machine running the Hue server. Like all keytab files, this file should have the most limited set of permissions possible: it should be owned by the user running the Hue server (usually hue) and should have permission 400.
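The ownership and permission step above can be sketched in shell. The path /tmp/hue.keytab.demo below is a placeholder for illustration only; on a real server the file is /etc/hue/hue.keytab, and it must also be chowned to the hue user, which requires root:

```shell
# Sketch: lock down a keytab file as described above.
# /tmp/hue.keytab.demo is a placeholder; in production the file is
# /etc/hue/hue.keytab, owned by the user running the Hue server (usually hue).
KEYTAB=/tmp/hue.keytab.demo
touch "$KEYTAB"
# sudo chown hue:hue "$KEYTAB"   # requires root; run this on the real keytab
chmod 400 "$KEYTAB"              # owner read-only, as recommended
stat -c '%a' "$KEYTAB"           # prints 400
```

The 400 mode matters because any user who can read the keytab can authenticate as the Hue principal.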
- To test that the keytab file was created properly, try to obtain Kerberos credentials as the Hue principal using only the keytab file. Substitute your FQDN and realm in the following
command:
$ kinit -k -t /etc/hue/hue.keytab hue/hue.server.fully.qualified.domain.name@YOUR-REALM.COM
- In the /etc/hue/hue.ini configuration file, add the following lines in the sections shown. Replace the kinit_path value, /usr/kerberos/bin/kinit, shown below with the correct path on your system.
[desktop]

  [[kerberos]]
  # Path to Hue's Kerberos keytab file
  hue_keytab=/etc/hue/hue.keytab
  # Kerberos principal name for Hue
  hue_principal=hue/FQDN@REALM
  # add kinit path for non root users
  kinit_path=/usr/kerberos/bin/kinit

[beeswax]
# If Kerberos security is enabled, use fully qualified domain name (FQDN)
## hive_server_host=<FQDN of Hive Server>
# Hive configuration directory, where hive-site.xml is located
## hive_conf_dir=/etc/hive/conf

[impala]
## server_host=localhost
# The following property is required when impalad and Hue
# are not running on the same host
## impala_principal=impala/impalad.hostname.domainname.com

[search]
# URL of the Solr Server
## solr_url=http://localhost:8983/solr/
# Requires FQDN in solr_url if enabled
## security_enabled=false

[hadoop]

  [[hdfs_clusters]]

    [[[default]]]
    # Enter the host and port on which you are running the Hadoop NameNode
    namenode_host=FQDN
    hdfs_port=8020
    http_port=50070
    security_enabled=true
    # Thrift plugin port for the name node
    ## thrift_port=10090

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
    # Enter the host on which you are running the ResourceManager
    ## resourcemanager_host=localhost
    # Change this if your YARN cluster is Kerberos-secured
    ## security_enabled=false
    # Thrift plug-in port for the JobTracker
    ## thrift_port=9290

[liboozie]
# The URL where the Oozie service runs on. This is required for users to submit jobs.
## oozie_url=http://localhost:11000/oozie
# Requires FQDN in oozie_url if enabled
## security_enabled=false
Important: In the /etc/hue/hue.ini file, verify that:
- The jobtracker_host property is set to the fully qualified domain name (FQDN) of the host running the JobTracker.
- The fs_defaultfs property under each [[hdfs_clusters]] section contains the FQDN of the file system access point, which is typically the NameNode.
- The hive_conf_dir property under the [beeswax] section points to a directory containing a valid hive-site.xml (either the original or a synced copy).
- The FQDN specified for HiveServer2 is the same as the FQDN specified for the hue_principal configuration property.
Also note that HiveServer2 currently does not support TLS/SSL when using Kerberos.
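A quick sanity check of the [[kerberos]] settings can be scripted. The sketch below writes a sample fragment to a temporary path for illustration; on a real system, point INI at /etc/hue/hue.ini instead:

```shell
# Sketch: confirm the required Kerberos keys are present in hue.ini.
# /tmp/hue.ini.sample is a stand-in; use INI=/etc/hue/hue.ini in practice.
INI=/tmp/hue.ini.sample
cat > "$INI" <<'EOF'
[desktop]
  [[kerberos]]
  hue_keytab=/etc/hue/hue.keytab
  hue_principal=hue/FQDN@REALM
  kinit_path=/usr/kerberos/bin/kinit
EOF
for key in hue_keytab hue_principal kinit_path; do
  # Report each required key as present or missing
  grep -q "^ *${key}=" "$INI" && echo "${key}: present" || echo "${key}: MISSING"
done
```

A missing key here is the most common cause of Hue failing to obtain Kerberos credentials at startup.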
- In the /etc/hadoop/conf/core-site.xml configuration file on each node in the cluster, add the following lines:
<!-- Hue security configuration -->
<property>
  <name>hue.kerberos.principal.shortname</name>
  <value>hue</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value> <!-- A group which all users of Hue belong to, or the wildcard value "*" -->
</property>
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>hue.server.fully.qualified.domain.name</value>
</property>
Important: Change the /etc/hadoop/conf/core-site.xml configuration file on all nodes in the cluster.
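Because this change must land on every node, it is worth verifying that the proxy-user properties are actually present in each copy of core-site.xml. The sketch below checks a sample file written to a temporary path; on a cluster node, point CONF at /etc/hadoop/conf/core-site.xml:

```shell
# Sketch: confirm the Hue proxy-user properties exist in core-site.xml.
# /tmp/core-site.sample.xml is a stand-in for /etc/hadoop/conf/core-site.xml.
CONF=/tmp/core-site.sample.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>hue.server.fully.qualified.domain.name</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>
</configuration>
EOF
for prop in hadoop.proxyuser.hue.hosts hadoop.proxyuser.hue.groups; do
  # Report each required property as present or missing
  grep -q "<name>${prop}</name>" "$CONF" && echo "${prop}: present" || echo "${prop}: MISSING"
done
```

If either property is missing on any node, Hue's impersonation requests to Hadoop will be rejected with an authorization error.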
- For Hue setups that include HttpFS for communication to Hadoop, add the following properties to httpfs-site.xml:
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>fully.qualified.domain.name</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
- Add the following properties to the Oozie server oozie-site.xml configuration file in the Oozie configuration directory:
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
- Restart the JobTracker to load the changes from the core-site.xml file.
$ sudo service hadoop-0.20-mapreduce-jobtracker restart
- Restart Oozie to load the changes from the oozie-site.xml file.
$ sudo service oozie restart
- Restart the NameNode, JobTracker, and all DataNodes to load the changes from the core-site.xml file.
$ sudo service hadoop-0.20-(namenode|jobtracker|datanode) restart
Page generated May 18, 2018.
©2016 Cloudera, Inc. All rights reserved.